Latest updates to NVDA 2017.4


By Mike Jones Screen Reader Analyst (DAC)

Anyone who uses NonVisual Desktop Access (the NVDA screen reader) for Windows will be aware of some recent problems when using the software. What follows is an overview of the latest improvements and fixes in the current version of NVDA, and of some improvements to how NVDA works with Mozilla Firefox.

The elements list

The elements list (‘insert+f7’) now includes menus for form fields and buttons, in addition to the existing menus for links, headings and landmarks. This will make identifying edit fields and buttons much more efficient and less problematic. Fieldset and legend elements have also been given more support.

Previously, NVDA would not announce the fieldset even where the developer had provided this information. After significant investigation, I have found that the form field menu within the elements list still does not support this area. However, some support is given when the user navigates using the ‘f’ key (next form field), the ‘r’ key (next radio button), or the tab key (next element). When using these keys, the fieldset and legend are announced for the first radio button on the page, but this does not extend to the other radio buttons within that group. I also found that once I had navigated past the radio buttons and used shift+f to move to the previous form field, the fieldset and legend were announced for the last radio button in the group. Where multiple questions appear on a page, the fieldset and legend are announced for the first option of each group, meaning it is now far easier to distinguish between questions when using one of these key strokes.
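For reference, the behaviour described above depends on the radio buttons being wrapped in fieldset and legend in the underlying HTML. Below is a minimal sketch in Python that builds that structure; the question text and field names are invented for this example:

```python
def radio_group(legend, name, options):
    """Build a radio group wrapped in fieldset/legend, the structure
    whose legend NVDA 2017.4 announces for the first option reached
    with 'f', 'r' or tab (but not for the rest of the group)."""
    inputs = "\n".join(
        '  <label><input type="radio" name="{}" value="{}"> {}</label>'
        .format(name, opt.lower(), opt)
        for opt in options
    )
    return "<fieldset>\n  <legend>{}</legend>\n{}\n</fieldset>".format(
        legend, inputs)

# Hypothetical question used for illustration only.
print(radio_group("Preferred contact method", "contact",
                  ["Email", "Phone", "Post"]))
```

A page built this way is what the testing notes below assume: one fieldset per question, with the legend carrying the question text.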

This affects how we test with NVDA, as it means that to some extent we can now test for fieldset and legend when using radio buttons, and so can directly compare with JAWS. However, I would caution against relying on this 100%, because although this area has improved significantly in the latest update, the information is still not read for all radio buttons within a group. As such, I would now recommend the following methods of testing (when not browsing using the cursor keys).

Testing with NVDA and looking for headings

To test for heading levels, use the ‘h’ key, as at present heading levels are still not supported in the elements list (‘insert+f7’).

Form element labels

The user can now test for form element labels by using the ‘f’ key for all form elements, the ‘e’ key for edit fields and the ‘b’ key for buttons, and can now also locate these items using the elements list.

Radio buttons/fieldset and legend

The user should test using the tab key, the ‘r’ key or the ‘f’ key, remembering that only the first option of a group will announce when cycling forwards, and only the last option when cycling backwards. The fieldset and legend are still not supported within the elements list, so the list should be avoided for this testing.

A note on NVDA and Firefox

NVDA 2017.4 appears to have partially fixed the issue announced in November relating to Firefox, and NVDA now works in some instances. We would advise keeping in touch with the latest developments from NV Access on this, and other developments relating to NVDA, by visiting their blog: NVDA’s In-Process blog (external link).

The future of Artificial Intelligence: A future for all


We now live in a world where artificial intelligence and assistive technology are more accessible than ever before. In my previous post, ‘The update round up’, I highlighted some of the new updates to Apple, Windows, Android and iOS, and how each offering will improve access to content on mobile and desktop devices for various user groups. What about the day-to-day use of artificial intelligence, though? It’s actually closer to hand than we think.

Artificial intelligence (AI) is fast becoming the norm in our daily lives. The first thing to identify is that it doesn’t just help people who have additional access requirements; all users, regardless of whether or not they use assistive hardware or software, benefit from AI. If you have ever asked a virtual assistant such as Siri, Google, Alexa or Cortana to do something, you have used AI. The technology is also developing to learn what we use most, and to adapt to our digital habits. So if you frequently use Cortana to open apps or set reminders, it will become familiar with this task, and with any others you use.

AI can be incorporated into apps, something which is on the increase with updates to the various desktop and mobile operating systems. This means that any third-party app installed on a device will be able to take advantage of AI, as long as the developer has included this functionality when producing the app. One app for iOS which is aimed at supporting blind or low-vision users is Seeing AI. The app has various features including document scanning, a barcode reader, and the ability to share information via the iOS share sheet. This means that the app can identify items from the camera roll, allowing users to include names for individuals in a picture, such as relatives, for example. So the use of AI is increasing as updates and the overall development of technology continue.

Additional Resources

To learn more about AI, including the Seeing AI app, visit the following pages. *Note* The Seeing AI app is not available in the UK app store at the time of writing; when it is, I will give it a good run-through. The Seeing AI app for iOS (external link). The Cortana website (external link). All about Siri (external link). All about the Google Assistant (external link).

The icing on the cake: The difference between AA and AAA compliance



Achieving a level of compliance for your app or website means that, as far as the Web Content Accessibility Guidelines (WCAG) are concerned, your offering is accessible to as many user groups as possible who require assistive technology to get online. The terms assistive technology, and even accessibility, can mean different things to different people, and here at the Digital Accessibility Centre (DAC) we offer level AA and AAA accreditation for our clients depending on their requirements.

What do the different levels mean?

  1. Single A is viewed as the minimum level of requirement which all websites, apps, and electronic content such as documents should adhere to.
  2. Double A is viewed as the acceptable level of accessibility for many online services, which should work with most assistive technology which is now widely available on both desktop and mobile devices, or which can be purchased as a third-party installation.
  3. Triple A compliance is viewed as the gold standard level of accessibility, which provides everything for a complete accessible offering, including all the bells and whistles which make the difference between a very good experience and an excellent one.

In his post Why do we need WCAG Level AAA? (external link), Luke McGrath points out that problems may occur, and cause a failure of some AA criteria, when attempting to reach AAA. Meeting AAA will mean that your website is the best it can be; however, the additional implementation may not be possible if budget is a concern, and working through a particular problem while moving from AA to AAA may push back a go-live date. A good example follows, which highlights how the difference between AA and AAA affects end users.

One key difference between AA and AAA concerns screen reader users navigating the page. If a screen reader user is viewing a list of links and hears their software announce ‘click here’ or ‘read more’, this will pass at double A provided the link is associated with its surrounding paragraph or list. This means that the link would be surrounded by text like ‘to read the DAC blog click here’, with ‘click here’ being the link. While it is possible to read the information using another method of navigation, such as reading the entire paragraph rather than just the links, the link text is ambiguous when moving through the links alone to find the required content. So including the icing (clear link text, in this instance) makes the link easy to understand no matter which method of navigation is being used.
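The distinction above can be expressed as a rough check: link text that does not describe its purpose on its own may pass AA when the surrounding text supplies context, but falls short of the AAA expectation that the link text itself is self-describing. This is only a heuristic sketch, not a WCAG-defined test, and the list of ambiguous phrases is illustrative:

```python
# Phrases that say nothing about a link's destination on their own
# (an illustrative list, not an official one).
AMBIGUOUS = {"click here", "read more", "more", "here"}

def link_text_ok_for_aaa(link_text):
    """Return True if the link text alone describes its purpose,
    the 'icing' that helps users who move through links in isolation."""
    return link_text.strip().lower() not in AMBIGUOUS

print(link_text_ok_for_aaa("click here"))         # False
print(link_text_ok_for_aaa("Read the DAC blog"))  # True
```

In practice a human judgement is still needed; the point is simply that ‘Read the DAC blog’ survives being heard out of context, while ‘click here’ does not.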

As shown above, moving to AAA where at all possible will create the best experience for all users; however, AA is accepted as a very good commitment to accessibility. For more information feel free to get in touch, or check the following link: Web Content Accessibility Guidelines 2.0 (WCAG 2.0, external link).

The update round up: what’s new in accessibility when the updates are released?



It’s that time of year again when we all look forward to the regular updates of iOS, Android and Windows, and wonder what changes are ahead when the new updates are introduced. What can we expect from the assistive technology, though, and in particular, what improvements are the big players planning in relation to their built-in software?

The latest updates from Apple

iOS 11 comes with many exciting features; among the big accessibility improvements is the one-handed keyboard, adding another option to an already feature-rich OS. Other offerings include automatic image scanning, where VoiceOver (the built-in screen reader on iOS) will attempt to scan an image for text and read it to the user. This, combined with the same scanning for unlabelled buttons, makes for interesting developments. For low-vision users, a new invert colours option and additional integration with third-party apps mean better contrast across more applications.

macOS users who experience difficulty using a physical keyboard will benefit from an on-screen keyboard in the September update of macOS. The keyboard will allow users to customise it to their requirements, although as with other updates we will need to wait and see what the final result will be. Many of us talk to Siri, but have you ever just wanted to type a message to Siri instead? Now you can: Siri will still provide audio feedback; just type what you want if you can’t speak to Siri. Improved PDF support relating to tables and forms with VoiceOver is another feature in the new macOS, one which I am sure will be much welcomed by VoiceOver users attempting to quickly access PDFs and other documentation. Similarly to iOS, VoiceOver on the Mac will describe an image using a simple keyboard command, which may make it possible to interpret your photos; time will tell. Better navigation of websites which now use HTML5 is also included in the update, meaning that VoiceOver will support the new standard and provide better navigation when, for example, tables are used in messages.

Apple Watch is also benefiting from a software update, including the ability to change the click speed of the button on the side of the watch. This means that users who have difficulty double-clicking, for example, can customise the click speed when they need to use Apple Pay or other such services. Apple TV will now support braille displays. A braille display is a device which translates on-screen text into braille via Bluetooth or USB, allowing users to navigate and read content such as programme guides, etc.


Improvements to Windows Narrator, the built-in screen reader on Windows devices, will include the ability to learn what command is performed when using another device such as a keyboard, via device learning mode. Narrator users will experience a clearer and more unified user interface (UI), as improvements across all apps and devices will make Narrator easier to learn and use. Scan mode, used to quickly navigate a screen or web page, will be on by default, and its setting will be remembered across multiple apps to further improve the user experience. Narrator will also include a service which attempts to recognise images lacking alt (alternative) text, by using Optical Character Recognition (OCR) to identify the image.

The Magnifier will follow Narrator’s focus, making it easier for users who use both Narrator and magnification simultaneously. The desktop magnifier will include smoother fonts and images, as well as additional settings and the ability to zoom in or out using the mouse. Also included for low-vision users are new colour filters, which make it easier for people who have colour blindness or light sensitivity to use a Windows device.


A new accessibility shortcut will be available for users running Android O. By default the feature toggles TalkBack on and off; however, it can be configured after set-up to control another accessibility service, such as magnification or switch access. The shortcut is performed by pressing the volume up and down buttons together on any compatible device, meaning that it will be easier than ever to reach your required access option on Android O. When using Android O with TalkBack, a separate TalkBack volume has been introduced, enabling users to change the speech output volume separately from the media volume. For low-vision users, a new slider at the top of the screen when media is playing performs the same action. So if listening to any media, it is now possible to easily hear what TalkBack is announcing. For devices running Android O with a fingerprint scanner, TalkBack users can make use of customisable gestures performed on the fingerprint scanner. Multi-language support is another feature being developed for Android O, using Google’s text-to-speech software to detect and speak the language in focus.

When running an Android O compatible device with an accessibility service active, such as magnification, users can use the accessibility shortcut to magnify the screen when the Accessibility button is available. This means that, using the example of magnification, a user would be able to tap the Accessibility button and use a specific gesture to change the screen magnification. To return to the previous (or default) setting, all the user needs to do is press the Accessibility button again.

For low-vision users who may not require the features of TalkBack, or for users who have dyslexia, Select to Speak will be a useful feature. Select to Speak is a service which announces a selection of elements or text, and includes options to read by page, change the speed, and move to the previous or next sentence. As mentioned earlier, we will need to wait until the final updates are released in a couple of months, but the future of built-in assistive technology is very interesting.


To learn more about the latest updates, go to: The latest accessibility updates in iOS 11 from AppleVis (external link). The Microsoft Accessibility Blog (external link). The latest accessibility news about Android O (opens external link containing a YouTube video).

How do we deal with a CAPTCHA: Making authentication accessible for everyone.



CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is used to distinguish genuine users from others with not-so-good intentions. The process of authenticating a person online need not rely on CAPTCHA, though, as other methods of authentication can be used when proving yourself online. The problem with CAPTCHA is that it causes difficulties for users of assistive technology and, in its most inaccessible versions, can prevent them from completing the verification process. What follows is an example of the barriers faced by users of assistive technology when they encounter a CAPTCHA, and some alternatives to consider when implementing security on a website.

The need for authentication and the need for accessibility

Authenticating a user, and having secure channels when submitting a form, is crucial when browsing the web: not only for contact forms, when identifying real users from spam, but also for secure online transactions and account creation. When using assistive technology, though, an added problem occurs: accessibility of the CAPTCHA itself. There are many different CAPTCHA methods from different organisations, and assistive technology can be affected differently depending on the type being used. It’s also important to point out that a CAPTCHA can be displayed differently depending on the operating system (OS) being used, such as Windows versus Mac or iOS.

If completing an audio CAPTCHA on Windows, for example, the ‘play’ button for the audio will behave as expected, assuming all is working as it should. On iOS, however, the audio CAPTCHA prompts users to download an MP3 file, meaning that users have to remember the content of the audio and switch back to the form to input it and pass verification. And while some audio is accessible, a problem occurs if the files are heavily processed, because it is difficult to pick out the correct letters or numbers when the audio is heavily distorted. While this is done to prevent bots from interpreting the information, it creates an additional barrier if users are not able to interpret the content clearly.

An image CAPTCHA which requires users to select specific images and not others may work for users who have good vision, but will prevent users who have little or no vision from completing the verification process. A CAPTCHA which requires users to make a maths calculation, or to select the correct response to a question, will work for some users but may cause problems for users who have a learning difficulty.

Implementing an accessible alternative will not only maintain security, but will also ensure that users of assistive technology are not excluded from the verification process. One good alternative is ticking a box to indicate that a human, and not a robot, is completing the form. Another is to implement a honeypot, which adds a hidden form field which, if filled in, stops the submission. As long as the field is clearly labelled to warn screen reader users that it should not be filled in, this is a suitable alternative. While other methods, such as biometric authentication, are being explored, one of the best methods is two-factor authentication, where the user enters an email address or mobile number and receives a code to enter into the form to verify their information. Each method has good and bad points; for example, the two-factor method requires the user to have immediate access to their email account or a good phone signal.
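The server-side half of the honeypot scheme described above can be sketched in a few lines. This is a minimal illustration, not a production implementation; the decoy field name is invented for the example, and a real form would also hide the field visually and label it so screen reader users know to leave it blank:

```python
# Hypothetical name of the decoy field included in the form.
HONEYPOT_FIELD = "website"

def is_probably_bot(form_data):
    """Return True if the decoy field was filled in.

    Genuine users never see (or are told to skip) the honeypot field,
    so any submission that fills it is treated as a bot and rejected."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

print(is_probably_bot({"name": "Sam", "website": ""}))          # False
print(is_probably_bot({"name": "Bot", "website": "spam.com"}))  # True
```

Unlike a CAPTCHA, this check asks nothing extra of the human filling in the form, which is exactly why it avoids the accessibility barriers discussed above.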

Further information

For more information about good CAPTCHA and some alternatives, check out: Some CAPTCHA alternatives (external link).