Questions About Screen Readers and Web Browsers

I’m currently working on something browser-based that is designed with a priority for screen reader users.

As always, finding accessibility information through a Google search is often really hit-or-miss, so I have some questions:

  1. Do different web browsers have different levels of compatibility with screen readers?
  2. Do all modern screen readers use ARIA tag info on a web page, or should I be researching other protocols as potential fallbacks?

I’m using a screen reader on both desktop and mobile for testing purposes at the moment, but I want to make sure I’m not missing any functional cases that might leave large chunks of the player base in the dark.

Thank you for your time!


Awesome, thanks for doing this. To answer your questions:

  1. Yes. Just as web browsers have varying implementations of the DOM, how screen readers parse this DOM and present it to the end user may vary. For example, NVDA on Windows exposes a toolbar role and its elements as a flat list, while VoiceOver (on macOS) nests the items by default, so one has to interact with the toolbar to see them. This has the advantage that if a toolbar has lots of controls, those can be skipped quickly.
  2. Yes. VoiceOver, NVDA, JAWS, Narrator, System Access, and Supernova have ARIA implemented, to list a few. The degree of implementation may vary slightly in certain cases.
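To illustrate the toolbar difference mentioned above, here’s a minimal sketch of ARIA toolbar markup (the labels and buttons are just examples):

```html
<!-- A minimal toolbar. NVDA tends to expose the buttons as a flat list,
     while macOS VoiceOver groups them, so the user interacts with the
     toolbar first and then reaches the individual buttons. -->
<div role="toolbar" aria-label="Formatting">
  <button type="button">Bold</button>
  <button type="button">Italic</button>
  <button type="button">Underline</button>
</div>
```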

Depending on the complexity of your project, it might be a good idea to test it with various screen readers and various browsers, even on the same OS. Sorry if this is obvious.

Hope I could help!


In theory, all modern screenreaders should be able to handle ARIA tags. In practice, yes, there are some ARIA tags that don’t work reliably with specific browser/screenreader combos.

Here’s a report I found looking at the reliability of various specific ARIA tags in various circumstances, which might be of interest to you: WAI-ARIA - Screen reader compatibility · PowerMapper Software

But to the best of my knowledge, the preferred way to handle this is not so much to add in any kind of alternative protocol, but rather to minimize how much you’re relying on ARIA (vs. native HTML elements) for basic functionality in the first place. And as you can see in the report, how you structure the HTML around the ARIA tags can also affect how well they work, so that’s something to keep in mind.
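To make the “prefer native HTML” point concrete, here’s a hedged sketch comparing a native button with a `div` that needs ARIA (plus scripting) to approximate it:

```html
<!-- Native element: the button role, accessible name, focusability, and
     keyboard activation (Enter/Space) all come for free. -->
<button type="button">Save game</button>

<!-- ARIA approximation: you must supply the role, make it focusable with
     tabindex, and wire up keyboard activation yourself in script. -->
<div role="button" tabindex="0">Save game</div>
```

The native version is both less work and less likely to hit the browser/screen-reader inconsistencies that report documents.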


This is exactly the kind of thing that I wasn’t aware of and would be crucial to know, thank you!

In this home, I have access to an Ubuntu desktop, a Windows desktop, an Android phone, and an iPhone. Hopefully this covers enough of the use cases. I’ve tried to figure out how to install alternative readers on Linux but didn’t get much progress. I’ll make a note to try again shortly.

I really appreciate the link! :grin:

Excellent point! I’ll keep this in mind!

So far I’ve been working on some foundation stuff and proof-of-concept functionality. I’m gonna be working on UI functionality soon, though, so this research is going to be useful very shortly.


For obvious reasons I can’t help much (I can’t debug…), but if your screen reader interface manages to change tone/pitch/volume based on styles (I follow Italian styling usage, e.g. italics for thoughts, bold for a raised voice, etc.), that will be useful in general…

Best regards from Italy,
Dott. Piergiorgio.


One of us might be misunderstanding the other; I’m not making a custom screen reader. I’m working on a custom text game engine for the web (but can also work all-in-one offline).

The primary goal in its design is that no feature for sighted users should ever require a compromise for screen reader users, and it should also be compatible with as many screen readers as I am able to research and test.

However, if there are tags for causing the screen reader to use a certain tone of voice, then that’s amazing and I’m curious to learn about this, lol. I can’t imagine my desktop’s screen reader would know how to use those, though; it sounds like Stephen Hawking. I might be able to test it on mobile, though!

Condensed explanation for why I'm making my own game engine.

TADS 3 is absolutely amazing and grievously-underrated. However, the codebase I’m using for my TADS games is built upon the evolving Adv3Lite engine, and I’ve apparently reached the point of modification complexity where fixes and updates to Adv3Lite might mean parts of my codebase will break in unexpected ways. Additionally, I’ve recently had a big think about my goals. I came here with the intent to ditch graphics and create accessible games, particularly for blind and deaf players.

More crucially, and the reason why I originally chose TADS: I’m not interested in making puzzle games, but I’m also not interested in making games with a low degree of simulation. This meant that I needed to deeply modify Adv3Lite, and I had plans for more extreme modification in the future, as I moved further and further away from the conventional parser world model, user interface, and Adv3Lite action system. It was beginning to look like starting from scratch might be simpler than ripping everything out and rebuilding.

However, I’m just not okay with the minimal web functionality of HTML TADS—despite failed experiments and attempts to work on solutions—or with the lacking screen reader support in QTADS, which I needed because I wanted to create audio experiences in my games.

So now that I know more clearly what I want, and have confirmed that other systems don’t quite do what I’m looking for, I’m back on my original path to making my own IF engine.


That should be enough. VoiceOver on iOS and macOS behave similarly.

Gnome Orca works reasonably well when using Firefox.

Generally, the more standard HTML elements you use the better, but with good ARIA markup when needed the experience can be seamless, so this is not a must.


I wouldn’t rely too much on testing in Linux/Orca, since practically no one uses it compared to e.g. Windows. Also be aware that asking for help with ARIA is going to result in 99% of people telling you to use native elements, even if that isn’t your question, the thing you’re trying to do requires ARIA, and ARIA is almost mind-numbingly simple. I would focus more on getting the accessibility tree to display the info you want and assume the users know how to access and navigate that info efficiently.


This is the plan, yeah. :grin:

True, but iteration times when testing on anything else are going to be long, mostly because this isn’t going to be a general hosted service like Parchment, and I don’t have easy access to the other platforms at all times.

So the idea is to test stuff on Orca until it reaches a proof-of-concept stage, and then test that version on multiple browsers across multiple operating systems and devices to collect a list of issues to work on.

Thanks for the pointers! :sparkles:


ARIA is a complicated topic, but it’s easy to make it simpler: use a bunch of screen readers, see what works (and what feels usable), and then fix bugs.

Test on popular screen readers with the WebAIM survey

It’s just like testing your web site in various browsers. Focus testing on the most popular browsers and on the buggiest browsers in common usage.

Here’s the major industry survey of screen reader users, from WebAIM.

Now, you’ve gotta read this survey carefully, because it first talks about desktop browsers, and has a chart “Primary Screen Reader” but it’s specifically referring to primary “desktop/laptop” screen reader.

That chart indicates that “VoiceOver” isn’t very popular, but it’s referring to macOS VoiceOver in that section. (If you scroll down to the “Operating Systems” section, you’ll see that macOS itself isn’t very popular among screen reader users.)

JAWS for Windows is the leading screen reader, followed by NVDA for Windows as a close runner up. macOS VoiceOver is a distant third.

Note that JAWS costs money (and its licensing scheme is onerous), and NVDA is free. But also, NVDA tends to be buggier than JAWS; IME, anything that works in NVDA also works in JAWS.

Later, it talks about “Mobile Screen Readers Used” WebAIM: Screen Reader User Survey #10 Results

That chart shows that the OS builtin screen readers dominate, with iOS VoiceOver (70.6%) and Android TalkBack (34.7%). (These add up to more than 100% because some people use both.)

The survey also includes a “Mobile vs. Desktop Usage” chart, which shows that 10% use mobile more than desktop, 40% use desktop more than mobile, and the other 50% use both the same. That doesn’t line up with my experience with users reporting bugs.

In my experience, the vast majority of bug reports I hear from screen reader users are from iOS users and NVDA users.

So, therefore, I recommend testing in this priority order:

  1. iOS Safari VoiceOver. I recommend mobile over desktop (because, I claim without data, mobile is significantly more popular among visually impaired users) and iOS over Android, because iOS is overwhelmingly more popular than Android among visually impaired users.
  2. Windows NVDA on Chrome. NVDA is not quite as popular as JAWS, but it’s buggier. Anything that works in NVDA will also work on JAWS, but not necessarily vice versa.
  3. Windows JAWS on Chrome.
  4. Android TalkBack on Chrome.
  5. macOS VoiceOver on Safari.

But I think you’ll find that just testing in iOS Safari VoiceOver gets excellent bang for your buck. I normally test iOS Safari only, and then Windows NVDA on Chrome when I want to be very thorough, and then I typically stop.

It’s been at least five years since I’ve seen a user report a bug that occurs in Windows JAWS but not Windows NVDA. I think I’ve never seen a user report a bug on Android TalkBack at all.

Avoiding screen reader bugs

You’ll probably just pick this up yourself as you find and fix bugs, but:

  1. Screen readers work best when they can understand what your HTML means. If you can adopt a specific HTML element that captures your meaning, use it, instead of just trying to use <div> and <span> elements with CSS.

  2. Screen readers are like a text adventure; you are in a particular place on the page, where the screen reader is focused. When changes occur elsewhere on the page (in the status bar, for example), it’s complicated and confusing to have the screen reader announce those changes. You either have to focus the status bar (but, then, how do you get back to where you were?) or you have to announce the live updates elsewhere on the page without moving focus, which is, itself, confusing. Therefore, highly accessible UIs tend to be very simple. If you can live without having a status bar, do that.

  3. IF UI is particularly perplexing for many screen readers, because the player is intended to focus on some text input, type something there, and then have what they typed appear in the scrollable transcript. In the past, I’ve described IF UI as if you were running a D&D game, where the Dungeon Master would write on the page, then hand the page over to the player, who would write what they want to do, then hand the page back to the DM to write what happens next.

    In ChoiceScript, we clear the screen after every action, with no scrollback, as if you’d navigated to a new page. We set the focus at the top of the new page, so the screen reader immediately begins reading the text after making a choice.

    I think you might consider doing something similar.
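A rough sketch of that clear-and-refocus pattern (element IDs and the helper name are illustrative, not ChoiceScript’s actual code):

```html
<main id="page" tabindex="-1"></main>
<script>
  // Hypothetical helper: replace the page content after a choice, then
  // move focus to the top so the screen reader starts reading there.
  function showNewPage(html) {
    const page = document.getElementById('page');
    page.innerHTML = html;  // no scrollback: the old content is gone
    page.focus();           // tabindex="-1" makes the container focusable
  }
</script>
```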


(exhales with relief)

I’m really glad this is at 5th place, because I don’t own anything with macOS and absolutely cannot afford to buy such a device at this time!

Special thanks to my dad’s previous job for giving him an iPhone to keep. It’s literally the only Apple device I can borrow.

I’ve started running into this recently, actually. Learning that there are a lot of tags available that visually seem identical but have important semantic differences. I’ve been crash-coursing this a bit.

Ah, excellent! I was planning to ditch the idea of a status bar for this engine anyway…!

Generally, my current plan is to summon UI elements to the bottom of the transcript, as the player needs them. This way, the player is able to continuously search further downward for changes, and never need to search upward for them instead.

For example, if they type something into an input field or push a button, the results of their action appear below this element. The previous buttons or input get locked, and a fresh set of buttons and/or input field are made available below.

Also, wow this is a lot of amazing info!!! Thank you so much!


If you have old disabled input controls it would probably be good to set them to aria-hidden.
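For example, a locked control from a previous turn might look something like this (a sketch of the attribute usage):

```html
<!-- disabled stops interaction; aria-hidden="true" removes the element
     from the accessibility tree entirely, so screen reader users don't
     have to step past spent controls. Safe here because a disabled
     control is not focusable. -->
<input type="text" value="go north" disabled aria-hidden="true">
```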

For status (grid) windows, Parchment sets role: status, aria-atomic: true, and aria-live: off. Except when the grid window is quite tall, in which case it switches to aria-live: polite, under the assumption that it’s operating as a main content window rather than a status window. Working out the ARIA properties for a general-purpose interpreter like Parchment has not been easy, but it should be simpler for a new interpreter with fewer legacy conventions that need to be accounted for.
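In markup, the setup described above would look roughly like this (a sketch of the approach, not Parchment’s actual source):

```html
<!-- Short status grid: read as a whole when requested (aria-atomic),
     but changes are not announced automatically (aria-live="off"). -->
<div role="status" aria-atomic="true" aria-live="off">
  Score: 10 | Moves: 42
</div>

<!-- Tall grid acting as main content: changes are announced politely,
     after the screen reader finishes what it is currently saying. -->
<div role="status" aria-atomic="true" aria-live="polite">
  ...
</div>
```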


One rather interesting discovery I’ve made is the weird way that disabled and aria-hidden elements can apparently break aria-live regions.

I’ve been researching a lot of widely-supported workarounds for a few things that use normal HTML as much as possible.

This has honestly been a huge relief. The last attempt at this engine was when I first joined the forum and it was written in Java (yes, I know), and the accessibility documentation for Swing interfaces was extremely lacking, to put it politely.

I’m absolutely over the moon that y’all are helping me and that I’m also able to find so many answers on various sites and blogs as well.

Clearly web-dev is thoroughly battle-tested in accessibility technologies.


You don’t owe us or anyone else an explanation, but, fwiw, I find your motives as well as your continued advocacy for TADS while still doing what’s right for you and the community you wish to reach very admirable.

(P.S. Congrats on your “Russel’s Award” from the IFDBs. It was very well deserved and I am very proud of you, my friend. Very gratified to see you validated.)


Specific Question One

(Just one, so far, but I will likely have many in the future)

So I’ve been contemplating this excellent bit of advice:

…and I’ve been considering ways to do this with native elements. The most I’m willing to use ARIA for is aria-live and aria-disabled (for buttons that represent spent or inaccessible actions).

Here’s my idea:

  1. Heading level 1 will be reserved for the title of the game.
  2. Heading level 2 will be at the start of every turn, and list the turn number. For previous turns that have had actions applied already, the turn label will begin with the action, followed by the turn number.
  3. Heading level 3 will be for location changes and to introduce the group of button-based commands at the bottom of every turn. (I know this is a cardinal sin in the parser crowd, but this will be a hybrid system, and you’re gonna have to trust me that I know what I’m doing here.)

After every turn heading, there will be a link that jumps to the current turn.

The idea here is the player can jump between headings of level 2 to specifically go to the turn they want to review, and then jump to the next level 3 to either review the location of the turn, or skip to the command buttons, which contain the frequently-used generic actions (such as look around, save game, load game, quit game, restart game, change player settings, review the help menu, etc).
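As a sketch, the structure described above might render to markup like this (the title, labels, and IDs are all made up):

```html
<h1>My Game Title</h1>

<!-- A previous turn: the label starts with the action taken. -->
<h2>Go north (Turn 14)</h2>
<a href="#current-turn">Jump to current turn</a>
<h3>The Foyer</h3>
<p>You head north into the foyer…</p>

<!-- The current turn, with the command buttons under a level-3 heading. -->
<h2 id="current-turn">Turn 15</h2>
<h3>Commands</h3>
<button type="button">Look around</button>
<button type="button">Save game</button>
```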

Previous turns will also have a link (depending on how undo is implemented in the game) that will allow the player to revert to that turn.

Does this structure seem like a good idea?


I haven’t found any good resources on how one should organize things generally; guidelines may not be possible for generic dynamic applications. I’d also be interested if anyone has some. Using different heading levels sounds good to me. I’ve considered adding a menu or help page that describes how things work, which could be useful both in general and for accessibility, but haven’t decided if it’s a good idea or not.


Organize what? Headings? Something else?


Some good documents on organizing full-blown web apps would be nice. There is no shortage of opinions and design philosophies for designing GUI applications, for better or worse. E.g. I’m sure someone has strong opinions on whether the status line should be at the top or bottom of the screen, though this is pretty unimportant in a graphical interface and more important with a screen reader, where you either need to skip past it each turn or skip to it each turn/when it’s needed.


I was waiting until responses arrived, and last night I was feeling just a bit too manic and decided to continue coding, and it’s almost like y’all knew this somehow, and responded after I implemented a bunch of stuff lmao. :joy:

So I’m mostly keeping the design I outlined, but after reviewing a bunch of accessibility docs and running some tests, I changed a few things:

  1. The entire transcript is no longer one large aria-live region. In order to make it possible to tweak the status of buttons more smoothly, and create a more reliable and predictable (and less intrusive) behavior, new turn reports are appended to the transcript without being automatically read aloud.
  2. An invisible announcements section has been added to the top of the page, which is aria-live. When a new transcript section is added, an announcement is made that says something like “New report for turn 15 is written below, at heading level 2”.
  3. From here, the player can either jump to the new heading level 2—which will begin with the turn number and end with the action which caused it—or they can jump to the previous landmark (the navigation section), which contains a shortcut link to the latest transcript report.

A lot of this was chosen because it seems like dynamically setting aria-live states completely breaks things.
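A sketch of that announcer pattern (the element IDs, helper name, and hiding styles are just examples):

```html
<!-- Visually hidden live region at the top of the page; it stays in the
     DOM permanently so the browser registers it early. -->
<div id="announcer" role="status" aria-live="polite"
     style="position: absolute; width: 1px; height: 1px;
            overflow: hidden; clip-path: inset(50%);"></div>

<main id="transcript"></main>

<script>
  // Hypothetical helper: append the turn report silently (the transcript
  // itself is not a live region), then announce where to find it.
  function addTurnReport(turnNumber, reportHtml) {
    document.getElementById('transcript')
      .insertAdjacentHTML('beforeend', reportHtml);
    document.getElementById('announcer').textContent =
      'New report for turn ' + turnNumber +
      ' is written below, at heading level 2';
  }
</script>
```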

That’s fair. While I have found some resources for organization, they’re mostly for normal web pages. However, I think I would rather design this game engine to behave more like a web page for screen readers, just to make navigation more intuitive.

I found a lot of posts online saying that trying to do fancy focus manipulations or other dynamic tweaks “for the player experience” largely works against accessibility in unintuitive ways, so my current goal is to just structure it like a normal, everyday web page, where the content just happens to grow (with announcements) as the player interacts with it. In as many ways as possible, it keeps a familiar, mundane, and static structure; elements also remain where they are once they’re created.


Specific Question Two

Currently testing what I have so far on TalkBack in Firefox (I don’t have other platforms available to test right now).

There’s some weird stuff going on, and I wanna know if this is normal or I’m using TalkBack wrong.

  1. I can’t navigate by landmark…? Just headings, and heading level doesn’t matter; it just uses all of them.
  2. Paragraphs are split up based on internal elements. If I have text italicized, it’s read normally on Orca, but TalkBack stops just short of that text and I need to explicitly tell TalkBack to continue, and then it stops just short of the italicized text ending. This happens when I navigate by line and by paragraph. It breaks one line or paragraph into multiple.
  3. Live regions don’t announce at all with TalkBack, and I’m not sure why. It’s set to polite right now, with a status role. I’ve heard that using assertive/alert will cause some alerts to suppress previous ones if they arrive too quickly, though. I have the live div inside of the header element right now.

I can’t find any information explaining or confirming any of this online, and I don’t have a clue where to poke next to work toward a solution.


Just tested TalkBack with Google Chrome. This kept the paragraphs intact and announced the live region correctly. However, it also did not seem to have access to landmarks, just like Firefox.

Oh, yeah, and I had to reload the entire page before TalkBack even allowed me to access the web view. What the heck?