This sent me down a bit of a rabbit hole. Found this:
“People of All ages without a hearing impairment should be able to hear the 8000hz. People under 50 should be able to hear the 12,000hz and people under 40, the 15,000hz. Under 30s should hear the 16,000hz, and the 17,000hz is receivable for those under 24.” -Source
Some additional searching reveals that frequency shifting is fairly easy to do to recorded audio files. With that said, should adjusting a game's included audio to stay at or below 4,000 Hz (or, at the bare minimum, at or below 8,000 Hz to reach everyone without a hearing impairment) be something to consider for accessibility in IF? Or does this not have value?
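(Side note for anyone curious: cutting everything above a cutoff is just low-pass filtering, which any audio editor can do. As a minimal sketch of the idea in plain Python, here is a deliberately simplistic one-pole low-pass filter; real tools use much steeper filters, and the 4,000 Hz cutoff is just the figure from above, not a recommendation.)

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    """Very simple one-pole low-pass filter: passes content below
    cutoff_hz and attenuates content above it. Illustrative only;
    real audio tools use much steeper filters."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # RC low-pass analogy
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

# Demo: a 500 Hz tone passes through mostly intact,
# while a 12 kHz tone is strongly attenuated.
rate = 44100
t = [n / rate for n in range(rate // 10)]  # 0.1 seconds of samples
low_tone = [math.sin(2 * math.pi * 500 * x) for x in t]
high_tone = [math.sin(2 * math.pi * 12000 * x) for x in t]

def peak(sig):
    return max(abs(s) for s in sig)

low_peak = peak(one_pole_lowpass(low_tone, 4000, rate))    # stays near full amplitude
high_peak = peak(one_pole_lowpass(high_tone, 4000, rate))  # drops to roughly a third
```

A one-pole filter rolls off gently, so content just above the cutoff is only reduced, not removed; that's one reason a hard "everything below 4 kHz" treatment isn't quite as simple as it sounds.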
My first offering was not interpreter friendly and had white text on a black background, among numerous other problems. I’d like to make accessibility a bigger focus going forward, but I know I have an implicit bias a mile wide here.
I’ve been through the Accessibility Notes for Authors article, and already knew not to include game critical information in the audio or visual elements of a game, but nothing is said regarding audio frequency.
This isn’t my forte whatsoever, so I know I’m not doing the best job of explaining, but from what I understood, it would be a form of compression.
Instead of removing the frequencies above 4 kHz and leaving those below 4 kHz unchanged, it would squish all the frequencies closer together in a way that keeps them distinct relative to each other, but spread between 0 kHz and 4 kHz rather than 0 kHz and 20 kHz.
Again, not my forte, but allegedly, frequency compression is one of the things typically accomplished by a hearing aid to adapt the normal range of sounds into something an individual with hearing loss might better hear.
“Among many different frequency lowering algorithms, non-linear frequency compression (NLFC) has been implemented in modern commercial hearing aids, such as Phonak Naida hearing aids. The key concept of NLFC is to disproportionally compress high frequencies into lower-frequency regions.” -Source
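The core idea behind NLFC can be sketched as a simple frequency-remapping function. This is a conceptual sketch only; the knee and ratio values here are made up for illustration and aren't taken from any real hearing aid:

```python
def nlfc_map(freq_hz, knee_hz=1000.0, ratio=6.0):
    """Conceptual sketch of non-linear frequency compression (NLFC):
    frequencies below the knee pass through unchanged; frequencies
    above it are compressed toward the knee by the given ratio.
    knee_hz and ratio are illustrative values, not real device settings."""
    if freq_hz <= knee_hz:
        return freq_hz
    return knee_hz + (freq_hz - knee_hz) / ratio

# With these toy values, 500 Hz is untouched, 8 kHz lands around 2.2 kHz,
# and the full 0-20 kHz range lands in roughly 0-4.2 kHz.
# Relative order is preserved, so sounds stay distinct from one another.
```

That preserved ordering is the "keeps them distinctly separated in relation to each other" part: high sounds still sit above low sounds, just within a narrower band.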
If that compression was already done to the actual audio files, it might help an individual enjoy a game without the assistance of hearing aids.
If all you’re including are sound effects and instrumental music, then I suspect the compression shouldn’t change the experience enough for those without hearing loss to mind, while possibly making for a better experience for those with hearing loss.
Hearing aids are similar to prescription glasses in that they’re tailored for individuals. Because they now contain software, too, they can dynamically change how they’re filtering the input.
It’s impossible for the game creator to know what or how someone with hearing troubles needs the sound adjusted to work optimally for them. For the same reason, it’s about as impossible to pre-treat sound in one uniform way that helps that whole target audience: a correction that helps one person will make things worse for another. And if we treated all the audio in such a corrective way, we’d severely degrade it for people with typical hearing as well.
Professional audio is pre-treated for the “audience of the world” by the process/industry of mastering. We know the frequency response of a normal ear to different frequencies and mastered audio accommodates that graph. Professional mastering engineers work in rooms and with monitors calibrated by absolute measurements.
The best way to produce comprehensible audio from a multi-audio source is to have a clear / well-balanced mix. A cluttered mix will have lots of frequencies masking each other, reducing legibility of elements, or clogging up bits of the frequency spectrum in an unpleasant way. Too much dynamic range or too little can be annoying as well.
In a game, the best way to have legibility is to start with a clear mix. Important elements need to be audible against backgrounds. Having separate volume controls for background music, sound effects, dialogue, and alerts is a big accessibility aid. If the game is likely to have sound going all the time, a basic dynamic range toggle can help (often presented to the public on a streaming device as ‘reduce loud sounds at night’ – the upshot is that the setting limits and compresses audio on the fly to reduce the dynamic range, making everything easier to hear at lower playback levels).
I’d say attending to all of the above first is more important than looking to frequency compression at the game end. If you present people with a well-mixed game, with audio calibrated by a mastering engineer and the switches above, the final tweak can be applied by users at their end by whatever methods they use to help their hearing (e.g. hearing aid, listener position, speakers, the nature of the room – maybe even an EQ on the sound output, for really specific needs), and they’ll be applying it to a reasonably consistent frequency spectrum.