Why do people with hearing difficulties have a hard time isolating conversations?

Imagine that when someone is talking to you, you miss parts of words. Depending on the background noise I can miss just a few words, or as much as about a third of the words spoken. I have to process the other words said to figure out what I missed. If I miss too many words, I get lost in what is being said.

I know my wife’s voice well after 45 years. I use her mannerisms, facial expressions, and lip movements to understand her without my aids. But block any of them or add background noise and I lose more words. It is not that I cannot hear her or anyone else speak; it is that I cannot collect enough information to process some of the words.

The funny thing about the hearing aids is that I lose directional hearing. I am not sure where the sound is coming from. Without hearing aids I would turn my head one way or the other until the sound cleared some.

Where are the microphones for the hearing aids? Unless they are located pretty much where the entrance to your auditory canal is in the pinna, you won’t get the correct head-related transfer function (HRTF). A big part of the directionality information is derived from changes to the frequency response of the sound caused by your head and the shape of the pinna. Phase information due to the different arrival times of sound at the ears is good up to about 1 kHz. After that it is all down to differences in intensity at different frequencies.
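
To put rough numbers on the phase part, here is a little Python sketch. The head radius and the Woodworth approximation are generic textbook assumptions, nothing specific to any particular aid, but they show how tiny the arrival-time difference between the ears actually is:

```python
import numpy as np

# Rough sketch of the interaural time difference (ITD) using the simple
# Woodworth head model: ITD = (r / c) * (theta + sin(theta)).
HEAD_RADIUS = 0.0875     # metres, a typical adult head (assumed)
SPEED_OF_SOUND = 343.0   # m/s

def itd_seconds(azimuth_deg):
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    # Even at 90 degrees the difference is only about 0.65 ms, comparable to
    # half the period of a 1 kHz tone, which is roughly where the phase cue
    # becomes ambiguous and intensity differences have to take over.
    print(f"{az:>2} deg off-centre: ITD ~ {itd_seconds(az) * 1e6:.0f} microseconds")
```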

I wonder if an enhanced frequency response in a hearing aid, and perhaps some dynamic gain control, might also act to interfere with the localisation. An AGC in an aid could wipe out a lot of the intensity differences due to direction. A multiband gain control would wipe them out entirely. But it could make a huge difference to the intelligibility of sound, so it might be a good tradeoff.
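
Just to illustrate what I mean about the AGC (toy numbers, not how any real aid is tuned): if a sound off to one side arrives, say, 8 dB louder at the near ear at high frequencies, a simple compressor running independently in each aid squashes exactly that difference.

```python
# Toy illustration of AGC flattening the interaural level difference (ILD).
# All numbers here are made up for the sake of the example.
def agc(level_db, target_db=65.0, ratio=4.0):
    """Very crude compressor: pull the level toward the target by the ratio."""
    return target_db + (level_db - target_db) / ratio

near_ear, far_ear = 70.0, 62.0   # assumed high-frequency levels for an off-axis sound
print("ILD before AGC:", near_ear - far_ear, "dB")            # 8.0 dB
print("ILD after AGC: ", agc(near_ear) - agc(far_ear), "dB")  # 2.0 dB
# The 8 dB directional cue shrinks to 2 dB; do this per frequency band and
# there is very little intensity information left to localise with.
```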

I like this explanation best for what happens with me.

I’ve got loss of hearing in my left ear, mostly in the middle to higher range, with normal (for someone over 50) hearing in my right ear. I have no problem hearing people in normal conversations, but if I’m in noisy surroundings such as a restaurant or a bar, then I have to have you sit by my good ear.

I also have a harder time hearing people on the phone if there’s a bad connection or on my cell phone, which is annoying, since it means I have to hold the phone to my right ear with my left hand if I need to take notes.

When I try to talk on the cell phone held to my bad ear, it does sound like the other person’s voice becomes muddied and it becomes harder to distinguish what they are saying. Although I can still hear them, the vowels are less clear and it’s more difficult to distinguish between similar consonants such as “v” and “f.”

In normal conversations, people are good at filling in the blanks. We get distracted, a horn honks, the TV is blaring, whatever, but we can tell by context what is missing. However, as Snnipe 70E says, once you lose too much information, you can no longer unconsciously fill in the missing sounds.

Thanks for all the great answers!

I don’t have hearing loss, but the reason I went to have it checked last time was that there were certain people I could never understand over VoIP or the phone. When I couldn’t understand someone in person there was always something wrong along the lines of “talking from another room” or “your nape mumbles.”

Once I found out my hearing was actually not at fault and mentioned it to other people, it turned out that they too had problems understanding those same people. Some of them couldn’t be arsed to buy a decent microphone or had been using a broken phone, others always had some background noise, and others talk into the phone from the back of their necks. Grey mist indeed!

This is a very good point. A lot of audio processing happens in the ear.

The first-order bit is that it acts like an FFT: telling the brain how much of each frequency is present (one frequency per hair cell). So, not one mic, but thousands, each tuned to a different frequency.

Much like the eyes, it has neighbor-cell suppression. That adds contrast (in both cases), so that frequencies (or pixels) that are louder or softer (brighter or dimmer) than neighbors jump out. This significantly reduces the impact of white noise on the signal. (MP3 takes advantage of this by removing neighboring frequencies from the signal, since your ears will do that anyway.)
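
A crude way to picture those two stages, as a very loose analogy only (nothing here is a model of real hair cells, and the signal and parameters are made up):

```python
import numpy as np

# Loose analogy: split a signal into frequency "channels" (here a plain FFT),
# then apply a simple neighbour-suppression step so that peaks stand out
# against the local average, the way lateral inhibition adds contrast.
fs = 8000
t = np.arange(fs) / fs
signal = (np.sin(2 * np.pi * 440 * t)            # a strong tone
          + 0.3 * np.sin(2 * np.pi * 880 * t)    # a weaker tone
          + 0.1 * np.random.randn(fs))           # broadband noise

spectrum = np.abs(np.fft.rfft(signal))                         # "how much of each frequency"
neighbour_avg = np.convolve(spectrum, np.ones(5) / 5, "same")  # local average of neighbours
contrast = np.clip(spectrum - 0.8 * neighbour_avg, 0, None)    # suppress the flat parts

freqs = np.fft.rfftfreq(fs, 1 / fs)
for i in np.argsort(contrast)[-2:][::-1]:
    print(f"peak near {freqs[i]:.0f} Hz")   # the two tones jump out; the noise floor doesn't
```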

There are mechanical controls on the ear, several of which help respond to changes in overall loudness.

And there’s the fine tuning njtt mentions above. It’s really quite fascinating, and remarkably complex.

I’m deaf and have had hearing loss since I was 4. Isolating conversations is one of the hardest parts of daily life for me, but I’ve always had a hard time understanding why as well.

I wanted to pop in and share this recent article from NPR: The Real Sounds Of Hearing Loss : Shots - Health News : NPR. The article has audio examples of what a phrase sounds like with various types of hearing loss. The second seems most relevant to this thread. I thought it was interesting, at least.

The other important point, which I don’t think has been made, is that any given sound is a mix of a bunch of frequencies, and sounds that change quickly have more high frequencies.
If you think about it, a 60 cycle-per-second tone can’t change any faster than about 1/30 of a second at the absolute max, or it’s not really a 60 Hz sound any more, while a 60,000 Hz sound can change thousands of times per second and still be more or less a 60,000 Hz tone.
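
You can see that directly if you take a 60 Hz tone and switch it on and off faster than its own period (the numbers below are arbitrary, purely for illustration): the energy spreads well away from 60 Hz, so it isn’t really a pure 60 Hz sound any more.

```python
import numpy as np

# A 60 Hz tone, and the same tone switched on and off every 5 ms
# (i.e. faster than its own ~17 ms period).
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 60 * t)
gated = tone * (np.floor(t / 0.005).astype(int) % 2)

def energy_outside(x, lo=55.0, hi=65.0):
    """Fraction of the signal's energy falling outside the 55-65 Hz band."""
    p = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return p[(freqs < lo) | (freqs > hi)].sum() / p.sum()

print(f"steady 60 Hz tone: {energy_outside(tone):.0%} of energy outside 55-65 Hz")
print(f"gated 60 Hz tone:  {energy_outside(gated):.0%} of energy outside 55-65 Hz")
# Making the tone change quickly pushes a big chunk of its energy up into
# higher frequencies: rapid changes and high frequencies go together.
```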

So when you speak, even if the main part of your voice is very deep, the bits of your voice that convey the changes as you speak are at higher frequencies. So if someone can’t hear high frequencies, even if they can hear the deep part of your voice perfectly, all they’ll hear is a low rumble as you speak, without any clear words.
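
A quick sketch of that last point (toy signals, not real speech): a deep steady hum keeps essentially all of its energy down low, while a short consonant-like click puts most of its energy up high, which is exactly the part a high-frequency loss takes away.

```python
import numpy as np

# Toy stand-ins: a 120 Hz hum for the deep, steady part of a voice, and a
# single-sample click for the sharp edge of a consonant like "t" or "k".
fs = 16000
t = np.arange(fs) / fs
hum = np.sin(2 * np.pi * 120 * t)
click = np.zeros(fs)
click[8000] = 1.0

def fraction_above(x, cutoff_hz=2000.0):
    """Fraction of the signal's energy above the cutoff frequency."""
    p = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return p[freqs > cutoff_hz].sum() / p.sum()

print(f"hum:   {fraction_above(hum):.0%} of its energy is above 2 kHz")
print(f"click: {fraction_above(click):.0%} of its energy is above 2 kHz")
# Remove everything above 2 kHz and the hum is untouched, but the click all but
# disappears: the voice is still there, the crisp edges of the words are not.
```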