Why do people with hearing difficulties have a hard time isolating conversations?

I’ve been frequently told that when someone starts to lose their hearing, it often shows up as an inability to isolate conversations. The common example is following a conversation in a noisy restaurant.

Why is this? Why did the noisy restaurant’s volume go up and my voice’s volume go down? It sounds more like a signal processing problem than a hearing problem.

I am also aware that the high frequencies tend to go first. My (male) voice is probably lower than the background of voices and clinking glasses. Shouldn’t my voice seem louder than the restaurant’s background?

A bit of a WAG here, but low frequencies travel better through the air, so there is less of a difference between near and far sources at the low end. Thus losing the high frequencies might make it harder to distinguish speakers that are nearby from those that are far away.

Signal processing != hearing?

Since I suffer from hearing loss (greatly helped by my hearing aids), I can tell you that ambient noises tend to blend with conversations until the whole thing’s one big hum. Any sudden or loud sound will override this buzz, and it usually takes me a moment until I can refocus on pulling out the voice or voices I actually want to hear.

Yes, I’ve had my hearing tested and it was fine, but I’ve always had problems processing speech. If someone says something out of the blue, or mumbles, or says it from a distance, it will take me a few seconds to figure out what they said. I stay away from bars and other such noisy social settings because, among other things, I simply can’t carry on a conversation in that environment although many people have no problems with this.

But I agree the high frequencies probably help distinguish different voices.

When I say hearing, I am talking about what your ear can or cannot hear due to loss of hairs in the cochlea.

By signal processing I mean our brain’s ability to interpret sounds in the noisy signals of our hearing. My wife’s voice may not be as loud as the restaurant, but I can understand her because my brain knows what her voice sounds like and is able to distinguish her voice from the noise surrounding it.

What I don’t understand is that when people have hearing loss, apparently either the noise seems louder or the individual’s voice is quieter. I don’t understand why that happens.

Your post is a great example. I don’t understand the connection between hearing loss and the increase in “blending” between the noise and the conversation.

Maybe that makes my question clear.

I suffer from this. In my case it is for several reasons:

  1. My hearing loss is only in one ear. It turns out that a large part of my ability to isolate one speaker in a crowd was directional. I have since started paying attention, and you will see that people usually directly face the person they are tuning into. In a three (or more) way conversation, everyone turns toward the speaker, and if someone interrupts they will often have to repeat it. It is much harder (though still doable) to tune in to a speaker well off to the side.

  2. If only it were just hearing loss. My hearing loss is such that at anything above modest levels I start getting tremendous distortion. Kind of like Charlie Brown’s teacher. So I am trying to hear with my good ear, and I have all this honking distortion from the background noise going on in the other ear.

  3. Like most hearing impairments, mine is worse at some frequencies than at others. Since it is in just one ear, this means that as the speaker’s voice rises and falls in pitch, it appears to shift spatially, so I can’t just turn my head to one angle to get the speaker into the sweet spot. Maybe if I were talking to Ben Stein I could, but I would be too distracted keeping myself from punching him in the face.

Interestingly when this hearing loss first happened, doctors were unable to explain or fix it. After about 10 years it got better by itself for about 5 years. Now it is back. I have an appointment next month to have it looked at again.

As you mentioned, my lack of hearing is largely due to not being able to hear higher frequencies: it becomes increasingly difficult, for instance, to differentiate between “g” and “k”, “d” and “t”, “m” and “n”, “p” and “b”. Women’s voices and children’s voices, being of a higher pitch, are much harder to understand than men’s. Without the higher pitches, all the sounds are reduced to a much smaller range, hence the “hum”. I’m again very thankful I have finally found a good, quality pair of hearing aids.

Pure single-sided deafness here. Completely deaf on one side of my head with no residual hearing whatsoever, normal hearing on the other. The answer to your question is quite simple, but most people can’t fathom it. When a person with bilateral hearing is presented with two sounds simultaneously, they listen to one sound with one hemisphere of their brain, and the other sound with the other hemisphere. When a person who possesses unilateral hearing is presented with two or more sounds simultaneously, that person can only hear the loudest sound. The rest is garbled interference.

Live sound engineer here with severe hearing loss above 14 kHz or so. Normal human hearing is stereo, and high-frequency tones give directional cues and clarity in speech. This is why surround sound works with those tiny speakers: they put out only high frequencies, but your brain takes the directional cues from them and adds the more omnidirectional low end, and now the Starship Enterprise can zoom across your living room from back left to front right. Turn off your front speakers and sub and listen to just the surrounds; you will be surprised.

I can only WAG about the OP, but restaurants tend to have a lot of reflective surfaces that bounce sound all over the place. Toss in some hearing loss and its attendant loss of directionality, and your brain has trouble isolating the conversation you want to hear because it can’t tell where the wanted sounds are coming from. Your brain “ignores” unwanted or repetitive sounds for you all the time; we just tend not to notice it. Those of us with hearing loss just have a harder time of it.

Capt

I feel pretty qualified (at least from a layman’s point of view) to answer this question. 4 years ago I completely lost the hearing in my right ear.

This fact seems obvious to most people: having 2 eyes allows us to see in 3 dimensions.

What most people don’t realize is: having 2 ears allows us to HEAR in 3 dimensions. Do this experiment. Go somewhere where there is a little bit of noise. Close your eyes. Now try to hear things that are happening behind you. Now to your right. Now your left. Now straight ahead of you.

Now focus on a sound, say a bird chirping. You can tell where that bird sound is coming from, can’t you? For instance, maybe you hear it behind your right shoulder and some distance above you.

Our brains, by virtue of having stereo hearing, allow us to construct an “aural dome” of the sound around us. With any sound we hear we can almost instantly tell where it is coming from. Our brain constructs this dome of sound based on the information our 2 ears give us. It does this without us thinking about it.

How it does this is probably based on the small differences in time and intensity of the sound reaching our ears. (I can’t give you a scientific explanation for this; perhaps someone else can.)
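
To put rough numbers on that time cue (a back-of-the-envelope sketch only, assuming an ear-to-ear distance of about 18 cm and the simplest textbook d·sin(θ)/c approximation for a distant source):

```python
# Rough sketch of the interaural time difference (ITD) for a far-away sound,
# using the simple d*sin(theta)/c approximation. The head width and speed of
# sound below are assumed round numbers, not measurements.
import math

EAR_SPACING_M = 0.18     # assumed distance between the ears, metres
SPEED_OF_SOUND = 343.0   # metres per second in room-temperature air

def interaural_time_difference(angle_deg: float) -> float:
    """Extra travel time (seconds) to the far ear for a source at angle_deg
    off straight ahead (0 = dead ahead, 90 = directly to one side)."""
    return EAR_SPACING_M * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

for angle in (0, 15, 45, 90):
    print(f"{angle:3d} degrees off-centre -> {interaural_time_difference(angle) * 1e6:4.0f} microseconds")
```

Those few hundred microseconds, together with the level difference caused by the head shadowing the far ear, are exactly the cues that disappear when one ear stops contributing.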

Our brains also allow us to ignore sounds in this aural dome that we aren’t interested in. You can focus on the sounds in front of you (say) and completely ignore sounds from some other direction. And you can do this even with your eyes closed (i.e. without reading lips).

Now let’s throw in that monkey wrench and make 1 ear go deaf. Now you are left with monaural hearing. Your brain can no longer detect from which direction a sound is coming. It can no longer construct that aural dome that we have had all of our lives and so take for granted.

Everything you now hear is just flat noise: no directionality, and no brain ability to filter unneeded sounds from unimportant directions. It’s all just a single channel of noise.

Now, in a noisy restaurant, the voice of the person sitting across from you may be louder than any other individual sound in the restaurant, but that’s not the problem. Is this voice louder than all the other sounds in the restaurant COMBINED? Because that’s what you are hearing. Your brain can’t ignore the other sounds in the restaurant; all sounds are coming in on one single monaural channel.
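
Rough arithmetic for that “COMBINED” point, with made-up levels (the 70 dB voice and the forty 60 dB background voices below are just illustrative assumptions): incoherent sounds add by power, not by decibel value, so a room full of individually quieter voices easily overtakes the one voice you care about.

```python
# Toy decibel arithmetic: one voice vs. the combined background.
# All levels are invented purely for illustration.
import math

def combined_level_db(levels_db):
    """Total level of independent (incoherent) sources: add powers, not dB."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

friend = 70                    # assumed level of the voice across the table, dB
background = [60] * 40         # assumed: 40 other voices, each 10 dB quieter

print(f"friend alone:        {friend} dB")
print(f"background combined: {combined_level_db(background):.1f} dB")                  # ~76 dB
print(f"everything combined: {combined_level_db([friend] + background):.1f} dB")       # ~77 dB
```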

In summary, you have no directionality, and no ability to ignore sounds.

Here are some of the adjustments I’ve made to make up for my 1-sided deafness:

When someone calls my name now, I have no clue where they are calling me from. I have to just swivel around until I see them waving at me.

When I want to hear a voice in a low to moderately noisy environment, I will cup my hand behind my good ear. This makes the voice louder and helps decrease the level of off-direction noise.

I just don’t go to very noisy environments anymore and expect to be able to hear anything useful.

I’ve learned some methods to try to localize sound, but it entails moving around, trying to determine in which direction the sound is getting louder and softer.

In 4 years, I HAVE managed to get my SO to eliminate the phrase “I’m in here” when I ask “where are you?”. “I’m in here” gives me absolutely no information (over and above the fact that my SO can currently hear me). The correct reply is a place name: I’m in the kitchen, I’m in my office, etc.

I’ve found that my enjoyment of movies (in a theater) has decreased somewhat. I’ve lost the “moving sound” feature that (I assume) is used so much in multi-channel movie theaters.

Anyway, that’s my 2 cents.

J.

I have had severe hearing loss since childhood, and it’s worse on one side than the other. I have difficulty following multiple speakers at the same time. Background noise makes it even more difficult because it overlaps and covers the conversation, making it hard to pick out individual words or sounds. Another thing that makes it difficult is that when you are talking to someone one-on-one, they are focused on you; their head (and lips) are turned toward you, allowing you to lip-read and pick up other visual cues. Talking in a group means most people shift their focus among different members of the group, which makes it more difficult.

Another factor is that I have almost no sense of sound direction. If I hear a sound I can not usually tell where it is coming from.
Also, I have to concentrate more than most people to process what was said. The more people whose voices I have to filter through, the more noise I have to try to filter out, the less I am able to concentrate. It’s like trying to walk a violently shaking balance beam.

The analogy I always use is: “It’s like trying to read with camera flashes going off in front of you.” Other loud noises tend to draw my focus, and then I have to reacquire the voice of the person I was talking to.

I have diminished hearing, which makes places like pubs and the like completely off limits for me, at least if there is going to be any conversation whatsoever. Added to the general noise problem is that people these days seem to shout their conversations, regardless of the setting. Restaurants, buses, walking down the street, they’re all talking LIKE THIS.

Leaving aside the issue of which signals get processed and how, the phenomenon called the cocktail party effect has puzzled researchers for a long time.

I’ve studied signal processing extensively in grad school. I can’t speak specifically to the biological side of this, but I think I understand what’s probably going on, and I’ll try to explain it in lay terms.

I’m sure you’ve seen waveforms for music or recorded voices before, so you have some intuitive understanding of what they mean. Our brains are generally very good at identifying frequencies and separating them out, so if we have a nice clean signal, it’s not terribly difficult to isolate certain frequencies. What’s affected by hearing loss isn’t the brain’s ability, or lack thereof, but the ear’s actual ability to faithfully detect the sound and translate it into an electrical signal that the brain can interpret. As a result, our brains get less useful information, and we end up with hearing difficulties.

But this affects our ability to hear things through how the signal degrades. Different frequencies are more susceptible to different types of interference, and I’d suspect that, in a lot of cases, hearing damage means those nice sharp waveforms aren’t as clean anymore. When a frequency is considerably louder than the surrounding background, that’s still easy to account for, but things change once we start getting background noise. When the frequencies are nice and clean, a steady background noise doesn’t make much difference in detecting them, because the noise can be roughly modelled as adding energy across all frequencies, so the prevalent ones still stand out. But once your signal isn’t clean, it’s no longer just a single frequency; it’s now got smaller associated frequencies smeared around it, and the greater the background noise, the more of those smaller components get lost in the noise.

I’ll try an analogy using vision; let’s use black and white for simplicity. We can liken hearing damage to adding a solid grey value across the whole image, so whites get darker and blacks get lighter, like turning the contrast WAY down, and background noise is like the snow on an analogue TV. Using this analogy, if we’re looking at a solid white circle on a black background, it would take a whole lot of noise and/or a whole lot of damage to make it difficult to discern. But that’s like detecting someone using a tuning whistle. Speech would be like trying to do this with an object that has some depth and complexity to it. Imagine now, rather than a solid white circle, a ball with a light source. Without the background noise, we may be able to tell it’s a ball even with a lot of grey washing out the detail. But when you add the snow, all of that washed-out detail is completely overwhelmed by the noise. If the snow is intense enough, you’ll be lucky to make out generic shapes, and only because they’re separated by significant differences in intensity.

As a fairly simple experiment, try having someone talk to you in a way that’s a bit muffled, like covering their mouth and nose with their hands, or talking into a cup or something. It’ll be a bit harder than usual in a quiet environment, but if you then try to talk to them with the TV, a fan, the dishwasher, and whatever else going at your house, it’ll be noticeably more difficult, because the subtle cues you could still pull out when the voice was merely muffled get completely lost once the noise is added.

As for what’s going on, this is probably a lot easier to see with some basic waveforms and their corresponding FFTs, which is how we translate signals into frequency space. With good hearing, the received signal shows up as a high, narrow peak at the expected frequency and stays low over the rest of the spectrum, basically like a normal curve centred on the signal. As our hearing gets worse, we still receive that signal with the same area under the curve, but with greater variance, so the peak gets lower and broader. Broadband noise is basically random values added across all frequencies, which can be modelled as adding roughly the same amount everywhere. If you add a flat amount to a narrow, tall curve, the peak still stands out reasonably well; but the broader and shorter that peak is, the more quickly it gets lost in even moderate noise.

But basically, your ears just get crappy at gathering the signal. If there’s only one signal, even a degraded one, we can still generally figure it out without too much effort; but once you add broadband noise, a clean signal holds up a lot better than a degraded one.
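
Here’s a toy numerical version of that peak-broadening picture (my own sketch with invented numbers, not a model of the ear): the same spectral energy packed into a narrow peak versus smeared into a broad one, with the same flat broadband noise added to both.

```python
# Same total "energy", narrow vs. broadened spectral peak, plus a flat noise floor.
# Every number here is made up purely for illustration.
import numpy as np

freqs = np.linspace(0.0, 1000.0, 2001)   # arbitrary frequency axis, Hz
centre = 440.0                           # assumed frequency of the signal

def unit_area_peak(width_hz: float) -> np.ndarray:
    """Gaussian 'spectrum' normalised to the same area: broader means shorter."""
    g = np.exp(-0.5 * ((freqs - centre) / width_hz) ** 2)
    return g / (g.sum() * (freqs[1] - freqs[0]))

noise_floor = 0.002                      # flat broadband noise added everywhere

for label, width in (("clean, narrow peak", 5.0), ("degraded, broad peak", 60.0)):
    spectrum = unit_area_peak(width) + noise_floor
    print(f"{label:20s}: peak stands {spectrum.max() / noise_floor:5.1f}x above the noise floor")
```

The narrow peak towers roughly forty-fold above the noise floor, while the smeared-out one only manages a few-fold margin; turn the noise up a little more and the broad peak is the first to vanish into it.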

The simple answer is that there’s less data for the signal processing to operate on. Any algorithm works less reliably with less valid data. It essentially decreases the signal-to-noise ratio. You might think that the ratio would remain constant, but it doesn’t, not from a practical perspective.

Here’s a possible example. You might have lost a number of frequencies altogether. When there’s no noise, you can infer the missing frequencies when they’re present, because the neighboring frequencies are being stimulated with a hole in the middle. But with background noise, there are LOTS of frequencies being stimulated, so there are no obvious holes to infer from the neighbors.
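
A small numerical sketch of that signal-to-noise point (a toy model with assumed numbers, not physiology): treat the hearing loss as an attenuation applied to the incoming sound before it meets a fixed internal noise floor. The attenuation knocks down the signal but not that internal noise, so the effective signal-to-noise ratio falls instead of staying constant.

```python
# Toy model: external sound attenuated by the hearing loss, internal noise fixed.
# The tone, noise level, and loss figures are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
fs = 16_000                                      # sample rate, Hz (arbitrary)
t = np.arange(fs) / fs                           # one second of samples
voice = 0.5 * np.sin(2 * np.pi * 220 * t)        # stand-in for a voice: a 220 Hz tone
internal_noise = 0.02 * rng.standard_normal(fs)  # assumed fixed internal noise floor

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio in decibels, from average powers."""
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

for loss_db in (0, 20, 40):                      # 0 = normal hearing, 40 dB = severe loss
    heard = voice * 10 ** (-loss_db / 20)        # the loss attenuates only the external sound
    print(f"{loss_db:2d} dB loss -> effective SNR {snr_db(heard, internal_noise):6.1f} dB")
```

The same external sound, heard through 40 dB of loss, ends up buried below the internal noise before the restaurant adds any of its own.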

Like the Prof above, I’ve always had difficulty processing speech, even when I could hear up to 18 kHz. The worst cases were campus movies in bad little auditoriums with lots of echo, or speech in loud bars.

Now that I’m 55 and have been a gigging hobby musician all the while, I’ve lost a lot in the 4K–8K range (like most men my age), which is the crucial range for speech, and I hear little over 10K (which doesn’t help a lot for the recording & mixing I like to do, but I’ve learned to “fly on instruments”).

I can still generally hear a band and know what instruments are playing what, but if you’re talking to me and aren’t facing me, you’re wasting your breath. My ability to decode speech has diminished, and as I said, never was very good.

Another bad case these days (I think it’s worse these days, but it might just be me) is that the background noise and music on TV shows and some movies are way too loud, and I just can’t make out the dialog. When my wife isn’t around, I turn the closed captions on (they bug her, though evidently less than my frequently asking “What did he say?”).

You seem to be under the impression that the cochlea functions like a simple, passive microphone, and that all the fancy, selective processing goes on solely within the brain. That is not the case. The cochlea is actually more like a dynamically, actively tunable amplifying microphone. The frequencies that it will respond maximally to can be changed, on a very short time scale, and in anticipation of which sounds of interest that the brain deems are likely to arrive. The organs mainly responsible for this are the outer hair cells of the cochlea (outer because they are at the end where the sound waves enter the organ).

The inner hair cells of the cochlea do indeed act as essentially passive transducers of sound energy within the cochlear fluid, turning it into nerve signals, and different inner hair cells, of different lengths, are differentially tuned to respond maximally to different frequencies. The outer hair cells, by contrast, are not passively moved by incoming sounds, but themselves move rhythmically, under the control of efferent innervation from the brain, to produce sound waves within the cochlear fluid. (When this function goes wrong, in some severe forms of tinnitus, it can actually cause the ears to emit a humming sound audible to others.) The sound waves produced by the outer hair cells interfere, both constructively and destructively, with incoming sounds, selectively amplifying some, de-amplifying others, and producing beat frequencies (which may often be what the inner hair cell receptors actually pick up). Presumably the brain dynamically adjusts the frequency at which the outer hair cells beat, probably over very short time scales, to amplify anticipated frequencies of interest and tune out the uninteresting noise. All human perception is dynamically controlled by processes of attention and expectation in this way.

I am not, incidentally, just talking about something like tuning to the general frequency of someone’s voice. It is possible and even likely that, when listening to speech for instance, it is possible to adjust the tuning on a very short time scale so as to have the best chance of distinguishing between the most likely phonemes to come next (as anticipated on the basis of those that have come already, and of what one knows is being talked about). Distinguishing between a B and a P, for example, might require focusing on different frequencies than those required to distinguish an M from an N.

Hopefully this makes it clear why damage to the cochlear hair cells, the outer ones in particular (which are very likely the most vulnerable, being “outer”) leads to the cochlea losing much of its exquisite selectivity with respect to the incoming sound information. It can no longer selectively amplify the sounds the brain wants to attend to from the irrelevant background noise.

See: P. Dallos, “The Active Cochlea,” Journal of Neuroscience 12 (1992): 4575–4585.

Similar stories, incidentally, can be told about most or all of the other types of sense organ. Incoming perceptual information is actively and dynamically selected, and often quite highly processed, well before it gets to the brain. Indeed, one might say that the brain’s business in this is more about controlling the activity of the sense organs in order to ensure the right information (that which is considered most likely to be relevant to the brain’s ongoing control of general behavior) comes in, than it is about “processing” the information once it gets there.