How do we distinguish noises?

At a party, for instance, how does one distinguish what the person one is speaking to is saying from the noise of the rest of the crowd?

I thought I had read a Straight Dope article on this, but I can’t find it now. As far as I’ve heard, no one understands completely how we can pick out one conversation in a crowd. Something to do with pattern recognition, I imagine.

The pattern recognition must be done for you automatically by a nifty bunch of those brain cells you have.

The brain cells must be taking the audio input, putting it on an internal oscilloscope, and observing and recording a characteristic range of frequencies (tones) plus a characteristic range of phonemes; building and labelling an envelope like this example: female voice, Southern accent, differentiated from the other female Southern voices and labelled Marge; tracking the input that matches that envelope; and extracting the information, which includes the meaning of the words and the emotional content, plus another tag that, for instance, categorizes the reliability of information from this particular source.
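
If a machine were trying to do that, the crudest caricature of the “envelope” idea might look like the toy sketch below. The names, the pitch ranges, and the whole approach are invented purely for illustration, and it is surely nothing like what the brain actually does:

```python
# Toy caricature of the "envelope matching" idea above: store a crude
# per-speaker profile (here, just a pitch range in Hz) and tag each
# incoming chunk of audio with whichever profile it fits.
# The names and pitch ranges are made up for illustration.

profiles = {
    "Marge (female, Southern)": (165.0, 255.0),   # rough female pitch range, Hz
    "Bob (male)":               (85.0, 155.0),    # rough male pitch range, Hz
}

def label_chunk(estimated_pitch_hz):
    """Return the profile whose pitch range contains the estimated pitch."""
    for name, (low, high) in profiles.items():
        if low <= estimated_pitch_hz <= high:
            return name
    return "unknown speaker"

if __name__ == "__main__":
    for pitch in (120.0, 210.0, 300.0):
        print(f"{pitch:6.1f} Hz -> {label_chunk(pitch)}")
```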

That’s the kind of thing that makes me see consciousness (me) as just some froth on the surface of the pond that is the whole brain, which is taking care of all kinds of business while the (me) thinks (I)'m in charge.

This phenomenon is known as the ‘Cocktail Party Effect’, which should help in searching for information on the subject. This is a cop-out on my part, but I have no special knowledge on this front. I would suggest that other cues, like body language and lip-reading, come into play when deciphering party conversation, especially at high-decibel soirees.

Nothing to add, other than a comment that this part of my brain must be damaged, or undeveloped… Put some background noise in place, and I’m as deaf as a post. It always made college fraternity parties tough.

I’d love to see some real info on this though.

-Butler

I believe it also has to do with concentration. When listening to music, I can concentrate and hear just the bassline or just the drums over the rest of the music.

Not to hijack, but there is a similar example in the electronics world. In some of the newer cell phone technologies (CDMA, I believe, which is more common in the US than in Europe) a number of cell phones are all transmitting on the same frequency at the same time, yet somehow the ‘base station’ separates out the different conversations.

Similarly, I believe, when you have WIFI at your house (or business) you can have numerous computers all transmitting to the base station at the same time.

Even though I work in the electronics field, this has always struck me as black magic and I’ve said so. Then, somebody always points out “It’s just like lots of people talking at a party!”
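
For the curious, the basic trick behind code-division separation can be shown with a toy example: give each ‘phone’ an orthogonal spreading code, add the transmissions together on the shared channel, and let the base station pull each stream back out by correlating against the right code. This is only a sketch of the idea, nothing like the real CDMA air interface:

```python
# Toy illustration of code-division separation (the idea behind CDMA,
# not the real air interface): two phones share the channel by
# multiplying their bits with orthogonal spreading codes, and the base
# station recovers each stream by correlating against the right code.
import numpy as np

# Orthogonal (Walsh-style) spreading codes, one per "phone".
code_a = np.array([+1, +1, +1, +1])
code_b = np.array([+1, -1, +1, -1])

def spread(bits, code):
    """Map bits {0,1} to {-1,+1} and send each one through the code."""
    symbols = 2 * np.array(bits) - 1
    return np.concatenate([s * code for s in symbols])

def despread(signal, code):
    """Correlate each code-length chunk with the code to recover bits."""
    chunks = signal.reshape(-1, len(code))
    return ((chunks @ code) > 0).astype(int)

bits_a = [1, 0, 1, 1]
bits_b = [0, 0, 1, 0]

# Both transmissions occupy the channel at the same time: just add them.
channel = spread(bits_a, code_a) + spread(bits_b, code_b)

print("recovered A:", despread(channel, code_a))  # -> [1 0 1 1]
print("recovered B:", despread(channel, code_b))  # -> [0 0 1 0]
```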

MaryEFoo, you are clearly a cognitive psychologist. I can tell this because you posted a paragraph, full of plausible noises, which has zero content on closer inspection…

As a(nother) cognitive psychologist, I can tell you “it’s something to do with attention”. I.e., we can direct attention toward the things we are interested in, with at least some degree of control. So, if you are attending to a person’s speech, you are better at deciphering it than if you are not attending. This is another nearly information-free paragraph!

Staggerlee is also spot on in suggesting that lip-reading plays a part in face-to-face communication in a noisy environment. IIRC lip-reading is ‘worth’ a few dB - I cannot recall how many, but 3 sounds familiar.

But anyway, no-one knows exactly how we can do these things, though there’s lots of info on how well we can do 'em and under what circumstances.

Another term for it is “selective hearing” – how, for example, a mother can hear her baby’s cry in a nursery full of other infants.

butler1850, you may have a mild hearing loss at the human voice frequencies which could hamper your ability to filter out the background noise. A good audiological exam will test your hearing at different frequencies, then repeat the test with background noise added.

I have a low-frequency hearing loss in one ear only. One of the more noticeable effects is that it seriously reduces my ability to focus on one voice in a crowd (the cocktail party effect). Noisy, echoing (damn those Saltillo tile floors!) restaurants are by far the worst.

Anyway, based on this, I’d say that stereo hearing has a lot to do with this ability. Someone with good hearing could test this by plugging just one ear.

Regarding “selective hearing,” I’ve noticed that when everyone’s applauding during a concert, I can focus on the general applause, or that of the few people in my vicinity, or my own; and I can switch back-and-forth at will.

And yes, picking out specific instruments in an orchestra, as well.

Where is the “switch” in my brain that enables me to do this?

I can’t; I don’t have hearing loss (I got tested again in December), but I have an auditory processing disorder because of some childhood medical problems, and I’ve learned to compensate for it over the years. It turns out I’ve been doing all along some of the things that kids diagnosed with auditory processing disorders are trained to do. Apparently, most kids grow out of it, but I had so many problems at the wrong time that my brain never did.

I read lips and really pay attention to the faces of people who are talking; otherwise I never understand what people are saying when there’s a lot of background noise.

A little light reading on the subject.

Just an anecdote, but I have noticed that effect in the car with the radio on. I drive with the windows down when I can, which makes for a rather noisy drive. But I like the “fresh” air blowing.

I listen to the classical station a lot for drive-time. The classical DJs tend to have a low-key delivery to the point of being inaudible. When I pull onto the freeway, the noise level of the road goes up and I can’t hear what they are saying until I turn the sound up to blaster levels. Once my ear has locked onto the voice, I can turn the sound back down to where it was and hear them fine over the rest of the noise.

Actually, it’s nowhere close to that. This is how machines process sound, not humans.

Humans have a bunch of cells in their ears that each respond to a very narrow band of frequencies, kinda like a whole bunch of itty bitty bandpass filters. Basically, if you’ve ever watched the bar display on a graphic equalizer, that’s kind of what your ear is doing. It’s analyzing everything from a frequency perspective, not doing a time-based waveform analysis.
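
If you wanted to mimic that equalizer picture in software, a very rough sketch would be to measure the energy in a handful of frequency bands. The band edges and the test signal here are made up:

```python
# Rough sketch of the "graphic equalizer" picture above: measure how
# much energy a short snippet of sound has in each of several frequency
# bands, instead of looking at the raw waveform. Band edges are made up.
import numpy as np

rate = 16000                       # samples per second
t = np.arange(0, 0.1, 1.0 / rate)  # a 100 ms snippet

# Fake "sound": a 200 Hz hum plus a quieter 2 kHz whistle.
signal = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)

bands = [(0, 500), (500, 1000), (1000, 4000), (4000, 8000)]  # Hz
for low, high in bands:
    energy = spectrum[(freqs >= low) & (freqs < high)].sum()
    print(f"{low:5d}-{high:5d} Hz: {energy:12.1f}")
```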

It gets a bit more complicated than this, because now you are talking about nerve cells and such instead of wires, and nerve cells do weird things: they don’t like to pass on a signal until they get a bunch of them, and then sometimes they keep passing the signal even after the input has ended. You end up with some weird integration effects, and also some crosstalk from nerve signals all being in the same nerve bundle, and all this mess happens before the signal even gets to your brain. While your ears can detect tiny differences in frequency, the way all of these cells work together means they can’t follow changes in frequency that happen faster than about once every 1/15th of a second.

Then the signals get into your brain, and from here on out the whole thing is so god-awful complicated that I don’t have a clue how it works. I don’t think many people out there (even those who study this sort of thing) really understand it much better than I do either. Suffice it to say that your brain is doing some sort of pattern recognition (something that the human brain does FAR better than any machine on earth) on the frequency signals it is receiving.

I’m afraid I have no real wisdom to share but I also find this subject fascinating. I saw a hearing specialist last year because I have tinnitus. We discussed how the brain automatically filters out sounds and sensations that it knows are not important, yet it can pick them up if attention is consciously drawn to them. Apparently, tinnitus can be treated to some extent in that the brain can be taught/trained not to attend to the ringing sound… and when that ever-increasing ringing sound gets to the point where I am ready to top myself, then I’ll pony up the $5K I need to get me the help that these guys can apparently give me.

This is entirely possible (in both cases). I’ve often found that I have a much easier time hearing men’s (grown, properly pitched adult male) voices, but have a hard time picking out women’s and children’s voices. I can hear the noise, I just can’t understand it sometimes. When I get back on a decent health care plan, I’ll be sure to have it looked at. Hasn’t gotten any worse as I’ve aged, as it was never very good to start with. When I was a kid/early teen I was examined due to this very same issue, but they found nothing wrong.

As for the selective hearing, I don’t hear my 10-week-old daughter in the middle of the night. I can’t feed her (not properly equipped with the right OEM gear :cool: ), so I block it out until it becomes a “longer duration problem”. My wife hears her through walls, outside, underwater, etc. (slight exaggeration here) :cool:

Actually, you can’t. Only one computer can transmit at a time in WiFi, and there’s a rather involved procedure to avoid collisions.
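
The gist of that procedure is “listen before you talk, then wait a random extra beat before transmitting.” Here is a toy sketch of why two stations rarely talk over each other - nothing like the real 802.11 state machine:

```python
# Toy sketch of the listen-before-talk idea in Wi-Fi (not the real
# 802.11 protocol): each station that wants to transmit picks a random
# backoff once the channel sounds idle, and the smallest backoff wins.
# In real life a tie would be a collision, followed by another backoff.
import random

def contend(station_names):
    """Return the station that wins the channel this round."""
    backoffs = {name: random.randint(0, 15) for name in station_names}
    winner = min(backoffs, key=backoffs.get)
    return winner, backoffs

if __name__ == "__main__":
    winner, backoffs = contend(["laptop", "phone", "printer"])
    print("backoff slots:", backoffs)
    print("transmits first:", winner)
```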

Of course pattern recognition plays a huge part.

The simplest way this happens is when one sound with a particular frequency range and amplitude range is picked out from a background of other sounds.

The fact that we have two ears makes it a lot easier though. Signals from both ears are compared on-the-fly against each other, and many inferences are made based on the differences between them.

The simplest of these comparisons is the difference in amplitude between the two ears.

More complicated is the comparison of incredibly subtle reverberative qualities of the sound, from which we can extrapolate a lot of information about where the emitter is located.
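
To make those two cues concrete, here is a rough sketch using a fake pair of ‘ear’ signals: the level difference in decibels, and the arrival-time difference estimated by cross-correlation. All of the numbers are invented:

```python
# Sketch of the two binaural cues mentioned above, using a fake pair of
# "ear" signals: the level difference between ears, and the arrival-time
# difference estimated by cross-correlation. Numbers are invented.
import numpy as np

rate = 44100
t = np.arange(0, 0.05, 1.0 / rate)
source = np.sin(2 * np.pi * 440 * t)

delay_samples = 20          # pretend sound reaches the right ear 20 samples later
left = source
right = 0.7 * np.concatenate([np.zeros(delay_samples), source[:-delay_samples]])

# Interaural level difference, in dB.
ild_db = 10 * np.log10(np.sum(left**2) / np.sum(right**2))

# Interaural time difference, from the peak of the cross-correlation.
corr = np.correlate(right, left, mode="full")
lag = np.argmax(corr) - (len(left) - 1)   # samples by which right lags left
itd_ms = 1000.0 * lag / rate

print(f"level difference: {ild_db:.1f} dB (left louder)")
print(f"time difference:  {itd_ms:.3f} ms (left ear leads)")
```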

If you’re sitting in a theatre waiting for a show to begin, it’s a lot easier to pick out individual words from people a few rows back than it would be if you were listening to a high-fidelity recording made with a single omnidirectional microphone placed right where your head was. (Or if you were deaf in one ear.)

This is kind of parallel to how objects that have a similar colour and texture to their backgrounds are harder to discern with one eye. If you have stereo vision, they “stand out.”

Same for sound.