How does stereo sound really work?

I’ve always been told that humans (and other animals) determine the direction of a sound using the phase difference between the two ears. But the relative loudness of the sound between the two ears is also a big factor, isn’t it? If a sound is louder in your right ear, the sound is clearly coming from your right. So which is the dominant effect? And how do we know we can detect phase differences at all?

And which effect does stereo sound rely on? I always thought stereo was based on phase differences, but in reading the How Surround Works article, it looks like surround sound (at least the analog versions) uses phase differences to encode additional sound channels - which means that when decoded, all the channels are at the same phase. So am I correct in thinking that making a stereo recording is just a matter of mixing certain instruments louder in one channel than in the other?

Mostly, yes. The pan pot on a mixing board simply controls the amount of current that goes to each channel.
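In digital terms, a pan pot is just a pair of gains. Here’s a minimal sketch, assuming numpy; I’ve used a constant-power sine/cosine law, which is one common choice, though real consoles vary in their pan laws:

```python
import numpy as np

def pan(mono, position):
    """Pan a mono signal into stereo.

    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    A constant-power law keeps loudness roughly even across the sweep.
    """
    theta = (position + 1.0) * np.pi / 4.0   # map [-1, 1] to [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)  # shape (samples, 2)

# Example: a one-second 440 Hz tone panned halfway to the right.
sr = 44100
t = np.arange(sr) / sr
stereo = pan(0.5 * np.sin(2 * np.pi * 440 * t), 0.5)
```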

However, with the advent and popularization of digital signal processing, more subtle techniques can be used.

You can simulate the way the head filters sounds differently based on their direction. This is called HRTF (head-related transfer function). It’s great on headphones, but difficult to use over speakers.

The most common technique, though, emulates the way sound changes as it travels to your ears. The main factors are: delay (and thus phase differences at the ears), low-pass filtering due to air absorption, and of course attenuation due to distance.
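As a rough sketch of those three factors (assuming numpy and scipy; the delay, cutoff, and gain values below are made-up illustrations, not measured data):

```python
import numpy as np
from scipy.signal import butter, lfilter

def ear_signal(mono, sr, delay_ms, cutoff_hz, gain):
    """One ear's view of a source: a time delay, an air-absorption-style
    low-pass filter, and attenuation with distance."""
    delayed = np.concatenate([np.zeros(int(sr * delay_ms / 1000)), mono])
    b, a = butter(2, cutoff_hz / (sr / 2))   # gentle low-pass
    return gain * lfilter(b, a, delayed)

# A source off to the right: the left ear hears it ~0.5 ms later,
# slightly duller and slightly quieter. (Illustrative numbers only.)
sr = 44100
mono = 0.1 * np.random.randn(sr)             # stand-in for a real recording
right = ear_signal(mono, sr, 0.0, 8000, 1.0)
left = ear_signal(mono, sr, 0.5, 6000, 0.8)
n = min(len(left), len(right))
stereo = np.stack([left[:n], right[:n]], axis=-1)
```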

How do we know we can detect phase differences? Try the following experiment at home: take a mono sound file and make it stereo. Since both channels are now exactly the same, it still pretty much sounds like mono. Now delay one channel by as little as 2 milliseconds. You’ll hear a very noticeable effect.
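You can reproduce that experiment in a few lines. This sketch assumes the soundfile library and a mono WAV called input.wav (a placeholder name):

```python
import numpy as np
import soundfile as sf  # pip install soundfile

mono, sr = sf.read("input.wav")                   # assumes a mono file
delay = int(sr * 0.002)                           # 2 ms in samples
left = np.concatenate([mono, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), mono])   # right channel 2 ms late
sf.write("delayed_stereo.wav", np.stack([left, right], axis=-1), sr)
```

Play delayed_stereo.wav on headphones and the sound should appear to shift toward the left (the earlier channel), even though both channels are equally loud.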

I don’t know if this will help, but…

In the Beatles song ‘Yellow Submarine’, the instruments are on the left channel and the vocals are on the right channel. So if you move the balance slider (or whatever you happen to have) back and forth between the channels, you can hear the difference.

Mono is usually one channel of music and is very cluttered - unless it’s vocal; if it’s a speech or someone just talking, there’s no need for it to be in stereo. But with stereo you don’t have to jam, like, 30 tracks onto one channel: you can put 15 on one and 15 on the other. It just depends on how many tracks you want to blend into a song and the effects one desires. For example, in the Pink Floyd song ‘Run Like Hell’ there’s a part that fades in on the right side of the track, then moves to the left and fades out - this gives the effect of something going by, like a car driving down the street.
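That drive-by effect is just a pan position that changes over time, plus a fade. A sketch, assuming numpy, with arbitrary timing and the same constant-power pan law as above:

```python
import numpy as np

sr = 44100
t = np.arange(4 * sr) / sr                    # a 4-second pass-by
tone = 0.5 * np.sin(2 * np.pi * 220 * t)      # stand-in for the sound
position = np.linspace(1.0, -1.0, len(t))     # sweep right -> left
fade = np.sin(np.pi * t / t[-1])              # fade in, then fade out
theta = (position + 1.0) * np.pi / 4.0
stereo = np.stack([np.cos(theta) * tone * fade,
                   np.sin(theta) * tone * fade], axis=-1)
```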

hope that helped.

• The relative loudness between the two ears is virtually impossible to hear under real-world conditions; it’s the phase difference you’re hearing. Your ears are only about five inches apart, and under most circumstances the difference in dB level between any two positions only five inches apart is lower than the dB difference that human ears can discern. The environment alters how a sound is perceived, too: the sound waves interact with everything in the immediate area, so where you’re hearing the sound also makes a difference. In a room that echoes badly (or is that echoes well?), it can be impossible to determine where any sound comes from.
• Well, some of us have audio mixing software that allows us to alter the phasing and channel volume of stereo recordings, so we know what it sounds like. And sometimes songs on the radio mess with stereo phasing too: the stereo-wide chorus of Puddle of Mudd’s “Blurry” is one such effect - two tracks, one normal and the other with the left track moved “earlier” and the right track moved “later” (something like the sketch below). The beginning and ending of Enrique Iglesias’ song “Escape” is the two stereo channels transitioning into phase (the beginning) or out of phase (the ending). - DougC
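Here’s a guess at how you’d fake that “Blurry”-style widening, assuming numpy; the shift amount and mix level are arbitrary:

```python
import numpy as np

def widen(mono, sr, shift_ms=5.0, wet=0.5):
    """Layer a time-skewed copy over the original: the copy's left
    channel is nudged earlier and its right channel later, smearing
    the phase between the ears and widening the stereo image."""
    shift = int(sr * shift_ms / 1000)
    pad = np.zeros(shift)
    dry = np.concatenate([pad, mono, pad])    # centered original
    early = np.concatenate([mono, pad, pad])  # left copy, earlier
    late = np.concatenate([pad, pad, mono])   # right copy, later
    return np.stack([dry + wet * early, dry + wet * late], axis=-1)
```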

So how did A3D work? With just the two headphone speakers, you could tell whether “things” were behind or in front of you. I’m not sure what the technique is actually called, and Aureal, the company behind A3D, is out of business at this point.

Is it just a matter of how the sound sounds?

That would be the HRTF I mentioned earlier. The shape of your head changes the way sound waves get filtered before they reach your eardrum. Sounds coming from different directions pass through different parts of your head and thus get filtered differently. By faking this effect digitally you can create the illusion that a sound is coming at you from just about any angle. You can get the same effect with binaural microphones - tiny mics you fit in your ears like earphones - or with a dummy head.
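Digitally, “faking this effect” usually means convolving the dry sound with a pair of impulse responses measured for the desired direction, one per ear. A sketch, assuming numpy and that you already have hrir_left/hrir_right arrays of equal length (e.g., from a public HRTF set such as the MIT KEMAR measurements):

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Convolve a dry mono signal with the head-related impulse
    responses (HRIRs) measured for one direction. Over headphones,
    the result seems to arrive from that direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# hrir_left / hrir_right are placeholders here; real ones come from
# a measured HRTF database (e.g., MIT's KEMAR dummy-head data).
```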

Did you ever wonder why your ears have such a weird shape? Those folds and ridges ensure that waves arriving from different directions don’t get filtered the same way on their way to your eardrum.

[Slight hijack] Worth noting, however, that Pet Sounds (and to the best of my knowledge ALL of the Beach Boys’ material) was recorded in mono, even though stereo had just been developed. Why? Brian Wilson was deaf in one ear. And those recordings are hugely dense and lush without ever sounding cluttered. Not saying you’re wrong, cuz this is generally true, but pointing out that Wilson was a freakin’ genius.

Pet Sounds has been remastered in stereo, and while the remasters are good, the originals stand up amazingly well.

Too bad about Brian’s mind… :frowning:

CJ