I don’t understand how any sound or group of sounds can be represented by just amplitude over time. (Come to think of it, I’m probably asking more about the general nature of sound than about how it’s represented.) I do understand that you need time to create frequency (though I’m not even sure why a change in amplitude creates pitch). I know that multiple sounds either reinforce or cancel each other, thereby changing the amplitude, so why do they maintain their distinct timbres? Is it because our brain does a Fourier transform in real time or something?
You might try the How Hearing Works article.
It’s incorrect to say that a sound is represented by “amplitude over time.” Amplitude is a property of a wave. Sound is represented as position or pressure as a function of time. Suspend a thin membrane in the air, and it moves with the surrounding air. You record the position of this membrane over time - 44,100 times a second in the case of CD-quality audio. This position-vs.-time data contains all the information you need to re-create the original sound. (Well, except direction, anyway.) If you do a Fourier transform, you get a graph of amplitude vs. frequency, i.e. how much of each frequency is in the sound. Yes, the brain and ears do a Fourier transform of sorts. That’s not too remarkable, though - after all, a prism does a Fourier transform as well, separating white light into a spectrum.
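If you want to see that in code, here’s a minimal sketch (Python, with made-up example frequencies): build a position-vs.-time signal out of two pure tones, then correlate it against a few test frequencies, which is what a discrete Fourier transform does one term at a time.

```python
import math

# Sketch: sample a "membrane position" signal that is the sum of two pure
# tones (440 Hz and 880 Hz are arbitrary choices), then pick out each tone
# by correlating against a sine/cosine pair at a test frequency. This is a
# naive one-frequency-at-a-time DFT, just to show the frequency content is
# sitting right there in the time-domain samples.

RATE = 8000  # samples per second (CD audio uses 44,100)
N = 8000     # one second of samples

signal = [math.sin(2 * math.pi * 440 * n / RATE)
          + 0.5 * math.sin(2 * math.pi * 880 * n / RATE)
          for n in range(N)]

def magnitude_at(freq_hz, samples, rate):
    """How much of the given frequency is present in the samples."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * n / rate)
             for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq_hz * n / rate)
             for n, s in enumerate(samples))
    return 2 * math.hypot(re, im) / len(samples)

print(round(magnitude_at(440, signal, RATE), 2))  # 1.0 (full-strength tone)
print(round(magnitude_at(880, signal, RATE), 2))  # 0.5 (half-strength tone)
print(round(magnitude_at(660, signal, RATE), 2))  # 0.0 (tone not present)
```

A real program would use an FFT library instead of this brute-force loop, but the point stands either way.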
Yeah, our brains do some pretty amazing stuff to break up a sound wave. Plus, have you ever looked at the waveform representation of a sound? There’s a lot of detail in there. You may as well ask how so much information is carried in one measly microchip: they’re both small, but oh boy do they hold information.
Check out http://www.howstuffworks.com/hearing.htm
oh man. I’m a creative sound geek, not an acoustic physicist, so while I’m up to the challenge of trying to explain this, there may be folks who can help you better with the math.
When something vibrates, it creates a “sound.” Air and other fluids (and solids) are sensitive to these vibrations to varying degrees.
The pitch of the “sound” we hear depends on the frequency of the vibration (how many times the source cycles from one end of its vibration to the other - crest to trough, or full compression to full rarefaction) and the medium through which the vibration is carried.
When a source is excited (say, a guitar string), what you hear is a fundamental tone (the frequency of the compressions and rarefactions created by the vibration of the string). There are also generally any number of sympathetic vibrations, created by the interaction of the string with the body of the guitar and by the different parts of the guitar vibrating against each other. All of the above vibrations bounce around inside the guitar, exciting the whole frame, essentially amplifying the sound acoustically while at the same time adding overtones.
As you probably know, the range of human hearing is (optimally) from somewhere around 20-50 Hz up to 20-22 kHz. 1 hertz represents 1 full vibration cycle in 1 second. In other words, the lowest frequency human ears can detect is something around 20 cycles per second. Still on board?
When two sounds of unrelated simple frequencies are combined, they create a complex wave. A simple wave looks (when recorded) like a sine wave, with rounded, even crests and troughs. A complex wave can look like anything you can imagine, really, as long as it only has one amplitude value for any given time value.
Now, if you have two sine waves of the same frequency and phase and you play them both at the same volume, the waves will reinforce each other. This isn’t terribly complicated math; in fact it’s basically just addition. When you add them together, they double in amplitude (a 6 dB boost - noticeably louder, though not quite “twice as loud” to the ear).
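Here’s that addition as a quick numeric sketch (Python, with an arbitrary 50 Hz tone):

```python
import math

# Superposition as plain addition: two identical in-phase sine waves,
# sampled and summed point by point. The peak of the sum is exactly
# twice the peak of either wave alone.

RATE = 1000  # samples per second (arbitrary for this demo)
tone = [math.sin(2 * math.pi * 50 * n / RATE) for n in range(RATE)]  # 50 Hz, 1 s

combined = [a + b for a, b in zip(tone, tone)]

print(round(max(tone), 2))      # 1.0
print(round(max(combined), 2))  # 2.0
```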
If, on the other hand, you have two copies of a sound that are the same frequency but out of phase (they peak at different times), you get a fairly distinct, metallic, sort of difficult-to-describe sound that’s generally referred to as phasing. This is because, at different points in their cycle, the waves have both an additive relationship (when both have positive amplitudes or both have negative amplitudes) and a negating or subtractive one (when one has a positive amplitude and the other has a negative amplitude).
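That in-between case can be sketched numerically too (Python; the 50 Hz tone and the quarter-cycle offset are arbitrary choices):

```python
import math

# Partial cancellation: sum two equal-amplitude 50 Hz sine waves that are a
# quarter cycle (90 degrees) out of phase. The result is still a 50 Hz sine,
# but its peak is about sqrt(2) ~ 1.41 instead of 2 - somewhere between full
# reinforcement and full cancellation.

RATE = 1000
a = [math.sin(2 * math.pi * 50 * n / RATE) for n in range(RATE)]
b = [math.sin(2 * math.pi * 50 * n / RATE + math.pi / 2) for n in range(RATE)]

mix = [x + y for x, y in zip(a, b)]
print(max(mix))  # close to sqrt(2), not 2
```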
If these two sine waves are of entirely reversed polarity (one has peak positive amplitude at the exact same moment that the other has peak negative amplitude), they cancel each other out completely. You hear nothing. This is a fun trick you can try on a mixing board: send the same signal to two channels and invert the polarity of one of them. As you move the faders closer to one another, the sound actually gets quieter, until you reach the same point on both faders and (on a good mix board) hear nothing.
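The mixing-board trick in code form (Python, arbitrary 50 Hz tone again):

```python
import math

# One channel carries a tone; the other carries the same tone with its
# polarity flipped (every sample negated). Summed at equal levels, every
# sample cancels to exactly zero - you hear nothing.

RATE = 1000
tone = [math.sin(2 * math.pi * 50 * n / RATE) for n in range(RATE)]
inverted = [-s for s in tone]  # polarity inversion

mix = [a + b for a, b in zip(tone, inverted)]
print(max(abs(s) for s in mix))  # 0.0 - total cancellation
```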
It’s a fun trick, and I use it to teach students about the importance of checking the polarity of your speaker connections and cables. It will also remind you why it’s important to wire your home stereo correctly, or you can get some nasty phase cancellation.
The reason so much information is detectable in sound waves is basically that they move very, very fast. The fundamental tones of human speech tend to be in the low hundreds of hertz, with overtones extending up past 4 kHz, meaning that when you listen to someone talk you are being bombarded by sound waves vibrating hundreds to thousands of times per second. Each person’s vocal cords have different combinations of fundamental tones, overtones and laryngeal harmonics that all combine at or near their source to form millions of constantly varying additive and subtractive relationships. But whether you are talking about a human voice, a symphony orchestra, or traffic in Manhattan, you are talking about essentially the same thing: waves bumping into each other, reinforcing and cancelling each other in dizzyingly complex patterns.
I think the main thing that helped me get my head around it in the first place was playing with a computer sound-editing program. There are thousands of them out there, and a number of them offer free demos or are available as shareware. Pick one that lets you work on a file graphically - that is, lets you look at the sound wave. Play around. Load in some sounds you are familiar with, and zoom in tight enough that you can actually see the wave. Or try creating simple sine waves: many programs come with a tone generator where you can create a tone of a given frequency. Create a couple of different simple tones and mix them together. Or create the same tone, mix it with a polarity-inverted version of itself, and watch both of them disappear.
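If you’d rather roll your own tone generator than hunt for a shareware demo, here’s a bare-bones sketch using nothing but Python’s standard library (the 440 Hz frequency and the file name “tone.wav” are arbitrary choices):

```python
import math
import struct
import wave

# Write one second of a 440 Hz sine wave to a mono, 16-bit WAV file that
# any sound editor can open - then zoom in and look at the wave.

RATE = 44100   # CD-quality sample rate
FREQ = 440.0   # A above middle C

samples = [int(32767 * math.sin(2 * math.pi * FREQ * n / RATE))
           for n in range(RATE)]

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)   # mono
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Mix two such files in an editor, or negate every sample before packing to make the polarity-inverted copy for the cancellation trick.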
Computer audio is HOURS of fun - or, if you do it for a living, years and years and years of fun. And it’s also a good way to get your head around “sound” and “sound waves” as concepts.
Hope this helps…
Good Luck
CJ