This chart has confused me (scroll to the bottom where it shows the entire spectrum). According to this, sound is on the EM spectrum, at the lowest end? Is that right? If it is, why is it so much slower than other EM waves? And why does it require a medium?
Related (or maybe unrelated) question: Why does the wind affect how sound carries? If you stand upwind (in a strong wind) of someone and speak to them, they can hear you; if you stand downwind, they won't. Sound waves travel at around 300 m/s, and a strong wind would be less than 20% of that, so one would expect the sound to still reach them, albeit a bit delayed; instead it doesn't get to them at all. I suspect an obvious answer…
Heh, I have that same chart hanging on the wall here next to me. Anyway, while the frequencies in question may be in the audio range, that doesn't make them sound waves. In other words, the signals running down your speaker wires are EM waves at audio frequencies; the speaker then converts them to sound waves (without changing their frequency).
Even when a frequency is allocated for the transmission of sound, such as AM or FM radio, the transmissions themselves are not sound waves. The radio, through its speaker, converts those EM waves into sound.
Lest there be any confusion, the AM and FM radio band frequency allocations are well above the audio frequencies and the transmitted signals do not resemble audio waveforms.
As it has 'Audible Range' alongside 'AM Broadcast', I'm wondering whether such low-frequency radio signals are ever used to carry a sound waveform directly, without going through the usual modulation onto a carrier?
This is it. The frequencies of these EM waves are the same as the frequencies of audible sound waves. Sound and EM waves are not the same thing; they just happen to share the same frequencies in this range.
No answer for the second question, but I would guess it has to do with noise. That ‘bit of delay’ the OP mentions is probably enough time for noise to degrade the signal to the point of inaudibility.
You could do it, but I’m not sure why you would. It’s not like modulation is difficult. In fact, it’s probably easier to modulate sound onto a carrier wave than it is to convert it directly.
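To put a little flesh on "not difficult": here's a minimal Python sketch of plain AM, assuming a made-up 1 kHz tone and a 1 MHz carrier (those figures are mine for illustration, not taken from the chart or anything upthread):

```python
import numpy as np

# Minimal amplitude modulation (AM) sketch: an audio-frequency tone is
# impressed onto a much higher-frequency carrier.
fs = 8_000_000                     # sample rate in Hz (assumed), high enough to represent the carrier
t = np.arange(0, 0.002, 1 / fs)    # 2 ms of signal

f_audio = 1_000                    # 1 kHz "sound" waveform (assumed for illustration)
f_carrier = 1_000_000              # 1 MHz carrier, roughly the AM broadcast band (assumed)

audio = np.sin(2 * np.pi * f_audio * t)
carrier = np.cos(2 * np.pi * f_carrier * t)

m = 0.5                            # modulation index; keeping m <= 1 avoids overmodulation
am_signal = (1 + m * audio) * carrier

# The transmitted wave oscillates at ~1 MHz; only its envelope traces the
# original 1 kHz tone, which is why the signal on the air doesn't look
# like the sound waveform itself.
print(am_signal[:5])
```

Recovering the audio at the receiver is then just envelope detection, which is part of why AM hardware can be so simple.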
As an amusing aside, some of the most interesting gravitational wave sources expected would fall right smack dab in the middle of the audio range. At talks on the subject, folks will often play sound files (simulated, of course, since we don't have any real detections yet) of a black hole merger chirp.
A sound wave in air is a pressure wave, meaning it is made up of a series of compressions and rarefactions (a compression when a speaker cone moves forward, a rarefaction when it moves backward). When the wind comes along, it disrupts that orderly progression of compressions and rarefactions, and that is what degrades the sound. The wind also doesn't affect all frequencies equally: because their wavelengths are shorter, higher frequencies are affected more than the low frequencies.
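To put rough numbers on the wavelength point, here's a quick sketch using the usual relation wavelength = speed / frequency, assuming ~343 m/s for the speed of sound in air (the specific frequencies below are just illustrative):

```python
# Rough wavelength check: wavelength = speed of sound / frequency.
speed_of_sound = 343.0  # m/s in air at room temperature (assumed value)

for freq_hz in (100, 1_000, 10_000):
    wavelength_m = speed_of_sound / freq_hz
    print(f"{freq_hz:>6} Hz -> wavelength ~ {wavelength_m:.3f} m")

# 100 Hz comes out around 3.4 m, 1 kHz around 0.34 m, and 10 kHz around
# 0.034 m, so the high-frequency components are far smaller than typical
# gusts and eddies, while the low-frequency ones are comparable or larger.
```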
Why does wind affect sound asymmetrically? I believe that's what the OP is asking. You can talk to someone downwind of you, but not to someone upwind of you.
Come to think of it, I can’t remember this actually happening to me. Does anyone else notice this sound asymmetry in the wind, or is it more like ‘more wind = more noise’ regardless of direction, until at high wind speeds other sounds are drowned out entirely?
You do realize that’s a log scale, right? The AM broadcast range is about 1000x the frequency of the sonic range.
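Quick arithmetic behind that "about 1000x" (the band figures below are assumed typical values, not read off the chart):

```python
import math

audio_mid = 1_000       # Hz, roughly the logarithmic middle of the 20 Hz - 20 kHz audio range
am_mid = 1_000_000      # Hz, roughly the middle of the ~530 - 1700 kHz AM broadcast band

print(am_mid / audio_mid)              # 1000.0
print(math.log10(am_mid / audio_mid))  # 3.0 -> three decades apart on a log axis
```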
One possible reason to keep the audible range in mind while looking at EM waves is microphonics (components acting as unwanted transducers, turning vibration into electrical noise), or the reverse (audible whining from some devices).
Because your mouth is on the front of your head. You can talk if the air is blowing around your mouth, but not if it's blowing into it. Additionally, the longer a sound wave hangs in the air, the more time it has to dissipate. I can't guess which effect matters more.