Multiple ultrasonic sources interfere and become sonic?

This thread from the Threadspotting archive suggests I should pre-emptively apologize if this topic has already been discussed. The guest under my name should give away my noobie-ness. I asked Google whether ultrasonic issues have been addressed or not and the answer had to do mostly with bugs. If the unthinkable has happened and Google lied to me then please accept my humble apologies!

I stumbled across a story discussing the e-Lock company and its use of sound waves to transmit data. My first reaction was vaguely remembering a documentary in which two ultrasonic speakers were cleverly aimed so that their individually inaudible signals interfered with each other at their intersection to create an audible sonic wave in a fairly small area. The result was that if you could hear the sound, you could take a step in any direction and the sound would disappear.

To help me understand this better, I would like to assume that I can generate any two ultrasonic sine waves and, from that assumption, find a sonic interference wave. Knowing that I can easily pick a multiplier to make sin(x) ultrasonic, I am curious about this simplified version:


assuming 1 < a and 0 <= b < 2Pi,

define the ultrasonic wave generators f and g:
    f(x) = sin(x)
    g(x) = sin(a(x-b))

and the interference wave h:
    h(x) = sin(a(x-b)) + sin(x)

After playing with this great plotter for a while, I suspect that the h you get by letting b = 0 and a = 2 is probably not audible, but letting b = 0 and a = 1.5 gives an h which might be perceived as sonic.
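
In case anyone wants to reproduce this without the online plotter, here is a rough Python sketch of the same two cases (the frequencies here are dimensionless stand-ins, not actual ultrasonic rates):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 4 * np.pi, 2000)
    for a in (2.0, 1.5):
        # b = 0, so h(x) = sin(a*x) + sin(x)
        plt.plot(x, np.sin(a * x) + np.sin(x), label=f"a = {a}")
    plt.legend()
    plt.title("h(x) = sin(a(x - b)) + sin(x), b = 0")
    plt.show()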

Normally I would be content to prod Google until I found the inevitable resource which would answer enough of my questions that I would be able to piece together the rest. However, at the moment I happen to have a guest account here and thought perhaps I might like, just once, to get the straight dope on this issue. :) Does anyone know whether ultrasonic waves can become sonic? Has anyone else seen the documentary or the actual device? Is there anything which would suggest a sonic interference wave is not possible?

You need a nonlinear device to produce the sum and difference frequencies. Just mixing the two sources is insufficient.

To add to what mks said, simply adding two sine waves doesn’t get you anything other than those two frequencies. However, if you add them, then put that resulting waveform through a nonlinear function, then the output of it will contain frequencies that are the sum and difference of those you started with. For example, if you have one wave at 40 kHz and another at 42 kHz, and you have some nonlinear effects, which could plausibly be happening inside your head, you could hear the 2 kHz difference tone.
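
Here’s a rough numerical sketch of that, assuming the nonlinearity is a simple squaring (any curvature would do something similar):

    import numpy as np

    fs = 500_000                          # sample fast enough for the ultrasonic tones
    t = np.arange(0, 0.05, 1 / fs)
    linear = np.sin(2*np.pi*40_000*t) + np.sin(2*np.pi*42_000*t)
    squared = linear ** 2                 # stand-in for any nonlinear device

    for name, sig in (("linear sum", linear), ("after squaring", squared)):
        mag = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), 1 / fs)
        print(name, "->", freqs[mag > 0.1 * mag.max()], "Hz")
    # linear sum -> only 40000 and 42000
    # after squaring -> 0 (DC), 2000 (the difference tone!), 80000, 82000, 84000

The 2 kHz difference tone only shows up after the nonlinear step.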

I’m not sure I understand how they can direct the two sources so narrowly that they converge only in a small space.

My understanding was that you have two speakers, each generating a single ultrasonic (i.e., higher than 20 kHz) wave. By spreading the speakers out and aiming them at a single point, those two waves would meet and interfere with each other so as to at least appear sonic. I believe this rules out the nonlinear transformations.

Try plotting these three functions: { sin(x), sin(2x), sin(x) + sin(2x) }. You should see that the sum is a new waveform with a large positive peak followed by a small negative one, a small positive one, and a large negative one.

I was actually going to ask that myself! I just felt the question was already bordering on too much information – and the wrong side of the border at that! Any theories would be appreciated.

If you do a Fourier analysis of the new waveform, mapping it back into the frequency domain, you will find that it only contains two components, the original sin(x) and sin(2x).
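
It’s easy to check numerically; a quick sketch, sampling exactly whole periods so the transform bins line up:

    import numpy as np

    N = 1024
    x = np.linspace(0, 2 * np.pi, N, endpoint=False)   # exactly one period of sin(x)
    h = np.sin(x) + np.sin(2 * x)
    mag = np.abs(np.fft.rfft(h)) / (N / 2)
    print(np.nonzero(mag > 1e-9)[0])                   # [1 2]: just sin(x) and sin(2x)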

I think that longitudinal sound waves would be pretty linear, but I assume that what would be required for you to hear the difference frequency is that the nonlinear effects happen in your head. It’s not as if the sound at audio frequencies were already there waiting for you; rather, by putting your head at that spot, the audio portion is produced inside your head.

My first thought was that one ultrasonic speaker would be sending a (for example) 40 kHz sine wave, and the other would be producing a 40 kHz sine wave amplitude-modulated with the audio signal. But this would not be location-dependent; if the nonlinear part actually worked, the audio would be available everywhere. But if the two signals were each modulated in a more complex way, so that they each had complex spectra but their difference signal would be the audio, then maybe at any point other than the target, the phase differences would cause you to hear an unrecognizably scrambled signal. I’ll have to think about that some more. Can you find a link?

Like mks said, that “new waveform” is just a combination of the two tones, and it’s the two tones that your ear will hear. I used to have a setup where I could add two audio frequencies and vary the phase between them. I had this signal going to a scope and optionally to a speaker. I’d vary the phase, which would result in a drastically different-looking signal on the scope, and ask people whether they would be able to hear the difference. Everyone guessed they could, but actually they sounded exactly alike.
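
Something like this sketch reproduces the gist of that experiment, assuming a fundamental plus its octave at audio rates: the waveform’s shape (and peak-to-peak swing) changes with the phase, but the magnitude spectrum, which is roughly what the ear responds to, does not.

    import numpy as np

    fs = 44_100
    t = np.arange(0, 1.0, 1 / fs)
    for phase in (0.0, np.pi / 2, np.pi):
        sig = np.sin(2*np.pi*440*t) + np.sin(2*np.pi*880*t + phase)
        mag = np.abs(np.fft.rfft(sig))
        freqs = np.fft.rfftfreq(len(sig), 1 / fs)
        print(f"phase {phase:.2f}: peak-to-peak {np.ptp(sig):.2f}, "
              f"spectral peaks {freqs[mag > 0.1 * mag.max()]} Hz")
    # The waveform looks different at each phase, but the peaks stay at 440 and 880 Hz.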

Am I missing something obvious when posters are saying it has to be a non-linear source?

This is analogous to tuning a guitar by listening to the interference pattern between two strings, isn’t it? You get a new waveform whose amplitude varies at the difference between the component frequencies.

So if two ultrasonic sources were used, couldn’t they produce an interference harmonic within hearing range?

Earlier I decided I might try prodding Google and discovered the documentary I had in mind was about 2005 Lemelson-MIT Prize winner Elwood “Woody” Norris. He calls his innovation HyperSonic Sound, which can be licensed from his American Technology Corporation. Commercial speakers are available online from Multi-Media Solutions, Inc., with the newest model being the HSS H450 for a mere $649. The HSS Technology White Paper contains a brief introduction to non-linear acoustics with summaries of early American parametric loudspeaker research in the 1960s and Japanese research in the 1970s. In short, the air is a non-linear device with respect to sufficiently high-frequency sound waves – as mks57 and CurtC correctly suggested was a requirement.

I also ran across an article that suggested how one might direct sources of sound narrowly. This page explains that “the directivity (narrowness) of any wave producing source depends on the size of the source, compared to the wavelengths it generates.” This Wikipedia entry on Directional Sound is characteristically vague but might be a place to begin looking for more credible sources.

You are both right about the components being processed into component tones when their combination is strictly sonic. I realize now I should have emphasized again that the sin(x), sin(1.5x) and sin(2x) that I kept mentioning should be understood to be ultrasonic. I was attempting to use a shortcut so I could write sin(x) and sin(2x) when the components I had in mind would actually be sin(20 kHz * 2 * Pi * x) and sin(40 kHz * 2 * Pi * x). A rough approximation of the combination sin(20 kHz * 2 * Pi * x) + sin(40 kHz * 2 * Pi * x) can be made by iteratively interleaving one full cycle from 1.75 * sin(35 kHz * 2 * Pi * (x - 2.75)) followed by another full cycle from 0.5 * sin(60 kHz * 2 * Pi * (x - Pi)). Since 35 kHz and 60 kHz are both outside the range of human hearing, I suspect that the combination of these two ultrasonic waves is inaudible. A similar analysis of the sin(x) and sin(1.5x) case I mentioned can be done on the combination of sin(20 kHz * 2 * Pi * x) and sin(30 kHz * 2 * Pi * x). If you work through that example, I think you will find you can only throw out part of the combined wave as being ultrasonic and are left with what I suspect is an audible 10 kHz tone.

Having tuned a few guitars myself, I know exactly what you are talking about. This phenomenon is known as Tartini tones. This page has a nice explanation of how Tartini tones work:

Norris said in his whitepaper that Tartini tones are only audible to the human ear up to a few Hz and would thus be unable to recreate sound all the way up to the edge of human hearing around 20 kHz.

They’re saying it has to be a non-linear combination. Simply adding two waves does not produce any new frequencies. However, if you look at the result and play connect-the-dots with the peaks, you can get a new (lower) frequency. This process is not a linear operation and does not happen automatically when you just add two waves. That is what mks57 and CurtC are referring to. (These are “beat” frequencies, not “harmonics” in the technical sense.)

However, when your ear hears two tones at the same time, it doesn’t simply add them. There is some sort of connect-the-dots (envelope detection) going on either in the air, in the mechanism of the ear’s sound wave detection, or in the brain’s processing of the combined waves. I don’t know exactly where, and it’s probably some combination of all three. Regardless, it clearly happens because musicians use the process to tune two guitar strings, trumpets, etc. to each other.
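
If you want to see that connect-the-dots step in the open, here’s a crude sketch, assuming the simplest possible detector (full-wave rectification followed by a low-pass filter) applied to two slightly detuned strings:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 44_100
    t = np.arange(0, 2.0, 1 / fs)
    strings = np.sin(2*np.pi*440*t) + np.sin(2*np.pi*443*t)   # two detuned strings

    rectified = np.abs(strings)                # the nonlinear connect-the-dots step
    sos = butter(4, 50, fs=fs, output="sos")   # keep only the slowly varying envelope
    envelope = sosfiltfilt(sos, rectified)

    mag = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
    print(freqs[mag.argmax()])                 # 3.0 Hz: the beat rate you tune by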

Also, parabolis, the tones in question do not have to both be ultrasonic for the effect to happen, as evidenced by guitar tuning. I’m not sure what you mean in most of your large, middle paragraph because you don’t define “interleave” and “combine” clearly. I can tell you for sure that simple addition won’t get what you’re looking for, as mks57 and CurtC said.

Either way, it looks like you found what you were looking for with Norris’ work. I can personally vouch for hearing this effect for myself and it is real. I think the demo was at COSI in Toledo.

Do you realize what you are saying? You are saying that you can tell me for sure that merely adding two inaudible waves means that, just after you hear their possibly inaudible sum, your brain separates them right back into their inaudible component frequencies. This is a high-tech version of a tree falling in the forest without an audience.

What I was after was whether the sum, or combination, of two inaudible waves is ever audible.

I think what you’re describing is “beat.” It creates a periodic variation in amplitude, but it doesn’t actually produce a lower-frequency wave. It’s just the original frequency wave getting louder and softer repeatedly. If you look at it through a spectral analyzer, you won’t actually see a peak at the lower frequency, unless there’s a non-linearity somewhere in the system (e.g. microphone).
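
Here’s a quick sketch confirming that, assuming two slightly detuned audio tones: the spectrum of the plain sum shows only the two original frequencies, nothing at the beat rate.

    import numpy as np

    fs = 44_100
    t = np.arange(0, 2.0, 1 / fs)                 # 2 s, so bins fall every 0.5 Hz
    beats = np.sin(2*np.pi*440*t) + np.sin(2*np.pi*443*t)
    mag = np.abs(np.fft.rfft(beats))
    freqs = np.fft.rfftfreq(len(beats), 1 / fs)
    print(freqs[mag > 0.1 * mag.max()])           # [440. 443.] -- no peak at 3 Hz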

I’m not sure what you’re trying to say. What people have been saying above is that frequency mixing does not occur in linear systems; it requires a source of nonlinearity. This is true, essentially by the definition of a linear system. So, yes: if the air, your skull, your ear, and your brain were all linear systems, the sum of two inaudible waves would still be inaudible.

But requiring a source of nonlinearity is not a very stringent restriction. Everything is eventually nonlinear; rectification of vibrational modes is just about everywhere. The papers loosely stacked on my desk, the pictures hanging on the wall, and the pencils in the jar all are able to rectify and thus to act as nonlinear audio mixers. The neurons in your inner ear are also nonlinear (they have some threshold for a pulsed response), so they can also act as mixers; there are probably joints and tissue interfaces in the human body which also allow efficient nonlinear mixing. I’m not sure if the air is an efficient nonlinear mixer at practical frequencies and amplitudes, but there’s no shortage of nonlinearity in the real world. More often, the problem in modeling the real world is finding some way of linearizing it, not nonlinearizing it…

In short: yes. There are plenty of sources of nonlinearity making this possible.

Makes more sense now. Things really become impossible to predict intuitively as soon as the ear/brain interface is crossed. Human perception is way too subjective, and the brain’s processing way too non-linear, to use one’s own ears as a valid experimental device.

I made the mistake of assuming people might look at the graphs that I was talking about. I think what I was saying was fairly clear if you see it, and probably utter nonsense otherwise. Judging from the overwhelming “utter nonsense” vote, I decided to reformulate a much simpler version and provide low-tech ASCII art. Anybody who would like to see a prettier version is encouraged to visit this nifty online plotter or to use whatever else you are comfortable with. You can just cut and paste the formula I provided in my ASCII art as long as you delete the ‘kHz’ part.

Imagine you have two ultrasonic sources. The first is pumping out a 20 kHz tone. The second is pumping out a perfectly synchronized 30 kHz tone. The waves they create will mix in the air somewhere between the two sources. Also imagine you have a listener somewhere inside the area in which the two waves overlap. The wave inside this area of overlap is given by sin(20 kHz * 2 * Pi * x) + sin(30 kHz * 2 * Pi * x). Let’s call this sum the combination of 20 kHz and 30 kHz.
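
For anyone who prefers to plot this locally rather than with the online plotter, a minimal sketch of the same combination:

    import numpy as np
    import matplotlib.pyplot as plt

    fs = 1_000_000                        # 1 MHz sampling, plenty for 30 kHz
    t = np.arange(0, 0.0002, 1 / fs)      # 0.2 ms = two repeats of the pattern
    combo = np.sin(2*np.pi*20_000*t) + np.sin(2*np.pi*30_000*t)
    plt.plot(t * 1e6, combo)
    plt.xlabel("time (microseconds)")
    plt.title("sin(20 kHz * 2 * Pi * t) + sin(30 kHz * 2 * Pi * t)")
    plt.show()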

Part of my original question was whether this combination is audible. Can the ear detect anything at all? To answer this I asked myself what would keep an ear from detecting a sound. The answer is that if any part of the signal is ultrasonic, then that part will be invisible to the ear. I marked the following graph to show single-cycle elements in the combination’s graph and their approximate frequencies:


ASCII Graph of sin(20 kHz * 2 * Pi * x) + sin(30 kHz * 2 * Pi * x)
       -                               -
     /   \                 -         /   \                 -
    |     |              /   \      |     |              /   \
    |     |             |     |     |     |             |     |
    |     |      / \    |     |     |     |      / \    |     |

    -----------------------------------------------------------------
          |     |    \ /      |     |     |     |    \ /      |     |
          |     |             |     |     |     |             |     |
           \   /              |     |      \   /              |     |
             -                 \   /         -                 \   /
                                 -                               -

...27kHz |      | 52kHz |     |   27kHz   |     | 52kHz |    | 27kHz...

I have been using 20 kHz as the simplified cutoff for hearing, which means that the 27 kHz and 52 kHz elements are ultrasonic. Being ultrasonic means these parts of the graph are invisible to the ear. What would the graph look like if they were removed? Here is a graph with the inaudible parts just yanked out:


ASCII Graph of 'sonic residue' of sin(20 kHz * 2 * Pi * x) + sin(30 kHz * 2 * Pi * x)
                           -                               -         
                         /   \                           /   \       
                        |     |                         |     |      
                        |     |                         |     |      
    -----------------------------------------------------------------
          |     |                         |     |                    
          |     |                         |     |                    
           \   /                           \   /                     
             -                               -                       
                                                                     
          |      10 kHz       |           |      10 kHz       |      


How interesting! The result might just be interpreted by the ear as a 10 kHz tone. Since 10 kHz is below the 20 kHz ultrasonic cutoff it is not entirely nutty to suspect someone might hear this.

Now for questions that have come up along the way.

How do you justify a Fourier analysis for a signal that is not heard by the ear? What is there to do the analysis if the ear cannot get the signal? The thing separating those frequencies is the brain. The brain can only analyze a signal it gets from the ear, but I started out asking about what the ear detects. Talking about Fourier analysis or component frequencies is begging the question.

You can’t just go snipping out portions of the waveform that you think are inaudible. If you put your composite signal through a 25 kHz low-pass filter, the result is a constant-amplitude 20 kHz signal. If I were you, I’d forget about waveforms and the time domain. If you stick with the frequency domain, you will have a better picture of what’s really going on.
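
If you want to see it, here is a quick numerical sketch, assuming a reasonably sharp digital low-pass at 25 kHz (an 8th-order Butterworth here; any decent filter tells the same story):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 1_000_000
    t = np.arange(0, 0.01, 1 / fs)
    combo = np.sin(2*np.pi*20_000*t) + np.sin(2*np.pi*30_000*t)

    sos = butter(8, 25_000, fs=fs, output="sos")   # 25 kHz low-pass
    filtered = sosfiltfilt(sos, combo)

    mag = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
    print(freqs[mag.argmax()])                     # 20000.0 -- not 10000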

Thanks for the reply, mks57. I suspected the answer had to do with how low-pass filters work. :)

It would bring me much joy if you were to explain where you got the 20 kHz and 25 kHz figures. I also suspect you meant a 25 kHz signal and a 20 kHz low-pass filter (the filter being the ear)?

Would this idea work for an ideal low-pass filter?