A 70mm film shot and shown at 60 fps is said to resemble videotape (until you get those film artifacts: dirt, scratches, etc.).
Douglas Trumbull’s Showscan company uses this for their specialty films.
Try putting your waveform through a 20 kHz low-pass filter before it is sampled.
A square wave is composed of a fundamental (f) and its odd harmonics (3f, 5f, 7f, etc.). Do you think that the ear can hear the difference between a 15 kHz sine wave and a 15 kHz square wave? All of the odd harmonics are well out of the range of human hearing.
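To put numbers on it, here is a tiny Python sketch (the frequencies and the ~20 kHz hearing limit are just the usual ballpark figures) listing where the partials of an ideal 15 kHz square wave land:

    import math

    # An ideal square wave of frequency f contains only f, 3f, 5f, ...
    # with amplitudes falling off as 1, 1/3, 1/5, ...
    f0 = 15_000  # Hz
    for n in range(1, 12, 2):                  # odd harmonics only
        freq = n * f0
        amp = 4 / (math.pi * n)                # Fourier coefficient of a unit square wave
        where = "audible" if freq <= 20_000 else "above the ~20 kHz hearing limit"
        print(f"{freq / 1000:5.0f} kHz  amplitude {amp:.3f}  ({where})")

Only the 15 kHz fundamental lands inside the audio band; everything else starts at 45 kHz.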
The ear can hear the difference between a 5 kHz sine wave and an 8-value stair step simulation of a 5 kHz sine wave.
Nothing needs to be simulated… as long as you sample above 10 kHz, the sine wave can be reproduced.
I’d think 100 Hz a little high for the lower limit of the range; most people (AFAIK) don’t have any trouble hearing mains voltage hum, which is 50 Hz (here in the UK, at least).
Comparing vinyl with a CD played on a car stereo is hardly fair! Car stereos are rarely true hi-fi in themselves, but more importantly – you’re sitting in a car! There can’t be a much worse place to be if you’re looking for decent sound imaging - cars are not designed for listening to hi-fi sound.
In a properly designed system, you will never have the opportunity to listen to “an 8-value stair step simulation of a 5 kHz sine wave”. That is what you might see at the output of the DAC. However, there is a reconstruction filter at the output of the DAC. This is a low-pass filter that removes unwanted artifacts produced by the D/A process. It is a sibling of the anti-aliasing filter used on the input to the ADC.
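To make that concrete, here is a minimal sketch using scipy’s Butterworth design (the 192 kHz stand-in rate and 8th-order filter are arbitrary illustrative choices, not what any particular player uses) showing how such a low-pass filter treats different frequencies:

    import numpy as np
    from scipy.signal import butter, freqz

    fsim = 192_000                            # stand-in rate for the "analog" side
    b, a = butter(8, 20_000 / (fsim / 2))     # 8th-order low-pass, 20 kHz cutoff

    for f in (1_000, 10_000, 19_000, 24_100, 44_100):
        w, h = freqz(b, a, worN=[2 * np.pi * f / fsim])
        print(f"{f / 1000:5.1f} kHz: gain {20 * np.log10(abs(h[0])):7.1f} dB")

Frequencies in the audio band pass essentially untouched while everything above the old Nyquist limit is attenuated more and more steeply; real players use much sharper filtering (and typically oversampling) than this toy example.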
That’s not true.
Digitally sampling a wave involves measuring its amplitude (voltage) at discrete intervals. All information about what the wave does between those time points is lost. The DACs in early CD players simply maintained the output voltage between samples.
This approach produced distortion, as the voltage steps injected high frequency harmonics into the signal. Even with filtering, those steps meant that early players were essentially trying to build up a simulation of a sine wave, or whatever, with a series of square waves.
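Here is a little numpy sketch of what that stepped output looks like in the frequency domain (the 5 kHz tone, 44.1 kHz rate, and 16x finer “analog” grid are just illustrative choices):

    import numpy as np

    fs, up = 44_100, 16
    fsim = fs * up                             # finer grid standing in for "analog" time
    t = np.arange(0, 0.05, 1 / fsim)
    tone = np.sin(2 * np.pi * 5_000 * t)       # 5 kHz test tone

    # Zero-order hold: keep each 44.1 kHz sample flat for 16 sub-steps
    staircase = np.repeat(tone[::up], up)[:len(t)]

    spec = np.abs(np.fft.rfft(staircase)) / len(staircase)
    freqs = np.fft.rfftfreq(len(staircase), 1 / fsim)

    # Strongest components: the 5 kHz tone plus "images" near 44.1 kHz +/- 5 kHz, etc.
    for i in sorted(np.argsort(spec)[-6:]):
        print(f"{freqs[i] / 1000:7.2f} kHz   level {spec[i]:.4f}")

The spurious components all sit far above the audio band, which is what the output filter is there to remove.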
Now they ramp the voltage between sampled values. This produces a smoother output, with fewer spurious high frequencies, but it still doesn’t get the signal back to its original analog shape. It’s not possible to get back to the original waveform, because the information needed to do so was lost in the original A to D conversion.
It’s a bit of a misnomer to call it a ‘reconstruction filter’. The system has no information with which to reconstruct the voltage of the waveform between sampled values. It merely fills that time in with voltages moving smoothly from one value to the next. That produces a pleasing sound, and on average it’s close to correct, but it does not reproduce the original analog waveform.
Yes, it does actually, under the condition that the waveform is bandlimited by the Nyquist frequency (say 20 kHz, for example). I think the problem here is that you are picturing a waveform with a base frequency right at the 20 kHz boundary that is not a simple sine wave. But, what would such a waveform look like? If it were anything other than a simple sine wave, it would have to have components with frequencies higher than 20 kHz, which we cannot allow. To do reconstruction, we don’t just connect the dots to make square waves (you seemed to imply this was the only option with two sample points), but we use sinc interpolation, which would correctly reproduce the exact waveform up to the Nyquist frequency.
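For anyone who wants to see sinc interpolation in action, here is a small numpy sketch (the 18 kHz tone, 44.1 kHz rate, and 200-sample window are arbitrary; a finite window can only approximate the infinite sum in the theorem):

    import numpy as np

    fs, f0 = 44_100, 18_000                 # sample rate and a tone below Nyquist (22.05 kHz)
    n = np.arange(200)                      # the samples we actually keep
    x = np.sin(2 * np.pi * f0 * n / fs)

    # Sinc interpolation: x(t) = sum over n of x[n] * sinc(t*fs - n)
    # Evaluate the reconstruction on a 10x finer time grid.
    t = np.arange(0, 200, 0.1) / fs
    recon = np.array([np.sum(x * np.sinc(ti * fs - n)) for ti in t])

    ideal = np.sin(2 * np.pi * f0 * t)
    mid = slice(500, 1500)                  # stay away from the edges of the finite window
    print("worst-case mismatch in the middle:", np.max(np.abs(recon[mid] - ideal[mid])))

The leftover mismatch comes from truncating the sum to a finite window, not from anything lost in the sampling itself.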
Now, this is all theoretical… there are real practical issues with 1) getting good pre-filters to ensure you are below Nyquist, and 2) having enough bits in the A/D converter to make quantization error negligible.
engineer_comp_geek got to the heart of why this won’t work. Think about it this way: if you tried to increase the dynamic range this way, the actual amplitude values represented by the samples would not be equally spaced. So, adding 00000010 to the sample would NOT (in general) increase the amplitude by twice what adding 00000001 to the sample would. As was pointed out, this would make the reconstruction a nonlinear operation, which is generally undesirable. Technically, you are right in pointing out that the dynamic range is not just a function of the bit depth, but most people assume a linear reconstruction scheme (equally spaced quantization intervals, etc.).
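A quick back-of-the-envelope for the standard linear case (just the usual 6 dB-per-bit rule of thumb):

    import math

    # With equally spaced quantization levels, each extra bit doubles the
    # number of levels, adding 20*log10(2) ~= 6.02 dB of dynamic range.
    for bits in (8, 12, 16, 20, 24):
        levels = 2 ** bits
        print(f"{bits:2d} bits -> {levels:10,d} levels -> ~{20 * math.log10(levels):5.1f} dB")

That is where the familiar ~96 dB figure for 16-bit audio comes from.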
You seem to be fixated on waveforms (time domain). Much of this stuff makes more sense when examined in the frequency domain.
Here are the standard references on the subject:
Nyquist, H. “Certain topics in telegraph transmission theory,” AIEE Trans., vol. 47, pp. 617–644, Jan. 1928.
Shannon, C. E. “Communications in the presence of noise,” Proc. IRE, vol. 37, pp. 10–21, Jan. 1949. (available at http://www.stanford.edu/class/ee104/shannonpaper.pdf)
Sure, but looking at it that way eliminates any possibility of examining say 20 kHz sawtooth waves. Those surely exist in the analog world. I think we have here more a difference in emphasis on the analog or digital side, than any real disagreement.
Interpolation works when the exact waveform is a sine wave.
There’s another problem with sampling a signal at exactly twice its frequency that hasn’t been mentioned thus far. When the sampling rate and the waveform are in a constant phase relationship, as when sampling a 20 kHz wave at 40 kHz, you lose amplitude information. In the worst case, when your sample times correspond to zero amplitude nodes of the waveform, the signal disappears completely. These errors in amplitude don’t completely go away when you sample at a rate above the Nyquist frequency.
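You can see this directly with a few lines of Python (the phases are picked arbitrarily):

    import numpy as np

    f0, fs = 20_000, 40_000                 # sampling at exactly twice the signal frequency
    n = np.arange(1000)
    for phase_deg in (0, 30, 60, 90):
        phase = np.deg2rad(phase_deg)
        samples = np.sin(2 * np.pi * f0 * n / fs + phase)
        print(f"phase {phase_deg:3d} deg: largest sample = {np.max(np.abs(samples)):.3f}")

At 0 degrees every sample lands on a zero crossing and the 20 kHz tone vanishes from the data entirely; at 90 degrees you capture its full amplitude.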
You said earlier that you were not intending to suggest that the Shannon-Nyquist sampling theorem was false, yet you make this statement. I think you don’t really understand what the theorem says. It says that if you have a band-limited signal (limited to ~20 kHz in this example) and you sample it at a rate more than twice the cutoff frequency, then there is enough information to reproduce the signal perfectly. Not approximately, perfectly. This isn’t an engineering “close enough” kind of thing, it’s a mathematical proof.
Maybe, but those harmonics were all above 20 kHz and would be filtered out before you hear them.
Please study the sampling theorem to understand why this isn’t true.
Yes, they could exist, but they do not meet the criteria for the sampling theorem because in order to exist they must contain frequencies above 20 kHz. Though there is some work in reconstructing non-bandlimited signals, the straightforward sampling case we are discussing here can only work with signals that meet the Nyquist condition. BUT, in reference to your first question, signals meeting those criteria can have their waveforms exactly reproduced, and not just “render a square wave at the same frequency” as you initially indicated.
Just to be clear, interpolation works with ANY waveform that meets the Nyquist criteria (which is restricted to a sine wave only at the exact Nyquist limit). Again, this is only speaking theoretically, and there are real practical issues involved.
You are absolutely right. The problem you described only happens at the exact Nyquist frequency though. I honestly don’t remember off the top of my head if the original statement of the theorem is a strict inequality or not, but it is clear from your example that exact equality with Nyquist has problems. But, if you make the signal have frequency (20 kHz - epsilon), the signal and the sampling cannot stay out of phase forever, and the problem goes away. This does require signals that are very long, which is obviously a real practical problem. But, remember that any finite length signal is not going to (technically) be bandlimited. So, we are faced with a situation where there is a nice theory, that cannot be strictly applied in practice, but which is often close enough for us to still use it.
I frequently experience a nonlinear scaling of my bits when I see a shapely woman in tight pants.
Indeed, but they’re composed of 20 kHz sine waves plus higher harmonics, and those harmonics are inaudible, so you can’t hear the difference anyway (if you can hear the 20 kHz tone at all!). Filtering changes it into a simple 20 kHz sine wave, which can then be digitized and reproduced perfectly.
Oddly enough though, a car has some very good acoustic aspects.
A car does not have problems with bass saturation: the fact that the upper part of the cabin is glass acts like an infinite bass trap. Lack of parallel surfaces eliminates standing waves, and the sheer number of different angles diffuses quite nicely. Also, you’ve got a variety of different surfaces (dashboard vinyl, seat padding) that trap frequencies over a wide range. Hypothetically, in a car you’re really hearing the speakers/amps, with almost no room coloration.
Mind, I’m not really disagreeing with Colophon: it’s virtually impossible for the listener to place themselves optimally to perceive a stereo image, it’s far too cramped a space for the soundwaves to “breathe”, most car electronics (even high-end) are not really hi-fi (so the electronics themselves color the sound), and there is no soundproofing, so all ambient noise further colors what you’re listening to.
Note: this is based on an article I read by someone who designs acoustics for recording studios and mastering rooms. I’ll keep an eye out for it and post it if it seems relevant.
But real life analog audio signals are NOT band-limited. When you filter them either before digitizing, or with the digitization process itself, you introduce distortion. See figure 3 here for an example of low pass filter distortion. How important that distortion is to human perception of the sound is up for debate, and I won’t pretend to know the precise algorithm our brains use to determine the quality of a sound. I don’t think anyone knows.
You still get amplitude problems at non-Nyquist frequencies, as the sampling moves in and out of phase with the analog waveform in a regular sequence. That gives you beat frequencies that weren’t in the original analog signal. What Nyquist sampling guarantees you is that the main component of your output signal cannot be a beat frequency. It doesn’t guarantee that the output won’t contain regular changes in amplitude that weren’t part of the original signal.
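Here is a sketch of the effect being described, looking only at the raw stored sample values (the 22 kHz tone and 1 ms windows are arbitrary choices):

    import numpy as np

    fs, f0 = 44_100, 22_000            # a constant-amplitude tone 50 Hz below Nyquist
    n = np.arange(441)                 # 10 ms of samples
    x = np.sin(2 * np.pi * f0 * n / fs)

    # Largest raw sample in each 1 ms (44-sample) window: the sample instants
    # drift in and out of phase with the tone, so the stored values show a
    # slow envelope even though the tone itself never changes amplitude.
    for k in range(10):
        window = x[k * 44 : (k + 1) * 44]
        print(f"{k} ms: largest sample = {np.max(np.abs(window)):.3f}")

Whether that envelope survives into the analog output is a question about the reconstruction step, which the following posts take up.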
A couple of people have stated that the information of what the waveform does between sample points is lost, but this isn’t really correct for a bandlimited signal sampled above twice the highest frequency. That “information” is redundant, and contained in the complete set of samples.
squink wrote:
This will happen if you just linearly interpolate between the samples. If you use more than just two sample points, you can more accurately reproduce the original signal. The closer the signal’s highest frequency is to half the sample frequency, the more points you need to accurately reproduce the signal. For (say) a 22 kHz signal on a CD, it ends up being hundreds of samples.
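A rough illustration of that point (the 22 kHz tone and window sizes are just picked to make it visible):

    import numpy as np

    fs, f0 = 44_100, 22_000            # a tone very close to Nyquist (22,050 Hz)
    T = 1 / fs

    def sinc_reconstruct(t, half_width):
        # Estimate x(t) using only the samples within +/- half_width samples of t.
        n0 = int(round(t / T))
        n = np.arange(n0 - half_width, n0 + half_width + 1)
        return np.sum(np.sin(2 * np.pi * f0 * n * T) * np.sinc(t / T - n))

    t_test = 12345.4 * T               # an arbitrary instant between two samples
    true_value = np.sin(2 * np.pi * f0 * t_test)
    for w in (10, 100, 1000):
        err = abs(sinc_reconstruct(t_test, w) - true_value)
        print(f"+/- {w:4d} samples: error = {err:.5f}")

For a tone this close to Nyquist, the interpolation needs a window spanning hundreds of samples before the estimate settles down.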
I assume you’re referring to the square wave picture “b”. That isn’t distortion; that’s just what a square wave looks like with some of the harmonics removed. (I wrote a little wave graphing program last night, inspired by this thread… it takes a lot of harmonics to make the wave look square.) Since the frequencies that are removed are out of the range of human hearing anyway, you can’t tell the difference. Remember, the ear hears frequencies, not waveforms.
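For anyone who wants to try it, a few lines along those lines (numpy, with an arbitrary number of points per cycle; this is just a sketch, not the program mentioned above):

    import numpy as np

    # Partial Fourier sums of a square wave: fundamental plus odd harmonics
    # 3f, 5f, 7f, ..., each with amplitude 4/(pi*n).
    t = np.linspace(0, 1, 1000, endpoint=False)       # one cycle
    ideal = np.where(t < 0.5, 1.0, -1.0)              # ideal square wave

    for count in (1, 5, 25, 125):
        odd = np.arange(1, 2 * count, 2)              # first `count` odd harmonics
        partial = sum(4 / (np.pi * n) * np.sin(2 * np.pi * n * t) for n in odd)
        print(f"{count:4d} harmonics: mean error {np.mean(np.abs(partial - ideal)):.3f}, "
              f"peak value {partial.max():.3f}")

The mean error shrinks as harmonics are added, but the overshoot near the jumps (the Gibbs effect) never drops below roughly 9%.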