Do Compact Disks have the capability to reproduce sound perfectly?

No. That statement is simply incorrect. Sampling above the Nyquist rate for a strictly bandlimited signal will allow you to reconstruct the signal waveform exactly. Period. The properly reconstructed signal will not contain any components that were not in the original signal.

I think the confusion you are having comes from the fact that you are picturing the samples themselves as representing the original signal. Trying to extract information about the signal (such as the peak or RMS amplitudes) is fundamentally different from performing reconstruction. In reconstruction (sinc interpolation), all of the sample points contribute something to almost all other points in time (except for the exact points where other samples were taken). This allows the redundancy between the points to be exploited to perform better reconstruction than if you just linearly connected the dots (which would show the beating you describe). The incredible thing here is that the reconstruction is not just better, it is perfect. Again, this is all only true under assumptions that are often violated in real life…but we’re only talking theory here.
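
To make that concrete, here is a rough numpy sketch (a toy illustration I made up, not how an actual player works; with only a finite block of samples the sinc sum is only approximately exact, so the comparison sticks to the middle of the block) of sinc reconstruction versus connect-the-dots for a tone near Nyquist:

[code]
import numpy as np

fs = 44100.0                     # CD sample rate, Hz
f0 = 18000.0                     # a tone fairly close to Nyquist (22050 Hz)
n = np.arange(400)               # sample indices
samples = np.sin(2 * np.pi * f0 * n / fs)

# Whittaker-Shannon (sinc) reconstruction: every sample contributes a scaled
# sinc to every instant in time:  x(t) = sum_k x[k] * sinc(fs*t - k)
t = np.linspace(150 / fs, 250 / fs, 2000)    # stay in the middle of the block
recon = np.zeros_like(t)
for k, xk in zip(n, samples):
    recon += xk * np.sinc(fs * t - k)

truth = np.sin(2 * np.pi * f0 * t)
linear = np.interp(t, n / fs, samples)       # naive connect-the-dots

print("max error, sinc interpolation:  ", np.max(np.abs(recon - truth)))
print("max error, linear interpolation:", np.max(np.abs(linear - truth)))
[/code]

The sinc version’s remaining error is just the truncated sum; connect-the-dots at 18 kHz is wildly wrong, which is exactly the “beating” you see if you stare at the raw samples.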

  • CDs don’t reproduce sound perfectly; they simply can be duplicated easily and accurately in mass production, and in normal use they are very wear-resistant. In those respects they were better than anything else available at the time. Nobody really argued that vinyl wasn’t good-enough audio quality; the consumers’ argument was that vinyl was too easily susceptible to damage and wear.
  • CDs use an audio signal that is 16 bits deep at 44.1 kHz, but when studios use digital recording equipment they usually operate it at 24 bits and 96 kHz, if not higher. The reason is that every time a digital signal is processed or combined in any way, you lose some accuracy to the rounding of signal levels. So they record and mix the audio at the higher resolution, and then, in the final step, mix it down to CD-audio resolution.
  • The part above about “mixing digital audio resulting in losing accuracy” is the justification for the newer multi-channel audio formats. With analog recording media you can mix channels and still separate them and get a good result–them having essentially an infinite bit rate, you see–but you don’t get a good result if you do this with an audio-CD track, because the two separated tracks must each have a bit rate that is less than one-half of the track they were mixed together in.
  • Interestingly enough, most major recording studios still maintain fully analog recording equipment, running tube amps and analog tape machines–and something like three-fourths of all new recordings are still done on fully analog setups. When digital studio recording equipment became available, many industry pundits claimed that digital would eventually be less expensive to buy and maintain, and was easier to use–but that never happened. The reason is that musicians and producers prefer the way the analog equipment sounds. Most reasons center around analog equipment’s tendency to “soft-overload” instead of clipping the way digital equipment does. You might be old enough to recall that when the first fully digitally produced CDs came out, they used to have a little blurb on there that basically bragged about being “fully digital”–but you don’t see those anymore. It’s because most artists who were able to choose went back to using analog equipment.

It’s been a while since I looked at this, but I think you can prove that a square wave is THE slowest converging signal as you add more harmonics. Also, remember that the wiggly peaks at the discontinuities will never completely go away…Fourier series converges in the mean-squared sense and not the pointwise sense. The peaks get taller as you add more terms, but the width gets narrower at a faster rate so they end up having zero area underneath. Another way to say it is that as you add more and more harmonics, the reconstruction is not error-free, but the power in the error converges to zero.
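
If anyone wants to watch the overshoot refuse to die, here is a quick, purely illustrative numpy sketch of partial Fourier sums of a square wave. The peak overshoot hovers around 9% of the jump no matter how many harmonics you add, while the mean-squared error keeps shrinking:

[code]
import numpy as np

t = np.linspace(0, 1, 20000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * t))          # ideal 1 Hz square wave

def partial_sum(n_harmonics):
    # Fourier series of a square wave: (4/pi) * sum over odd k of sin(2*pi*k*t)/k
    x = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):
        x += (4 / np.pi) * np.sin(2 * np.pi * k * t) / k
    return x

for n in (10, 100, 1000):
    approx = partial_sum(n)
    print(f"{n:5d} harmonics: peak overshoot {approx.max() - 1:.3f}, "
          f"mean-squared error {np.mean((approx - square) ** 2):.5f}")
[/code]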

      • Well damn, should have previewed:
        In the last paragraph, “When digital studio recording equipment became available, many industry pundits claimed that studios would all abandon using analog equipment because digital would eventually be less expensive to buy and maintain, and was easier to use–but that never happened.”

How many samples does the current generation of CD players store to correct for this?

I would be very interested to see a proof of this, as it would imply that you can fit an infinite amount of information onto a CD. To illustrate: Suppose I have a set of N different sound sources. Each source produces a pure sine wave output, at a single frequency, and each frequency is in the range 100 Hz to 1000 Hz. If I understand you correctly, you’re saying that an audio CD can perfectly capture and reproduce each of those sources. You could then make N CDs, one for each sound source, and each CD would be distinguishable from all of the others. But there’s no limit on how high N can be! I could, in principle, pick 2[sup]1000000000[/sup] (that’s two to the billionth power) different frequencies in the range 100 Hz to 1000 Hz, and make a sound source for each, giving me that many distinct CDs, but there are not that many distinct combinations of bits on a CD!

That’s an interesting observation, Chronos. The subtlety here is that the sampling theorem assumes you are able to represent the samples exactly. Of course, we can’t do that on a computer (or a CD), because it would require an infinite number of bits to represent each sample. As people have mentioned earlier, CDs are limited to 16 bits per sample. You can analyze the impact of the quantization error on the reconstructed signal (which often ends up looking like you’ve added noise to the signal), and in doing so you would find practical limits on the number of distinct sources you could identify with any reliability.

Sorry…we haven’t been very explicit about quantization in our discussion of the sampling theorem. Like many other idealizations, exact sampling is a practical impossibility, but the theory is helpful in guiding us toward better real-life applications.

If you would like to see a proof, mks57 mentioned some primary sources earlier. Lots of hits turn up when you search for things like “sampling theorem”, but most of the ones I looked at require some working knowledge of Fourier theory. If I see one a little more accessible to people without a background in signal processing theory, I will post it.

So that deals with the effect of the sampling rate on fidelity, I hope. The Nyquist theorem says that you have to filter what you digitize and what you reproduce. Real-world filters aren’t brick walls; they roll off gradually, so they produce some tiny amount of distortion, arguably audible, and more modern techniques like oversampling reduce this further still. In any case, the distortion is less than with high-end analog reproduction equipment.
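
To put a rough number on “not a brick wall”, here is a toy scipy calculation (a digital FIR stand-in I made up purely for intuition, not a model of the analog filters in a real player) of how long a linear-phase filter has to be to pass the audio band and reach roughly the 16-bit noise floor by Nyquist:

[code]
from scipy.signal import kaiserord

fs = 44100.0
nyquist = fs / 2
stop_atten_db = 96                                 # roughly the 16-bit noise floor

for passband_edge in (20000.0, 15000.0):
    width = (nyquist - passband_edge) / nyquist    # normalized transition width
    numtaps, beta = kaiserord(stop_atten_db, width)
    print(f"pass up to {passband_edge / 1000:.0f} kHz: about {numtaps} taps needed")
[/code]

Narrowing the transition band from about 7 kHz down to about 2 kHz more than triples the filter length; that is the sense in which sharper filtering is expensive, and why oversampling (which leaves room for a gentle analog filter) is attractive.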

But the sample resolution is yet another effect, as Chronos’s example highlights. Fortunately, there’s an easy way to look at this. Imagine that instead of sampling with 16 bits of resolution, you sample exactly, then later add random noise at a level of +/- 1/2 of a bit. I hope you can see that this is exactly the same thing as sampling at a fixed resolution. This extra noise comes out to be around -76 dB, a level that’s way below the noise level of any analog equipment I know of.
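
If it helps, here is a little numpy sketch (my own toy illustration, with a made-up test signal) showing that 16-bit rounding and “exact samples plus +/- 1/2 LSB of random noise” leave errors of essentially the same size:

[code]
import numpy as np

rng = np.random.default_rng(0)
fs = 44100.0
t = np.arange(44100) / fs
# a "busy" test signal, full scale = +/- 1.0
x = 0.4 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1187 * t + 1.0)

lsb = 1.0 / 32768.0                                 # one 16-bit step
quantized = np.round(x / lsb) * lsb                 # what actually gets stored
noise_model = x + rng.uniform(-lsb / 2, lsb / 2, size=x.shape)

print("rms error from quantizing:     ", np.sqrt(np.mean((quantized - x) ** 2)))
print("rms error from the noise model:", np.sqrt(np.mean((noise_model - x) ** 2)))
print("theory, lsb / sqrt(12):        ", lsb / np.sqrt(12))
[/code]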

I do not know this.

(bolding mine) It’s worse than noise: it can show up as frequencies that weren’t originally present, and be much more noticeable than plain noise would be. To avoid this, rather than simply truncating or rounding (from, say, a 24-bit source), the signal is dithered. That’s where a low level of noise is added to the signal before rounding, as CurtC mentioned. The simplest dithering would just be to add a random value between +/- 1/2 of the 16th bit before truncating. More complicated schemes use colored noise to make the noise level lower where your ear is most sensitive.
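
A bare-bones version of the simplest scheme described above might look like this (illustrative only; real mastering dither is fancier). With a tone barely more than half an LSB tall, plain rounding dumps energy onto harmonics that were never in the signal, while the dithered version keeps the tone and just adds a flat hiss:

[code]
import numpy as np

rng = np.random.default_rng(1)

def to_16bit(x, dither):
    """Reduce a float signal (full scale +/- 1.0) to 16-bit sample values."""
    scaled = x * 32767.0
    if dither:
        scaled = scaled + rng.uniform(-0.5, 0.5, size=x.shape)   # +/- 1/2 LSB
    return np.round(scaled)

t = np.arange(44100) / 44100.0
quiet = 1.8e-5 * np.sin(2 * np.pi * 1000 * t)      # a very quiet 1 kHz tone

for name in ("plain", "dithered"):
    out = to_16bit(quiet, dither=(name == "dithered"))
    spectrum = np.abs(np.fft.rfft(out))
    # with a 1-second block the bins are 1 Hz wide, so index 3000 = 3 kHz
    print(name, "energy at the (spurious) 3rd harmonic:", spectrum[3000])
[/code]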

They have a nice discussion of this over at Digital Domain

[nitpick]CurtC, it’s -96 dB according to my link.[/nitpick]
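
For what it’s worth, that figure is just the ratio between full scale and one 16-bit step, about 6 dB per bit:

[code]
import math
print(20 * math.log10(2 ** 16))    # ~96.3 dB between full scale and one 16-bit step
[/code]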

Well, the problem with the old vinyl vs. CD saw is that the vinyl-fetishists ignore the fact that by all measured means, CD is closer to the original recorded sound.

This is generally true, but it says nothing about the quality (i.e., fidelity) of CD sound.

No, analog certainly does not have an ‘essentially infinite bit rate’.

The use of analog distortion as a musical instrument is quite separate from the use of analog as a faithful reproduction medium.

I suppose you don’t have any Telarc discs? From their website:

“Digital technology immediately broadened the dynamic palate of sound recording, and was a perfect marriage for our minimal miking approach. In particular, it allowed us to put the previously ‘missing’ low frequencies of the sound spectrum back into the sonic picture. The major labels had produced recordings for years that had attenuated low frequencies, due both to their perception of consumers’ tastes and to the technical limitations of the disc mastering process. The digital recordings we made were a nightmare to master for LP’s, but we knew it was the only way to create the realism of live performance that had just become technically possible.”

They also speak of their current state of the art in digital recording on the same page.

I also have some Chesky CDs that tout the digital cutting edge. Many classical CDs are ‘DDD’, too.

I think that ‘fully digital’ is no longer touted because it’s no longer at all unusual. Vast numbers of recordings are recorded and mastered digitally, though some people still like the euphonic distortions of analog. That’s fine, but euphonic distortions are not accurate reproduction.

Whether CD sound is perfect is not really the issue. What is at issue is whether the same sound encoded on CD and on a medium with a higher bit rate / bit depth would sound perceptibly different. This is currently not known, but it’s likely that the music would not be perceptibly different, all else being equal.

Yes, you are right in some regards. “Noise” is only a first approximation that people use, knowing full well that it is not necessarily accurate. I was only trying to point out the assumption behind sampling theory that Chronos’ example was highlighting.

I wouldn’t assume that it is necessarily always worse than noise, though. Quantization is (generally) a completely deterministic process. The problem obviously comes because the quantization function is not invertible. Consequently, there are things you can do on the reconstruction side that are more powerful than standard noise reduction techniques, because you can exploit your knowledge of the quantization function. I believe such techniques go by the name “consistent reconstruction”.

I’d be interested in reading about that. I tried Googling “consistent reconstruction” with various incantations, but nothing that came up seemed relevant. Anything you can point me to online?

Sure. I’m sorry if that wasn’t the right search term…sampling isn’t exactly my area. Another possibility might be “consistent estimate”?

The paper I was thinking about is called “Quantized Overcomplete Expansions in R^N: Analysis, Synthesis, and Algorithms”, by Goyal, Vetterli and Thao (IEEE Trans. on Info. Theory, 44:1, Jan. 1998, pp. 16–31). I am hesitant to post a link here because of copyright considerations. If you have access to IEEE publications it will be easy to find. I also notice that if you Google the first few (3) words of the title, you won’t be disappointed :).

The result is very abstract (it is in terms of general expansion systems), but sampling is just a special case of this. The redundancy inherent in the expansion (i.e., any factor of oversampling above Nyquist) can reduce some of the noise/quantization. This is basically why oversampling can be used instead of increasing the bit depth (which can be more expensive) to get better representations. They then go on to show that by using “consistent estimates” (i.e., taking into account the deterministic nature of quantization) you can reduce the effects even more. I’ve only read this paper once, but that is my understanding. I am more adept with the general expansion setting than I am with sampling specifically, so let me know if I can be any help translating between the two.
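
Here is a crude numpy illustration of just the simpler O(1/R) effect (quantize with a little dither so the error behaves like broadband noise, then throw away the error that falls outside the signal band), not the consistent-reconstruction refinement the paper is actually about. All of the numbers are made up for illustration:

[code]
import numpy as np

rng = np.random.default_rng(2)

def inband_error_rms(oversample, n=1 << 16, q=1 / 128.0):
    """Quantize a test signal band-limited to 0..1 (arbitrary units), sampled
    at `oversample` times the Nyquist rate, and measure only the part of the
    error that falls back inside the signal band."""
    fs = 2.0 * oversample
    freqs = np.fft.rfftfreq(n, d=1 / fs)

    # band-limited test signal: low-passed white noise, scaled to +/- 0.5
    spec = np.fft.rfft(rng.standard_normal(n))
    spec[freqs > 1.0] = 0
    x = np.fft.irfft(spec, n)
    x = 0.5 * x / np.max(np.abs(x))

    # coarse quantizer with a little triangular dither so the error is noise-like
    dither = q * (rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n))
    err = np.round((x + dither) / q) * q - x

    err_spec = np.fft.rfft(err)
    err_spec[freqs > 1.0] = 0                      # discard the out-of-band error
    return np.sqrt(np.mean(np.fft.irfft(err_spec, n) ** 2))

for r in (1, 4, 16, 64):
    print(f"oversampling x{r:2d}: in-band quantization error rms = {inband_error_rms(r):.2e}")
[/code]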

Do you work in signal processing somewhere Zen? I was also going to ask CurtC where in TX he/she is at?

You may also want to look at other papers by Thao and Vetterli that seem to deal with this issue (though I haven’t read them). Things like:

-“Reduction of the MSE in R-times oversampled A/D conversion from O(1/R) to O(1/R^2)” (IEEE Trans. Sig. Proc., 42, Jan. 1994, pp. 200-203)

-“Deterministic analysis of oversampled A/D conversion and decoding improvement based on consistent estimates” (IEEE Trans. Sig. Proc., 42, Mar. 1994, pp. 519-531)

Zen, I also just stumbled on another paper in my notes that you may want to look at. It is “Resilience Properties of Redundant Expansions Under Additive Noise and Quantization”, by Cvetkovic (IEEE Trans. on Info. Theory, 49:3, March 2003, pp. 644-656). In it they specifically talk about consistent reconstruction under quantization. Again, they work in the generality of redundant expansions, but sampling is just a special case of this.

I’m not the poster you’re responding to, but of course you can’t store unlimited data on a CD. But you can’t do so on an analog tape either. If you overlay lots of sine waves of even the same frequency and amplitude, but with different phase shifts, the result is a signal (if you visualize it in the time domain) that consists of very, very high frequencies in the frequency domain; the closer the zero-crossings of your sine waves are, the higher the contributing frequencies become, with no limit.
That means that a square wave can never be recorded, nor reproduced, with real-life equipment.
**Squink** said that 20 kHz sawtooth waves “surely exist” in the analog world. No, they don’t. For this to be possible, analog equipment would have to be able to record and reproduce infinite amounts of data. Look at the frequency spectrum of a sawtooth wave (or a square wave).
**Squink** also said that “real life analog audio signals are NOT band-limited”. This is absolutely untrue. Of course they are. Analog equipment IS a low-pass filter, really. That’s why you can’t back up your 40GB hard disk on a cassette tape, or a video tape, or press it on vinyl.
A signal that is limited to 20 kHz in the frequency domain can be perfectly reproduced with 40 kHz sampling, in theory.
An SACD or analog equipment could sound better, if you are able to hear it, because it allows for a “gentler” low-pass filter to be applied.

To clarify: sure, you could transfer your hard disk to those media, but what I meant to say is that you could not transfer it to ONE video tape or vinyl record, because despite being “analog” it is limited in bandwidth, as the equipment acts as a low-pass filter, limiting the amount of data you can store.

I used to think vinyl was better than CD. But then I heard a new CD version of a record I had and realized a couple of things: I was used to hearing the vinyl records at a lower-than-normal rpm, and some of the earlier CDs were crappily made and probably from secondary sources.

This seems to be somewhat of a philosophical question; after all, the world around us is analog.

Are we interested in knowing what would produce the most accurate reproduction of a live performance with the least additional noise? Or are we interested in hearing the truest reproduction of what someone in the studio wanted you to hear based upon their understanding at the time of the medium you’d be hearing? The whole basis of the modern studio is in creating something the musicians had in mind which you couldn’t hear live.

The Beatles (citing one popular example) were present for the mixing of most of their material for mono on vinyl. The things people have done with it since do change it. Whether or not a clean mono vinyl copy of “Sgt. Pepper’s…” through a tube amp sounds better than a stereo CD through your new wall-mounted micro speakers is as much a philosophical question as a tech one. (I happen to prefer the former, but your ears are your ears to fill as you like.)

CDs seem to lack a ‘warmth’ and vibrancy of sound present in vinyl. They’re way more convenient than LPs, but I have many recordings of songs in both formats and the CDs usually sound rather flat, even in a home setting through the same speakers.

I can’t help but think a lot of music would have been produced differently in the studio, on purpose, in the analog era if they had known the medium would be digital. I’ve also read that record producers in the 50s and 60s would listen to their stuff on car stereos, as they knew this was how a good deal of their target audience would first hear it.

I even trace part of my dislike for most modern music in the digital era (I’m only 33) to its awful, awful production, which is no doubt intended to sound best on digital formats.

Crikey z_z_z, I wasn’t claiming anything about ideal sawtooths. They’re just a figment of our imagination. The band we were talking about was 0-20 kHz. Have you ever listened to a violin, a set of drums, or even rattling keys through an ultrasound detector? They ALL produce sounds above 20 kHz. In that sense, the one that matters when dealing with CD sound reproduction, they’re not bandwidth-limited.
Theoretically, you can perfectly reconstruct a sine wave from, what, 4 data points? As Chronos pointed out, real-world CD recording and playback do not reach that goal. The difference is called distortion. That term relates directly to the question asked in the OP, “Do Compact Disks have the capability to reproduce sound perfectly?” The answer is no.
Now theoretically, you could take every data point in a sound file and fit it to a Legendre polynomial or mash it with Fourier transforms and come up with a pretty good fit to the original analog waveform, but 1) real CD players don’t do that, and 2) it still wouldn’t be a “perfect” fit to the analog wave. As crozell referenced, there are several fancy ways to theoretically improve CD output, but AFAIK, none of them have made it into commercial players.
That being the case, I’m perfectly content with CurtC’s take on the CD fidelity: