So how do CDs work, exactly?

Gotcha, Crafter_Man.

However, while my calculation was off, so were Bad Hat’s and yours (we’re all wrong). 2 to the 16[sup]th[/sup] power is very different from 2 * 16.

44,100 * 16 * 2 = 1,411,200 bits

44,100 * 2[sup]16[/sup] = 2,890,137,600 bits
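For anyone following along at home, the two calculations are easy to check in a few lines of Python (my own sketch, not from the thread):

```python
# Bits per second of CD audio: sample rate x bit depth x channels
bits_per_second = 44_100 * 16 * 2
print(bits_per_second)        # 1411200 bits per second

# The mistaken version raises 2 to the 16th instead of multiplying by 16
mistaken = 44_100 * 2 ** 16
print(mistaken)               # 2890137600, a very different number
```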

Thanks for the explanation though! Good stuff…

Let me try to clear up a few things here. First, Mr. 2001 wrote:

The 150 kbps number should be bytes, not bits. The number of bits on a CD is indeed 44100 x 16 x 2 = 1,411,200 bits per second. Actually, there are more raw bits on the CD, since there is redundant information in the form of an Error Checking and Correcting scheme. I’ve read that a CD can have 2400 consecutive bits totally obliterated, and the ECC algorithm can recreate every one of them (in a single pass, too).
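The bits-vs-bytes mixup is easy to untangle numerically. A Python sketch (my own; it assumes the familiar “150” figure refers to the 1x CD-ROM user-data rate of 75 sectors per second times 2048 bytes per Mode 1 sector, which is distinct from the raw audio rate):

```python
audio_bits_per_sec = 44_100 * 16 * 2           # raw stereo audio stream
audio_bytes_per_sec = audio_bits_per_sec // 8
print(audio_bytes_per_sec)                     # 176400 bytes/s of audio data

# The familiar "150" figure matches the 1x CD-ROM user-data rate:
# 75 sectors per second x 2048 user bytes per Mode 1 sector
cdrom_bytes_per_sec = 75 * 2048
print(cdrom_bytes_per_sec // 1024)             # 150 KiB/s
```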

Then Crafter_Man wrote:

Not true. The 0x00 and 0xFF (I didn’t want to type all those ones and zeroes) represent the rails of the converter, the limits it can’t go beyond. You will get complete silence by repeating any single value, because a constant value has no sound. To get the max SPL, you would need to change the numbers over time. The most power would come from a square wave (alternating between 0x00 and 0xFF) several times per second.
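The constant-value-vs-square-wave point can be illustrated with a quick RMS calculation (a Python sketch of my own; the helper name `rms_after_dc_removal` is made up):

```python
import math

def rms_after_dc_removal(samples):
    """RMS with the DC offset removed (a speaker can't reproduce DC)."""
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

constant = [0xFF] * 1000             # repeating one value: pinned at a rail
square = [0x00, 0xFF] * 500          # rail-to-rail square wave

print(rms_after_dc_removal(constant))   # 0.0   -> silence
print(rms_after_dc_removal(square))     # 127.5 -> maximum AC swing
```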

Also, Crafter_Man wrote that with a data stream oversampled at 2x, you could move your filter’s cutoff out to 40 kHz and not have artifacts. Actually, with linearly interpolated data, there would still be some artifacts left below that, because a straight line between the sampled points still has harmonics.
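A small numerical experiment shows the leftover image that linear interpolation fails to remove. This is my own Python sketch, assuming a 64-sample, 5-cycle sine that is 2x upsampled by circular linear interpolation and then examined with a naive DFT:

```python
import cmath
import math

N = 64                                                     # original length
x = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]  # 5-cycle sine

# 2x upsample by linear interpolation (wrapping at the end keeps the DFT exact)
y = []
for n in range(N):
    y.append(x[n])
    y.append((x[n] + x[(n + 1) % N]) / 2)

def dft_mag(sig, k):
    """Magnitude of one naive DFT bin."""
    M = len(sig)
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / M)
                   for n, s in enumerate(sig)))

signal = dft_mag(y, 5)     # the intended tone
image = dft_mag(y, 59)     # image left over from the original sample rate
print(signal, image)       # the image is attenuated, but it is not zero
```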

The Ryan wrote:

That’s true, but the question I was referring to said something about not being able to accurately reproduce sounds above 8800 Hz, and that concerns just the time-sampling part. There is a good answer to the digital approximation argument, too. A wave sampled at sixteen bits has an error on each sample of between -1/2 and +1/2 of the least significant bit. This is the same as taking an ideal signal and superimposing (adding) noise at that level. With a 16-bit converter, that added noise is around -95 dB. Analog systems also add noise, but at higher levels. The digital approximation argument doesn’t hold water for CDs, because that approximation is more accurate than the error that plain old noise causes in analog systems.
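The ±1/2-LSB argument can be checked numerically. Here is a Python sketch of my own that quantizes a sine to 16 bits and measures the signal-to-noise ratio; the standard formula 6.02 x 16 + 1.76 ≈ 98 dB is the expected ballpark:

```python
import math

BITS = 16
full_scale = 2 ** (BITS - 1) - 1          # 32767 for a 16-bit converter

# Quantize a 440 Hz sine sampled at 44,100 Hz and accumulate error power
N = 10_000
err_sq = sig_sq = 0.0
for n in range(N):
    s = math.sin(2 * math.pi * 440 * n / 44_100)   # ideal signal in [-1, 1]
    q = round(s * full_scale) / full_scale         # 16-bit quantized copy
    err_sq += (s - q) ** 2
    sig_sq += s ** 2

snr_db = 10 * math.log10(sig_sq / err_sq)
print(round(snr_db, 1))    # roughly 98 dB: the noise floor is around -95 dB
```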

y’all got me… damn fuzzy math…
of course the actual read rate would be 44,100 x 16 bits x 2 stereo channels…

::::hanging head in shame::::

You’re correct. I should have said that 1111111111111111 represents one of the “extremes” or something. Which brings up a few questions: Since an audio waveform has a zero mean, would the input to the ADC be bipolar? And if so, would the digital words actually have a range of -32768 to +32767?

Correct again, though I was trying to keep the explanation simple. Thanks!

Generally, yes it would be bipolar, at least conceptually. In real life, it is often AC-coupled, then biased up to the midpoint of the A/D input range, and fed into a unipolar A/D (especially with battery-powered equipment without split power supplies). Then, after it’s sampled, the samples are converted from 0-65536 straight binary to -32768 to +32767 two’s complement for further processing.
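The straight-binary-to-two’s-complement step Arjuna34 describes amounts to subtracting the midpoint, or equivalently flipping the top bit. A Python sketch (the helper name `offset_to_twos` is mine):

```python
def offset_to_twos(u):
    """Convert a 0..65535 offset-binary sample to -32768..32767."""
    return u - 32768            # subtract the midpoint of the A/D range

print(offset_to_twos(0))        # -32768  (bottom rail)
print(offset_to_twos(32768))    # 0       (mid-scale = zero signal)
print(offset_to_twos(65535))    # 32767   (top rail)

# The "flip the MSB" view yields the same 16-bit pattern:
for u in (0, 1, 32768, 65535):
    assert (u ^ 0x8000) == offset_to_twos(u) & 0xFFFF
```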

Arjuna34

Oops, I meant 0-6553[b]5[/b] straight binary.

Arjuna34

Argh!!

I often see CD players labeled “1-bit DAC”… what does that refer to? And is that better or worse than, say, a 2-bit DAC?

Nominally, you’d need a 16-bit DAC (digital-to-analog converter) to convert the digital samples back into analog voltage signals, since that’s how the sound data is represented. However, a 16-bit DAC is expensive … A rather arcane DSP algorithm lets you use a 1-bit DAC instead, with a much higher sampling rate. In the digital domain, the 16-bit, 44,100 Hz data is upsampled to something much faster, as much as 64 times faster. Special filters are applied to shape the quantization noise and add dithering. Then the output is fed to a 1-bit DAC, which cranks out samples at the faster sampling rate (each sample is one of two voltage levels, since it’s a 1-bit DAC, and 2^1 = 2). This high-speed signal is then smoothed by analog filters, which are designed together with the digital filters applied earlier. The resulting analog signal is equivalent to the output of a 16-bit DAC. In general, you can trade speed for resolution, i.e. you could make a 24-bit DAC with an even faster 1-bit DAC. An easy way to think about it is that many 1-bit DAC outputs are averaged together to make an N-bit DAC; the analog filters do the “averaging.”
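The noise-shaping loop behind a 1-bit DAC is essentially a delta-sigma modulator. Here is a toy first-order version in Python (my own sketch, not any particular chip’s algorithm) showing how a stream of +1/-1 values averages out to the input value:

```python
def delta_sigma(samples):
    """First-order delta-sigma modulator: inputs in [-1, 1] become a
    +1/-1 bitstream whose local average tracks the input signal."""
    integ = 0.0
    bits = []
    for s in samples:
        b = 1.0 if integ >= 0 else -1.0   # the 1-bit "DAC" decision
        integ += s - b                    # integrate the feedback error
        bits.append(b)
    return bits

# A constant quarter-scale input: the bitstream's average recovers it,
# which is what the analog "averaging" filters do in a real player.
stream = delta_sigma([0.25] * 4096)
avg = sum(stream) / len(stream)
print(avg)
```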

This can also be done in the reverse direction, with a 1-bit A/D. In fact, it’s more common on an A/D, because A/Ds are generally more expensive than DACs. Many audio codecs have all the circuitry and logic for this built into the chip, so the engineer using the chip doesn’t even have to think about it.

As to why it’s advertised on a CD player: it’s all marketing hype. An engineer can make a good (or crappy) CD player with either a 1-bit DAC or a 16-bit DAC; it’s just a matter of what’s cheapest.

For a good tutorial on audio signal processing, try this DSP app note.

Arjuna34