Since you raised the issue, I’ll explain my reasoning for saying “half” is an overstatement, not an understatement. Please excuse me if this is too simplistic for you. Since this is a general interest board, I figure it’s best to explain on the level of the Teeming Millions, whose expertise may lie elsewhere.
In any analog modulation encoding a digital signal, from a simple square wave on up, noise and errors are stochastic, probabilistic phenomena, described as an average over time. (When we model the effects of noise, we make certain assumptions [with varying degrees of justification] about the statistical properties of the effective interfering signal, but noise is -er- noisy, and variable in the short term.)
One error per million would probably be imperceptible in audio: even without error correction, it would create, at worst, an isolated ultrasonic click every 3 seconds or so. A dog whistle, or the bats that fly, unheard, in the summer night of every temperate or tropical clime, are like foghorns or stadium cheers by comparison.
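Here’s a back-of-the-envelope sketch (my numbers, assuming standard Red Book CD parameters). The interval depends on whether “one per million” counts bits or samples, but it lands somewhere between under a second and over ten seconds either way: isolated ticks, not noise.

```python
# Back-of-the-envelope sketch (mine) of how often a one-in-a-million
# error would land, assuming standard Red Book CD audio parameters.
SAMPLE_RATE = 44_100              # samples per second, per channel
CHANNELS = 2
BITS_PER_SAMPLE = 16

bit_rate = SAMPLE_RATE * CHANNELS * BITS_PER_SAMPLE    # ~1.41 Mbit/s
sample_rate = SAMPLE_RATE * CHANNELS                   # 88,200 samples/s

for label, rate in (("per bit", bit_rate), ("per sample", sample_rate)):
    interval = 1_000_000 / rate        # seconds between 1-ppm errors
    print(f"one per million {label}: a click every {interval:.1f} s")
```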
The same error rate would quickly crash a computer or trash a hard drive, because the bits in a program or data file aren’t independent. Audio bits aren’t independent either, but it’s hard to select a threshold where the effects become perceptible, especially when the errors are likely to be systematic (e.g. occurring more often in certain bit combinations) due to physical effects, and therefore produce an overall nonrandom bias.
In fact, digital audio EC schemes begin to be overwhelmed at surprisingly low error rates: most errors are fixed, but some are not, and some new errors are created by incorrect on-the-fly correction. The algorithms are written to err on the side of “unobtrusiveness”, but that in itself is a systematic bias.
EC isn’t magic. It can’t restore what is gone. It can only guess the correct bit, or fill in something unobjectionable. Error correction capabilities aren’t hard thresholds, either. If a bitstream has 10 ppm errors, and an EC scheme is rated for 100 ppm correction, some errors will still get through, if only because sometimes two of those 10 ppm errors will occur within, say, 1 kbit of each other, for a transient error rate of 2000 ppm.
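A toy simulation (mine, with a made-up stream length) shows that such clustering isn’t even rare when the errors are fully independent:

```python
# Illustrative sketch (my own, not any real EC spec): scatter random
# bit errors at 10 ppm and count how often two land within 1 kbit,
# i.e. spots where the transient error rate is 2000 ppm or worse.
import random

N_BITS = 100_000_000      # ~100 Mbit of stream (arbitrary)
BER = 10 / 1_000_000      # 10 ppm
WINDOW = 1_000            # 1 kbit

random.seed(0)
n_errors = int(N_BITS * BER)          # expected error count at this rate
positions = sorted(random.randrange(N_BITS) for _ in range(n_errors))

crowded = sum(1 for a, b in zip(positions, positions[1:]) if b - a < WINDOW)
print(f"{n_errors} errors; {crowded} pairs closer than {WINDOW} bits")
```

With these numbers, roughly one error in a hundred has a neighbor within the same kilobit. Rare, but not zero.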
Since perception can’t be objectively quantified, I looked at the threshold of objective digital measurement, which is ridiculously over-sensitive in the time domain. [Who cares about X bit errors per 70-minute CD?] I apologized for my overstatement because it’s possible for noise that is under 1% of the signal power to flip bits, since noise often comes in transient spikes. Those errors should usually be inaudible even without EC and processing, but they’re there.
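For scale (my arithmetic, purely illustrative): noise averaging 1% of signal power sits 20 dB below the signal, which sounds comfortably low until you remember that the same average power, squeezed into brief spikes, can momentarily exceed the signal itself.

```python
# Scale check (my numbers, illustrative): noise averaging 1% of signal
# power is -20 dB, but concentrated into spikes it can top the signal.
import math

avg_ratio = 0.01                    # average noise power / signal power
print(f"average: {10 * math.log10(avg_ratio):.0f} dB")       # -20 dB

for duty in (0.01, 0.001):          # fraction of time the spikes occupy
    peak_ratio = avg_ratio / duty   # same energy, squeezed into spikes
    print(f"duty {duty:g}: peak {10 * math.log10(peak_ratio):+.0f} dB")
# duty 0.01  -> +0 dB  (as loud as the signal)
# duty 0.001 -> +10 dB (louder than the signal: bits flip)
```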
Raise an incredibly good post-EC per-bit success rate to the 650-millionth power (once for each byte of a full CD), and the probability of a flawless disc melts away; put another way, multiply even a vanishingly small error rate by that many bytes and you’ll expect several errors. That’s why mission-critical servers use ECC RAM even though normal RAM may only generate one error in a million million million. Are these few errors meaningful in audio? No, but it’s all I can objectively calculate.
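To put numbers on it (a sketch with hypothetical post-EC rates, not measured figures):

```python
# Sketch (assumed rates, purely illustrative): expected residual errors
# over one CD's worth of data, and the chance the whole disc is flawless.
N_BITS = 650 * 8 * 10**6          # ~650 MB of data, in bits

for ber in (1e-9, 1e-12):         # hypothetical post-EC bit error rates
    expected = N_BITS * ber
    p_flawless = (1 - ber) ** N_BITS
    print(f"BER {ber:g}: expect {expected:.3f} errors; "
          f"P(zero errors) = {p_flawless:.3f}")
```

Even at one residual error per trillion bits a flawless disc is likely but not guaranteed; at one per billion, expect a handful of errors on every disc.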
A Modest Rant
Here’s a paradox for you: a lot of very knowledgeable people, even professional audio engineers, present very technically compelling arguments for the significance of jitter and other mechanical and analog effects in CD reading. Yet if you rip a CD ten times on the cheapest desktop CD drive, and then do a binary compare of the resulting files, you’ll find remarkably few bits of difference.
This easy test seems universally ignored. I did it almost 10 years ago on a $20 drive in a rattletrap benchtop system built of spare parts, and my first pair of files were identical! Was it my raw manly MacGyver-with-better-hair tech wizardry, or had I come into possession of cleverly disguised alien technology?
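If you want to try it yourself, the comparison step is trivial. A sketch (the rip_*.wav filenames are hypothetical; use whatever your ripper emits):

```python
# Sketch: hash successive rips of the same disc and compare.
# The rip_*.wav names are placeholders for your ripper's output files.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

digests = {p.name: sha256(p) for p in sorted(Path(".").glob("rip_*.wav"))}
for name, d in digests.items():
    print(name, d)
print("bit-identical" if len(set(digests.values())) == 1 else "rips differ")
```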
CD playback is far more accurate than many audiophiles like to admit. It’s no fun. If I record a CD-full of data, I can reasonably expect a bit-perfect playback, time and again, until the CD-R begins to age (admittedly CD-ROM is a different disc format). When I start getting bit errors, the CD isn’t far from dead.
When I hear about mechanical jitter (and the like) perceptibly affecting CD-audio playback, I have to wonder what the heck these guys are playing their CDs on. A Flintstone foot-driven player with “shiny rock” optics? Cheap CD/DVD players actually use commodity-market CD/DVD computer drives for playback.
I’m not the first to raise this argument, and audiophiles have plenty of rejoinders involving the differences between CD-audio and CD-ROM, obscuring the fact that when you rip a CD, it is CD-audio, not CD-ROM. I’ve definitely seen bit errors between successive rips, but if those few bits affect your enjoyment, real life must be intolerable. There must be an ADA disability category for that.
The Unbearable Sharpness of Hearing
Science isn’t ready to draw firm thresholds in human audio perception yet, and audiophile perception… well, money can’t buy love or happiness, but apparently you can pick up Steve Austin’s bionic ears for $1395 (plus accessories) at your neighborhood audio shop.
In a mysterious quirk of statistics, up to 3/4 of owners of expensive audio gear can detect differences with the audiological sensitivity of the top 0.1% of humans, even if years of rock concerts have left them unable to respond to their own names. Just ask them, or read the reports and opinions in prosumer mags that quibble over effects at levels that even recording studios don’t worry about.
[What are they listening to, if not studio productions? Are artists being kidnapped to secret government labs to record über-CDs for the poor unfortunates whose sensitivities force them to fork over five figures to listen to 1970s rock that they once loved on AM radios and scratchy 45s? (Five figures would admittedly be cheap for a machine that could make ’90s Mmm-bop tolerable.) Is it truly possible that any live jazz or classical performance meets their rigorous standards? It’s hard for me to understand how classical music demands a reproduction quality that exceeds what the original artists and audiences had available. That’s not fidelity, that’s alteration, as surely as anything a rapper or remixer does.]