Yep, that’s true - it’s possible for some devices to have “holes” in their bandwidth. CD players (to a pretty good approximation) don’t, however - they have a far flatter bandwidth response than records.
The electronic parts of the CD player (the D/A’s, etc) might be limited to 15 Hz, but the digital audio itself on the CD is good down to DC.
Also, record players have a lot of issues to contend with below about 60-80 Hz or so. It doesn’t make it impossible for them to reproduce lower freqs, but it makes it trickier.
In the real world, I think CD’s win and win big on both the high and low ends of the spectrum. In optimal lab conditions, a record player probably wins at the high end, being able to reproduce something above 22,000 Hz, while a CD has a hard limit there.
If I understand the intent of your graphic here, you’re talking about the fact that we have discrete samples at regular time intervals, right? It turns out that sampling (by itself) does not result in the loss of information for band-limited signals, as long as you sample at more than 2X the highest frequency present in the signal. Let’s first talk about a theoretically perfect world, and then the real world.
In our theoretically perfect world, we sample with perfect precision. I.e., we are not limited to the 16 bits that our CD player is, we have perfect filters, and perfect D/A’s. In this case, if we band-limit our input signal to less than 22 KHz, we can perfectly recreate it even though we’ve sampled it at discrete intervals and “missed” some of the peaks of the waveform where they didn’t line up with our sampling points. One way to think about this is that we’re going to fit a curve to our samples, not just blindly draw lines between the sample points - because we know the limiting frequency of the original signal, we know how to “fill in” the signal between sample points. We’re not going to just “clip off” the tops of curves and so forth as the dotted lines in your graph imply, because we know that would generate higher frequencies than were in the original signal. It’s possible to perfectly reproduce, say, an 11561.34663 Hz sine wave, even with discrete sampling. One can prove this mathematically, which is what the Nyquist theorem is all about - I might even be able to do it if you give me a while to dig out some old lecture notes.
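In lieu of digging out those lecture notes, here’s a small numerical sketch (my own illustration, not anything from your graphic - the frequencies and sample counts are arbitrary) of the “fit a curve to the samples” idea. It uses the Whittaker-Shannon interpolation formula, which is the curve the Nyquist theorem says is the right one: a sum of sinc functions centered at each sample point.

```python
import numpy as np

# Whittaker-Shannon reconstruction: a band-limited signal can be rebuilt
# from its samples via  x(t) = sum_n x[n] * sinc(fs*t - n).
# (Exactly, for an infinite record; here a long finite one gets very close.)
fs = 10.0           # sampling rate (arbitrary units)
f = 3.7             # tone frequency, below the Nyquist limit of fs/2 = 5.0
n = np.arange(2000)
samples = np.sin(2 * np.pi * f * n / fs)

def reconstruct(t):
    """Estimate the underlying signal at an arbitrary time t."""
    # np.sinc is the normalized sinc, sin(pi*x)/(pi*x)
    return np.sum(samples * np.sinc(fs * t - n))

t = 100.05  # a time exactly halfway between two sample instants, mid-record
print(reconstruct(t), np.sin(2 * np.pi * f * t))  # agree very closely
```

The point being: even at an instant that falls between samples (including a “missed” peak), the reconstruction lands on the original waveform, not on a line drawn between the sample points.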
Now let’s look at the real world. In real life what probably happens is that the D/A does generate something close to your dotted lines, missing the peaks as you show, and we get a signal with high freq components. But then we run this signal through a “brick wall” low pass filter and voilà, as if by magic, our peaks come back in the right places, because we’ve removed the high freqs that were clipping them off like that. We don’t have perfect brick wall filters, but overall it works pretty well and what we get out the end much more closely resembles the orange lines than the dotted ones in your graph.
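Here’s a numerical sketch of that step too (again my own toy numbers, and using an idealized FFT-domain brick wall rather than a real analog filter): build the “staircase” a simple D/A would put out, then chop off everything above the original Nyquist frequency, and see how close the result is to the true sine.

```python
import numpy as np

# "Staircase D/A output, then brick-wall low-pass" reconstruction sketch.
N, k, L = 64, 5, 8  # samples, cycles in the tone, fine-grid upsampling factor
x = np.sin(2 * np.pi * k * np.arange(N) / N)  # tone well below Nyquist (N/2)

# Zero-order hold: each sample held for L fine-grid steps -- the staircase.
stair = np.repeat(x, L)                        # length N*L = 512

# Idealized brick-wall low-pass: zero every FFT bin above the original
# Nyquist frequency, killing the high-freq "images" the staircase added.
S = np.fft.fft(stair)
S[N // 2 : -(N // 2) + 1] = 0                  # zero bins 32..480
smooth = np.fft.ifft(S).real

# Reference: the underlying sine on the fine grid.  The hold stage delays
# everything by (L-1)/2 fine-grid steps (and droops it ~1% at this freq).
n = np.arange(N * L)
ref = np.sin(2 * np.pi * k * (n - (L - 1) / 2) / (N * L))
print(np.max(np.abs(stair - ref)))    # staircase: misses the curve badly
print(np.max(np.abs(smooth - ref)))   # filtered: hugs the original sine
```

The filtered output tracks the original waveform to within about a percent, while the raw staircase is off by a couple tenths at the steep parts - which is the “peaks come back as if by magic” effect in numbers.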
There are other sources of error, though. Not only do we not have perfect low-pass filters, we don’t have perfectly linear D/A’s, and we quantize at 16 bits, thus incurring amplitude error. I.e., maybe the value we measured as a discrete amplitude of 15621 was really supposed to be 15621.49. Obviously the magnitude of this error as a percentage of the signal increases as the signal gets weaker.
One can then ask what percentage of the reconstructed signal is error. Leaving aside some techniques like dithering, the maximum ratio at 16 bits is about 98 dB, and of course it decreases (i.e., the error ratio increases) as the input signal becomes quieter. (E.g., an error of 0.5 out of, say, 28000 isn’t much, percentage-wise, but the same error of 0.5 out of 3 is rather a lot!)
Of course, records also suffer from error which limits their S/N ratio, but they fare far, far worse than CDs do. Just from memory, I recall that ratios of 50 to 60 dB were considered good for records. CD’s win big time when measuring this sort of error. If you reduced your CD down to 9 or 10 bits instead of 16, it would have a more record-like S/N.
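You can estimate those ratios numerically, too. This sketch (my numbers, and it only models the amplitude-rounding error - no dither, no converter nonlinearity) rounds a test tone to B bits and compares signal power to error power:

```python
import numpy as np

n = np.arange(200_000)
tone = np.sin(2 * np.pi * 0.1234567 * n)   # full-scale test tone

def snr_db(x, bits):
    """Signal-to-quantization-noise ratio for x rounded to `bits` bits."""
    scale = 2 ** (bits - 1) - 1            # e.g. 32767 for 16 bits
    err = np.round(x * scale) / scale - x  # amplitude rounding error
    return 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))

print(snr_db(tone, 16))           # ~98 dB: the CD figure
print(snr_db(tone, 10))           # ~62 dB: roughly "record-like"
print(snr_db(0.01 * tone, 16))    # a quiet signal loses ~40 dB of that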
So I guess the summary here is:

(1) The imperfection in the reproduced signal doesn’t result from the discrete sampling, as you imply. There are some gains to be had from oversampling, but at least in theory, and to a good approximation even in practice, increasing the sampling rate does not get you a more accurate reproduction of signals under 1/2 the sampling rate.

(2) Increasing the level of quantization does help, both in theory and practice, at least to a point. But even 16 bit quantization is very, very good, and is almost never a limiting factor.

(3) It’s quite possible for a record to sound much better than a CD, but in all likelihood this isn’t because the record is reproducing the signal more accurately. It’s much more likely because of some feature of the signal itself (perhaps the way it was processed beforehand), or perhaps because some types of errors induced by the record are pleasing to the ear (this can happen!). There are a zillion other potential factors that might make some particular record sound better than some particular CD.
Always glad to oblige
peas on earth