This is a fun one. The way the grooves are cut, in-phase L-R signals become transverse groove wiggles, which makes them easy to track, and also compatible with mono cartridges - which was important when stereo was new.
Perfectly out-of-phase L-R signals are cut with vertical motion, so out-of-phase bass can bump the stylus straight out of the groove. Bass in just one channel cuts with a 45-degree motion relative to the plane of the record, and can do the same. So bass is mixed to be close to a mono/centre signal for vinyl mastering.
In general this doesn’t end up being a bad thing. Humans have difficulty locating very low frequencies anyway, so mono bass isn’t that big a loss, and it makes managing low frequencies in a domestic setting much easier. But managing the transition from non-directional to directional takes some finesse.
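If it helps make the geometry concrete, here’s a minimal sketch of the 45/45 arithmetic in Python (the scaling and sign conventions are illustrative, not a cutting-lathe spec):

```python
import numpy as np

# 45/45 stereo groove: each wall carries one channel, cut at 45 degrees to the
# record surface. Lateral motion tracks L+R, vertical motion tracks L-R.
fs = 44100
t = np.arange(fs) / fs
bass = np.sin(2 * np.pi * 40 * t)           # 40 Hz test tone

def groove_motion(left, right):
    lateral = (left + right) / np.sqrt(2)   # mono content -> side-to-side wiggle
    vertical = (left - right) / np.sqrt(2)  # difference content -> up-and-down motion
    return lateral, vertical

# Centred (mono) bass: purely lateral, easy to track.
lat, vert = groove_motion(bass, bass)
print(np.max(np.abs(lat)), np.max(np.abs(vert)))    # ~1.41, ~0.0

# Fully out-of-phase bass: purely vertical - the stylus gets launched upwards.
lat, vert = groove_motion(bass, -bass)
print(np.max(np.abs(lat)), np.max(np.abs(vert)))    # ~0.0, ~1.41
```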
Indeed. Digital overs are a really interesting thing. It seems somewhat counter-intuitive to realise that even if the digital stream doesn’t appear to have reached saturation (ie FFFF) that the signal can still be clipped. And if it has reached FFFF things are really bad. Digital doesn’t have a red to go into.
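For anyone curious how an over can hide between samples, here’s a rough sketch (numpy/scipy; the 4x resample is a crude stand-in for the DAC’s reconstruction filter, and the numbers are illustrative):

```python
import numpy as np
from scipy.signal import resample

# A tone at fs/4 with a 45-degree phase offset: every stored sample lands at
# ~0.707 of the true peak, so the sample values can sit at 0 dBFS while the
# reconstructed waveform overshoots by ~3 dB.
fs, n = 44100, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)
x /= np.max(np.abs(x))                            # sample peaks now read exactly 0 dBFS

sample_peak = np.max(np.abs(x))
true_peak = np.max(np.abs(resample(x, 4 * n)))    # 4x oversampled estimate

print(f"sample peak: {20 * np.log10(sample_peak):+.1f} dBFS")   # +0.0 dBFS
print(f"true peak:   {20 * np.log10(true_peak):+.1f} dBFS")     # ~+3.0 dBFS - clips downstream
```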
Which ties in with the typical reproduction setup in both homes and theaters where there is a single subwoofer (or dense subwoofer array in a theater) someplace, but ideally on the room center axis to project the deep bass signal from the electronic realm into the acoustic realm.
IIRC - there is an inverse square law at work … with a 6 dB loss per doubled distance
which brings a 90 dB sound down to 60 dB at 32 m distance (so, a 30 dB loss)… which is also in line with real-life experiences
(if it were only 1 dB of loss per 30 meters, you could easily hear/understand people speaking 150 m away from you, and yodeling would never have been a thing … life in the city would be hell, though …)
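A quick back-of-the-envelope check of that figure (free-field point source, nothing else going on):

```python
import math

# Free-field spreading loss from a point source: 20*log10(d / d_ref).
def spreading_loss_db(distance_m, ref_m=1.0):
    return 20 * math.log10(distance_m / ref_m)

spl_at_1m = 90.0
for d in (1, 2, 4, 8, 16, 32):
    print(f"{d:>3} m: {spl_at_1m - spreading_loss_db(d):.0f} dB SPL")
# 90, 84, 78, 72, 66, 60 - i.e. 6 dB per doubling, 30 dB down at 32 m
```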
The inverse square law is valid for point sources, and just describes how the acoustic energy is being distributed over ever-larger spherical surfaces as it gets farther and farther from the source.
The issue that @md-2000 was referring to is the conversion of acoustic energy to heat as the sound waves travel through the air, a completely separate phenomenon. If you could create a planar acoustic wave (for which the inverse square law would not apply), you would still find that air is a lossy conductor of sound, and is more lossy for higher audio frequencies.
I don’t have the knowledge to evaluate that cite to Engineering Toolbox, but I find it interesting that the attenuation values are measured in decibels per kilometer.
And for frequencies at/below ~1 kHz, the attenuation is at most a handful of decibels per km. So for most of the frequencies that matter for speech, that’s the order of attenuation we’re dealing with.
IOW, this effect is not quite a rounding error on the inverse square law attenuation, but it’s close. Or so it seems to non-expert me.
That graph of attenuation is evil. The Y axis is a logarithmic scale of decibels - which are already logarithmic themselves.
At 1kHz, yeah not much attenuation. Anywhere between 2 and 6 dB over a kilometer. At 3kHz - the top end of basic intelligibility for voice, up to 12dB over a kilometer. So still nothing special. But at 10kHz, it is as bad as 120dB per kilometer. That gap from say 3 to 10kHz covers a lot of ground where musical timbre lives.
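To put the two effects side by side, here’s a rough sketch combining free-field spreading with the per-kilometre figures quoted above (the absorption values are just the ones read off that graph; the real numbers depend heavily on temperature and humidity):

```python
import math

# Assumed air-absorption figures, roughly as read off the graph discussed above.
absorption_db_per_km = {1000: 5.0, 3000: 12.0, 10000: 120.0}   # Hz -> dB/km

def total_loss_db(distance_m, freq_hz, ref_m=1.0):
    spreading = 20 * math.log10(distance_m / ref_m)            # inverse square law
    absorption = absorption_db_per_km[freq_hz] * distance_m / 1000.0
    return spreading + absorption

for d in (50, 200, 1000):
    losses = ", ".join(f"{f // 1000} kHz: {total_loss_db(d, f):.0f} dB"
                       for f in (1000, 3000, 10000))
    print(f"{d:>5} m -> {losses}")
# At 200 m the 10 kHz content is already ~23 dB further down than the 1 kHz
# content; at a kilometre the top octave is essentially gone.
```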
You will hear that even indoors in a concert hall. It is enough to take the edge off quite a few orchestral instruments. Perhaps less of an issue as we all get older, but not to be sneezed at. (I used to joke that this roll-off of the top end was why vinyl enthusiasts thought it was closer to the real music - they could only afford the cheap seats - so they never knew better.)
Outdoors in a large venue it is going to matter. Modern line array systems can be flown quite high and designed with tailored vertical coverage of an audience, which can afford the opportunity to balance the frequency response. (Really large venues bring with them a host of problems, with a very real - you can’t get there from here - set of competing requirements.)
At the other extreme, a few years ago I was at a new year celebration at a local beach. There was a large stage and live band. Even about 200 metres away the top end was ice-pick brutal. I have no idea what the FOH mixer had done to his hearing to imagine that was in any way good sound. Sadly all too common.
Why are we talking about distance attenuation in relation to distinguishing digital from analog? While that certainly affects our listening experience, that doesn’t seem to be the question, to me. If we’re moving on to listening experience, we should also take nodes, nulls and comb-filtering created by room reflection into account.
To me it looks like the original question was answered in the first few posts that discussed frequency response and dynamic range—the two things that matter when reproducing audio for human listening. FWIW, this recording engineer works at 24/48k, unless either doing something for DVD release, which requires 24/96kHz, or 24/192kHz for some direct digital releases.
Am I missing something? It wouldn’t be the first time…
I think we finally got here by a process of pointing out all the other confounding effects that stymie the OP’s stated goal of some magic “digital-vs-analog recording absolute playback sound difference” detector.
And eventually got all the way here, that there is no definitive sound of a recording. There’s a different sound perceived based on that recording at every point in any venue and associated playback system, much less in every venue and on every playback system.
Thanks for clearing that up for me - my Earl Grey hasn’t kicked in, yet. A 'scope would be the best way to test, but there’s still going to be resolution/quantization errors, due to the nature of the displays, among other things (cables, connectors, etc.). Same issues arise if using a goniometer to check signal phase.
And, yes, I’d absolutely agree with that, and it goes further down to every component of the system. For example, cable shielding coverage/type, speaker alignment, dirty/clean power, etc.
I guess my answer to the OP is, there aren’t any specs. Digital, by definition, will never be a precise 1:1 reproduction of an analog signal. But it is certainly accurate enough to eliminate any human-perceptible differences.
His goal was a magic meter to measure those human perceptible differences and thereby prove to audiophiles that a zero on his magic meter means their preference for analog is based on mere conceit, not facts.
Yeah, right. That’ll work. NOT!
But we had lots of fun talking about why not. And that’s really the point for most of us.
Actually, I was referring to the differential attenuation of different frequencies. If we compare listening to music live (which I assume is what the OP refers to) with listening to a recording - a concert heard live from 100 feet away or more, versus a recording made with microphones placed somewhere other than where you sat, played back in your rec room - then too much is different. Manipulating positional factors to attempt an accurate mimic of live music for one particular subject would appear to me no different in spirit than performing digital manipulation on the recording to achieve an apparently identical experience. I suppose the same argument could be made with analog components, i.e. tuning the microphone or speakers, amp, etc.
Is the OP’s goal to simply record and playback and do the comparison? Or are manipulations allowed?
I looked at “The Design of Active Crossovers” by Douglas Self, 2nd edition. There is a graph on p45. The exact attenuation can be calculated, but it is non-trivial mathematics.
It’s already been said, but the inverse square law has nothing to do with this, and anyway it certainly doesn’t hold in a reverberant environment.
I know you aren’t the OP, but I thought this was an interesting take, because the very act of recording, to any medium, manipulates the sound.
What you (used generically) hear on a recording is decidedly NOT what was happening in the room when it was made. As soon as it hits one transducer (microphone), it’s “colored” (sometimes for the better, sometimes worse) and that effect is multiplied by every piece of equipment it goes through, such as preamps, EQs, compressors, etc. In fact, there’s a huge market for vintage analog gear and microphones, specifically because of that effect. Recording engineers generally view that as a positive - the OP might not…
That is 100% what my point is. I wanted to take two recordings using two different technologies and compare them to see if the ways that each one of them manipulates the sound is distinguishable in the resulting analog playback signal.
In the abstract this is sort of true. However if we use a precise definition of the characteristics of the signalling channel the two can indeed be identical.
This again goes back to Shannon.
If we characterise our analog signal by the frequency response of the channel and the signal-to-noise ratio, we can define a digital (sampled and quantised) channel that is a precise reproduction.
This analysis does need us to agree that pure noise - additive independent white noise, whilst always present, does not contain information and can be considered as identical in each channel.
This result is the basis of modern information theory and underpins pretty much anything you see in communication systems.
Quantisation doesn’t cause any problems so long as the sampling and quantisation are performed correctly. Similarly, upon reproduction there is no problem. Same for frequency response. Indeed the two go hand in hand.
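A toy demonstration of that claim, in case it looks like hand-waving - quantise a dithered sine and check the noise floor lands where the textbook says it should (assumed: 16 bits, ±1 LSB TPDF dither, a -6 dBFS test tone):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n, bits = 48000, 1 << 16, 16
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 997 * t)           # -6 dBFS test tone

q = 1.0 / (1 << (bits - 1))                     # quantisation step for +/-1.0 full scale
dither = (rng.random(n) - rng.random(n)) * q    # +/-1 LSB TPDF dither
xq = np.round((x + dither) / q) * q             # sample-and-quantise

err = xq - x
snr_db = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
print(f"measured SNR: {snr_db:.1f} dB")          # ~87 dB (tone is -6 dBFS, dither costs a few dB)
print(f"ideal 16-bit:  {6.02 * bits + 1.76:.1f} dB (full-scale sine, no dither)")
# The error is benign, signal-independent noise - no information about the
# waveform itself is lost.
```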
Well, then the answer is no, you can’t really do that unless you rigidly define a bunch of other things, and not really even then unless you have a recording that is created to exploit the advantages of one or the other. If you start with a recording that was prepared properly for each medium and the original recording was originally mixed to be compatible with both mediums, they ought to be generally indistinguishable. Digital can reproduce a recording that was prepared for vinyl just fine as long as you master it to compensate for the idiosyncrasies of a digital medium. I mean, we can all hear the needle drop, but once you get into the actual recording the noise floor is generally irrelevant (except for moments of absolute silence).
Where that idea breaks down is when you get into recordings where the digital version does things that some analog formats can’t do. For example, my recording with two different (electronic, very bassy) kick drums hard panned to the left and right, hitting on different beats. That works fine on digital and tape, but isn’t really doable on a vinyl record with any level of amplitude on the bass frequencies for the reasons documented by @Francis_Vaughan earlier.
Also, as he explained, even that effect is going to depend on the method you’re using to deliver the recording to your ears. You can hear the stereo separation between the two kicks very clearly on my monitors that I use to mix, because they’re positioned so you can hear that sort of thing (and also everything that sucks about my mix). The same is even more true for headphones. In my car or my home stereo? Not so much. In those environments I can hear it moving back and forth some, but it’s nowhere as pronounced. If anything, only the high end of the attack on that waveform is discernibly moving between left and right.
So if you moved that recording to a vinyl record and heard it on headphones, I’d at least be able to tell it wasn’t my original digital mix even if you did everything properly. The low frequencies of the kicks would not pan, because vinyl really can’t do that well. If you moved the same pair of recordings to my home stereo where a single subwoofer handles the lowest frequencies and you mastered both for their target medium, I probably wouldn’t be able to discern which was which, and that’s a pretty extreme example where I did all of the recording and mixing.
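If you wanted to quantify that, a rough vinyl-compatibility check (a sketch, not a mastering tool - the 150 Hz cutoff and the filter choice are assumptions) is to low-pass the side (L-R) channel and compare it to the mid:

```python
import numpy as np
from scipy.signal import butter, sosfilt

# How much out-of-phase/hard-panned bass is in a stereo mix? Lots of side (L-R)
# energy below ~150 Hz means lots of vertical groove modulation on vinyl.
def low_bass_side_vs_mid_db(left, right, fs, cutoff_hz=150.0):
    mid, side = (left + right) / 2.0, (left - right) / 2.0
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    mid_lo, side_lo = sosfilt(sos, mid), sosfilt(sos, side)
    return 10 * np.log10((np.mean(side_lo**2) + 1e-20) / (np.mean(mid_lo**2) + 1e-20))

# Hard-panned alternating 60 Hz kicks, like the example above:
fs = 48000
t = np.arange(fs) / fs
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-8 * t)
left = np.concatenate([kick, 0 * kick])
right = np.concatenate([0 * kick, kick])
print(f"{low_bass_side_vs_mid_db(left, right, fs):+.1f} dB")  # ~0 dB: the cutting engineer will complain
print(f"{low_bass_side_vs_mid_db(kick, kick, fs):+.1f} dB")   # hugely negative: mono bass, no problem
```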
I don’t think that anyone has yet mentioned RIAA equalization (pre-emphasis) in vinyl records. Phono pre-amps (whether part of another component’s circuitry or as a separate component) apply frequency adjustment(s) to compensate for pre-emphasis applied to the recording when it is mastered for vinyl. It is not applied to tape and digital masters. There are several different pre-emphasis standards that have been used for this purpose, but the RIAA equalization “curve” is now pretty much the norm.
If you happen to have access to a high-gain straight amplifier, like some used for dynamic microphones, you can hear that the analog signal straight from a turntable cartridge sounds very different without the RIAA equalization applied during playback.
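For reference, the standard RIAA playback curve drops out of just three time constants (3180 µs, 318 µs, 75 µs); a quick sketch of the response, normalised to 1 kHz as is conventional:

```python
import math

T1, T2, T3 = 3180e-6, 318e-6, 75e-6      # RIAA time constants

def riaa_playback_db(f_hz):
    # Two poles (T1, T3) and one zero (T2) - the playback de-emphasis magnitude.
    w = 2 * math.pi * f_hz
    mag = math.hypot(1, w * T2) / (math.hypot(1, w * T1) * math.hypot(1, w * T3))
    return 20 * math.log10(mag)

ref = riaa_playback_db(1000.0)
for f in (20, 100, 1000, 10000, 20000):
    print(f"{f:>6} Hz: {riaa_playback_db(f) - ref:+.1f} dB")
# Roughly +19.3 dB at 20 Hz and -19.6 dB at 20 kHz relative to 1 kHz - which is
# why a cartridge straight into a flat high-gain preamp sounds thin and spitty.
```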
There are two RIAA curves. The new one, introduced in 1976 and known as the IEC RIAA curve, adds a low-frequency roll-off on playback with no corresponding change to the recording curve. Not everyone thinks this was a good idea.
Tape has always used frequency-shaping curves on record and replay - but for tape the problems are more complicated. Tape has a built-in drop-off in frequency response: what you record comes back with less high-frequency energy relative to low frequencies than went in. It also has problems with very low frequencies. But there isn’t a simple answer about how to fix this. If you compensate for the drop on replay, you will get lots of high-frequency noise as well as your high-frequency audio. Boost the highs on record and you will run out of headroom and saturate the tape and/or tape heads. Worse, different tapes have different high-frequency drop-offs. So eventually you need a mix of high-frequency boost on record, a bit more on replay, and some additional pre and post fiddling with frequency response to get the whole thing to leave you with a flat frequency response. In a professional setting they would simply choose a tape formulation and align the entire chain to get them the best result. This would also include setting bias levels. Studios had both technicians and test equipment there for this job.
With domestic cassette recorders there were two emphasis curves, 120 µs and 70 µs, and these were chosen depending upon the tape formulation - which would also set bias levels. Remember ferric, chrome and metal tapes?
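Those two numbers are time constants in the same sense as the RIAA ones; a simplified first-order sketch of what they imply (the full standard also includes a 3180 µs bass term, ignored here):

```python
import math

# 120 us = Type I (ferric), 70 us = Type II (chrome) and Type IV (metal).
def corner_hz(tau_s):
    return 1.0 / (2 * math.pi * tau_s)

def treble_deemphasis_db(f_hz, tau_s):
    # First-order roll-off implied by the time constant.
    return -10 * math.log10(1 + (2 * math.pi * f_hz * tau_s) ** 2)

for tau in (120e-6, 70e-6):
    print(f"tau = {tau * 1e6:.0f} us -> corner ~{corner_hz(tau):.0f} Hz, "
          f"at 10 kHz: {treble_deemphasis_db(10000, tau):+.1f} dB")
# 120 us: corner ~1326 Hz, about -17.6 dB at 10 kHz
#  70 us: corner ~2274 Hz, about -13.1 dB at 10 kHz
# The record-side EQ is then whatever is needed to make the overall chain flat
# for the chosen tape.
```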
And then we can’t forget Dolby. In the professional arena Dolby A was ubiquitous, and right at the end Dolby SR almost clawed tape up to digital levels of performance. At the domestic side Dolby B was similarly ubiquitous, and Dolby C came along near the end, lastly Dolby S pretty much right at the point CDs killed domestic tape use. Dolby provided for dynamic changes to tape recording drive, with both fixed and sliding (on the advanced versions) frequency band companding, and was supposed to reverse the changes on replay - with varying success. But it did usefully reduce noise.
The granddaddy of cassette tape machines was the Nakamichi TT-1000 II, which would automatically optimise the bias and emphasis for a given tape, and then write a little digital marker at the start of the tape encoding the settings, so that on replay it could exactly apply the optimised settings. A fabulous misuse of technology.
A completely different and very clever approach to noise reduction was the Dolby HX-Pro variable-bias system, where HX stands for Headroom Extension. HX-Pro was invented by Bang & Olufsen in 1980 and licensed to Dolby. When a recording has a lot of HF content, that content effectively adds to the bias signal and limits the HF levels that can be recorded. HX-Pro reduces the bias level in the presence of high-amplitude HF audio, allowing recording at a higher level for the same distortion, and thus relatively reducing noise. On cassette tape the improvement is from 7 to 10 dB at 15 kHz.
The beauty of this system is that no decoding is involved, you simply have a better signal on the tape. It made 8-track 1/4-inch tape recording very practical.
Talking of fabulous misuse of technology, I give you the extraordinary Nakamichi TX-1000 turntable introduced in 1983. It measured the disc eccentricity and corrected for it. A secondary arm measured the eccentricity of the run-out groove and used this information to mechanically offset the spindle from the platter bearing axis. This process took 20 seconds, which I imagine could get extremely tedious once the novelty had worn off. Doomed by the date of 1983…