LP Vinyl

jz78817: Did you read the first sentence of that paragraph?

"Strictly speaking, the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies. "

Note: Frequencies. Not everything is made of simple frequencies within one range. Everything else feeds off of this.

Basically, the DFT works well on those things that it works well on.

Duh.

That means it doesn’t work well on the things it doesn’t work well on!

And music has a lot more of those things than you’d expect.

Again. If you want to slam vinyl there’s a lot of ways to slam vinyl without referencing anything digital. But not by making a very poor citation to an “advantage” about digital that isn’t as great as some people think.

(There are a lot of advantages of digital such as perfect copying, etc. But that’s getting even further away from the vinyl thing.)

Yes, I read it. Difference is, I understand it.

And that finite region of frequencies is everything below fs/2, half the sampling frequency. That’s what the anti-aliasing filter is for; it defines that finite region. For CD audio, that is 1 Hz* up to 22,050 Hz.

yes it is. every sound decomposes down to a fundamental frequency with harmonics at various intensities. Your examples of a hammer strike or a string pluck are transient events which have both a time component and a frequency component. But if you can hear it, it can be captured perfectly via sampling so long as the sampling frequency is high enough.

  • actually I think the Red Book standard only defines it down to 5 Hz.

I don’t think the LP comeback has anything to do with audio quality, really. I don’t care about which is better…music is about feel to me and I don’t have a super sharp ear. To me there’s no doubt that the charm of the LP experience is what is driving this. I love my records…when David Bowie died and when Prince died I pulled out their albums and listened to them the same way I did when I first got them, decades ago. Tapping an icon on your phone just doesn’t satisfy in the same way as pulling the record out of the sleeve, putting it on the turntable and dropping the needle…and then gazing at the cover art while you listen. I think the kids of today are really missing out.

yeah. I have a younger cousin who’s into collecting and listening to LPs. mostly because he’s into classic rock and wants to listen to them on the medium they were originally recorded and mastered for. can’t see any fault in that.

Sure, if people are honest about it.

I’m perfectly happy with people being upbeat about vinyl as a medium. I accept that it has inherent properties some, like you, see as an advantage.

What I get unhappy with are people who do a bait-and-switch, who claim to like vinyl for its actual properties and then begin to claim that it sounds “better”, where “better” is undefined. That “better” can either mean “more like vinyl”, which is tautologically true, or “more like reality”, which is a lie.

It’s equivocation, using the same word to mean two different things without acknowledging it, and it grates on me.

Sort of. For a digital system to work at all it has to have a defined bandwidth. If you don’t record with an anti-aliasing filter and ensure there is just the right amount of noise in the input signal, it simply won’t work. It is for all intents a busted implementation. This is so if you are doing audio or building a radar system.

It turns out that a lot of recording systems inherently have enough noise present anyway (for instance many microphones are limited by the noise of air molecules individually hitting the diaphragm.)

Adding additional frequency response shaping beyond what is mathematically needed is an artistic choice, not a technical one. So rounding off the highs on digital to make it sound smoother and more vinyl-like is essentially just a tone control. It isn’t a required part of the digital process.
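
To make the “it’s essentially just a tone control” point concrete, here is a toy Python/NumPy sketch of about the crudest possible high-frequency roll-off, a one-pole low-pass. The 8 kHz cutoff and the function name are my own illustrative choices, not anything a real mastering chain requires.

```python
import numpy as np

def gentle_high_roll_off(x, fs, cutoff_hz=8_000.0):
    """Round off the highs with a one-pole low-pass filter.

    x is a float array of samples, fs the sample rate in Hz. This is an
    artistic EQ move layered on top of the chain, not part of sampling itself.
    """
    a = np.exp(-2 * np.pi * cutoff_hz / fs)   # pole location for the chosen cutoff
    y = np.empty_like(x, dtype=float)
    acc = 0.0
    for i, sample in enumerate(x):
        acc = (1.0 - a) * sample + a * acc    # first-order smoothing
        y[i] = acc
    return y
```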

Excellent :smiley: Guitars are a great exemplar of what you talk about. There is however a lot of myth associated with much of this. A guitar pickup is an evil mess of interacting bits. They don’t just look like a coil with a simple inductance and resistance - there are vagaries in the winding geometry that mean the inter-winding capacitance can’t be treated simply, and they all have a ridiculously high impedance, which as you note makes the tone controls in the guitar very critical. You can match the vintage attributes, but you may need to measure more than just the simple resistance of the pot. And of course the mystic nature of the capacitors is even more insane.

My rule of electric guitars is that no matter what one might design, someone will try to push it way past its design limits to see what it sounds like. A good example is the JTM 45 versus the Fender Bassman. Famously Jim Marshall copied the Bassman circuit, but made two small but critical changes that changed the world of rock and roll. Probably the most important one was that he increased the feedback ratio.* Now in ordinary amplifier design this would be expected to lead to an amplifier that had less distortion. However this is only true when the amplifier is operating in its linear region. Once the output saturates the feedback loop opens, and far from reducing distortion, the larger feedback ratio actively pushes the distortion higher. So bliss is a Marshall Plexi in full roar.

  • The other was replacing the classic Fender tone stack - which made for a different set of tone voicings.

This is very true, and I do sort of miss it. But some of the reasons one doesn’t do things like this anymore are just modern life. When I first got a CD player, I used to listen intently for a whole album. But somehow life took over.

OTOH, I prize my season subscription to our local (very good) symphony orchestra. I don’t care how much money your sound system cost (and I have heard quarter million dollar systems) there is nothing you can do to match the real thing.

There is both some significant truth and some misunderstanding here. The Fourier theory contains a critical caveat. The signal must be periodic. Now this doesn’t mean what it is often taken to mean. It doesn’t mean that the only components of the music must be things that are periodic (a plucked string) versus transient (drum hit.) It means that the entire segment of signal, no matter how long it is, is periodic. Now this is clearly violated for just about all music. If I take the Fourier transform of Beethoven’s 9th, the symphony does not repeat endlessly for the life of the universe. However in all signals work, the simple answer is to wrap the sampled input onto itself, so in simplistic terms, the end of the signal is wrapped back to the start. So the symphony does repeat forever. Once you do this, you can perform a Fourier transform. (In fact, actual implementations of the DFT - such as the FFT - do this wrapping implicitly as part of how the algorithm is designed.)

The transform is the same size as the input: you have the same number of frequency bins as you had samples in the initial segment. You can reason perfectly about the frequency response and energy content of the signal.

There is however a tiny flaw in the system. The symphony doesn’t exactly wrap around. There is always a slight discontinuity at the join (unless you really did have a purely periodic initial signal.)
In normal signals work you will see this discontinuity result in ringing artefacts in frequency space. The usual answer is to “window” the signal so that it is gently tapered off to zero amplitude at each end before the join is done - so the join wraps over a stretch of zero signal. The precise choice of window function is part of the art.
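
As a toy illustration of the join problem and the windowing fix (the sample rate, block length, and tone are arbitrary numbers of my own choosing): take a tone that doesn’t line up exactly with the analysis block, transform it raw, then again after a Hann window has tapered both ends to zero.

```python
import numpy as np

fs, n = 48_000, 4096                 # illustrative sample rate and block length
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 997.0 * t)    # 997 Hz deliberately doesn't wrap cleanly

raw      = np.abs(np.fft.rfft(x))                  # brutal join: energy smears widely
windowed = np.abs(np.fft.rfft(x * np.hanning(n)))  # tapered to zero at both ends

# `raw` shows the ringing/leakage caused by the discontinuity at the join;
# `windowed` concentrates the energy back around the 997 Hz bin.
```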

However for a long signal, we don’t actually have a problem. The theoretical artefacts of the join will turn out to be so vastly below the noise floor, that in a real system, rather than a purely theoretical system, the system behaves perfectly.

If you took just a percussive transient, and cut it out in isolation from the rest of the recording, with a brutal edit either side, and took the Fourier transform, you would not get a nice result. Because you have indeed violated the basic theory - the percussive transient isn’t periodic. But if you leave it some breathing room in time, then you get a result that is ever closer to a perfect answer. When we are discussing the sampling theorem and digital audio we are talking extended periods of time, not artificially truncated snippets of signal.

In general, the duality between temporal space and frequency space is one of the most powerful tools around.

Francis Vaughan’s explanation for why you need double the frequency or more for sampling is good, but I thought maybe a more basic, intuitive version might help.

First off, we can use a simplification. You can imagine every frequency as a series of clicks. This, in fact, is how old computers used to make sound. I remember specifically programming loops that would make a certain number of clicks per second.

Each click has two parts that have to be distinguished for it to register as a single sound: an on, and an off. If it were just an “on” it would just sound like a longer click, and that wouldn’t work. So, to represent a given frequency, you need to be able to encode both an on and an off. That’s double the frequency.

If you have less than double, then, when you sample, you’d sometimes run into two clicks in a row. And that would sound like 1 click at a lower frequency. You’d be adding a lower frequency that wasn’t present in the original.

Now, real audio isn’t just on-off clicks. It has different values in between on and off. You get a sort of wave. But the same basic principle applies. If you try to record frequencies above half the sample rate, you create lower frequencies that weren’t present in the original.
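
Here’s that folding effect as a tiny Python/NumPy sketch (the sample rate and tone are just made-up round numbers): a 900 Hz tone sampled at 1000 Hz produces exactly the same samples as a 100 Hz tone, which is the “lower frequency that wasn’t in the original”.

```python
import numpy as np

fs, f_in = 1000, 900                 # sample rate and an input tone above fs/2 = 500 Hz
n = np.arange(2000)

sampled = np.sin(2 * np.pi * f_in * n / fs)
alias   = np.sin(2 * np.pi * (fs - f_in) * n / fs)   # a 100 Hz tone

# Same sample values (up to a phase flip): the 900 Hz tone has folded down to 100 Hz.
print(np.allclose(sampled, -alias))   # True
```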

Francis Vaughn - all good; thanks. Yes, the Marshall story is fascinating, and your cardinal rule about the electric guitar being all about pushing past its design limits sounds spot on ;).

One point of ignorance:

[QUOTE=Francis Vaughn]
Sort of. For a digital system to work at all it has to have a defined bandwidth. If you don’t record with an anti-aliasing filter and ensure there is just the right amount of noise in the input signal, it simply won’t work. It is for all intents a busted implementation. This is so if you are doing audio or building a radar system.
[/QUOTE]

Any chance you could explain this in more lay terms? Why does a digital system require some noise? And why would that be considered a “busted” implementation?

I think the fundamental point you reinforced for me is that a digital system can do more than an analog one, and part of the trick of making digital recordings pleasing and musical has been replicating some of the artifacts/limitations digitally.

There are a few elements to this, so bear with me.

If you are digitising a sound source you are doing two things. You are sampling it, typically at a constant rate, and you are quantising it - that is, you are measuring the values at each sample with a limited precision, rounding to some implementation-imposed resolution. The limit of resolution is the distance between adjacent values you can measure. Typically we use a binary representation, and thus the limit of resolution is commonly referred to as the least-significant-bit (LSB) or just “one bit” of resolution. For a 16 bit quantiser you have 2[sup]16[/sup] = 65,536 levels.
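
As a minimal sketch of what “one LSB” means, assuming a hypothetical 16 bit, ±1 volt converter (the full-scale value is just my example):

```python
import numpy as np

def quantise(x, bits=16, full_scale=1.0):
    """Round each sample to the nearest of 2**bits evenly spaced levels."""
    lsb = 2 * full_scale / 2**bits        # step size: the least significant bit
    return np.round(x / lsb) * lsb

print(2 * 1.0 / 2**16)   # ~3.05e-05 V: one LSB of a 16-bit, +/-1 V converter
```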

So, now imagine we have a signal that is very low in amplitude, of a similar order to the least significant bit. You can see that sampling that signal is going to be a mess. Sometimes you will see the amplitude higher, sometimes lower than the threshold between levels, but because of the low amplitude, you are not going to expect much more than a rough approximation to the signal. Now imagine that the signal’s frequency is exactly locked to the sample clock - say the sampling frequency is an exact integer multiple of the signal frequency. You will get a very different result - the sampling will cut the signal at the same point in the waveform each time, and you will get some sort of sum or difference signal. If the frequencies are almost but not quite locked you will get beats. If the signal is varying in frequency you will get all manner of weird moving beat effects.

The problem is that the signal is correlated with the sample clock. Visually you can think of moire patterns or fringing in digital pictures - or jaggies in computer graphics. It is a very closely related thing. If you can de-correlate the sample clock from the signal you can avoid this problem. You could make the clock sample times vary randomly, but that is really difficult, so adding a small amount of noise to the input signal, noise with an average amplitude of half the LSB, removes all these correlation artefacts totally.
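
A quick way to see the correlation problem and its cure (all the numbers here are illustrative, and the uniform ±half-LSB noise is just one common choice of dither): quantise a very low-level tone with and without the added noise and compare the spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 48_000, 1 << 16
t = np.arange(n) / fs
lsb = 2.0 / 2**16                                   # 16-bit, +/-1 V step size
tone = 1.2 * lsb * np.sin(2 * np.pi * 1000 * t)     # a tone hovering around one LSB

def quantise(x):
    return np.round(x / lsb) * lsb

plain    = quantise(tone)                                   # no dither
dithered = quantise(tone + rng.uniform(-lsb/2, lsb/2, n))   # ~half-LSB noise added

# np.abs(np.fft.rfft(plain)) shows spurious tones correlated with the signal and
# the sample clock; in the dithered version those distortion tones are gone and
# the 1 kHz tone sits on a broadband noise floor instead.
```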

There is a deep truth here. We define the dynamic range as the distance between the noise floor and the highest level. In a digital system we get lazy and just talk about the dynamic range as the ratio defined by the bit depth. This laziness is actually an error. The noise actually has to be there. If it isn’t you didn’t specify the channel correctly, and it doesn’t work. The fact that it doesn’t work in such a counter intuitive manner is just part of the fun.

Now, imagine you are trying to measure the voltage on something - say a battery. However, instead of a meter, all you have is a special light bulb, it lights up if the voltage is over 1 volt, and won’t light up if the voltage is under one volt. So, at first sight, all you can do is say whether you have more or less than one volt. You have one bit of resolution.

Now imagine I give you a special device that generates a random voltage, one that varies somewhere between zero and one volt. Every time you connect it, it generates a different voltage, and I guarantee that the voltages it generates are truly random. So this is a noise source of 0.5 volts average voltage. Does this device help you measure voltages?

The answer is yes. You connect your special lamp and the noise source in series and measure voltages. Say you have an unknown voltage that turns out to be 0.75 volts. Before, all you knew was that it was less than one volt. Now you can measure multiple times, and you can average the readings. On average, three out of four readings will light the lamp. For every doubling of the number of measurements you end up with an extra bit of resolution. Yet your actual measuring device is only good for one bit. This noise source is dither.
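
That lamp-and-noise-box thought experiment is easy to simulate; the 0.75 volt unknown is of course just an example.

```python
import numpy as np

rng = np.random.default_rng(42)
unknown = 0.75                              # the voltage we're trying to measure
trials = 100_000

dither = rng.uniform(0.0, 1.0, trials)      # the random 0..1 V box, in series
lamp_lit = (unknown + dither) > 1.0         # the one-bit lamp: on only above 1 V

print(lamp_lit.mean())                      # ~0.75: three readings in four light the lamp
```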

Hmm, so what happens when you are sampling sound? Well, the answer is that if the noise you add to your audio signal is perfectly pure white noise (AIWN - additive independent white noise) you can decorrelate the sampling artefacts. But if you shape the noise spectrum so that there is more energy at higher frequencies, and less in the mid bands, you end up effectively adding multiple dithered samples in the mid bands at the cost of adding noise to the higher frequencies. So, not only do you get the decorrelation of quantisation noise, but you get, essentially for free, better mid band resolution. Shannon’s theory covers this perfectly. The total information content in the channel remains the same, but by shaping the noise, you can reduce the dynamic range in one region in order to increase it in another. So long as the total area bounded by the band and the dynamic range remains the same, the information content is the same, and Shannon is happy.
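
For the noise-shaping idea, here is the simplest possible sketch: a first-order error-feedback requantiser. Real shaped dither adds randomness and uses higher-order, psychoacoustically tuned filters, so treat this purely as an illustration of “push the error towards the frequencies where it hurts least”.

```python
import numpy as np

def requantise_first_order(x, lsb):
    """Quantise x to steps of `lsb`, feeding each sample's quantisation error
    back into the next one. The error isn't removed; it is pushed towards the
    high frequencies, leaving the mid bands cleaner."""
    out = np.empty_like(x, dtype=float)
    err = 0.0
    for i, sample in enumerate(x):
        wanted = sample - err                    # compensate for the previous error
        out[i] = np.round(wanted / lsb) * lsb    # commit to the nearest level
        err = out[i] - wanted                    # remember the error just made
    return out
```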

In a full high resolution audio reproduction chain everything is symmetric. You must bandwidth limit the input with an anti-alias filter, so that energy from outside the pass-band cannot alias into unwanted products inside the pass-band. You must have the noise floor where you said it would be - at half the LSB. On the way out, you dither the digital signal, which allows you to provide signal that is actually below the noise floor, and you must bandwidth limit the output from the converter, a step that precisely reconstructs the waveform - hence it is called the reconstruction filter.

End to end you need noise and bandwidth limiting at both ends. Otherwise it won’t work properly, and it won’t work properly in a big way.

IMHO this symmetry, and the manner it is covered by Shannon is quite beautiful. Which is why I am happy to go on and on about it :smiley:

Dude, er, sir, er Francis Vaughn - I said “in lay terms”!!! :wink: :smiley:

I’ve read it a few times. Here’s what I’ve got.

  • The digital sample rate matters.
  • if an incoming signal mathematically syncs up with the sample rate, you get weird effects. You list a few; it seems okay to add another example of this effect: wagon wheels on movie stagecoaches appearing to spin backwards because the film exposure rate picks up the image of a spoke positioned just behind the prior image, even though they are moving in the correct direction. ?
  • so there has to be a layer of noise, so that any signal coming in sits on that layer and it is far less likely for the signal to sync up mathematically with the sampling rate.
  • a digital system has to account for this “sample rate sync up zone” in how it takes signals in, then in how it outputs sounds. That’s why the signal chain is as complex as you describe.

And, in addition to all that, a digital producer must “add back” some of the artifacts/limitations heard on analog systems, because “flat digital” doesn’t shape sound in ways that our ears look for in (analog) nature.

How’s that?

Vinyl is back enough that Jack White just had eight very expensive vinyl presses built for his Third Man Records label:

Wow, Francis Vaughn, I’ve gotta echo WordMan: I’m fairly good with the theory at a high level, and I’ve got a non-trivial understanding of the hardware involved in both analog and digital signal processing (I can turn the knobs, write software, can understand the principles of both analog and digital compression, but can’t design an amp), but wow. If I had the background to understand that explanation, I could probably build Digital/Analog converters as a hobby. :wink:

I’d ask you to boil it down a different way, but you’ve provided enough free condensed education already. I’ll try to climb up enough to understand it. The wiki page for Sampling (signal processing), and all of the pages it links to, seems like a good place to start researching. :slight_smile:

Of the questions I had about your explanation, the one that bothered me the most was, “Why is white noise important to do sampling well?” I was led to the dither section on Digital/Analog converters, which, while filled with math above my education, provides the explanation:

Which seems to boil down to: The introduced noise makes subtle transitions from the baseline appear more natural to the human brain; otherwise it would sound (or look) like it was produced by a simple algorithm. Perceived silence is rarely really silent, and black is almost never reflecting absolutely no light. When we actually perceive either as an absolute, it’s kind of disturbing. Without the noise, things tend to sound like a square wave, and look like a jagged digitization.

wguy123, he ain’t the only one investing in vinyl. The pressing plant I’m using for the current record, Josey Records, just bought all of A+R’s presses and, from what I understand, reconditioned them. Fat Possum also opened its own plant due to demand in 2014. When I used Rainbo a few years ago, they were slammed, and turnaround was a bit slow. I understand United was the same way at the time.

It isn’t just the brain. The big thing is this - the information is actually there. It isn’t an illusion, or a trick of human perception. There is actually proper real resolution of information that was in the original source and can be found.

As I said - this is where counter-intuitive things are going on.

So, listen to something where there is a lot of noise. Let’s just assume proper random white noise. Even when the signal you are trying to listen to is actually lower in level than the noise, you can often still make things out. Radio operators would use the phonetic alphabet to try to get difficult to hear things across; at the absolute limit, you can use Morse code, and send radio signals very far under very poor conditions, with the signal almost buried under the noise. The other example I love to give is the GPS system. All these hand-held receivers, some just built into your phone, yet they receive signals from satellites that are whizzing around at an orbital height of 20,180 km (12,540 miles). So what is the signal to noise ratio of the signal from those satellites? The answer is that the signal is 30 dB lower than the noise.

All of the above is a perfect illustration of Shannon’s theorem.

You need four things to define a signalling channel.
[ol]
[li]The frequency of the start of the band that carries the information[/li]
[li]The frequency of the end of the band that carries the information[/li]
[li]The highest level of signal that the band will carry[/li]
[li]The lowest level of signal the band will carry[/li]
[/ol]

The first two define the bandwidth. (For audio the start frequency is zero.)
The second pair define the dynamic range or signal to noise ratio. This is because the lowest level in the channel is the level when you are not transmitting information. Which means the noise. (If the noise level was zero, you could transmit an infinite amount of information.)
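
As a back-of-the-envelope illustration of how those four numbers pin down a channel: the Shannon-Hartley formula C = B·log2(1 + S/N), fed with CD-ish figures (22,050 Hz of bandwidth and the lazy 2[sup]16[/sup] dynamic range), lands almost exactly on the 705,600 bits per second that one channel of CD audio actually uses. The figures are mine, chosen for roundness, not a derivation of the Red Book spec.

```python
import math

bandwidth = 22_050            # Hz, roughly the top of the CD pass-band
snr = (2 ** 16) ** 2          # power ratio for a 16-bit dynamic range (~96 dB)

capacity = bandwidth * math.log2(1 + snr)
print(round(capacity))        # ~705,600 bits/s, vs 44,100 * 16 = 705,600 for one CD channel
```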

Now you can divide signal channels into four types - the combinations of:
Those that sample the signal in time and those that have a continuous signal over time.
Those that quantise the signal levels and those that leave the signal level continuous.

[ul]
[li]What we term as “analog” audio is unquantised and unsampled.[/li]
[li]What we term as “digital” audio is sampled and quantised.[/li]
[/ul]
(An example of a sampled unquantised device is the old bucket brigade delay.)

Now, without going into detail again, the entire trick is this. All the channel types are subject to exactly the same rules. Exactly the same mathematics plays out whether you sample or not, quantise or not.

So, listening below the noise. You can hear (i.e. receive information) below the noise so long as the information rate of the source remains less than or equal to the channel’s information capacity. For GPS and Morse code this means a very slow information rate in what is otherwise a quite wide bandwidth. In simple terms, if you give the receiver (or your brain) time to eke out the redundant information in the signal, which could be as simple as the long pulse of a Morse code beep, versus the fast moving frequency spectrum of a human voice in the same channel, or just repeating yourself lots of times, eventually the information gets through. Shannon’s theory tells you the fastest you can ever reliably move the information.
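
You can watch that trade happen in a toy Python/NumPy experiment (the tone, its level, and the observation length are all arbitrary): bury a steady tone roughly 30 dB under white noise, give the receiver a long enough look, and it falls straight out of the spectrum.

```python
import numpy as np

rng = np.random.default_rng(7)
fs, n = 48_000, 1 << 20                          # a long observation: ~22 seconds
t = np.arange(n) / fs

tone  = 0.03 * np.sin(2 * np.pi * 1234.0 * t)    # roughly 30 dB below the noise
noise = rng.standard_normal(n)                   # unit-RMS white noise
x = tone + noise

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, 1 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])        # ~1234 Hz: the buried tone shows up
```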

So, say you have a studio tape machine. The entire recording chain, from the moment the individual air molecules hit the microphone, the various electronic noise sources in the amplifiers, and finally the inherent nature of tape hiss, all form a basic noise floor in a recording. But you can actually hear audio at levels below the noise. There aren’t any high frequencies, and it is generally pretty muffled, but you can make a recording where the levels never get higher than the noise floor. Again, Shannon tells you exactly what you can expect to get. The loss of anything in the high frequencies is what is being traded off to get you the resolvable information in the mid to low bands.

When you play it back, the noise is there, but so is the low level audio.

So, bottom line for digital. It is exactly the same. But in a very counter-intuitive manner, if you try to make a recording, and you don’t have real noise at the level the converter needs it - it won’t work properly. If you have a 16 bit converter, you have defined the system to have 65,536 levels. Let’s say the highest level you can record is one volt. (What is often called FFFF level.) You must have noise in the input at 1/65,536 volts. If you don’t, you have not correctly defined the dynamic range. In the analog world you seem to have a higher dynamic range, because the real noise floor is lower, so in effect your claim of 16 bit resolution is wrong. The digital system won’t be able to see the noise, and - very very counter-intuitively - it won’t work properly because of it. Clearly it won’t be able to see any signal at a level below the noise floor, even though the mathematics says it should be able to resolve it.

The above is a fundamental point that many anti-digital sound golden eared pundits failed to grasp. They make arguments about the digital encoding “slicing off” the low level information. Which it will if there isn’t the correct noise floor for the depth of resolution of the quantiser. But there is the magic in the mathematics. If the noise is present at the correct level, everything works perfectly, and the system behaves with identical performance to an analog system with the same bandwidth and dynamic range. (What is weird is how the digital system misbehaves if the noise is at a too low level. I described how you can get weird correlated quantisation artefacts. But the manner in which the noise allows the system to capture the signal residing below the noise floor suddenly cuts in. And that is utterly beautiful.)
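
Here is that “slicing off” claim put to a direct test, with made-up but representative numbers: a 440 Hz tone a quarter of an LSB high vanishes completely from an undithered 16 bit quantiser, but survives, sitting under the noise, once the correct noise floor is present.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 44_100, 1 << 18
t = np.arange(n) / fs
lsb = 2.0 / 2**16                                   # one step of a 16-bit converter
x = 0.25 * lsb * np.sin(2 * np.pi * 440.0 * t)      # a tone a quarter of an LSB high

def quantise(v):
    return np.round(v / lsb) * lsb

print(np.all(quantise(x) == 0))                     # True: without noise, the tone is gone

dithered = quantise(x + rng.uniform(-lsb/2, lsb/2, n))
freqs = np.fft.rfftfreq(n, 1 / fs)
print(freqs[np.argmax(np.abs(np.fft.rfft(dithered))[1:]) + 1])   # ~440 Hz: still there
```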

Where I wasn’t quite explicit enough before.

You don’t add new noise into the system on playback. It is already there from the recording. But when you are mixing down a high resolution track (or perhaps creating sound in the digital domain ab-initio) the moment when you create a final lower resolution master (say a 16 bit CD version) from your higher resolution (say 24 bit studio digital mixing system) you need to reproduce that same thing - you must ensure that the noise floor for that 16 bit form is at the correct level, or you will get exactly the same problems as you would have done had this been digitising from an external source.

All of this stuff works and is necessary in any domain you are working in. The issue with noise is very commonly misunderstood. (There is another group of anti-digital pundits who claim that the inherent flaws in digital quantising are simply covered up with noise.)

The easy thing to take away from this is what I described earlier. Four things define a channel. The two sides of the band, and the top and bottom of the signal range. Three of those are easy to grasp. The one that defines the bottom of the signal range - the noise floor, cannot be ignored. It is not, as is commonly thought, a hard limit (like the other three) but rather it is a core part of how the universe operates, and if you don’t correctly define it, and ensure it is present at the level you have defined it to be, your system will not work correctly. If you do do it right, everything works exactly according to the fundamental mathematical description (aka Shannon.)

Noise in digital systems is no more and no less than ensuring this is true.

You lost me at Hello.

Darn. This is the problem with being an ex-academic.

I would really appreciate it if you could outline where you lose it - over the years I have been trying to refine my description into more understandable terms. It is very useful to know where I fail.

Yes.

Yes. However the wagon wheel effect is a good example of aliasing. Indeed understanding digital audio with reference to film is a good idea.

The problem with film is that there is no known way of making a bandpass filter for moving images. For audio it can be as simple as a slightly overgrown tone control.

The issue with quantisation distortion happens only at the very low level of the signal - right down at the limits of resolution. Aliasing happens at all signal levels. Quantisation distortion is addressed with noise, aliasing is addressed with correctly limiting the bandwidth of the input signal. (Both cases can be viewed as ensuring that the technical description of the input signal is a match for the recording system.)

Not just far less likely - but so long as you choose the noise correctly - never.

Sort of. The signal chain isn’t really all that complex. It is more that you have to ensure that the design parameters have been got right.

Two things here. The producer can, as a matter of artistic choice, tweak the sound. They always have done. Digital is more brutally accurate, so you might need a bit more artistic tweaking than with vinyl, which already has a built-in “sound”.
The use of things like shaped dither isn’t an artistic choice; it is simply a matter of tweaking the capability of the digital reproduction to have much the same trade-offs as the ear does. Thus making best use of the digital system.

Lemme think about this. You clearly know this stuff, but the pesky bit of making it understandable to civilians seems a bridge too far. :wink:

Who were you teaching? Clearly acoustic engineering PhDs near as I can tell.

Digital is not easy. At best, it starts by mimicking analog really, really well. From there, it can do much more. That level of abstraction matters, both: a) to the attempt at replication and where it misses, and also b) to how one attempts to explain it because it is once removed from a simple recording of the input.

Starting on that basis makes sense. ???

Maybe a video will help: This video explains the basics of digital audio and video, including the concept of sampling and the math behind it, in a very informal and accessible fashion.

Funny thing is, you can run 10 gauge wire from the amp to the back of the speaker, but if you look inside the box, you’ll probably find 22 gauge going from the connector to the crossover! :smiley:

Yes, yes, I know…

Out in my shop, the very avatar of “low-fi”, I have a pair of speakers waaaaay on the other side of the building. For simple ease and convenience, fuck it, I just ran phone wire. :o

But I would never stoop to vinyl. :wink:

Since I listen to Hip-hop music, vinyl never really went away to me. There were many independent labels (and I believe also in dance and electronic music) that released vinyl singles and albums throughout the '90s, '00s and now. There were some artists that only released vinyl singles, had some buzz, and people wonder whatever happened to them. If anything, in some genres vinyl sort of declined more recently as a lot of underground labels and record stores that specialized in vinyl went defunct.

For some acts and fans, in 2017 vinyl is so important that it is possible to record an album and give it away completely for free, while making a living off the optional sales of vinyl and special fan packages.