Digital remastering

What exactly do they do when they remaster a recording that was originally an analog recording? Is it possible to remaster a recording that was recorded digitally to begin with? How do they ‘clean up,’ say, recordings of 1930s jug band songs? Are they eliminating sound to do it?

I do a little audio work as a long-running hobby, so you shouldn’t take my answer as strictly authoritative.

When you’re turning an analog recording into digital, the quality you get is based on the “sample rate”, or the rate at which you’re taking digital snapshots of the sound.

Back in the days of '486 processors, my computer struggled to manage a 9.6 kHz sample rate. That basically means the highest frequency possible is 9.6 kHz. Human hearing in a healthy adult runs from 20 Hz to 20 kHz, so the effect is a tinny, “flat” sound.

Modern sampling is done at 40 to 48 kHz, or even higher in some cases (my software can hit 96 kHz with ease, but I never use it). So it encompasses the full audio range, and it sounds “fuller” as a result.

Cleaning up sound is “lossy”, if you will. The very short version of it is to identify a section of audio that’s “noise”, and the software can algorithmically “subtract” that sound from the audio. You will get a cleaner sound as a result, but some small amount of data was lost. Too much processing, too heavy-handed a filtering, and you start losing valuable data.
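For the curious, here’s a toy numpy sketch of that “subtract the noise” idea; the technique is usually called spectral subtraction. The function name and all the numbers are my own invention for illustration, and real denoisers use overlapping windows and much smarter smoothing:

```python
import numpy as np

def spectral_subtract(signal, noise_sample, frame=1024):
    """Toy spectral subtraction: estimate the noise spectrum from a
    noise-only region, then subtract its magnitude from each frame."""
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame]))
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame + 1, frame):
        chunk = signal[start:start + frame]
        spec = np.fft.rfft(chunk)
        # clamp at zero: whatever was hiding under the noise floor is gone
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out

# Synthetic example: a 440 Hz tone buried in white noise.
rate = 44100
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
noise = 0.2 * rng.standard_normal(rate)
noisy = np.sin(2 * np.pi * 440 * t) + noise
cleaned = spectral_subtract(noisy, noise)
```

Notice the subtraction clamps at zero: any real signal that sat below the noise floor in those frequency bins is lost for good, which is exactly the “lossy” trade-off described above.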

In short, yes, sound is being eliminated. With a delicate touch, you won’t notice. With a hamfisted approach, softer syllables of speech will drop out entirely.

Oh, and it’s not meaningfully possible to remaster a digital recording. I mean, you “technically” can, but there’s no new data to be found. A recording made at a lower sample rate is “stretched” to fit via extrapolation, so your higher sampling rate is just sampling extrapolated data. I feel like I’d need a diagram to make that clearer.
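Here’s a small numpy illustration of that “sampling extrapolated data” point, assuming an ideal band-limited stretch (the rates and tone frequency are just example numbers):

```python
import numpy as np

low_rate, high_rate = 8000, 48000
n_low, n_high = low_rate, high_rate          # one second of audio each

t_low = np.arange(n_low) / low_rate
tone = np.sin(2 * np.pi * 3000 * t_low)      # a 3 kHz tone sampled at 8 kHz

# Ideal band-limited "stretch" to 48 kHz: zero-pad the spectrum, invert.
spec_low = np.fft.rfft(tone)
spec_high = np.zeros(n_high // 2 + 1, dtype=complex)
spec_high[: len(spec_low)] = spec_low
stretched = np.fft.irfft(spec_high, n_high) * (n_high / n_low)

freqs = np.fft.rfftfreq(n_high, 1 / high_rate)
mag = np.abs(np.fft.rfft(stretched))
# Everything above the ORIGINAL 4 kHz Nyquist limit is (numerically) empty.
print(mag[freqs > 4000].max() / mag.max())
```

The 48 kHz file is six times bigger, but its spectrum contains nothing above the original 4 kHz limit: no new data appeared, you’re just storing the extrapolation at a finer grid.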

If you have access to the original tracks, then obviously you can do anything you want, just like normal mastering. Digital recordings are already digital; analogue tracks can be digitized.

Poor-quality recordings can be ‘cleaned up’ to some extent via digital signal processing. You can even take a crappy wax cylinder and adjust for pitch drift, edit out pops, etc. But (at least normally 🙂), as explained in the post above, you cannot take an old, scratchy, filtered recording, magically reconstruct information that is not there, and make it sound like you are in the same room, with hi-fi audio to make a grown man weep. And, obviously, you can do much less manipulation if you do not have access to the original tracks.

BTW if you are really doing professional-quality remastering, you need not worry that the digital formats are inferior to good old reliable reel-to-reel analog formats that many still swear by, because you will not be limited by a 96 kHz (to say nothing of 48 kHz) sampling rate. One common digital format for editing audio uses 24-bit PCM samples at 352.8 kHz. You can digitize everything and throw it into your digital audio workstation at that resolution, and, again, if you have original digital recordings made on DAT or whatever, it’s easy to convert.
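To put rough numbers on that format, here is some back-of-the-envelope arithmetic assuming a stereo (2-channel) stream; the per-minute figure is my own calculation, not something from a spec:

```python
# Rough data-rate math for 24-bit PCM at 352.8 kHz, stereo:
bits_per_second = 352_800 * 24 * 2
print(bits_per_second / 1e6)           # 16.9344 -- Mbit/s
bytes_per_minute = bits_per_second * 60 / 8
print(bytes_per_minute / 1e6)          # 127.008 -- MB per minute of audio
```

That is roughly twelve times the raw data rate of CD audio, which is why these resolutions are used for editing and archiving rather than distribution.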

Mastering is a bit of a flexible term.
Back in the early days, the product of the recording and mixing process was a “master tape”. This tape was used to create copies that were sent out to the pressing plants, from which acetate “masters” were cut, and from there records pressed. But as time went on there became more and more “master” copies in the chain (above we already see there are two), so “master” has lost a lot of its meaning. In modern production, “mastering” has become a verb. It is a separate step, performed by a mastering engineer, who takes the result of the final mix and then messes with it: tweaking the tonality, but usually, most importantly, messing with the overall compression. This was the step where the loudness wars were fought. Artists would sometimes complain that a perfectly good-sounding mix was being turned into a pumping, slamming horror. But the wisdom was that this is what would sell.

So re-mastering can mean many things. It could just mean taking the final mix and re-tweaking the balance of the audio. But usually we hope it means going back to the original tracking tapes and performing a new mix-down with higher-quality steps, and the opportunity for some different artistic input into the final sound.

Digital is no different. If you still have the digital tracking files, a remix is obviously possible. As for a new mastering step: with the lessening of the intensity of the loudness wars, it is possible to re-release a recording without quite the damage it was first released with. Some artists are doing this.

In the glory days of tape, a recording might be made onto a zillion-track 2-inch machine, with each channel taking just one tiny part of the sound. Once the tracking recording was complete, the mixing would proceed, working out how to turn the recording into a final result. Neatly, there are automation controls on many of the mixing desks used, and it is possible to actually replay some of the dynamic mixing as it was originally performed.

One of the big deals about re-mastering from the original tracking tapes is that we can extract more information from those tapes than was ever done before. Very high sampling rates allow us to replay not just the original audio, but also the bias signal. This is big because it allows the correction of scrape flutter. Scrape flutter is the effect of the physical tape being pulled past the tape head. Even with the best equipment the tape scrapes slightly, and just like a bow on a violin string, the tape very subtly stretches and contracts as it passes the head. This leads to a modulation of the signal. So right away, welded into the recording is a distortion mechanism. Subsequent replay just adds another layer of scrape flutter. But if you can see the bias signal, you can work out the precise dynamics of the tape’s travel and correct for the scrape flutter: not just the replay flutter, but the flutter that was originally welded in. So the result is audio that is more pristine than was ever previously accessible.

The final result can be reissued, re-mastered recordings from the glory days of tape that have significantly better audio than anyone has ever heard.

Recordings that were made and mixed on limited-track machines (think the ’60s) often bounced the mix between tracks on the tape, building up the final result. This may destroy the original pristine audio; it depends upon how it was done, and whether they preserved the basic tracks and only mixed onto a new tape.

Going back to even earlier recordings, say ones that are only available on records, it becomes an artistic interpretation of getting the sound off to best effect. There were no standards back in, say, the ’30s, so equalisation and replay speed will all need tweaking. Reduction of noise is difficult, since so much noise is in-band with the music, and attempts to use dynamic noise filtering can often lead to a lifeless or just plain weird-sounding result. But as with tape, once you know a lot about the production chain, you can make intelligent choices about how to get the best result. Some of the big-band mono 78 recordings are remarkable. They were cut direct to an acetate master, and mixing was done by the band members’ well-honed ability to know how to mix themselves in a performance. Fast cutting speeds, and actually quite high-performance microphones and electronics, mean there is better sound to be extracted than one might guess.

Pedantic correction here: the highest frequency possible is actually half of the sample rate. Sound is recorded as changes between samples, and the fastest possible change is on-off-on-off-on-off…, or high-low-high-low-high-low…. The faster the change, the higher the frequency.
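You can see this with a couple of lines of numpy: the fastest possible alternation of samples puts all of its energy at exactly half the sample rate (48 kHz here is just an example):

```python
import numpy as np

rate = 48000
# The fastest change a sampled signal can represent: high-low-high-low...
alternating = np.tile([1.0, -1.0], rate // 2)

freqs = np.fft.rfftfreq(len(alternating), 1 / rate)
mag = np.abs(np.fft.rfft(alternating))
print(freqs[np.argmax(mag)])  # half the sample rate: 24 kHz
```

Any frequency above that limit cannot be distinguished from a lower one, which is the Nyquist limit in a nutshell.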

Oh, and you can remaster digital recordings, but I’ll get into that in a longer post.

To answer the OP, it’s probably best to go over what “mastering” audio means in the first place. The process of mastering is to take the original recording and apply all the effects to it to make it sound the way you want the final version to sound. This can include changing things like the volume, frequency response (aka equalization), reducing noise, removing pops and clicks, fixing spots that are out of tune, adding special effects like reverb or pitch shifting, and so much more. The final version after you do all of that is the “master.”

Remastering is, as the name implies, doing this to something that was already mastered. This can be done for various reasons, but the most common are to repair something that had gone wrong or had gotten damaged, or to change the audio to sound more like how things are mastered today. It can also be used to add stereo or surround sound to things that didn’t originally have it, or to just improve the quality.

Ideally you get the original recordings again, and make the changes to that. But the original recording may not be available or may be damaged. So you may wind up using things that were already altered, or even use multiple sources to try and get the audio back to its best. The same sorts of things can be done as in the original mastering.

Then there are AI techniques, which involve training an AI by giving it examples of recordings that need to be remastered, and the remastered versions, and having it learn how to alter one to make the other. This is most often used for repair scenarios–you give it both damaged and clean audio. But it can also be done to try and remix it to work in higher speaker setups, like stereo or surround sound. Rarely will the AI create the final product, but it’s often a good initial step.

And, yes, all of this can be done with digital recordings, same as mastering is now usually done with digital recordings. The main thing is just that it’s unlikely the digital recording will be damaged. And since it was recorded digitally first, it removes the most problematic step in the (re-)mastering process–i.e. converting the analog recording to digital.

The most common reason to remaster something digitally recorded is either that the artist didn’t have the skills to do it as well as they’d have liked, or that the original audio is lost, and they need to try and recover it from a lossy version, like an MP3.

For example, imagine that you are new, and you created some music, mastered it yourself, and released it online. Then you get better at making music, and your old music makes you cringe. It makes sense that you might want to go back and make a remastered version, even if there was nothing actually wrong with the original. You just want it to sound better.

You may even have lost your original recording, and can only take the version that was both mastered and compressed for distribution. So you have to edit even that.

BTW, while a remastered version won’t likely include rerecording the main event, like the actual singer singing, it can include replacing sound effects and such with better versions. If they don’t sound the same, you get into the world of remixing instead of remastering. The difference is that a remaster is trying to still sound like the original, while the remix is doing its own thing.

I recall reading about the early days of CDs. Audiophiles complained they didn’t “sound right”. One suggestion was that on old equipment, the higher frequencies were neither picked up by microphones nor amplified as well as the mid and low frequencies. So one form of mastering would be to judiciously boost the high frequencies, so the sound was not so muffled. Computer software can be really smart at analyzing the frequencies, determining which ones did not get recorded at the right volumes, and boosting those (and also detecting and removing aliased frequencies).

Another complaint was that the same was done in analog studios back in the day as technology got better. So cheap analog-to-CD transfers would take those master tapes, with the high frequencies already boosted, and simply digitize the signal as is, which meant the high frequencies came out too loud.

A core problem was that often the CDs were not digitised from the real “master tape”. There is only one master tape, and it tends to live in a vault. Often CD “masters” (i.e. the digital version of the audio in the correct format) were created from the tapes originally used by pressing plants for vinyl. The pressing plants would call these tapes “masters” even though they did not contain the same audio as the real master tape. (For them it was the master; they didn’t care about its provenance.) These tapes had been rebalanced to take account of the vagaries of the vinyl format. (In modern parlance, they had been “mastered” for vinyl, which is part of why the word “master” has lost its original meaning.) So indeed, the audio was usually further compressed and had some top end added, both to make the vinyl reproduction sound better. Bass frequencies would be mixed into mono to stop the stylus jumping out of the groove. There was real artistry in this. But used as the source for a CD, it sounded pretty evil. Some artists were horror-struck when it became apparent how bad these early CDs sounded. Some re-issued new versions. Some of these were called re-masterings; some were just “better”. (Fans of King Crimson will know that there are at least four versions of all the classic-era albums, five if you count vinyl. All sound significantly different to one another. The first CDs were not good.)

Nowadays the word “master” is also a verb, and denotes the creation of the final version for use by the reproduction chain. This is annoying.

For the earliest days of CDs, one paid attention to the coding provided on the case: AAD, ADD, and DDD. The first letter indicated whether the original studio recording was digital (D) or analog (A). The second, if I recall, was the mixing and balancing. The final was the master for the disc (obviously, always digital).

So for some analog recordings, the tape was transferred to digital after the studio processing; for others, the original session tapes were digitized and then merged with the processing done digitally. The latter was extra work to “rework” an album, so obviously it was considered the better choice. All-digital was assumed to be the best.

Of course, despite my extensive collection of CDs from then, I really can’t tell really good audio from merely OK. I’m not that much of an audiophile. What I did note, for example, is that on the first CD release of an album like CCR’s Greatest Hits you can hear the progression of recording technology, as the background white noise becomes less and less obvious over the years the songs were recorded.

The Beatles were masters of studio tech; apparently they would do things like record on 4-track (the best at the time) and then bounce the 4 tracks down to one while recording additional tracks, to get 8- and 16-track effects on a 4-track deck.

While every step of the process does inevitably lose information, it’s possible to partially reconstruct that information, with varying degrees of accuracy. You can, for instance, say things like “The sound on this track appears to be a violin, except with the higher frequencies chopped off”, and then add back in the higher frequencies that a violin would be expected to have. The information you’re putting back in won’t be an exact match for the information that was removed, but with the right combination of skill and technology, you can get it close enough that human listeners can’t tell the difference.

I realize our hearing gets lousier as we age, but it seems to me a lot of older recordings, when I hear them on the radio or in a movie/TV show, are playing back faster and/or higher pitched. Anyone? Bueller?

I’m not saying this technology doesn’t exist, but in 10 years of working as a sound engineer, I’ve never come across something like you describe. There is some voodoo shit out there, sure, like a program that allows you to change a recorded guitar chord from minor to major, and similar stuff, but I’ve never heard of anything like that. Can you point me in the direction of something like what you’re talking about?

No, I can’t point to specific examples, as I don’t know of any myself. I was speaking of things that would in principle be possible. I do know of examples in other domains of ill-defined data inversions based on best guesses, but those would be rather off in the weeds for this discussion.

Not your imagination. Many examples. For instance, concerning “Caroline, No”

During the mastering process, Wilson sped up the track by a semi-tone, following the advice of his father Murry, who thought that the vocal would benefit from sounding younger. In doing so, the song’s tempo increased by 6% while the key was raised from C to C♯.

But if you heard “Caroline, No” as it was originally released, with the speed increase baked in on purpose, it ought to sound the same pitch-wise today, unless (a) your pitch perception changed over time, (b) some revisionist engineer, aware of the alteration that was made back in the day, decided it needed to be “corrected” (but that would lower the pitch, and yes, I know that applies only to this special case, which isn’t what burpo asked), or (c) burpo is correct and there is some conspiracy afoot to monkey with the pitch of old records, which seems unlikely, and I haven’t noticed anything like that happening, but who knows?
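Incidentally, the numbers in that quote check out. A semitone in equal temperament is a frequency ratio of the twelfth root of two, and speeding up tape raises pitch and tempo by the same factor:

```python
# A semitone is a frequency ratio of 2**(1/12); speeding the tape up by
# that factor raises C to C-sharp and shortens the track by the same amount.
semitone = 2 ** (1 / 12)
print(round((semitone - 1) * 100, 1))  # 5.9 -- the "6%" tempo increase quoted
```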

Well, movies were shot at 24 frames per second, and in countries with 50 Hz television, they ran the film at 25 fps. So there was a slight but perceivable pitch change.
Another famous pitch change was the reissuing of Miles Davis’ Kind of Blue. That was originally released at the wrong speed, and decades of fans heard it at the wrong pitch. It was finally reissued (another from-the-master-tapes effort) and it sounded noticeably different.
Now that everything is digitally sampled, it is very difficult to make a mistake; the pitches are pretty much welded in, and require explicit processing to mess with.
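For the film case, the 24-to-25 fps speedup works out as follows (just arithmetic, nothing controversial):

```python
import math

# Film shot at 24 fps, replayed at 25 fps for 50 Hz television:
speedup = 25 / 24
print(round((speedup - 1) * 100, 2))      # 4.17 -- percent faster
print(round(12 * math.log2(speedup), 2))  # 0.71 -- semitones sharper
```

Roughly two-thirds of a semitone: small enough that most people never notice, big enough that people with good pitch memory do.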

The slow creep of concert pitch is another question. One not without controversy attached.

Maybe we just slow down with age, and everything just seems faster to us.

More of Hugo Chavez’s posthumous bastardry?

Nitpick. The Beatles knew next to nothing about studio tech. All the tech work was accomplished by George Martin and his crew.

As I understood it, though, from some documentaries: they didn’t do the techie stuff, but they understood they could record far more than the basic 4 tracks by combining the first 4 onto one and recording 3 more, and so on… which allowed them to add in extra effects after the fact (like a French horn?)

While computer-wise it might be possible to create higher frequencies, I think a lot of the computer magic was simply boosting the amplitude of higher frequencies (while minimizing the associated white noise) that did not get recorded as loud as in the original performance due to technical limitations; or recreating the “natural” sound from Dolby-processed signals.