I was gonna add that, as well as experimenting with changing the speed, you should also see how changing the pitch affects the music. You can lower the pitch without reducing the speed if you have the right tool.
Yeah, Audacity does that, but why? The “true” recording would just be a speed change.
It’s very possible - even likely - that a recording from the 30s plays a little too fast or slow, even off the original 78. (In my experience, more likely a little too fast.) The industry was still using counterweight-powered cutting tables to record, and what few technical standards there were, were loosely interpreted. Each company generally put out records meant to be played between 76 and 80 rpm, but that’s about as close as they came.
Finally, there was the changeover from wind-up to electric motors in home phonographs in the late 20s-mid 30s (it took a while, mostly because record and phono sales just about stopped for several years in the early 30s). This meant that the listener could no longer fine tune the speed of the turntable. So even if a record was off pitch, nothing much could be done.
Not necessarily. Distorting elements can change more than speed. Choosing a different microphone, for example, might well make a difference in the pitch of the recorded voice.
Or the software you use to slow down the speed might itself slightly change the pitch. In an analog recording, for example, lowering the speed would definitely lower the pitch, so you would probably increase the pitch to compensate. How much, if at all, a similar phenomenon would occur with the software would depend on the algorithms used to effect the changes.
And, since you’re already making one set of changes to subjectively improve the sound quality, why limit yourself to changing only one parameter?
I’m not a musician, but I do have a degree in digital signal processing. The extant recordings of Robert Johnson were recorded in 1936-1937. I’m under the impression that it is a pretty difficult task to speed up a recording while maintaining pitch using the technology available in that era. I’m not familiar with Audacity, but I assume that it uses DSP to change playback speed while maintaining pitch. Wouldn’t it be pretty obvious to the ear if the speed of the tape was increased by a substantial non-integral multiple due to the change in fundamental frequencies? Also, I don’t understand how a microphone could change the pitch of a recording. It certainly can change the frequency response or equalization, though.
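For the curious, here’s a minimal sketch of the two operations being discussed, in Python (librosa assumed, file name hypothetical; I don’t know what Audacity actually uses internally, so treat this as illustration only):

```python
import librosa

# Hypothetical rip of one of the tracks; sr=None keeps the native rate.
y, sr = librosa.load("johnson_track.wav", sr=None)

# 1) Plain resampling, then playback at the original rate: speed and pitch
#    move together, just like slowing a turntable. More samples played at
#    the same rate = slower and flatter.
y_slow = librosa.resample(y, orig_sr=sr, target_sr=int(sr * 1.028))

# 2) Phase-vocoder time stretch: duration changes, pitch is preserved.
#    rate < 1 slows the audio down.
y_stretched = librosa.effects.time_stretch(y, rate=1 / 1.028)
```

With the first method you’d hear the fundamentals drop along with the tempo; with the second the tempo changes but the fundamentals stay put, which is exactly the trick that was out of reach in 1937.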
Moderator reminder: Folks, please, if you’re quoting someone … be careful about whether to parse links or not. In this case, ZenBeam correctly did NOT parse the link to a site that’s not workplace-safe; but when that was quoted, the link was automatically parsed.
So, please, everyone: be careful about quoting non-parsed links, OK?
Boyo Jim, no sweat, it’s an easy mistake to make, no harm, no foul.
I don’t know jack about all this recording technology stuff, but I thought someone ought to mention that one of the first side effects of nerves, for the performer, is a tendency to rush everything.
I imagine being recorded with some fairly new technology when you are used to only playing live would be kinda nerve-wracking. (Another side effect is a tendency to sing sharp, although it would be hard for Mr. Johnson to go very sharp, pitch-wise, without sounding noticeably out of tune with his instrument.)
This is really fascinating stuff. What’s **Fishbicycle**'s take on this one?
No, a different microphone will affect the tone, but not the pitch. Changing the speed will alter the pitch. Those are completely different things.
The goal here (for me, anyway) is to replicate what Robert Johnson sang as well as possible. If the recording was sped up, then slowing it back down would be more accurate. Now it’s true that we may not have an objective way to determine if the audio was sped up. We’re then left with only a subjective means (listening and deciding what sounds most natural) of determining an objective quantity, the true speed of the audio. This isn’t entirely impossible: at some speed, you can tell the audio is too fast or too slow.
I wrote the software I used to slow down the audio, and I know (based on testing) that it only affects the speed of playback. Essentially, it reconstructs the analogue signal encoded on the CD and resamples it at a different sampling rate.
Now, it is true that when processing the audio, the tone may be affected, just as it’s affected if you turn the bass and treble knobs (or more knobs if you have an equalizer) on your stereo. Those are subjective decisions. Assuming a constant recording speed, however, there is one true speed of the audio.
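As a sketch, the speed-only operation looks something like this (scipy assumed; this isn’t my actual program, but the idea is the same):

```python
from scipy.io import wavfile
from scipy.signal import resample_poly

rate, audio = wavfile.read("track.wav")   # hypothetical mono CD rip
slowdown = 1.028                          # e.g. undo a 2.8 percent speed-up

# Reconstruct the band-limited signal and resample it. Playing the extra
# samples back at the unchanged rate gives slower playback AND lower pitch,
# exactly like slowing a turntable.
slowed = resample_poly(audio, up=int(round(slowdown * 1000)), down=1000)
wavfile.write("track_slowed.wav", rate, slowed.astype(audio.dtype))
```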
In this thread on another board from a couple years ago they talk about using the 60 Hz (or 50 Hz) power line frequency to determine the true speed. I looked at this for one of the tracks, and got some sharp peaks. If anyone wants to noodle over them, they are
30.38 Hz
51.45 Hz
60.03 Hz
60.75 Hz
The above are probably accurate to +/- 0.03 Hz. There were also peaks at around 93 and 121, but they were 5 or 6 Hz wide. Also, a DC peak, rolling off by something like 3 to 10 Hz, depending on how you measure it.
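If anyone wants to reproduce this, the analysis boils down to an FFT and a peak search, something like this sketch (scipy assumed; not necessarily the exact code I used):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

rate, audio = wavfile.read("track.wav")        # hypothetical mono rip
windowed = audio * np.hanning(len(audio))      # window to reduce leakage
spectrum = np.abs(np.fft.rfft(windowed))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)

# A three-minute track gives roughly 0.006 Hz bin spacing, which is plenty
# for the +/- 0.03 Hz accuracy quoted above.
band = (freqs > 20) & (freqs < 160)
peaks, _ = find_peaks(spectrum[band], prominence=spectrum[band].max() * 0.1)
print(freqs[band][peaks])
```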
So is 60.03 Hz the power line hum? Was the 60.75 Hz the power line hum, corresponding to a 1.25 percent speed-up? Were both hums from different parts of the process, with a speed-up in between? Was there a hum at about 60.4 Hz that was removed, leaving these peaks on either side? Is the 51.45 Hz signal from 50 Hz power, meaning the audio was sped up 2.9 percent (half a semitone)? Tune in at 11:00 to find out*.
(*) You won’t really find out.
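For reference, the speed-ups those hypotheses would imply are simple arithmetic (which peak, if either, is the true mains hum is of course the open question):

```python
import math

for hum, mains in [(60.75, 60.0), (51.45, 50.0)]:
    ratio = hum / mains
    semitones = 12 * math.log2(ratio)
    print(f"{hum} Hz vs {mains} Hz mains: {100 * (ratio - 1):.2f}% fast,"
          f" {semitones:.2f} semitones sharp")

# -> 1.25% fast (about 0.22 semitone) and 2.90% fast (about 0.49 semitone)
```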
Hmmm. Wikipedia’s entry on Pitch begins with “Pitch is the perceived fundamental frequency of a sound.” Wikipedia’s entry on Music Theory says of pitch “Pitch is determined by the sound’s frequency of vibration.” In my post above, I’m taking pitch to be the actual frequencies of the audio, closer to the second definition.
When I talk about “tone” I mean the relative amplitudes of the different frequencies in the audio. Wikipedia doesn’t use tone in this sense, so maybe I’m using the wrong term.
The oldest silent movies were hand-cranked while filming. As the cranker’s hand became tired, he slowed down, which resulted in an apparent speedup when played back at a constant speed.
Motor-driven silent cameras ran at 16 or 18 fps. If played back at the same speed, they are fine, but modern projectors use 24 fps as the standard for sound, and if the projectionist doesn’t flip the sound/silent switch or doesn’t have that option, the film will run fast.
Sometimes it’s intentional. We get so used to seeing the Keystone Cops’ lickety-split antics it’s hard to watch them any other way. And, mercifully, they are over sooner.
I’m also not a musician, and while I’ve worked some as a recording engineer, I worked more often mixing music for live performance. My only real point in the quote you posted was to consider using more tools to improve the sound than the speed alone.
I agree that adjusting the recording speed back in Johnson’s day without affecting the pitch would be difficult, though it’d be more precise to say that the difference would only be apparent if there was a difference between the recording speed and the playback speed.
As to obviousness, the pitch change from a speed difference would be obvious assuming you could compare it to the original sound. Since you can’t, and you have no reference tones or any other means to calibrate the speed, you’re just left guessing about how accurate the final product is.
Microphones can make a difference because they all distort the incoming sound to some extent, and different mics tend to amplify different frequency ranges. For example, so-called “ribbon mics” were preferred for vocals and were known for sounding “warmer”, but they didn’t respond well to sharp percussive sounds and so weren’t used for drums. This is in fact a matter of frequency response, though over only a portion of the range, and the result may be perceived as a difference in pitch.
I disagree with your last statement. As both a musician and a sound engineer, I have to say that a microphone will NOT change the pitch of a sound. What you are probably getting at is a change in overtone (harmonic) emphasis, which can change the “color” of the sound.
A clarinet and a flute can play the same note and impart a different color to the sound (the clarinet emphasizes odd harmonics, the flute is mostly fundamental and 1st harmonic only). But a clarinet playing a C won’t sound like it’s playing a C# if you alter the harmonics by filtering or changing mics.
You could get a drastic change in the overtones so that the fundamental is overshadowed by some other harmonic. The most likely change in perceived pitch would be an octave, then a fifth. The least likely change would be a small one: a whole step, half step, quarter step, or less.
Contrast the kind of pitch change created by a small change in speed of playback. THAT could make a 1/2 step change easily. A 6% speed error is about a 1/2 step, IIRC.
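That figure checks out: in equal temperament a half step is a frequency ratio of the twelfth root of two, as a quick sanity check shows.

```python
half_step = 2 ** (1 / 12)   # equal-temperament semitone frequency ratio
print(f"{half_step:.4f} -> {100 * (half_step - 1):.1f}% speed change")
# 1.0595 -> 5.9%, so "about 6%" is right
```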
Well, this has been fun. I’ve been looking at the spectrum of the Robert Johnson recordings at the low frequencies, where any power line contamination would be. All recordings are from Robert Johnson: The Complete Recordings, issued in 1990*.
Here’s a picture of a typical plot between 0 and 160 Hz. There is a sharp spike near 60 Hz, and also one near 50 Hz. Both 50 and 60 Hz were possible frequencies for electric power in that era.
If I zoom in on 50 to 62 Hz, to cover the two spikes, I get plots like these: Disk 1, Track 4, Disk 1, Track 6, Disk 1, Track 8, and Disk 2, Track 4. (Disk 1, Track 8 might not show up; I’ve been having problems with mediafire) For reference, I added a dashed line at 51.4 Hz. The first three were recorded in Nov. 1936, and the last in June 1937. These are representative of all the tracks I’ve looked at (about 15 total, from both recording sessions). The spike near 60 Hz varies from track to track, but the other spike is rock-solid, with its peak at 51.4 Hz. If I extract each one, amplify it, and listen to it, the 51.4 Hz spike is just a hum, but the spike near 60 Hz has some structure to it, and seems to (kind of) follow the music. I imagine it’s something like RJ’s hand rubbing the guitar as he strums, but I don’t really know.
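The extract-and-listen step is just a narrow band-pass around each spike plus some gain, roughly like this (scipy assumed; a sketch, not my exact processing):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("d1t04.wav")    # hypothetical rip of one track
sos = butter(4, [51.0, 51.8], btype="bandpass", fs=rate, output="sos")
hum = sosfiltfilt(sos, audio.astype(np.float64))

# Boost the isolated band so it's audible on its own.
hum *= 0.5 * np.abs(audio).max() / (np.abs(hum).max() + 1e-12)
wavfile.write("hum_51hz.wav", rate, hum.astype(np.int16))
```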
So what can we tell from this? If the 51.4 Hz hum was from the original recording sessions, then the speed doesn’t vary from song to song (at least over the ones I’ve looked at). Any speed change was done to all the songs equally.
Beyond that, it gets murky quickly. It’s not inconceivable that San Antonio and Dallas had 50 Hz power in the late 1930s; parts of southern California, for example, had 50 Hz power until 1948. There were other frequencies around as well. (I’ve started a thread here to find out if anyone on the board knows.) If it is a 50 Hz power line frequency, and if it entered the audio when Robert Johnson made his original recordings, then the recordings are 2.8 percent fast, about half a semitone.
It’s also possible that it’s power line hum from some other stage of the processing, and isn’t directly related to the audio speed at all. In that case, it only tells us the songs are at a consistent speed, at least from the point where the hum entered the audio.
Finally, it might not be power line hum at all, although it’s kind of hard to imagine what else would be that constant in frequency.
*This was reissued in 1996, supposedly with “corrected fidelity and pitch problems from the cardboard-packaged box”. Does anyone know precisely what the changes were? I compared a sample of one track online with my track, and AFAICT they were the same speed.