Audio Engineers: Where did my voice track go?

I work as an editor at a TV station, and I had a real WTF moment this evening.

I had to use a tourism commercial as part of a news story. It was dubbed off a beta tape from the ad agency into our Avid system. The commercial was in stereo, and I sent it to playback as such. For various reasons, all audio is mixed monaurally by the audio operator during the newscast.

The strange result of this is that when the left and right channels of my video file were mixed into one channel, the voice track was gone, and all that was left was the music. If I listened very closely I could make out the faintest remnants of the voice track, but it was as if the phase had been engineered in such a way that the voice would be cleanly cancelled out when the audio was mixed to mono. I verified this result by toggling a setting in my Avid editor that does a mono mix.

So my question is, what sort of audio engineering technique results in near-perfect, karaoke-like voice removal when left and right tracks are mixed? And was this likely done deliberately, or was it a mistake on the part of the ad agency?

I’m not an audio engineer, but I’d guess accidental phase cancellation.

Yes. At some point, probably when it was dubbed off beta, the phase of one of the channels was reversed. When mixed to mono, the material that was exactly the same on both channels* (the voice-over) cancelled, leaving whatever was true stereo: the music.

  * That is, centered in the stereo field.

No. If the phase was reversed in dubbing, the phase shift would apply to everything, and the mono mix would sound normal. I think hawthorne is correct: somehow the vocal track was recorded out of phase.
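The distinction in that last post is worth making concrete. Here's a quick sketch in plain Python (sample values made up for illustration) showing that inverting just one channel cancels whatever is identical on both channels, while inverting both channels cancels nothing, the mono mix just comes out sign-flipped:

```python
# Hypothetical samples: "voice" is identical on both channels (centered),
# the music parts differ per channel (true stereo).
voice = [0.5, -0.3, 0.8, 0.1]
music_l = [0.2, 0.4, -0.1, 0.6]
music_r = [-0.3, 0.1, 0.5, -0.2]

left = [v + m for v, m in zip(voice, music_l)]    # voice + left music
right = [v + m for v, m in zip(voice, music_r)]   # voice + right music

# Case 1: one channel inverted before the mono sum.
# The voice terms cancel; only music_l - music_r survives.
mono_one_flipped = [l - r for l, r in zip(left, right)]

# Case 2: both channels inverted. Everything shifts together,
# so the mono mix is just -(left + right): voice fully intact.
mono_both_flipped = [-(l + r) for l, r in zip(left, right)]
```

So a polarity flip introduced during dubbing would have to hit exactly one channel to produce the karaoke effect the OP heard.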

This would be easy to do, as it is not unusual to find microphones, even from reputable manufacturers, that are wired out of phase. However, I don't know why anyone would use two mics for a vocal track.

They could have been expanding the stereo field on the vocals. A signal 180° out of phase would have a thick dual-mono sound that stands out.

Sorry to disagree, but this is exactly how the original Thompson Vocal Eliminator worked. Sounds that are centered in the stereo field are eliminated when the phase of one track is flipped. Sounds that are panned left or right are not affected. The Vocal Eliminator uses other tricks as well, but that’s the basic way it functions.

Let’s take a very simple example: voice center, piano left, guitar right. If we listen to only the left track, we hear voice and piano. Listening to the right, we hear voice and guitar. If we flip the phase of the right channel, and add it to the left, we have piano (in phase), guitar (out of phase, but that makes no difference, since it’s not out of phase with anything) and vocal (in phase) + vocal (out of phase) = nothing.
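To put numbers on that example (sample values invented, just to show the arithmetic):

```python
# Voice center, piano left, guitar right -- three made-up sample streams.
voice = [0.4, -0.2, 0.6]
piano = [0.1, 0.3, -0.5]
guitar = [-0.2, 0.5, 0.1]

left = [v + p for v, p in zip(voice, piano)]     # voice + piano
right = [v + g for v, g in zip(voice, guitar)]   # voice + guitar

# Flip the right channel's phase and add it to the left:
# voice - voice = 0, so only piano - guitar remains.
mixed = [l - r for l, r in zip(left, right)]
```

Whatever the voice samples were, they drop out of `mixed` entirely; the two instruments survive because neither had a copy of itself on the opposite channel to cancel against.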

NoCoolUserName nailed it. I just imported a song into Audacity, separated the channels, inverted one, and did a mono mixdown. It did a pretty good job of eliminating the vocals. Who knew? :slight_smile:

Next time I’m at work I’ll have to see if my fancy expensive Avid can invert phase as easily as my open-source audio editor.