Satellite transmission question

I was just watching a journalist reporting from the Middle East, by satellite. Considering that the speed of sound is so much less than the speed of light, how is the sound synced with the image? Is the image delayed at the receiving end?


This is what I meant to ask, but saved too late:

I was just watching a journalist reporting from the Middle East, by satellite. Considering that the speed of sound is so much less than the speed of light, how was the sound synced with the image back before things were digitized? Was the image delayed at the receiving end?

Or more likely, was it not actually ever in the form of sound at all, even in the old days? Or was satellite usage begun at the same time as digital technology?

Both the sound and the video are carried on the same signal at the speed of light. The speed of sound has nothing to do with it.

The difference between the speeds of sound and light only matters over the distance from mouth to microphone and from subject to camera lens. Once the feed is electrical, everything travels at the speed of light.

There certainly will be a delay from the point of origin to the recipient, but as UncleRojelio points out, the audio/video signals travel together, and are always in sync.

To look at it another way, the sound is traveling at the speed of light. Cool, huh?

Then why is there so often a noticeable delay? The person on this end asks a question, and the reporter just stares for a few seconds before responding.

The delay has nothing to do with the speed of sound. Although I don't know for sure what causes it, I assume it's that the person in the "remote" location is getting a feed from the studio that is a bit delayed. The person in the remote location also has to send their feed back to the studio and then on to our TVs, which may add to the delay.

Simply put, it’s because the signal has to go from where the first studio is, through a ground station, up 26,000 miles to a satellite in geosynchronous orbit, down another 26,000 miles to another ground station, and then out to where the other studio is. It’s kind of a long trip, and it takes light a little bit to make it the whole way.

Not only is the distance a factor, but there is some delay introduced by the electronics as well. There is about a 2 second difference between our local cable analog channel and the twin digital channel. Not intentional, but just the accumulation of store-and-forward technology time.

That’s only for a one-way trip.

The anchor in the studio asks a question, and the signal travels 50,000 miles to reach the correspondent in the Middle East. The correspondent answers as soon as he hears the question, but now his answer has to travel another 50,000 miles to get back to the anchor in the studio. From our perspective (watching things in the studio), the radio signals have introduced a total delay of 100,000 miles ÷ 186,000 miles per second ≈ 0.54 seconds.
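For the curious, that arithmetic is easy to sanity-check with a couple of lines of Python (the distance and speed figures are the same rough ones used above):

```python
# Rough round-trip satellite delay, using the approximate figures above.
speed_of_light_mps = 186_000   # miles per second (rounded)
one_way_miles = 50_000         # studio -> satellite -> correspondent, roughly

delay_seconds = 2 * one_way_miles / speed_of_light_mps
print(f"{delay_seconds:.2f} seconds")  # about half a second
```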

As has been noted, the delay we observe between anchor-question and correspondent-answer often seems to be a lot longer than that. I can only guess that there’s a lot of processing and buffering of the signal taking place on either end that adds to the delay.

I think so, too. We send a digital signal from our TV station to Charter over fiber, and it is then distributed to analog customers (no discernible delay) and digital ones (a second or more of delay), yet the difference in physical distance traveled, if any, can't be more than a few hundred miles, intra-state. And I know of no reason why Charter would deliberately introduce a delay (it's not long enough to allow for a manual signal cut if needed for censorship). So I can only conclude that it's due to digital signal processing and store-and-forward buffering.

Audio-only communications (telephone) will have the same delays for the same reasons, both over satellite and terrestrial links.

Just for completeness, if the speed of sound did (for some reason) enter the equation, it would take sound more than 9 hours to get from Iraq to Texas.

Distance: 11562 km
Speed of sound in air: 1235 kph.
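Plugging those figures in (a quick sanity check, not an exact great-circle distance):

```python
# Time for sound to cover the Iraq-to-Texas distance at the speed quoted above.
distance_km = 11_562
speed_of_sound_kph = 1_235

travel_hours = distance_km / speed_of_sound_kph
print(f"{travel_hours:.1f} hours")  # a bit over 9 hours
```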

One of the problems with the delay inherent in a satellite link is that it is hard to know when to start talking. You very quickly come to realise that you can't simply start talking when you think you hear a gap in the speech from the other end. Ordinary face-to-face human conversation has a finely tuned collision-detection-and-backoff mechanism that allows us to interleave speech very quickly, often in small snippets. With a delay of a large fraction of a second, this breaks badly. So not only does the conversation suffer from the delay itself, but the natural mechanisms that make conversation flow freely break too. One learns to wait long enough after someone has stopped speaking to be really sure that they have stopped, and that you now hold the conversational token and can start talking. If you don't, the conversation turns into a series of juddering half-starts at sentences. Journalists are very conscious of this, and a live interview over a satellite link becomes curiously stilted.

This video is likely the best way to learn how modern video transmissions work without going to college or buying an expensive textbook. It’s a high-level survey of the modern field from the people who actually work in it, making software and defining file formats.

In summary, here’s how audio works (this is a very simplified excerpt of part of the first section of the video): A microphone picks up the sound waves from the environment in the form of this really complex wave, which is all of the ambient sound added together into a single signal. (If you have two or more microphones, you can do stereo audio or surround sound or something more complex.) A piece of hardware called the ADC (analog-to-digital converter) takes ‘snapshots’ (samples) of this complex signal tens of thousands of times a second.* What it records is a binary number of some fixed size that describes how loud the sound is at that instant.

This is the part that answers your question: All those numbers are transmitted on the same digital signal that’s carrying the video, in between individual frames of video. (A frame of video is one picture; think of a reel of film you’d feed into a movie projector and you have a useful-enough lie for our purposes here.) Therefore, there’s no delay between video and audio feeds unless something has screwed up in the hardware or the software. Everything is being transmitted as part of the same digital signal, which, since it’s a radio wave or a light beam or an electrical impulse down a wire, is going to be travelling at close to the speed of light.
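A toy sketch of that interleaving idea in Python, with hypothetical names (this is a model for illustration, not how any real container format actually lays out its bytes):

```python
# Toy model of muxing: audio chunks ride in the same stream as video frames,
# interleaved between them, so the two can never drift apart in transit.
def mux(frames, audio_samples, samples_per_frame):
    """Interleave audio chunks between video frames into one stream (toy model)."""
    stream = []
    for i, frame in enumerate(frames):
        stream.append(("video", frame))
        chunk = audio_samples[i * samples_per_frame:(i + 1) * samples_per_frame]
        stream.append(("audio", chunk))
    return stream

stream = mux(["f0", "f1"], [0, 1, 2, 3, 4, 5], 3)
# stream alternates: frame f0, its audio chunk, frame f1, its audio chunk
```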

*(How many samples/second it takes partly determines how good the result sounds; the other part is how large that binary number it's using is. More samples are better (up to a point), and bigger numbers are better (up to a point). For audio as good as what you'd get off a compact disc, you need 44,100 samples per second and sixteen bits per sample. FM radio is a little worse than that, and a phone call is much worse than FM radio.)
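To make the sampling idea concrete, here's a toy Python sketch of what an ADC does, using CD-quality settings; the pure sine tone is a simplified stand-in for a real microphone signal:

```python
import math

# Simplified sketch of an ADC: measure a waveform at a fixed rate and store
# each measurement as a fixed-size signed integer.
SAMPLE_RATE = 44_100            # samples per second, CD quality
BITS = 16                       # bits per sample
MAX_VAL = 2 ** (BITS - 1) - 1   # largest value a signed 16-bit sample can hold

def sample_tone(freq_hz, duration_s):
    """Return quantized samples of a pure tone (stand-in for a real mic signal)."""
    n = int(SAMPLE_RATE * duration_s)
    return [round(MAX_VAL * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE))
            for t in range(n)]

samples = sample_tone(440, 0.01)   # 10 ms of an A440 tone
print(len(samples))                # 441 samples
```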