You may think it’s a nitpick, but AFAIK, no signal “bounces” off a satellite (they tried that once, with a satellite called “Echo”, and it wasn’t all that great). Instead, the signal is received at the satellite, verified, decrypted, re-encrypted, and retransmitted, adding even more delay.
I don’t consider it a nitpick, as I’m a layman on the topic. I consider it a useful elaboration.
This is true of low-earth-orbit, packet-based digital communications satellites, and of satellites that provide dedicated IP access from geostationary orbit (e.g. Inmarsat’s BGAN service). However, for things like TV and other standard international services, satellites provide a range of transponders. Each transponder receives a transmission at a set frequency and bandwidth, translates the frequency to a different band, and retransmits the signal back to Earth. There is no processing of the signal beyond the frequency shift. This is the “bent pipe” approach.

Doing this allows a customer of the satellite to use whatever encoding they desire, so long as it fits within the signal-to-noise and bandwidth constraints of the end-to-end transmission. Indeed, a customer can lease a fraction of a transponder if they limit their transmission to a sub-band of the transponder’s bandwidth, or they can use TDMA to provide multiple streams via a single transponder. A customer can also lease more than one transponder and get more bandwidth. This freedom means satellites are not limited to the particular communications technology or use cases of the time they were constructed, but can continue to provide useful services over their entire life. Services can also be leased on any time frame.

Services are limited to big customers - you need traffic that can justify leasing a transponder. Individual people need not apply, which is why services like BGAN have a role.
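To make the bent-pipe idea concrete, here’s a toy sketch in Python. The translation figure is the classic C-band plan (6 GHz up, 4 GHz down, a fixed 2.225 GHz shift); the function and variable names are just mine, not anything from a real satellite stack:

```python
# A toy "bent pipe" transponder: receive a carrier, shift it with a
# fixed local oscillator, amplify, retransmit. The payload encoding is
# never touched, which is why customers can use whatever modulation
# they like within the transponder's bandwidth.

C_BAND_LO_HZ = 2.225e9  # classic C-band translation: 6 GHz up -> 4 GHz down

def bent_pipe(uplink_hz: float, lo_hz: float = C_BAND_LO_HZ) -> float:
    """Return the downlink carrier for a given uplink carrier."""
    return uplink_hz - lo_hz  # frequency shift only: no demodulation

# Two customers sharing one transponder just pick carriers in different
# sub-bands; the satellite neither knows nor cares what they carry.
for uplink_hz in (6.105e9, 6.125e9):
    print(f"{uplink_hz/1e9:.3f} GHz up -> {bent_pipe(uplink_hz)/1e9:.3f} GHz down")
```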
The earlier comments cover the main aspects of delay. Delay to GEO is 120 ms, so a one-way link (up and down) is 240 ms. When you have two people conversing, the minimum round-trip delay is 480 ms. This is a very long time in human speech terms. What happens is that people go into one of two modes of talking. First you get the constant stutter of people just starting to talk, then hearing the other side talk, so they stop and restart, and the conversation becomes a mess. Then they go into a mode where they listen for at least a second before starting to talk, to be absolutely certain that they don’t start talking over the other person. So you get these long pauses in conversation. TV interviews and handovers happen like this.
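A quick back-of-the-envelope check of those numbers (this ignores the extra slant range to ground stations away from the sub-satellite point, so real figures are slightly higher):

```python
# Propagation delays for a geostationary satellite link.

C_KM_S = 299_792      # speed of light in vacuum, km/s
GEO_ALT_KM = 35_786   # geostationary altitude above the equator

hop = GEO_ALT_KM / C_KM_S           # ground -> satellite (or back), one hop
one_way_link = 2 * hop              # up + down: speaker A's words reach B
conversation_rt = 2 * one_way_link  # A's words reach B, B's reply reaches A

print(f"single hop:          {hop * 1000:.0f} ms")              # ~120 ms
print(f"one-way link:        {one_way_link * 1000:.0f} ms")     # ~240 ms
print(f"conversational r/t:  {conversation_rt * 1000:.0f} ms")  # ~480 ms
```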
There are other delays in the link. There will be an echo canceller on the audio link, to avoid the sound echoing back after 480 ms. This can clip a little more off the beginning of sounds, and adds to the perceived delay (although it doesn’t actually add any delay). In video the above comments are very important. Encoding and decoding adds a frame delay at each step. When you are transiting an international link you may well transcode the video a couple of times too. Transponder bandwidth costs serious money, so serious compression of the video may be warranted. Thus by the time the video has left the camera and been received, it may have been subject to quite a number of frame delays. Field reporters with lightweight gear may use services like BGAN, which is about ADSL-equivalent. This is digital end to end, and does not provide real-time guarantees. A reporter calling in a report with a BGAN ground station (which is about the size of a laptop) may be subject to a huge amount of delay, and in this case there are many stages of digital store and forward involved. Much of the link will occur over ordinary Internet infrastructure once it leaves the ground station.
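To illustrate how those stages stack up, here’s a made-up but plausible delay budget. Every number below is an assumption for illustration, not a measurement of any real service:

```python
# Illustrative delay budget for a live video feed over a GEO satellite.
# All stage values are assumptions; the point is that frame delays and
# buffering quickly stack on top of the raw propagation time.

FRAME_MS = 40  # one frame at 25 fps

budget_ms = {
    "encode (2 frames of buffering)":       2 * FRAME_MS,
    "uplink + downlink propagation (GEO)":  240,
    "transcode at international gateway":   2 * FRAME_MS,
    "transcode at receiving network":       2 * FRAME_MS,
    "decode (1 frame)":                     1 * FRAME_MS,
}

for stage, ms in budget_ms.items():
    print(f"{stage:38s} {ms:4d} ms")
print(f"{'total':38s} {sum(budget_ms.values()):4d} ms")  # ~half a second
```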
Worth pointing out that it was fibre optics that made the huge change in use. When communications were limited to wire and radio (microwaves and satellites), bandwidth was very expensive. Satellites were the big bandwidth providers. Microwave links provided inter-city trunking for voice, and undersea cables were not really price-competitive with satellite, but were preferred due to their lower latency. Now you can get a terabit per second down a single fibre. Wavelength division multiplexing allows a single fibre to carry many long-haul links, and bandwidth is almost free. So satellite is relegated to those cases where a fixed connection is not possible, or where you want major broadcast capability. There is enough of this need that satellites are not going away anytime soon.
Delay systems like that eliminate the possibility of audio feedback, the screeching sound you get when a live mike is too close to the speaker.
You’re right though. For a packet-switched network, the choices are: store each packet before forwarding, or risk forwarding incomplete packets. Furthermore, the packet stream has to be buffered at the receiving end, because there’s no guarantee that packets will all arrive in the correct order.
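A minimal sketch of that receive-side buffering, assuming packets carry a sequence number (the names here are mine, not any particular protocol’s):

```python
# Hold out-of-order packets until the next expected sequence number
# arrives, then release any contiguous run. This is the buffering a
# receiver must do when the network doesn't guarantee ordering.

def reorder(packets):
    """Yield payloads in sequence order from (seq, payload) pairs."""
    pending = {}   # seq -> payload, waiting for earlier packets
    next_seq = 0
    for seq, payload in packets:
        pending[seq] = payload
        while next_seq in pending:  # release everything now contiguous
            yield pending.pop(next_seq)
            next_seq += 1

# Packets arriving 1, 0, 3, 2 still come out as a, b, c, d:
print(list(reorder([(1, "b"), (0, "a"), (3, "d"), (2, "c")])))
```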
From what I recall of my Cisco class, individual packets (network layer 3 data units) are tiny - about 1.5 KB. The lower layers get a bit bigger (they add their own packaging on top of that 1.5 KB), but we’re still talking less than 2 KB. Also, networking hardware is specifically designed to process that particular type of data incredibly fast. So I doubt the networking protocols are adding any significant delay.
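For a rough sense of scale, here’s the time it takes just to clock a full 1,500-byte frame onto the wire at a few common link speeds:

```python
# Serialization delay for a maximum-size Ethernet frame.

FRAME_BITS = 1500 * 8

for name, bps in [("10 Mb/s", 10e6), ("1 Gb/s", 1e9), ("10 Gb/s", 10e9)]:
    print(f"{name:8s}: {FRAME_BITS / bps * 1e6:9.1f} µs per frame")
# Even at 10 Mb/s a frame serializes in ~1200 µs (1.2 ms); at backbone
# speeds it is single-digit microseconds, dwarfed by propagation time.
```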
The biggest delays are gonna be from the signal processing and compression that take place before the signal is sent to the transmitter. In live feeds, there’s also the possibility that the TV network itself adds in a delay so they can censor any objectionable words.
Well, yes and no. Signals in actual wire travel at about one third the speed of light, as I understand it. But these days, “wire” transmission almost always means fiber optics, and those go at the speed of light. But that’s the speed of light in the fibers, which is slower than that in air or vacuum.
Optical fiber is composed of two layers of glass. The outer layer (the cladding) is usually pure glass and the inner (the core) is doped glass. The two layers have different indices of refraction, which enables the fiber to act as a waveguide. So the speed of light in fiber is determined by the index of refraction of the doped core, which is usually about 1.5. Thus the speed of light in fiber is roughly two thirds of c.
There’s also a delay in Internet phones, and a slight delay in cell phones, in fact. Try calling someone next to you.
Correct
Incorrect. When the network is working properly, the significant delay (for speech communications) is transit time. Each hop through a router is measured in microseconds or hundreds of nanoseconds. Of course, if traffic is buffered up multiple packets deep (due to heavy traffic, or a traffic burst), times are longer. A well-managed network should have minimal buffering.
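Putting assumed numbers on that comparison (the per-hop figure here is deliberately generous, and a real path’s hop count and length will vary):

```python
# Hop delay vs. transit time on a healthy network. Illustrative values.

PER_HOP_S = 50e-6     # assumed forwarding + light queueing per router
HOPS = 20             # a fairly long Internet path
FIBER_KM_S = 200_000  # ~2/3 c in fibre
PATH_KM = 4_000       # e.g. a coast-to-coast route

hops_total = HOPS * PER_HOP_S
transit = PATH_KM / FIBER_KM_S
print(f"20 router hops: {hops_total * 1000:.1f} ms")  # 1.0 ms
print(f"transit time:   {transit * 1000:.1f} ms")     # 20.0 ms
# Buffering under heavy load is what changes this picture, not hop count.
```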
That would apply to continental broadcasts also, so it’s probably not the major factor. Even if compression is different for transcontinental signals, as Francis Vaughan points out, it’s only a few frames, which is insignificant when discussing spoken communications.
Not unless there are thousands of hops, which there normally shouldn’t be.
No, the biggest delays are due to the speed of light and the distance the signal travels, for transcontinental comms, and especially for satellite comms. However, the delays at the ends are noticeable, as mentioned by AaronX above.
Bingo. I’ll take your word for it that it’s two thirds. Even assuming the signal travels at the speed of light in a vacuum, delays are significant and noticeable in latency-sensitive applications.
For example, musicians dream of being able to jam over the internet, and often post about it. Unfortunately, the round-trip latency is a killer. 10 msec is ideal; it’s like sitting about 5 feet apart. 50 msec is tolerable but not conducive to tight playing; 100 ms is no good at all for anything rhythmic (a tenth of a second is a long time, in music).
Unless the musicians are close (within, say, 3K miles), or are using tricks (e.g., the first musician can’t hear the subsequent ones, or subsequent ones play a measure later than the earlier ones), it just doesn’t work, and the problem isn’t one that technology is likely to solve, ever. Currently, the limit is more like 1K miles, due to 10 ms or more of processing at each end (5 ms each way at each end). I’m also assuming the use of headphones to avoid additional delays.
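The arithmetic behind those limits, under the assumptions just stated (~2/3 c in fibre, ~10 ms of processing per end, and a great-circle path, which real routes never are, so practical limits come out lower):

```python
# Maximum musician separation for a given tolerable round-trip latency.

FIBER_KM_S = 200_000     # ~2/3 c
PROCESSING_RT_S = 0.020  # two ends x 10 ms each (5 ms out + 5 ms back in)

def max_separation_km(tolerable_rt_s: float) -> float:
    prop_rt = max(tolerable_rt_s - PROCESSING_RT_S, 0)  # budget for the wire
    return prop_rt / 2 * FIBER_KM_S                     # halve: one-way distance

for rt_ms in (10, 50, 100):
    print(f"{rt_ms:3d} ms round trip -> {max_separation_km(rt_ms / 1000):5.0f} km apart")
# 10 ms is already impossible once processing eats 20 ms; 50 ms allows
# ~3,000 km on paper, and real-world routing roughly halves that.
```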
Regarding phone calls: sometimes they do have significant delays, though less often these days. A call from NY to Perth probably takes an 18K-mile trip, or more, which is at least a 300 msec round-trip delay - enough to cause the timing difficulties and stuttering that Francis Vaughan mentions above. I’ve certainly experienced that over shorter distances.
Right: it’s noticeable, but usually not long enough to cause the stuttering issues, or the long pauses in broadcasts. That’s mostly the speech encoding/decoding, I believe.
Just to add - the term often used for the relative speed of a signal is velocity factor, which is a fancy way of talking about the fraction of the speed of light in vacuum at which a signal travels in some other medium. Velocity of propagation is another term.
For fibre optics it is simply the reciprocal of the refractive index, and is typically about 0.7. In copper it is all over the place. For a twisted pair of wires it is about 0.68 for UTP (i.e. Cat 5), but it can vary from, say, 0.4 to 0.9. Parallel transmission lines range from about 0.8 (300 ohm TV antenna lead) up to very close to 1.0 for open-wire and carefully constructed transmission lines. For coaxial cable it commonly ranges from about 0.65 to 0.9, depending upon the dielectric used. For radio signals in air it is so close to 1.0 as to not matter. In a vacuum, well, it is 1.0, essentially by definition.
A rough rule of thumb for telecommunications would be 0.7 of the speed of light for most transmission in wire or fibre, and 1.0 for radio. Two thirds is probably a pretty good rough approximation for fibre and wire - maybe a few percent low, but the difference is trivial.
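Turning those velocity factors into delay per distance (the media list just echoes the figures above):

```python
# Propagation delay per 1,000 km for various velocity factors (VF).
# delay = distance / (VF * c)

C_KM_S = 299_792

media = {
    "radio in air / vacuum":    1.00,
    "open-wire line":           0.95,
    "coax (foam dielectric)":   0.85,
    "UTP (Cat 5) or fibre":     0.68,
    "coax (solid dielectric)":  0.65,
}

for name, vf in media.items():
    delay_ms = 1000 / (vf * C_KM_S) * 1000
    print(f"{name:26s} VF={vf:.2f}  {delay_ms:.2f} ms per 1,000 km")
# The 0.7 rule of thumb works out to ~4.8 ms per 1,000 km of path.
```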