I was asked a trivia question today about one of the first transatlantic telegraph messages. The message was sent from Queen Victoria to former president James Buchanan and it was 98 words long. The question was how long did it take to transmit this message?
My answer (figuring that this message was being sent via electrical signals, and I'd always been under the impression that electrical signals travel at the speed of light) was that it was limited by how fast the Morse code operator was (since 2,300 miles is barely a fraction of a second at the speed of light) and that it took maybe 10 minutes.
Turns out that the answer was about 17 hours. Other reading online indicated that each character took 2 minutes to send.
This may be a question for Uncle Cecil, but I thought I’d hit the teeming millions first. Anybody have an answer as to why it took so long for this message to cross the Atlantic?
Due to signal loss and reflection, it can take some seconds for the voltage to stabilize on a poorly made transmission line. This severely restricts the bandwidth.
Materials (insulation) and knowledge have improved greatly since those days, but the same issues crop up when it comes to communicating over high-loss paths… submarines and deep-space probes spring to mind.
It had nothing to do with the proficiency of the telegraphers:
Most ham radio Morse conversations happen at around 18 WPM, but there is still a lot of traffic above 30 WPM. Sending as slowly as the OP indicates would be really difficult for an actual telegrapher.
Was it only a one-way communication? I would think that at the experimental stage, each letter would be sent back westward as confirmation. Many transmission problems (static, whatever) would give garbled info, and it might have taken several tries to get each character across.
Even today, data transfer protocols often (always?) include some sort of error-checking, and retransmission when the error-check so indicates.
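For illustration, here's a toy sketch of that echo-back-and-retry idea in Python. It's not any real telegraph procedure; the channel model and error rate are made up for the example:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def noisy_channel(ch, error_rate=0.3):
    """Corrupt a character with some probability -- a stand-in for a lossy cable."""
    if random.random() < error_rate:
        return random.choice(ALPHABET)
    return ch

def send_with_echo(message, error_rate=0.3):
    """Send each character, have the far end echo it back, retry on mismatch."""
    transmissions = 0
    for ch in message:
        while True:
            transmissions += 1
            received = noisy_channel(ch, error_rate)      # eastbound trip
            echoed = noisy_channel(received, error_rate)  # westbound confirmation
            if echoed == ch:   # sender accepts once its own character comes back
                break          # (a double corruption can still slip through)
    return transmissions

msg = "TEST MESSAGE"
print(send_with_echo(msg), "transmissions for", len(msg), "characters")
```

With a 30% per-trip error rate, each character typically needs a couple of attempts, and every attempt costs a full round trip.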
Almost. In insulated conductors, the signal actually propagates at the speed of light through the insulating medium, which for typical plastic insulations is about 0.5-0.8 c. There's a formula to calculate the exact velocity factor based on the insulation's dielectric constant, which I don't recall off the top of my head.
Uninsulated wires in free air propagate signals at ~.99 c.
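For what it's worth, the relation being alluded to is the standard one for non-magnetic insulation: velocity factor = 1/√ε_r, where ε_r is the relative dielectric constant. A quick sketch with rough textbook permittivities plugged in (the gutta-percha value especially is a ballpark guess, not a measurement of the actual cable):

```python
import math

C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s

def velocity_factor(dielectric_constant):
    """Standard relation for non-magnetic insulation: VF = 1/sqrt(eps_r)."""
    return 1.0 / math.sqrt(dielectric_constant)

# Illustrative relative permittivities (rough textbook values)
for name, eps_r in [("air", 1.0006), ("polyethylene", 2.25), ("gutta-percha (~)", 3.5)]:
    vf = velocity_factor(eps_r)
    print(f"{name:18s} eps_r={eps_r:6.2f}  VF={vf:.2f}  v={vf * C_VACUUM_KM_S:,.0f} km/s")
```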
Also, as noted previously, signals through very long cable runs have to deal with significant stray capacitance and inductance, which tend to limit rise and fall times and in turn restrict bandwidth.
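To put a rough number on that: when series resistance and shunt capacitance both grow with length, the cable's RC time constant grows as the length squared (Thomson's "law of squares," worked out for exactly this submarine-cable problem). The per-km values below are assumptions for the sketch, not measurements of the 1858 cable:

```python
# Rough illustration of why RC limits a long cable's signaling rate.
R_PER_KM = 3.0      # ohms per km (assumed)
C_PER_KM = 0.2e-6   # farads per km (assumed)

for length_km in (10, 100, 1000, 3700):  # ~3700 km is roughly the Atlantic run
    r_total = R_PER_KM * length_km
    c_total = C_PER_KM * length_km
    tau = r_total * c_total  # seconds; note this grows as length**2
    print(f"{length_km:5d} km: tau ~ {tau:8.3f} s")
```

At continental lengths the settling time lands in the multiple-seconds range, which is the regime the earlier posts describe.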
But that is for today's high-quality stuff. We're talking about 150 years ago! I have no idea how long it actually took, but a minute or two per character would not surprise me a bit. I'm still amazed they could do it at all.
Actually… I just did the math on my own here: we're talking about 19 miles per second if we only take into account the signal crossing the ocean (2,300 miles at about two minutes per character). That's .0001 c, if I've done my math right. Can a low-quality cable over that distance really have that much of an impact?
It has to be a combination of that and of verifying/re-sending dropped signals…
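Checking that arithmetic (a quick sketch; the 2,300-mile figure and two-minutes-per-character rate are just the numbers from earlier in the thread):

```python
DISTANCE_MILES = 2300
SECONDS_PER_CHAR = 120        # ~2 minutes per character, per the OP's reading
C_MILES_PER_S = 186_282       # speed of light in vacuum, miles per second

effective_speed = DISTANCE_MILES / SECONDS_PER_CHAR
print(f"{effective_speed:.1f} mi/s = {effective_speed / C_MILES_PER_S:.1e} c")
# -> 19.2 mi/s = 1.0e-04 c
```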
http://en.wikipedia.org/wiki/Speed_of_electricity: “Propagation speed is affected by insulation, such that in an unshielded copper conductor it is about 96% of the speed of light, while in a typical coaxial cable it is about 66% of the speed of light.”
Granted that’s still ridiculously fast, but I’m curious how the signal made it all the way across the Atlantic. Were there any boosters of any sort along the way?
I am pretty illiterate regarding this but I think you are misunderstanding the posts Q.E.D. and Kevbo submitted in much the same way I was inclined to.
Q.E.D. and Kevbo, am I correct in understanding that the actual propagation of voltage information is close enough to the speed of light that this in itself isn't the cause of the huge delays, but rather that the voltage information is so messed up by the lousy cable that it isn't identifiable as the same thing that was sent? In other words, you have to keep sending a specific voltage for a long period of time before the other side can be reasonably sure their readings represent what was sent?
19 mps is about 2 minutes for a one-way trip. If they verified each character, that's 4 minutes. Add in a good number of retries, and let's say an average of 10 minutes per character? That's 6 characters per hour.
The OP says that the message was 98 WORDS. At 5 letters per word, without blanks, that’s 490 characters. Over 17 hours, that’s 29 characters per hour.
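Running those numbers as a sanity check:

```python
words = 98
chars = words * 5                  # rough: 5 letters per word, ignoring spaces
hours = 17

print(chars, "characters")                            # 490
print(round(chars / hours), "characters/hour")        # 29
print(f"{hours * 60 / chars:.1f} minutes/character")  # 2.1
```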
Yeah, the math sounds right to me. Or even impressively fast.
Yes, that’s part of it exactly. Most of the transmitted signal is lost, and noise from thunderstorms and even solar driven ionospheric activity is added. The signal is so weak, and the noise so high, that you have to average the received voltage for long periods so that the (random) noise averages to near zero.
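That averaging trick is easy to demonstrate: bury a small constant voltage in much larger zero-mean noise, and the mean of enough samples recovers it, since random noise averages out roughly as 1/√N. A toy simulation (all the signal and noise levels here are invented for the demo):

```python
import random

SIGNAL = 1.0       # the tiny voltage we're trying to detect (arbitrary units)
NOISE_STD = 50.0   # noise far larger than the signal (assumed for the demo)

def received_sample():
    """One noisy reading: true level plus zero-mean Gaussian noise."""
    return SIGNAL + random.gauss(0.0, NOISE_STD)

for n in (1, 100, 10_000, 1_000_000):
    avg = sum(received_sample() for _ in range(n)) / n
    print(f"n={n:>9,}: average = {avg:8.3f}")
# With more samples the average converges toward SIGNAL = 1.0,
# which is why slow signaling (long averaging per symbol) helps.
```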
The other part is that the physics of long transmission lines were not well understood in those days, and even today it is not lay knowledge. Unless such a line is carefully made (for consistency) and properly terminated, any signal will reflect back and forth several times: the electromagnetic version of an echo. So you have to let these echoes stabilize for each symbol.
It is no coincidence that the math describing the behaviour of transmission lines is known as the Telegrapher's Equations.
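For reference, the standard textbook form of those equations, with R, L, C, and G as the series resistance, series inductance, shunt capacitance, and shunt conductance per unit length:

```latex
\frac{\partial V}{\partial x} = -R\,I - L\,\frac{\partial I}{\partial t},
\qquad
\frac{\partial I}{\partial x} = -G\,V - C\,\frac{\partial V}{\partial t}
```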
From what I’ve been able to gather, they did this with just a straight cable - at the time (1858) they had no way of setting up relays or boosters on the cable.
The cable itself didn't last very long. If I recall correctly, about a month or so later, someone ruined it by applying too much voltage.
Tom Standage’s excellent book The Victorian Internet covers the history of the first Transatlantic Cable pretty well.
The signal quality down the first cable was atrocious. The reason the first message took 17 hours to transmit was that, even using the most sensitive receiving equipment, it was difficult to discern the Morse being sent, and hence they spent a lot of the time going back and forth correcting errors.
As ninja batmike lincoln says, the first cable only lasted a few weeks before breaking down completely. Fortunately, by the time they came to replace the cable (eight years later) the technology of cable-laying at sea was much better understood, and the second cable was able to support message transmission at eight words per minute.
It’s not so much that c (the constant speed of light) is different, but that the propagation is slower. It takes a photon hundreds of thousands of years to get out of the sun because it keeps getting absorbed and re-emitted by atoms until it finally, by chance essentially, makes it to the surface.
It doesn't, really – the reason for the comparatively long time it took lies more in the low chance of getting anything across at all than in the speed of the signal in the conducting medium. As Q.E.D. already said, the speed of light in a medium depends on the medium's dielectric constant, which describes its permittivity ε, i.e. the way electric fields propagate within it, and also on its permeability μ, the corresponding quantity with respect to magnetic fields. Basically, c = 1/√(εμ), and real-world speeds may be about half of the vacuum light speed and up, as has already been quoted.
The story of the first Transatlantic telegraph cable is a fantastic one. That such a project was attempted and succeeded in 1858 is amazing. The cable was so large and heavy that two ships were needed to carry it. It broke repeatedly, so they had to grapple the ocean floor and fish it out multiple times. Sadly, the name Cyrus Field is almost unknown today.