If I remember correctly, it takes on the order of minutes to transmit a single image from a space probe, such as the Mars Rovers or the Saturn probes. Contrast this with the routine ability to send 30 frames of video per second here on Earth.
What is the reason for this much lower data rate?
It’s a weak signal!
Mars is, as of this posting, 63,000,000 km from Earth. Saturn is more than 1,200,000,000 km away.
And the power of a spacecraft’s transmitter is limited, as with all spacecraft equipment, by its mass (which constrains the size of the transmitter and the dish) and by the power it can draw from the spacecraft’s power source (solar panels for the Mars missions, radioisotope thermoelectric generators for Cassini).
If you think the baud rate sucks over a noisy telephone line, try downloading from a tiny radio transmitter on the other side of the Solar System.
NASA spacecraft downlink to Earth through the Deep Space Network; you can read more at their website.
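For a sense of scale, the distances quoted above also impose a pure speed-of-light delay before a single bit even arrives. A quick back-of-the-envelope calculation:

```python
# One-way light-travel time for the distances quoted above.
SPEED_OF_LIGHT_KM_S = 299_792.458  # km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """One-way signal travel time in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

print(f"Mars:   {one_way_delay_minutes(63_000_000):.1f} min")    # ~3.5 min
print(f"Saturn: {one_way_delay_minutes(1_200_000_000):.1f} min") # ~67 min
```

That delay doesn’t by itself limit throughput, but it does rule out anything like the chatty back-and-forth protocols we use on Earth.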
Also keep in mind that reliability in the transmission is MUCH more important than speed.
And older probes are made using older radios/encryption algorithms/modems etc…
One thing to keep in mind is that it takes a LONG time to develop a microprocessor that can survive in space. You can’t just take an off-the-shelf Pentium 4 and shove it into a space probe. You have to radiation-harden the processor and do everything else it needs to survive the harsh environment on board a spacecraft.
Also keep in mind that it takes a long time to put a space probe together. A probe launched today probably got its start a good ten years ago, and ten years ago we were using 386 and 486 computers running DOS and Windows 3.1; Windows 95 was brand spanking new. Since 486 computers were state of the art, the stuff that was finally getting radiation hardened and space-ready was processors like the 286. Ten years ago you could start your project assuming that something as powerful as a 486 would be radiation hardened before you got too far in, but you have to have your computer designed and functioning fairly early in the project so that the software guys can start testing their code on it. If you don’t get enough testing done, well, that’s how Mars probes end up getting planted 20 feet underground at 1,400 mph because someone did a software calculation wrong.
Here are a couple of examples to give you a rough idea of how far processing power in spacecraft lags behind what is on your desktop.
Sojourner (which landed on Mars in 1997) had an 8085 processor on it, an old 8-bit processor that is even older than the 8086 used in the IBM XT. Remember the old 8-bit computers of the early 1980s? That’s what we’re talking about for processing power. For comparison, desktop computers in 1997 were running Pentium CPUs.
The Spirit rover, which landed in 2004, had basically a radiation-hardened version of an old PowerPC-family chip, the same sort of processor you’d find in a Macintosh from many years ago.
And, of course, there’s also the power issues and everything else mentioned previously.
The two rovers, IIRC, each have two transmitters: one that talks directly to Earth, and one that relays through a satellite we have in orbit around the Red Planet.
I think the rover-to-orbiter rate is 128 kb/s, while the rate directly to Earth is 9.6 kb/s.
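Those rates make the minutes-per-image figure in the original question easy to believe. A rough calculation, assuming a hypothetical 1 MB image and taking “kb/s” to mean kilobits per second (and ignoring protocol and coding overhead):

```python
# Rough transfer time for a hypothetical 1 MB image at the quoted rates.
# Assumes "kb/s" means kilobits per second; ignores protocol overhead.
def transfer_time_seconds(size_bytes: int, rate_kbps: float) -> float:
    return size_bytes * 8 / (rate_kbps * 1000)

image_bytes = 1_000_000  # hypothetical 1 MB image

print(f"Direct to Earth (9.6 kb/s): {transfer_time_seconds(image_bytes, 9.6) / 60:.1f} min")
print(f"Via orbiter (128 kb/s):     {transfer_time_seconds(image_bytes, 128) / 60:.1f} min")
```

On the direct link that works out to roughly 14 minutes per image; via the orbiter, about a minute.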
The primary constraint is the link budget, which includes transmitter output power, transmit antenna gain, path loss, receive antenna gain, and receiver noise figure. Power is a limited resource on a spacecraft, so you just can’t arbitrarily increase the power of the transmitter. The bit rate supported by a communications link is dependent on the signal-to-noise ratio at the receiver. See the Shannon-Hartley theorem for the details.
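The Shannon-Hartley relation mentioned above, C = B log2(1 + S/N), can be sketched in a few lines. The numbers below are purely illustrative, not an actual deep-space link:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative only: a 1 MHz channel at 0 dB SNR (S/N = 1) tops out
# around 1 Mb/s, while the same channel at -20 dB (S/N = 0.01)
# supports only about 14 kb/s.
print(shannon_capacity_bps(1e6, 1.0))
print(shannon_capacity_bps(1e6, 0.01))
```

The point is that capacity falls off with SNR, and SNR falls off with the square of distance, so the same hardware that sustains megabits nearby supports only kilobits from across the Solar System.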
Processor speed and age of the hardware on the spacecraft are mostly irrelevant.
Channel codes, such as convolutional codes and Reed-Solomon codes, are often used to detect and correct errors induced by noise. This improves the effective bit-error-rate of the channel and the link budget.
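The trade-off these codes make is concrete: you spend some of your scarce bits on redundancy to buy error correction. Taking the classic RS(255, 223) Reed-Solomon code often cited for deep-space links as an example:

```python
# Overhead of the RS(255, 223) Reed-Solomon code: each block of
# 223 data bytes is expanded to 255 coded bytes, and up to
# (n - k) / 2 byte errors per block can be corrected.
n, k = 255, 223
code_rate = k / n            # fraction of the channel carrying real data
correctable = (n - k) // 2   # correctable byte errors per block

print(f"code rate: {code_rate:.3f}, correctable byte errors per block: {correctable}")
```

So about 12.5% of the channel goes to parity, in exchange for fixing up to 16 corrupted bytes in every 255-byte block, which lets the link run reliably at a lower SNR than it otherwise could.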
Depending on how much money you have to throw at the problem, the link budget can also be improved by using a larger antenna at the ground station, and by investing in low-noise preamps.
Cooling is also an issue: most computers on Earth rely heavily on the movement of air (either convective or forced by fans) to cool their chips, and in a vacuum that’s not an option.
Another consideration is that most things on a spacecraft are only going to be as good as they need to be. In general, making something better (whatever “better” might mean for the thing in question) will make it more expensive. When we send a probe to Mars, we don’t expect to see anything moving quickly, so we don’t need 30 fps video. Since we don’t need 30 fps video, there’s no point in spending the money to make the system good enough for that. If we were launching some sort of mission where that kind of video were considered valuable, we could do it, but it’d cost more. For instance, for the Apollo lunar landings, the PR value of TV footage was considered high enough that they did spend the money needed for real-time video.
Compression, not encryption. Unless NASA really is hiding things from us.
:smack: Though… I’m not one to spread any rumors… who says they aren’t hiding things?
I work in IT and have had recent problems with compressed drives. I’m probably mentally blocking the term out!