I was once told by a FOAF that the Hubble telescope (or it could have been the Mir space station, but I doubt it) was recently upgraded to a 486 processor. This seems to me to be a load of bunk, but you never know, so I’m going to throw it out to the masses. Is there any possible way that the Hubble telescope has a slower processor than my long-since-outdated P100???
“If you can’t answer a man’s argument, all is not lost; you can still call him vile names.” - Elbert Hubbard.
I think that is true. I believe that it started with a 286. Cost issues, you know. The thing already cost $1B, and 486s had just come on the market when it was being finished.
“He loves people, all of them, washed and unwashed; he loves his wretched pack of sponging relatives. He shoots people, arrests people, but he doesn’t like it.”
I don’t have any specific info on the Hubble Space Telescope, but having worked on several government programs, I can assure you that most of them do not use cutting-edge technology.
The main reason is that they have a ponderous acquisition process in which the proposed design is pretty much set in concrete very early. The design may be cutting-edge when they start, but not by the time they’re done, typically many years later. Then, once entrenched and surrounded by a mountain of documentation and maintenance procedures, changes are very, very unlikely. Especially a change as momentous (to the government anyway) as a change in the basic processing chip.
For the Hubble this was compounded by the Challenger accident, which grounded it; it sat for a couple of years just waiting for a ride. So it is very likely that the Hubble uses “ancient” technology.
The DOD, at least, is aware of this problem. They find themselves paying for systems that are obsolete by the time they are fielded. Their verbal response is to encourage the use of COTS (commercial off-the-shelf) components rather than specially designed, military standard parts. Their actual response is to insist on the same requirements that led to military standard parts in the first place. The military is an incredibly conservative organization. (“It was good enough for Grant at Gettysburg, it should be good enough for you whippersnappers today!”)
he sleeps on that pile/of newspapers/in the corner/and when he
takes off his/shoes you cannot/smell his breath
“king nicky”, archyology
Don Marquis
For one thing, it takes a long time to qualify components for the harsh environment up there. The temperature fluctuates a lot, but it’s basically cold up there.
And, as the NASA spokescritter said in the article “it’s not like we have to run Windows or surf the Internet or anything like that. All we have to do is move a telescope around.”
Along the same lines, one major design concern for space-rated stuff is radiation hardening. The best rad-hard chips are typically quite a few generations behind the state of the art. In fact, it’s only fairly recently that Intel x86 CPUs have been much of a choice for space stuff. Other CPU architectures have been used quite a bit in the past.
I worked at Cape Canaveral AFS from 1981-87. Now, even though '87 was a long time ago technologically, it was still much more advanced than the 70s (IIRC, the IBM PC came out in 1981).
But in my visits to the various blockhouses, I was shocked at the age of the electronic equipment still in use. My assumption was that the stuff installed in the 60s still worked, so they kept it.
And on the long acquisition cycle topic: we in law enforcement there were still using the S&W .38 revolvers, but the military was rapidly changing to the 9mm; the USAF was just starting the transition then. Even though you might think of Cape Canaveral as being important, it was far, far down the USAF priority list, and consequently at the bottom of the supply chain. Our joke was that, by the time we got 9mms, the rest of the world would be using lasers.
Reliability is the most important consideration up there, not speed. Computers control everything, and if they do something seriously wrong, it can make a satellite useless in a matter of seconds. The environment is also harsh - huge amounts of radiation. Radiation can cause noise in the computer and flip one bit, corrupting the program or data. Radiation hardening, i.e. making chips more resistant to radiation, is one solution, but not the whole solution. They may have mechanisms that detect these hits and stop the computer before it can do damage. On really critical systems such as manned spacecraft, they have multiple computers and make sure their results agree.
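Just to illustrate the “multiple computers” idea (a toy sketch in C, not anything NASA actually flies - the unit names and values are invented): three redundant units each produce a result, and a voter takes the bitwise majority, so a radiation-flipped bit in any one unit gets outvoted.

#include <stdio.h>
#include <stdint.h>

/* Bitwise majority vote over three redundant results: each output
   bit is whatever at least two of the three units agree on, so a
   single-event upset that flips a bit in one unit gets outvoted. */
static uint32_t vote(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}

int main(void)
{
    uint32_t unit1 = 0x1234ABCD;               /* healthy unit */
    uint32_t unit2 = 0x1234ABCD;               /* healthy unit */
    uint32_t unit3 = 0x1234ABCD ^ 0x00000400;  /* one bit flipped by radiation */

    uint32_t result = vote(unit1, unit2, unit3);
    printf("voted result: 0x%08X\n", (unsigned)result);  /* prints 0x1234ABCD */

    /* A disagreeing unit also tells you which computer to reset or flag. */
    if (unit3 != result)
        printf("unit 3 disagrees - schedule a reset and check it\n");
    return 0;
}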
I’m occasionally involved in the operation of one astronomical satellite (the Yohkoh solar observation satellite). Every few days it gets a radiation ‘hit’ and sends an error signal; we then have to send a command to reset whichever computer it was, then make sure it’s working correctly instead of doing something stupid like leaving the shutter open without a filter. (The camera can burn up, making the whole satellite useless.)
This is probably just an urban legend, but I heard that a designer working on a control circuit for a jet fighter (F-15 maybe?) used the 8086 instead of a later processor because all the bugs in the CPU were well documented after years of use.
Okay, it would seem to me that the rumour is in fact true; they -upgraded- to a 486. While I see the points about not surfing the web or running Windows, as well as the hassle of changing the design mid-stream, it’s still a tough one to get through my skull (my brain runs on a P90, those ones that melted all the time? yeah, those ones). It just seems wrong. Oh well, that’s NASA for you. While we’re on the subject of NASA, did they -really- land on the moon in the ’60s or was it a soundstage in Hollywood? (k, I’m joking, I don’t wanna go there)
“If you can’t answer a man’s argument, all is not lost; you can still call him vile names.” - Elbert Hubbard.
Rumor has it that the Shuttle uses the same central computer as the F-15. I am fairly sure it uses some of the same components and protocols to communicate on the avionics bus.
A point in every direction is like no point at all
But if you think about it, a 486 is a pretty fast processor - faster than you normally need for an embedded application. Most of the computers on a satellite do routine jobs, like operating and monitoring various devices and instruments according to their programming and commands from the ground. It’s similar to what the microprocessor in your car does, though satellite computers are more flexible - you can upload a new program, for example. The only computation-intensive part of a scientific satellite is the data acquisition (being able to handle the fast stream of data) and compression, which I suspect is what the 486 does on the Hubble.
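For flavor, here is a toy C sketch of the kind of lightweight compression step an onboard CPU might handle (pure guesswork as to what Hubble actually runs): successive sensor samples usually change slowly, so storing differences instead of raw values leaves mostly small numbers that a later stage can pack tightly.

#include <stdio.h>
#include <stdint.h>

/* Toy delta encoder: store differences between successive samples
   instead of raw values. Smooth signals yield mostly small numbers,
   which a later stage can pack into far fewer bits. */
static void delta_encode(const int16_t *in, int16_t *out, int n)
{
    int16_t prev = 0;
    for (int i = 0; i < n; i++) {
        out[i] = (int16_t)(in[i] - prev);
        prev = in[i];
    }
}

int main(void)
{
    int16_t samples[] = { 1000, 1002, 1003, 1003, 1001, 998 };
    int16_t deltas[6];
    delta_encode(samples, deltas, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", deltas[i]);   /* prints: 1000 2 1 0 -2 -3 */
    printf("\n");
    return 0;
}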
Also, there isn’t just one computer which is hooked up directly to every single sensor and instrument. Each component has a driver circuit, often with its own little processor. Otherwise you couldn’t build and test each component separately.
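A rough sketch of that modularity in C (the subsystem names and opcodes are invented for the example): the central computer just routes short commands onto a bus, and each component's own controller interprets them.

#include <stdio.h>
#include <stdint.h>

/* Invented example: the central computer only routes short commands;
   each subsystem's own little processor interprets them. That
   separation is what lets each component be built and tested alone. */
enum subsystem { GYROS = 1, CAMERA = 2, ANTENNA = 3 };

struct bus_msg {
    uint8_t dest;     /* which subsystem controller gets this */
    uint8_t command;  /* opcode meaningful only to that controller */
    int16_t arg;
};

static void dispatch(const struct bus_msg *m)
{
    switch (m->dest) {
    case GYROS:   printf("gyro controller: cmd %d arg %d\n", m->command, m->arg); break;
    case CAMERA:  printf("camera controller: cmd %d arg %d\n", m->command, m->arg); break;
    case ANTENNA: printf("antenna controller: cmd %d arg %d\n", m->command, m->arg); break;
    default:      printf("unknown subsystem %d - ignored\n", m->dest);
    }
}

int main(void)
{
    struct bus_msg slew = { GYROS, 7, 125 };  /* say, "slew 1.25 degrees" */
    dispatch(&slew);
    return 0;
}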
As I (dimly) remember from the newspaper article, the 486 in question managed the gyroscopes/gimbals/whatever that maneuver the Hubble so that it is aimed at particular stars.
As the NASA spokescritter mentioned, this doesn’t take a whole lot of processor firepower - and in fact the earlier (286?) processor had been doing the job OK beforehand.
Still as a techno-geek I can’t help thinking “a 486? Come On!!!”
Why does it seem wrong? It’s really the only practical choice. It might seem like, “hey, it’s on a spacecraft, it should be the hottest CPU available”, but there are other criteria that have to be considered, and as someone else mentioned, there usually isn’t much need for massive amounts of CPU power anyway. Heavy crunching can be done on the ground. Better to increase reliability instead, since spacecraft are rather inaccessible if they break. Fast CPUs tend to have small feature sizes, which makes the chip more susceptible to problems in harsh radiation environments.
A while back I worked for a company that made high-performance workstations - at the time, they were many times faster than any PC available. We actually saw an increase in hardware errors when our systems were used in mountainous locations, compared to sea level. If that can be measured at just a few thousand feet ASL, imagine how much worse it will be for something even in LEO, 250 or so miles up, let alone the geosync sats.
No one has mentioned an obvious point: power and heat. No reason to use more power if you don’t need it, and waste heat is a much bigger problem in space than it is on Earth. You can’t use a fan; you’ve got to radiate all of the heat away without screwing up anything else. Not a trivial problem. It’s typical of today’s mindset to assume that faster is better - it must be, otherwise I just wasted a bunch of money upgrading to the latest and greatest.
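To put a rough number on that (back-of-envelope, assuming an emissivity of 0.9 and a radiator sitting at about room temperature, 300 K): the Stefan-Boltzmann law gives a radiated flux of about 0.9 x 5.67e-8 x (300)^4, or roughly 413 W per square meter. So dumping even 10 W of waste heat needs something like 240 square centimeters of radiator surface pointed at cold space, and that’s before you account for any sunlight falling on it. Every extra watt of CPU is extra radiator.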
I’m wondering why they upgraded at all, maybe they couldn’t get the 286s anymore.
If you want to talk old electronics, take a look at the ICBM program. They’re still using ’50s technology because it’s so reliable.
You’re not kidding. I work in a building that my company sublets from a company that once provided all the computers for NASA. The company was once very large, but now subsists on repairing and selling spare parts for these old computers that are still in use by NASA and a few other government agencies. These computers are refrigerator-sized, and the cabling inside is insulated by what seems to be string or twine. Hardly any of the parts are recognizable to me at all. Basically, these things are ancient and are still in use by NASA and others.