I’m sorry but I’m going to need you to cite both of these claims. Most of the advances in astrogation, telemetry and surveying applied in Apollo came directly from research for ballistic missile programs. Apollo was largely application and extension of existing (if nascent) technologies.
I’m having a difficult time locating the link, but I’m sure the radios on Spirit and Opportunity are actually off-the-shelf Motorola parts with all the connectors removed and replaced with soldered wires and potted for ruggedness.
I am sure there are some, but that wasn’t the main advantage of the moon race for technology advancement. The main contribution, so some old-timers in NASA once told me, is that many technologies received much study and development to move the idea from brainstorm to product. Those steps, safety evaluations, manufacturing tests, etc., benefited literally thousands of technologies, many of which weren’t invented for NASA. NASA had the almost unique advantage that their labs and contracting officers could justify all sorts of development. After all, they were doing something no one had ever done, so who is to say that this new idea isn’t necessary? So a new compound or new camera or new radar or new something would be tested, refined and documented. And then maybe shelved. Only to be the basis for a new home computer 10 years later (or something). NASA was exploring, and, oh by the way, they went to the moon. Almost all their explorations and discoveries occurred in labs here on earth.
A few billion dollars here and there. Make sure we have our priorities straight. One Iraq war equals how many Apollos? Use “Apollo” as a standard project size and cost, then for example ask what our 1999 military spending was in Apollos per year.
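To make the "Apollos per year" yardstick concrete, here is a minimal sketch of the arithmetic. The dollar figures are rough illustrative assumptions, not authoritative: Apollo's commonly cited total is on the order of $25 billion in nominal 1960s–70s dollars, and FY1999 U.S. military spending was roughly $280 billion. No inflation adjustment is applied, so the ratio is only indicative.

```python
# Back-of-envelope: military spending measured in "Apollos per year".
# Both figures below are assumed round numbers for illustration:
APOLLO_TOTAL_USD = 25e9     # ~total Apollo program cost, nominal dollars
MILITARY_1999_USD = 280e9   # ~FY1999 U.S. defense budget

# One year of 1999 military spending, expressed in whole Apollo programs:
apollos_per_year = MILITARY_1999_USD / APOLLO_TOTAL_USD
print(f"1999 military spending ~= {apollos_per_year:.1f} Apollo programs per year")
```

With these assumed figures the answer is about eleven Apollos per year; adjusting Apollo's cost to 1999 dollars would shrink that ratio considerably.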
Back in 1966-67 I took some nighttime enrichment classes offered by the LA Unified School District. One of those classes was taught by a real live rocket scientist. He had worked on Mercury and Gemini, and had been hired for Apollo.
What he told us was that when the design criteria for the computer in the Mercury capsule came down, the computer guys said the computer would be about the size of a 2 car garage. :eek:
He said they were told to make it smaller.
He said they came back with one about the size of a bedroom.
They were told to go back and make it smaller.
They came back with one about the size of a breadbox.
Close but no cigar.
Then they came back with one that was smaller than a toaster, and that is what went into the capsule.
Now it could be he was telling us a tall tale, but that is what he told us.
The “guidance computer” on Mercury was about as sophisticated as a bunch of kitchen timers linked together. Mercury had no onboard navigation capability to speak of. (A sextant was intended to be included as part of the gear but was deleted when it was realized that there was no effective way for the pilot to utilize it.) All of the navigation computing for Mercury flights was done on the ground based upon telemetry data. As Mercury had no real orbital maneuvering capability–just a primitive attitude control system and some solid rocket retro motors–it was entirely dependent upon the Redstone and Atlas boosters which delivered it onto its final trajectory to navigate correctly.
Gemini was the first space capsule to carry a digital computer for spacecraft control and navigation. Although some aspects of this computer were novel (especially the memory access system), it couldn’t be described as cutting edge. The IC manufacturing technology and the software development and test philosophy were derived from the experiences on the Autonetics D-37C Missile Guidance Set used on the LGM-30F ‘Minuteman II’ ICBM.
If true, this was no doubt done by cutting features out of the computer; that happens all the time. Do you really think they went through a couple of generations of semiconductor technology between the first proposal and the last?
One of my student’s parents brought in a family friend who showed us his NASA notes. He said he thinks he’s the genesis of the digital readout, because he requested one, so then NASA asked for one to be designed.
You’d have the NASA guy, the guy running the test, and the guy from the instrument manufacturer all there, and they would get different readings from the needle on the gauge. It made precise readings difficult.
A book I’m reading now says that NASA purchased more than a million silicon chips between '62 and '67, mainly to “force the manufacturers to perfect the manufacturing process”. So while the chip may not have been invented for Apollo, the process to mass produce them may have been.
I’m finding that claim (as you report it from the book) to be suspect. Raytheon was contracted to build the Apollo Guidance Computer, developed by the MIT Instrumentation Laboratory (which was not the AP-101/System 4Pi as I stated above; that wasn’t used until the Apollo Applications Program, i.e. Skylab), so one would expect the contractor to have subsidized microcircuit development. “Computers In Spaceflight: The NASA Experience” does indicate that “…by the summer of 1963, 60% of the total U.S. output of microcircuits was being used in Apollo prototype construction. This is one of the few cases in which NASA’s requirements acted as a direct spur to the computer industry,” so that does lend credence to the claim that the development of integrated circuits in a production capacity was spurred by Apollo (although I would still argue that Polaris and Minuteman II contributed more significantly, and the Instrumentation Lab team that worked on the AGC came directly from the Polaris program), but the quantities of ICs used were more likely in the thousands or tens of thousands rather than millions.
The book says that many of the chips bought were never used. It seems to imply that NASA used this “extravagant policy” of purposely buying a large number of chips to spur the production process.
But I agree, it could just be a good story.
Robert Heinlein came up with a bunch of spinoffs in his testimony before the House Committee on Aging. Jerry Pournelle has often told the story of his time doing materials testing for the space program, and what spun off from that (safer motorcycle helmets, for example.)
I can believe this. In 1980, when I started working in this area, manufacturers typically had IC testers to screen chips coming in the door, because they were often crap. This was just before the Japanese pointed out to us that American chip makers produced garbage. The Teletype plant in Little Rock recorded the serial numbers of lots shipped to them by major manufacturers, because the lots would often come back after being rejected for poor quality.
But I wonder about millions. I’m not sure fabs had that kind of capacity back then. In any case, if NASA did this to teach IC makers about quality, it obviously didn’t succeed.
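Taking the thread’s own numbers at face value, a quick sanity check is possible: if NASA really bought a million chips across roughly 1962–1967, and the NASA history quoted above says Apollo consumed about 60% of total U.S. microcircuit output in mid-1963, what annual industry output would that imply? (The 60% figure is for one year and the purchase rate surely wasn’t uniform, so this is only a rough plausibility test.)

```python
# Sanity check on the "million chips" claim, using numbers from this thread:
NASA_CHIPS_TOTAL = 1_000_000  # the book's claimed total purchase
YEARS = 5                     # roughly 1962-1967
NASA_SHARE = 0.60             # Apollo's share of U.S. output, per the 1963 quote

nasa_per_year = NASA_CHIPS_TOTAL / YEARS          # average NASA buy rate
implied_us_output = nasa_per_year / NASA_SHARE    # implied total U.S. output
print(f"Implied total U.S. output: ~{implied_us_output:,.0f} chips/year")
```

That works out to an implied national output of a few hundred thousand chips per year, which is at least in the neighborhood of early-1960s fab capacity, though it still leaves the “millions” figure looking generous.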
Assuming your claims re technological priority are correct, there seems to be a lot of misinformation and WAGs out there in written histories of the space program. Do you just “know” this data as a result of your work in aerospace engineering, or are you referencing some history source of your own for these corrections?