Modern cell phone vs. Lunar Lander computer

I know the phone is more powerful, but by how much? 10x or 100x?

The memory capacity alone is way more now.

Computer Weekly.

Just for fun
TV Tropes

Wikipedia says the Apollo Guidance Computer ran off a 2.048 MHz oscillator, divided by two to make a 1.024 MHz four-phase clock for internal operations. I’m not sure what the 0.043 MHz number in the article quoted above means.

I recall it being said that the Commodore 64 was slightly more powerful than the one on the LM. (No cite, sorry.)

So the phone processor runs about 1,000 times faster, if I did the math right.

That doesn’t sound right.

There’s also the difference between speed and power. The two are correlated, but a 1000x faster clock doesn’t make a 1000x more powerful computer, especially when memory, co-processors, and the CPUs themselves are considered. Even the basic size of the registers was different. We’re used to things based around 8-bit processing units (or, by extension, 16-bit or 64-bit or whatever) but the Apollo computer was 6-bit, IIRC. I’m not sure a truly meaningful comparison is actually possible, since the hardware was so different.

The 1 MHz vs 0.043 MHz difference could be an approximation taking actual throughput into account. Clock speed is correlated, but the number of actual instructions executed per second will not be the same as the number of clock cycles per second. For example, the computer might be capable of 100,000 additions per second but only 50,000 multiplications, even though the basic clock cycle is the same. A more useful measure would perhaps be MIPS, which itself can be highly variable depending on the application and the programming involved.

Benchmarking computer speed is hard even now. There’s no single number to compare. For example, what is often reported for supercomputers these days is performance using LINPACK (or similar) test where a known test with known parameters is performed. Perhaps a particular supercomputer will be great at that particular application but awful at another benchmark while a different supercomputer will be mediocre at LINPACK but great elsewhere.

Compound that by comparison to a highly optimized computer from the late 60s/early 70s, and the result will be fraught with too many approximations to be meaningful.

Just how much computational power do spacecraft need? If it’s just collecting and transmitting data, then not much at all. All the data analysis is done on Earth-bound computers. The rovers on Mars aren’t autonomous: they take pictures of everything around them, send the data to Earth, and commands are sent back, like “move three inches to the left” … take more pictures … get new commands … etc etc etc.

Space is a harsh environment … solar wind, gamma rays … electronics that work fine inside the Earth’s protective magnetic field don’t last very long in interplanetary space. I understand that cell phones work in LEO, but do they work on the Moon?

Maybe that’s why the OP is about the manned and piloted Apollo Lunar Lander, which WAS autonomous of Earth.

It is extremely misleading to equate clock speed with processing power. Within the scope of the cited computers you will get answers that are wildly inaccurate.

The LM’s guidance computer was, by modern standards, almost incomprehensibly slow, and looking at the clock rate doesn’t even begin to convey the issue. The computer was a simple accumulator-architecture machine. It had a single register that “accumulated” the computation results. Another register was loaded with memory addresses to cause data to be fetched from memory. A memory cycle took 12 machine cycles, so as a good first approximation it took 12 cycles to execute many instructions.

Any modern computer design has many general-purpose registers and, rather than taking many cycles to execute an instruction, can often manage two or more instructions per cycle. And the more modern machines can do far more complex things in those single-cycle instructions (let alone the ridiculously complex things that can occur when many cycles are used). The modern machine’s cache hierarchy also keeps the processor core fed with data: data can come and go to cache memory in far fewer than 12 clock cycles (more like 3 for level-1 cache).
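As a back-of-the-envelope illustration of why clock rate and instruction rate diverge, here is the arithmetic using the ~1.024 MHz clock and the 12-cycle memory cycle quoted above (the one-versus-two memory cycles per instruction split is my own illustrative assumption, not a real AGC instruction mix):

```python
# Rough effective instruction rate for a machine whose memory cycle takes
# 12 clock periods. Figures are from the discussion above; the instruction
# mix is an illustrative assumption.

CLOCK_HZ = 1.024e6            # ~1 MHz internal clock
CYCLES_PER_MEMORY_CYCLE = 12  # one memory cycle = 12 machine cycles

def instructions_per_second(memory_cycles_per_instruction: int) -> float:
    return CLOCK_HZ / (CYCLES_PER_MEMORY_CYCLE * memory_cycles_per_instruction)

print(instructions_per_second(1))  # ~85,000 IPS if an instruction fits in one memory cycle
print(instructions_per_second(2))  # ~42,700 IPS if it needs two
```

Read that way, the mysterious 0.043 MHz figure in the OP’s article looks like it could be an effective instruction rate (roughly 43,000 instructions per second) rather than a clock frequency.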

The AGC was only a 14-bit machine. For any task that needed more than 14-bit precision the arithmetic would take much longer (a rule of thumb is 8 times as long for a doubling in word length). And there was no multiply or divide instruction, let alone hardware assistance for multiply.
Nor was there any floating point in any form.
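To make concrete what the lack of a hardware multiplier costs, here is a minimal shift-and-add multiply sketch, the classic way a machine with only add and shift hardware does multiplication (illustrative Python, not actual AGC code; the 14-bit width is simply the figure quoted above):

```python
# Shift-and-add multiplication: each loop iteration costs at least one test,
# one shift and possibly one add, so a 14-bit multiply takes dozens of
# instructions instead of one. Illustrative only, not AGC code.

WORD_BITS = 14  # word size quoted in the post above

def soft_multiply(a: int, b: int) -> int:
    """Multiply two unsigned WORD_BITS-wide integers using only adds and shifts."""
    result = 0
    for bit in range(WORD_BITS):
        if b & (1 << bit):         # if this bit of the multiplier is set...
            result += a << bit     # ...add the correspondingly shifted multiplicand
    return result

assert soft_multiply(123, 45) == 123 * 45
```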

The Cray X-MP is a machine that goes the other direction in terms of compute per clock cycle. For a start, the MP means multi-processor. But even a single-processor Cray could do an astounding amount of work in a single clock cycle. Being vector machines they could deliver a 64-bit floating point result every cycle, and keep delivering at that rate for sustained performance that was eye-watering. It wasn’t just the processor design that made them fast, but insane levels of parallelism in the memory controllers that allowed complex gather/scatter operations across memory to keep the processors fed and delivering results. The idea that an 800 MHz Pentium III could match a Cray X-MP is fanciful. Only on some contrived, very trivial benchmarks could the Pentium have competed. On real sustained number crunching, the Cray ripped the heads off the tasks.

On the other hand, this would make for a rather cumbersome smartphone.

Here’s how a $35 Raspberry Pi compares to a Cray X-MP:

To quote Seymour Cray himself: “Anyone can build a fast CPU. The trick is to build a fast system.”

He would be wrong too*. Whetstone isn’t a particularly useful benchmark for a supercomputer; it counts as one of the trivial ones. The standard for the Top 500 supercomputers has been Linpack, although there have been arguments for a more expanded benchmark for years.

Linpack is a serious numerical benchmark, solving large dense systems of linear equations. It represents a real workload, using one of the most important numerical libraries. Whetstone represents nothing in terms of real workloads, and was rather famously even the subject of benchmark gaming.

*I actually know the author of the piece.
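For a feel of what a Linpack-style number actually measures, here is a toy sketch: my own rough timing harness around NumPy’s dense solver, nothing like the tuned HPL code behind real submissions, and the MFLOPS it reports depend entirely on your machine and BLAS:

```python
# Toy Linpack-style measurement: time the solution of a dense n x n system
# and convert to MFLOPS using the standard (2/3)n^3 + 2n^2 operation count.
# A rough illustration, not the official benchmark.
import time
import numpy as np

def linpack_mflops(n: int) -> float:
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    x = np.linalg.solve(A, b)        # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start
    assert np.allclose(A @ x, b)     # sanity check on the solution
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e6

print(f"n=1000: {linpack_mflops(1000):.0f} MFLOPS")
```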

The Raspberry Pi 2 n=100 Linpack DP result is 169 MFLOPS, the Cray X-MP’s is 21 MFLOPS,
from here and here.

The Cray does drastically better with n=1000, but I couldn’t find a comparable Pi result.

There were two Apollo Guidance Computers (AGC), one on the Command Module (CM) and a basically identical one on the Lunar Module (LM). The clock was 1 MHz and they had 36,864 15-bit words (about 69,120 bytes) of ROM and 2,048 words (about 3,840 bytes) of RAM. It was designed and programmed by MIT (mostly in assembly language) and manufactured by Raytheon.

Factoring in a rough estimate of clock cycles per average instruction, the AGC probably did about 40,000 to 80,000 instructions per second. It had no native floating point hardware or instructions and worked only with integers. It operated on floating point datatypes essentially via software subroutines, which made those operations even slower.

On the LM there was an additional Abort Guidance System (AGS) computer which was even more rudimentary than the AGC and only intended for abort cases where the AGC and/or Primary Guidance, Navigation and Control System (PGNCS) failed.

The AGS computer had only 4k words total. It was manufactured and programmed by a separate team at TRW to avoid any possible failure commonality. The AGS computer was slower than the AGC.

On the LM the systems are usually referred to as PGNCS and AGS but from a computer standpoint, the AGC was separate from PGNCS which was the inertial platform and related components. The AGS was a combination inertial platform and integrated computer.

From the standpoint of MIPS, megahertz and megabytes, the AGC was incredibly slow. The difference in CPU performance to a modern cell phone is difficult to quantify. Nowadays we often measure performance in FLOPS, but the AGC had no hardware floating point, which reduced its performance by at least 10x for that datatype.

An iPhone 6 will do about 1.5 Linpack gigaFLOPS, so a very crude comparison would be:

AGC: 80,000 integer instructions/sec = about 8,000 FLOPS
1.5 billion FLOPS / 8,000 FLOPS = 187,500x performance difference

However the AGC was capable of real-time fly-by-wire control of the LM, including reading radar data, reading input from the hand controller and the DSKY keyboard, driving the digital displays, and controlling the main engine and all the thrusters.

No AGC computer ever crashed due to hardware or software on any Apollo mission, although there were “program alarms” or what today we’d call software exceptions on Apollo 11.

The amazing thing is NOT how primitive the AGC was but how it got so much done and never crashed, when today’s “advanced” computers (including cell phones) sometimes crash and have laggy response.

From a gigaFLOPS standpoint an iPhone 6 is about 15 times faster than a Cray-1 supercomputer, which consumed 115,000 watts. In fact the first supercomputer which was clearly faster than an iPhone 6 was probably a Cray X-MP or Cray-2.

An 18-core Intel Xeon E5-2687W v3 can do nearly 1 Linpack teraflop, which is 12,000 times faster than a Cray-1. However the fastest supercomputer today is 93,000 times faster than that Xeon CPU, so despite the computational progress in desktop or mobile device computing, supercomputers remain proportionately faster.

This underlines the difficulty (and indeed futility) of trying such comparisons. The Cray was designed for real-life large computational tasks. It actually performs poorly on small tasks (as the vector units simply can’t get going, and the setup costs dominate) but once it gets into its stride it delivers jaw-dropping performance. It is hard to convey how big the gap was between a Cray and most other computers available at the time, so long as you were talking big numerical problems.

Linpack does have a few failings, and it is possible to tweak the benchmark parameters to yield quite flattering results. Making the problem size small enough to fit in cache makes for very good numbers. The Cray vector machines didn’t have caches at all; they didn’t need them. The memory controllers could feed vectors of operands to the CPU’s vector registers fast enough that the CPU never slowed. That was where the magic lay, not in clock speed. The modern SSE instruction set starts to achieve some of these benefits: although the early SSE versions were badly hamstrung, 64-bit x86 with SSE4 can start to manage some good performance on vector-like operations, performance that makes a clock-for-clock comparison more realistic.
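A crude sketch of the scalar-versus-vector distinction, with NumPy standing in for a vector unit (timings will vary wildly by machine, and this says nothing about a real Cray):

```python
# Scalar loop vs. vectorized operation on the same data. The vector version
# expresses the whole operation at once, letting the library stream operands,
# loosely analogous to feeding a vector register from memory.
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
scalar = [a[i] * b[i] + 1.0 for i in range(n)]   # one element per "instruction"
t_scalar = time.perf_counter() - start

start = time.perf_counter()
vector = a * b + 1.0                              # whole arrays at once
t_vector = time.perf_counter() - start

assert np.allclose(scalar, vector)
print(f"scalar loop: {t_scalar:.3f}s, vectorized: {t_vector:.4f}s")
```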

I would expect even more than 10 times. But I also suspect that the system didn’t even emulate floating point; likely calculations that needed it were done in fixed point. (Given the code is all available on GitHub now, it would not be desperately difficult to check, if I had a few more hours in the day.)

Focussing on floating point is arguably putting the AGC in an artificially poor light. It underlines the futility of comparing it to a Cray, a machine for which floating point was its entire point. Eventually what matters is how it performs at the task for which it was designed. There is little doubt that the AGC was a key enabling technology that made the lunar landing possible, not just for its navigational capability, but in the manner in which it allowed for large-scale simplification of many of the housekeeping elements of the spacecraft. For many of those tasks floating/fixed point arithmetic is largely irrelevant.

You are essentially correct. The AGC did not use a defined floating point datatype, even software emulated. However it dealt with fractional (non-integer) data via fixed-point arithmetic.

To achieve this, the fractional data representation required the programmer to assume responsibility for managing the magnitude, or scale, of the number. This is documented in the book “The Apollo Guidance Computer: Architecture and Operation” by Frank O’Brien.
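A minimal sketch of what that programmer-managed scaling looks like (hypothetical variable, word size and scale factor; the real AGC scaling conventions are documented in O’Brien’s book):

```python
# Programmer-managed fixed point: a fractional value is stored as an integer
# together with an implicit scale factor that the *programmer* must track.
# Word size and scale choices here are illustrative, not actual AGC scalings.

FRACTION_BITS = 14   # echoes the 14/15-bit word sizes quoted in the posts above

def to_fixed(value: float, scale: float) -> int:
    """Encode value as a 14-bit fraction of scale; programmer guarantees |value| < scale."""
    return round((value / scale) * (1 << FRACTION_BITS))

def from_fixed(raw: int, scale: float) -> float:
    return (raw / (1 << FRACTION_BITS)) * scale

# Example: store a (hypothetical) altitude of 10,234.5 m, with the programmer
# deciding this variable is scaled to a maximum of 16,384 m.
ALT_SCALE = 16384.0
raw = to_fixed(10234.5, ALT_SCALE)
print(raw, from_fixed(raw, ALT_SCALE))  # round-trips within the 1-unit resolution of this scaling
```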

The point about floating point performance was just a rough attempt to evaluate performance across a vast span of technology. The other way is to disregard floating point and just compare based on MIPS. An iPhone 6 does about 25,000 MIPS, which relative to the AGC would be:

25000E6/80000 = 312,500x

So from an integer standpoint this is within a factor of 2 of the previously-stated floating point performance comparison, showing either one is probably in the rough ballpark from a pure CPU standpoint.
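Putting the thread’s two rough estimates side by side (all figures are the approximations already quoted above, nothing more precise than that):

```python
# The two back-of-the-envelope ratios from this thread, computed explicitly.
agc_ips = 80_000        # upper estimate of AGC instructions/second (above)
agc_flops = 8_000       # crude "1 FLOP costs ~10 integer instructions" estimate
iphone6_flops = 1.5e9   # ~1.5 Linpack GFLOPS
iphone6_ips = 25_000e6  # ~25,000 MIPS

print(iphone6_flops / agc_flops)  # 187,500x on the floating-point estimate
print(iphone6_ips / agc_ips)      # 312,500x on the integer estimate
```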

As already stated the AGC (as was common in that era) was mostly programmed in assembly language. There were no run-time software frameworks, objects or GUI displays to manage. This enabled it to get the job done with the available hardware.

As I indicated, the AGC never crashed and did fly-by-wire control of the spacecraft, while concurrently managing navigation, user switch input, and driving user numeric displays.

If you literally had to bet your life that a computer won’t crash, lock up or become so laggy it can’t function, the “primitive” AGC from 50 years ago is probably better than a modern cell phone, despite the cell phone’s huge computational and memory advantage.

The AGC certainly streamlined and facilitated various spacecraft navigational and control functions. However it was theoretically possible to fly segments of a manned lunar mission without computers. The LM could revert from a computerized fly-by-wire mode to direct analog manual control, and had redundant non-computerized control paths and even redundant thruster activation solenoids.

There was a practiced contingency where the LM could lift off from the moon and achieve orbit with no computer whatsoever. This involved using charts, a stopwatch and making stepped pitch changes to align window etchings with the lunar horizon. Astronaut Gene Cernan (who flew two lunar missions) said he achieved this in the simulator and felt it was possible.

However the nominal mission required both the primary AGC computer and the backup AGS computer to be online for the lunar landing, else it was an abort. This custom edit of the Apollo 11 mission control comm loops reveals the tension during the landing due to the computer problems. The program alarms were what we’d today call a time-slice exception, where the AGC kept running but dropped lower-priority tasks.
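For flavor, here is a toy sketch of priority-based task shedding of the sort being described, where over-requested work drops the lowest-priority jobs and raises an alarm while guidance keeps running (this is emphatically not the real AGC Executive; the task names, priorities and costs are made up):

```python
# Toy priority-based executive: schedule jobs in priority order until the
# cycle's capacity is exhausted, then shed the rest and raise an alarm.
# Illustrative only, not the AGC's actual scheduler.

def run_cycle(jobs, capacity_ms):
    """jobs: list of (name, priority, cost_ms); lower priority number = more important."""
    scheduled, used, alarm = [], 0.0, False
    for name, _prio, cost in sorted(jobs, key=lambda j: j[1]):
        if used + cost <= capacity_ms:
            scheduled.append(name)
            used += cost
        else:
            alarm = True   # more work was requested than could be scheduled
    return scheduled, alarm

jobs = [("guidance", 0, 10.0), ("displays", 1, 4.0), ("radar", 1, 4.0),
        ("delta-H monitor", 2, 5.0)]         # hypothetical tasks and costs
print(run_cycle(jobs, capacity_ms=20.0))     # (['guidance', 'displays', 'radar'], True)
```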

Computer expert Jack Garman, who made pivotal decisions during the landing, was only 24 years old.

4MB .mp3 audio file: Apollo_11_Landing.mp3 - Google Drive

Explanation of dialog in audio file. Time is offset into audio file, not offset into descent. Where possible I identified the speakers by voice.

Time/Speaker/Statement/Comments

00:00: Gene Kranz (Flight Director): Polling flight controllers for continuing PDI (Powered Descent Initiation to lunar surface).

00:17: Neil Armstrong: “Houston, are you looking at our delta H?” (Is the difference between inertial nav altitude and radar altitude within limits to tell the on-board AGC computer to accept radar data)

00:18: “Program Alarm”

00:20: Charlie Duke (CapCom): “Looking good”

00:22: Neil Armstrong: “It’s a 1202” (He sees the error code on the LM display)

00:21: Gene Kranz (Flight Director): “Is he accepting it, Guidance?” (Does Guidance Officer Steve Bales see Armstrong hitting the accept button to switch from inertial to radar altimeter?)

00:28: “1202, 1202 alarm, what’s that?” (Sound of papers turning)

00:33: Jack Garman (AGC Computer Specialist): “It’s executive overflow, if it does not occur again, we’re fine…continue”

00:41: Neil Armstrong: “Give us a reading on the 1202” (He wants to know how serious it is)

00:44: Steve Bales (Guidance Officer): “We’re…we’re go on that, Flight”.

00:45: Gene Kranz (Flight Director): “We’re go on that alarm?”

00:47 Steve Bales (Guidance Officer): “If it doesn’t reoccur, we’ll be go”.

00:51: Jack Garman: “It’s continuous that makes it no-go, if it reoccurs we’re fine” (Correcting Guidance Officer’s statement)

01:04: “We have another 1202 alarm”

01:10: Jack Garman: “Single alarm, tell them to leave it alone and we’ll monitor it, OK?”

01:13: Steve Bales (Guidance Officer): “OK, we’ll monitor his delta H, Flight”. (Delta H is the difference between radar altitude and inertial altitude. Bales mistakenly thinks Garman wants to monitor this remotely for the astronauts)

01:14: Jack Garman: “We’ll monitor his alarm, Steve”. (The computer alarm is what they’ll monitor, not delta H).

01:28: “Throttle down” (The LM descent engine is variable thrust and must decrease throttle as fuel burns off to maintain proper descent rate).

01:30: Neil Armstrong: “Better than the simulator” (LM throttle behavior is smoother/better than the simulator)

01:31: Jack Garman: “Get those guys out of there!” (By telemetry he sees the astronauts using a computer mode he fears is worsening the problem)

01:36: Charlie Duke (CapCom): “You want them to stay out of 68?” (Program 68 is a computer mode which allows the astronauts to monitor their delta H, but Garman fears the additional processing task is causing a CPU overload).

01:41: Jack Garman: “Yeah, AGC, the noun 68 may well be causing the problem” (DSKY entries were two-digit verb and noun codes; the verb selected an action and the noun the data it acted on or displayed).

01:50: Jack Garman: “Make sure he does nothing approaching P64, do you understand?” (P64 is the next-to-last program the computer runs before landing. Garman doesn’t want the astronauts keying in any optional tasks during P64 which he fears will worsen the problem).

01:55: Jack Garman: “Off that DSKY as TGO (Time to Go) comes down” (Garman doesn’t want the astronauts touching the DSKY computer keyboard any more for fear of making things worse).

02:03: Gene Kranz (Flight Director): “Everybody hang tight, 7 1/2 minutes” (7 1/2 minutes to lunar landing).

02:04: Gene Kranz: “Descent 2, fuel crit” (The LM has two independent fuel monitoring systems, one for each fuel tank. They estimate system 2 is more accurate, so will monitor that).

02:08 Gene Kranz: “Is it converged?” (Has the inertial altitude and radar altitude converged to similar numbers, which indicates confidence)

02:10: Steve Bales (Guidance Officer): Yes.

02:31: Buzz Aldrin: “1201” (Another type of CPU overload, indicating computer is falling behind and must disregard less essential tasks).

02:33: Steve Bales (Guidance Officer): “What alarm, Jack?”

02:34: Jack Garman: “Same type, we’re go”

02:39: Gene Kranz: “How’s our margin looking, Bob?” (How is the fuel margin? Since the engine is constantly throttling up and down, one engineer was assigned to extrapolate remaining flight time based on current trends)

02:42: Bob Carlton (LM flight controller): “Four and a half, looks good” (Four and a half minutes of flying time remaining)

02:47: Jack Garman: “We have another 1202 alarm”

02:49: “Roger, no sweat”

02:53: Gene Kranz (Flight Director): “How about you TELCOM…Guidance, you happy?” (Because of the computer errors, Kranz is polling the flight directors to ensure vehicle is still controllable and on the right trajectory).

02:59: Jack Garman: “We have another 1202 alarm”

03:02: “You don’t have to keep calling it up, we can monitor it”

03:05: Gene Kranz: “OK, the only callouts from now on will be fuel” (They are so short on fuel, no other flight controllers should call out any non-fuel items)

03:10: “60 seconds” (60 seconds of flying time remaining until abort)

03:15: Buzz Aldrin: “Light’s on” (Low fuel warning light is on)

03:19: “He’s done a lot of RCS” (They can tell Armstrong is vigorously maneuvering the LM, expending a lot of Reaction Control System fuel)

03:23: “30 seconds” (30 second flying time remaining until abort)

03:28: Buzz Aldrin: “Contact light” (66 inch-long probes extending beneath LM touch lunar surface, indicating to pilot that touchdown is imminent)

03:33: “Shutdown” (Sound of clapping. Flight controllers via telemetry can tell pilot has shut down engine and is on surface).

03:35: Jack Garman: “How’s that for you?”

03:45: “Keep your eye on the computer” (If an immediate lunar liftoff is required the computer will be needed. They are worried the previous problems will interfere should this be needed)

03:53: Bob Carlton (LM flight controller): “Keep your eye on it, don’t relax yet” (Despite touchdown, still worried about computer in case an immediate liftoff is needed).

04:01: Gene Kranz: “OK all flight controllers, about 45 seconds to T1 stay/no stay” (Should a cabin pressure leak or other problem require an immediate liftoff, flight controllers will be polled 45 sec after touchdown to assess this)

04:11: “Take note of all your little alarms there” (Joking with Jack Garman about his hand-written notes on each computer alarm)

04:35: Steve Bales (Guidance Officer): “Hey, Jack…thank God we had that meeting…good show” (Bales remembers the meeting they had just before launch to discuss this type of computer problem and prepare for it).

Given the primitive accumulator ISA of the AGC versus the register-rich RISC of an ARM, you may well be quite justified in calling it a round million-to-one performance difference and leaving it at that.

Steve Bales got the Medal of Freedom however. But Jack was a real star. The story of the 1201/1202 errors is one I used to tell my software engineering classes about. It was a perfect exemplar of how extrinsic matters can affect software. If I were teaching operating systems, the design of the AGC real time kernel would be right up there, and again, 1201/1202 a core example.

I would suggest it would be more reasonable to compare the AGC to a modern microcontroller rather than a modern general-purpose computer: designed for real-time control and performance-optimized to the task at hand. The evolution of microcontrollers has generally not been exclusively toward faster processing; the trend lately is reducing power for given levels of performance. Zillions of applications run perfectly well on the old 8-bit 8051 core.

I once heard a definition of engineering as “doing for 50 cents what any fool could do for a dollar”. These days you could say “doing with a microwatt what any fool could do with a milliwatt”…