What are/were the energy needs of the old UNIVAC type computers verses, say, a new iMac?
If the difference is a lot, how much less energy will future computers need?
Can’t give you a quantitative answer but I think the most immense jump was when they went from vacuum tubes to integrated circuits.
UNIVAC 1 used 125 kW for its ~6000 vacuum tubes, which were each roughly equivalent to a transistor. So about 20 watts per tube.
The latest iMacs use an M1 chip, which uses 40 watts at full power and has 16 billion transistors. So, about 0.0000000025 W per transistor.
Thus, power use per computing element has gone down by about 8 billion times. Although that’s an underestimate since the M1 transistors switch about 1000x faster. So it’s more like 8 trillion times as efficient.
There are probably a few factors of 10 left in computing efficiency, but not a factor of a trillion (until some dramatically new tech comes out).
Said another way, a single iMac made from Univac technology would consume 320 GW of power. Of course it could not actually function due to speed of light constraints, but we’ll ignore that for now.
To put 320 GW in perspective, the total installed generating capacity of the USA is ~1200 GW. So the entire US power infrastructure can power almost 4 Univac-iMacs.
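For anyone who wants to poke at these assumptions, here’s the same back-of-envelope arithmetic as a quick Python sketch; every input is just one of the round figures quoted above, so treat the outputs as order-of-magnitude only:

```python
# Rough sanity check of the figures quoted in this thread (not precise specs).
univac_power_w = 125_000        # UNIVAC I: ~125 kW
univac_tubes = 6_000            # ~6000 vacuum tubes
m1_power_w = 40                 # M1 iMac at full power
m1_transistors = 16e9           # ~16 billion transistors
speed_factor = 1_000            # M1 transistors switch ~1000x faster

w_per_tube = univac_power_w / univac_tubes        # ~20.8 W, call it 20 W
w_per_transistor = m1_power_w / m1_transistors    # 2.5e-9 W
power_ratio = w_per_tube / w_per_transistor       # ~8e9  ("8 billion times")
efficiency_ratio = power_ratio * speed_factor     # ~8e12 ("8 trillion times")

# A hypothetical iMac built from ~20 W tubes vs. ~1200 GW of US generating capacity.
tube_imac_gw = m1_transistors * 20 / 1e9          # 320 GW
us_capacity_gw = 1_200

print(f"{w_per_tube:.1f} W/tube vs {w_per_transistor:.1e} W/transistor")
print(f"power ratio ~{power_ratio:.1e}, efficiency ratio ~{efficiency_ratio:.1e}")
print(f"tube-built iMac: {tube_imac_gw:.0f} GW; the US grid could run "
      f"{us_capacity_gw / tube_imac_gw:.2f} of them")
```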
Here is the manual (pdf)
http://www.bitsavers.org/pdf/univac/military/1219/PX5010_1219B_TechDescr.pdf
Power Requirements
115-volt, ±5 percent, 3-phase, 400-cps, 2000 watts maximum, air cooled (for 16 I/O channels and 32K memory)
This is a late version of the machine, since the manual mentions the availability of a Fortran compiler.
The ENIAC, by the way, generated 174 kilowatts of heat. That’s from the Britannica entry about it.
A bit more than I expected!
Thanks for quantifying those numbers. When I was an undergrad at university, I remember the city power utility having to install a new substation when the computing center got the first of their two new computers. On my current desktop computer, I have a simulator that emulates that exact new computer (KA-10 model of DECsystem 10, aka “PDP-10”) and has its original OS and applications. They run at least 1000 times faster on the emulator than they did on the real thing.
Note that @Voyager’s manual and its 2 kW power consumption were for an early-to-mid-1960s minicomputer made from transistors: UNIVAC 418 - Wikipedia. Not nearly the same thing as the late-1940s/early-1950s vacuum-tube UNIVAC I: UNIVAC I - Wikipedia.
They’d already achieved a 50x improvement in power efficiency there in 1963, plus a speed-up and storage capacity increase.
It’s perhaps more remarkable when you consider even smaller modern systems. For instance, the chip inside AirPods has tens, maybe hundreds of millions of transistors–and it fits in your ear! Along with a battery that powers it for hours. It’s also immensely more powerful than the UNIVAC, but isn’t used for much more than figuring out if you said “Hey Siri”.
Some new products fit a processor and entire biometric sensor suite (and battery) in an ordinary ring. Again, with vastly more processing power than a UNIVAC, but used for relatively mundane purposes.
You can read some of the raw data here:
https://www.ed-thelen.org/comp-hist/BRL61-u3.html#UNIVAC-I
It seems that every installation was somewhat different, but here’s a typical one:
Army Map Service
Power, computer: 125 KVA
Room size, computer: 1,400 sq ft (not including peripheral equipment or personnel)
Capacity, air conditioner: 50 tons
Weight, computer: 19,000 lbs
False ceiling installed - return-air ducts above false ceiling. No false floor - cabling between equipment, and input air ducts, suspended from ceiling of floor below. Control system cooled by air system rather than chilled water - automatic controls to switch between direct outdoor air and internal re-circulating conditioned air depending on outside temperatures. Computer designed for 2-phase power - 80 KVA Scott transformer used to convert from 3-phase.
The power reduction was impressive. But at the same time, we’ve also produced more (electrically) powerful systems–megawatts, then tens of megawatts. So there have been two boosts: first, from the underlying improvements in efficiency; and second, from organizations willing to spend more, along with improvements in power delivery systems that allow more total power to be used.
Individual modern cores are insanely small. For instance, the Zen 4C core is 2.48 square mm, or about 1/16th of an inch to a side.
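If you want to check that conversion, it’s just the square root of the die area:

```python
import math

area_mm2 = 2.48                   # quoted Zen 4C core area
side_mm = math.sqrt(area_mm2)     # ~1.57 mm per side
side_in = side_mm / 25.4          # ~0.062 in, i.e. roughly 1/16 inch
print(f"{side_mm:.2f} mm per side = {side_in:.3f} in")
```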
This entire discussion is making me think of The Last Question.
UniMacs.
Another useful link:
Note the efficiency of the top system: 65.396 GFLOPS/W.
The UNIVAC I didn’t have floating-point math, and used binary-coded decimal, but on the other hand it did have more raw precision than 32-bit floating-point. I’ll call it a wash overall. Addition and multiplication took differing amounts of time: roughly 2000 adds/second and 500 multiplies/second. I’ll be generous and average these as 1000 ops/sec. Divided by 125 kW, we get 0.008 OPS/W, or 0.000000000008 GOPS/W.
Dividing through, we get an 8,200,000,000,000x improvement, which, coincidentally, is almost exactly what I stated upthread!
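Here’s that comparison as a quick Python sketch, using my generous 1000 ops/sec figure for the UNIVAC I and the 65.396 GFLOPS/W quoted above:

```python
# Rough efficiency comparison: quoted modern top system vs. UNIVAC I.
modern_gflops_per_w = 65.396            # figure quoted from the list above
univac_ops_per_sec = 1_000              # generous average of adds and multiplies
univac_power_w = 125_000                # ~125 kW

univac_ops_per_w = univac_ops_per_sec / univac_power_w       # 0.008 OPS/W
univac_gops_per_w = univac_ops_per_w / 1e9                    # 8e-12 GOPS/W
improvement = modern_gflops_per_w / univac_gops_per_w         # ~8.2e12

print(f"UNIVAC I: {univac_ops_per_w} OPS/W ({univac_gops_per_w:.0e} GOPS/W)")
print(f"Improvement: ~{improvement:.2e}x")   # ~8.17e12, i.e. ~8.2 trillion
```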
Just as an aside, since it’s bugging me and might confuse people:
FLOPS stands for FLoating-point Operations Per Second. And a watt is a joule/second, so with FLOPS/W, we really should just cancel the second to get operations/joule. Unfortunately, both the P and S are overloaded–we’d like to say FLOPS/J, but in that case the S is supposed to mean it’s plural, and the P is the second letter in OPerations. Not “Per Second”!
One way around this is to invert the metric, which arguably makes more sense anyway: joules per op. So the supercomputer above becomes 15.3 pJ/op (that’s picojoules, or a trillionth of a joule). And the UNIVAC I becomes 125 J/op. That all looks much more sensible.
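As a tiny sketch, the inversion is just one divided by operations per joule:

```python
# FLOPS/W has units of operations per joule; invert it to get joules per op.
def joules_per_op(ops_per_joule: float) -> float:
    return 1.0 / ops_per_joule

print(joules_per_op(65.396e9))   # ~1.53e-11 J/op, i.e. ~15.3 pJ/op
print(joules_per_op(0.008))      # 125 J/op for the UNIVAC I
```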
Actually, they went from vacuum tubes to transistors, and then went from transistors to integrated circuits.
Integrated Circuits have gone through several stages of evolution too - each arguably as significant as the change from tubes to transistors:
The early ICs just miniaturised and replaced the groups of transistors that were required to make logic gates - instead of needing a few discrete transistors per gate, there were general-purpose ICs where each package contained several logic gates, so the central processor for a computer could be built using just a collection of those general-purpose logic ICs.
Following pretty soon after that, all of the logic necessary to make a central processing unit was integrated into one ‘CPU’ package, but it was still necessary to have a load of peripheral hardware for various interfaces, memory, etc.
Quite soon after that, the peripheral hardware was also integrated into the same package, so a whole computer could be in one package, requiring relatively little external hardware to make it work - the System-On-a-Chip (SOC)
I suppose these stages could be discounted as just further miniaturisation, but they are still discrete steps in the evolution of computers.
What’s interesting is that the development of ICs>CPUs>SOCs was pretty swift in terms of invention, but much slower in terms of adoption - the concept was all there in the early 1970s, but computers continued to be made with previous stage technologies long after the next stage came along - mostly I think because if you want something nonstandard, you have to build it out of a larger number of simpler parts.
There is also possibly another stage in the middle of all that, where the peripheral hardware in computers got bunched together into dedicated ICs, for example a single IC to drive video output, or to handle other I/O, so maybe the stages are: tubes > discrete transistors > general-purpose logic ICs > single-chip CPUs > dedicated peripheral ICs > SOCs.
I’m probably overlooking something.
You are looking at this wrong. Computers (worthy of the name) still fill up a room; they’re just faster. That Army UNIVAC I used 125 kW of power; today’s Oak Ridge Cray uses 21 MW, so it needs, let’s say, 150x more power. Of course, you do get a lot more bang for your buck. (And, thanks to all the aforementioned technological improvements, you can do some great stuff in the comfort of your own home or lab for the cost of only a couple of kilowatts; let’s say that kind of much more common use is 100x more efficient in terms of your power bill.)
I recently read an article centered around the fact that it took ENIAC 70 hours to compute π to 2035 decimal places. It surprised me that even a computer that old took that long.
I installed a π app on my phone and it did 2035 decimal places in 0.005 second. This specific app does only a maximum of 100,000 decimal places, but my phone did that in 0.537 second.
(And as I compose this I see there is a check box in the settings that allows for as much as 1,000,000 decimal places. That took 10.85 seconds.)
(And I just installed a different π app and it did 100,000 digits in 0.18 second and 1,000,000 digits in 1.93 seconds.)
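Treating the ENIAC run and the phone app as directly comparable (they aren’t really, since the algorithms and hardware differ enormously), the implied speed-up for the 2035-digit case is roughly:

```python
# Speed-up implied by the timings above, ignoring algorithmic differences.
eniac_seconds = 70 * 3600     # 70 hours for 2035 decimal places
phone_seconds = 0.005         # same digit count on the phone app
print(f"~{eniac_seconds / phone_seconds:,.0f}x faster")   # ~50,400,000x
```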