Energy needs of UNIVAC type computers

I would include going multi-core. Modern CPUs aren’t a computer on a chip; they are a network of computers on a chip.

BTW, at first glance the new Ryzen Threadripper 7000 sounds terrible for needing up to 350 watts, but that figure looks a lot less whopping when you stop to think that it works out to around 3.65 watts per core (with each core being much faster than a Cray-1).

There have been advances in mathematics (as well as absolutely crucial ones in computer science, like various fast multiplication algorithms) since 1948. Also, today it is much less of an issue to use algorithms that require more memory.
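To name one concrete example of the kind of algorithmic advance meant here: Karatsuba multiplication (1960) computes an n-digit product with roughly n^1.585 single-digit multiplications instead of the schoolbook n^2. A minimal Python sketch, purely for illustration (the decimal split is the simplest choice, not the fastest):

    def karatsuba(x: int, y: int) -> int:
        """Multiply non-negative integers with Karatsuba's three-multiplication trick."""
        if x < 10 or y < 10:                  # base case: single-digit operand
            return x * y
        half = max(len(str(x)), len(str(y))) // 2
        hi_x, lo_x = divmod(x, 10 ** half)    # split each number into high/low halves
        hi_y, lo_y = divmod(y, 10 ** half)
        a = karatsuba(hi_x, hi_y)             # high * high
        b = karatsuba(lo_x, lo_y)             # low * low
        c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b   # cross terms from one extra multiply
        return a * 10 ** (2 * half) + c * 10 ** half + b

    assert karatsuba(1234, 5678) == 1234 * 5678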

There was a stage before tubes: electromechanical relays.

I find this outcome charming. It turns something somewhat abstruse into a comparatively easy-to-visualize quantity: “how much energy does one computer operation require?”
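It also makes for an easy back-of-the-envelope comparison. A minimal Python sketch, using the commonly cited UNIVAC I figures (~125 kW, ~1,900 operations per second) and deliberately rough, assumed numbers for a modern desktop CPU:

    # Energy per operation = power / throughput. All figures are rough and illustrative.
    def joules_per_op(watts: float, ops_per_second: float) -> float:
        return watts / ops_per_second

    univac = joules_per_op(125_000, 1_900)   # UNIVAC I: commonly cited ~125 kW, ~1,900 ops/s
    modern = joules_per_op(100, 1e11)        # assumed: ~100 W CPU doing ~1e11 simple ops/s

    print(f"UNIVAC I  : ~{univac:.0f} J per operation")    # about 66 J/op
    print(f"Modern CPU: ~{modern:.0e} J per operation")    # about 1e-09 J/op
    print(f"Ratio     : ~{univac / modern:.0e}x")          # on the order of 7e+10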

So it took days for the few existing early computer users to play video games and download porn? Thank heavens for progress!

The energy needed to power the machine effectively had to be doubled in order to cool the computer room. Most of the power consumed was released as heat, which then took more energy still to remove.

The IBM 704 (1954) used 20 kW, but there was more to it than just bolting it into the power grid. Incoming power fed a motor-generator with a large flywheel. This provided clean power and would even bridge power glitches.

Turning the system on required as much as half an hour. The power was sequenced, with filament power being slowly applied to the vacuum tubes under the control of magnetic amplifiers. When the filaments were hot, DC power was sequenced on and monitored by d’Arsonval meters. The meter needles had contacts on their tips to indicate that the voltages were in range. If, after a delay, any voltage was low or had overshot a limit during power-up, the whole thing shut down and started over. As far as I remember, it had regulated 6.3 and 12.6 VAC and regulated DC at ±250 and ±50 volts.

My dad was a computer scientist in the early days. They used to time-share an IBM 360. When he got his first Mac, he was amazed at how fast it was compared to the old mainframe. I was amazed that he could actually write large, complex, and useful programs on such puny, slow machines. Modern computers have just made programmers sloppy - no need for hand-tuned code when a piece of JavaScript will do what you need in 10x the cycles.
I often think about how much of a modern computer’s processing power is just “window dressing.” The amount of processing necessary to scale and fade screensaver images in real time didn’t exist 50 years ago, for example.

Yeah, it was a different world.

When you pushed the start button on a 704 you got a read select of the card reader, 24 copies and a transfer to memory address zero. Anything that happened after that was in your program. There wasn’t any operating system. Printing required that you create a print image in your program and then manipulate the printer to get it on paper.

The new thing is 3D and 2.5D integration. Instead of growing the CPU, or spending the increasingly large amounts of money on fabs for smaller process nodes, you create chiplets, which get put on a silicon interposer, which has the benefit of not needing the buffering required for sending signals over wires. That’s 2.5D. 3D is when you place a memory chip on top of a CPU chip or ASIC and route signals up. I’ve been reviewing papers on the test implications of this for years, and it is finally happening.
Testing of chiplets is beginning to look like testing of boards did 20 years ago.

I was going to reply “THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.”

Now, now. My assembly language class term project on a PDP-1 was to implement the Game of Life to display on the big CRT (the one used for Spacewar) without flickering. The Game of Life was a common screensaver on Sun workstations. So, even more than 50 years ago you could do it. You just couldn’t do anything else.
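For anyone who hasn’t written one: the whole update rule fits in a few lines. A minimal Python sketch of one generation over a sparse set of live cells (nothing like the hand-tuned PDP-1 assembly, of course; the glider below is just a smoke test):

    from collections import Counter

    def life_step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
        """One Game of Life generation over a sparse set of live (x, y) cells."""
        # Count live neighbours for every cell adjacent to a live cell.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Born with exactly 3 neighbours; survives with 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = life_step(glider)   # after 4 steps the glider has moved one cell diagonally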

When we did burn-in on some of our biggest processors, we didn’t add heat; we just didn’t cool them quite as much as we normally would. And the heat sinks would almost qualify as skyscrapers by the standards of the old days.

I suppose we should not overlook purely mechanical computation before that.

We do this for mobile SoCs. The chipset, RAM, and FLASH are all in one package.

Good point.

I wonder if the next integration will be the addition of a full storage array.
Obviously current SoCs already have storage, but not typically all of the storage they will need for their intended application.

Maybe not, though, especially as solid-state storage wears out more quickly than compute does (and also companies like to upsell customers to the bigger storage option of the same device).

And Babbage’s Analytical Engine said, “Let there be light…”

Drawing black-and-white dots on a screen is a far cry from taking a 4 Mpix 32-bit color image and both scaling and cross-fading it with another image at video frame rates.
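Even a naive CPU-only version of that per-frame work is a lot of arithmetic. A minimal NumPy sketch; the 4-megapixel frame size and blend factor are illustrative, and a real screensaver would do the resampling and blending on the GPU with better filtering than nearest-neighbour:

    import numpy as np

    def scale_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
        # Naive nearest-neighbour scaling; real code would use bilinear or better.
        h, w = img.shape[:2]
        ys = np.arange(out_h) * h // out_h
        xs = np.arange(out_w) * w // out_w
        return img[ys[:, None], xs]

    def crossfade(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
        # Linear cross-fade between two same-sized RGBA frames, t in [0, 1].
        return ((1.0 - t) * a.astype(np.float32) + t * b.astype(np.float32)).astype(np.uint8)

    # Two ~4 Mpix RGBA frames: every one of ~16 million channel values is touched each frame.
    a = np.zeros((1728, 2304, 4), dtype=np.uint8)
    b = np.full((1152, 1536, 4), 255, dtype=np.uint8)
    frame = crossfade(a, scale_nearest(b, 1728, 2304), 0.5)   # one frame of a 60 fps fade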

What about backside power? Really a big deal or PR hype?

https://www.intel.com/content/www/us/en/newsroom/news/powervia-intel-achieves-chipmaking-breakthrough.html

How soon before surges of submesons replace the old clumsy molecular valves?