Energy needs of UNIVAC type computers

I assume you mean a system with a 387?

Earlier this century, the computing room in the physics department I was at was shut down due to an air conditioning failure. No supercomputers (we’d contract out to someone if we ever needed that much compute), but some desktop-sized machines with lots and lots of graphics cards.

The irony was that this happened in the dead of a Montana winter, and there was plenty of cold available. We could have solved the whole problem just by cracking open a window… except that the computing room was in the interior of the building.

That sounds like a description of the IBM 650, which was the first mass-produced computer (or at least, the first commercially successful one). The processor was based entirely on vacuum tubes, so it was a step up from electromechanical relay computers, but it did indeed use an electromechanical drum for main memory. There was also an option for electronic memory, but it was very expensive and very small (I think something like 1K words).

Manually placing machine instructions at specific drum locations may have been necessary in the early days of the 650, or in its more limited configurations, but the process was fully automated once IBM released an assembler for it called SOAP (and later, SOAP II). The acronym stood for "Symbolic Optimizing Assembly Program", and the "optimizing" part referred to the assembler's ability to look up the execution timing of every instruction and place the next instruction at the appropriate spot on the drum. Today of course this is long obsolete, but we do still have to deal with the problem of how object code is loaded into memory and linked to the necessary support libraries. The component that does this is called the "loader" (or sometimes, "linking loader"), but in the primitive good old days of the 650 that component hadn't even been invented yet!
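The placement idea itself is easy to sketch. Here's a toy reconstruction of what an optimizing assembler like SOAP did — the instruction format, timings, and drum size below are all invented for illustration, not the 650's real numbers:

```python
# Toy sketch of "optimum programming" on a drum machine.
# All numbers here are made up for illustration.

DRUM_WORDS = 2000          # word positions in one revolution of the drum

def place_program(instructions, start=0, fetch_overhead=3):
    """Assign each instruction a drum address so that, by the time the
    previous instruction finishes executing, the drum has rotated to
    (or just short of) the next instruction's position.

    instructions: list of (name, exec_time_in_word_times) pairs.
    """
    placement = {}
    used = set()
    addr = start
    for name, exec_time in instructions:
        # take the nearest free slot at or after the ideal position
        while addr in used:
            addr = (addr + 1) % DRUM_WORDS
        placement[name] = addr
        used.add(addr)
        # ideal position of the NEXT instruction: current position plus
        # however many word-times this one takes to fetch and execute
        addr = (addr + fetch_overhead + exec_time) % DRUM_WORDS
    return placement

prog = [("LOAD", 5), ("ADD", 4), ("STORE", 5), ("BRANCH", 2)]
print(place_program(prog))
# -> {'LOAD': 0, 'ADD': 8, 'STORE': 15, 'BRANCH': 23}
```

The point of the optimization: with naive sequential placement, the drum would make nearly a full revolution between every pair of instructions; placing each instruction just "downstream" of where the previous one finishes cuts that wait to almost nothing.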

The IBM 650 was way before my time but I have an interest in computer history. Some of you may be picturing this “memory drum” as about the size and shape of something that Ringo Starr may have pounded on, but set on its side. It was not like that at all. It was typically a very small, very heavy cylinder that rotated extremely fast. This was the guts of it:

Go back a little earlier, and the drums could get pretty big:

There's going to be a tradeoff between bit rate (for which surface speed is what matters, and which would encourage larger but slower-rotating drums) and latency (for which angular velocity is what matters, and which would encourage small but rapidly-rotating cylinders). As bit density increased for other reasons, the balance would get pushed toward reducing latency.

Right, what I posted was, I believe, the drum actually used in the IBM 650. As with much of technology, the earlier ones may well have been much larger, slower, and generally more primitive. But I think some of those bigger ones may have been intended as secondary storage – drums which were later flattened and became disk drives.

That picture needs a banana for scale.

I don’t have a banana handy, but the pictured drum is 4" in diameter and 16" long. It spins at 12,500 RPM. The capacity is either 10K or 20K decimal digits, or 1K or 2K 10-digit words on the IBM 650, which was a decimal, not binary, machine.
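Those specs make the bit-rate-versus-latency tradeoff concrete. Working the numbers (only the 12,500 RPM and 4" diameter are from the post; the rest is arithmetic):

```python
import math

RPM = 12_500
DIAMETER_IN = 4.0

revs_per_sec = RPM / 60                    # about 208 revolutions per second
rotation_ms = 1000 / revs_per_sec          # one full turn takes 4.8 ms
avg_latency_ms = rotation_ms / 2           # on average you wait half a turn: 2.4 ms

# surface speed under the heads, which (for a given bit density)
# sets the bit rate: about 2618 in/s, or roughly 150 mph
surface_speed_in_s = math.pi * DIAMETER_IN * revs_per_sec

print(f"{rotation_ms:.1f} ms/rev, {avg_latency_ms:.1f} ms avg latency")
# prints: 4.8 ms/rev, 2.4 ms avg latency
```

So a small, fast drum like the 650's bought you millisecond-scale latency, while a big slow drum could match the surface speed (and hence bit rate) but not the access time.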

I see also that I was misremembering the capacity of the optional core memory. It was in fact 660 digits, or 66 10-digit words, not 1K!

As I mentioned, the machine I learned on was the transistor version of the LGP-30.
I had always thought that the reason for executing instructions out of modifiable memory was to allow for assemblers and compilers and the like. In researching a chapter of my book on early computers, I found out I was wrong. The real reason was to allow doing just what Mel did: modifying the instructions themselves as the program ran. (Index registers were in the future.) Otherwise, to loop through an array you needed one instruction for each entry. (The LGP-30 had no index registers either.)
I had a tic-tac-toe program which only fit into memory if I changed a bunch of adds to subtracts when the player moved, and back when the machine was deciding its move. I had to initialize the instructions at the beginning of the run.
Hey, it worked.
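For anyone who hasn't seen the trick: with no index register, the way to step through an array was to treat the load instruction's address field as data, and rewrite it each pass through the loop. Here's a toy simulation in Python — the "machine", its instruction encoding, and the numbers are all invented for illustration; nothing here is LGP-30-specific:

```python
# Toy illustration of looping over an array via self-modifying code,
# the way you had to before index registers existed.

memory = {100: 7, 101: 3, 102: 5}          # the "array" at addresses 100-102

# an instruction is (opcode, address); the program will rewrite the
# address field of its own LOAD instruction to walk the array
program = {
    0: ("LOAD", 100),    # acc = memory[address]  <- this address gets rewritten
    1: ("ADDSUM", None), # total += acc
    2: ("INCR", 0),      # add 1 to the address field of the instruction at 0
    3: ("LOOP", None),   # repeat until past the end of the array
}

total, acc = 0, 0
for _ in range(3):                          # three passes, one per element
    acc = memory[program[0][1]]             # LOAD
    total += acc                            # ADDSUM
    op, addr = program[0]
    program[0] = (op, addr + 1)             # INCR: the self-modification

print(total)                                # prints 15
```

One instruction's worth of arithmetic on the program itself replaces an unrolled load-per-element — which is exactly why it mattered when your whole program had to squeeze into a few thousand words.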

Technically the 650 was bi-quinary.
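For those who haven't met the term: bi-quinary represents each decimal digit with two "bi" bits (worth 0 and 5) and five "quinary" bits (worth 0 through 4), with exactly one bit set in each group — which makes any single-bit error detectable. A quick sketch of the encoding:

```python
def to_biquinary(digit):
    """Encode a decimal digit as two one-hot groups: a 2-bit 'bi' group
    selecting 0 or 5, and a 5-bit 'quinary' group selecting 0..4.
    Exactly one bit in each group is set, so a single flipped bit
    always produces an invalid code."""
    assert 0 <= digit <= 9
    bi = digit // 5          # 0 -> the "0" bi bit, 1 -> the "5" bi bit
    q = digit % 5            # which quinary bit
    bi_bits = [1 if i == bi else 0 for i in range(2)]
    q_bits = [1 if i == q else 0 for i in range(5)]
    return bi_bits, q_bits

# 7 = 5 + 2: the "5" bi bit and the "2" quinary bit are set
print(to_biquinary(7))      # ([0, 1], [0, 0, 1, 0, 0])
```

It's the same scheme an abacus uses (one five-bead, plus units), which is part of why it was a natural fit for a decimal machine.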

The IBM 704 at Lockheed (1958) had a drum memory unit. It was six feet long and about as high, with a window in one end so you could watch the drum operate. The physical drum was over two feet in diameter and was wrapped with nickel wire as the recording medium. The thing was scary to work on: you had to defeat the safety interlock, run the drum up to speed, and reach one hand in to adjust the R/W heads while setting the oscilloscope with the other. The danger was that you would get the head too close to the drum and damage the surface — which was unavoidable, because that's where you got the best signal. One morning I came to work and the window was filled with loose wire. During the night a R/W head had snagged the surface and unraveled close to 2 miles of nickel wire. It was taken offline and not replaced.

I remember tales of the UNIVAC FASTRAND drums coming out of their cabinets at full speed and demolishing computer rooms.

The drums didn’t spin fast, but they were large and massive, so they could do some damage if they got loose.

I think it was the IBM System/360 (don’t quote me - it might have been a different computer of the time) that came in a slower model and a faster one. If you wanted to upgrade the slower one to the faster one, a technician came out and removed a board. The slow machine had a boars in it that had the sole function of injecting wait states to slow everything down.

Not only did they slow down the computer, it was a real hassle to keep the boars fed.

Have you ever tried to get boar poop out of a keypunch? Don’t ask. :grin:

We just fed them hamburgers - hold the piggles, of course.

The first commercial computer in the world was installed by J Lyons & Co. They ran teashops and supplied confectionery to stores all over the country. They called their computer LEO.

My great-aunt worked for them as a data input clerk and told me that they employed people whose sole job was walking around the galleries replacing valves as they blew out. They had to take frequent breaks because it was so hot.

When I worked for Western Electric, I used to visit boar assembly plants all the time. I was in hog heaven.

BTW, System/360 came in a variety of speeds and price points, but the implementations were way different: the lower-end models were microprogrammed, the higher-end ones were hard-wired. I'm sure that there were machines sold at different clock rates with the same basic hardware; if there was demand for the slower ones, the faster chips could be throttled back.
Computers based around the Sun Niagara chip were similar. There were 2-core, 4-core, and I think 8-core versions. We were getting good yields at the higher core counts (same chip, same process, you just disabled failing cores), so we sold lots of 2-core systems that actually had four working cores, two of which were disabled in firmware.

I don’t know if this particular story is true. However, the price points were often based on service contracts and they made a lot of versions of 360s over the years. It wouldn’t be surprising if a price point was achieved by derating a machine.

I've heard that story also, and if it didn't happen to a 360 it definitely happened to another machine. The story makes a lot more sense if they swapped a board - peripherals all run at more or less the same speed, and replacing a slow CPU board with a faster one is definitely possible.
I don't buy having a special board to slow the machine down - boards are expensive, and some sort of hidden DIP switch would do the job just as well and more cheaply.

In every related case I know of, it was a switch or jumper. And it wasn't a secret either - again, just a contract matter.

So, sort of like Wheatley?