The same was also true of the early computers with drum memories, a notable commercial example being the IBM 650. The assembler distributed the code optimally around the rotating drum so that the next instruction would rotate into reading position just as the current one finished executing. Hence the IBM 650 assembler was called “SOAP”, for “Symbolic Optimizing Assembly Program”. Core memory did become available for the 650 at some point, but it was exceedingly tiny and expensive, and main memory was always the drum.
If I were going to design and build an old-style computer from scratch, I’d avoid vacuum tubes because they’re so unreliable and build one using discrete transistor logic like DEC Flip Chips, which were small plug-in cards that each implemented one or a few logic gates. This is what one looked like, and this is what the backplane of one of those computers looked like from the card side. The other side was an unbelievably complex mass of wire-wrap connecting thousands or hundreds of thousands of pins. Not really relevant to the vacuum tube question, but perhaps interesting to some.
Have any of you guys actually seen one of these old-time vacuum tube computers in operation? I have, at the Argonne National Laboratory just outside of Chicago, back in the late ’50s or early ’60s. Its sole use was for payroll. I remember a big room containing a rather dense array of cabinets about the same size as refrigerators. Each of these had a hood connected to the top, with ducts running to a huge air-conditioning system. This was required because the tubes gave off so much heat that without the cooling the computer would have suffered heat death in short order. I will say, though, these old-timers were a lot more impressive than the new little bitty things!
A vacuum tube computer has been compared to using a freight train to move a pat of butter.
More stuff from memory. One of the above posters mentioned that relays would work, but would probably be noisy. Years ago I was reading a history of computers, and there was a short article about a relay-based computer built by the Bell Telephone Company (IIRC). Anyway, it was described as sounding like a whole stadium full of knitters with busy knitting needles.
One more thing from Memory Lane. I understand that the ENIAC computer was programmed to dump all results at intervals of roughly five minutes. This was because five minutes was about all they could get before a crash caused by one of the thousands of tubes burning out.
This makes no sense. First of all, you can’t build a core memory with just row and column wires; you also need sense and inhibit wires, which in later generations were combined into a single wire running diagonally through the cores. But that’s not even the point. The basic fact is that if you have 128 vacuum tubes holding 4096 bits of information, then each one is somehow holding 32 bits of information. The wire count and the square root of anything have nothing to do with it. In point of fact, the Williams tube and the Selectron tube did store many bits of information in a single tube, but they did it through electrostatic charges on a CRT-like surface. A discrete switching element, whether a vacuum tube or anything else, represents a single bit of information. Perhaps I’ve greatly misinterpreted what you were saying, but in any case, core memories became the predominant form of computer memory for about two decades until they were eventually superseded by modern integrated-circuit memories.
I was talking about the number of driver circuits required to control the magnetic core array, which rises with the square root of the number of magnetic cores due to the separate row and column lines. I assumed that was what weirdvideos was claiming, that each core required a separate tube to control it.
Ah, you were talking about the memory controller, not the actual storage. Yes, and now that I re-read the original point you were responding to, that makes more sense!
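To put the arithmetic behind that exchange in one place, here’s a back-of-envelope sketch in Python. Nothing in it goes beyond the 4096-bit / 128-driver figures already quoted above: a square coincident-current core plane of N bits needs a driver for each row and each column, so the driver count grows as roughly 2·√N while the cores themselves grow as N.

```python
import math

def core_plane_drivers(bits):
    # Row-plus-column driver count for a square coincident-current core plane.
    # This ignores the sense/inhibit electronics mentioned above; it only
    # illustrates the square-root scaling being discussed.
    side = math.isqrt(bits)
    assert side * side == bits, "assuming a square array for simplicity"
    return 2 * side  # one driver per row + one per column

print(core_plane_drivers(4096))  # -> 128, the figure quoted in the thread
```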
The Computer History Museum in Mountain View has both a Williams tube and an acoustic delay line memory on display, by the way.
You can’t hold a bit of information in one vacuum tube or transistor - there’s no way to hold state with a single device. The push for the Williams tube and other memory technologies in the early days is an indication of how unreliable and bulky vacuum-tube flip-flop memories were. Transistor memories started to replace core memories in the ’70s - we had a PDP-11/20 with a transistor memory, which was flaky because all the transistors were put in backwards.
Hmmm… my dad mentioned programming this once. He said you had a huge grid sheet, and as you wrote the program, you filled in the square for the drum location where each instruction went (the drum had dozens of heads, each reading a magnetic track running around its circumference). After you wrote an instruction, you looked up how many clock cycles it would take and found a spot a bit further than that along the drum. Each instruction included the address (track, count) of the next instruction. Otherwise, the machine had to wait a full revolution to read the next instruction.
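For anyone who wants to see that trick in miniature, here’s a toy sketch of the placement rule. The word count and timings are invented - they’re not from the 650’s or the LGP’s actual timing tables - but the idea is the same: park the next instruction just far enough around the drum that it arrives under the head as the current one finishes.

```python
# Toy model of optimum drum coding; the track size and timings are made up.
WORDS_PER_TRACK = 50  # instruction slots around one track of our imaginary drum

def best_next_address(current_word, execution_time_in_word_times):
    # While the current instruction executes, the drum turns past that many
    # word slots; the first slot after that point is the one the head reaches
    # next, so storing the successor there avoids waiting a full revolution.
    return (current_word + execution_time_in_word_times + 1) % WORDS_PER_TRACK

# An instruction at word 10 that takes 7 word-times should name word 18
# as the address of its successor.
print(best_next_address(10, 7))  # -> 18
```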
I guess in a sense a dual triode is cheating (a dual triode is essentially two triode tubes in one glass envelope). Ditto for a Williams tube.
Consider that each logic element requires vacuum tubes. Storing a bit takes a flip-flop - two transistors or two triodes. Gates, to allow data onto or off the data bus - again, a triode each.
Elementary computer: a register (say, 8 bits, 16 tubes). Gates in and out of the register - another 16 tubes. Instruction register, say 8 bits - ditto. Instruction decoder, gates on and off the instruction bus, etc. - the number of tubes required gets bigger and bigger.
I have the plans for the original 8008 microcomputer from Radio Electronics. It included plans for 256 bytes (!!) of RAM, using flip-flop chips. The 8008 had 3500 transistors on chip. With the RAM, bus support chips, etc., let’s say that translates to 5,000 tubes to do the same job. Better invest in a lot of solar panels and air conditioners.
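Putting rough numbers on both of those posts - and these are purely illustrative assumptions of mine (two triodes per stored bit, one triode per bus gate, a guessed decoder size), not figures from any real design:

```python
# Back-of-envelope tube tally for the elementary machine sketched above,
# plus the 256 bytes of flip-flop RAM from the 8008 project. Every figure
# here is an assumption for illustration, not a real machine's parts count.
cpu_parts = {
    "8-bit data register (2 triodes/bit)":  16,
    "gates in/out of the data register":    16,
    "8-bit instruction register":           16,
    "gates on/off the instruction bus":     16,
    "instruction decoder (pure guess)":     40,
}
ram_bits = 256 * 8          # 2048 flip-flops for the 256-byte RAM
ram_triodes = ram_bits * 2  # 4096 triodes, i.e. ~2048 dual-triode envelopes

print(sum(cpu_parts.values()), "triodes before there's an ALU, a program counter, or any memory")
print(ram_triodes, "triodes (about", ram_bits, "dual-triode envelopes) just for the RAM")
```

Even counted generously in dual-triode envelopes, the RAM alone comes to a couple of thousand tubes, which is why an estimate in the 5,000-tube range doesn’t seem far-fetched.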
A story I heard was that at one of the first tube-based computers, there was a crew of grad students who pushed shopping carts full of new vacuum tubes through the room it was in, constantly looking for burned-out tubes and replacing them. They ran every calculation three times because a faulty bit or gate could mess up the result.
Some of the earliest electronic computers actually did use dual-triode tubes as flip-flops to store one bit of memory each. Of course, this means an impractically large number of tubes for a decent memory capacity. Building an all-vacuum-tube computer is going to be a lot easier if you allow yourself to use a Williams tube for memory, but I have no idea if it’s even still possible to find one.
You can still find Dekatron tubes on eBay. Each one of those can store a single digit (0-9) and has built-in count-up/count-down functionality. That should be easier and cheaper than tube-based flip-flops. It does, however, mean your computer will be working in base 10 instead of base 2.
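Just to illustrate what base-10 operation with chained decade counters looks like - this is a toy model of my own, not a real Dekatron driver circuit:

```python
# Toy model of a chain of Dekatron-style decade counters: each "tube" holds
# one digit 0-9, and a wrap from 9 to 0 carries into the next tube in the
# chain (or borrows from it when counting down).
class DekatronChain:
    def __init__(self, digits=4):
        self.digits = [0] * digits  # least-significant digit first

    def count_up(self):
        for i in range(len(self.digits)):
            self.digits[i] = (self.digits[i] + 1) % 10
            if self.digits[i] != 0:   # no wrap, so no carry to propagate
                break

    def count_down(self):
        for i in range(len(self.digits)):
            self.digits[i] = (self.digits[i] - 1) % 10
            if self.digits[i] != 9:   # no wrap, so no borrow to propagate
                break

    def value(self):
        return int("".join(str(d) for d in reversed(self.digits)))

c = DekatronChain()
for _ in range(1234):
    c.count_up()
print(c.value())  # -> 1234
```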
Here is Chapter 8 of the LGP-21 programmer’s manual - the machine I learned on in high school. I suspect the next instruction would be in the next word, since the program counter was not going to be sophisticated enough to do anything more than increment. However, the operand of the instruction could be placed anywhere - and that could be done so that the read head was over it just when it was needed. That’s what this chapter talks about. I had one of those timing wheels. I still have a copy of the manual.
The Computer Museum has an LGP-30 - the vacuum tube predecessor to the LGP-21.