World's fastest computer (1972) compared to today

My mother was rummaging around some books that survived Katrina at her house and found a 1972 edition of The Guinness Book of World Records. Out of curiosity, I looked up what the world’s most powerful computer was at the time. However, I wasn’t able to work out how its abilities compare to a computer from today. Here are the stats: “It can perform 36 million operations per second and has an access time of 27 nanoseconds. It has two internal memory cores of 655,360 and 5,242,880 characters (6 bits per character) supplemented by a Model 817 disc file of 800,000,000 characters.”

How does this translate into performance for a computer today?

It was a “Control Data Corporation CDC 7600,” by the way, and the book says it retailed for $9 million to $15 million to commercial buyers.

Thanks.

Looks like a 36 MHz machine with 4 megabytes of RAM and, I think, an 800 MB hard drive.
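
If anyone wants to check that translation, here’s a quick back-of-the-envelope sketch just re-doing the arithmetic from the Guinness quote (the disc file is left out here, since that gets sorted out further down the thread):

```python
# Rough conversion of the quoted 1972 specs into modern units.
ops_per_second = 36_000_000        # "36 million operations per second"
small_core = 655_360               # characters
large_core = 5_242_880             # characters
bits_per_char = 6                  # "(6 bits per character)"

clock_mhz = ops_per_second / 1e6
ram_bytes = (small_core + large_core) * bits_per_char / 8

print(f"~{clock_mhz:.0f} MHz equivalent clock rate")   # ~36 MHz
print(f"~{ram_bytes / 2**20:.1f} MB of core memory")   # ~4.2 MB
```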

My computer around 1989 was, if I recall correctly, 10 MHz, 650 KB of RAM, and a 40 MB hard drive.

I don’t know what modern computers’ access time is.

-FrL-

Control Data Corporation (CDC) 7600: 1971–1983 - Drool over this. :slight_smile:

Access time is a function of the system RAM.

PC100 SDRAM has an access time of approximately 10ns. The newer RAM types have dropped that time considerably, in that more information can be transferred per clock cycle.

The term “access time” is no longer used to describe RAM; rather, it’s now described as peak memory bandwidth. PC1600 means 1.6 GB/sec.
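
As a rough sketch of where those numbers come from (this assumes the standard 64-bit memory module bus; the clock figures are the nominal ones):

```python
# Peak bandwidth = bus clock (MHz) x transfers per clock x bus width in bytes.
# With MHz and an 8-byte bus, the result comes out directly in MB/s.

def peak_bandwidth_mb_per_s(clock_mhz, transfers_per_clock, bus_bytes=8):
    return clock_mhz * transfers_per_clock * bus_bytes

print(peak_bandwidth_mb_per_s(100, 1))  # PC100 SDRAM:  800 MB/s
print(peak_bandwidth_mb_per_s(100, 2))  # PC1600 DDR:  1600 MB/s, hence "PC1600"
```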

That 27 nanosecond ‘access time’ happens to be 1/36,000,000 of a second, so it was a 1 instruction per cycle machine.
These days, access time usually refers to the speed at which data can be fetched from memory, not how fast the instructions can flip inside the CPU.
Back then, core memory had an ‘access time’, in the modern sense, of about a microsecond. That’s faster than a modern hard drive, but still glacial compared to the 190-nanosecond RAM of 1980s tech.
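
Spelling that arithmetic out (a quick sketch; the latency figures are just the ballpark numbers from this thread plus a typical disk access time):

```python
clock_hz = 36_000_000
cycle_ns = 1e9 / clock_hz
print(f"{cycle_ns:.1f} ns per machine cycle")   # ~27.8 ns, i.e. the quoted "access time"

# Rough latency ladder, in nanoseconds (ballpark figures only):
latencies_ns = {
    "1970s core memory":         1_000,       # ~1 microsecond
    "1980s DRAM":                  190,
    "modern hard drive access": 8_000_000,    # several milliseconds of seek + rotation
}
for name, ns in latencies_ns.items():
    print(f"{name:>26}: {ns:>9,} ns")
```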

As Rico points out, the meaning of access time and RAM speed measurement continues to evolve.

If those are 6-bit characters, then 800,000,000 characters would be 800M * 6 bits, or 4,800M bits, or 600 MBytes. Assuming no parity.
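
Same arithmetic in code form, for the record (assuming no parity, as noted):

```python
chars = 800_000_000           # Model 817 disc file, in 6-bit characters
bits = chars * 6              # 4,800,000,000 bits
byte_count = bits / 8         # 600,000,000 bytes
print(f"{byte_count / 1e6:.0f} MB")   # 600 MB (decimal megabytes)
```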

Well, of course. That’s what I meant. :slight_smile:

-FrL-

You mean one memory access per clock, of course, not one instruction per clock. I taught assembly language on one of these machines - actually on a simulator running on the machine, so the students got good dumps and didn’t mess anything up.

CDC machines, in case anyone wondered about the six bit characters, were 60 bit machines. They had very advanced, for the day, logic to schedule instructions to maximize use of functional units. Any decent microprocessor today does more, of course. IIRC, it would not get up to one instruction per clock except for very exceptional instruction mixes. It was definitely a CISC machine.

Famous stuff that ran on this machine was the original Pascal compiler (which claimed to be portable but assumed 60 bit words for sets) and PLATO. Thus, much of the computing for the four color problem ran on Cybers.

One of my favorites, back in my computer architect days.

But it’s not accurate to compare it that way.

The CDC 7600 specs are given for the Central Processing Unit only. The CPU did no I/O processing; it operated only within cache & main memory. There were between 10 and 20 Peripheral Processors which did all I/O processing, and basically kept the main memory filled with raw data for the CPU to work on. Each of these was about 1/20th of the speed of the CPU, and had 1/8 to 1/16 the amount of memory of the CPU. So realistically, the PPUs in total are equivalent to a second machine.
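
Just multiplying out those ratios (a rough sketch, speed only; the memory figures are ignored here):

```python
ppu_speed_vs_cpu = 1 / 20                 # each PPU at roughly 1/20th the CPU's speed
for ppu_count in (10, 20):
    aggregate = ppu_count * ppu_speed_vs_cpu
    print(f"{ppu_count} PPUs ~= {aggregate:.1f}x the CPU's raw speed")
# 10 PPUs give ~0.5x, 20 give ~1.0x: roughly a whole extra machine's worth.
```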

Also, the 7600 CPU was the first machine where Seymour Cray used pipelining. This meant that the machine could have periods where it actually processed more than 1 instruction per machine cycle. It took the right kind of problem (and good programming) to do this regularly. This technology was extended much further in the later Cray machines. And now current microprocessors do some of this within the chip itself.
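
Here’s a toy model of what pipelining buys, in case it helps; the cycle counts are made up for illustration and are not the 7600’s actual timings:

```python
# A pipelined functional unit can start a new operation every cycle even
# though each individual operation takes several cycles to complete.
# Latency and operation counts below are illustrative only.

def unpipelined_cycles(n_ops, latency):
    return n_ops * latency            # each op finishes before the next starts

def pipelined_cycles(n_ops, latency):
    return latency + (n_ops - 1)      # fill the pipe once, then one result per cycle

n_ops, latency = 100, 5
print(unpipelined_cycles(n_ops, latency))   # 500 cycles
print(pipelined_cycles(n_ops, latency))     # 104 cycles, so nearly 1 result per cycle

# With two pipelined units (say, an adder and a multiplier) both kept busy,
# two results can complete in the same cycle, which is how bursts of more
# than one instruction per cycle become possible on the right instruction mix.
```
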
And I would not agree with Voyager’s classification of this as a CISC machine.
Starting with the CDC 6600, Cray started a trend toward RISC machines. Compare the number of instructions for any of them to the IBM 360 series, for example. I consider Seymour Cray the original RISC designer.

Bandwidth and latency (access time) are two different things. Both are critical to system performance.

There is probably no good way to compare this computer with any machine the OP is likely to have ever used directly. What t-bonham@scc.net mentions about PPUs means that most benchmarks will be skewed one way or the other depending on how important I/O is to them. The fact that the 7600 used six-bit characters, as opposed to the modern standard of eight-bit characters, means all text-processing benchmarks will be fairly useless. The floating-point format used by the 7600’s hardware is likely different enough from the IEEE standard used in modern hardware to render floating-point benchmarks meaningless.

A pure integer benchmark with no I/O might be fair to both computers in one sense, but a modern CPU on a commodity desktop personal computer spends most of its time waiting for I/O to complete when it’s running any software a human interacts with directly (web browser, text editor, etc.). Therefore, you run into the PPU issue mentioned previously.

There is probably one somewhat meaningful way to compare the relative ‘capacity for work’ of both systems: Run PLATO or some other CDC 7600 software on a CDC 7600 emulator running on a modern computer. You can get a feel for how much faster the real hardware is than the emulated hardware by noticing how hard the desktop machine has to work to give a good experience with the emulated machine. Using this method, I’ve discovered that my Pentium-M laptop with 1 gig of RAM can emulate any eight-bit microcomputer (Commodore 64, BBC Micro, etc.) without breaking a sweat, can run any PDP-10 software I can find (TOPS-10, ITS, and TWENEX) with about equal ease, and only starts to labor when I run something along the lines of Fifth Edition Unix on a PDP-11 or 4.3BSD on a VAX. Running an emulated CADR Lisp Machine is also fairly taxing.

(No points for guessing I’m a retrocomputing geek with more disk images, tape images, and emulators than actual pieces of physical hardware. SIMH and Bitsavers.org are my sword and my shield.)

Here’s a link to the Cray-1 supercomputer from the ’70s. I have to wonder if anybody born in the nineties or later knows the term supercomputer. Link to history of Cray Computer and founder. I was always proud of the fact that the world’s best computer manufacturer was making them in Wisconsin when the home computer boom took off. I love the design of the early Cray models; they’re so sharp and modern. The Cray is very sharp and modern.

It’s hard to compare today’s computers to those from the ’70s due to their custom-made, individual nature.

All I know is that it works faster than I can type, though that can be said about any computer except a Timex-Sinclair TS-1000 and my Tandy WP2 “word processor,” and can calculate faster than I can, though THAT can be said about most third graders.

RISC vs. CISC does not depend on the number of instructions, but on the complexity of the instructions that are implemented. It’s been a while, but I seem to remember the 7600 having some fairly complex ones. The 360 instruction set, being targeted for so many application domains, was just absurd in its size - CDC machines, being mostly meant for scientific computing, didn’t need all that BCD junk.

RISC came out of microprogramming. The IBM RISC designers, who really invented the concept if not the name, were well versed in microcode, and Dave Patterson did his dissertation on microprogramming. That was not part of Cray’s heritage at all.

I do agree totally with your point on I/O processors. People today don’t appreciate what a big part of mainframes this was. That work started getting migrated back into the main system in the '80s, I believe, though you could consider the processors in a disk drive or printer connected to a PC to be examples of I/O processors, in a sense.

Here’s a definition of RISC from John Mashey which seems to hit the high points:


MOST RISCs:
	3a) Have 1 size of instruction in an instruction stream
	3b) And that size is 4 bytes
	3c) Have a handful (1-4) addressing modes (* it is VERY
	hard to count these things; will discuss later).
	3d) Have NO indirect addressing in any form (i.e., where you need
	one memory access to get the address of another operand in memory)
	4a) Have NO operations that combine load/store with arithmetic,
	i.e., like add from memory, or add to memory.
	(note: this means especially avoiding operations that use the
	value of a load as input to an ALU operation, especially when
	that operation can cause an exception.  Loads/stores with
	address modification can often be OK as they don't have some of
	the bad effects)
	4b) Have no more than 1 memory-addressed operand per instruction
	5a) Do NOT support arbitrary alignment of data for loads/stores
	5b) Use an MMU for a data address no more than once per instruction
	6a) Have >= 5 bits per integer register specifier
	6b) Have >= 4 bits per FP register specifier
These rules provide a rather distinct dividing line among architectures,
and I think there are rather strong technical reasons for this, such
that there is one more interesting attribute: almost every architecture
whose first instance appeared on the market from 1986 onward obeys the
rules above .....
	Note that I didn't say anything about counting the number of
	instructions....

It is true that the concept of RISC was immediately latched on to by marketers but that doesn’t mean it has no technical meaning. RISC was actually a somewhat complex idea that coalesced a lot of ideas that started making sense when compilers got good, on-die transistors got cheap, and memory access started to get relatively slow compared to on-die cache and register access. (Note how many features are focused on making memory access as rare and as stereotyped as possible.)