1990 Cray supercomputer vs. my grandmother’s Dell

Let’s see if I can trump that. I sat on (and was a computer operator of) the Cray 1 (serial number 1) in its original days at Livermore, in the late 1970s. I wrote a program for it using Livermore’s enhanced version of Fortran, which allowed the programmer to write vector operations. I don’t know what became of that machine in later years. Did it go to Los Alamos? Are we talking about the same machine?
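(Not Livermore’s Fortran, obviously, but here’s a rough modern sketch of the same idea in NumPy: writing the operation on whole vectors instead of an explicit element loop. The array names and sizes are just made up for illustration.)

```python
import numpy as np

n = 100_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar style: one element at a time, the way a plain loop of the era would do it.
c_scalar = np.empty(n)
for i in range(n):
    c_scalar[i] = 2.0 * a[i] + b[i]

# Vector style: the whole operation expressed on the arrays at once, which is
# roughly the spirit of writing vector operations directly in the source.
c_vector = 2.0 * a + b

assert np.allclose(c_scalar, c_vector)
```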

There are one (or maybe two) Cray computers – possibly including this very unit – on display now at the Computer History Museum in Mountain View. Photo of someone (who isn’t me) sitting on a Cray computer.

Links to everything you ever wanted to know about Cray computers from the Museum archives. (ETA: Lots of pictures and several technical reference manuals in PDF format.)

I can’t trump sitting on one, or writing for one, but I do own two circuit boards out of one. They are astoundingly heavy, being basically solid copper with two PCBs, one on each side, bolted to the copper core. The core inserted into rails that were liquid cooled. That was back when they really engineered computers. (My cards came from a machine at a US lab, but unless I dig deep, I can’t remember which one.)

How are zombies at doing vector operations? Can they perform matrix calculations well?

Well, they must not be memory cards, since you mentioned that before - post #29!

:smack: :smiley:

One of the many reasons I hated Crichton is that in Jurassic Park he had the real-time control of the park done by a Cray computer; probably the worst possible choice. The movie showed a Thinking Machines supercomputer in the background.

I thought the Cray was used for genome sequencing rather than control, but it is a loooong time ago. Thinking Machines loaned four CM-5 Scale-4 cabinets for the making of the movie. The CM-5 LEDs were a standalone system and you could put them into a couple of standard blinking pattern modes, even if the rest of the cabinet was empty. (Unlike the CM-2, where the LEDs were on the matrix cards and would only blink when the system was running.) What was annoying was that in the movie the cabinets were set out side by side, not in the proper zigzag pattern of connected cabinets.

Genome sequencing is a good example of where conventional supercomputer architectures help not at all. The critical element is simply memory. Some colleagues of mine have recently commissioned a dedicated sequencing system. Not much compute, but a terabyte of RAM. OTOH, if you can’t get enough memory, a really fast parallel IO system helps a lot. Something that is often overlooked with all the supercomputer systems, and something Cray were really good at, was fabulous IO. Thinking Machines were pretty good too. The CM-2 had one of the first really serious RAID systems (the Data Vault: 144 SCSI disks in RAID 6 for a massive 9GB) and the CM-5 offered the Scalable Disk Array, where the RAID system was directly coupled into the data network of the machine. Fine machines.

Not only do I have the Cray cards, but I have eight CM-5 LED panels in a box. There were four per cabinet. One day I’m going to get them going again :slight_smile:

What’s a flop? For the record, the singular of flops is flops: floating point operation(s) per second.

In 1976, it took 1200 hours on the Illiac IV, a supercomputer of its day, to check around 1800 “minimal irreducible” map configurations to prove the 4 color theorem. Eighteen years later it took overnight on a PC to check around 600 such configurations (the mathematical analysis had reduced the number needed by 2/3).
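Rough back-of-the-envelope on those figures (treating “overnight” as about 12 hours, which is just an assumption on my part):

```python
# Per-configuration comparison using the figures quoted above.
# "Overnight" is assumed to be ~12 hours; the actual run time wasn't stated.
illiac_hours, illiac_configs = 1200, 1800
pc_hours, pc_configs = 12, 600

illiac_min_per_config = illiac_hours * 60 / illiac_configs   # ~40 minutes each
pc_min_per_config = pc_hours * 60 / pc_configs               # ~1.2 minutes each

print(f"Illiac IV: ~{illiac_min_per_config:.1f} min per configuration")
print(f"1994 PC:   ~{pc_min_per_config:.1f} min per configuration")
print(f"Rough per-configuration speedup: ~{illiac_min_per_config / pc_min_per_config:.0f}x")
```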

In the early days of PCs, older generations of mainframes had much greater I/O capacity than even high-end PCs. The first PC hard drive I saw was just 10MB. Two days ago at Costco, I saw a 1TB drive on a PC!

If you count the GPU, the latest PCs can run at more than a teraflop, and software can use the GPU for non-graphical tasks, albeit with limitations; the CPU can be used for what the GPU can’t easily do.
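As a sketch of what that kind of GPU offload looks like in practice (this assumes a CUDA-capable card and the CuPy library, neither of which anyone in this thread mentioned):

```python
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU and the CuPy library installed

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

# Copy the matrices to GPU memory and do the multiply there.
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu

# Bring the result back to the CPU for whatever the GPU can't easily do.
c_cpu = cp.asnumpy(c_gpu)
print(c_cpu.shape)
```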

Also, how fast are supercomputers when measured by instructions per second (not the same as flops, which are floating-point)? In this case, many programs can run without using any floating-point. PC CPUs are also designed to handle a wide diversity of tasks as efficiently as possible, so they’d likely easily outperform an equivalent-speed supercomputer in general purpose computing (i.e. what most people do on their computers).

I’m curious about questions like does my cell phone have more computing power than the Apollo 13 capsule?

Trivially. The compute power of the Apollo computers was tiny. For the time they were a technological marvel, but their speed and capability were astoundingly limited by even the most modest modern standards. On the other hand, their development was a critical enabler for the programme. They were just enough, just in time.

Searching reveals this thread.

This is clearly a horses for courses issue. Bottom line is that the really big hard problems are mostly, but not exclusively, mathematical in nature and require floating point. Pretty much any form of modelling or simulation (physics, chemistry, and so on) is expressed in mathematical form. There are a lot of paradigms, and as the systems get bigger new science becomes tractable; interestingly, some science that was once the preserve of supercomputers becomes commonplace on desktops.

But some serious areas don’t need floating point. Genome sequencing is one.

There are other ways you can optimise a CPU. Server CPU designs don’t need much if any floating point, but do need the ability to run a large number of concurrent tasks at once; typically, on-line transaction processing systems (which is what your big web server systems become) need to service millions of tiny transactions. So you get designs that optimise for this. Additionally you can get things like IBM’s Power that are now adding units such as encryption units on the chip, so that your web transactions can offload the encryption to a dedicated unit.

A modern x86 is pretty remarkable in the way it is optimised to cope with less structured code flows. Multiple instruction decode units, huge branch prediction caches, call frame caches, speculative execution control, and so on. Not a lot of this really gets much traction on traditional highly structured big compute jobs. But it is seriously impressive how well it makes an x86 work on codes that have less structure.

Now the vast majority of supercomputers are x86 based, with the next big thing being the uptake of GPU based processors - NVidia’s Tesla for instance. Not suitable for all problems by any means, but pretty worthwhile if you can use it. Again optimised for highly structured floating point intensive jobs. If you look inside it looks awfully reminiscent of older supercomputer systems. But instead of half a cabinet of logic you cram it all on one chip.

Let’s see, 20 years ago…the pole speed at the 1992 Indianapolis 500 was over 232 miles per hour. That’s an average speed, over four laps on a relatively flat 2.5-mile oval.

If you can head down to your local dealership – I don’t care which one – and pick up a “family sedan” capable of even making the field at Indy that year, much less sitting on pole, I’d love to know about it so I can pick one up.

Are there family sedans out there capable of performing identical tasks of SOME race cars from 20 years ago? Sure there are…there are economy cars that can also outperform SOME modern race cars, because “race cars” is a term encompassing a wide variety of cars with a broad range of performance characteristics. But your average soccer mom is not out there driving a minivan that could lap the field at Indy 20 years ago.

The small microcontrollers used in things like coffee pots and microwave ovens these days are more powerful than the Apollo guidance computer. The engine computer in your car is much more powerful than the Apollo computer.

Your cell phone is many orders of magnitude more powerful.

Apollo computer:
2k of memory, clock speed 512 kHz (0.0005 GHz).

Samsung Galaxy 5 (from 2010, not the latest and greatest cell phone):
512 MB (512,000k) of memory, clock speed 1 GHz (1,000,000 kHz).
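Just running the ratios on those quoted figures, taken at face value:

```python
# Ratios based purely on the figures quoted above, taken at face value.
apollo_mem_k, apollo_clock_khz = 2, 512
galaxy_mem_k, galaxy_clock_khz = 512_000, 1_000_000

print(f"Memory: ~{galaxy_mem_k / apollo_mem_k:,.0f}x more")            # ~256,000x
print(f"Clock:  ~{galaxy_clock_khz / apollo_clock_khz:,.0f}x faster")  # ~1,953x
```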

You can buy a PIC microcontroller that has about 2k of memory on it for a couple of bucks, but it’s going to be much faster than the Apollo computer (your typical PIC runs at maybe 20 MHz to 40 MHz). And it will have more features like built in A/D converters and timers and such.

Indeedy. I suspect most people have little conception as to just how much power a modern car’s ECU has, or indeed just how staggeringly complex a modern car system is. Between 50 and 100 separate CPUs live inside a modern car. There is a level of real time control and a level of reliability that makes launching a Saturn V look trivial. Every day millions of cars drive on the roads with anti-lock brakes, anti-skid systems, airbags, collision detection, seatbelt pre-tensioners, all active, all the time. Cars driving in rain, snow, ice, blistering heat, maintained by all manner of people, or not at all, and yet people trust these systems, and the systems are worthy of that trust. It is remarkable.

I’ve heard this before and it just strikes me as insane that they’d need that many CPUs, or that anybody would think of using that many; it can’t be economical, can it, even if each one cost a dollar? Especially since they are probably more like microcontrollers than PC CPUs, so you could easily use one really powerful CPU (of course, not a PC CPU but a specialized chip) at a fraction of the cost, including assembly and support components; each one needs at least a voltage regulator and bypass capacitors. It’d probably be more reliable overall as well, since there would be just one CPU to fail; each CPU, even as a microcontroller, probably has at least a dozen support components (just look at a PC motherboard, which has hundreds of parts). And in the case of failure, it would be easier to design an emergency override that safely stops the car.

The vast majority of these CPUs are very little, and have a single job. They are one-chip wonders. A big change in a modern car is that the car is no longer controlled via a loom of wires, each one running to some component. Cars of old would run a wire to each indicator lamp, brake light, door lock, etc. A modern car runs the whole thing off a set of serial buses (CAN, the Controller Area Network, for simple stuff; FlexRay for fast stuff), and every device in the car has a little controller listening for instructions. Mundane bits like the lights; much more important bits like the wheel rotation and suspension sensors used to control the anti-skid and anti-lock systems. All individual CPUs living on a serial bus. This saves a huge amount of weight and build complexity in the car. The switches on the dashboard don’t connect to the device they control anymore; they connect to one of the car control computers, and that then commands the device. Good examples are in-car entertainment systems and climate control. The entertainment system now often doubles as the controller front end for other tasks: climate, parking sensors and so on.
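For a feel of what “a little controller listening for instructions” looks like from the software side, here’s a minimal sketch using the python-can library over Linux SocketCAN; the interface name, arbitration ID, and payload are all made up for illustration.

```python
import can  # python-can library; assumes a Linux SocketCAN interface is configured

# The interface name, arbitration ID, and payload below are made up for illustration.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# A switch on the dashboard doesn't drive the lamp directly any more; it tells a
# controller, which then broadcasts a command frame like this on the bus.
msg = can.Message(arbitration_id=0x1A4,  # hypothetical "exterior lights" ID
                  data=[0x01],           # hypothetical "headlights on" payload
                  is_extended_id=False)
bus.send(msg)

# Meanwhile every little controller on the bus just sits listening for frames
# addressed to the functions it owns.
frame = bus.recv(timeout=1.0)
if frame is not None:
    print(f"Saw frame 0x{frame.arbitration_id:X} with data {list(frame.data)}")

bus.shutdown()
```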

In that case, it is probably better to call them interface chips than CPUs (there are even chips with serial interfaces that are certainly not CPUs, such as a serial EEPROM memory), even if they do have a CPU inside of them, since processing power isn’t why they are used. Sort of like using a CPU inside of a computer mouse or keyboard. The ECU would need a “real” CPU though, and I’m assuming there is a centralized controller for all of that stuff, probably in the ECU, so my comparison to a PC (centralized CPU and peripherals) probably isn’t far off.

The ECU is not the central control unit for the car. The ACM or BCM (auto or body control module) is the central control system that coordinates all system-level functions. Most modern engines have at least two different ECMs. It would not make sense to have one controller performing critical functions (engine management, traction control, active safety systems, et cetera), secondary functions (HVAC, primary illumination and signaling, navigation), and incidental functions (secondary illumination, seat positioning, miscellaneous stepper motor functioning), as you could easily overload the processor with inane functions and generate a lot of EMI and crosstalk. Isolating functions to dedicated embedded processors not only reduces the need for extensive wire harnesses but also simplifies software and data management requirements.

I don’t have exact figures, but I highly suspect that the modern automobile is the most highly engineered device on the planet if one calculates the cumulative effort that went into developing the materials, manufacturing, controls, and dynamic and structural analysis technology that allows for the production of this product. More person-hours have gone into the evolution of automobile technology than into the Space Shuttle, B-2 aircraft, or nuclear power, and the technology is consequently more refined and astonishingly robust, to the point that you can rely on a modern, moderately priced automobile to function reliably for years with minimal maintenance and service; essentially replacing wear items, consumables, and lubricants.

Stranger