Fastest Supercomputer vs. P4

I was just reading that The Earth Simulator in Yokohama, Japan performs 35.86 trillion calculations per second — more than 4 1/2 times greater than the next-fastest machine. Is it possible to do any comparison between that and a top of the line pentium 4? How many calculations per second is a P4 capable of?

To a rough approximation, a processor does one calculation per clock cycle, which would mean somewhere in the vicinity of two billion calculations per second for the current top-line 2 GHz Pentia. In actual practice, it depends on how many basic operations you do per clock cycle (some architectures can do two or more operations per cycle), how many basic operations are needed per “calculation”, and how much memory is required for the calculations (memory often ends up being a bottleneck).
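That back-of-the-envelope can be written out as a one-liner. This is just a sketch of the approximation above, not a real benchmark, and the figures plugged in (2 GHz, one or two ops per cycle) are illustrative guesses:

```python
def peak_ops_per_sec(clock_hz, ops_per_cycle=1):
    """Crude upper bound: clock rate times operations per cycle.
    Real sustained rates come in well below this, since memory
    bandwidth is often the bottleneck."""
    return clock_hz * ops_per_cycle

# A ~2 GHz chip doing one calculation per cycle: ~2 billion ops/sec.
print(peak_ops_per_sec(2.0e9))
# The same chip doing two operations per cycle doubles the estimate.
print(peak_ops_per_sec(2.0e9, ops_per_cycle=2))
```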

Incidentally, that supercomputer you mention is probably actually a massively-parallel system of several thousand off-the-shelf processors, so if you knew how many it used, you could get an exact comparison.

Funny you should bring this up. Cray today revealed their new X-1 supercomputer. The new US supercomp blows the Japanese one out of the water, offering “up to 52.4 teraflops, or trillion mathematical calculations per second”. This machine is using “4,098 custom-designed 800MHz Cray processors” but Cray is also working on a 40 tflop machine called “Red Storm” for Sandia National Laboratories that “will contain about 10,000 AMD Opteron processors”, which are all off-the-shelf procs.

Linkage: http://www.msnbc.com/news/835023.asp

So, how many years back would one have to go for the top of the line supercomputer to equal a top of the line desktop of today? When would 2 billion calculations per second have been the domain of the fastest supercomputer?

c 1996

1996? Really? If true, that’s amazing. If I could take my P4 back in time to, say, 1992, and show it to some Intel engineers, would it amaze them, or would it be like, “Yeah, we could do this now if we wanted, but why bother, since we can just speed them up a little at a time and make more money”?

“Blows out of the water” is a bit of an exaggeration – the Cray is only 23% faster than the NEC computer, which was put into service eight months ago. That’s hardly the major jump that the NEC represented over the then state-of-the-art, and I wouldn’t bet against NEC following up with another leap-frog. Cray can’t match the R&D money that NEC can pump in. IBM can, however, and has shown a determination to keep up in the supercomputer war.

I might be mis-remembering but IIRC, the comparison is still a bit of apples and oranges overall since the memory and internal data throughput pipeline of a supercomputer (even in 1996) was wider and faster than even current desktop PCs can manage.

I don’t think it’s 1996.

Actually, I was looking this stuff up yesterday.

A top-of-the-line P4 can manage a sustained performance of not too much more than 1 Gflop. If the software is written to do all the work in matrix math, it would probably be in the 2 Gflop range.
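You can get a rough sustained-Gflop figure for your own machine the same way: time a big matrix multiply and divide the operation count by the elapsed time. A minimal sketch, assuming NumPy is installed (its matrix routines are about as optimized as desktop code gets), using the standard approximation that an n×n matmul costs about 2n³ floating-point operations:

```python
import time

import numpy as np  # assumption: NumPy is available


def matmul_gflops(n=1024):
    """Estimate sustained Gflop/s from one n x n matrix multiply.
    A dense matmul performs roughly 2*n**3 floating-point operations."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    t0 = time.perf_counter()
    a @ b
    elapsed = time.perf_counter() - t0
    flops = 2 * n**3
    return flops / elapsed / 1e9


print(round(matmul_gflops(), 2))
```

The exact number will swing with matrix size and library build, which is why sustained figures are always quoted as rough ranges.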

Nonetheless… that is damn amazing.

The Top 500 fastest list only goes back to 1993. But if you took a top-of-the-line machine (still just a few-thousand-dollar home computer) back to 1993, it would be about 400th fastest on the planet.

That is not even 10 years!

The Cray-2 had a **peak** performance of 1.9 gigaflops. It wasn’t until the Cray Y-MP8D in 1988 that a sustained 1 Gflop was achieved.

So, about 15 years ago, what is now a high-end (but not exotic) home PC would have been in the hunt for fastest computer on the planet.

Another interesting note: the Cray-2 had a maximum memory of 2GB. Your average new PC today can hold more than that.

Looking back at past years, it seems to hold pretty steady: the fastest machine on the planet from 14–18 years ago is roughly what’s available as a home PC today.

Think about it: if this holds up, in 15 years or so you’ll be able to own the equivalent of the “Earth Simulator’s” computing power for a couple grand, and it will sit beside your desk. Just imagine what Doom 17 will be like…

I recall that they used to apply a version of Moore’s Law to this effect. If you break down platforms into three parts, supercomputer, midrange, and PC, each platform matches the performance of the platform above it after seven years.

That would mean 14 years, which is pretty close to scotth’s data.
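That rule of thumb can be sanity-checked with Moore’s-law arithmetic: if performance doubles every D years, catching up a gap of R× takes log₂(R) × D years. Plugging in the thread’s own numbers (Earth Simulator at ~35.86 Tflops, a P4 at a sustained ~2 Gflops) — a sketch, with the doubling period being the big assumption:

```python
import math


def years_to_catch_up(ratio, doubling_years=1.5):
    """Years for the slower tier to match the faster one, assuming
    performance doubles every `doubling_years` (a Moore's-law-style guess)."""
    return math.log2(ratio) * doubling_years


# Earth Simulator (~35.86 Tflops = 35,860 Gflops) vs. a ~2 Gflop P4.
ratio = 35_860 / 2  # about 18,000x, i.e. ~14 doublings

print(round(years_to_catch_up(ratio, 1.5), 1))  # ~21 years at 18-month doubling
print(round(years_to_catch_up(ratio, 1.0), 1))  # ~14 years at 12-month doubling
```

With an aggressive 12-month doubling period the answer lands right on the 14-year figure; the classic 18-month period stretches it to about 21 years, which brackets the 14–18-year range observed above.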

Or Windows2020? Sorry, couldn’t resist.

Anyone care to shed some light on what they actually do with these supercomputers and what problems they’ve solved? All I’ve been able to find are vague “atmospheric simulations, financial projections, fluid flow modeling, etc…”

Seriously, what exactly are they working on? I imagine little ditties like “While i is not prime do i++” would get stale after a while - and I can’t imagine a bunch of researchers sitting around playing super-multiplayer Doom. Ok, maybe I can… :smiley:
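For what it’s worth, that little ditty is actually runnable. A minimal sketch using plain trial division (definitely not what anyone would burn supercomputer time on):

```python
def is_prime(n):
    """Trial division -- the simplest possible primality check."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


# "While i is not prime do i++":
i = 90
while not is_prime(i):
    i += 1
print(i)  # 97
```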

I think that Chronos’ rule of thumb using megahertz is rather misleading on the concept of computer speed, but I see his point. Megahertz, or gigahertz as the case may be, measures the internal clock speed of the chip. This is not the same as the number of floating-point calculations per second that the computer can make on a sustained basis or in a burst of speed. These are called “FLOPS”, and the speed of general-purpose computers (ones that may be programmed for most tasks) is measured in them.

If you do not use general-purpose chips, but rather make custom chips to do a specific task, those custom chips will be a lot faster than general-purpose chips. It is my understanding that the NEC Earth Simulator uses custom-purpose chips to do things like weather modeling, a very important task to which general-purpose supercomputers are frequently dedicated. If you are going to use your supercomputer to run essentially one task, such as weather modeling, you might as well use special-purpose chips. (I recall a few years back that special-purpose chips for calculating orbits of astronomical objects had something like a million times the speed of general-purpose chips with software.)

An example of where comparing megahertz will not work is the current Macintosh computers, which have slower clock speeds than their Intel-based competitors. The Macs are simply faster anyway.

I’d show it to the Cyberdyne engineers. What could possibly go wrong…