Single-CPU Supercomputers?

      • While on another board, pondering the general state of the computer-farming industry, some people, including me, got to wondering who still makes ultra-fast single-CPU computers these days. Nearly all the biggest computers you read about now are parallel networks, even though many technical problems cannot be separated into non-interdependent tasks anyway.

Does any company still make very-high-speed single-CPU machines? Say, with CPUs running at 10-20X or more the speed of current consumer machines?

Hey Doug,
If you want a high-performance computer, you might try alienware.com. These computers are a bit expensive (about $100-200 more than normal), but they are great computers.
Also, you can go to Dell.com and take a laptop or a desktop (I recommend the XPS series) and upgrade to an 80 or 100 GB 7,200 RPM hard drive, 1 or 2 GB of RAM, a 2+ GHz Pentium processor, plus other upgrades, and then get a fast internet connection.
You can email me at golferdude507@aol.com if you have any more questions.

Josh

Well…

What, exactly, do you mean by “CPU speed”? Floating point operations per second? Do SIMD-type operations on vectors count as a single op or as multiple operations? When answering, bear in mind that the super high-end number-crunchers available at the time microcomputers were taking off (e.g., the Cray-1) were vector processors. If they don’t count, then would single-operation instructions that could potentially stall on dependencies count? If the answer to either of those questions is “yes,” then it seems that you are willing to allow some kind of parallel computation. If that’s the case, why wouldn’t massively parallel systems count? If it’s just an arbitrary “single-CPU” restriction, then do the new multi-core processors (AMD, Sun, and Intel all have ’em now) count as a single CPU? Are we neglecting the memory hierarchy and pretending all the data is sitting in the fastest cache?
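
Just to make the SIMD question concrete, here is a rough sketch in plain C with SSE intrinsics (purely illustrative): both functions below produce the same four sums, but the second one does it with a single ADDPS instruction. Whether you count that as one operation or four changes the number by a factor of four.

```c
#include <xmmintrin.h>   /* SSE intrinsics */

/* Scalar version: four separate floating-point adds. */
void add4_scalar(const float *a, const float *b, float *out) {
    for (int i = 0; i < 4; i++)
        out[i] = a[i] + b[i];
}

/* Vector version: one ADDPS instruction adds all four lanes at once.
   One op or four? The answer changes your FLOPS count by 4x. */
void add4_vector(const float *a, const float *b, float *out) {
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));
}
```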

Also, when you say “consumer machines,” what, exactly, do you mean? Are we talking the first mass-market consumer machines (like the IBM PC) or the first microcomputers (like the Altair)?

Also, if you’re going by floating point operations and not integer arithmetic, is it even fair to compare high-end number crunching hardware to consumer hardware that may not have had hardware support for floating point arithmetic? Remember the days when you had to buy a “math coprocessor” for that?

Hopefully those questions will clear things up. :wink:

Just to be clear–the reason that I asked this question isn’t because it’s relevant to the answer, but because I’m not entirely sure if an implication made by the question is true.

DougC asks if there are any computers still available that operate at 10-20X the CPU speed of what consumer machines run at…

But I’m not entirely sure if there ever were high-end number crunchers that were clocked that much faster than consumer machines, and I suspect the answer depends very much on what you choose as the first “consumer machine” and how you define CPU speed.

For example, if you pick the IBM PC as the first consumer machine and compare it to a Cray-1 (a supercomputer available at the time), and you only look at simple clock speed, then the IBM PC would have been clocked at (up to) 10 MHz and the Cray-1 at 80 MHz: only 8 times faster, by that naive criterion.
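
Spelled out, the back-of-the-envelope version of that naive comparison (clock speed only, ignoring vector units, word size, memory bandwidth, and everything else that actually mattered):

```c
#include <stdio.h>

int main(void) {
    double pc_mhz   = 10.0;   /* IBM PC class machine, upper end */
    double cray_mhz = 80.0;   /* Cray-1 */

    /* Naive "speed" ratio by clock rate alone. */
    printf("Cray-1 is %.0fx faster by clock alone\n", cray_mhz / pc_mhz);
    return 0;
}
```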

I will take this from a different angle. I am pretty sure that no one has access to chips that are dramatically faster than mainstream consumer chips. AMD and Intel are in competition, pushing the envelope of the consumer and business markets all the time. Mainstream chips and high-end chips like those used in servers differ only in small ways. I am certain that any company with a chip twice as fast as current off-the-shelf processors would be frantically finding a way to get it to as many customers as possible.

I work in IT and follow computing news closely. I say that they don’t exist. There are advanced prototype chips out there; I once interviewed with a Boston-area company that tests prototype chips before they are put into production. Those chips still have flaws and are less than two years away from production. Going by Moore’s law, that puts them at roughly twice the speed of current high-end chips.
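
Rough math behind that guess, treating Moore’s law loosely as a performance doubling every couple of years (the two-year doubling period and the two-year lead are assumptions, not measurements):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double doubling_years = 2.0;  /* assumed Moore's-law doubling period */
    double lead_years     = 2.0;  /* prototype is ~2 years from production */

    /* Expected multiple over today's chips, if the trend holds. */
    double multiple = pow(2.0, lead_years / doubling_years);
    printf("Roughly %.1fx the speed of current chips\n", multiple);  /* ~2.0x */
    return 0;
}
```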

Believe it or not, the chips that you can order off the web are the best general-application chips available to anyone, anywhere. I hear that Google simply harnesses the power of a huge number of pretty cheap computers connected in parallel to do its thing.

Hey Perplexivemaverick.
If you want a place to advertise for other businesses, you might try a different place. These ads may be a bit expensive, but they are better than tying up our forums to advertise.
Also, you can go to newegg.com and find all the parts that they put into Alienwares (I recommend the FX-57) and put them into a pretty case for much less.
You can buy the parts yourself and put them together yourself. If you have any questions, a separate thread would be the place to ask.

cb

Here is a thread you will like that I started a while ago.

It is called: "Do the major PC chip makers limit the speed of new chip developments on purpose?"

http://boards.straightdope.com/sdmb/showthread.php?t=275921&highlight=chip

That’s right. And in fact the technology to make much faster CPUs does not exist. Increased speed comes from new process nodes, and Intel, whose manufacturing is better than their design, is first or close to first on those. Given a process node, you might get more throughput (not clock speed, which is not that important) by adding features - but adding features increases chip size, which increases the distances signals travel, which slows down throughput. Splitting the CPU into two chips makes the situation much worse.
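
A back-of-the-envelope illustration of that wire-delay point, with made-up but plausible numbers: even if a signal crossed the die at the speed of light (real on-chip wires are RC-limited and far slower), crossing a large die already eats a noticeable fraction of a clock cycle.

```c
#include <stdio.h>

int main(void) {
    double c         = 3.0e8;   /* speed of light in m/s (best case) */
    double die_width = 0.02;    /* assumed 20 mm die                 */
    double clock_hz  = 3.0e9;   /* assumed 3 GHz clock               */

    double crossing_s = die_width / c;   /* ~67 ps even at light speed */
    double period_s   = 1.0 / clock_hz;  /* ~333 ps per cycle          */

    printf("Die crossing: %.0f ps, clock period: %.0f ps (%.0f%% of a cycle)\n",
           crossing_s * 1e12, period_s * 1e12,
           100.0 * crossing_s / period_s);
    return 0;
}
```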

It’s no accident that the supercomputer speed record these days is always held by the makers of standard machines, and that there are no more Crays.

Cray got its speed, by the way, by using bipolar, instead of CMOS, logic. There is no advantage to bipolar these days, since you can’t cram as many transistors on a chip, and thus lose big in interconnect delays.

<mod>

chaoticbear, you’re a charter member, you should know better than to accuse somebody of spamming in a thread. Use the “Report Post” feature - let the mods make the decision.

This isn’t a warning, just a…well, as Tubadiva puts it, a scolding. Leave the moderating to us, help us with the Report Post feature, but leave it out of the thread.

perplexivemaverick, welcome to our boards. While we frown on people advertising for businesses, we certainly don’t frown on people suggesting reputable businesses when people ask for suggestions. The only problem I see with your post is you misunderstood the OP. He was not asking about a computer for home or gaming use.

Carry on.

</mod>

      • To put it simply: for this discussion, Alienware doesn’t count. They use regular consumer-level hardware. For a few thousand dollars you can buy a very fast desktop PC; what I am asking is: if you had, say, one hundred thousand dollars and needed the fastest small computer possible, what is available? We can even assume that it does not need to run any specific OS, since the software you would run on it will be written specifically for it anyway; the only constraint is that parallel processing would not be useful. It’s quite surprising to me that there is nothing beyond the stuff any ordinary credit card can buy at online PC parts vendors.

Well, you can still buy workstations from Sun and SGI, based around UltraSPARC and MIPS processors respectively. However, while they probably offer more horsepower than x86, they tend to run at lower clock speeds. You can stack them with four or more times the RAM a PC would take, etc., so they are more powerful; however, they still rely on multiprocessing for extreme tasks.

As others have said, the fixed costs of developing processors are now so high that it generally does not make sense to do it unless you have a large sales volume, which means general-purpose processors. And since this is the market AMD/Intel target, you may as well use their chips and focus your development efforts on the bus, memory controllers and other elements. It’s all about the trade-off between cost and the benefits of extreme specialisation. Also, the limits of what can be engineered onto a single piece of semiconductor come into play and prevent anyone from building something that offers performance orders of magnitude greater than what a competitor can build for a similar task.
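
To put some (entirely hypothetical) numbers on that trade-off: the fixed development cost (NRE) gets spread over every chip sold, so a low-volume custom part ends up absurdly expensive per unit compared with a mass-market one.

```c
#include <stdio.h>

/* Per-chip cost = fixed development cost spread over the volume,
   plus the marginal manufacturing cost of each chip. */
double per_chip(double nre, double unit_cost, double volume) {
    return nre / volume + unit_cost;
}

int main(void) {
    double nre  = 200e6;  /* hypothetical $200M to design a modern CPU */
    double unit = 50.0;   /* hypothetical $50 to manufacture each one  */

    printf("10K units: $%.0f each\n", per_chip(nre, unit, 1e4));  /* ~$20,050 */
    printf("10M units: $%.2f each\n", per_chip(nre, unit, 1e7));  /* ~$70     */
    return 0;
}
```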

It’s also interesting to look at GPUs - these are high-volume super-dedicated chips where there are big payoffs for specialisation, and a huge market. As a result they are now far more complex than CPUs and massively more powerful at their specific dedicated task, but totally useless for general computing. And again, once you hit the engineering limits of what you can do with a single chip, throwing multiple chips at the problem is necessary.

So what do you want this hypothetical super-powerful desktop for? If it’s general computing (productivity software) then it’s far more cost-effective to stuff it full of generic CPUs, be they PowerPC, AMD, Intel, or whatever. If it’s for a dedicated task, then you build an add-on processor for that task and use generic CPUs for controlling it.

And that brings up an interesting example.
According to a text on cryptography I read, when the CIA/NSA need to crack a well-encrypted document, their preferred method is actually to run the encryption through exactly what slaphead suggested… they have a custom ASIC [1] that they pipe the file into, which runs the decryption job gobs faster than a building full of PCs.
It would be an elegant solution to all kinds of computing problems, except that the expertise and the hardware needed to do what the intelligence agencies do are sucktacularly expensive.
Want to get abstract, interesting, but nearly useless comparison data from one CPU line to another?
I would recommend checking out how various CPUs from various vendors fare in decryption jobs by looking at distributed.net, but the exact link to send you to is down right now due to a problem with their stats server.
I remember when I found out what you are finding out… that commercial off-the-shelf hardware is just as fast at the CPU level as low-volume proprietary solutions. I was kinda’ shocked, and a little bit sad. I had hoped to learn about the high-end stuff, but… I was already running it, FPU-wise. The big difference with high-end systems appears to be bus speeds, number of CPUs, RAM amounts and (sometimes) I/O processing speeds and disk speeds.

[1] http://en.wikipedia.org/wiki/Application-specific_integrated_circuit

You need to figure out which of the many benchmarks are closest to your application. For instance, I believe the Itanic has pretty good floating point performance, though it is lacking in many other ways.

Mr. Slant, I would probably go with an FPGA long before a custom ASIC. They may not be as fast, but they are a hell of a lot cheaper in terms of NRE (non-recurring engineering cost), and much safer for people just starting out. Design tools are cheaper also. Of course you’d have to design a board and an interface and a whole system.
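
The usual way to frame that FPGA-versus-ASIC decision, again with hypothetical numbers: the ASIC carries a big up-front NRE but a low per-unit cost, the FPGA is the opposite, so there is a break-even volume below which the FPGA wins.

```c
#include <stdio.h>

int main(void) {
    /* All figures hypothetical, just to show the shape of the trade-off. */
    double asic_nre  = 1.0e6;   /* mask set + design cost for the ASIC   */
    double asic_unit = 10.0;    /* per-chip cost once it's in production */
    double fpga_unit = 200.0;   /* per-device cost of a comparable FPGA  */

    /* Volume where total costs match: asic_nre + asic_unit*v == fpga_unit*v */
    double breakeven = asic_nre / (fpga_unit - asic_unit);
    printf("Break-even at about %.0f units\n", breakeven);  /* ~5,300 units */
    return 0;
}
```

Below that volume (or for a one-off number-crunching box), the FPGA route is the cheaper and safer bet; above it, the ASIC’s unit cost wins.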