Whatever happened to non-Intel RISC workstation computers?

Back in the day, PC owners could only gaze enviously at HP and Silicon Graphics workstations and the like, with RISC CPUs that were used for all kinds of number crunching, modeling, and science-related work that PCs couldn’t handle. I’m not as much of a hardware geek as I used to be, but I can’t recall seeing anything about non-Intel “workstations” in the last few years.

Are Intel CPUs so powerful now that they drove RISC workstations out of business? Do workstation computers far more powerful than Intel x86-based units even exist as product lines anymore?

Intel and AMD kicked their butt.

While there are companies other than AMD and Intel in the market, my WAG would be that unless you require a purpose-specific chip, competition for the general market means those two have products so good there was no point trying to make the low-volume RISC stuff better.

They’re not all that anyhow. Lots of interesting commodity hardware just doesn’t want to work with them. I had an HP B1000 with a 300 MHz PA-RISC processor and a gig of RAM a while back. Unfortunately, I never could get anything but the wacky-output FX2 video card it came with to work in it, and the limited stuff I did with PA-RISC Linux via terminal session didn’t excite me much, since you have to compile nearly everything you get for it.

I bought mine from auctiondepot.com for way too much money, but they have the occasional one there for ~$40 plus shipping. Hell, if you’d answered my ad in the Bargain Finder last year you could have had mine for $40 flat.

If you want power, I recommend a dual-core Athlon64, which I sadly only get to build at work, not use. :wink:

I used to work for Intergraph, which made the “Clipper” chip at a fab in California. They were great at first, but the price was hellish and Intel just gradually got better until there was no good reason at all to buy this incredibly expensive piece of engineering. The workstation back then ran around US $60K. Or, as gaspacho put it, they kicked our ass.

Regards

Testy

Mainly, it was because nobody could keep up with the R&D work of Intel and AMD, and because nobody was making programs for non-x86 CPUs. There were a few dozen companies making Pentium clones in the early days, but they all died out because they couldn’t come out with chips that were as fast as a Pentium.

Whatever became of the lawsuits Intergraph had against Intel? IIRC, at least one was focused on Clipper.

I think the last RISC chipset used in anything even vaguely resembling a personal computer or workstation was DEC’s Alpha. DEC was bought by Compaq, which was bought by HP. HP stopped developing Alphas in 2004.

IBM and Sun continue to sell workstations with their own chips.

Recent floating-point (SPECfp) benchmarks:
IBM POWER 285 workstation (1900 MHz), 1 core (SMT off): peak = 3027
HP ProLiant DL385 (AMD Opteron 254), 1 core: peak = 2267
Dell Precision 670 (3.8 GHz Intel Xeon), 1 core (HT disabled): peak = 1920

Those are just samples of some of the best benchmarks for each processor.

Additional interesting info:
Sun’s new 8-core Niagara chip appears to be the real deal based on the first benchmark they posted.

Also, Mercury Systems will begin selling a Cell-based workstation in the spring. Based on postings from a developer programming for the Cell, the performance is as advertised.

And, AFAIK, HP workstations (e.g., the c8000) have non-Intel CPUs in them too. The website http://www.hp.com/workstations/risc/c8000/specs.html says they’re equipped with PA-8800 or PA-8900 “modules”. Those don’t sound Intel to me, but I’m not an expert. We use them to run CATIA V4, V5, etc. V4 won’t run on a PC anyway, and V5 runs better on the workstations than on the PCs here too.

NB

Intergraph won huge bucks. I think the lawsuit was over splitting the cache into data and instruction units, and there were some other things as well.

Testy

OK. I wonder how much they actually kept, and how much went to the attorneys and little folks like me who were “coding” documents for the lawsuit six years ago. I have no idea which side I was working for, actually. We just spent months circling names, dates, and subject lines in emails, for the most part.

One of the worst ways to compare processors is to compare clock frequencies. So, of course, this pretty much becomes the industry standard. :smack:

The whole idea behind RISC is in the name: Reduced Instruction Set Computer. You strip the instruction set down to a lean and mean set of instructions, using only those that you absolutely need. This allows you to streamline the hardware. Since a processor can only run as fast as its slowest stage, cutting the fat out of the execution stage allows you to run the processor at a higher clock speed.

Since RISCs ran at higher clock speeds, everyone thought they were faster chips. This wasn’t necessarily so. While a RISC chip might be clocked 50 percent faster than an Intel chip, the Intel can often do the equivalent of two or three RISC instructions simultaneously. When you really start comparing processors on how much work they can actually accomplish in a given second, the benefit of the higher RISC clock speeds starts to diminish. A lot of people started to realize that RISC wasn’t as great as they initially thought.
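
To put toy numbers on that argument (nothing below reflects any real chip; the figures are invented purely to show the arithmetic of clock speed versus work per instruction):

[code]
# Toy comparison: clock speed alone vs. work actually done per second.
# All figures are made up for illustration only.

def work_per_second(clock_hz, work_per_instruction):
    """Useful 'work units' completed per second, assuming one
    instruction completes per cycle."""
    return clock_hz * work_per_instruction

# Hypothetical RISC chip: 50% higher clock, simple instructions.
risc = work_per_second(clock_hz=450e6, work_per_instruction=1.0)

# Hypothetical CISC (x86-style) chip: lower clock, but each instruction
# does the equivalent of two or three RISC instructions.
cisc = work_per_second(clock_hz=300e6, work_per_instruction=2.5)

print(f"RISC: {risc:.3g} work units/sec")  # 4.5e+08
print(f"CISC: {cisc:.3g} work units/sec")  # 7.5e+08 -- the slower clock wins
[/code]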

On top of that, you have Intel and AMD pouring tons and tons of money into R&D, because they have tons and tons of money. As previous posters mentioned, that made it difficult for the RISC guys to keep up. It’s kind of a harsh truth in CPU design that once you get ahead of the other guys, it is easier for you to stay there: since you have the sales, you have the money for R&D, and since you have the R&D to make a better processor, you get more sales, and so on. It’s a big vicious circle.

The up-front costs of setting up a CPU manufacturing line are huge (typically somewhere in the neighborhood of $50 to $100 million). If you sell 10 million chips, that’s a cost to you of only ten bucks a chip. If you only sell 1 million chips, the setup costs have to be spread out over fewer chips, so your cost just jumped to $100 per chip. That means you have to sell your processor for more money, which makes it even harder to compete with Intel and AMD. The workstation market is a lot smaller than the desktop PC market, so the workstation guys end up forcing themselves out of the market through simple economics.
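
A quick back-of-the-envelope version of that amortization, using the figures above (the $100 million is the high end of the quoted range):

[code]
# Amortizing fab setup costs over unit volume -- illustrative numbers only.
setup_cost = 100e6  # $100 million to set up the manufacturing line

for chips_sold in (10_000_000, 1_000_000):
    per_chip = setup_cost / chips_sold
    print(f"{chips_sold:>10,} chips -> ${per_chip:,.0f} setup cost per chip")

# Output:
#   10,000,000 chips -> $10 setup cost per chip
#    1,000,000 chips -> $100 setup cost per chip
[/code]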

A lot of supercomputers have been forced to change the way they do business too. It used to be that supercomputers were all built on custom processors. Now they get better bang for the buck by using clusters of Intel chips than they would by designing their own dedicated CPUs.

By the way, the nail in the coffin for the DEC Alpha was when Microsoft decided to kill Windows NT for Alpha. Once that died, the market for the Alpha chip died with it. There were people using Unix operating systems on it, but not enough of them to keep the Alpha viable.

The problem with the R&D statement is that, based on benchmarks and such, AMD and Intel do not make the most powerful processors on the market.

So no, they did not, and have not (yet) kicked butt when it comes to CPU performance.

Economics is where they have kicked butt: large market, lower per-unit cost, etc.

Who else makes general-purpose chips that will benchmark faster for real-world applications, then? We had a thread a few weeks ago on buying the highest-performance chips available, and we concluded that the high-end consumer-level chips from AMD and Intel were, in fact, the high end for most every other application too. If you want a supercomputer, you just start hooking those processors together in parallel, and that is what happens in the real world today. There are no better general-purpose chips available to anyone at any price.

There is also a software component to this. Specifically, Windows NT and Linux are both x86-centric, Windows terminally so and Linux merely conventionally so.*

*(That is, Windows NT doesn’t run on anything but x86 (except perhaps Itanium), whereas Linux runs on just about everything but most hype and development is centered around x86 machines.)

Back in the 1980s, when Unix was experiencing its Warring States period, there were double handfuls of hardware vendors selling workstations and servers, and each hardware vendor had its own proprietary version of Unix to go along with its machines. Theoretically they were all compatible, but in reality each company would differentiate its product and try to achieve lock-in by adding features that made it impossible to move software that relied on those features from one flavor to another.

In the 1990s, Windows NT stepped into this mess and promised compatibility to both the server and workstation markets if they would dump their expensive proprietary hardware and move to commodity boxes running Microsoft software. They did in droves, and drove the traditional Unix vendors to bankruptcy. No matter that Windows NT would never be as stable or featureful as Unix, it was a bit cheaper and it’s what all the cool kids were doing. Besides, the x86 chips are really capable little dynamos and the RISC chips can’t quite keep up.

Of course, Windows NT didn’t stay ‘cheaper’. As the 1990s wore on, Microsoft went from being a fairly normal software developer to King Hell Monopolist, largely on the strength of their desktop sales; the US Department of Justice’s antitrust case established as much. The server and workstation markets, never as deeply tied to Microsoft as desktop users are, were ready for a way off the Endless Upgrade Treadmill. In 1998, seemingly out of nowhere, Open Source happened: Netscape, losing market share to Microsoft, decided to open-source their browser as a Hail Mary to keep from losing the Web to Microsoft entirely. (This is where Mozilla comes from, as in Mozilla Firefox.) This had a huge knock-on effect throughout the software world, leading to massive publicity for Linux, an open-source Unix clone that was ready and waiting to take over all of the markets that had been burned by Microsoft.

Linux is portable. It was portable in 1998, too. However, it was born on the 80386 and it has always been most strongly tied to x86 hardware. This was a selling point: Companies could get a great OS without kowtowing to Microsoft and without invalidating all of their hardware investments. Thus, the hardware hegemony was reinforced just as the OS hegemony was undermined.

There is a lesson here: Progress comes from the bottom up. The low-end wins, because it becomes more capable without becoming that much more expensive, whereas the high-end just becomes more expensive.

(Funny story: Remember the proprietary Unix vendors from the 1980s? A lot of them went under or got swallowed up (SGI and Irix are on their last legs, DEC was bought by Compaq, which was bought by HP, etc.), but those that survived either embraced Linux (IBM now actively develops Linux and shares code, alongside AIX) or, in the case of Sun, open-sourced their Unix flavor. I just recently bought an open-source version of Solaris as a magazine premium. How the times have changed.)

The industry analysts, experts, and benchmarks disagree with the conclusion of your thread. I just searched for it and read it; it’s fairly short with limited factual links, so I think using it as a conclusion might be a little premature.

On a per-CPU basis, Power5+, period (well, kind of; Sun’s Niagara just came out and it looks pretty good). Multiple cores with efficient SMT place its real-world performance significantly ahead of the closest competitor.

SPECfp: listed previously
SPECint: per core not great; per CPU still impressive

TPC-C:
IBM Power5 server (64 CPUs): 3,210,540
HP Superdome (64 Itanium 2): 1,231,433

In addition, Power has excelled at SAP and Java benchmarks.

Power5+ and Niagara (8 cores) are both more powerful than AMD’s and Intel’s x86 chips.

I recommend looking at spec.org (keeping in mind those benchmarks are one core, no SMT; real-world performance is significantly better due to utilization of multiple cores and SMT), tpc.org, articles from industry analysts, etc.

Correct.

Not correct (many have gone by the wayside, but of the remaining processors, Power and SPARC, AMD and Intel have not matched them, yet).

Absolutely correct.

IBM will continue to make money off Power because large customers need the performance. But their only real long-term hope of staying in the processor business (other than custom services) is high volume, which means consumer electronics.

I have no clue what Sun’s strategy is, but even with great processors they appear to be in a weaker position.

It’s still being sold, but your point remains, the writing is on the wall.

RaftPeople, those computers you have listed are not, in my opinion, workstations. Workstations, in my experience, were computers that you had in your office with primarily one user.

Where I work, we all used to have a PC and a Sun of some sort in our offices. We still do, but now the Sun is mostly used as an xterm and everything is run remotely on either a Sun or an AMD Linux box.

In the last few years we have moved towards having rooms full of inexpensive AMD computers running Linux. I generally try to have my simulations run on the Linux boxes because they are much faster than the Sun boxes. You might be able to get Suns that run faster than the Linux boxes, but our company has decided that it is better to have 200 Linux boxes vs. 50 Sun boxes for about the same price.

Another comment: SGI was killed by 3dfx, ATI, and NVIDIA as much as by Intel.

As far as SPARC processors go, they haven’t been consistently ahead of x86 processors. I believe they’ve been leapfrogging off and on for the last 10 years. I remember that in about '02 Intel had a definite lead… but Sun also had a new core under development.
Those Power5 chips are neat. I didn’t realize they were benching ahead of x86. I’m curious whether one Power5 CPU is really a “CPU”. Much like a Pentium 4 with Hyper-Threading shows up as two CPUs in your computer’s task manager (a quick way to see the logical count is sketched below), the Power5 lets you set up to 10 “micro-partitions” per CPU. They also have the system bus on-die, which blows my mind.
I’m all for competition, and at least it seems like Intel and AMD are intent on beating one another up, which is good for the rest of us.
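
On the “shows up as two CPUs” point, here is a trivial sketch of how the OS-visible count reflects logical processors (SMT threads) rather than physical chips; the figure in the comment is a hypothetical example:

[code]
# The count the OS reports is *logical* CPUs: with SMT/Hyper-Threading
# enabled, one physical processor can show up as two or more.
import os

print(os.cpu_count())  # e.g. 2 on a single Hyper-Threaded Pentium 4
[/code]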

Certainly the 64-processor server is not a workstation.
But the IBM IntelliStation is a workstation.

I don’t disagree with your points; people are buying less expensive equipment, even if they have to buy more of it. And x86 is more capable of competing than in the past. They have narrowed the gap.

I was primarily disagreeing with those who incorrectly thought x86 outperformed all other general-purpose processors on the market.
Mr. Slant, SPARC sure did go through a dry spell in the last few years (can’t speak for before that with respect to SPARC), but this Niagara chip looks pretty good. I didn’t think they would ever deliver anything; they talk so much trash about the competition that I figured they were just full of it. Have you looked at their recent benchmark?

RaftPeople: Performance is difficult to quantify, and it commonly doesn’t really matter anyway.

You can quote benchmarks all day and all night, but what people really care about is how fast their applications run:
[ul]
[li]If none of their applications can be run on non-x86 hardware, game over. Their choice is made.[/li]
[li]If their applications are optimized for x86 hardware (they are written with a hand-tuned assembly kernel that does all the heavy lifting, their loops are arranged to fit into the Pentium’s L1 cache, etc.), benchmarks won’t make much of a difference to them either, because they won’t reflect how their applications will actually perform.[/li]
[li]Benchmarks don’t tell the complete story. A benchmark that focuses on integer arithmetic and branch prediction won’t tell you jack about the speed of an application that’s floating-point heavy or I/O-bound.[/li]
[/ul]
Application developers (or at least the ones who have to care about performance) know how their target hardware behaves, and can tune applications to take advantage of that behavior. Benchmarks give them that information, but benchmarks can’t say how fast a given machine will perform in truly real-world conditions.
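
To make that last point concrete, here is a minimal sketch of the alternative to trusting published numbers: time the code path your own application actually runs, on the machine in question. (hot_loop is just a hypothetical stand-in for real work.)

[code]
# Time your actual workload instead of trusting a generic benchmark score.
import time

def hot_loop(n):
    # Stand-in for whatever your application's heavy lifting really is.
    total = 0.0
    for i in range(n):
        total += (i % 7) * 0.5
    return total

start = time.perf_counter()
hot_loop(1_000_000)
print(f"hot loop took {time.perf_counter() - start:.4f} s on this machine")
[/code]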