Computer Speed (July 4, 2003)

http://www.straightdope.com/columns/030704.html

Can someone explain what the limits of wafer technology are?

Oh yeah, bolding mine.

Basically it means the limit of how many circuits you can fit on a given area of silicon chip, before the insulating “gaps” between electric pathways get so thin that they no longer effectively insulate.

Here’s a brief explanation from this page:

By the way, this Nature article says the limit will be reached sooner, in about 2012. FYI, currently the gate oxide layer in commercial chips is about 25 atoms thick. Anything less than four or five atoms will likely be “so leaky as to be useless”.
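Just to sanity-check those numbers, here’s a back-of-envelope sketch in Python. The per-atom thickness (about 0.25 nm) and the ~0.7x shrink per two-year process generation are my own rough assumptions, not figures from the column or the Nature article.

```python
import math

ATOM_NM = 0.25          # rough thickness of one atomic layer of oxide (assumption)
current_atoms = 25      # gate oxide circa 2003, per the post above
limit_atoms = 5         # below this it's "so leaky as to be useless"
shrink_per_gen = 0.7    # classic ~0.7x linear shrink per generation (assumption)
years_per_gen = 2       # rough cadence of process generations (assumption)

print(f"Current oxide: ~{current_atoms * ATOM_NM:.1f} nm thick")
print(f"Practical floor: ~{limit_atoms * ATOM_NM:.2f} nm thick")

# How many generations until 25 atoms shrinks below ~5 atoms?
gens = math.log(limit_atoms / current_atoms) / math.log(shrink_per_gen)
print(f"~{gens:.1f} generations, i.e. roughly {gens * years_per_gen:.0f} years out")
# Starting from 2003, that lands in the neighborhood of the 2012 figure above.
```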

I once saw a picture of Microsoft’s terraserver, on which they have all the satellite pictures they could get, US, Russian, et al. It was a set of Compaq Alpha servers. I also asked to see my employer’s IBM MVS computer. The mainframe was a cabinet similar in size to an Alpha server. The mainframes have become physically smaller but more powerful, while the PC-based ones have become a bit larger and a lot more powerful. Cubic foot for cubic foot, as far as I can tell, the IBM mainframe can still kick a PC’s motherboard all over the place. LANs and WANs provide immense flexibility, of course. One reason the mainframe won’t go away is that it is a kickass database server, connecting to the LAN despite the fact that they use different binary schemes to represent data.

Just for the record, supercomputer performance claims are as ripe for abuse as anything else. The most common ploy is to hook together hundreds or thousands of commodity microprocessors and claim you now have a “teraflop” machine (these are known in the trade as “massively parallel” systems). In theory, yes. In practice, almost never. Not even close. It’s unusual to get even 10% of the theoretical peak performance out of such a machine on “real world” problems, and on many important problems it won’t do a whole lot better than a desktop PC. And massively parallel machines are a nightmare to program.
If you want extreme performance on real-world problems, you have to get a real supercomputer, like a Cray. Their new machines are awesome.
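To make the peak-versus-sustained gap concrete, here’s a toy Python model: basically Amdahl’s law with a made-up per-processor message-passing cost tacked on. The serial fraction and communication numbers are invented for illustration; the point is only that the delivered fraction of peak collapses as you pile on processors.

```python
def sustained_fraction(n_procs, serial_frac=0.02, comm_cost=0.0005):
    """Fraction of theoretical peak actually delivered on n_procs processors.

    serial_frac: share of the work that can't be parallelized (assumed).
    comm_cost:   extra time each added processor spends passing messages (assumed).
    """
    run_time = serial_frac + (1 - serial_frac) / n_procs + comm_cost * (n_procs - 1)
    speedup = 1.0 / run_time          # relative to a single processor
    return speedup / n_procs          # achieved speedup vs. the peak of n_procs

for n in (1, 16, 256, 1024, 4096):
    print(f"{n:5d} processors -> {sustained_fraction(n):6.2%} of peak")
```

With these made-up constants, a few thousand processors end up spending most of their time talking to each other, which is the “not much better than a desktop PC” scenario.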

Much of the reason for the “inefficiency” of massively parallel systems is due to the lack of good software for them.

Our computers for the past 50 years have been pipeline rather than parallel processors, and so we have 50 years of software tuned for that type of system.

Also, our problems seem to be mainly an algorithmic or ‘pipeline’ type of problem, where you must solve step 1 before you can start on step 2, etc. (Or maybe we’ve just taught ourselves that those kinds of problems are the only ones that are effectively ‘computable’. Or maybe several centuries of using the ‘scientific method’ means current scientists mostly come up with pipeline rather than parallel solutions to problems.)
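A toy example of the distinction, in Python (the functions are hypothetical stand-ins, not anything from this thread): the first loop is a ‘pipeline’ problem where each step needs the previous answer, so extra processors sit idle; the second is a pile of independent pieces that parallelizes trivially.

```python
from multiprocessing import Pool

def next_state(x):
    # hypothetical step that depends on the previous answer
    return (3 * x + 1) % 104729

def independent_work(x):
    # hypothetical step that depends only on its own input
    return x * x

def pipeline(n_steps, seed=1):
    x = seed
    for _ in range(n_steps):
        x = next_state(x)        # step k can't start until step k-1 is done
    return x

def embarrassingly_parallel(inputs):
    with Pool() as pool:
        return pool.map(independent_work, inputs)   # every item is independent

if __name__ == "__main__":
    print(pipeline(1_000_000))                           # serial no matter how many CPUs you own
    print(sum(embarrassingly_parallel(range(100_000))))  # spreads across all available CPUs
```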

A lot of the work in using massively parallel systems involves producing software that can automate the conversion of ‘pipeline’ problems into ones that can be solved in parallel. We haven’t been able to do so very efficiently, so far. But they’ve only been working at this for a couple of decades, vs. 3 times as long on regular computers.
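For a flavor of what that conversion looks like, here’s a sketch of the textbook parallel-scan trick in Python. A running total looks inherently sequential, but it can be reorganized into log2(n) rounds whose inner updates are all independent, i.e., the sort of thing a parallel machine could do in one sweep per round. (This is a generic technique, not something specific to any system mentioned here.)

```python
def prefix_sum_sequential(xs):
    out, total = [], 0
    for x in xs:
        total += x          # step k depends directly on step k-1
        out.append(total)
    return out

def prefix_sum_scan(xs):
    out = list(xs)
    n = len(out)
    step = 1
    while step < n:         # log2(n) rounds in total
        # Every update in this inner loop is independent of the others,
        # so a parallel machine could do the whole round at once.
        new = out[:]
        for i in range(step, n):
            new[i] = out[i] + out[i - step]
        out = new
        step *= 2
    return out

assert prefix_sum_sequential(range(10)) == prefix_sum_scan(range(10))
```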

I expect massively parallel systems will become more effective in the future, as programmers fine-tune the software for them, and as scientists come up with ‘parallel’ algorithms that run effectively on such machines.

No doubt software for MP systems will improve eventually; however, the last 20 years of intensive work have produced little fruit and much teeth-gnashing.
Even if that problem were solved, however, certain classes of important problems simply don’t adapt well to MP architecture: you wind up spending most of your time exchanging messages between the processors and little time actually computing. Examples are sparse matrix problems, and those involving finite element analysis where the mesh is dynamically resized. For jobs like this, there’s no substitute for a very large, flat, shared memory, which MP systems don’t offer.
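A crude cost model makes the message-passing point visible. All the constants below are made-up assumptions, not measurements from any real machine: a distributed sparse matrix-vector product does only a couple of floating-point operations per nonzero, so once each processor’s local slice gets small enough, the fixed cost of exchanging boundary data with neighboring processors dominates the useful arithmetic.

```python
def sparse_matvec_step(rows, nnz_per_row, n_procs,
                       flop_time=1e-9,      # seconds per floating-point op (assumed)
                       msg_latency=10e-6,   # seconds of latency per message (assumed)
                       neighbors=6):        # ranks each processor must talk to (assumed)
    """Estimated compute vs. communication time for one matrix-vector product."""
    compute = (rows / n_procs) * nnz_per_row * 2 * flop_time   # one multiply-add per nonzero
    communicate = 0.0 if n_procs == 1 else neighbors * msg_latency
    return compute, communicate

for p in (1, 64, 1024):
    c, m = sparse_matvec_step(rows=1_000_000, nnz_per_row=7, n_procs=p)
    print(f"{p:5d} procs: compute {c * 1e3:7.3f} ms, messages {m * 1e3:6.3f} ms per step")
```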