Faster CPUs - What is stopping them?

My Electrical Engineer friend says:

"What stops them is making new, better circuits."

My Materials Scientist friend tells me:

"What stops them is making better semiconductor materials and processes to put the circuits on in a small space."

My Chemical Engineer friend tells me:

"What stops them is the heat-removal method." Then he goes on about micro heat exchangers…

Which one of the above is correct? I know all of them are right in some way or another, but which is the most important?

My fiancee is a materials scientist. She does R&D on processes used to make fabrication equipment for companies like Intel and AMD. She’s typically going on about how to figure out ways to make things smaller, so I think that’s the big one.

Of course, you have to figure out how to do everything smaller, so that might include things like heat removal and circuit design. I don’t know myself – perhaps I’ll ask her.

There is nothing stopping them. The average PC that you see now in stores should have a processor with twice the MHz of those from 18 months ago. This is the popular form of Moore’s law (strictly, a statement about transistor counts), and it has withstood the test of time (at least three decades’ worth).
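To put numbers on that doubling rule, here is a minimal sketch; the 500 MHz starting point is only an illustration, not a real product figure:

```python
# A minimal sketch of the 18-month doubling rule described above.
# The 500 MHz base figure is an illustrative assumption.
def projected_mhz(base_mhz: float, months: float) -> float:
    """Clock speed after `months`, assuming a doubling every 18 months."""
    return base_mhz * 2 ** (months / 18)

print(projected_mhz(500, 18))  # 1000.0 -- doubled after 18 months
print(projected_mhz(500, 36))  # 2000.0 -- doubled twice in 3 years
```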

I really don’t understand your question. What are you expecting to see?

It takes time to design, test, and debug chips before they ship to consumer outlets. The process follows a set flow and cannot be rushed.

I interviewed with a company one year ago that builds test equipment for PC chips. That was some of the highest security I have ever seen. They had chips on hand at least a year ago that will not go to market until 1-2 years from now. The companies simply must make sure they are fully tested before they are released to the public.

I’ve read that they’re working on using atoms and even subatomic particles in lieu of silicon. If and when these are perfected, what we’re using now will seem like the Dark Ages of computing.

"There is nothing stopping them. The average PC that you see now in stores should have a processor with twice the Mhz that they had 18 months ago. This is known as Moore’s law and has withstood the test of time (at least three decades worth).

[…]"

Doesn’t it become physically impossible to ‘build smaller’ at a certain point?

Heat is a very important factor. The highest-end AMD Athlon processors of each model dissipated about 75 watts of heat, which is a significant number. To keep such a processor at a safe operating temperature, larger and louder fans are needed, along with bigger and more expensive heatsinks. A few years ago all heatsinks were cheap aluminum. Now all usable heatsinks have at the very least a copper heat spreader in the base, and some are solid copper. Some even use silver! We’ve also moved to 80mm fans on heatsinks, as opposed to the 60mm fans we’d used since the original Celerons. A top-of-the-line heatsink from the Celeron days, the Globalwin FDP-32 for example, would barely allow a modern Athlon to complete the boot process before overheating.
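A rough sketch of the arithmetic behind that heatsink arms race; the thermal-resistance values below are illustrative assumptions, not measured specs for any particular cooler:

```python
# Steady-state die temperature = ambient + power * thermal resistance (C/W).
# The 0.9 and 0.3 C/W figures are assumed, not measured specs.
def die_temp_c(ambient_c: float, watts: float, r_theta: float) -> float:
    """Die temperature for a given cooler at a given power draw."""
    return ambient_c + watts * r_theta

# An old aluminum cooler vs. a modern copper one, both at 75 W in a 35 C case:
print(die_temp_c(35, 75, 0.9))  # 102.5 C -- the chip cooks
print(die_temp_c(35, 75, 0.3))  # 57.5 C -- comfortably safe
```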

Heat is not a problem, since we can dip the chip in liquid N2 and run it at many times the stock clock speed. The fault lies somewhere else. For a fan-cooled PC, though, heat is a problem.

[homer]mmmm… N2 dipped chips…[/homer]

Anyway, if dipping chips in liquid N2 allows them to run faster, doesn’t that indicate heat is a problem? If you could cool current chips more effectively (without needing a weekly delivery from a liquid-N2 tank truck), they would run faster.

In a way, all the problems you cited are related.

To make a faster chip, you need to make the elements of the chip smaller: the transistor gates, the pathways, you name it. This ends up causing all kinds of problems. Perhaps crosstalk starts occurring, where electrical signals in adjacent pathways interfere with one another. Perhaps the heat density becomes too high. Perhaps a dozen other issues come up, all of which need to be addressed for the chip to run reliably. Once you manage to overcome all these issues, you have to determine whether your solution is even cost-effective (if not even the richest companies can afford your fast chip, what use is it?).

Sooner or later, you reach the limit of your established process and can’t shrink your basic design any further. Then you have to change the way you approach the challenge of shrinking the chip. When aluminum fails to conduct well enough to carry a reliable signal at small cross-sections, you might switch to copper. When you reach the fundamental limit on how small you can reliably etch silicon, you need to develop a new silicon-etching and processing method. If that’s not possible, you might have to consider using something other than silicon. As you shrink chips smaller still, you need to develop technologies to manipulate materials atom by atom. And so on.
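For the aluminum-to-copper switch specifically, the gain is easy to sketch with the standard wire-resistance formula R = rho * L / A. The resistivities are handbook values; the wire geometry is an assumption chosen to match the 0.13-micron node mentioned below:

```python
# Resistance of a rectangular interconnect: R = rho * L / A.
# Resistivities are standard handbook values; the geometry is assumed.
RESISTIVITY = {"aluminum": 2.82e-8, "copper": 1.68e-8}  # ohm-meters

def wire_ohms(metal: str, length_m: float, width_m: float, thick_m: float) -> float:
    """Resistance of a wire of the given length and cross-section."""
    return RESISTIVITY[metal] * length_m / (width_m * thick_m)

# A 1 mm run of 0.13-micron-wide, 0.3-micron-thick wire:
for metal in RESISTIVITY:
    print(metal, round(wire_ohms(metal, 1e-3, 0.13e-6, 0.3e-6)), "ohms")
# Copper comes out ~40% lower, which directly cuts the RC signal delay.
```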

As for the heat issue, yes, heat dissipation is a very important aspect of chip design; too much heat will destroy some of the elements on the chip. One way around this problem is simply to enlarge the temperature gradient between the chip and the surroundings (blow cool air across the chip, cool it with water or a refrigerant, etc…), but again, your solution has to be cost effective in relation to your target market. However, the primary limiting factor in how fast a chip runs is the circuit speed, which is directly related to the size of the chip elements. You can cool a 100MHz chip all you want but you’ll likely never get it to run reliably at 1GHz.

There are all sorts of obstacles.

To make faster chips (integrated circuits) there are two ways: make the individual circuits smaller, or find materials with better electrical characteristics, such as a narrower bandgap, higher electron/hole mobility, and so on. The problem is that these two methods aren’t compatible so far. Silicon is still the best material for photolithography, but it has pretty bad electrical characteristics. OTOH, materials such as GaAs have excellent electrical characteristics, but they aren’t suitable for making VLSI and beyond.

There are also practical limits on how small you can make these circuits. Right now they use far-UV light to etch 0.13-micron circuits (0.08 in a few years), but because of the wavelength limit, further miniaturization will have to be done with X-rays. However, we still haven’t found anything that can focus X-rays. Even if we do, we will still run up against the wall of quantum effects.
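That wavelength limit can be sketched with the standard Rayleigh resolution criterion; the k1 and NA values here are assumed, typical-of-the-era numbers, not any fab’s actual recipe:

```python
# Smallest printable feature ~ k1 * wavelength / NA (Rayleigh criterion).
# k1 = 0.4 and NA = 0.6 are assumed values, not a real process recipe.
def min_feature_um(wavelength_nm: float, k1: float = 0.4, na: float = 0.6) -> float:
    """Approximate minimum feature size in microns for a given wavelength."""
    return k1 * wavelength_nm / na / 1000.0

print(min_feature_um(248))  # KrF deep-UV: ~0.165 um
print(min_feature_um(193))  # ArF deep-UV: ~0.129 um -- about the 0.13 node
```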

Memory bandwidth!

If you don’t have that, a CPU is just another form of rock. Intel appears to have finally broken away from its sordid recent history of paper releases with the 850E chipset, which uses a 533MHz (quad-pumped) memory bus. But the “quad memory bus” is still more marketing bullshit, because if it delivered what it claims we would be seeing far larger gains in performance.

CPUs are only as fast as the information that can be pumped to them. That information usually originates on a CD-ROM or a hard drive, gets loaded into RAM, and then gets piped to the CPU for processing. CPUs are far, far faster than almost any other component in your computer, so they are chronically information-starved.

There are certainly CPU tests out there which appear to benchmark the relative speed of a chip, as well as a few practical applications like distributed computing, but what will really hold computers in general back is the pipeline of information that feeds them.

A 133MHz front-side bus that occasionally squeezes in three extra transfers per cycle is a start, but not a solution.
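To see why the “quad” label overstates things, here is the peak-bandwidth arithmetic; the 8-byte (64-bit) bus width is the era’s common figure, taken as an assumption:

```python
# Peak bus bandwidth = clock * transfers per cycle * bus width in bytes.
# The 8-byte (64-bit) width is assumed, as was typical for these buses.
def peak_mb_per_s(clock_mhz: float, transfers: int, width_bytes: int = 8) -> float:
    """Theoretical peak throughput of a front-side bus, in MB/s."""
    return clock_mhz * transfers * width_bytes

print(peak_mb_per_s(133, 1))  # plain 133 MHz bus: ~1064 MB/s
print(peak_mb_per_s(133, 4))  # quad-pumped "533": ~4256 MB/s on paper,
                              # but only if the RAM can keep it fed
```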

Heat may not be a theoretical limitation, but it is a practical one. Sure, you COULD make a CPU that dissipated 250 watts of heat, but there’s no way in hell you could make a cheap, effective, and quiet air cooler to handle that much heat AND fit in a current system. Not to mention the issue of supplying that much power.

There is something that’s slowing CPU/mobo speed advances: backwards compatibility. A chip with a 20MB internal cache and a hardware BIOS featuring, like, a 32MB GDI could be made; however, programs would have to be written from scratch, and compilers remade, for anything to run on the new machine.

capacitor: Not necessarily. The Pentium 4 is a radical departure from standard x86 CPU design, and it runs normal x86 programs just fine, albeit very slowly. The new AMD Opteron processors are 64-bit, and they will still run current programs at least as fast as their predecessors. New compilers and applications are only needed if you wish to take advantage of the advanced new features.

“Running x86 programs very slowly” does not cut it in the budget-minded business world. Too many people use them and are quite comfortable with these programs. This is not dissimilar to UNIVAC-era computers being used for fifty years, triggering the Y2K crisis.

Rewriting software to take full advantage of the new features won’t be the only headache. There will also be the problem of making parts for the would-be new mobos. Even hard drives are being outpaced by the new mobos and RAM, and that can cause huge latency. And those parts that do keep up will be prohibitively expensive.

Thermionics is moving ahead quickly. Heat in chip design may, within five or ten years, be a forgotten problem. Thermionics is the creation of electrical energy out of heat, a cousin of photonics. Recent advances have taken us from vacuum-tube-style thermionic devices to semiconductor-style ones. With this step, heat removal is close to becoming a moot point, as any excess heat will be “absorbed” as electrical power.

Besides, with nanotechnology and carbon nanostructures doped with silicate impurities and various other elements, single-electron computing gets very close to being a reality, and since carbon nanotubes are near-superconductive at room temperature, there is VERY little loss to heat. Doesn’t most of the heat come from the skin effect of electron transmission through metals? Well, nanotubes don’t have a skin effect; they carry the electrons through their “empty” insides.

Of course, none of that matters if we can manage cross-linked photonic grids throughout Bose-Einstein condensates for quantum computing. Very interesting process.

Links to be provided if requested.

Tim

capacitor: The Pentium 4 seems rather popular to me. The deal is that you need a MUCH faster Pentium 4 to equal a Pentium 3. By a back-of-the-envelope calculation, a 1GHz P3 would be roughly equal to a 1.4GHz P4 or so (whose double-pumped ALUs technically run at twice Intel’s stated speed, or 2.8GHz). The price for similar levels of performance is similar, however.
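As a minimal sketch of that back-of-the-envelope scaling (the 1.4x factor is the rough figure above, not a benchmark result):

```python
# Rough P3-equivalent clock for a given P4, per the ~1.4x factor above.
# The factor is a back-of-the-envelope assumption, not measured data.
def p3_equivalent_ghz(p4_ghz: float, factor: float = 1.4) -> float:
    """Approximate P3 clock giving similar performance to a given P4."""
    return p4_ghz / factor

print(p3_equivalent_ghz(1.4))  # ~1.0 -- a 1.4 GHz P4 ~ a 1 GHz P3
print(p3_equivalent_ghz(2.8))  # ~2.0 -- a 2.8 GHz P4 ~ a 2 GHz P3
```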

Again, you don’t necessarily NEED to recompile or rewrite applications to use them on new CPUs. The AMD Opterons will need a new operating system to use the advanced (64-bit) features, but you could toss Windows 95 on one and probably still get performance faster than one of today’s Athlons.

Almost all the heat is dissipated in the transistors. P = I * V: if you have charge moving from one voltage to another, energy is being transferred. In digital electronics there is really nowhere for this energy to go besides heat; a certain amount is radiated away as more or less general RF noise. If all the metal in a chip were superconducting, the only power savings I can think of would come from faster switching times, meaning both halves of the CMOS circuit would be on for a shorter duration.
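The standard CMOS switching-power relation, P = a * C * V^2 * f, puts numbers on this. Every figure below is an illustrative assumption, though together they land near the ~75 W Athlon number quoted earlier in the thread:

```python
# Average CMOS switching power: P = activity * C * V^2 * f.
# All inputs are illustrative assumptions, not specs of any real chip.
def dynamic_power_w(activity: float, cap_f: float, volts: float, hz: float) -> float:
    """Average switching power dissipated across the whole die."""
    return activity * cap_f * volts ** 2 * hz

# ~50 nF total switched capacitance, 1.75 V core, 2 GHz, 25% activity:
print(dynamic_power_w(0.25, 50e-9, 1.75, 2e9))  # ~76.6 W of heat
```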

You may be thinking of the Itanic, not the P4. The Itanium is Intel’s new 64-bit VLIW EPIC processor, and it ain’t called the Itanic for no reason :wink:

FDISK, your own words betray your point. Why would people pay a whole lot to cool the extremely hot P4 if it performs ‘just as well’ as the P3 with older software and hardware products? That is the main reason why backwards compatibility would be a huge problem when upgrading: the theoretically superior performance you paid for sometimes will not be there.

If one gets the newest software for the newest machine, I would expect performance to be exceptionally higher. But those are not the typical rebuyer’s spending habits. They get a bigger boost in performance by changing the RAM, the graphics card, the sound card, and the hard drive.

It is not that I am not excited by the revolutionary and evolutionary new abilities of the new wafers. It is just that sometimes the companies produce product too fast for their own good. Take ATI, for example. Their video cards are excellent, at least I think so. However, the drivers and auxiliary software (IMHO) lag badly, sometimes making the excellent cards virtually unusable. If they developed a better system for upgrading drivers, as nvidia has done, they would not only allow end users to get the most out of the cards’ abilities, but would also leave the programmers better prepared for the newest video cards ATI ships to market. Many people swear by the All-In-Wonder card, but the gap shows with the Rage and Radeon series of products. The gap between the new machine or machine part and the drivers that run it needs to be closed in order to see real progress in upgrade technology.

Here is where Microsoft comes in. For Windows, they have generic drivers for almost every device, and some for specific devices granted to them by the companies that do business with them. Some are adequate to run the devices, many are insufficient. With the plug-and-play system this may satisfy most users, but not the good hackers among us. While this driver system is good for the consumer, IMO this is where their monopoly can get most insidious.

First of all, the file layout for drivers is very inconsistent, with some placed in one or several of the C:\Windows folders and others placed in the C:\Warcraft Diablo Siege Force folder (just to use an example). Second, MS may produce a patch which all of a sudden makes certain products unusable, forcing analysts to stop and fix that problem instead of producing a better software upgrade for the wafer. That can be abused to slow any potential software/hardware hybrid competitor to a crawl. Then there is the Linux problem, which MS thinks it has solved by demanding exclusivity for many products in the market, with drivers for specific products made only for Windows (Winmodems, for example). As drivers can easily become bloated in Windows, this exclusivity can lead to relatively poor performance on those machines.

Then there is that ugly small GDI/GUI. Starting from a minuscule 64k, over the years MS has only added extra 64k heaps. This is a real gap to overcome. Right now, one can’t do much to change the GDI/GUI without crashing the system. If MS is willing and able to truly fix this, not with stopgap measures but once and for all, then not only will Windows become a great product, but technology can truly expand, as software, backwards and forwards, will run at the desired speeds and finally catch up to hardware.

Then there are languages such as Java and JavaScript. A language with a smaller compiled footprint is supposed to be faster, not slower. The load-up times can be infuriating, even with a high-speed connection… I have a feeling that the faster machines are there to cover up the slow speeds of certain programming languages. If Java had the speed of, say, Pascal, then even Bill Gates would have to bow at the feet of Scott McNealy.

See, these are some of the problems that have to be solved, on the software side, before the newer CPUs can get practically faster.

As a disclaimer, I use Windows almost all of the time.

Homer, I want those links presented, thank you.