Computer chip speed and planned obsolescence

I always hear about how chip manufacturers have the technology to make faster chips but release them slowly to churn upgrade sales. Is this just an old wives’ tale? If not, how fast could chips be today if they didn’t hold them back? Is there a 6 GHz chip in Intel’s vaults waiting to be put into production in a year?

No, they are making their chips as fast as they are able to. You can ask whoever is telling you this gibberish how to deal with the heat problem.

I got news for you. Intel already had the 6 GHz Pentium 8 in their vaults in 1980, but they started out with the 4 MHz 8086 just so they could make you buy lots of computers over the years. Same with Microsoft. They’re all in cahoots, ya know.

A new generation of processor will usually have been in development for several years before it comes to market. The new ones are released as quickly as possible.

Sailor:
Too bad there’s not a smiley for extreme sarcasm - your post could have used it. :slight_smile:

I think all the conspiracy theories started when people realized Intel took about two years to take a processor from development to final manufacturing.
The thing is that Intel could speed up the process by wasting less brainpower on the ‘in-between’ chips released between breakthroughs and devoting it entirely to the next generation.

Actually, you don’t have to buy the latest whiz-bang that comes along. I don’t think that chip speed, once you get past some high number, maybe 700-800 MHz, is all that valuable for home computers.

You still have to get data in and out. Writing to the monitor is only so fast. Printing is only so fast. Disk reads and writes are only so fast. And if your program has to go to off-chip memory and back, the bus speeds are only 100-150 MHz.
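
To put some very rough numbers on that (the figures here are just illustrative assumptions, not any particular machine’s specs): a 100 MHz bus that is 64 bits (8 bytes) wide can move at most

$$100\ \text{MHz} \times 8\ \text{bytes} = 800\ \text{MB/s},$$

while a 1 GHz CPU that wanted a fresh 8-byte operand every cycle would need $1\ \text{GHz} \times 8\ \text{bytes} = 8\ \text{GB/s}$, so without caches it would spend something like nine cycles out of ten waiting on main memory.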

There are cases where chip makers sold crippled chips to meet different price points. A couple of years ago it was common knowledge that almost all 300MHz Celeron chips could be safely overclocked to 450MHz. And the 486SX started as a 486 with the floating point processor disabled.

But it’s insane to not sell any of the top end processors. There’s a lot of competition in the semiconductor industry.

So then, if it is a wives’ tale, why is the curve of MHz over time so smooth? Why aren’t there leaps and plateaus?

Because chip speed is largely dictated by die trace size. The width of the traces on the chip die determines the actual size of the CPU (in terms of area, mm²) and also how much heat is produced. Smaller trace size decreases heat and makes higher speeds possible.

The process for decreasing trace size is somewhat linear, which makes the speed increases gradual.
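
For what it’s worth, the usual first-order relation behind the heat point is the dynamic-power approximation (a textbook rule of thumb, not a spec for any particular chip):

$$P_{\text{dynamic}} \approx \alpha\, C\, V^{2} f$$

where $C$ is the switched capacitance (which shrinks along with the traces), $V$ is the supply voltage, $f$ is the clock frequency, and $\alpha$ is the fraction of the transistors switching each cycle. Smaller features cut $C$ and let you drop $V$, which is why each shrink buys more clock for the same heat.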

BTW, from what I remember, a state-of-the-art CPU currently uses 0.13 µm (micrometer) traces. Pretty damn tiny!

/Markus

Die shrinks are actually a very rare event. The reason chips increase in speed linearly is gradual improvements in yield from ironing out the kinks in the manufacturing process. On top of that, they sometimes also increase the “stepping”, where they physically rearrange some of the circuits in order to make them go faster. However, good chip designs mean that there is never a serious bottleneck anywhere on the chip. If the FPU is capable of doing 1800 MHz ± 200 MHz, then the ALU might do 1920 MHz ± 100 MHz. Upping the FPU to 2 GHz would only increase clock speed slightly.
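
To make those (purely illustrative) numbers concrete: the chip can only be clocked as fast as its slowest unit,

$$f_{\max} = \min(f_{\text{FPU}}, f_{\text{ALU}}) = \min(1800, 1920) = 1800\ \text{MHz},$$

so pushing the FPU up to 2000 MHz only moves the limit to $\min(2000, 1920) = 1920$ MHz, roughly a 7% gain for all that redesign work.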

Also, the race to 1 GHz would have convinced me that Intel did NOT have a 6 GHz processor tucked away. Basically, AMD, a smaller chip company, had been performing consistently slower than Intel for quite some time before suddenly pulling out in front and beating them to a 1 GHz chip. Intel were quite flustered by this and took nearly a year to get back on track. If they had had a 1 GHz chip on the sidelines, they WOULD have definitely released it.

Listen to what Shalmanese said. No one sits on a high-performing chip if they can produce it, because if they do, their competitors will be all over them. No one, not even Intel, has such a lead on their competitors that they can afford to do this for any length of time.

I work in the semiconductor industry (and a few years ago worked on microprocessors), so I’ll try to add a few bits. zwede comments that state-of-the-art tech is 0.13 µm. That’s pretty close, but it’s not the whole story. 0.13 µm is the tech that many foundry fabs use for production right now. By “foundry”, I mean fabs that rent out space to a lot of different small companies that design chips but don’t have their own fabs. I expect the bigger manufacturers (Intel, TI, IBM) have better fabs than that in production, and the foundries are probably prototyping or developing better ones. I’ve seen 90 nm on some companies’ web pages, though I don’t know whether it’s in production yet.

KidCharlemagne, the curve isn’t actually that smooth - it’s a series of stair steps (sharp rises every time a new chip is released) that people approximate with smooth curves when they draw the graphs. That still leaves the question of why it’s so predictable. The answer is that’s the way the companies want it. They want to plan their business, forecast their chip loadings, and tell their customers when chips will come out next year, and the year after that. So they work up development cycles, take into account what their competitors are doing, and work their darndest to hit the schedules they create. If they were to miss their schedules, their competitors would get a foot in the door.

Also, there are several things being improved at the same time. The main two are processor architecture (how the chip is built, pipelined, and arranged, for speed) and process size. The companies with the big pieces of the market can stand to take a conservative approach - change one piece at a time (that is, put a new processor architecture on a working process, and try out a new process with a working design). Companies trying to gain market share take a certain amount of risk by trying new designs on the fastest process.

A key point here is that the people planning these products know what the performance curves have been, so they expect that the market for a given chip will want one with double the performance after a given time. They look at their fabrication plant capabilities, when a new process will be ready, and how long the design cycle is, and judge when a product will be out. That gives them the motivation to design a chip whose performance lands on that curve at that time.
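
As a rough illustration of that planning arithmetic (the 18-month doubling period is just the usual rule-of-thumb assumption, not anyone’s actual roadmap): if performance doubles every 18 months, a part planned to ship $t$ months from now gets designed against a target of

$$\text{target} = \text{today's performance} \times 2^{t/18},$$

so a chip planned three years out would be aimed at roughly $2^{36/18} = 4\times$ today’s performance.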

I hope some of this is informative; if I can clarify, please let me know.

Thanks for all the explanations. I’m gonna spread the word of wives’-tale-dom to all my ignorant kin.

Not too accurate. It depends on what you are doing with your home PC.

If you do any of these things, you could use more power than is available today, period:

  1. 3D rendering work (Truespace, Maya, 3D Studio Max, etc).
  2. Home video editing.
  3. Current games (After playing with the leaked Doom 3 test engine, I know my P4 2.4/1GB DDR/GeForce4ti will need at least a bit more video card to play that well when it comes out.)
  4. Large list management (spreadsheet or database)
  5. Many types of programming.

I do all these things at home.

Also, effective bus speeds are much higher than 100-150 MHz, and the large CPU-local memory pools (the caches) and fairly intelligent management of them keep the CPU from doing a lot of waiting on data from main memory.

Not really. If you look at it, modern processors retire instructions at a much lower rate than they should. For these “superscalar” CPUs (ones with multiple execution units on the same die), you would expect them to finish a number of instructions each clock cycle equal to the number of execution units, or close to that number. That’s not so. For example, Athlons retire just a bit over one instruction per clock cycle on average, even though they have three execution units.

The bottleneck is clearly with the data.

This overlooks the fact that both the Athlon and Pentium CPUs are very much CISC processors.

Even if the CPU never waited on memory, the instructions per clock would not be near 3. Too many operations cannot be run in parallel, and/or require multiple clocks per instruction.

You would need to run a very hand-picked instruction mix, one that you won’t ever see in the real world, to get that to happen.
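
Here’s a small C sketch of why the instruction mix matters so much (illustrative only; the loop sizes are arbitrary and the speedup you’d actually see depends on the chip): both functions below do the same additions, but in the first one every add depends on the previous result, so a superscalar CPU can’t overlap them no matter how many ALUs it has, while the second keeps four independent running sums that the hardware can work on in parallel.

```c
#include <stdio.h>

#define N 1000000   /* element count; divisible by 4 */

/* One long dependency chain: each addition has to wait for the
   previous value of s, so the extra execution units sit idle. */
long dependent_sum(const int *a)
{
    long s = 0;
    for (int i = 0; i < N; i++)
        s += a[i];
    return s;
}

/* Four independent partial sums: these additions don't depend on
   each other, so a superscalar core can issue several per cycle. */
long independent_sum(const int *a)
{
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (int i = 0; i < N; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    return s0 + s1 + s2 + s3;
}

int main(void)
{
    static int a[N];
    for (int i = 0; i < N; i++)
        a[i] = 1;
    printf("%ld %ld\n", dependent_sum(a), independent_sum(a));
    return 0;
}
```

Most real code looks a lot more like the first loop than the second, which is part of why average retirement rates end up near one instruction per clock.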

Wasn’t it only a couple of years back that Intel prematurely released the 1 GHz P3 and actually recalled them because they weren’t reliable at that speed?