Do the major PC chip makers limit the speed of new chip developments on purpose?

I had this long fact-filled OP ready to go when I realized that it might sound like a conspiracy theory rather than a question. Therefore, I will leave the question rather open.

  1. Why do AMD and Intel roll out new versions of their chips starting at the lowest clock speed for that chip, and then release essentially the same chip in incremental clock-speed increases? For example, the AMD Athlon 2600 processor that I just bought cheaply was later rolled out in 2700, 2800, and 3000 versions. Isn’t the technology essentially the same? Couldn’t AMD just have skipped to the Athlon 3000 if they wanted to?

  2. Does Moore’s law actually limit the rate of improvements in chips? That is, do the chip manufacturers slow down the rate of development to follow this expectation and introduce a greater amount of planned obsolescence into the system because this expectation is actually too low?

  3. Is the technology already developed that would allow them to skip several steps ahead and introduce a chip that is leaps and bounds faster than anything we have now? For instance, Intel is about to roll out the first Pentium V chips, but the Pentium VI is reported to be already under development. Couldn’t they cut the Pentium V short and roll out the Pentium VI rather quickly if they wanted to?

I am sure that there are many different factors that go into the answer to this question, including economics and certain technology hurdles, but I am interested to find out what they are.

It’s all about yields. When a new chip design first comes off the assembly line, only a small percentage of the parts actually run; the rest are wasted sand. Of those that run, an insignificant number might actually run at a faster speed, but they’d ALL run at the slower speed.

Once they ramp up production, and get better at making those chips, their yields go up, and they collect enough that will run faster to actually offer them as a product. (It’s awfully hard to sell a faster chip when you have no earthly idea how many of them you’re going to have, so you have to wait until you do).
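
To put some toy numbers behind that, here is a minimal Python sketch of the binning idea; the speed grades, lot size, and distribution figures are invented for illustration, not real Intel or AMD data:

```python
import random

# Toy model only: the speed grades, lot size, and distribution numbers
# below are invented for illustration, not real fab data.

SPEED_GRADES = [2600, 2700, 2800, 3000]   # hypothetical ratings, slowest first

def bin_wafer_lot(n_dies, mean_speed, spread=150):
    """Each die gets a random maximum stable speed, then is sold at the
    fastest grade it passes (or scrapped if it misses even the slowest)."""
    bins = {grade: 0 for grade in SPEED_GRADES}
    scrap = 0
    for _ in range(n_dies):
        max_speed = random.gauss(mean_speed, spread)
        passing = [g for g in SPEED_GRADES if g <= max_speed]
        if passing:
            bins[max(passing)] += 1
        else:
            scrap += 1
    return bins, scrap

# Early in a process's life the distribution is centered low, so almost
# everything that works at all ships at the bottom grade...
print("early process: ", bin_wafer_lot(10_000, mean_speed=2650))
# ...and as the fab matures the whole curve shifts up, so there are finally
# enough 2800/3000-grade dies to launch those as products.
print("mature process:", bin_wafer_lot(10_000, mean_speed=2900))
```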

-lv

Thanks for the response. I do remember now reading something about quality control being an issue. However, that makes me wonder why quality control is an issue with every new chip that they bring out. If it doesn’t work now, how do they know that they will ever get reliable chips? And if they know what they need to do to improve it, why didn’t they do it in the first place?

  1. As LordVor mentioned, practice makes perfect.

  2. Moore’s law worked (and still works, sorta) as a simplified, seat-of-the-pants economic analysis. Sure, given an unlimited amount of cash and an unlimited number of engineers and others who were somehow able to communicate perfectly, a chip manufacturer could increase the performance of their products far more quickly. However, given the realities of doing business, a company needs to be careful not to let its R&D budget bankrupt the company or let its product line fade into obsolescence. Both have happened (sometimes to the same company), and most companies try to find a middle path between these two risks.

  3. The technology is in development. A lot of it doesn’t exist for the reasons stated above.

You are equating quality control issues (actually yield issues) with a black-and-white definition of whether something works or not. ICs are manufactured in large batches, with some not working at all, some working but only within limited parameters, and some working great.

Intel generally doesn’t introduce a chip until it is very sure it can manufacture it in very large quantities. If it can’t, then the price will remain extraordinarily high. AMD, being the challenger most of the time, will take a somewhat more aggressive policy in introducing chips early. This makes economic sense for both companies: Intel generally has to supply a much larger customer base, since it typically owns ~85% of the microprocessor market, while AMD has a significantly smaller customer base.

Moore’s law is more of an observation, although one that has been fairly accurate over a long period of time, than a theory that explains *why* chip densities progress at a more or less constant rate.
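
For a rough sense of how well the observation has held up, here is a back-of-the-envelope Python calculation; the transistor counts are approximate public figures, and the two-year doubling period is the usual rough rule of thumb:

```python
# Back-of-the-envelope check of Moore's law as an observation: transistor
# counts doubling roughly every two years.  The figures below are
# approximate public numbers used only to show the shape of the curve.

transistors_4004 = 2_300          # Intel 4004, 1971
years = 2004 - 1971
doubling_period_years = 2.0       # the usual rough figure

predicted = transistors_4004 * 2 ** (years / doubling_period_years)
print(f"predicted for 2004: ~{predicted:,.0f} transistors")
# Prints roughly 200 million, which is in the same ballpark as actual
# 2004-era desktop CPUs (a Pentium 4 "Prescott" has on the order of
# 125 million transistors), which is why the "law" keeps looking accurate
# even though it is only an empirical trend, not a physical theorem.
```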

When you ask if technologies exist that would allow companies to skip a generation, you have to differentiate between what is available in a laboratory situation and what can be counted on as an economically feasible large-scale manufacturing process. In the latter sense, a company with any reasonable competition would have no financial motivation to hold technology back.

It’s an issue every time because it’s a new core every time. That’s the whole “generation” part of it: chips of each generation have more, smaller parts than the generation before them, and manipulating things on that scale takes practice.

It’s like I asked you to sort a bag of marbles. After you did it, would you be able to jump right to sorting grains of sand? Or would you have a better chance of success if I first asked you to sort pieces of gravel, and then grains of long-grained rice, prior to tackling sand? That, in a very general sense, is why it’s not practical to skip a generation.

Straining the analogy a bit, even if you end up being a master rice-sorter, and you have some good ideas on how to go about sorting sand, it’s another thing entirely to actually go to the beach with a bucket and a pair of tweezers and start doing it. After a few hundred hours, you’ll get new thoughts on how to sort the sand more quickly with fewer errors.

And as for how they know it will work eventually? I suppose they don’t, but they don’t have much choice. The PowerPC is a pretty good example here, actually. It took Apple forEVER to cross the 1GHz barrier. Their vendors (IBM and Motorola) just couldn’t get any yields at that speed. Eventually, Apple had to start selling dual-processor G4s just to keep up with the PCs available. It was a major hindrance, just as Macs were beginning to catch on again.

-lv

There was a small ‘conspiracy’-ish controversy way, way back in the early days of the PC boom (circa 1992). Some folks might remember that for a while you could upgrade rather than just replace your motherboard/CPU.

Capitalizing on this concept, Intel came out with the ‘Overdrive’ socket, which sat alongside the main CPU socket. The controversy was over the 486SX chip. In a clever marketing strategy, Intel sold it both as a main CPU and as a (much less expensive) Overdrive upgrade chip. The thing was, off the assembly line, they were identical chips. So to prevent the obvious (buying the cheaper Overdrive version and using it in the main CPU socket), Intel simply crippled the Overdrive version so that it would no longer work as a standalone CPU.

Seems pretty trivial now, but back then CPU chips weren’t commodities that you thought you were just going to toss in the garbage in two years. Plus Intel still had a monopoly.

As Hail Ants states, many batches of the 486SX were simply 486DX chips with the FPU disabled. This allowed them to sell an entry-level processor with no additional design effort.

The “conspiracy” involved is making money. Intel (and others) will always have a line of CPUs available at different speeds because they maximize their profits that way. They receive high margins with their very fastest chips and sell the slower CPUs at nearer to cost. Not everyone wants the fastest, but those who do are willing to pay the premium.

This is also partly due to the realities of manufacturing. In most situations the fastest clock speed is not obtainable with the initial layout. Even with the same fundamental design (“Prescott”, for instance), there are constant subtle refinements to the production masks to allow for faster clock rates without errors. They can work out these issues while the initial runs at slower speeds are in production, making money.

It can take years of design before a chip is ready for production. Sun recently scrapped their UltraSPARC V chip after it had been in development for over four years. It is not only the time the design takes, but also the refinements required in the fabrication plants to allow for the creation of even denser chips, that slows the release of new chips and keeps Moore’s law valid.

That’s not exactly the case. Moto had a good market for the PPC as embedded chips (i.e., the thingy in the coffeemaker), but Apple killed off the high-end market by killing the cloners. This reduced Moto’s market for the PPC severely. Now, for a while Moto had been developing faster PPCs, but they stopped most of the lab work on one. They could actually have produced them; it just wasn’t cost-effective once you counted the development cost.

IBM, IIRC, came in later, but they’re very leery of Apple and won’t commit big resources to development even now. Apple burned too many bridges with too many people (screwing Moto was just one example) while clawing its way out of the holes it dug, and IBM doesn’t trust them.

Anyway, these chip developers aren’t about to forget a few famous companies that self-destructed because some idiot marketing stooge harped on and on about the development of version 2 before version 1 had even started selling. I recall several corporations that died suddenly because some marketer couldn’t keep his big fat mouth shut. The details are different but the point is the same: these companies probably can’t afford to wait several years while they smooth out chip production.

Intel actually did one better with the 486SX. Not only was it just a DX with the math coprocessor disabled, but they also sold a 487 upgrade. The 487 was just another 486DX with a small change so it wouldn’t run standalone. When a 487 was installed, the original 486SX was actually disabled and the 487 did everything. With that, Intel could sell two full processors for one computer. They still charged more for the actual 486DX, even though they could obviously have sold it for the same price as the SX. Granted, this is ancient history, but it does make you want to cheer for AMD a little.

A different way to look at chip release schedules is from a reliability standpoint. The speed rating of modern chips just tells you the maximum speed that the manufacturer recommends running it at. Overclockers have known this for a long time, and old Celeron chips have been pushed to ridiculous speeds.

If you buy a brand-new ZZ-230 and a week later the ZZ-250 comes out, you could be relatively sure that, barring multiplier or bus speed locking, your 230 could run at 250 speeds for at least a short while. If you went out and bought a 250, however, you were paying extra for the manufacturer’s guarantee that it would run at those speeds. When they later release the ZZ-270, you can try to run your 250 at those speeds, too… and it might work! But you might end up with fried sand.
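
To put rough numbers on the “paying for the guarantee” point, here is a tiny sketch; the chip names, speeds, and guard band are all invented:

```python
# Rough sketch of the "you're paying for the guarantee" idea: the rated
# speed is the tested speed minus a guard band, so a chip rated below its
# true ceiling often overclocks, but with no promise of stability.
# All numbers below are invented for illustration.

true_ceiling_mhz = 265   # what this particular die can actually sustain
guard_band_mhz = 15      # margin kept for temperature, voltage, and aging
rated_mhz = 230          # grade it was sold at (a hypothetical "ZZ-230")

headroom = true_ceiling_mhz - guard_band_mhz - rated_mhz
print(f"likely safe overclock headroom: ~{headroom} MHz")   # ~20 MHz on this die
# A die sold as a ZZ-250 carries the manufacturer's promise at 250 MHz;
# the ZZ-230 owner who runs it at 250 is gambling on the silicon lottery.
```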

I recently bought a 35-watt mobile version of AMD’s Athlon XP 2400+, and am running it significantly slower than the stock speed, with a much quieter heatsink. I expect to be running it at its current speed for as long as it meets my needs, at which point I’ll switch to a top-of-the-line heatsink (or, if they’re available, a cheap liquid-cooling rig) and run that puppy up to the chocks.

:smiley:

My understanding is that the FP unit was defective in at least some of these, so disabling it and selling the chip as an SX was a way to improve yields. Almost all caches in processors are made with redundant rows and columns, so that when cells are defective the redundant rows are turned on and you can still ship the part.

You should understand that the maximum speed you get from a lot follows a distribution, with a small number of really fast parts, more at medium speeds, and some slower. As your process improves, the distribution shifts toward higher speeds, and your yield of fast parts increases. A process shrink also improves speeds.

Right. There are millions of paths in a processor design, and the ones limiting speed are the critical ones. Much of the design effort is finding paths that don’t make speed and improving them, often by putting in a faster transistor (which is bigger) or buffering the path. Once you hit the target speed, you tape out. However, there are people who keep looking at speed paths and correcting them long after tape-out. When you fix all the slow paths, you get better yields at higher speeds. You often don’t even need a completely new mask, as there are usually spare gates you can use with a light tweak of one or two metal layers to do the job.
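
To make the speed-path point concrete, here is a toy Python sketch; the path delays are invented numbers, not from any real design:

```python
# Toy illustration of the speed-path idea: the clock can only run as fast
# as the slowest path allows, so fixing just the few worst paths raises
# the shippable frequency.  Delay numbers are invented, in nanoseconds.

path_delays_ns = [0.95, 1.10, 1.02, 1.35, 0.88, 1.33]   # worst paths in a block

def max_clock_mhz(delays_ns):
    # The clock period must cover the longest (critical) path delay.
    return 1_000 / max(delays_ns)

print(f"before fixes: {max_clock_mhz(path_delays_ns):.0f} MHz")   # limited by the 1.35 ns path

# After tape-out, engineers keep attacking the worst offenders (a bigger,
# faster transistor here, a buffer or a spare-gate metal tweak there)...
fixed_delays_ns = [min(d, 1.10) for d in path_delays_ns]
print(f"after fixes:  {max_clock_mhz(fixed_delays_ns):.0f} MHz")  # now the 1.10 ns paths set the limit
```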

That notion seems to be popular, probably because it is noted in the articles in FOLDOC and Wikipedia. However, these articles contain errors that make me question the validity of that statement. Two statements in particular stand out:

False. Around 1992-1993 they created an SX-specific design without an FPU to increase yields on silicon. FPU-free versions of the 486SX are still available (in low-power and embedded forms).

False. This was true with the 386SX, but is not true of the 486SX.

I knew quite a few designers at Intel in the 90’s and all that was claimed at the time was that early batches of the 486SX were actually the same as DX chips with the FP unit disabled. This was to get a low cost unit out the door without having to create a separate layout. Whether these were created from defective units or not is questionable.

I would tend to believe that this is just a myth, as disabling the FPU after it failed testing would be difficult and would not generate the quantities of SX chips that they needed. A quick google search for cites did not yield any reliable source that would give credence to the defective chip theory or refute it. Most places just parrot the FOLDOC article. There are a few threads on Usenet from the time where Intel employees denied the idea, but they may not have necessarily been in a position to know.

Here’s a nice overview of processor developmental history from the 8086 up to the Pentium II 300 units.

Processor Types and Specifications

Well, I have worked at Intel, but not on that project, and it never came up. I cannot imagine why anyone would plan to build a chip with the FPU disabled: the size would be the same, the yields almost the same, and you would be producing an expensive chip with a lower selling price, at least to begin with. However, if you were able to disable defective FPUs (which are fast and therefore more prone to timing problems), you’d be getting money out of scrap.

I don’t doubt that when the SX took off they built them deliberately, though, since by then they’d be up the yield learning curve.

Would disabling the FPU be difficult? Not necessarily. It is a common trick to build in hooks to disable defective units. This is of great use during bringup, where you can test out the logic of working units (“unit” being the Intel term for a major logic block) even if there is a logic bug in another. New mask sets are expensive, so it is of great benefit to find as many bugs as you can before doing another tapeout.

Also, if you think it would be difficult to get enough parts, you haven’t seen processor yield numbers, especially early in a chip’s life. I have, and they’re not pretty. I can’t imagine the marketing people forecast the demand there turned out to be; remember, this was being designed just before the big boom in <$1K PCs.

So I don’t know for sure it’s true. I’m not surprised that no one from Intel is talking. Intel lifers don’t. It is more plausible than you might think, though.

Can anyone tell me if this is/was true or false? The OP reminded me of a reasoning I once heard behind the relatively fast upgrades available for processors.

Back in 1995 I had a professor who gave a lecture on the advancement of computing power. He basically stated that companies like Intel spent such an incredible amount of money developing faster processors that they had to hold back to recoup the costs through consumer purchases.

The way he put it, Intel would easily be three generations ahead of what was available to the private market (while offering “current” technology to major universities and the government). This way, every 18-30 months they could dump the newer tech on the public, thinking that was a good time frame to get people to buy the latest chips, thereby keeping a steady supply of “upgrades” to market when the last generation’s sales fell off.

Any chance this is why it seems like the speeds are limited?

Ditto, except I do remember conversations where this had come up, if only casually. I had looked for cites that would back up my recollection, but only found non-official statements that, for all I know, were from employees as clueless as me.

I do not see that it is nonsensical that they would disable the FPU. Intel already had the DX version in production. They were starting to lose market share on the low-end market to AMD, Cyrix, et al. who were just releasing 386 clones. And, they were just starting their “Intel Inside” campaign to create brand awareness. They needed a lower cost alternative that was superior to the competitors and they needed it quickly. They could take a hit on margins to differentiate their CPUs, keeping the competitors at bay.

It’s a similar economic problem to indestructible lightbulbs. Why would I buy the new and improved 30-year lightbulb if my 20-year lightbulbs are only a few years old?

OTOH, I really don’t think the folks at Intel had the plans for the Xeon processor locked away in a file cabinet since 1992 or that double secret military projects had them since 2000. They might already have working prototypes for their next generation of chips, but it’s a long way from building a prototype in a lab to shipping thousands of commodity units.

The different clock speeds, as others have mentioned, are purely a result of manufacturing and quality assurance.

[QUOTE=duffer]
Any chance this is why it seems like the speeds are limited?[/QUOTE]

There’s a big difference between what you are capable of producing and what you are actually producing.

Think of it this way. Right now this very moment, Intel decides to make a new processor which is the best thing they are capable of producing. A huge team of engineers starts to work on the thing, and it takes them months to actually put the design down into transistors and create something that they can fabricate. Now the production folks have to set up the production line. So, millions of dollars and many many man hours later, the first chips roll off the production line.

But at this point, the rest of the guys at Intel haven’t been sitting around playing solitaire on their computers. In the past year and a half while one group of guys has been busy putting ideas to silicon, these other guys have come up with improvements that would make their processors twice as fast.

So, the main reason that Intel always has the ability to produce something a lot faster than what they are actually producing is due mostly to the incredibly large effort it takes to put all the bits together and make a product. It’s not a big conspiracy, it’s just the nature of the business.

There’s an art to deciding how long to produce a chip while letting the research guys come up with new ideas. If you only produce the new chip for six months before starting to set up production for the next-generation chip, you won’t make enough money to cover the costs of engineering and production. If you don’t give the research guys enough time, the next generation won’t be that much more powerful than the current chip. If you wait too long, your competitors come out with a better chip and soak you in the marketplace. It’s a tough business.

engineer, what I was trying to convey was that the chip companies may hold back a bit and slowly trickle the tech out to the public to make money on each new release.

Now, you’re probably more informed about the industry than I am, so I’ll just use arbitrary numbers. The higher the number, the faster the chip.

Say Intel has a chip running at 200. They release the chip to Dell or Gateway or what have you for sale to the general public. As time goes on, Intel has R&D working on 300. But Intel hasn’t made enough money on the 200 yet to cover expenses. So they sit on the 300. As the 200 money rolls in, they have R&D working on 400 to make sure they can keep up with the latest systems offered in the near future. So on and so forth.

Is that theory all wet? Keep in mind, I’m basing that on what my professor taught us about computer tech almost a decade ago. But we have a pretty major computer science/aerospace system up here, so I assumed it was a credible argument.