Do the major PC chip makers limit the speed of new chip developments on purpose?

IIRC the AMD K6-3 was first introduced at a higher speed, maybe 500 MHz, and then a 333 MHz version came out. Some in overclocking circles speculated that it was the same chip, since it overclocked to the same speeds as the faster one did. It was a marketing effort to let the K6-3 go head to head against the P2 on the high end while still being competitive on the low end.

Also, I think that when chips are made, many may only operate at a lower speed, but some will do more. Those faster ones are too rare to bring to market until manufacturing processes improve, so they are sat on or “down-labeled”.

Because they needed two levels of performance, and it’s much cheaper to build just one design instead of two.

The company I work for makes some products with optional levels of memory, from 0.5 MB to 64 MB. There’s a $6500 difference between the top and bottom levels. The thing is, we build all of them with 64 MB, and enable only as much as the customer buys. The good part is that if they want an upgrade later, it’s just a license to enable it.
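For what it’s worth, that kind of license-gated capacity is easy to sketch. All names and numbers below are illustrative, not the poster’s actual product:

```python
# Hypothetical sketch of license-gated capacity: every unit ships with
# the full 64 MB installed, and the license caps what is usable.
FULL_CAPACITY_MB = 64

def usable_memory_mb(licensed_mb):
    """Usable memory is whatever is licensed, capped by what's installed."""
    return min(licensed_mb, FULL_CAPACITY_MB)

print(usable_memory_mb(0.5))   # 0.5
print(usable_memory_mb(64))    # 64
# An "upgrade" later is just a new license with a bigger number;
# the hardware never changes.
```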

duffer

Your theory might work if the company is a monopoly or there is collusion. But there is no way AMD is going to sit on a better/faster/cheaper chip if they think they can beat Intel to the market with it. The gains on the better/faster/cheaper chip would far outdistance the money they’d make on the tail end of the present technology.

There are TEAMS of people looking at the projected market and costs of production for these things long before they are introduced. They use predicted mature yields and a host of other variables to determine if the product is viable. If they come to the conclusion that the “200” version won’t make (enough) money, they likely will just divert the resources from that one to the “300” version to get the 300 out the door faster.

Having said that, there is also the possibility they will sell the 200 at a loss just to gain ground in the low end of the market. Name recognition and all that. Then they do roll out the 300 and make huge profits. But they won’t hold back the 300 purposely.

I doubt it. If Intel could get any solid speed advantage over AMD, they’d use it. And vice versa.

Here’s another thing you may not have thought about.

You know how software has versions? If you’re on Netscape, you might be on 6.2, or 7.1? (Or, heaven forbid, 4.76, like my office).

A great many users never know that the chips have versions, too. (And, in fact, for many chips there is no simple way for the user to find out what the version is.)

These versions each represent some change to the chip – either in transistors (I’m adding some gates here, removing this one there) or in process (let’s add a clean-up step after the glass etch on layer 28…) or both.

These revisions happen for many reasons. The most obvious ones to users are bug fixes – like when your FPU gives you the wrong results, and the next version is correct. They also do them to improve yield (which reduces the cost), or to improve performance. Speed, in other words.

After you have designed your chips and manufactured them, you test them and find out just how close your expectations and simulations are to reality. Then you start looking at how to improve the chip, because if it’s a faster chip, you can sell it for more dough. You run tests on why it’s slow – what instructions fail if you try to speed up the clock? What cycle does it happen during? What path (or paths) in the chip is the slowest?

This is hard work but – if you succeed – you may come out with minor changes that will improve the speed of your chip. If those are the only changes you are making, you will have to carefully analyze the cost of changing the design vs. the improvement to profit for the faster chips. On the other hand, if you already have to make a new mask set to fix a bug or improve the yield, you will go ahead and include the speed improvements, too.

So MegaChip 1.0 may go at the speed of 100 RSU’s (Random Speed Units), while 1.1 goes at 110 RSU’s, and 2.0 leaps to 150 RSU’s.

There’s also the issue of speed grading, which I can discuss at length if anyone expresses an interest.

Here’s my attempt at short answers:

Actually, they couldn’t get to the Athlon 3000 without what they learned from producing the 2700. They also have economic incentive to get them out as high performing as possible. I’ll talk about design revisions in the next post.

No, what limits the rate of improvement in chips is how fast human beings can come up with new technology, learning from what they’ve done. Moore’s Law merely notes the speed at which this has occurred, and extrapolates it. It’s pretty consistent when you look across decades. If you look at a smaller window of time, you’ll see that we make jump improvements that are faster than Moore’s Law says, followed by periods where we are slower. Moore’s Law just describes the average in the long run. It’s descriptive, not prescriptive (although companies may make business decisions using Moore’s Law to estimate where the market and their competitors will be).
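As a rough sketch of what “descriptive, not prescriptive” means here, Moore’s Law is just an exponential extrapolation. The starting count and the doubling period below are assumptions for illustration only:

```python
# Illustrative sketch of Moore's Law as a long-run average.
# The starting transistor count and the 2-year doubling period
# are assumptions, not measured data.

def moores_law(count_start, years, doubling_period=2.0):
    """Extrapolate transistor count after `years`, doubling every
    `doubling_period` years."""
    return count_start * 2 ** (years / doubling_period)

# Starting from a hypothetical 1,000,000-transistor chip, ten years
# out is 2**5 = 32x the starting count:
print(moores_law(1_000_000, 10))
```

Real progress, as the text says, happens in jumps above and lulls below this smooth curve; the formula only captures the long-run average.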

No, that’s because of the long development cycle of chips. If the cycle is (say) three years from concept to production, and you want versions coming out two years apart, it means you’ve started designing the VI before the V is completely ready. That doesn’t mean they can shorten the design cycle on the VI – part of which is spent waiting for new technology to be developed. And consistently cutting the V program to staff the VI program is a recipe for disaster – if you always do that, you will never have a chip out. You might do it rarely, if you find you are behind the market.
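The overlap arithmetic can be sketched in a few lines, using the hypothetical three-year cycle and two-year cadence from the text:

```python
# Back-of-the-envelope overlap of chip programs, using the
# hypothetical numbers from the text: a 3-year design cycle and a
# new version every 2 years.
design_cycle_years = 3
release_cadence_years = 2

# If V starts at year 0, it ships at year 3. VI must ship at year 5,
# so VI has to start at year 5 - 3 = 2 -- a year before V ships.
overlap = design_cycle_years - release_cadence_years
print(f"VI design starts {overlap} year(s) before V ships")
```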

The issues are design improvements; test improvements; and process improvements. These may well happen independently of each other. A given processor design may be moved with little change to a faster process, resulting in performance increase in the same family.


I’m having a little trouble with the concept of clock speed being a production or quality control issue. So there is really no physical design improvement involved, it’s just “we’re making more of them better so now they are faster?” To me that’s like saying that if Ford builds enough Escorts, the horsepower will just naturally increase from, say, 120 to 180 because now they know how to build them better. I don’t know… maybe it’s not an appropriate analogy, but I’m not getting it.

Well, the theory is not far from the truth. Old fogies like myself will remember that in the very early 1990s, computers were still running $3K+ for the latest processors. Then AMD started producing compatible chips and, lo and behold, within a *very* short time ‘486 chip sets were available at ridiculously low prices. The competition between AMD and Intel in the early ‘90s accelerated the until-then leisurely roll-out of faster/cheaper chips to an almost ludicrous degree.

I don’t think it’s debatable that until the advent of AMD, Intel was content to maximize its return on processor technology through carefully staged rollouts of faster processors. (To a lesser extent, Intel was competing with the Motorola 68xxx family, but Motorola never really devoted the resources to keeping up. Possibly out of pique with Apple, as one poster suggested, but more likely because it wasn’t directly relevant to where they felt their main business lay, and also because their management was questionable at best.)

I work for a company that makes process inspection equipment for chip makers. The production process is very complex and can affect speed in many ways. The easiest one to consider is probably heat. No matter how good your design is, the faster you run it the more heat you generate. If the components are less than perfect, the resistance goes up – that’s the R in the P = I²R equation, and P is what is generating your heat. Heat makes the metal interconnects’ resistance increase and the insulators’ resistance decrease, which leads to errors. The original line of Athlons could reach 1 GHz even though they were sold at a maximum of 700 MHz. But to get that, a third-party company had to cool them to something like -30 degrees C.
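The Joule-heating relation is simple enough to sketch directly; the numbers below are purely illustrative:

```python
# Joule heating: P = I^2 * R. The current and resistance values here
# are made up for illustration, not real interconnect figures.

def power_dissipated(current_amps, resistance_ohms):
    """Power in watts dissipated as heat in a resistive element."""
    return current_amps ** 2 * resistance_ohms

# For a fixed current, doubling the resistance of a marginal
# interconnect doubles the heat it generates:
print(power_dissipated(2.0, 0.5))  # 2.0
print(power_dissipated(2.0, 1.0))  # 4.0
```

And since heat raises resistance further, a marginal part can run away from you at higher clocks – which is why imperfect components cap the usable speed.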

To beat the heat issue a number of things can be done. Most major chipmakers have lowered resistance by switching from aluminum to copper interconnects. This was incredibly complicated because copper contaminates silicon. Making the components smaller has been an ongoing trend. AMD has been experimenting with using monoisotopic silicon in its chips. All of these things are major investments and involve a lot of working out the kinks. My company makes close to a billion a year in revenue just finding and preventing process defects and errors.
In all reality, the chipmakers are getting the chips up to speed as fast as they can afford to.

The closest thing to a conspiracy is marketing mature chip lines at multiple clock speeds. By the time a new chip comes out, over 90% of the old chips could theoretically run at the maximum speed.

I can understand your confusion. There is something complicated going on here, so let me try to explain it.

I’m going to distinguish between the “design” and the “process” used for building a chip. Let’s say my simple circuit is an inverter: A = NOT (B). In CMOS (a popular flavor of transistor types, if you will), you can make that with two transistors and some connecting metal. How I connect those transistors, the length of the metal trace, the width of the active gate of the transistor, and so forth, are all part of the design. (I am only slightly oversimplifying for illustration.)
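A behavioral sketch of that inverter may help – logic only, since real transistor behavior is analog; this just models the complementary pull-up/pull-down idea:

```python
# Behavioral sketch of the CMOS inverter from the text: A = NOT(B).
# In a CMOS inverter, a PMOS transistor pulls the output high when
# the input is 0, and an NMOS transistor pulls it low when the input
# is 1. Here we only model the resulting logic.

def inverter(b: int) -> int:
    pmos_on = (b == 0)   # PMOS conducts: output connected to Vdd (1)
    nmos_on = (b == 1)   # NMOS conducts: output connected to ground (0)
    return 1 if pmos_on and not nmos_on else 0

print([inverter(b) for b in (0, 1)])  # [1, 0]
```

How fast that output actually switches is exactly what the design and process details below determine.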

But this design has to get built using a manufacturing process. For the transistors themselves, some machine has to spit ions of some impurity (like boron or phosphorus) into the silicon where the transistor goes. It has to be set to deliver a precise amount of these impurities. The connecting metal is deposited to a certain thickness, and then etched to a certain width. Etching can also reduce – or not reduce – the width from what the design specified.

Right away you should notice that the manufacturing process – the thickness of the metal, the doping of the transistor (which relates to its electrical characteristics, like maximum current, leakage current, turn-on voltage), and so forth – can be varied independently of the design. That is to say, I take the same design, and on Monday I set the dials one way and make one batch, and on Tuesday I set the dials another way and make a different batch.

Now, the speed of this design relies fundamentally on the resistance, capacitance, and inductance of these circuit elements. These characteristics, in turn, rely both on design elements (transistor drawn width, drawn metal size) and process elements (actual widths, metal thickness, impurity doping). That is why taking the same design and varying the process results in different speeds of the units. (It can also change other things, like power consumption, and reliability.)
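As a first-order sketch, the delay of a stage scales with the R·C product, so a process shift that changes resistance changes speed even with an identical design. The values below are made up, not real process numbers:

```python
# First-order sketch: signal delay through a stage scales with the
# product of its resistance and capacitance (tau = R * C). The ohm
# and farad values here are purely illustrative.

def rc_delay_seconds(resistance_ohms, capacitance_farads):
    return resistance_ohms * capacitance_farads

# Same design, two process outcomes: a thinner metal line has higher
# resistance, so the identical circuit comes out slower.
nominal = rc_delay_seconds(100.0, 1e-12)   # 100 ohms, 1 pF
thin    = rc_delay_seconds(150.0, 1e-12)   # same C, 50% more R
print(thin / nominal)  # roughly 1.5x slower
```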

What’s worse is that with the top technology processes, all of these parameters are so sensitive to change that they cannot be controlled precisely. There is some natural variation that is going to occur, and you can’t help that. What you can do is make your design robust enough to withstand as much variation as you can; control your process to have as little variation as you can; and set your center “goal” for each step of your process so that the natural variation is firmly within what the design can tolerate.

Within that variation, of course, are faster and slower chips. You go and test them afterwards, everything that is 100 RSU’s (random speed units) goes in this bucket and we charge $ for them, and everything that passes 150 RSU’s goes in this much nicer bucket, and we charge $$$ for them (for there are fewer of them, but more people want them). THAT is the production issue.
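That binning step can be turned into a toy simulation – the speed distribution, thresholds, and bucket names below are invented, just echoing the RSU buckets above:

```python
import random
from collections import Counter

# Toy speed-binning sketch: identical designs come out of the fab at
# different speeds because of process variation, and testing sorts
# them into price buckets. Mean, spread, and thresholds are made up.
random.seed(42)

def bin_chip(speed_rsu):
    if speed_rsu >= 150:
        return "$$$ bucket (150+ RSU)"
    elif speed_rsu >= 100:
        return "$ bucket (100+ RSU)"
    return "scrap"

# Simulate a lot of 1000 chips with normally distributed speeds:
speeds = [random.gauss(120, 20) for _ in range(1000)]
print(Counter(bin_chip(s) for s in speeds))
```

Most parts land in the middle bucket, a minority pass the premium grade, and a few fail outright – which is why the fast bucket costs $$$.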


I’ll give you a counterexample. Back seven years ago, when processor clock frequencies were in the 250 MHz range, Intel started coming out with versions that were only 10 MHz apart – practically nothing. This didn’t work, and rumor had it that the processor jewelry sold in the Intel store was made of good die.

The real reason it doesn’t happen is margins. You can get more money, and more margin, from a faster chip than from a slower one. You build sales either by coming out with faster parts, so leading-edge purchasers buy them, or by cutting prices on older parts, which you can do because your yield is better. But companies prefer the first, because the margins are better.

Look at Microsoft. As a monopoly, you’d think they wouldn’t have to upgrade Office at all, but they keep throwing in more bloat in newer versions to get people to upgrade.

You don’t design to your current process, you design to the process that is supposed to be there when you tape out. Process rules are strictly separated from architecture (except that the process controls how much stuff you can put into the chip.)

I was involved in a design that switched process technologies mid-stream. The RTL people and the test people weren’t affected at all; the circuit designers were more affected, but it wasn’t as big a deal as you might think. (I’m a test person, so I didn’t feel it at all.) You also specifically identify chips which are process drivers – that is, the first chips made in a new process. You expect problems with those.

And as mentioned, it isn’t months - it is years.

As for the FPU – I still don’t buy it. I’m having trouble seeing my marketing friends deciding to sell a version with lower margins in anticipation of market demand for a crippled design. Yes, the demand turned out to be there, but notice that the current way of meeting this demand, the Celeron, does it by removing the cache. That cuts die size and improves yield, and is much easier for the OS to deal with than having to do floating-point ops in software. Also, why not target slower processors to that market? It just doesn’t smell right, but I can’t prove it. I’m going to a conference next month where a lot of old Intel hands go; I’ll see if I can find out.