the march of progress

A couple of years ago, the hottest computer processor on the market was a 100-MHz processor. Then came the 200, 300, 400, and now a major corporation has announced they’ll release a 700-MHz processor in the first half of 2000. So why didn’t they release the 700 MHz at the beginning? What gives with this marketing strategy? I don’t think it was a question of not having the technology to do it, but then again, since I’m not Cecil, I could be wrong. I’ve seen this pattern in video game machines too (the 8-bit systems, then the 16-bit, then the 32, then the 64, etc.). Any ideas?

It is a matter of technology, not just a marketing decision.

A few years ago, the 486 was the latest and greatest chip. It was faster and more capable than its predecessor, the 386. Then came the Pentium, which was faster still and could do more. After that came the Pentium II and Pentium III. Asking why a 486 is slower than a Pentium III is like asking why the first Ford Model T is slower than a modern Ford Taurus.

Even within one generation of processors, the different clock speeds are due to tech changes. (Basically, the system clock signal is what lets the processor distinguish one operation from the next.)

When the Pentium (for instance) was first released, it was rated for something like 50 MHz. That is, it could keep up with a clock signal as fast as 50,000 cycles per second. You could make it run faster by giving it a faster signal, but “overclocking” in this manner can ruin the chip. The faster a chip runs, the more heat it generates, and at some point it will get warm enough to start damaging the teeny transistors.*

You can design a processor to run faster without overheating, for instance by finding a better organization for the transistors on it, but this is a very time-consuming process. So in effect, Intel does just enough development to get one chip working, then says: “Here’s our first Pentium chip, don’t run it faster than 50 MHz. Give us a couple of months, and we’ll have one that can do 75.”

At a certain speed you reach the point of diminishing returns: it takes so much development time for each increase in clock speed that you’re better served by redesigning the chip itself and making it more sophisticated. This is why they took Pentiums to a certain point, then released the Pentium II.

So the reason they didn’t release the 700-MHz chip in 1990 is the same reason that Henry Ford’s cars couldn’t drive 70 MPH.

(* - In reality, chip manufacturers leave a margin of error, so you can overclock your PC a certain amount without harm. But if you tried overclocking a 50 MHz Pentium to 500 MHz, you’d kill it.)

I have no doubt that this was a typo, but I’ll just mention that 50 MHz should have been 50 million cycles per second, not 50,000.
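
A minimal sketch of that arithmetic, in Python, with frequencies chosen only for illustration:

<CODE>
# Convert a clock rating into cycles per second and the time available per
# cycle. The frequencies listed are examples, not specific chips.

def clock_stats(mhz):
    """Return (cycles per second, nanoseconds per cycle) for a clock in MHz."""
    cycles_per_second = mhz * 1_000_000      # 1 MHz = one million cycles/second
    ns_per_cycle = 1e9 / cycles_per_second   # nanoseconds available per cycle
    return cycles_per_second, ns_per_cycle

for mhz in (50, 500, 700):
    cps, ns = clock_stats(mhz)
    print(f"{mhz:>3} MHz = {cps:,.0f} cycles/second, {ns:.1f} ns per cycle")

# 50 MHz  -> 50,000,000 cycles/second (20.0 ns per cycle)
# 700 MHz -> 700,000,000 cycles/second (1.4 ns per cycle)
</CODE>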

And heat isn’t the only issue. The clock tick (the “cycle”) allows time for logic signals to propagate through the CPU’s data path. Overclocking can lead to certain bits failing to settle in time, giving very unpredictable results. Hard drive corruption is one of the nastier possibilities; a hard freeze would probably be the most likely result.
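
Here is a rough sketch of that timing constraint, using a made-up critical-path delay rather than figures from any real chip:

<CODE>
# The clock period must be long enough for signals to settle through the
# slowest chain of logic between two registers (the critical path). The
# delay below is a hypothetical number used only for illustration.

CRITICAL_PATH_DELAY_NS = 18.0   # assumed worst-case propagation delay

def period_ns(clock_mhz):
    """Clock period in nanoseconds for a given frequency in MHz."""
    return 1e3 / clock_mhz

def settles_in_time(clock_mhz, delay_ns=CRITICAL_PATH_DELAY_NS):
    """True if one clock period leaves enough time for the logic to settle."""
    return period_ns(clock_mhz) >= delay_ns

print(f"Max safe clock: about {1e3 / CRITICAL_PATH_DELAY_NS:.0f} MHz")
print("Run at 50 MHz? ", settles_in_time(50))    # True  - 20 ns period
print("Run at 100 MHz?", settles_in_time(100))   # False - only 10 ns period
</CODE>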

<QUOTE>
A couple of years ago, the hottest computer processor on the market was a 100-MHz processor. Then came the 200, 300, 400, and now a major corporation has announced they’ll release a 700-MHz processor in the first half of 2000. So why didn’t they release the 700 MHz at the beginning?
</QUOTE>

Well, as a computer engineer I can probably answer this question. The first thing is that it’s not true that the 100-MHz 486 was the “hottest computer processor” several years ago. Not by a long shot. There were volume-shipping CPUs such as the Alpha running at 400 to 500 MHz several years ago. Also note that performance isn’t directly measured by clock rate. It’s possible for one type of CPU at X MHz to be many times faster or slower than another one at the same X MHz. This is just from memory, but IIRC, Alphas from several years ago were on the order of 15 times faster than the fastest Intel chips of the time. Similar was true of Sparc and PA-RISC.
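
To put numbers on that, here is a toy model in Python; the per-cycle figures are invented for illustration, not real chip specs:

<CODE>
# Toy performance model: instructions per second = clock rate x instructions
# completed per cycle (IPC). Both IPC values below are made up.

def mips(clock_mhz, instructions_per_cycle):
    """Millions of instructions per second under this simplified model."""
    return clock_mhz * instructions_per_cycle

cpu_a = mips(500, 0.8)   # hypothetical narrow, in-order design
cpu_b = mips(500, 3.2)   # hypothetical wide, superscalar design

print(f"CPU A: {cpu_a:.0f} MIPS")   # 400
print(f"CPU B: {cpu_b:.0f} MIPS")   # 1600
print(f"Same clock, {cpu_b / cpu_a:.0f}x difference in throughput")
</CODE>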

But that doesn’t directly answer your question, which is, roughly, why not just do the faster ones right at the beginning? There are a lot of factors that go into this, so any answer I give here will be inaccurate through oversimplification, but it’s roughly similar to asking, “If the Concorde can fly at Mach 2 now, why didn’t the Wright Brothers have their plane go that fast?” It takes many iterative cycles to learn enough to achieve higher performance. It’s a pretty difficult thing to do, and in fact, it’s common for increasing clock speeds to come at the expense of yield. This is compensated for to some extent by improvements in the processes and technologies. For instance, today we can achieve 0.18-micron feature sizes, while not too many years ago, 2 micron was a big deal.
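
For a back-of-the-envelope sense of what that shrink buys, here is a crude estimate that ignores yield, wiring, and everything else that makes real scaling harder:

<CODE>
# Transistor area scales roughly with the square of the feature size, so a
# smaller process packs many more transistors into the same silicon area.
# This is a rough estimate only.

OLD_FEATURE_UM = 2.0    # "a big deal" roughly ten years earlier
NEW_FEATURE_UM = 0.18   # leading-edge process circa 2000

density_gain = (OLD_FEATURE_UM / NEW_FEATURE_UM) ** 2
print(f"Roughly {density_gain:.0f}x more transistors per unit area")  # ~123x
</CODE>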

The other factor at work is that there has been a pooling of industry resources to achieve the levels of performance common to today’s PCs. There are only a few fabs in the world doing 0.18 micron today, while roughly 10 years ago there were a great many fabs doing 2 micron. A 0.18-micron fab is just much more expensive to build and run, and the computer industry wasn’t nearly as big 20 years ago, so that level of investment would have been harder to achieve.

So it’s not just marketing deciding to dole out performance a little at a time. And anyway, consumer PCs have traditionally not been anywhere near the leading edge of the computer performance curve. The trick isn’t so much in achieving ultimate performance, it’s in making that performance affordable to the general public.

Hope this addresses your question.
