Is it an obsolescence ploy? It seems odd that the speed trend should be so steady. Sometimes I get the feeling there are 5000MHz processors all ready to go, but they are being held back to increase turnover.
On a related note, what technology is advancing to allow faster chips? And if there is no obsolescence cabal, why is the speed trend so linear?
Doubtful regarding the “obsolescence” policy. For one thing, there are a number of small startup chipmakers that would love to take on Intel (and AMD, for that matter). If a 5000MHz processor were currently possible, they would put it out there in a second. Microprocessor technology moves so quickly because it has to; if any of the major players quit moving forward, they’ll be quickly overtaken by the competition. And even if they could get a processor running at 5GHz, the thing would melt very quickly due to the intense heat it created.
I’d guess (going off on a little hijack here) that in the near future we’ll begin using a different type of CPU, not based on binary information. It won’t be based on the transistor that current chips are built around, either, so MHz/GHz won’t matter at that point.
Just a small quibble. NPR ran a piece (on Morning Edition) yesterday (11/26) about Intel beating the overheating issue with a new TeraHertz chip that turns on and off with each cycle, thereby promoting cooling in the chip.
Fair enough about there being no conspiracy. So what technological advancement is now key in improving MHz? Miniaturization? Doubtful. Any ideas? What I find so strange, and why I asked about the possibility of companies holding back, is that the progress in speed has been so linear. Typically in tech advancement there are plateaus until some new material/method/etc. is found, but this is nearly (stress nearly) a smooth ride up.
Ask and ye shall receive.
Here’s the press release from Intel:
http://www.intel.com/pressroom/archive/releases/20011126tech.htm
It covers both the new breakthroughs making their TeraHertz chip possible and the technological limitations in existing chips.
Thanks for the link, Mr. Chance. It seems the limiting factors are overheating and power leakage, due to the size of the transistors and their proximity to the rest of the electronics in the chip. I can see needing smaller chips for some applications, but there is an awful lot of empty space in my computer tower. Why not just make them bigger (the chips, that is)?
Whoops, damn, wish you could edit. Also cited was the power consumption; can’t imagine it would be much more than leaving an iron or two on.
Bigger chips cost more to make. As it is, the Pentium-IV is quite large and I think pushes the limits of current fabrication lines.
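To put some very rough numbers on why, here’s a back-of-the-envelope sketch using the classic exponential (Poisson) yield model. The defect density and die areas are made-up illustrative figures, not anything from Intel:

```c
/* Back-of-the-envelope sketch of why a bigger die costs more, using the
   classic Poisson yield model Y = exp(-D*A).  The defect density D and
   the die areas are assumed, illustrative numbers. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double defects_per_cm2 = 0.5;           /* assumed defect density */
    double areas[] = { 1.0, 2.0, 4.0 };     /* die area in cm^2 */

    for (int i = 0; i < 3; i++) {
        double yield = exp(-defects_per_cm2 * areas[i]);
        printf("die area %.0f cm^2 -> yield ~%.0f%%\n", areas[i], yield * 100.0);
    }
    return 0;
}
```

Double the die area and the fraction of good chips drops sharply, and you also get fewer candidate dies per wafer, so the cost per working chip climbs fast.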
Also, when they say ‘smaller’ they don’t necessarily mean the overall size of the chip, but rather the traces (electronic pathways) inside it. The smaller the pathways, the less heat you produce, in addition to gaining some speed from the shorter paths the signals need to travel (yes, the signals move very fast and the distances are incredibly small, but when you add up a teeny-tiny saving across millions of operations per second, the shorter distance adds up).
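As a rough illustration of how much a shorter trace matters, assume on-chip signals move at about half the speed of light (a generous assumption; real RC-limited wires are slower still) and a 2GHz clock:

```c
/* Rough numbers for how much a shorter trace buys you.  Assumes on-chip
   signals move at about half the speed of light and ignores RC delay,
   which makes real wires slower still.  All figures are illustrative. */
#include <stdio.h>

int main(void)
{
    double c_cm_per_ns  = 30.0;            /* speed of light: ~30 cm per nanosecond */
    double signal_speed = 0.5 * c_cm_per_ns;
    double cycle_ns     = 0.5;             /* one clock cycle at 2 GHz */

    for (double len_cm = 1.0; len_cm >= 0.25; len_cm /= 2) {
        double delay_ns = len_cm / signal_speed;
        printf("%.2f cm trace: %.3f ns = ~%.0f%% of a 2 GHz cycle\n",
               len_cm, delay_ns, 100.0 * delay_ns / cycle_ns);
    }
    return 0;
}
```

Even a single 1 cm run eats a noticeable slice of a half-nanosecond cycle, which is part of why shrinking the wiring buys clock speed.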
Unfortunately you can only get the pathways so small before you start running into other problems, such as power leakage or interference from neighboring pathways mucking up the system. Also, to produce smaller traces the fabrication plants need to use shorter and shorter wavelengths of light, as well as glass (for the lenses) that is insanely pure and finely manufactured. I believe they are already into X-ray lithography, or working on it, but again you start running into limitations imposed by physics that become more and more expensive to overcome for less and less return in terms of speed.
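For the curious, the usual rule of thumb is that the smallest feature a lithography tool can print is roughly k times the wavelength divided by the lens’s numerical aperture. The k and NA values below are generic textbook numbers, not any particular fab’s:

```c
/* Rule-of-thumb lithography resolution: min feature ~= k * wavelength / NA.
   The k factor and numerical aperture here are assumed, typical values. */
#include <stdio.h>

int main(void)
{
    double k = 0.5, na = 0.7;                       /* assumed process factor and lens aperture */
    double wavelengths_nm[] = { 365, 248, 193, 157 }; /* i-line, KrF, ArF, F2 light sources */

    for (int i = 0; i < 4; i++)
        printf("lambda %.0f nm -> min feature ~%.0f nm\n",
               wavelengths_nm[i], k * wavelengths_nm[i] / na);
    return 0;
}
```

That is why each process shrink has pushed toward shorter-wavelength light sources, and why people are looking past conventional optical lithography entirely.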
So, in order to increase the clock speed of chips the designers turn to other tricks. The Pentium-IV uses a much longer pipeline (the series of steps an instruction takes through a CPU) in order to hit the 2GHz speeds now attainable. Unfortunately everything comes with a tradeoff. In order to be efficient and not sit idle more than it has to, a CPU uses a thing called branch prediction, where it takes an educated guess at which instructions need to be processed next before it knows those are the right ones. If it guesses right then great: everything runs faster. If it guesses wrong, the entire instruction pipeline has to be dumped and the work started over. The P-IV, due to its long pipeline, takes a much bigger performance hit from a bad guess than an AMD chip does (it dumps more work and has to redo it, which takes time). This mitigates the P-IV’s lead in clock speed and is one of the reasons a lower-clocked AMD chip can equal a higher-clocked P-IV. Intel is aware of this, of course, and implemented a very advanced branch prediction scheme in the P-IV, but nothing is perfect, so it still hampers the P-IV.
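If you want to see the misprediction penalty for yourself, here’s a rough, self-contained C test: it does the same work over random data (unpredictable branch) and sorted data (predictable branch). On most CPUs the sorted pass is noticeably faster; exact numbers vary by chip and compiler, and an aggressive optimizer may remove the branch entirely:

```c
/* A rough sketch of why branch mispredictions hurt.  The same work is done
   over random (unpredictable) and sorted (predictable) data; on most CPUs
   the random pass runs noticeably slower because the pipeline keeps
   getting flushed.  Compile without heavy optimization to see the effect. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000
#define REPS 100

static long long sum_big_values(const int *a, int n)
{
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] >= 128)        /* this is the branch the CPU tries to predict */
            sum += a[i];
    }
    return sum;
}

static int cmp_int(const void *p, const void *q)
{
    return *(const int *)p - *(const int *)q;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    if (!data)
        return 1;
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;            /* random values: branch taken ~half the time */

    clock_t t0 = clock();
    long long s1 = 0;
    for (int r = 0; r < REPS; r++)
        s1 += sum_big_values(data, N);
    double random_secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    qsort(data, N, sizeof *data, cmp_int); /* sorted values: branch becomes predictable */

    t0 = clock();
    long long s2 = 0;
    for (int r = 0; r < REPS; r++)
        s2 += sum_big_values(data, N);
    double sorted_secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("random: %.2fs   sorted: %.2fs   (sums %lld %lld)\n",
           random_secs, sorted_secs, s1, s2);
    free(data);
    return 0;
}
```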
There are other tricks and technologies, already out there and proposed, that will keep boosting computer performance long after current CPUs hit their physical limits (multiple processors, RISC chips, quantum computers if they ever work, etc.).
- The computer industry loves obsolescence. That 300MHz system you bought a couple of years ago is now so painfully slow, don’t you just ache for a new whiz-bang 1.5GHz system?
But… people are discovering that 500-750MHz systems are just fine for most anything short of games and video processing. Hence there is a major slowdown in PC sales this year. Our systems are “obsolete” but we aren’t buying new ones. The industry has overshot the typical user’s needs after 20 years of PCs being so slow that any speedup was welcome.
- Speeds are not increasing linearly but geometrically. We are getting a doubling of speed every 1.5-2 years. It is a variation of Moore’s Law for the number of gates on a chip (Googlize for more info). So we are currently getting a 500+MHz per year gain. Obviously, working backwards linearly from that would have implied 0MHz just a few years ago. In a couple of years we’ll see >1GHz gains per year.
People are very poor at appreciating geometric progressions. So when a new CPU comes out that is 200MHz faster, people go “wow” when it is in fact a minor gain. There is no reason to pay a premium for a 1.5GHz chip over a 1.3GHz chip when you probably won’t notice the difference, especially when you can spend the money saved on a better hard drive or video card. Upgrade this year only if the difference is at least 500MHz, next year only if it is 700MHz, then 1+GHz, and so on (rough numbers on this below).
- Due to competition from AMD, Intel is in fact shipping chips sooner after development than they used to. This has caused some embarrassments when flaws turn up in the first production batches. Yet another reason not to rush out and buy the Brand New CPU. Also, the CPU makers are labelling chips at their overclocked speeds, so you have to use major cooling fans by default.
The Big Question is: when will the geometric progression end? All geometric progressions in nature must end. The one for Internet usage ended 1.5 years ago, and boy did that change have an effect on the industry. All previous predictions about hitting a wall have been disproven by new technologies. We can’t double forever, but is the end coming sooner or later? Big $$$ ride on the answer.
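To put rough numbers on the geometric-growth point above: here’s what an 18-month doubling looks like starting from an assumed 1.5GHz, plus why a fixed 200MHz bump keeps shrinking in relative terms. The starting clock and the doubling period are illustrative assumptions:

```c
/* Sketch of geometric clock-speed growth (doubling every 18 months) versus
   how a fixed MHz bump shrinks in relative terms.  Starting point and
   doubling period are assumed, illustrative numbers. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double mhz = 1500.0;                        /* assumed starting clock, MHz */
    double yearly_factor = pow(2.0, 12.0 / 18.0); /* doubling every 18 months */

    for (int year = 0; year <= 6; year++) {
        printf("year %d: ~%.0f MHz (gain over the next year: ~%.0f MHz)\n",
               year, mhz, mhz * (yearly_factor - 1.0));
        mhz *= yearly_factor;
    }

    printf("200 MHz on top of 400 MHz  = %.0f%% faster\n", 100.0 * 200 / 400);
    printf("200 MHz on top of 1300 MHz = %.0f%% faster\n", 100.0 * 200 / 1300);
    return 0;
}
```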
Thanks, Whack-a-Mole. I always knew that AMD chips were better, but never why they were better.
ftg, that sounds like the beginning of a joke: ‘How many [bill] Gates can you fit on a chip?..’
I’m still waiting to see the wall processor speed will hit but I’m sure there will be one when further significant performance gains cost significantly more.
The other direction to go when the wall is hit is to make the same performance cheaper. This works as long as you can still sell the product at a profit. I suspect digital cameras may be hitting this wall soon. A 2-megapixel camera can replace a film camera for the average person who only wants snapshots, and good ones are now under $400. 5-megapixel cameras are commonplace in the semi-pro market, and some of the better ones I’ve seen can make an 8x10 dye-sub print that is nearly indistinguishable from film. More pixels are better up to a point, but when the image is no longer visibly better it just becomes a hassle to transfer more data from a memory card to a computer. That’s not an issue for me now with a couple of 80MB memory cards, but I can see the speed of USB being a real pain when I get a bigger camera and a gigabyte of storage for it.
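For anyone who wants the arithmetic behind that, here’s a quick sketch. The 250-300 pixels-per-inch print targets and the roughly one-megabyte-per-second real-world USB 1.1 figure are ballpark assumptions:

```c
/* Quick arithmetic: megapixels needed for an 8x10 print, and how long a
   full 1 GB card takes over USB 1.1.  The ppi targets and the real-world
   USB throughput are ballpark assumptions. */
#include <stdio.h>

int main(void)
{
    double ppi[] = { 250.0, 300.0 };
    for (int i = 0; i < 2; i++) {
        double megapixels = (8.0 * ppi[i]) * (10.0 * ppi[i]) / 1e6;
        printf("8x10 print at %.0f ppi needs ~%.1f megapixels\n", ppi[i], megapixels);
    }

    double card_mb     = 1024.0;   /* a 1 GB card */
    double usb_mb_per_s = 1.0;     /* USB 1.1 in practice, roughly */
    printf("1 GB over USB 1.1: ~%.0f minutes\n", card_mb / usb_mb_per_s / 60.0);
    return 0;
}
```

So a good 5-megapixel camera really is about the break-even point for an 8x10, and dumping a full gigabyte over old USB is a coffee-break affair.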
When doing a standard 8x10 you are right that at some point the individual pixels become too small to notice with the naked eye. However, they aren’t necessarily wasted. If you want to manipulate your picture, especially enlarge it (or portions of it), then the extra pixels come into play. As a result your enlargements may look as good as the original picture (up to a point).
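A quick, purely illustrative calculation of that cropping headroom, assuming a 5-megapixel original, a 4:5 print shape, and a 250 ppi target:

```c
/* What's left after cropping: keep only part of the frame of a 5-megapixel
   shot and see how big a 4:5 print still holds up at 250 ppi.
   All numbers are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double megapixels = 5.0;
    for (double kept = 1.0; kept >= 0.25; kept /= 2) {
        double remaining = megapixels * kept;
        /* long side (in pixels) of a 4:5 image with that many pixels */
        double pixels_long = sqrt(remaining * 1e6 * 5.0 / 4.0);
        printf("keep %.0f%% of frame: %.2f MP left, ~%.1f inch long side at 250 ppi\n",
               kept * 100.0, remaining, pixels_long / 250.0);
    }
    return 0;
}
```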
Quoth KidCharlemagne:
There’s another consideration here: Information can only move across the chip at the speed of light. At current processor speeds, that means that an impulse can’t even make it all the way across the chip in a single clock cycle. The way to get around this is to arrange the transistors on the chip in such a way that most impulses don’t need to go all the way across the chip… But that requires a good bit of both cleverness and luck. There’s only so much you can gain by making the chip bigger, before nothing has time to reach the added parts of the chip. Similarly, you can put in multiple processors in the same computer, but that has strictly limited utility: Any single program has to be specially designed to take advantage of multiple processors, and with many programs, it’s not even possible, so multiple processors are only an advantage when you’re multitasking.
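Some rough numbers on that: take the free-space speed of light, then assume (generously) that an on-chip signal manages about a third of it; real RC-limited wires and the gates a signal passes through cut the usable distance further:

```c
/* How far a signal can get in one clock tick: free-space speed of light,
   then an assumed on-chip figure of about a third of that.  Real wires
   and gate delays reduce the usable distance further. */
#include <stdio.h>

int main(void)
{
    double clocks_ghz[] = { 1.0, 2.0, 5.0 };
    for (int i = 0; i < 3; i++) {
        double cycle_ns = 1.0 / clocks_ghz[i];
        double light_cm = 30.0 * cycle_ns;    /* c is roughly 30 cm per ns */
        printf("%.0f GHz: cycle %.2f ns, light covers %.1f cm, on-chip maybe ~%.1f cm\n",
               clocks_ghz[i], cycle_ns, light_cm, light_cm / 3.0);
    }
    return 0;
}
```

At a few GHz the reachable distance is already down around the size of the die itself, so layout, not just transistor speed, becomes a limit.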
Well, not exactly. Behold the miracle of the “Paper Release.”
Whenever AMD ships a chip that can compete with Intel’s best offering, Intel announces a faster one. Then nobody actually sees it for months. Want a 2.0 GHz Pentium 4? So does Dell. This has been going on for over two years now.
The 1.13GHz PIII mentioned in that first link was the greatest fiasco of all; it “shipped” to hardware reviewers, but proved to be so unstable it could not even be reviewed in some cases, and didn’t see the light of day for almost a year.
Hmmm. That doesn’t read so clearly. I’m not contradicting ftg; I’m merely reinforcing his point.
Processor speed does not go up 100MHz every couple of months; it doubles roughly every 18 months. The graph is not linear but exponential. Why does this happen? Advances in technology. I could name a few, but new ones are discovered every day; read EETimes (http://www.eetimes.com) to see some of these advances.

When they say they shrink a die, they are shrinking the area that the traces in the silicon take up, which shrinks the amount of space the chip occupies. This also lowers the voltage required to make a transistor change states, lowers the power consumption and heat dissipation, and increases the yield in manufacturing.

When they make a chip they do not say, let’s make X 100MHz chips and Y 200MHz chips. They just make a bunch of chips on a silicon wafer, and due to manufacturing flaws some run slower than others. Sometimes the yield is too good and they sell a faster chip at a lower clock rating; this is why some people can “overclock” a chip. They do not make bigger chips because bigger chips are more prone to fail in manufacturing.

There is no foreseeable “wall” to processor speeds. People have been predicting a wall for the last 30 years, but the industry always comes up with a way to make chips faster. They have technology planned out to at least 2010.