I looked on pricewatch.com and was surprised to see that you can buy a processor that clocks in at over 3 GHz. Wow. I mean, how do these speeds keep going up and up? Are there really that many technological advancements discovered every couple of weeks? I find that hard to believe. Couldn’t they make a 5 GHz chip if they wanted to? How much more expensive could it be? The only real problem would be overheating, but that isn’t much of a problem, is it?
Basically, my question is: why do they keep coming out with faster and faster CPUs so incrementally? Why don’t they just make the fastest ones they can? Do they want you to get into the mentality that there is always something newer and faster out there and that you have to go get it? Why would you need such a fast processor anyway? Unless you are going to be playing Doom III, burning a CD, and computing pi all at the same time, it seems like a bunch of overkill.
I don’t know what the hell I’m talking about, I’ll go shut up now.
Basically, that’s just the way the technology improves. The engineers keep coming up with improvements that make chips a bit faster than before, and those improvements get incorporated into the new designs.
::Cynical Snigger:: Don’t forget market skimming:
Build a chip that’s 50% faster, then handicap it in increments of 10%. Scale the price for speed accordingly. You’ve now sold much more than if you’d released just one chip.
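Something like this toy sketch in Python (every speed, price, and discount here is invented, just to show the ladder):

```python
# Toy model of price skimming: one 3.0 GHz design sold as a ladder
# of speed-locked SKUs. All speeds, prices, and discounts are made up.
def sku_ladder(top_speed_ghz, top_price, steps=4, handicap=0.10, discount=0.30):
    """Each SKU is 10% slower and ~30% cheaper than the one above it."""
    return [(round(top_speed_ghz * (1 - handicap) ** i, 2),
             round(top_price * (1 - discount) ** i))
            for i in range(steps)]

for speed, price in sku_ladder(3.0, 600):
    print(f"{speed} GHz  ->  ${price}")
```

Four price points, one design, and the folks who have to own the fastest part subsidize everyone else’s fab time.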
The Intel/AMD duopoly tends to keep things ‘honest’, but it’s a practice used in a lot of consumer electronics.
I agree with rabid, but I’m not so cynical on the issue. Chips are made in batches on big wafers of silicon, with photolithography and chemical etching used to pattern the tiny, tiny circuitry onto them. So say you design a new chip that, with everything being perfect, runs at 3 GHz. But out of each batch, only 10% of the chips are perfect. 20% of the chips would see errors or overheat if you go over 2.75 GHz. Another 25% can only get up to 2.5 GHz, etc. So you release the semi-defective chips, but lock them so they cannot go faster than you think they can handle, and you sell those chips for less than the perfect, fastest model. As time goes on, your factory gets better at manufacturing the chips, so by the end of the chip’s lifecycle you could push it up to 3.5 GHz or 4 GHz without problems. Sometimes at the end of a chip’s life it will be able to go faster than the lowest-rated chip from its successor.
This helps explain how overclocking works. What overclockers try to do is push their chip, be it their CPU or their video card, higher than the company rates it and hope it holds up, usually with lots of extra cooling added. But any major hardware review site will tell you that no two chips can be overclocked to the same level. You might get lucky and get a chip that barely missed being classified as the next speed grade up, or you could get one that just squeaked by to achieve its rated speed.
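If it helps, here’s a little toy simulation of the binning idea in Python (all the yields, speed grades, and the 2.8 GHz average are made up, just to show the mechanics): every chip comes off the line with its own maximum stable clock, and the manufacturer locks each one to the highest grade it can actually pass.

```python
import random

# Hypothetical speed grades in GHz, slowest to fastest.
GRADES = [2.25, 2.5, 2.75, 3.0]

def max_stable_clock():
    """Each chip's true limit varies chip to chip; model it as a
    bell curve around 2.8 GHz (entirely made-up numbers)."""
    return random.gauss(2.8, 0.25)

def bin_chip(limit):
    """Lock the chip at the fastest grade it can pass; anything
    below the slowest grade gets scrapped (returns None)."""
    passing = [g for g in GRADES if g <= limit]
    return max(passing) if passing else None

random.seed(42)
chips = [max_stable_clock() for _ in range(10_000)]
for grade in GRADES:
    share = sum(1 for c in chips if bin_chip(c) == grade) / len(chips)
    print(f"sold as {grade} GHz: {share:.0%}")
scrapped = sum(1 for c in chips if bin_chip(c) is None) / len(chips)
print(f"scrapped: {scrapped:.0%}")

# Overclocking headroom within one bin: same rated speed, very
# different true limits.
same_bin = [c for c in chips if bin_chip(c) == 2.5]
print(f"2.5 GHz bin headroom: {min(same_bin) - 2.5:.2f} "
      f"to {max(same_bin) - 2.5:.2f} GHz")
```

The last lines are the overclocker’s lottery: two chips sold at the same 2.5 GHz grade can have anywhere from almost zero to nearly a quarter-gigahertz of headroom left in them.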
It comes down to a matter of economics. Processor manufacturing is not just a matter of making fast CPUs; you also have to make them cheaply and efficiently enough to be able to sell them at a price point people will accept.
You can get processor speed increases either by introducing a completely new design or by tweaking an existing design slightly. It’s cheaper to tinker with an existing design and make incremental improvements, and when you’re dealing with clock cycles on the order of fractions of a nanosecond, every little incremental improvement helps tremendously.
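Just to put numbers on “fractions of a nanosecond” (a back-of-the-envelope in Python, not any specific chip):

```python
# Clock period is just the inverse of frequency: period = 1 / f.
for ghz in (1.0, 2.0, 3.0):
    print(f"{ghz} GHz -> {1 / ghz:.3f} ns per cycle")

# Shaving even 30 picoseconds off the slowest path in a 3 GHz
# design (a made-up figure) buys a real clock bump.
old_period_ns = 1 / 3.0                 # ~0.333 ns
new_period_ns = old_period_ns - 0.030   # 30 ps faster
print(f"new max clock: {1 / new_period_ns:.2f} GHz")  # ~3.30 GHz
```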
When a new chip design is introduced, it takes a lot of money and time to retool a plant to produce the new chip. Further, when the plant first comes online, yields are typically poor, as mcbiggins described. So companies try to recover these retooling costs over time by mass-producing chips, gradually improving their manufacturing processes and getting the chips produced up to their max specs.

If you were to suddenly try to push your design and manufacturing procedures to the limit and develop the fastest processor you could theoretically make, you’d be facing enormous research costs, and research costs for mere incremental developments are already pretty hefty. To add insult to injury, you’d then have to shell out large amounts of money to retool your existing plants to produce these speedy chips. After all that, you’d have to spend still more money to refine your manufacturing processes, since you’d have no prior experience manufacturing chips with such extremely strict tolerances. All these costs would be passed on to the consumer, who would wonder why he would want to buy a 10 GHz (or whatever) processor when a competitor’s 3 GHz chip is just a tiny fraction of the cost.

Far better to dust off your spec sheets, try to shrink the gate length a few nanometers, and then get some engineers on the task of boosting yields by 5%: far more manageable engineering tasks.
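A crude sketch of why that yield ramp matters so much (every figure below is invented purely for illustration):

```python
# Toy model: cost per sellable chip = wafer cost / good dies per wafer.
# Yield climbs as the fab matures, so the same wafer gets cheaper per
# chip over the product's lifetime. All numbers are made up.
WAFER_COST = 5000.0   # dollars per wafer, hypothetical
DIES_PER_WAFER = 200  # hypothetical

def cost_per_good_die(yield_fraction):
    return WAFER_COST / (DIES_PER_WAFER * yield_fraction)

for quarter, y in enumerate([0.20, 0.40, 0.60, 0.80], start=1):
    print(f"quarter {quarter}: {y:.0%} yield -> ${cost_per_good_die(y):.2f} per chip")
```

That curve is why nobody scraps a maturing line to chase the fastest design they could theoretically build: you’d be starting over at the expensive end of it.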
Are we not now to the point where motherboard speed and peripheral speeds are really more effective in enhancing a home computer’s performance? In other words, wouldn’t a 1.5 GHz processor on a 433 MHz motherboard be a better deal than a 2.0 GHz processor on a 133 MHz motherboard?
Or do I need to go back to kindergarten?
I am on a 600 MHz processor with a 100 MHz motherboard and looking to upgrade.
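For what it’s worth, here’s a very rough toy model of that tradeoff in Python (the workload sizes are pure invention, just to show why the bus can matter more than the CPU clock):

```python
# Toy bottleneck model: a task needs some CPU cycles and some bus
# transfers, and the slower of the two dominates. All numbers invented.
def task_time_ms(cpu_ghz, bus_mhz, cpu_cycles=3e9, bus_transfers=5e8):
    cpu_ms = cpu_cycles / (cpu_ghz * 1e9) * 1000
    bus_ms = bus_transfers / (bus_mhz * 1e6) * 1000
    # Crude assumption: the two can't overlap, so the max wins.
    return max(cpu_ms, bus_ms)

for cpu, bus in [(1.5, 433), (2.0, 133)]:
    print(f"{cpu} GHz CPU on {bus} MHz bus -> {task_time_ms(cpu, bus):.0f} ms")
```

On this made-up memory-heavy workload the slower CPU on the faster bus finishes almost twice as fast; for a purely CPU-bound task the 2.0 GHz chip would win instead. So it really depends on what your machine spends its time doing.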