The new Intel Sandy Bridge chips overclock to nearly 5 GHz on air cooling.
Absolutely. The Cell processor from IBM, which is at the core of the PS3, is an 8-core processor of which only 7 cores are used by the end product. Even when all 8 are good, one is disabled. The Sun Niagara processors came in 2-core versions even though there were 4 cores on the chip.
People have already mentioned the speed of light problem. The bigger the chip, the bigger the global routing problem is. We refer to the job of making sure each signal on the chip is fast enough as closing timing - this gets really nasty for global signals. Already in many cases long signals take two clock cycles to propagate a transition from source to destination.
Long signals also need repeaters to make it to the other side. All this causes problems.
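To put rough numbers on that, here’s a back-of-the-envelope sketch (all figures below are invented for illustration, not taken from any real process):

```python
# Back-of-the-envelope check: can a signal cross the die in one clock cycle?
# Every number here is an assumption for illustration, not real process data.

die_width_mm = 20       # assumed die edge length
clock_ghz = 3.0         # assumed clock frequency
speed_fraction = 0.1    # assume RC-limited wires carry signals at ~10% of c
                        # (a generous guess; real on-chip wires can be slower)

c_mm_per_ns = 299.79             # speed of light, in mm per nanosecond
cycle_ns = 1.0 / clock_ghz       # one clock period
flight_ns = die_width_mm / (c_mm_per_ns * speed_fraction)

print(f"Clock period:        {cycle_ns:.3f} ns")
print(f"Cross-die wire time: {flight_ns:.3f} ns")
print("Needs a multi-cycle path" if flight_ns > cycle_ns else "Fits in one cycle")
```

With these made-up numbers the corner-to-corner trip takes about two clock periods, which is exactly the two-cycle situation described above.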
There is another problem, though, in that the process for etching the chip has some physical limitations. Chips are on wafers, which have gotten bigger (expensively), but the bigger the chip, the fewer you can get on a wafer, and since the cost is proportional to the number of wafers you run through, this gets expensive.
One chip I worked on got so big that it was no longer manufacturable, so we had to go through a die diet to get rid of features to make it fit again. This is not something you want to do.
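For a feel of the wafer economics, here’s a crude model (the wafer cost and die sizes are made up; the dies-per-wafer formula is the standard geometric approximation):

```python
import math

# Crude wafer economics: bigger dies -> fewer per wafer -> pricier per die.
# Wafer cost and die areas are invented numbers for illustration.

wafer_diameter_mm = 300
wafer_cost = 5000.0      # assumed cost to process one wafer

def dies_per_wafer(die_area_mm2):
    # Usable wafer area divided by die area, minus edge loss around the rim.
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for area in (100, 200, 400):
    n = dies_per_wafer(area)
    print(f"{area} mm^2 die: {n} per wafer, ${wafer_cost / n:.2f} each (before yield)")
```

Doubling the die area more than doubles the cost per die, and that’s before yield losses make the big die look even worse.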
It is true that bigger chips have lower yields. Today easily half the transistors on a processor go into memories. We don’t expect large on-chip caches to be 100% good, and there are spare rows and columns which get used instead of failing ones.
We don’t know how to do that for logic, except at the gross level of extra cpus, and we’d have to if we got too big.
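Here’s a toy yield model showing why repairable memory changes the picture (defect rate and spare count are invented, and it optimistically assumes every defect lands where a spare row can fix it):

```python
from math import exp, factorial

# Toy Poisson yield model: logic dies at the first defect, while a memory
# array with spare rows survives a few. All parameters are invented.

defect_rate = 0.2   # assumed average defects per die (Poisson mean)
spares = 4          # spare rows available for repair

def p_defects(k):
    # Probability of exactly k defects under a Poisson model.
    return exp(-defect_rate) * defect_rate**k / factorial(k)

yield_logic = p_defects(0)                                   # no repair possible
yield_memory = sum(p_defects(k) for k in range(spares + 1))  # up to 4 repairs

print(f"Logic-style yield (no repair):   {yield_logic:.1%}")
print(f"Memory yield with {spares} spare rows: {yield_memory:.1%}")
```

Even a handful of spares pushes the repairable part of the chip to essentially 100% yield, which is why nobody expects the raw cache arrays to be perfect.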
One of the big benefits of multi-core design is that it keeps most of the signals short.
Moore’s Law is an observation, not a law. You can actually trace it back to before there were ICs, and it works just as well. We are at 28 nm today, with 21 nm in the wings and 18 nm under development. Though we are running out of room for geometry shrinks (you can’t make signal lines less than one atom thick) we will no doubt switch to another technology and continue down the curve. It will be a while. The biggest problem, and why going to new technology nodes has slowed down a bit, has been economics, not technology. The number of advanced fabs is shrinking rapidly, so the competitive urge to get the next process out is not as big as it used to be.
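If you want to see what that observation amounts to, it’s just a doubling curve (the Intel 4004 starting point is real; the clean two-year doubling period is the usual idealization):

```python
# Moore's Law as a bare doubling curve: extrapolation of an observation,
# not physics. Two-year doubling is the commonly quoted idealization.

transistors_1971 = 2300    # Intel 4004 transistor count
doubling_years = 2.0

for year in (1971, 1981, 1991, 2001, 2011):
    count = transistors_1971 * 2 ** ((year - 1971) / doubling_years)
    print(f"{year}: ~{count:,.0f} transistors")
```

The striking part is that four decades of real chips track this dumb exponential to within a small factor.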
It’s ALREADY been proven wrong.
People are experimenting with electrons and photons for data storage/transmission. You won’t see it at Best Buy for a while (if ever) but it has been done.
Cites:
Photons: “Building a better qubit: Combining 6 photons together results in highly robust qubits”
Electrons: “Turning down the noise in quantum data storage”
After the 3.4 GHz P4s came out, it became more about design than raw clock speed.
EDIT: I seem to remember reading that a transistor cannot be made smaller than 17 atoms wide or something like that - anything smaller simply cannot be physically built.
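That recollection is easy to sanity-check against silicon’s lattice constant (the 17-atom figure itself is just the half-remembered number from above, not an established limit):

```python
# Rough sanity check of the half-remembered "17 atoms wide" figure, using
# silicon's lattice constant. The atom count is not an established limit.

si_lattice_nm = 0.543   # silicon lattice constant, about 0.543 nm
atoms = 17

print(f"{atoms} lattice spacings of silicon = about {atoms * si_lattice_nm:.1f} nm")
```

That works out to roughly 9 nm, i.e. only a couple of full process shrinks below the nodes mentioned earlier in the thread.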

Well, haven’t consumer processor speeds topped out around 2-4 GHz and we’re going parallel instead to get greater throughput?
“Around 2-4 GHz”??? A 100% delta is a rather wide range for “around.”
And yes, I just had a priggish moment.
More to the OP, I’d say it’s tough to say when Moore’s law will be perceptually knocked out. For example, emphasis changes make a difference (like the new push to integrate graphics processing directly onto the CPU). My first modem was 2400 baud (not that long ago, in the grand scheme of things - just shy of 20 years), but my 85-year-old grandparents (living where dialup is the only viable option - way out in East Bumblefuck) are at 56k… which seems to be adequate for them; “New baby pictures? We’ll let them download overnight.” :eek:
At some point, current materials used for building our technotoys will become deprecated and be replaced by something given to a mad scientist by aliens.

More to the OP, I’d say it’s tough to say when Moore’s law will be perceptually knocked out. For example, emphasis changes make a difference (like the new push to integrate graphics processing directly onto the CPU).
This kind of thing is made possible by the decrease in feature size described by Moore’s Law, and does not drive it. The stagnation of clock speed is indirectly a result of Moore’s Law, in two ways. First, the number of transistors we can put on a chip has outpaced our ability to design with them, and using them in bigger caches and multiple cores is much easier than coming up with new CPU pipelines. The second is heat and power. In the good old days (20 years ago) if you stopped the clock for a chip and measured how much current it was drawing, you’d get almost nothing. Now transistors, because of speed and size, leak current, which means they draw more power and are harder to cool. Big electric bills helped motivate the slowdown in speed increases. So, integrating GPUs is just another way of using more transistors.
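A toy power model makes the heat argument concrete (capacitance, activity factor, voltages, and leakage currents below are all invented for illustration):

```python
# Toy CPU power model: dynamic power scales with frequency and voltage
# squared, while leakage burns power even when nothing is switching.
# Every parameter here is an invented, illustrative number.

switched_cap_f = 50e-9   # assumed total switched capacitance, farads
activity = 0.2           # assumed fraction of gates switching per cycle

def power(freq_hz, volts, leakage_amps):
    dynamic = activity * switched_cap_f * volts**2 * freq_hz
    leakage = volts * leakage_amps
    return dynamic, leakage

for freq_ghz, volts, i_leak in ((2.0, 1.0, 1.0), (4.0, 1.2, 10.0)):
    dyn, leak = power(freq_ghz * 1e9, volts, i_leak)
    print(f"{freq_ghz} GHz @ {volts} V: {dyn + leak:.0f} W total "
          f"({dyn:.0f} W dynamic, {leak:.0f} W leakage)")
```

Doubling the clock (and nudging the voltage up to get there) more than triples the power bill in this sketch, and the leakage term is there whether or not the chip is doing useful work.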

Sometimes they will even intentionally disable a fully functional core just so that they can sell a number of cheaper CPUs to meet an order for a particular vendor. I’ve read that some hackers have figured out how to re-enable these cores.
Occasionally a specific motherboard with the latest BIOS (think Asus, although I can’t really recommend them right now) will automatically offer the option of unleashing the other cores.
Which I do, but to no point, since openSUSE and some other 'nixes sometimes have the kernel run into problems seeing all the cores. Which is probably due to disabled ACPI functions. Which in turn is due to the fact that Asus motherboards require Linux to be installed with ACPI=OFF and NOAPIC…

12 years ago, a very intelligent computery guy I knew told me that terabyte hard drives would never happen.
Back in the early '80s I worked for a small company that manufactured computer terminals and was thinking of making one with a hard disk on it that would run Unix. We were negotiating with a new company called Rodime that had just come out with 3 1/2" disk drives that held 5 MB worth of data. We asked the company rep when they were planning on coming out with a 10 MB drive and he replied sternly, “You do realize that there’s such a thing as the laws of Physics.”

Moore’s Law is an observation, not a law.
Former science teacher nitpick:
Scientific laws are observations. More precisely, they are summaries of observations.
Examples are Newton’s Laws of Motion, Kepler’s Laws of Planetary Motion, and Dalton’s Law of Multiple Proportions. All of these scientific laws summarize observations scientists have made. None of them attempts to explain the observations. That’s what hypotheses and theories are for.
For example, Dalton’s Atomic Theory explains the behavior summarized by the Law of Multiple Proportions (and many other observations and laws).

Former science teacher nitpick:
Scientific laws are observations. More precisely, they are summaries of observations.
Examples are Newton’s Laws of Motion, Kepler’s Laws of Planetary Motion, and Dalton’s Law of Multiple Proportions. All of these scientific laws summarize observations scientists have made. None of them attempts to explain the observations. That’s what hypotheses and theories are for.
For example, Dalton’s Atomic Theory explains the behavior summarized by the Law of Multiple Proportions (and many other observations and laws).
Good point, but Moore’s Law isn’t a law even in the Kepler sense. First, the doubling time has changed. Second, it is self-fulfilling in a sense, since when people sit down to do process roadmaps they assume Moore’s Law will hold, and that it is what their competitors will be going for. Planets don’t follow their orbits to keep Kepler happy!
So, scientific laws/observations reflect a fundamental truth about the universe, and Moore’s Law doesn’t.
Won’t we reach a point at which components will be so small that they’d be too sensitive to background radiation?
Another advancement I expect we’ll be hearing more about this decade and seeing not long thereafter is diamond-based processors. Better heat dissipation, higher density, etc.
The biggest obstacle to it at this point is the availability of high quality diamonds for the substrate. Recent advances in synthetic diamonds are rumored to be giving DeBeers sleepless nights and ulcers.
While large scale production is easily a decade or more in the future, Intel has shown remarkable and sudden interest in these lab-created diamonds. With Moore’s Law running into thermal problems on one hand and quantum problems on the other, a switch to the novel, sparkly material could drive computing power to unheard-of levels. Already, experimental diamond transistors have been clocked at 81 GHz. For comparison, Intel demonstrated cutting-edge silicon transistors in a lab late last year, running at 10 GHz.

Won’t we reach a point at which components will be so small that they’d be too sensitive to background radiation?
They are already. Most highly reliable systems have memory error detection and correction built in. The term is “Single Event Upset” (SEU), since the memory cell captures wrong data but if you rerun the operation everything is dandy.
Many companies rent beam time at Los Alamos and zap sample chips to see if any types of cells are weaker than others.
Now people are getting worried about this happening to logic circuits. Much of the time it won’t matter, but when a wrong value is captured in a storage element it will. There have been a bunch of papers on this, but I don’t know of too many real cases yet - but, because it is so intermittent, most of the time the user will think it is a software glitch and not worry about it.
There have been tons of papers and books on fault tolerance, and it is moving from spacecraft to normal applications.
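For the curious, the correction trick is the same idea as a Hamming code; here’s a minimal single-error-correcting sketch (real ECC DIMMs use wider SECDED codes, so treat this as a toy):

```python
# Toy Hamming(7,4) code: 4 data bits plus 3 parity bits, able to locate
# and fix any single flipped bit - the textbook version of what ECC
# memory does against single event upsets.

def encode(d):                      # d: four data bits
    p1 = d[0] ^ d[1] ^ d[3]         # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]         # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]         # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):                     # c: 7-bit codeword, maybe one bit flipped
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the upset bit back
    return c, syndrome

word = encode([1, 0, 1, 1])
word[4] ^= 1                         # simulate a single event upset
fixed, where = correct(word)
print(f"SEU detected at position {where}; corrected codeword: {fixed}")
```

Run it and the flipped bit comes back on its own, which is why an SEU in ECC-protected memory usually never reaches the user at all.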
If you try to violate Moore’s Law your mass will increase and you’ll find that you need to expend an infinite amount of energy.

If you try to violate Moore’s Law your mass will increase and you’ll find that you need to expend an infinite amount of energy.
And if you violate Dennis Moore’s Law you will find yourself up to your ass in lupins.
I was present about 35 years ago when a cabinet with 8 Megabytes of high-speed semiconductor memory was displayed on a factory floor. A photographer from the S.J. Mercury-News came by, since this was apparently an historic first. Many knew we were approaching the limit: chip features were as small as light wavelength, bit energies were comparable to those of trouble-making background gamma rays, and of course the required refrigeration could not be miniaturized.
(Not a typo: Megabytes, not Gigabytes.)

This kind of thing is made possible by the decrease in feature size described by Moore’s Law, and does not drive it. The stagnation of clock speed is indirectly a result of Moore’s Law, in two ways. First, the number of transistors we can put on a chip has outpaced our ability to design with them, and using them in bigger caches and multiple cores is much easier than coming up with new CPU pipelines. The second is heat and power. In the good old days (20 years ago) if you stopped the clock for a chip and measured how much current it was drawing, you’d get almost nothing. Now transistors, because of speed and size, leak current, which means they draw more power and are harder to cool. Big electric bills helped motivate the slowdown in speed increases. So, integrating GPUs is just another way of using more transistors.
I agree completely.
My point is just that Moore’s Law is fuzzy enough to be a moving target. Doubling capacities every two years can bring several things into play. Moore’s Law (intentionally, I suppose) doesn’t address specific components - this is good, given that we could (for example) perform all of our processing on the CPU, the GPU, or any other processor; if it can count, it can do the job (kind of).
Also, we may be able to get away from transistor-based solutions within the next few years.
According to the book summarising scientific theories I have nearby, Moore himself gave it until 2025 before the law fails to hold.