Doesn't Moore's Law have a saturation point?

I mean, eventually, the geometry of the IC can only get so small, right?


Yes, and it reached that point ten iterations ago. Read any computer magazine of the time and they’ll tell you that.

Engineers keep being clever, though. Eventually they’ll either stop being clever or they’ll shift the problem over to some new format so that it will live on.

If you’re asking when that’s going to happen, all I can say is “see above”.

There is an absolute limit to the amount of information (or in this case, computational elements, since the elements could be thought of as bits of a Turing machine) that can be contained within a finite region of space. So even if we push to the sub-sub-sub-sub-quantum level, we are still limited by that.
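For a sense of scale, here's a rough back-of-the-envelope sketch using the Bekenstein bound, the textbook upper limit on how many bits fit in a bounded region of a given size and energy. The 1 kg / 10 cm sphere is a purely illustrative assumption, not a claim about any real device:

```python
# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2), with E = m*c^2.
# The mass and radius below are arbitrary, illustrative choices.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    """Loose upper limit on information content, in bits."""
    energy = mass_kg * c**2
    return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

print(f"{bekenstein_bits(0.05, 1.0):.2e} bits")  # ~1.3e42 bits for 1 kg in a 10 cm sphere
```

Even that absurdly generous ceiling is tens of orders of magnitude beyond any storage we can actually build, so in practice the limits are engineering ones long before they're physics ones.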

Yep. We’ll never get anything faster than about 40 MHz. At least that’s what my college professors told me. (This was back in the days when a 1 MHz 6502 was a fast CPU.)

Well, haven’t consumer processor speeds topped out around 2-4 GHz and we’re going parallel instead to get greater throughput?

Anecdote: I saw an article online about a CPU that was overclocked to 5 GHz… with liquid nitrogen cooling. So I’d guess it’s possible to push the GHz up more, just not especially practical.

Aside: any reason why CPU makers don’t just make a bigger chip? Say the size of the Pentium II, but packed with all that nano-scale goodness. Not practical for notebooks and iPads and the like, but screw it. If it’s not going to be mobile, why not make it big?

I’m not a computer engineer, but I would think a chip that size would not have the surface area capable of dissipating the heat generated by that many transistors.

Intel at the moment has an experimental processor with 80 cores and claims it could scale up to 1,000 cores, so processing power has a long way to go yet. A more immediate problem is getting software that can be split to run across that many cores (there’s a toy sketch of that kind of splitting below).

EDIT: the 80-core processor is about 2.2 cm × 1.3 cm
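For what it’s worth, here’s a toy sketch of the “split it across cores” problem: an embarrassingly parallel job (summing chunks of a range) farms out trivially, but most real software has dependencies between steps that don’t divide up this cleanly. The worker count and problem size are arbitrary assumptions:

```python
# Minimal example of spreading independent work across cores.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == n * (n - 1) // 2)  # True: matches the serial answer
```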

Die size and transistor count are increasing all the time, though not proportionately: as we become better at making things smaller, we can cram more transistors into the same area.

We’re talking about billions of operations per second, and at that scale you start running into the problem that you physically can’t transmit an electrical signal the distance you need across the chip in time. A giant chip would have to wait for data to travel all the way across it, which would introduce new problems. I doubt that’s the definitive answer, just one issue we’d run into. Physical distance between the various transistors on the chip is actually a pretty significant factor in CPU design.
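To put rough numbers on that: even at the vacuum speed of light (an optimistic upper bound, since real on-chip signals are a good deal slower), one clock cycle at today’s frequencies only covers a handful of centimetres, which is the same order of size a “giant” die would be:

```python
# How far light travels in one clock cycle at a few illustrative frequencies.
c = 2.99792458e8  # speed of light in vacuum, m/s

for ghz in (1, 3, 5):
    cycle_time = 1 / (ghz * 1e9)  # seconds per clock cycle
    print(f"{ghz} GHz: {c * cycle_time * 100:.0f} cm per cycle")  # 30, 10, 6 cm
```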

CPUs are made out of wafers of silicon crystals. As these crystals grow to form the material that will be cut into wafers, small imperfections often make their way into the crystal structure. The wafer then gets doped with different contaminants to produce the various transistors and such on it (I’m majorly glossing over details here, but I hope you get the basic idea). Once this is done, the wafer gets cut up into the individual CPU chips. Every place that you had an imperfection results in a bad chip, and there are a LOT of bad chips per wafer.

If you make each individual chip larger, you increase the probability that it will contain a flaw, which makes the yield of usable chips go down as the chip size goes up. If you double the length and width of the chip, you quadruple the area, so you’re roughly four times as likely to have a flaw in it.
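To put toy numbers on that, here’s the classic Poisson yield model, Y = exp(-D * A), which gives the fraction of dice that come out with zero defects. The defect density is an assumed, purely illustrative figure, not a real fab number:

```python
# Poisson yield model: probability a die of area A has zero defects
# when defects land randomly at density D per unit area.
import math

def poisson_yield(defects_per_cm2, die_area_cm2):
    return math.exp(-defects_per_cm2 * die_area_cm2)

D = 0.1  # defects per cm^2 (made up for illustration)
for area in (1.0, 4.0, 16.0):  # doubling length and width quadruples the area
    print(f"{area:4.0f} cm^2 die -> yield {poisson_yield(D, area):.0%}")
```

In this made-up example, quadrupling the die area takes the yield from about 90% to about 67%, and quadrupling it again drops it to about 20%.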

You want to keep the number of bad chips down to a minimum, because if you spent all that money manufacturing the little buggers, it’s a shame to throw a bunch of them into the trash when you’re done. The fewer you throw into the trash, the more profit you make. The CPU business is a very tough and competitive one. Companies can’t afford to produce chips with a high failure rate. They need good yields to stay profitable.

Is there any nuking in the CPU business? By which I mean: you try to make a 4-core processor, one of the cores is faulty, so instead of binning the resulting 3-core processor you ‘nuke’ one of them (and cache, as appropriate) and sell it as a dual-core processor. I remember hearing on a podcast (one of Leo Laporte’s) that this was a common practice amongst graphics card manufacturers when their products fell short. CPUs are certainly sold with cores disabled to reduce the number of manufacturing runs, but are they able to do it for faulty cores?

12 years ago, a very intelligent computery guy I knew told me that terabyte hard drives would never happen, because they would need to be 1 metre in diameter. Even at the time I thought that was nonsense, and it would be solved and overcome, but he was so much more experienced than I in techy stuff like that, I figured he had some insight I lacked.

In my mind, he was proven wrong in less than five years. Terabyte drives were rapidly approaching by that time, and now we’re well past that point, with petabyte drives looking inevitable.

It’s kind of funny how confidently people will predict “X will never happen” in a rapidly advancing field.

I guess most of the famous quotes along those lines are from some time ago, though, so maybe people are learning.

There are some physical limits: one is the speed of light, which already affects computer design by limiting how far apart components can be. Another is the size of atoms, since you couldn’t have a memory unit smaller than an atom; we haven’t reached that limit yet.
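For a rough sense of how far off that atomic limit still is: the silicon bond length below is a real figure, but the gate pitch is an assumed ballpark for a leading-edge process, since modern “node” names no longer correspond to any physical dimension:

```python
# Compare the atomic spacing in silicon to a rough modern transistor pitch.
si_bond_nm = 0.235     # approximate Si-Si bond length in the crystal
gate_pitch_nm = 50.0   # rough contacted gate pitch for a "5 nm class" node (assumed)

print(f"~{gate_pitch_nm / si_bond_nm:.0f} atom spacings across one gate pitch")
```

So there are still a couple of hundred atoms across even the smallest features, though that’s not very many doublings away.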

I know that you’re probably right about the availability of petabyte drives. (I remember being shown an EMC product consisting of a rack of desktop drives that were slaved together to deliver a whole terabyte, which was an amazing amount back then, but now I have a terabyte drive that I can fit in my shirt pocket.) But still the idea of being able to buy a petabyte of storage in a desktop drive amazes me, and at this point, I can’t imagine needing that much space.

The speed of light is not the limiting factor in computer chips right now. A node transitions from one to zero much more slowly than the time it takes light to travel from the drive end of the wire to the receive end. The speed limitations are determined mainly by the on-resistance of the transistors and the capacitance of the signal nodes.
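Here are some illustrative numbers for that comparison, using made-up but plausible order-of-magnitude values for a small on-chip driver and a short local wire, not real process data:

```python
# RC time constant of a switching node vs. light-travel time down the wire.
R_on = 10e3       # driver on-resistance, ohms (assumed)
C_node = 2e-15    # capacitance of the driven node, ~2 fF (assumed)
wire = 100e-6     # wire length, 0.1 mm (assumed)
c = 2.99792458e8  # speed of light, m/s

rc = R_on * C_node  # time constant governing the one-to-zero transition
tof = wire / c      # time for light to traverse the wire

print(f"RC time constant:    {rc * 1e12:.1f} ps")   # ~20 ps
print(f"Light down the wire: {tof * 1e12:.2f} ps")  # ~0.33 ps
```

With numbers in that ballpark, the transition time dominates the light-travel time by a couple of orders of magnitude, which is the point being made above.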

High res holographic porn is going to need some space.

Oh yeah. Happens all the time. It’s nothing new either. Way back when, the difference between a 486 SX and a 486 DX was whether or not the co-processor worked. If it didn’t, they nuked what was left of the co-processor and sold it as an SX. These days, similar CPUs with different features like the amount of cache and such often come off of the same assembly line. When part of the cache fails, they nuke the malfunctioning part and sell it as a reduced feature version. Some chips are even manufactured with spare parts. If part of the chip doesn’t work, they nuke it and enable the second version of that section of the chip.

Sometimes they will even intentionally disable a fully functional core just so that they can sell a number of cheaper CPUs to meet an order for a particular vendor. I’ve read that some hackers have figured out how to re-enable these cores. Sometimes they end up with effectively a better CPU chip (quad core when they paid for a dual core), and sometimes the cores they re-enable were faulty and they end up blue-screening their computer when they try to run it.

I’m no expert, but this sounds like one of those things that could be proven wrong some day.