What is the limit of silicon processors?

According to Moore’s Law, the speed of computer processors (as a result of the number of transistors that can be placed in a given area) should double roughly every 18 months, as each generation of processor makes it easier to design the next.
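Taken at face value, that doubling compounds very quickly. A rough back-of-the-envelope sketch (assuming a clean 18-month doubling, which real history only loosely follows, and an illustrative starting count):

```python
# Rough sketch: transistor count under an idealized 18-month doubling.
# The starting count and time span are illustrative, not historical data.
def moore_projection(initial_count, years, doubling_period_years=1.5):
    """Projected transistor count after `years` of steady doubling."""
    return initial_count * 2 ** (years / doubling_period_years)

# Ten years of 18-month doublings is 2^(10/1.5), i.e. roughly a 100x increase.
print(moore_projection(1_000_000, 10))
```

The point of the exercise is just that the growth is exponential: a decade of on-schedule doublings multiplies the count by about a hundred.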

Obviously there is a limit to how many transistors can be fitted on a chip of a given area. What is this limit, how long (assuming the above is accurate) will it take to reach it, and how powerful would a standard processor be using that level of technology?

While I’m asking, is there anything that could end up replacing silicon chips?

We’ll start hitting physical limits which will prevent us from going much smaller than about 1 micron technology and will prevent us from ever seeing clock speeds above about 40 MHz or so.

At least that’s what my college professor told me.

We went whizzing by those numbers around the days of the 486 processor. I’m sure someone can come up with some very good numbers now just as my professor did back then. Whatever numbers you come up with, I’m sure we’ll go whizzing by them in a couple of decades too.

Currently, the most advanced commercial processors use transistors with a gate width of 45nm. 32nm processors are coming soon. 22nm technology exists, and SRAM cells have been created in this technology for research purposes.

According to the International Technology Roadmap for Semiconductors, a report produced by the major players in the industry, the CMOS technology nodes will reach mainstream status at the following times:
45nm 2010
32nm 2013
22nm 2016
16nm 2019
11nm 2022
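The spacing of those nodes isn’t arbitrary: each step is roughly a 0.7x linear shrink, which halves the area per transistor. A quick check of the ratios, using the node figures listed above:

```python
# Generation-to-generation shrink of the roadmap nodes listed above.
nodes = [45, 32, 22, 16, 11]  # nm, from the ITRS-style list in the post
for prev, nxt in zip(nodes, nodes[1:]):
    ratio = nxt / prev
    # A ~0.7x linear shrink means ~0.5x area per transistor.
    print(f"{prev}nm -> {nxt}nm: linear {ratio:.2f}x, area {ratio**2:.2f}x")
```

Every step comes out close to 0.7x linear (about 0.5x area), which is why a new node arriving every three years lines up with a steady doubling of density.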

However, it is unclear whether 16nm and below will be technologically possible. For reference, a silicon atom is about 0.3nm across, so the gate would be less than 40 atoms long in an 11nm device. Clearly there is a fundamental limit that will be reached shortly.
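The atom-count arithmetic in the previous paragraph is simple division. A sketch, taking ~0.3nm as the rough size of a silicon atom as the post does:

```python
# How many silicon atoms span a gate at each node,
# taking ~0.3nm per atom as the rough figure used above.
SILICON_ATOM_NM = 0.3
for node_nm in (45, 32, 22, 16, 11):
    atoms = node_nm / SILICON_ATOM_NM
    print(f"{node_nm}nm gate ~= {atoms:.0f} atoms across")
```

At 11nm that comes out to under 40 atoms, which is where the "fundamental limit" worry in the post comes from: you can’t shrink below one atom, and device physics breaks down well before that.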

As to what might replace CMOS transistor technology, it’s hard to say at this point. Carbon nanotube technology is showing promise, but it’s still very much in the research stage. I don’t know of any other suitable candidates at this time.

Also, there is an alternative to building faster processors: running more processors in parallel. Already most commercial processors have multiple cores. This is one way to stave off a performance plateau when the limit is reached.

And in fact, if you look at current processors, they’re probably equivalent or even slower in terms of clock speed than processors from a couple of years ago. But you do get more parallel processing (an increasing number of cores and “hyperthreads” and their equivalents), which means you effectively get cores * hyperthreads processors in a single package, plus additional benefits like more on-CPU caching and other optimizations.
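That "cores * hyperthreads" count is what the operating system actually sees as logical processors. A small sketch (the figures are illustrative, not any particular chip’s spec):

```python
# Logical processors seen by the OS = physical cores x hardware threads per core.
# The numbers below are illustrative, not a specific product's spec.
def logical_processors(cores, threads_per_core):
    return cores * threads_per_core

# e.g. a hypothetical 4-core chip with 2-way hyperthreading:
print(logical_processors(4, 2))  # the OS sees 8 logical CPUs
```

Note that two hardware threads sharing one core do not deliver twice the throughput of one thread; they mostly help hide stalls, so the logical-CPU count overstates the real speedup.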

As the OP noted, Moore’s law has to do with the number of transistors you can place on a chip, which still roughly doubles every 18 months or so. But that number does not assume the size of the chip stays the same. Moore’s law is about the economically most efficient number of gates on a chip. If a chip twice the physical size turns out to be more economical, we’ll just double the size. If it’s more efficient to squeeze twice as many gates into the same area, then that’s what will happen. Smaller chips have advantages over larger ones, but there’s no straight translation from clock speed or smaller gates to chip “power”.

Moore’s Law is an observation, not an inevitable fact; it’s not even correct, except in a loose “on average” way. Like most observations of its type, it only applies for a limited domain. Since we can’t make any observations in the future, the upper limit of that domain is unknown. We assume (reasonably) that the observation will hold true until we reach some physical limit, and many assume that we’ll continue to invent technology that will make it true indefinitely, but there’s no physical reason why either assumption is valid.

Heat dissipation is the current limit on how fast processors work.

Processors have taped out at 45 nm today, with samples. I’m not sure any have been revenue-released yet. I’ve seen roadmaps showing 28 nm before 2013. At the high end, a month ago I heard someone from Intel talking fairly confidently about 8 nm processes.


Multiple cores have less to do with process sizing and more to do with power consumption at these smaller process nodes, specifically power consumption from leakage. The number of transistors on a die is going to continue to grow, but architects have pretty much run out of ways to use them besides bigger caches and processor cores, which can be easily duplicated. Another advantage is that you can sell parts with some cores not working, whereas if you have a single immensely complex core, a single defect in the logic will kill the entire chip. All large on-chip caches today have redundancy, which alleviates this problem for memories.

It is more than an assumption and observation these days. Moore’s Law drives process roadmaps, and thus becomes a self-fulfilling prophecy.

As transistors get smaller and smaller, a big problem is that the gate insulation becomes so thin that electric current can actually leak (tunnel) through it. There are some advancements in nanotechnology that make things like carbon nanotube transistors look promising, though.

Are you sure you don’t mean 32 nm? The 45nm Intel Core i7 has been available since November 2008 according to this. A new 32nm laptop CPU called Core i3 is apparently going to be released any day now.

Wikipedia actually has a pretty good series about the CMOS nodes down to 11nm. Apparently Intel and Nvidia are both claiming that they will be releasing chips using 11nm technology in 2015. They may not be silicon, however.