I just read that Moore’s Law is expected to flat-out end at 15 nm. We’re at 45 nm now; 15 nm is six years away.
Ignoring 3D chips and all that, will silicon progress really freeze in six years?
There are other ways to increase processor capacity even if this claim proves true. The biggest one has already become standard: the move toward parallel computation with multiple processor cores expands capability greatly without any further shrinkage of transistors. That’s the key point: there is no reason to think overall processing speed will halt at some fixed point. Supercomputers are already built from off-the-shelf AMD and Intel processors running in parallel, proving it can be done.
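To make that concrete, here’s a toy Python sketch (my own illustration, nothing to do with Intel’s or AMD’s actual designs): split an embarrassingly parallel job across four workers and it finishes roughly four times faster, with no transistor shrink involved.

    # Toy demo: parallelism raises throughput without smaller transistors.
    # The worker count and chunking scheme are arbitrary illustrative choices.
    from multiprocessing import Pool

    def crunch(chunk):
        # stand-in for real work: sum of squares over a slice of the input
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        n, cores = 10_000_000, 4
        chunks = [range(i, n, cores) for i in range(cores)]  # interleaved slices
        with Pool(cores) as pool:
            total = sum(pool.map(crunch, chunks))            # one chunk per core
        print(total)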
Moore’s Law does not directly address processor capacity or computational power, although it is often mistakenly read that way; it’s an observation about the number of transistors on a chip doubling on a roughly fixed schedule.
The misinterpretation is the part that matters. Nobody gives a rat’s ass about how the improvements happen; we just expect faster computers every year. There are many ways for that to keep happening, and no reason to think improvement won’t continue at the current rate. Intel and AMD already have new chips in the pipeline slated for release several years from now.
Nobody except the OP, you mean. He didn’t ask about processing capacity; he specifically asked about the limits of silicon fabrication processes. You can argue all you want that computing power will continue to expand, and I won’t disagree, but it doesn’t address the OP’s question.
I recall that 100 nm or so was the physical limit for chips not so long ago (perhaps a decade back). Now it’s 15 nm. Draw your own conclusion.
I used to make IC masks with e-beam lithography. We were at 1 µm then, and everyone was worried about how to get past the limits of deep UV, and about when it would become necessary to switch to X-ray or e-beam lithography. Well, that was 25 years ago, and the IC industry is still using UV…
Btw, I got the 15 nm figure from IBM’s ‘chief technologist,’ interviewed in Forbes. IBM manufactures a good fraction of the world’s chips.
Ultimately the fact that matter is atomic sets some absolute limit; for scale, silicon’s lattice constant is about 0.54 nm, so a 15 nm feature is only around 28 unit cells wide. We are probably nearing the end of silicon photolithography, at which point it becomes a question of what sort of molecular circuit fabrication (nanowires, etc.) might become commercially feasible. There’s still room for several generations of improvement.
I think the key is precisely not to ignore 3D fabrication and such. Moore’s Law (I’ve met Gordon Moore, BTW; nice guy) is to a large degree a self-fulfilling prophecy: it gives management a goal when they draw up a product road map.
IBM’s share of world-wide chip manufacturing is tiny, although that does not negate the comments made by their technologists.
Being in the microprocessor design business, I do.
In the short term, the biggest reason it is taking longer to get to new nodes is economics. It costs more and more to build fabs, and more and more companies are dropping out of the fab business (LSI Logic, for example). Soon it will be Intel and the Taiwan fabs, and that’s about it.
Second, no one but microprocessor, DSP, and graphics chip makers cares about being at the newest node, since no one else has the volumes to justify the mask costs.
Third, raw clock speed isn’t as much of a selling point any more, since it comes at the expense of power. The reason for multicore designs is to cut power and design time, since much of the chip consists of cores that get designed once. And since everyone faces the same problem, companies aren’t competing on clock speed any more.
We will hit the wall eventually, since we’re getting down to features a couple of atoms in height. Dan is right that the law is an observation, not a law; it has held mostly because people assumed it when defining their roadmaps.
I’ve seen a chart somewhere showing that the law actually held long before ICs even existed, so it might keep holding after ICs become nothing but cheap commodity parts.
Of course I realize that 15 nm is not the end. What I wanted to know is whether the IBM guy’s assertion is correct: that 15 nm will be the last node. Six years from now, will we be producing the last generation of silicon?
And btw, stuff like 3D chips and parallelism won’t be the same kind of free lunch that Moore’s Law was. If you want to gang up 100 dies to work together, chances are you’ll be able to, at least from a purely technical standpoint (a big challenge is building networks fast enough for the chips to communicate; 3D stacking and optics are the proposed solutions there).
But the real problem is that you still need to manufacture 100 dies. How will we cut per-wafer costs 100-fold? Over the years, wafer costs have held rock-steady.
Take a look at this comparison of wafer costs from 2003: a table listing that year’s costs by process node, wafer size, and number of layers (so chips are already sort of 3D, it seems). Divide the cost of a wafer by its area and layer count and you get the same 0.1–0.2 cents/mm^2/layer, whether you take the then-cutting-edge 90 nm or the 20-plus-year-old 2 µm!
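To show where that metric comes from, here’s the arithmetic in Python; the $3000 price and 30 mask layers for a 300 mm wafer are my illustrative guesses, not figures from the table:

    import math

    # Hypothetical wafer: $3000, 300 mm diameter, 30 mask layers (all assumed).
    cost_cents = 3000 * 100
    area_mm2 = math.pi * (300 / 2) ** 2       # ~70,700 mm^2
    layers = 30
    print(cost_cents / area_mm2 / layers)     # ~0.14 cents/mm^2/layer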
The flip side of the same argument is that you’ll need 100 times as much fab capacity. The cost of fabs, of course, hasn’t even held steady. It’s grown exponentially.
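For a feel of what “grown exponentially” implies, a toy compound-growth calc; the $1B baseline and the four-year doubling period (in the spirit of Rock’s law) are illustrative assumptions, not real data:

    # Toy: projected fab cost if it doubles every `period` years (both assumed).
    base_cost, period = 1e9, 4.0
    for t in (4, 8, 12):
        print(t, base_cost * 2 ** (t / period))  # $2B, $4B, $8B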
How will we manufacture all those dies?
It’s not dies; it’s dice.
And current photolithography has its limitations, but who’s to say that photolithography will be the status quo moving forward?
Sorry, but no. Dies is the preferred plural form based on industry usage.
Dice are for Las Vegas.
Not here.
What are you talking aboot?
I found some more. Here’s an article about Intel saying essentially the same thing five years ago:
Shit.
There are ways to stack transistors on the same wafer. Sorry, too lazy to cite.