Layman question about semiconductor manufacturers - it seems that they’ve taken years, maybe even decades, to slowly go from 200mm to 300mm fabs/foundries, with 450mm wafers still on the way.
What prevents them from just manufacturing much bigger wafers like 5000mm or 9000mm and making much bigger leaps in size/width in a short time?
Well, I figured you could get more chips out of a larger wafer and make bigger batches at once, like a logger getting more wood out of a big log than a small one. But I see now that bigger ingots require more cooling.
Well, the bigger the wafer, the more likely that a small imperfection in it renders the whole chip useless. For a small wafer, you might fit dozens of them onto a single silicon slice. If an imperfect spot makes one of them non-functional, you still have all the rest to use. And the bad spot might even be in the ‘empty’ space between chips, so none of them are spoiled.
But as they get bigger, there are fewer chips on the silicon slice, and the chance that an imperfection is going to damage one of them increases. Cost-effectiveness of the whole process goes down.
because all of the lithography and automated handling equipment is designed around a specific size wafer.
it’s the same as why a plant building Fiat 500s can’t just turn around and start building Ram pickups overnight. practically all of the tooling and equipment needs to be tossed out and changed.
Larger diameter wafers reduce time on operations that can be done on the whole wafer (or better still, a whole wafer lot) at a time. However, some operations (particularly photolithography) are done on smaller sections of the wafer (reticle fields, which may contain one to several hundred chips depending on the die size) in a step-and-repeat manner. Once the wafer is large enough that the stepping time is >> the loading time, there is no benefit to increasing wafer size.
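To put some (entirely made-up) numbers on that, here’s a toy Python model: a fixed per-wafer load time plus a per-reticle-field exposure time. The 30 s load, 0.5 s step, 90% usable area, and dies-per-field figures are all assumptions for illustration, not real tool specs - but the shape of the result holds: once stepping time dominates, the throughput gain from a bigger wafer shrinks.

```python
import math

def wafer_throughput(diameter_mm, die_area_mm2, dies_per_field,
                     load_s=30.0, step_s=0.5):
    """Toy throughput model: fixed per-wafer load/unload time plus a
    per-reticle-field exposure time. All numbers are illustrative."""
    wafer_area = math.pi * (diameter_mm / 2) ** 2
    gross_die = int(wafer_area * 0.9 / die_area_mm2)   # assume ~90% usable area
    fields = math.ceil(gross_die / dies_per_field)
    total_s = load_s + fields * step_s
    return gross_die / total_s                          # dies per second on this tool

for d in (200, 300, 450):
    print(d, round(wafer_throughput(d, 100, 4), 2))
```

With these assumed numbers, going from 200 mm to 300 mm buys a noticeably bigger relative throughput gain than going from 300 mm to 450 mm, because the fixed load time is an ever-smaller share of the total.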
Other processes (ion implantation, plasma etching) require uniformity of processing across the entire wafer, made much more difficult as the wafer size increases.
Yep. And for those who don’t know what a “die” is, that’s what people usually call the “chip”. A memory chip or a logic chip can be quite small (relatively speaking) and a microprocessor can be quite large. There might be 15 or more lithography steps that occur on a given wafer during the fabrication process. Those steps are used to mask off certain areas in order, for instance, to create transistors or to etch in the metal interconnect lines, etc.
Wondering what the largest area CPUs are at the moment, I see that there is an AMD chip that is 756 mm2. But that is for (gulp) 32 x86 cores. (The 64 core Intel Xeon Phi is slightly smaller, but with simpler, tweaked-Pentium cores.) A core plus 2.5 MB of memory is 11 mm2. CPUs made for mere mortals are weighing in at 45-50 mm2. (I’m not coming up with die sizes for the new Ryzens that are all over the news today.)
(Now excuse me while I dream of getting one of those 32 core chips, but running at 5 GHz and 65w.)
Silicon slice?
What you are talking about is yield. It is possible that larger wafers will have reduced average yield, but yield is a function of the area of each die more than the size of the wafer - that is after the kinks in a new process and new tooling are ironed out.
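A quick sketch of why yield tracks die area rather than wafer size, using the classic Poisson yield model Y = exp(-D0·A). The defect density D0 = 0.2 defects/cm² below is a made-up illustrative value, not real fab data:

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_mm2):
    """Classic Poisson yield model: Y = exp(-D0 * A).
    D0 below is an assumed, illustrative defect density."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Same process (same D0), different die sizes (mm^2);
# 756 is the big server die mentioned upthread:
for area in (50, 200, 756):
    print(area, round(poisson_yield(0.2, area), 3))
```

Note the wafer diameter never appears: for a given mature process, a small die yields well and a huge die yields poorly, regardless of how big the wafer is.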
I have lots of real data on yields, none of which I’m at liberty to disclose.
But I agree with the others that the real problem is cost of retooling versus relatively small increases in throughput. The reason that the adoption of new and smaller feature sizes is slowing is based on economics, not technology. New fabs for new processes are very expensive, and there is a desire to stretch out the payback time. A new fab for a new wafer size is also very expensive, but I doubt the payback would be all that great.
And wafer probe times are ridiculous already. They’d get worse. And I worked on very large chips. One had a die diet because it grew bigger than the process could support.
It takes 14 to 30 days to grow the ingot, and they are working on moving to 450 mm soon, but the complexity of growing larger ingots is not linear in effort or technology.
Remember that these ingots are 100s of pounds, have to be perfect and are single crystal.
Yeah, it’s all about retooling. There is NOTHING cheap when it comes to retooling. Just the other day one of my techs had to order a (threaded) rod that was all of 12" long. It cost $600 to replace.
Also, converting already proven devices from 200mm to 300mm isn’t as easy as one would think. It’s not just a matter of re-adjusting “recipe” times. Sometimes the engineers have to start from square one again. So, that in itself can be costly.
You’re mixing up the idea of a wafer with a die. The wafer (which was the question in the OP) is the round slice of semiconductor material. The die is the small rectangular chunk of a wafer, or what’s commonly called a “chip.” As the die get larger, the chance of an imperfection ruining it goes up (you had said “wafer” here).
Some die are relatively large, such as for microprocessors. Some die are small - I have heard of wafers with tens of thousands of die on a single wafer.
There are some processing steps that take basically the same amount of time no matter the wafer size, and some steps whose time depends on the number of die. Increasing the wafer size helps you produce more chips because the former will take the same time for a larger number of die. However, increasing wafer size has very significant technical challenges - having a consistent process across the entire surface, heating considerations, etc. And the equipment you have to buy to produce them is outrageously expensive, so the benefit is fairly limited but the expense is very high.
Hopefully this is obvious: the physical size of the wafer does not limit the size of the chip. Even for large chips like CPUs, you can get a couple hundred dies from each wafer. All the improvements come from miniaturizing the features, so you can fit more transistors into the same size chip.
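For a rough sense of that “couple hundred dies” figure, here’s a common back-of-envelope estimate that subtracts the partial die lost around the wafer edge. The formula and the 600 mm² example die are just illustrative; real layouts depend on die aspect ratio and edge exclusion:

```python
import math

def gross_die_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common back-of-envelope estimate with an edge-loss correction:
    DPW ~ pi*d^2/(4*A) - pi*d/sqrt(2*A). Assumes roughly square die."""
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * d**2 / (4 * a) - math.pi * d / math.sqrt(2 * a))

# A large ~600 mm^2 server-class die on 200 mm vs 300 mm wafers:
for dia in (200, 300):
    print(dia, gross_die_per_wafer(dia, 600))
```

Even a very large die fits dozens of times on a 300 mm wafer, so the wafer diameter is nowhere near the binding constraint on chip size.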
The rare exception is imaging sensors - a large sensor physically collects more light, so it is desirable in some situations. But even the largest commercial sensors made are much smaller than 300mm. Here you can see a wafer containing 42 Sony full-frame sensors. The largest single-die sensor I could find a reference for was this experimental(?) sensor, which fills up a full 200mm wafer. Generally, at these sizes, it’s easier and much cheaper to use an array of sensors rather than one big sensor.
That article describes a chip that’s 1 square cm, not mm. But yes, I’ve seen die that are 1 sq mm, and you can put 30,000 of them on a 150 mm (six-inch) wafer. Yes, companies still use 150 mm for parts like this.
ETA: Sorry, that article is about smaller chips and just mentions 1 sq cm.
Die size is generally limited by the area which can be exposed by the stepper (the reticle field). Very large image sensors can be made by stitching different reticle fields together (basically using multiple reticles per mask layer and adjusting the step and repeat so that their edges overlap slightly). This technique is quite expensive, so it is limited to expensive chips (large image sensors can cost thousands/tens of thousands per chip) that cannot be broken into a chipset. Not sure what the largest is, but my company makes sensors close to full-frame 645 as a catalog item.
ETA: We also make tiny chips, like an ambient light sensor that measures 1.08mm square when packaged…
Minor nitpick: The plural of “die” is “dice”, although it is fairly common to use “die” for both singular and plural (as in “3126 die per wafer”).
How finely can they cut a wafer? I would think that with a 1mm die size, a majority of the area of the wafer would have to be empty space between the dies that would be lost in the process of cutting them apart.
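For what it’s worth, a quick sketch says the street loss is real but not a majority. A typical dicing-saw kerf plus clearance is on the order of 0.05-0.1 mm (an assumed figure - it varies by saw and process); treating the wafer as a grid of (die + street)-sized cells:

```python
# Fraction of wafer area lost to scribe streets, treating the wafer
# as a grid of (die + street)-sized cells. Street widths are assumed
# illustrative values, not a specific saw's spec.
def kerf_loss_fraction(die_mm, street_mm):
    pitch = die_mm + street_mm
    return 1 - (die_mm / pitch) ** 2

for street in (0.05, 0.1):
    print(street, round(kerf_loss_fraction(1.0, street), 3))
```

With those assumed street widths, a 1 mm die loses roughly 9-17% of the area to the streets - significant, but nowhere near most of the wafer.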