This is an idea I just had and I want to know if it’s even theoretically possible.
Imagine a piece of silicon a few inches on a side, etched with a couple of transistors in a simple circuit of some kind. Think a single register, or maybe just a single NAND gate. The point I’m making is that the individual components are big enough to be seen with the naked eye, big enough that you could put it under glass or in resin with labels showing the individual pieces of the circuit. It’s huge compared to even 1970s-era fabrication processes.
Call it HSI, for Human-Scale Integration.
Could you make something like that out of the same chemicals that go into real CPUs?
Could something like that work electrically if we don’t put any limits on timeframe or amount of electricity consumed?
How resistant to small defects in the silicon would that be? Think a defect that’s a barely-visible hairline or something, tiny compared to the whole element.
Just because it’s big enough to see, doesn’t mean you can see it. One piece of nearly-pure silicon looks just like another piece of nearly-pure silicon.
I don’t know if it answers your question, but you might be interested in something like the Monster 6502, a large-scale replica of the famous 8-bit MOS 6502.
Solar cells are like what you describe. There’s one component, basically a diode that is also especially sensitive to light (though I think they all are to some degree). Some of those are big enough that they’re hard to hold by the edges with one hand.
Light emitting diodes are usually a single component big enough to see, and usually mounted so that you actually can. They, like solar cells, also generate power when exposed to light. Though they’re not as efficient at it as purpose-built solar cells.
Photocells are typically millimeters in size. I have some that are four quadrants, some that are concentric rings, and so forth. The dimensions are all easily visible.
There are single transistors that are 1/4” on a side or even bigger. A large SCR (silicon-controlled rectifier) might be almost an inch in diameter. So, yes, there is no reason why it wouldn’t work. You could easily see the metal layer.
When I was an undergrad, the Dual Inline Packages we used for logic lab had one flop or maybe three gates. They would be plenty big enough to see. But even in TTL circuits like those, the transistors don’t look like transistors. It’s not like anything exciting is going on in there.
My first thought was exactly along the lines of the 6502. But even that doesn’t look very exciting.
As for reliability, it would be very resistant to small defects. You’d have to fab it in a place with smoking permitted to get big enough particles to cause opens in the signal lines. I suspect you’d run it at five volts (unlike the one volt modern CPUs run at) so you’d have plenty of leeway. It would be very slow, of course.
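To put some rough numbers on the five-volt leeway point: static noise margins scale with the supply voltage. Here is a quick Python sketch using generic textbook-style threshold values (assumed for illustration, not measurements of any particular process):

```python
# Illustrative comparison of static noise margins at 5 V vs. ~1 V logic.
# Threshold values below are generic rule-of-thumb numbers (assumed),
# roughly 30%/70% of the supply, with near rail-to-rail outputs.

def noise_margins(voh_min, vol_max, vih_min, vil_max):
    """NM_high = VOH(min) - VIH(min); NM_low = VIL(max) - VOL(max)."""
    return voh_min - vih_min, vil_max - vol_max

# 5 V CMOS-style levels
print(noise_margins(voh_min=4.9, vol_max=0.1, vih_min=3.5, vil_max=1.5))
# -> (1.4, 1.4): well over a volt of margin to absorb noise

# ~1 V modern core logic, same rule of thumb
print(noise_margins(voh_min=0.95, vol_max=0.05, vih_min=0.7, vil_max=0.3))
# -> (0.25, 0.25): only a couple hundred millivolts
```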
When I read “very large microchips” I was thinking of ones that are right up to the maximum size that is manufacturable. Those I know a lot about.
However, Kilby’s 1958 IC prototype was not built using the “planar process” that is the foundation of IC manufacturing today. The planar process was developed by Fairchild Semiconductor around 1959-1960: Planar process - Wikipedia
That or something similar could be recreated today, and you could make the components really big, but there’s no benefit. If components are large, the propagation delay between them limits useful work.
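A rough way to see the propagation-delay point, assuming signals travel at about half the speed of light on the interconnect (a common rule of thumb; real RC-loaded wires are slower still):

```python
# Back-of-the-envelope flight time across human-scale vs. modern-scale circuits.
C = 3.0e8          # speed of light in vacuum, m/s
V = 0.5 * C        # assumed signal velocity on the wiring

for distance_m, label in [(0.075, "~3 inches across an 'HSI' circuit"),
                          (0.001, "1 mm across a modern die")]:
    t_ns = distance_m / V * 1e9
    print(f"{label}: ~{t_ns:.2f} ns one way "
          f"(vs. a 0.25 ns clock period at 4 GHz)")
```

Even with nothing but distance counted, the human-scale circuit eats a couple of modern clock periods per wire crossing.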
You can use vastly larger components than current fabrication and switch that relatively fast using emitter-coupled logic but it burns tremendous amounts of power and is still slower than modern fabrication allows.
The performance was about 80 megaflops (80 million floating point operations per second). Today an iPhone XS can do 20,000 megaflops, or 250x faster, and it consumes about 1 watt. That comes from using 7 nanometer fabrication.
It was normal back in the day to build up devices such as processors from basic gates on wire-wrap boards prior to committing to silicon, since simulators had limited capabilities. I saw the 68000 proto created by MOSTEK (they had agreements to second-source both that and the Z-80), which ran, if I remember correctly, at 800 kHz due to the limitations of multiple Augat boards and the ability of wire-wrap systems to control signal integrity.
How big does that get? What kind of performance improvement could you get if you were able to make a chip the size of a wafer without too many defects?
You’d have to very carefully design a chip the size of a whole wafer to get good performance out of it, even if there were no defects at all. The speed of light is an issue: Signals can’t get from one edge of the chip to the other in a single clock cycle.
This is called wafer-scale integration (WSI). At a given fabrication node, it could theoretically produce higher-throughput processors that dissipate much less power and fit in 1/100th the volume.
However that is from the standpoint of 1970s technology when traditional high-end CPUs required multiple circuit boards. WSI was studied and attempted several times in that era by very smart people including Gene Amdahl, chief architect of the IBM System/360. It didn’t work well in actual practice.
Since the wafer would unavoidably have many defects, the manufacturing process required somehow disabling those areas and wiring around them. This in turn increased fabrication cost and complexity, and each final product would be somewhat different.
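For a sense of scale, here is the standard Poisson yield approximation (a textbook model with an assumed defect density, not numbers from any real fab):

```python
# Poisson yield model: probability a die of area A has zero killer defects
# at defect density D is exp(-A * D). The defect density here is assumed.
import math

D = 0.1  # assumed killer defects per cm^2

for area_cm2, label in [(1.0, "1 cm^2 die"),
                        (6.0, "~6 cm^2 big processor die"),
                        (700.0, "whole 300 mm wafer (~700 cm^2)")]:
    print(f"{label}: chance of zero defects ~ {math.exp(-area_cm2 * D):.2e}")
# A whole-wafer "die" essentially never comes out perfect, hence the need
# to disable bad areas and wire around them.
```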
Since then IC fabrication has advanced tremendously, and it’s now possible to make systems on a chip (SoCs) that incorporate many functions: System on a chip - Wikipedia
Yes, the Monster 6502 is, indeed, 5V and draws 2 amps, running at a speed of 60kHz. For comparison, the 6502 of the early-mid-80s generation of 8-bit computers typically ran at somewhere between 1 and 2 MHz.
It’s a pretty cool project with lots of LEDs to show you exactly what’s happening where when you execute a program. You can see the data travel from the data bus to the pre-decode register, to the instruction register, to the control lines, etc. You can see the individual registers, as the OP asked about.
I grew up on the 6502, so I think that project is damned cool for seeing what is going on “under the hood” in a more concrete way. This one is made with surface-mount parts, so it’s 12"x15", which is plenty big enough to see. It says in there that if you made it with through-hole parts, it would measure 19 square feet, which would put it at approx. 3.9’x4.9’, assuming the same aspect ratio.
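That through-hole estimate checks out if you keep the 12"x15" aspect ratio (a quick check, nothing more):

```python
# Scale a 12" x 15" board (1:1.25 aspect ratio) up to 19 square feet.
ratio = 15 / 12                      # 1.25
short = (19 / ratio) ** 0.5          # short side in feet
print(round(short, 1), round(short * ratio, 1))   # -> 3.9 4.9
```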
I was actually a little surprised to find out how small the 6502 can be made even using through-hole parts. Four feet by five feet doesn’t seem to me to be that insane a size. I understand that with modern computers you’d be using acres of land for something like that, but I find it incredible how much computing could be done with comparatively so little.
In 1975 I saw a PDP-8 in a suitcase kind of like that. Indeed very cool. I read up on the 6502 when I had a C64, and studied its instruction set, but never had a good reason to do much with it.
That’s problem number 1. Problem 2 is that no wafers I know of have 100% yield, unless maybe ones done in old technologies. You’d have to build in a lot of redundancy to get any kind of decent yield. And wafers are expensive!
Also, the edges of a wafer have a lot more defects in general than the middle, so you might have to sacrifice some area.
You’d have to figure out how to package it, and connect the zillion I/Os to the package.
Then you’d have to figure out how to test it. Wafer probe is done on full wafers, but a chip at a time. There are no probers that could handle all the I/Os on a full wafer. Plus, timing is very important, since you have to get all your signals from the part to your Automatic Test Equipment (ATE) more or less at the same time. Try that with a wafer. The one I got signed by my colleagues when I retired is almost exactly the size of an LP record.
Wafer probe is also incapable of testing chips at speed. That happens after packaging. Testing the packaged wafer is going to have the same problem.
That’s not the last problem (there are probably plenty I haven’t thought of), but there is also the planarity problem. Unless a chip is very flat you are going to get cracking issues across metal layers. I’ve seen them. Flatness standards can be held in a fab, but I can’t imagine how they would be enforced at a customer. So, cracking and lots of failures.
One more. In big chips today, there are process variations even across one chip. There are typically measurements taken at multiple sites, say 9 on a single chip, to track this. The variation across a whole wafer is going to be much greater.
And another. It is hard to supply enough power to big fast chips. This is one of the reasons that wafer probe does not run at speed. I don’t think power supplies exist that could supply enough power to a wafer during package test or burn-in. shudder
I worked peripherally on a WSI project at Bell Labs. Peripherally because I knew it was going to be a loser and it was. (We sold it to Alcoa.) The thing that killed Trilogy, Amdahl’s company, was heat dissipation. Not to mention other things. Read the history some day - a real soap opera.
SoCs don’t have much to do with WSI. They come from the idea that ASICs look a lot alike, so instead of building one from scratch you can buy IP (intellectual property) for the components, add some custom logic, connect it all up, and avoid much of the work. ARM cores and the like are IP for processors. Internal memories are IP. Logic controlling various I/O protocols is IP. IP can be hard (you just get a layout) or soft (you get the RTL code for the component).
MCMs (multi-chip modules) are old hat. I’ve known about them for 25 years, and IBM had a very primitive Thermal Conduction Module long before that.
The successor to WSI today is 3D and 2.5D integration. In 3D integration you stack chips, often a memory on top of the processor, and route the interconnect up. Heating problems, yes, but it takes up less space, and you use normal dies for this. But they have to be known good dies (KGDs), since once you attach two chips you ain’t going to be able to unattach them.
A more useful thing is 2.5D where you lay the chips out like on a board but use a silicon substrate to connect them, which means you don’t have to have your signals go through slow I/O buffers. Since the substrate is not active you get good yields. I’ve reviewed tons of papers on testing these things.
When I worked on Merced, the first Itanium (never released), we bumped up against the reticle limit more than once, and had to put the die on a diet, reducing speed or getting rid of features to save space. So while this isn’t a problem for chips playing Happy Birthday, it is for advanced microprocessors.
You can get between 70 and 100 chips on a big wafer, at least for the big processors I worked on. You can get thousands of littler parts.
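Those counts line up with the usual gross-dies-per-wafer rule of thumb (an approximation I'm adding here, with assumed die sizes, not the exact parts mentioned above):

```python
# Gross dies per wafer: area term minus an edge-loss correction.
import math

def dies_per_wafer(wafer_d_mm, die_area_mm2):
    return (math.pi * (wafer_d_mm / 2) ** 2 / die_area_mm2
            - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

print(int(dies_per_wafer(300, 600)))   # big ~600 mm^2 processor: ~90 dies
print(int(dies_per_wafer(300, 25)))    # small ~25 mm^2 part: ~2700 dies
```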
Yeah, the speed of light is a crawl at modern microprocessor speeds. For a chip clocked at 4 GHz, light moves slightly less than 3 inches in one clock cycle.
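Putting numbers on it (the half-of-c signal velocity below is an assumed rule of thumb; real RC-loaded on-chip wires are slower still):

```python
# Free-space light per 4 GHz clock cycle, and time to cross a 300 mm wafer.
C = 2.998e8                       # speed of light, m/s
period_s = 1 / 4e9                # 0.25 ns at 4 GHz

print(C * period_s / 0.0254)      # ~2.95 inches of light per cycle

v = 0.5 * C                       # assumed on-chip signal velocity
cross_s = 0.3 / v                 # edge to edge on a 300 mm wafer
print(cross_s * 1e9, cross_s / period_s)   # ~2 ns, i.e. ~8 clock cycles
```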