Why can’t processors be built three-dimensionally, where more than one level processes things at the same time and the levels can communicate with each other? Then when you turned the CUBE processor on its side you’d see the same sort of processing patterns of circuits from any angle. You would be able to interconnect it like a neural network. Why does a circuit have to run all the way to the other side of the chip to complete a certain value, when it could move up one level and complete the value circuit there?
Bit too vague? Then you extrapolate…
PerfectDark
Aren’t multiprocessor (or massively multiprocessor) systems logically at least semi-equivalent? The CPUs might be spread out physically, but you could imagine them stacked on top of one another.
Problems?
- heat dissipation. Where does the heat from the innermost lattice of the cube go? Might as well give the processors a large, thin profile to maximise heat loss.
- fabrication and failure rates. Harder to fabricate, and failure rates multiply (i.e. if a chip fails now, you toss it; in your cube idea you’ve got to toss the other 63 processors along with it - see the sketch below).
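A quick back-of-the-envelope sketch of that yield problem, in Python. The 64-die cube comes from the example above; the 90% per-die yield is purely an assumed number for illustration.

[code]
# Back-of-the-envelope yield for a monolithic cube of processors.
# Assumes each die fails independently with the same probability.
# dies_per_cube = 64 comes from the "toss the other 63" example above;
# the 90% per-die yield is an assumed, illustrative figure.

per_die_yield = 0.90    # assumed chance a single die is good
dies_per_cube = 64      # 4 x 4 x 4 cube from the example above

cube_yield = per_die_yield ** dies_per_cube
print(f"Whole-cube yield: {cube_yield:.4%}")  # ~0.12% -- nearly every cube is scrap
[/code]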
That said, if you built one, I’d buy it (to paraphrase my answer in one of your many other strange and yet slightly scary threads - I’d sure think twice before letting you anywhere near industrial-strength lasers [sub]in the nicest possible way, of course[/sub]).
‘Answers to all your problems’ (I admit this line I didn’t make up. I heard it before… in the Bible or some other novel)
Anyway…
1) heat dissipation
2) fabrication and failure rates.
If the processors were half cellular material, you could use a blood-circulation system to cool the neural circuits down and have genetically programmed cells that specifically repair the electrical circuits.
How much would you pay for one of these TetraHz processors?
PerfectDark
[sub]whispers from side of mouth [sub]How many you got?[/sub][/sub]
TetraHz? Wasn’t that the original PC? 4.77 MHz?
You mean TeraHz.
Well, the OP is pretty baffling, but if I got the gist of it, people have tried various interesting topologies and architectures for creating massively parallel processors.
The Connection Machine (Hillis) is fairly close to what you’re describing: a large number of fairly low-powered processors connected together, operating, I think, in a Single Instruction, Multiple Data (SIMD) mode. Those things are hard to build, very hard to program, and, because they use proprietary chips and architecture, they don’t get the economy of scale that you get from using, say, a whole bunch of Intel’s latest chips. So the fabrication process used is always a generation or two behind the current state of the art, which tends to negate a considerable amount of the performance gain you get from the parallelism.
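For flavor, here’s a toy Python sketch of the SIMD idea: one instruction stream applied in lockstep across many data elements. This is a cartoon of the programming model only, not the Connection Machine’s actual instruction set or interconnect.

[code]
# Cartoon of SIMD: a single "instruction" (here, a Python function)
# applied in lockstep to every element held by the "processors".
# Illustrative only -- not the Connection Machine's real ISA.

def simd_execute(instruction, data):
    """Broadcast one instruction to every data element at once."""
    return [instruction(x) for x in data]

data = list(range(16))                            # 16 processors, one value each
result = simd_execute(lambda x: x * x + 1, data)  # one instruction, many data
print(result)                                     # [1, 2, 5, 10, 17, ...]
[/code]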
The Kendall Square Research machine was a massively parallel multiprocessor that had a high degree of granularity and used lots of cache memory to simulate shared-memory multiprocessing. Their timing was bad: they came out just about the same time that chip technology was advancing (very rapidly) from '486 to Pentium and beyond. Their decision to use custom chips (see above notes about fabrication) and (as I recall) a certain lack of candor regarding their financial results did them in.
Now what the OP is asking is why we don’t put similar architectures on a single chip. Well, you’d gain quite a bit in terms of communication between adjacent processors, but that still only gives you a topology in which each processor is connected to maybe six other processors. Compared to, say, human neurons, that’s not very interesting. Probably not worth the added fabrication difficulties versus just using a bus and accepting the communication latencies in exchange for more interesting topologies. And as other people have noted, you’d have enormous problems with heat dissipation, and the ratio of good chips to rejects would probably become exponentially worse.
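To put a rough number on that six-neighbor limitation, here’s a small Python sketch comparing worst-case hop counts in an n x n x n mesh against a fully connected network (which is closer to how neurons fan out). It assumes simple nearest-neighbor routing; real interconnects route more cleverly, so treat the numbers as illustrative.

[code]
# In a 3D mesh each processor talks directly to at most 6 neighbors,
# so a message between opposite corners of an n x n x n cube needs
# 3 * (n - 1) hops under simple nearest-neighbor routing.
# A fully connected network puts every pair one hop apart.
# Illustrative numbers only.

def mesh_worst_case_hops(n):
    """Manhattan distance between opposite corners of an n^3 mesh."""
    return 3 * (n - 1)

for n in (2, 4, 8, 16):
    procs = n ** 3
    print(f"{procs:5d} processors: mesh worst case = "
          f"{mesh_worst_case_hops(n):2d} hops, fully connected = 1 hop")
[/code]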