Will we ever advance above the integrated circuit?

I remember watching a documentary about Artificial Intelligence. One of the guys on the program made the claim that, in order to get true AI, you’re NOT going to do that with a device that’s built with ICs. You can only make the geometry so small. You’d have to build it with a component that works on a completely different concept.
Leaving aside the AI part, what would such a component be, in theory? Has such a thing ever even been conceived?

I hope I’m making sense.

He may have been thinking of quantum computers. Both “true” AI and quantum computing are often mentioned in the same breath as the nerd Rapture.
As for what will keep increasing computing power now that shrinking the geometry is increasingly difficult, that likely has more to do with advances in microarchitecture and interconnects.

The problem isn’t the scale of “the geometry” of integrated microcircuitry or even the thermodynamics of very large scale integrated (VLSI) microchips; the density of transistors in computational processing units of modern digital computers is far greater than that of neurons in the mammalian brain, and the effective processing rate is much faster. (It is difficult to compare the clock rate of digital computers with the neural processing rates of mammalian brains because they don’t operate on remotely the same principles, but the effective response rate of a computer to solve a discrete problem like a math calculation is faster than the fastest mathematical savant by orders of magnitude.)
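To make that concrete, here is a very rough back-of-envelope comparison of volumetric densities; every figure below is an assumed ballpark (roughly 86 billion neurons in about 1.2 litres of brain tissue, versus a large modern die with on the order of 50 billion transistors), not a measured value:

```python
# Back-of-envelope density comparison (all figures are rough, assumed ballparks).

NEURONS_IN_BRAIN = 86e9        # commonly cited estimate
BRAIN_VOLUME_CM3 = 1200.0      # roughly 1.2 litres

TRANSISTORS_ON_DIE = 50e9      # a large modern processor die, order of magnitude
DIE_AREA_CM2 = 6.0             # ~600 mm^2
DIE_THICKNESS_CM = 0.08        # ~0.8 mm of silicon

neuron_density = NEURONS_IN_BRAIN / BRAIN_VOLUME_CM3
transistor_density = TRANSISTORS_ON_DIE / (DIE_AREA_CM2 * DIE_THICKNESS_CM)

print(f"neurons per cm^3:          {neuron_density:.2e}")
print(f"transistors per cm^3:      {transistor_density:.2e}")
print(f"ratio (transistor/neuron): {transistor_density / neuron_density:.0f}x")
```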

The real problem in using digital computing to emulate the brain is more fundamental; the brain is both highly interconnected, and has a fundamental plasticity in learning and recovering from damage that digital computers simply don’t have. The human mind is not a state machine; it is in constant flux because the underlying neurophysiology is constantly changing, from neurotransmitter and hormone concentrations to the formation of novel connections through memory and learning, particularly during development and into young adulthood in ways that are not very well understood.

Human brains in particular are also very good at interpolating from sparse data and making predictions in order to function faster than they should be able to; for instance, the maximum rate at which a skilled reader can interpret text is not only faster than the eye can focus on each individual letter, it is actually faster than the speech centers of the brain can produce vocalization, which means that the ability to interpret written language—which is entirely outside of the context of nominal human evolution—uses disparate parts of the brain to “fill in the gaps” and recognize patterns in ways that are independent of the ostensible natural language centers. That we can communicate complex and abstract concepts via written language is not only a remarkable technological achievement that underpins all of modern society and the ability to disseminate knowledge and ideas beyond immediate personal contact, but it is also amazing in its ability to go beyond what the brain was “designed” to do by evolutionary pressures.

And because we do not really understand how the brain does this (other than that reading integrates nearly every major cortical functionality in the neocortex and also stimulates many of the evolutionarily more primitive functions), trying to reproduce it via discrete computational processes in software on top of digital logic in a way analogous to how the brain works is an exercise in futility. We will very likely need some kind of synthetic biological-like system of computing—whether based on natural biology or some other self-organizing and highly adaptive substrate—that can provide this degree of flexibility and plasticity in order to achieve truly sapient “artificial intelligence”.

Machine cognition, such as it is, is focused largely on heuristic adaptive networks, and while they can produce some surprising and unique results, nobody considers them to be self-aware or capable of general self-directed intelligence.

Stranger

There have been some concepts put forth about using DNA to do computing. The small size and massive parallelism have their advantages.

But not a lot of connectivity for communication, which you need for general purpose computing. Plus, for the foreseeable future, the computations have to be for something error-tolerant.

Oh, and it’s slow. Mind-bogglingly slow. Think of it like doing addition with your fingers, but you have billions and billions of fingers.
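A toy comparison of aggregate throughput versus per-answer latency makes the “billions of fingers” point; every number here is assumed purely for illustration, not drawn from any actual DNA-computing experiment:

```python
# Toy throughput-vs-latency comparison (all numbers are assumed ballparks, not measurements).

dna_parallel_strands = 1e14     # strands reacting at once in a test tube (assumed)
dna_step_seconds = 3600.0       # one hybridization/separation step ~ an hour (assumed)

cpu_parallel_lanes = 16          # modest parallelism on one conventional core (assumed)
cpu_step_seconds = 1e-9          # ~1 ns per simple operation at GHz clocks

dna_ops_per_second = dna_parallel_strands / dna_step_seconds
cpu_ops_per_second = cpu_parallel_lanes / cpu_step_seconds

print(f"DNA 'ops'/s: {dna_ops_per_second:.2e}  (but each answer takes ~{dna_step_seconds/3600:.0f} h)")
print(f"CPU ops/s:   {cpu_ops_per_second:.2e}  (each answer arrives in nanoseconds)")
```

The aggregate rates come out comparable under these assumptions; the difference is that every DNA “answer” takes hours to arrive.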

Current integrated circuits are effectively two-dimensional. Yes, there’s some layering, but the number of layers is small compared to the number of features across the plane. The revolution will be to build higher-dimensional circuits, where the computational workload is spread within a volume instead of across a surface. I expect truly three-dimensional circuits will be impossible, simply because of the requirement to efficiently get energy in and out of the system. Instead, the structure will be a heavily folded fractal surface; think of the surface of the human brain, but with more folding and more iterations. Shapes like the Hilbert curve or the Menger surface. The folding will effectively increase the density of computational elements.
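For anyone who hasn’t played with it, the Hilbert curve is a space-filling curve: a one-dimensional path folded until it visits every cell of a square grid, which is the same intuition as a folded two-dimensional circuit surface filling a volume. Here is a minimal sketch using the standard distance-to-coordinate construction:

```python
def hilbert_d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n-by-n grid (n a power of two).

    Standard iterative construction: at each scale s, two bits of d pick the quadrant,
    the sub-curve is rotated/reflected, and the quadrant offset is accumulated.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect within the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Walk a 1-D index through every cell of an 8x8 grid: the "line" fills the square.
path = [hilbert_d2xy(8, d) for d in range(8 * 8)]
assert len(set(path)) == 64              # every cell is visited exactly once
print(path[:8])
```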

As to whether one can build “true artificial intelligence” by using more computational elements, that’s a philosophical question rather than an engineering one. I prefer a generalized Turing-test approach: we keep finding problems we’d like computers to solve, and then build computers to solve those problems. Who cares if the computer is “intelligent” if it’s solving the problems we want it to solve?

3D integrated circuits are in development. Samsung now makes 3D memory chips.

Actually, there’s something called silicon photonics.

In essence, it’s possible to wire optical fibers directly to an IC. So if we knew how to make an AI system whose components were equivalent in capability to the human brain, and we needed to do it using today’s methods, we would:

  1. Design custom ASIC chips that implement the math commonly used in the subsystems of this AI more efficiently. See TPUs from Google and similar chips from IBM for real-world examples.

  2. At multiple places on each chip, a fiber optic cable would come right off the chip itself. These hair-thin cables would go to optical switches, with several layers of topology, and if you were to visit a data center that had a functional system like this, I would expect you would see primarily massive bundles of fiber optic cable and optical interconnects. The actual processing nodes would be smaller in size than the connections between them, similar to how they are in human brains. A rough estimate of the interconnect traffic this implies is sketched just after this list.
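As a rough feel for how much interconnect such a system might need, here is a back-of-envelope sketch; every number in it (neuron count, average firing rate, fan-out, message size, per-link rate) is an assumed ballpark, not a measured figure:

```python
# Rough estimate of interconnect traffic for a brain-scale spiking system.
# Every number below is an assumed ballpark, not a measurement.

neurons = 86e9            # commonly cited neuron count
avg_firing_hz = 1.0       # average spike rate (assumed)
fanout = 7000             # synaptic targets per neuron (assumed)
bytes_per_event = 8       # a routed (source id + timestamp) spike message (assumed)

spike_events_per_s = neurons * avg_firing_hz
messages_per_s = spike_events_per_s * fanout
traffic_bits_per_s = messages_per_s * bytes_per_event * 8

per_link_bits_per_s = 400e9   # assume one 400G optical link per fiber
links_needed = traffic_bits_per_s / per_link_bits_per_s

print(f"aggregate traffic: {traffic_bits_per_s:.2e} bit/s")
print(f"400G links needed: {links_needed:,.0f}")
```

Under these assumptions you end up needing on the order of a hundred thousand optical links, which is why the data center would look mostly like bundles of fiber.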

This need for bandwidth for interconnects and special chips means that the scenario of a lone mad scientist inventing an AI, hosting it on their desktop computer, and having the machine get loose and somehow escape to random vulnerable computers on the internet is basically science fiction.

Why aren’t we doing this already? We just haven’t solved the problems needed to do this yet. Neural networks, as commonly used, have common and catastrophic flaws. There are improved methods being released several times per year at this point; development is very active. But a fully capable AI would not be a single massive neural network, it would be hundreds, probably thousands, of separate subsystems, each probably originally developed to support a much smaller, sellable AI product.

We won’t, in my opinion, get there by accident or without an enormous amount of support infrastructure. It will require vast pools of both talented AI engineers and a massive amount of commonly available software libraries and techniques that are interoperable with each other. As well as many huge datasets that were used to train these systems. The hardware is the least important part.

Exactly. We probably already have the ability to make a circuit several thousand layers thick, but without some way to pull heat out of the middle it’s just going to be a very fancy heating element.
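A quick worked example of why the middle of the stack is the problem; the power, area, and cooling figures here are assumed ballparks only:

```python
# Why stacking layers without internal cooling fails: heat flux through the footprint
# scales with layer count. All figures below are assumed ballparks.

single_die_watts = 100.0      # a typical high-end processor die (assumed)
die_area_cm2 = 5.0            # footprint of the stack (assumed)
per_layer_fraction = 0.01     # suppose each stacked layer runs at only 1% of full power (assumed)
layers = 2000

heat_flux_single = single_die_watts / die_area_cm2
heat_flux_stack = layers * per_layer_fraction * single_die_watts / die_area_cm2

print(f"single die:       {heat_flux_single:.0f} W/cm^2 through the footprint")
print(f"2000-layer stack: {heat_flux_stack:.0f} W/cm^2 through the same footprint")
# Conventional air/liquid cooling handles on the order of 100 W/cm^2 at the surface,
# so the interior of the stack cooks unless heat is extracted within the volume.
```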

IMO, the breakthrough we’re going to need will be major advancements in room-temperature and higher superconductors. Admittedly, this is way outside my field of expertise, so I’ll wait for someone with relevant knowledge to weigh in.

What’s missing here is an operational definition of “Artificial Intelligence”, and I think that is necessary for a meaningful conversation. Apparently, this is an ongoing issue:

From Wikipedia: *“The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.”*

I consider true “A. I.” something that is self-aware. It would have to have consciousness. Memory isn’t the issue, and computing speed isn’t the issue. Computers have already won that battle hands down. So, why would we think that “smaller” and “faster” is going to do the trick when it hasn’t done it after all of this time?

I hope we’ll be able to answer this in a decade or so. Some AI subsystems, like classifiers, take in a large amount of input data (a feed from a camera or several sensors overlaid) but output only a very small amount of data (like a labeled grid of the environment with what the algorithm thinks is present).

Other systems, like path solvers or optimal-move solvers, similarly have this big-in/little-out relationship.

This means you don’t need 3D chips. You can just use well-cooled 2D chips with short lengths of optical fiber between them and get good performance, possibly better than you would get by having to lower the clock speed to make a true 3D chip.
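A toy illustration of that big-in/little-out asymmetry, with made-up but plausible sizes for the camera feed and the labeled grid:

```python
# Input/output size asymmetry for a perception subsystem (example figures are assumed).
import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)       # one camera frame
occupancy_grid = np.zeros((64, 64), dtype=np.uint8)      # labeled grid of the environment

fps = 30
input_bytes_per_s = frame.nbytes * fps
output_bytes_per_s = occupancy_grid.nbytes * fps

print(f"input:     {input_bytes_per_s / 1e6:8.1f} MB/s")
print(f"output:    {output_bytes_per_s / 1e6:8.3f} MB/s")
print(f"reduction: ~{input_bytes_per_s // output_bytes_per_s}x")
# Only the small output has to cross between chips, so a short optical link
# between well-cooled 2D dies carries it easily.
```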

The reason for pure 3D might be to save space, such as if you wanted sentient, independently operating robots or something.

That’s cool, but only three-dimensional when compared to the usual chip. The aspect ratio is still something like 1:1000. But you’ve got to start somewhere, and it’s good to see work along that axis.

I could see circuits developed with magnetic-based transistors instead of voltage-based transistors. Think of little conductive loops whose magnetic flux (or lack thereof) switches current flow in another line. That could significantly reduce the thermal cost per bit. But it’d be a while before it’d match the component densities of semiconductor transistors. (My dissertation was on superconducting circuits, but I haven’t worked in the field in twenty years.)
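For a sense of how much room there is to cut the thermal cost per bit, compare the Landauer bound with a ballpark CMOS switching energy; the capacitance and supply voltage below are assumed illustration values, not figures for any particular process:

```python
# Landauer bound vs a ballpark CMOS switching energy (the CMOS figures are assumed).
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K
landauer_j = k_B * T * math.log(2)

C = 0.1e-15                 # ~0.1 fF effective node capacitance (assumed)
V = 0.7                     # supply voltage (assumed)
cmos_switch_j = 0.5 * C * V**2

print(f"Landauer limit:       {landauer_j:.2e} J per bit erased")
print(f"ballpark CMOS switch: {cmos_switch_j:.2e} J")
print(f"headroom:             ~{cmos_switch_j / landauer_j:.0f}x")
```

Under these assumptions there are still several orders of magnitude between today’s switching energies and the thermodynamic floor, which is the headroom alternative switching devices are chasing.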

I see that as an alternative bus system–well worth developing but independent of the dimensionality of a chip. The reason for compact chip sizes (which drives higher component densities and three dimensions) is that distance is time. Longer distance means longer time means lower frequency or longer wait.

A neuron is not really the equivalent of a transistor. A neuron is complex and performs multiple computations (e.g., dendrites perform non-linear analog filtering computations); it’s closer to a CPU than a transistor.
IBM has some interesting work going on with their neuromorphic chips. It’s still silicon, but optimized for neural-network-type processing, so power consumption is orders of magnitude lower. I don’t know if these things were on the same chips or different projects:
One of the things they were working on had a capacitor (if I remember correctly) that basically acted like a synapse: sum the input voltage, and at a certain point fire a signal.
Another one just arranges connections between RAM and processing circuits to more closely resemble the processing required for neural nets (vs. a traditional computer), which reduces power consumption.
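The “sum the input voltage, fire at a threshold” behaviour described above is essentially a leaky integrate-and-fire neuron. Here is a minimal sketch with arbitrary illustration constants (not taken from the IBM hardware):

```python
# Minimal leaky integrate-and-fire neuron: accumulate weighted inputs, fire at a threshold.
# Constants are arbitrary illustration values.

def lif_run(input_currents, threshold=1.0, leak=0.95, reset=0.0):
    """Return the list of time steps at which the neuron fires."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(input_currents):
        v = leak * v + i_in          # membrane 'capacitor' integrates input and leaks
        if v >= threshold:           # threshold crossed: emit a spike and reset
            spikes.append(t)
            v = reset
    return spikes

# Constant drive of 0.12 per step: the neuron charges up and fires periodically.
print(lif_run([0.12] * 50))
```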

Agreed, and perhaps my explanation was not clear, but that is why getting to a true artificial general intelligence isn’t just a “geometry” or scaling problem; neurons, and the substrate that supports them, are not just elements of simple logic circuits but instead are highly plastic and work in a very systematic way that is not readily simulated by digital circuits. And creating a “brain-like” software abstraction on top of digital computing hardware probably requires a fidelity of modeling of neurons and brain systems that goes down to the molecular level, or at least provides an adequate simulacrum of the effects of neurotransmitters, protein synthesis, et cetera. Nobody currently working in machine cognition is really trying to create this at anything beyond the level of a few neurons because the computational requirements are just too complex to make a workable model.

Stranger

Goodness, ICs are only a few dozen years old. It’s hard to imagine that no further inventions will move way beyond them.

Unfortunately, the atomists were right about the structure of matter.

Not according to your own cite: “the connection to historical atomism is at best tenuous”.

First atoms were considered indivisible, then protons and electrons and neutrons were, then quarks, and so forth. How do you ever establish that we won’t find another tier of division?

My dear old grandfather Fafi was a bright and curious little boy when electrons were discovered. The tiers keep giving way to one another so quickly that they get replaced within memory. I was already working with electronics when the microprocessor was introduced, and I can’t believe there is anything final about that step, either.