Is there a credible argument that computers cannot replicate the human brain?

I’ve heard it said that artificial intelligence will never truly be able to replicate human intelligence, because the nature of intelligence between the two is different. Computers tend to be exceptional at mathematics but exceptionally poor at tasks that we would define as creative, arts in particular.

Is there credence to this argument? It seems to me that with the right level of brute power and technology, we could use simulation or emulation technology to replicate the human mind exactly as it works.

I’m going to go with “no”: There is no significant property of human intelligence that a suitably complete simulation thereof would not also have, and arguments to the contrary all amount to a new form of vitalism.

Unless you accept there is something about human brains that is not reducible to math, then no, there’s no ultimate obstacle to strong AI; at that point, it merely becomes a question of how much computing power you need. I’m deeply curious about this, and I hope I live long enough to see the definitive answer.

Right now and in the near future any attempted simulations of a human brain will involve supercomputers that consume much more energy (and other resources) than a human does. That’s an obstacle to artificial brains being created in significant numbers.

Comparisons about computers being good at maths and not at other stuff really miss the point. They compare at far too low a level. Existing computers are tiny in comparison with even very small brains. They also operate in a fundamentally different manner. What we get computers to do is arguably limited, not by the computer, but by us. We craft the programs they run.

The obvious thing to do is to take the known structure and operational mechanisms of a brain and get the computer to simulate that. This is pretty much the approach being taken at the moment. You can use a very large number of conventional computer systems to do this, or you can take our existing knowledge of crafting computers, and design something more directly targeted at such a simulation. (It doesn’t need graphics acceleration for a start.) What is more important is designing communication systems that tie together what will be millions of separate computer elements.
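To make "simulate the known structure and mechanisms" concrete, here is a minimal sketch of the kind of unit such a simulation would replicate billions of times: a leaky integrate-and-fire neuron. All names and parameter values here are illustrative placeholders, not measured biological quantities.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the kind of simplified
# unit a large-scale brain simulation multiplies by the billions.
# All parameters here are illustrative, not measured biological values.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate a constant input current for 100 ms; return spike times (ms)."""
    v = v_rest
    spikes = []
    t = 0.0
    for _ in range(int(100 / dt)):
        # Membrane potential leaks toward rest while integrating input.
        v += (-(v - v_rest) + input_current) * (dt / tau)
        if v >= v_threshold:          # threshold crossed: emit a spike
            spikes.append(round(t, 1))
            v = v_reset               # reset after spiking
        t += dt
    return spikes

spikes = simulate_lif(input_current=1.5)
print(f"{len(spikes)} spikes in 100 ms, first at t={spikes[0]} ms")
```

The point of the sketch is the gulf it implies: even this cartoon of a neuron takes hundreds of arithmetic operations per simulated millisecond, and a brain has on the order of a hundred billion neurons, which is why the communication fabric between compute elements dominates the design.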

Once you craft this, you have the problem that you have no programs to run on your super brain simulator. We really have no clue how things work in the brain in any detail beyond very small, trivial sets of neurons. There seems to be a bit of a cargo cult mentality in some areas, where there is some hope that a sufficiently large system might begin to exhibit useful higher-level properties essentially as an emergent phenomenon.

Anyway, this is to say that a brain simulation that is at least capable of simulating all the components of an animal (including human) brain, and at a speed comparable to the operation of the natural brain is entirely reasonable. We will see it. And sooner than many expect. But we still have no clue how to program it.

But this doesn’t answer the OP’s more fundamental question. Could the thing eventually think? The simple answer is that we have no clue.

Arguments exist on all fronts. The core question is quite simply - do we have a soul? Is there a ghost in the machine, or is it just machine inside our heads? And you swiftly fall back to atheist-versus-theist arguments. And they are often highly sophisticated arguments, not just “God exists”, “no he doesn’t”, “yes He does”… (sounds of scuffle, muffled swearing, punches being thrown…)

Then you get the question - how would you know? The canonical answer is the Turing Test. Which isn’t all that satisfactory in many ways, but no-one has really come up with anything better. (Most people also don’t know what the proper Turing Test is, and there have been some very shoddy examples of its supposed application.)

Roger Penrose’s book The Emperor’s New Mind was a mildly controversial attempt to prove that strong AI was rubbish and machines could never think as we do, taking a tour of physics to try to bolster the thesis.

There is a significant element of free will in any argument about machine intelligence, and a core uncomfortable issue: if machines can think as we do, and are deterministic, it suggests that free will does not exist. This worries more than a few pundits. Given that theologians have wrestled with this for hundreds of years without satisfactory resolution, the question of machine intelligence is not a welcome newcomer.

In the end, the GQ answer to the question is - we really don’t know, and the question is not otherwise answerable in a GQ forum.

Over the short and medium term, yes, but as time passes they will use less energy. Gaming consoles today are as powerful as supercomputers were 20-30 years ago.

I tend to agree with Penrose, even ignoring questions of higher-level mathematical logic. One can legitimately ask “but surely a computer can simulate a brain at the physical level, even if we don’t know how exactly it produces the results?” Well, that’s debatable, on both classical and quantum grounds. Classically, if you have a system with chaotic analog components, then it could require input of limitless precision to model accurately. And of course if the quantum is involved, then you need a quantum system to model it. And even if an algorithmic computer could in principle model such systems digitally, that doesn’t help much if doing so would require a number of years with a multi-digit exponent.
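The classical point about limitless precision is easy to demonstrate with the logistic map, the textbook chaotic system. Two trajectories starting a trillionth apart become macroscopically different within a few dozen steps, so each additional step of faithful prediction demands more digits of input precision. The starting point and thresholds below are arbitrary choices for illustration.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) at r = 4.0, a fully chaotic regime. Separations grow
# roughly exponentially (doubling per step on average), so a fixed
# prediction horizon fixes how many input digits you need.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def steps_to_diverge(eps=1e-12, threshold=0.5, max_steps=200):
    """Iterate two nearby starting points; return the step at which
    their separation first exceeds `threshold` (None if it never does)."""
    a, b = 0.3, 0.3 + eps
    for step in range(1, max_steps + 1):
        a, b = logistic(a), logistic(b)
        if abs(a - b) > threshold:
            return step
    return None

print(steps_to_diverge())  # order-1 separation after roughly 40 steps
```

Note that shrinking the initial error by six orders of magnitude (say `eps=1e-6` down to `1e-12`) only buys about twenty extra steps of agreement, which is the sense in which accurate long-term modeling needs unbounded precision.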

Computers can surely model instinctive, reflexive behavior, and can be expert systems with arbitrarily refined skill at a defined task. But I still follow Penrose in thinking that general intelligence- the ability to create an appropriate algorithm for a task- is something we don’t yet have a handle on.

Sorry to disagree. Penrose committed an error in elementary logic that he still has not acknowledged. Now as to the OP, I firmly believe that the answer is no, but we have a long, long way to go. I conjecture that the only way forward is to design an evolutionary algorithm for programming a brain simulation. In the end, we will understand the program no more than we understand the brain at present.

It is interesting that someone once made an evolutionary algorithm for sorting. It was even quicker than QuickSort. But it was utterly incomprehensible. Nobody could figure out how it worked.

Different researchers are working on the problem using different approaches. However, in general, whole brain emulation eventually intends to copy a specific brain. They don’t plan to build a gigantic array of “brain-like” computers and bolt them together at random, they plan to copy the systems of a specific brain and expect the emulation to behave in ways comparable to that brain.

Also, there is strong, direct evidence that this approach will work. See : Hippocampal prosthesis - Wikipedia .

So, ok, there’s nothing magic about a rat’s Hippocampus. You can replace it with a relatively crude math model and it still mostly works. Do you want to wager that some other portion of the human brain relies on “magic” and that a relatively crude model cannot replace it?

The problem today is merely one of money and scale. To get a good image of a single human brain you need to slice it into 50-nanometer slices using several thousand expensive pieces of equipment over a period of years, then scan the slices with about 1,000-5,000 multi-beam electron microscopes, each costing several hundred thousand dollars. Both pieces of equipment exist, but there are only a few in the world.

You would then need billions of dollars worth of custom ASICs to get an emulation at “interactive” simulation speeds.

Nowhere near the level of funding has been spent. You just have various AI researchers with a paltry few mil, and some of them have made outrageous predictions that of course have turned out to be false. I raise you one, and I think you are making a cargo cult argument. You’re saying that since some lone PhDs and their grad student assistants have not made meaningful progress on a fully sentient computer system in 40 years, it is “uncertain” whether AI is even possible. I think you’re making a sort of inverse cargo cult argument, where you haven’t even built the runway, just a control tower, and are trying to argue that because the planes are not yet showing up, airports are impossible.

In order for you to even say that we “don’t know” whether AI can be done you need to either show
1. The brain uses irreducible complexity. It’s a massively parallel system, but it uses subcomponents made of proteins defined in the human genome, which is a relatively small file. Good luck showing that.
2. The brain uses magic. It does things that cannot be emulated at all.
3. You must mimic the brain exactly or you won’t get sentience. Lots of people have relatively serious faults in their brain, sometimes large missing areas, and are still relatively sentient.

Wouldn’t we have to establish a baseline to simulate? I’m thinking the brain along the lines of a fractal equation that mutates over time due to influences. Never ending until death.

Can you elaborate on the two parts of your post I have bolded?

I don’t see any reason why simulating the human brain is impossible.

But I do see two obstacles.

#1: It might not be able to run in real time. The computations required to simulate the actions of a single neuron might be so complicated that it takes several hours of machine time to simulate one millisecond of brain activity. Imagine asking Commander Data “How are you today?” and waiting two months for him to reply “I’m fine.”

#2: If you were hoping to create a mechanical super genius who can invent new solutions without ever getting bored or making mistakes, then you’ll be really disappointed when we actually end up building a virtual human brain complete with creativity and intelligence, but also emotions and mistakes. You might hope to build a robot butler but end up with a sarcastic robot who sits on the couch all day drinking beer.
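The two-month figure in obstacle #1 checks out arithmetically. A quick sanity check, assuming a slowdown of 1.5 hours of machine time per simulated millisecond (an assumed point in the “several hours” range) and a one-second spoken reply:

```python
# Sanity check of the "two months to say I'm fine" scenario.
# Assumed: 1.5 hours of machine time per simulated millisecond.
hours_per_sim_ms = 1.5          # assumed slowdown factor
reply_duration_ms = 1000        # "I'm fine" takes about one second
total_hours = hours_per_sim_ms * reply_duration_ms
total_days = total_hours / 24
print(f"{total_days:.0f} days (~{total_days/30:.1f} months)")  # ~63 days, ~2 months
```

So even a modest per-neuron cost, multiplied across a whole brain and a wall-clock second, lands squarely in the months regime.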

I assume the “evolutionary algorithm for sorting” refers to the use of GAs in the construction of sorting networks.

The brain is a massively parallel computer with hundreds of trillions of synapses, each of which may be capable of elementary learning. To simulate that on a conventional computer is infeasible.

However, it’s possible to imagine a chip made from transistors and crude analog elements that could emulate, say at present densities, many thousands of neurons, each with several hundred synapses. Because each of these electronic neurons would be many thousands of times faster than an organic neuron, such a chip might have the gross computation power of hundreds of millions of organic neurons. Figuring out how to program or use such a machine would be … interesting.
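Rough arithmetic behind the two posts above; all figures are assumed round numbers taken from the ranges mentioned ("hundreds of trillions", "many thousands").

```python
# Why conventional simulation is infeasible: memory just to hold the
# synaptic weights, at one 32-bit float per synapse (assumed encoding).
synapses = 100e12                # take 100 trillion from "hundreds of trillions"
bytes_per_weight = 4
print(f"{synapses * bytes_per_weight / 1e12:.0f} TB of weights")  # 400 TB

# Effective throughput of the hypothetical analog chip: neuron count
# times the electronic-vs-organic speed advantage (both assumed).
neurons_per_chip = 10_000        # "many thousands"
speedup = 10_000                 # "many thousands of times faster"
effective = neurons_per_chip * speedup
print(f"equivalent to {effective / 1e6:.0f} million organic neurons")  # 100 million
```

Hundreds of terabytes of state just for weights, versus a single chip notionally worth a hundred million organic neurons, is the gap the analog-emulation argument is pointing at.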

I made the same guess. Danny Hillis (or someone using his method) built a 45-comparator oblivious sorter for N=13, besting the previous record. N=13 is small enough that it may seem odd such a network was never discovered by hand.
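The reason such networks can be machine-evolved and then machine-verified at all is the 0-1 principle: a comparator network sorts every input if and only if it sorts every 0/1 input, so exhaustive verification of an n-input network needs only 2^n vectors rather than n! permutations. A small sketch; the 5-comparator network for n=4 is the classic optimal one, and the function names are mine.

```python
# Verifying an oblivious (data-independent) sorting network via the
# 0-1 principle: check all 2**n binary inputs instead of n! orderings.
from itertools import product

def apply_network(network, values):
    """Run a list of compare-exchange gates (i, j) over the input."""
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:              # min goes to the lower index
            v[i], v[j] = v[j], v[i]
    return v

def is_sorting_network(network, n):
    """True iff the network sorts every 0/1 vector of length n."""
    return all(apply_network(network, bits) == sorted(bits)
               for bits in product((0, 1), repeat=n))

# The classic optimal 5-comparator network for n = 4.
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(is_sorting_network(net4, 4))             # True
print(f"n=13 needs only {2**13} 0/1 vectors")  # 8192
```

For N=13 that is 8,192 test vectors per candidate, cheap enough to use as the fitness function inside a GA, which is how an evolved 45-comparator network could be certified correct even though nobody understood its structure.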

Only to the extent you believe human minds are dependent on organic human brains. I don’t think that’s necessary in principle but our current technological level is laughably far away anyway, so it doesn’t even matter yet.

I can’t remember who said it, but: a simulation is interesting, yet it’s not the thing. When modeling precipitation patterns you don’t need to bring an umbrella into the lab. Searle or Dennett, I think.

Seconded.

Even Penrose admitted there’s nothing magic about organic cells made of protein and programmed by DNA. He agrees that some completely artificial substrate could act as a brain. What he is deeply skeptical of is the premise that intelligence is algorithmic- that a sufficiently long and complex list of directions can yield intelligence. Now since the DNA content of a human zygote, the size of the brain that eventually results, and the amount of data it takes in are all finite, then it can be argued that the “recipe” for an intelligent brain is itself the list of instructions. But Penrose disputes that. He argues that some of that input- the physical laws that make a brain emergent from the recipe- don’t function in an algorithmic way, and that the “brain as computer” model is necessarily omitting something crucial.

Can a computer lie?