Is there a credible argument that computers cannot replicate the human brain?

How do we, humans, know that 2+2=4? Well, there are two ways I can think of. Empiricism - every time we take two of some object and add them to another two of that object, we, coincidentally, always end up with 4. An empiricist would claim that this is the basis of mathematics, at least the basis historically. A computer can do this already. It can use, say, a graphics processor to duplicate an image with two dots on it, then count the number of dots in both pictures and return the result, which, coincidentally, is always 4.

Alternatively, you can do it over the natural numbers (a semiring, strictly speaking), using primitive recursion - we take 0, the successor sn(x), and the projections proj(x) as primitive, define 2 = sn(sn(0)), define + via similar recursion rules, and we end up with 4. This is also trivially done by a computer, with a similar degree of “understanding” - teach it the axioms, have it execute them, and it understands them the same way we do.
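Here's a minimal sketch in Python of that successor-based construction, just to make it concrete (the function names are mine, purely illustrative):

```python
# A minimal sketch of primitive-recursive addition over the naturals,
# assuming only zero and a successor function as primitives.
# Names like `succ` and `to_int` are illustrative, not from the discussion.

ZERO = None

def succ(n):
    # represent a natural number as nested tuples: succ(n) = (n,)
    return (n,)

def add(a, b):
    # primitive recursion on the second argument:
    # a + 0 = a;  a + succ(b) = succ(a + b)
    if b is ZERO:
        return a
    return succ(add(a, b[0]))

def to_int(n):
    # read the numeral back as an ordinary int by counting nesting depth
    count = 0
    while n is not ZERO:
        n = n[0]
        count += 1
    return count

two = succ(succ(ZERO))
print(to_int(add(two, two)))  # prints 4
```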

I reject this person’s dividing lines as arbitrary, baseless, and downright false.

Searle is another version of Hubert Dreyfus, and indeed the title of the article sounds like a riff on Hubert’s original book, “What Computers Can’t Do”. Searle is really just trying – and failing – to argue against the computational model of the mind. These sorts rebel against the idea that intelligence and consciousness are really just implementation-independent information processes, which brings to mind a discussion we had some time ago about the usefulness – or lack thereof – of philosophers in science.

Let me restate what I was saying in that quote, as follows. The only useful way to assess intelligence is the way we do it all the time, from the same functionalist and behavioral standpoint that we always use with each other – i.e., that if some entity were to achieve intellectual capabilities that we agreed in advance are demonstrative of intelligence, then we acknowledge that it’s intelligent. Those challenges have been met again and again, and guess what happens. The Dreyfuses and the Searles of this world peek under the covers and go, “aha, I see how you did that, and that’s not really intelligence”!

They argue that the results are somehow invalidated because the underlying model is wrong, and according to them the underlying model is wrong because they can understand it – more specifically, and more damningly against the fundamental error in that kind of view, they claim that the underlying model will always be wrong because it’s inherently computational. This is an entirely specious argument and it will just move the goalposts forever. It basically argues that there is an intrinsic limit to the kind of intelligence we can achieve computationally, and that limit, amazingly, is always the thing we haven’t quite achieved yet, and as soon as we achieve it, it becomes something else.

Let me also comment on your statement about “byproduct of a really rad word processor, chess program or mortgage calculator”. Only a few short decades ago no one would have included “chess program” in such a list. The argument indeed would have been that nothing that is merely a glorified calculator could possibly play good chess. Now we lump it in with everything else and some now argue, instead, that nothing made of such electronic components can exhibit “true” intelligence or volition (and that good chess-playing ain’t it). Sure it can, as long as those attributes are functionally defined. All it takes is the right information processes and a computational platform with sufficient capacity. Extremely primitive life forms have neither intelligence nor volition, we do – and yet we’re built from the same components, it’s just that we exhibit a completely different order of complexity. In the same way, computers with enough processing and memory capacity and the right information models can follow that same path and beyond.

Agreed, well said, and thank you.

I don’t think it’s about coming up with arguments to refute or support the claim that computers cannot replicate the human brain.

I think it’s more that the claim that computers can necessarily replicate the human brain – that it’s just a matter of increasing their speed, memory, etc. – which many people just assume is true, is implicitly based on a number of premises. Such as that the brain is a computer (not merely a machine), that a simulation of a phenomenon is necessarily the same as the phenomenon (just within a virtual domain), and so on.

These premises are much more disputed than many people assume.
I would be happy to debate each of those premises, but in terms of answering the OP it’s enough to point out that they are disputed.

Premise 1 : a box made of ordinary matter can be constructed, with the matter inside patterned to mimic the logical functions of the human brain. The actual logic circuits may or may not be equivalent to Turing machines (the logic circuits might be analog, exploit quantum effects, or just be a lot of tiny microcontrollers).

Premise 2 : the pattern of the circuits on that box can be saved as a digital file, even if the circuits themselves have intermediate states that can’t be represented by binary numbers.

Premise 3 : the circuits run thousands or millions of times faster than human brain tissue.

Do you dispute premises 1-3? Whether or not we can grab a billion dollars worth of graphics cards and a bunch of microscopes and scan/simulate a brain doesn’t change the long term possibility of super-human intelligence and being able to copy human brains as if they were mere save files.

Sure. Because as far as we know the human brain (+the neurons which exist outside of the brain, such as in the eye) is just such a box.

I don’t know whether that’s true. It seems we cannot represent a quantum state perfectly in a digital file (or even completely know what the quantum state is). Whether any of that matters, I don’t know.
I wouldn’t accept it as a premise, unless it’s a “let’s say…” premise as part of a hypothetical.

OK

But note that, philosophically at least, you need more than just these premises to conclude we can replicate human minds.

Potato, potato. Whether or not you can copy a specific human mind, you can make a machine that is orders of magnitude smarter than existing living humans, can be backed up and saved as snapshot files, and that can be copied. Even if the copy isn’t a perfect copy, it’s good enough that the new copy can either perform the same tasks that the original was capable of or “re-learn” those tasks.

Philosophical limits don’t have anything to do with the phenomenal power that this technology would have. I take it as the definitive answer to the OP for this thread : even if you can’t exactly replicate the brain of a particular human, you can make machines able to perform that human’s job orders of magnitude faster than he/she could, and those machines would only need to be trained once, would not ever forget a skill (even if you enforce this by reloading from a snapshot frequently), would not die of old age, etc etc etc.

With this technology, whether computers *actually* replicate the human brain or they just fake it doesn’t matter. You cannot measure the difference with any empirical test, and they could hold a conversation and do any task a human can do.

This is one part of the skeptical argument that I can’t grasp. If a simulated mind is a perfect simulation of that mind, then it is a mind; minds are already virtual entities, information entities running on an organic processor. A virtual rainstorm is not a rainstorm, but it is as real as any other virtual phenomenon.

If you replace a human brain with an artificial replacement with the same mass, the same energy requirements and with the same behaviour, while keeping the body unchanged then that artificial replacement would be able to act in the real world in the same way as the original did. There is nothing simulated about the actions of a mind which has access to a real body, whether that mind is running on a bloody mass of cells or a collection of electronic processors.

Yeah, which is why I said I SUSPECT that this will be found out in the near future by some mathematician. It will turn out, I’m thinking, that node networks where the individual units have a lot of extra capabilities/signals they can send (neurons) either can create architectures that can’t be mimicked by transistors, or at least create structures so much more dense than their transistor-circuit counterparts that it’s not a reasonable comparison, i.e. the computer you’d have to build would have to be impossibly big, like continent-sized.

Just a hunch.

Either way, from what we understand and have been talking about here, the brain is way, WAY, WAY more dense than any computers, no?

As for the level of abstraction thing:
I thought that was mostly a programming thing to make it easier for regular humans to program the computer; that the computer doesn’t actually run stuff in that manner. I’ve been told the computer rapidly chooses which instructions to run in groups based on what will be the most efficient, or something to that effect. So the levels of abstraction don’t actually portray what’s going on in the computer computationally.
No?

…What? So can a computer do that or can’t it?
In what I read - at least one article in popular science or something, though it may have been a couple of magazines - discussing this entire subject, the author was saying that transistors depend on exact precision in their functioning. That is, they never misfire, which is the same as saying there’s never any “noise”. He was saying that scaling computers down, so that they can be denser, would mean the circuits would have to use less and less power, but the problem is that lower-power circuits have more of this noise, and more misfires. He even mentioned a time when there was a rare event where some guy was using an ATM and his account suddenly had like a billion dollars in it, and he said that experts concluded it was one of those rare transistor misfires.
Then he went on to say that the brain works the other way around: neurons misfire all the time, and can even recover from complete failure. People getting hit in the head, people dying for 1-2 minutes, people getting seizures - these are things where the brain essentially completely turns off for a moment, and yet magically restarts, and computer systems aren’t a good comparison because they just store whatever’s going on in solid-state memory, if such a back-up system is present, whereas the brain doesn’t seem to have this back-up system. Psychiatric disorders also seem to confirm this, he said, like people with manic-depression or OCD, where certain nerves in the brain over-fire or under-fire, and yet the brain as a whole still manages to maintain a human being with a personality, however comparatively flawed. Compare that to computers, where a similar screw-up, or “manufacturing defect” (hehe), would likely mean the whole thing doesn’t work.

This is just what I’ve read.

And that whole thing means that the brain is wired differently and works totally differently from computers like we have now. Which I guess is what we’ve both said here. It’s just I have the hunch about neurons I mentioned above.

FINA-FUCKING-LEE!!!

I’ve looked for the article/study I mentioned before, but never found it. I myself had read it in Popular Mechanics or Science, or possibly the Science & Tech section of an issue of The Economist, but here’s an internet link on CNet:

http://www.cnet.com/news/human-brain-has-more-switches-than-all-computers-on-earth/
“One synapse, by itself, is more like a microprocessor–with both memory-storage and information-processing elements–than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth.” -QUOTE FROM ARTICLE

This is what I was saying, this is what I had read. They had discovered, using new technology, that in a tiny space they couldn’t see before, it turned out there were a thousand switches. That by itself multiplied the density/complexity of the brain so much compared to what we thought before that it proved there are more switches in a human brain than all the computer electronics in the world put together.

And that’s just what we’ve found so far… (cue dramatic science ditty)

The only premise that is disputed at all is the computational theory of the mind, but it has widespread – if not universal – support in cognitive science.

No.

First of all, the level of abstraction concept was introduced in order to explain to you that you’re focused on completely the wrong level of abstraction of this discussion, as I already tried to explain. The implementation of a computational platform is many, many, many levels removed from the information processes that run on it. The platform is fundamentally irrelevant as long as it has the necessary capacity. That’s why studies of how the brain works at a physical level, while interesting to neurophysiologists, have absolutely no relevance to AI. AI has commonalities with cognitive science, not neurophysiology.

Second, levels of abstraction define how computational processes are structured, so they have a direct bearing on the process architecture. They provide formal descriptions of process interfaces and dependencies, making them modular and independently implementable – for instance, on different processors or entirely different platforms which communicate with each other.

This entire obsession with how computers work and how brains work is completely misguided since the whole problem space is really about information theory, not computer engineering.

I’m involved in microprocessor design, and your understanding is quite flawed. First, I can’t conceive of something a neuron can do which couldn’t be implemented by the proper logic network. We already know how to do multi-valued logic. Things which are essentially analog, like radio transmitters, are being designed using digital logic because analog circuitry on a chip doesn’t scale as well.
You’ll have to define density. Brains are more dense in some respects, transistors in others. The complexity of wiring in a brain is a lot greater in my understanding.
Don’t worry about storage. Just one of the disks I use to store my data has more storage than there was in the world when I started programming.

At the basic level it is transistors switching. Level of abstraction has to do in large part with design. 40 years ago, when design was harder due to more hardware constraints and fewer tools, a lot of computers were designed with microprograms. The hardware implemented a simpler microarchitecture, and you wrote microprograms on top of that architecture to implement the instruction set. (One company implemented a machine with nanoprograms under the microprograms.) And operating systems implement calls so that regular programmers don’t have to worry about the details, and programmers implement programs with their own set of instructions (like Excel) so users don’t have to worry about the code.
To a certain extent if you are calling an OS utility instead of implementing the function inline you can see the levels of abstraction in the code flow.
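To make the layering concrete in software terms, here's a rough Python sketch (purely illustrative - the exact layers below the library call vary by OS and machine); each level hides the one below it:

```python
import os
import shutil

# Highest level: a library call; the programmer never sees the loop below.
def copy_high(src, dst):
    shutil.copyfile(src, dst)

# One level down: an explicit read/write loop over buffered file objects.
def copy_mid(src, dst):
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(64 * 1024):
            fout.write(chunk)

# Lower still: raw file descriptors and system calls; below this sit the
# kernel, the device driver, the disk controller, and eventually transistors.
def copy_low(src, dst):
    fd_in = os.open(src, os.O_RDONLY)
    fd_out = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        while chunk := os.read(fd_in, 64 * 1024):
            os.write(fd_out, chunk)
    finally:
        os.close(fd_in)
        os.close(fd_out)
```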

The person who wrote that knows nothing.

If transistors never misfired, I’d be out of a job. Depending on the complexity, a large number of chips never make it off the wafer. There are various wearout mechanisms affecting reliability. Crosstalk - noise between two signal lines - can be a big problem if the proper design rules are not followed. And there is something called voltage droop. If you have too many transistors switching at the same time, your nice one-volt power supply doesn’t look like one volt any more.
Students learning logic design learn about 1s and 0s. There are no such things inside a chip any more.
And then we can get to timing …

And there are many circuits inside a chip checking to make sure that the functional circuits returned a reasonable answer - and taking action if they didn’t.
And while we can survive with some damaged neurons, so can computers. Memories inside chips are so big that they seldom get made 100% correctly. They have spare rows and columns which get swapped in for failing ones either right after manufacturing or even in the field.
The cool thing is that brains have evolved many of these capabilities, implemented differently of course.

I agree. And my guess is we’re not terribly far from being able to do all that.
I’m just trying to respond to the specific question of the OP.

I’m also excited by the notion of there being different kinds of intelligence; not only “like humans + smarter”.

Well, you’re essentially just assuming from the outset that the brain is a computer, and using terminology implying a computer running a mind “program”. We don’t know whether that’s a useful model.
Before computers, we couldn’t even conceive of a model of how a mind might work. But since then I think there has been too much of a temptation to say “Well there we are; it’s a computer!”. It’s still very hard to explain even concepts like memory storage right now, and if the brain were really that close to a computer, that should be relatively straightforward.

Well firstly, how accurate is the simulation?

Until relatively recently, the model of how synapses worked was fairly straightforward: there are a number of dendrite inputs to a neuron, each with its own weighting, and if the sum of the firing inputs, each multiplied by its weighting, was over a threshold, the neuron fired.
We now know it’s not that simple, and furthermore, it’s not just neurons that we need to be concerned about but for example glial cells have a role in cognition that we had not appreciated before.
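For reference, a minimal sketch of that older weighted-sum-and-threshold picture (the specific weights, inputs and threshold are made up, purely for illustration):

```python
# Minimal sketch of the simple weighted-sum-and-threshold neuron model
# described above. The numbers are invented for illustration only.

def neuron_fires(inputs, weights, threshold):
    # each input is 1 (firing) or 0 (silent); weights may be
    # excitatory (positive) or inhibitory (negative)
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation >= threshold

inputs = [1, 0, 1, 1]
weights = [0.5, 0.8, -0.3, 0.6]
print(neuron_fires(inputs, weights, threshold=0.7))  # True: 0.5 - 0.3 + 0.6 = 0.8 >= 0.7
```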

If I had created a simulation of a brain that was just based on our simpler model of synapses, and didn’t consider the contribution of glial cells, would that be a mind?
How low-level do we need to go to have a complete simulation, and how close are we to that level? I don’t know.

ohhhhhh, are you saying that the computer actually DOES implement things in the order of the programming levels of abstraction? Like the higher-level program calls a lower-level program, which calls an even lower-level program, and so on? Really? What about the whole thing where the computer sets the order of instructions to carry out based on efficiency? Does it do both these things?

I dispute that summary. I think at the very least the claims that I mentioned are very much disputed.

Possibly the exception is within the field of computational neuroscience where the majority (but nowhere near universally) do seem to assume strong AI; but frankly you’d expect a slight bias that way in a field that is most powerful and relevant if strong AI is true.

Yes to all. Sometimes it’s in order, sometimes it’s out of order; both happen.

Gating analog signals may be no problem, but how do you store an analog value regeneratively? It’s straightforward to amplify the output of a binary cell as a “large signal” and restore the 0 or 1 as appropriate. But can you do that with multi-leveled signals using a single wire and a single cell? (My question isn’t rhetorical; I don’t know the answer.) I’ll guess that easiest would be to use binary logic and store synapse values in counters, with learning signals sending increment/decrement signals to the counters. Those counters wouldn’t need to be reliable.
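Something like this toy sketch is what I’m imagining - one saturating up/down counter per synapse, nudged by learning signals (the 8-bit width and the names are just guesses for illustration):

```python
# Toy sketch of a synapse weight stored as a saturating up/down counter,
# as speculated above. The 8-bit width and the increment/decrement
# "learning" signals are assumptions for illustration only.

class SynapseCounter:
    def __init__(self, bits=8, value=0):
        self.max_value = (1 << bits) - 1   # e.g. 0..255 for an 8-bit counter
        self.value = value

    def increment(self):
        # strengthen the synapse, saturating at the maximum
        self.value = min(self.value + 1, self.max_value)

    def decrement(self):
        # weaken the synapse, saturating at zero
        self.value = max(self.value - 1, 0)

w = SynapseCounter(bits=8, value=100)
w.increment()   # a learning pulse
w.decrement()
print(w.value)  # 100
```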

What range of values would be needed for a synapse? (I.e., how wide the counters?) The human brain has 100 trillion neuron-neuron connections or thereabouts, with each having a broad range of possible values, but there’s tremendous redundancy and unreliability and, anyway, our brain stores vast experiential memories that might be unnecessary to pass a Turing test.

(The 100 trillion connections may seem intimidating, but, for raw processing independent of memory, an electronic neuron would be thousands of times faster, so fewer would be needed by a factor of thousands. The resulting requirement would not be far-fetched, given present-day densities.)
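Putting rough numbers on that (the speed-up factor and the per-chip transistor count below are my assumptions, not measurements):

```python
# Back-of-the-envelope arithmetic for the point above; all inputs are
# assumptions for illustration only.
connections = 100e12          # ~100 trillion neuron-neuron connections
assumed_speedup = 1_000       # "thousands of times faster" per electronic neuron
effective_units = connections / assumed_speedup   # ~1e11 units needed
transistors_per_chip = 10e9   # rough order of magnitude for a large modern chip
print(effective_units)                         # 1e11
print(effective_units / transistors_per_chip)  # roughly ten chips' worth of devices
```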

A very important point. Going back to the rainstorm analogy, we could create a virtual rainstorm using digital Lego™, but it wouldn’t be very accurate.

We might reach the level of being able to model the brain/body system with digital Lego in the next few decades, but that won’t be anywhere near accurate enough to allow realistic reproductions of individual humans. I doubt that anyone alive today will live to see realistic mind-copies; but I also doubt that it is impossible, and it could happen eventually.

By that time the Lego-minds will probably have proved themselves to be so useful that mind-copies will be an irrelevant side-show.

As voyager was hinting - all modern digital logic is designed at the analog level. If you want it to go fast and/or not chew massive amounts of power, and be cost competitive, you have no choice. The simplistic idea of 1s and 0s just isn’t how it happens. That is an abstraction that is given to the higher-level logic designers. But the guys at the coal face, the ones actually making these things work, deal with analog systems, and wrestle with them to make the binary abstraction stable enough to work.

Current modern computer systems are designed out of binary logic, not due to any fundamental limitation, but because engineering them this way turns out to be the best way of coping with a wide range of competing constraints. Early digital computers were not necessarily binary, with ternary logic being tried, amongst others. There remain some interesting possible advantages in ternary even now.

Nobody has mentioned analog computers until now. Before the advent of cheap digital computers every engineering lab would have an analog computer. No problem at all representing values to quite reasonable precision.

Storage of a multi-level value can be trivially managed - and you almost certainly already own a device that stores millions of analog values on a chip, and manages the transfer of those values between storage cells. I’m referring to the CCD sensor in a digital camera. Each pixel holds a voltage proportional to the light that fell on the photosensitive part of the cell. When the exposure is done these voltages are transferred cell by cell across the face of the chip - and fed to an amplifier and digital encoder when they reach the edge. CCDs can hold the image for quite some time before readout. There was a time when CCDs were first invented that people imagined they could form the basis of all sorts of interesting circuits. The fundamental storage element - basically a capacitor - is trivial. Similar capacitors form the basis of dynamic memory in all modern computers.

It isn’t a big deal to imagine taking a variant of modern digital chip technology and making a multi-value logic system from it. The question is not whether it is feasible, but whether it will result in something superior to implementing the same functionality in conventional digital logic, where the different values are represented by aggregating bits. If your multi-value system can reliably represent 8 discrete levels, I only need three bits to do the same job. Freed of the constraint to maintain those 8 values, and needing to maintain only 2, my final design might be smaller, faster and cheaper, even if it needs three times as many components. Overall this has been the experience for the last 60 years of computer design. It could change in the future, but there is little concrete on the horizon that portends such a change.

About now conversations tend to need my standard tirade about information. There is always someone who will pipe up with the notion that analog signals contain infinite information, compared to the limited number of digital levels. This is fundamentally not true. Our universe is noisy. There is nowhere you can go, and no technology that can be applied, that can remove all the noise. When there is noise there is uncertainty in the signal, and the range of values that can be usefully represented is intrinsically limited. This rather neatly gets us back to Shannon.

People get rather surprised that the information that can be represented in an analog signal is measured in bits, and that it may contain a non-integer number of bits. This measure allows us to directly compare the information content of signals, no matter whether they are digital or analog. (And it is worth adding the note that “analog” is a travesty of a word. What we colloquially call analog systems are more properly termed “continuous”. Digital systems are analogues just as much as any other. But the usage has become commonplace and there is no point fighting.)
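To make the “measured in bits” point concrete: the standard Shannon result for an additive-white-Gaussian-noise channel gives a per-sample capacity of 0.5·log2(1 + SNR) bits - finite, and usually not an integer (the SNR figures below are arbitrary examples):

```python
# Illustration of the point above: the information a noisy "analog" sample
# can carry is finite and measured in bits (Shannon capacity of an
# additive white Gaussian noise channel). The SNR values are arbitrary examples.
import math

def bits_per_sample(snr):
    # capacity per sample of an AWGN channel: 0.5 * log2(1 + SNR)
    return 0.5 * math.log2(1 + snr)

for snr in (1, 10, 100, 1000):
    print(snr, round(bits_per_sample(snr), 2))
# e.g. SNR 100 -> about 3.33 bits per sample: finite, and not an integer
```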

The point for the OP. There is a well defined equivalence between all the different implementations you can imagine for a thinking brain. Anything from numerical simulation of the electrochemistry of a real brain, to simulation of the neuron/synapse abstraction in a range of manners - conventional computer programs, or bespoke computer systems also running code to simulate the abstraction - down to custom digital or analog circuits that directly represent the abstraction. The choice is a matter of engineering tradeoffs, nothing more. Some ways get results quicker, some leverage existing experience and technologies to good effect, others may actually turn out to be superior in the long run but may require serious investment to overcome the technological lead conventional technologies enjoy.

Eventually all of this is arguing about the possibility of creating an existence proof of an artificial thinking brain. We assume that a silicon brain that has been engineered to exactly mimic a human brain, and that then exhibits the same external behaviour as a wet-ware brain, is a very good indicator that the silicon brain has captured everything needed to be a self-aware thinking being. Since we have not engineered it with modern AI “tricks” and have only copied the innate structures we found in a real brain, we can avoid claims of “tricks” designed to blindly mimic but not actually think. With such an existence proof we can dismiss the ghost in the machine, and look towards more interesting, non-human minds.

Agreed.

And as I said upthread (I think…I might have deleted it before I posted), I also think that different kinds of intelligence would be useful.
So it may not be a simple spectrum of “more stupid than humans”, “human-level” and “smarter than humans”. Maybe once we start to model cognition better we could create minds that think in a radically different way to us.

But in any case, all the proto-minds will likely be a footnote in history. I think the gap between making any kind of general intelligence and being able to make one that wipes the floor with us in most domains will be short.

I wonder about the whole “orders of magnitude smarter than humans” thing. “Smart” is a set of tools specific to an environment and context, just like any other attribute like speed of movement. Frequently an increase in ability in one direction runs counter to some corresponding capability within that same environment/context.

When it comes to “smarts”, it’s entirely possible that an increase in one capability must correspond to a decrease in some other capability due to the fact that there is only one result allowed, one set of actions. Even if the brain could calculate an optimal result for all different styles of thought and contexts, one must win out to drive action.

There is no single “optimal” course of action across time measured across all goals and contexts; many of them are in direct contradiction to each other.