Is there a credible argument that computers cannot replicate the human brain?

**Chronos** is making some of my points exactly.

Searle’s focus on “intentionality” is essentially him adding a ghost to his machine. On the one hand he says we’re machines & can think, but on the other hand a programmable digital computer can’t, not ever, no matter the program. Because a digital program lacks “intentionality”, whatever sort of nouveau phlogiston that is.

By analogy: “A molecule can’t do arithmetic. Therefore a digital computer made of molecules can’t do arithmetic.” That’s clearly bunk.

Searle’s (and my) thought experiment fails by implicitly dismissing the possibility that higher levels of abstraction produce more powerful behaviors.

I’d suggest that it’d be impossible for any computer that’s like our modern computer (that is, like an advanced abacus or Turing tape machine thingy) to actually converse intelligently and with understanding using nothing but a rote, responsive program.

Whatever the machine does, its circuitry would have to be wired such that it doesn’t just respond; it creates concepts and understands, like our brain does.

That is, I deny that a big enough normal computer, wired and programmed as we currently do, could ever have the understanding of a human, and consequently it would be conversationally limited. That is, a tenacious enough asshole could, given enough time, show that the computer doesn’t understand things, no matter how many computer brains there are.

The question from there is whether “understanding” machines that create concepts and abstractions could be built from normal circuitry.

I think the answer to that question is we just don’t know. We know the brain is wired differently, but presumably it may be possible to run lots of transistor circuits simultaneously, sort of like the brain does. Actually… isn’t that impossible with transistor circuits? From what I understand about neurons, one of the most important things is that the different circuits can interrupt and talk to each other, both of which are impossible with transistors, no?

Again, I bet certain elements of neurons’ functioning allow for certain key architectures that are impossible with transistors.

Let me add another factoid about brains to the discussion:

Apparently, human memory is actually limitless. I remember Michio Kaku (or however you spell his name) talking, I think on The Colbert Report, about a certain experiment where they implant an electrode in a certain part of the brain and stimulate it, and they can get anyone to have perfect memory. You ask people in this state what they ate for breakfast on July 5, 1983 and they give you a correct answer, stuff like that. They are apparently also hobbled in other cognitive functions.

I always suspected human memory is limitless or effectively limitless. At first one might think that a machine with a finite set of elements could only hold a finite set of information, but that’s thinking in terms of the way a computer remembers. I’m betting (and research shows to some extent) that human memory is some kind of weird playback loop. Of course, it’s largely imperfect, which explains false memories and forgetting, but this flawed system also has the advantage of being limitless. A good trade-off, evolution evidently decided.

I’m also betting that within the next 40 years some mathematician will prove that such a limitless memory system is possible from finite systems that have certain basic circuit capabilities like the brain’s neurons do (that transistors don’t have - again, interruption and cross-talk).

Just chiming in to add that I just read the (excellent IMO) scifi novel Blindsight by Peter Watts, available to read online for free at his website here. It delves into the nature of sentience and intelligence, and how one might well have intelligence without self-awareness, and what that might look like. There’s a bibliography and references at the end of the novel that are worth perusing as well. The novel raises the question that since so much of what the brain does is unconscious, and so many internal processes are inaccessible to us, is self-awareness really necessary? After all, sleepwalking people can engage in complex, goal-directed behavior, and much of our own waking behavior happens without our direct attention and control. We initiate motor actions before our conscious mind has had time to form a plan, and yet the conscious mind tells itself that it initiated the movement. In many respects, consciousness seems to be a time-delayed summary report of the actions taken by the various hidden unconscious modules of our brains, with some sort of confabulation laid over it resulting in the illusion that the modules are all working under some central control.

I see no reason to believe that CS guys are going to stumble upon a method of creating true self-aware AI until the neuroscientists have figured out how sentience works in our brains. I doubt that there will ever be a way to create an AI that doesn’t look like growing a brain in a vat and giving it a childhood of some sort. You may attempt to simulate a human brain at the molecular level, but why pour the resources into calculating how the proteins are going to fold when you can just reel an amino acid chain into the appropriate medium and let physics do the work for you?

The point about a Turing machine is that it embodies in one particular form everything we know to be needed to perform any computation. Sure, it is as hokey as you can imagine, but the point is that you can reason mathematically about its capabilities. You can show that no other computer can do something that a Turing machine cannot. Indeed, because a Turing machine has infinite storage, and has no limit on the speed it can execute at, if you program the rules of QED into it, and then feed it the configuration of every atom in a wet-ware brain, it will exactly calculate what the wet-ware brain will do in terms of electro-chemical functioning.

It isn’t a matter of wiring. The wiring is just an implementation issue. Any Turing-equivalent machine can emulate any other Turing-complete machine.

That is the jump: you added “programmed as we currently do”. That is the whole point. We don’t currently know how to program anything to think. It isn’t a matter of what hardware we have. We have no idea how a real brain thinks. But the OP asks if there is an intrinsic limitation. So, will we ever know how? Is there something about the way a real brain thinks that we can never understand? If so, why? If we do eventually gain that knowledge, what might intrinsically prevent that knowledge from being codified and represented in a computer program? And if the computer system we ran this codified representation on were fast enough, what would prevent us from being able to hold a conversation with it?

No. Transistors are building blocks at a much lower level than neurons. You might model a synapse with a couple of transistors, but not an entire neuron. That is, if you want to model the brain as an analog circuit, and not emulate it in a computer or build a bespoke digital system that directly emulates neurons.

Again, no. This suggests a fundamental lack of knowledge of what transistors are, how they work, and similarly a lack of knowledge of the mechanisms that we do understand about neurons. Neurons are acknowledged to be reasonably complex devices. They are like macro blocks in digital design. Nobody designs a complex digital device out of individual transistors, or even gates. They are designed with building blocks that provide useful complex behaviour: multiplexors, registers, shifters, and the like. Building them out of transistors is banging the rocks together in comparison. (There are optimisation programs that can be used to squeeze things down once a design is done, but that is a different question.)
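To make the building-block point concrete in software terms, here is a throwaway sketch (the function names are mine, not from any real design tool, and real designs would use an HDL such as Verilog): a 2-to-1 multiplexer is described once at the NAND-gate level, and from then on it is reused as an opaque block, the same way designers reuse multiplexors, registers, and shifters without ever thinking about transistors.

```python
# Sketch only: "macro blocks" in digital design. The gate-level detail of a
# 2-to-1 multiplexer is written once, then reused as an opaque building block.

def nand(a, b):
    """A NAND gate on 0/1 values."""
    return 0 if (a and b) else 1

def mux2(sel, a, b):
    """2-to-1 multiplexer built from four NAND gates: returns a if sel is 0, else b."""
    not_sel = nand(sel, sel)
    return nand(nand(a, not_sel), nand(b, sel))

def mux4(sel1, sel0, a, b, c, d):
    """4-to-1 multiplexer composed from 2-to-1 blocks, never touching gates again."""
    return mux2(sel1, mux2(sel0, a, b), mux2(sel0, c, d))

assert mux4(0, 0, 1, 0, 0, 0) == 1   # sel = 00 picks input a
assert mux4(1, 1, 0, 0, 0, 1) == 1   # sel = 11 picks input d
```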

To get an idea of what is currently being done you could do worse than look at the IBM TrueNorth project.

What is important is that the chip used in the system works in the same manner as most other digital systems - it has processing cores, memory, and interconnects. However, they are optimised in such a way that they can be programmed to provide a system that looks like 1 million neurons and 256 million synapses. It has 5.4 billion transistors to do so, and this functionality is built from an on-chip network of 4,096 neurosynaptic cores (processors). Your average x86 chip has 4 processors, but also uses billions of transistors. You could write a program to make the x86 emulate the TrueNorth chip. However, it would be vastly slower, and chew vastly more power. Which is why IBM’s work is so interesting. But the difference is one of engineering, not fundamentals.
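As a deliberately crude illustration of the “emulate it on ordinary hardware, just more slowly” point, here is a toy leaky integrate-and-fire model written as plain sequential code. This is not the TrueNorth architecture, and every number in it is invented; it only shows that simulating spiking neurons on a conventional processor is an engineering exercise rather than a conceptual one.

```python
# Toy spiking-neuron emulation on a conventional CPU. All parameters invented;
# this is an illustration, not IBM's design.
import random

random.seed(0)

class ToyCore:
    """A made-up 'neurosynaptic core': n neurons and an n x n synaptic weight matrix."""
    def __init__(self, n, threshold=0.5, leak=0.02):
        self.n = n
        self.threshold = threshold
        self.leak = leak
        self.potential = [0.0] * n
        # Sparse random excitatory weights, purely for illustration.
        self.weights = [[random.uniform(0.0, 0.4) if random.random() < 0.2 else 0.0
                         for _ in range(n)] for _ in range(n)]

    def step(self, spikes_in):
        """One tick: integrate incoming spikes, apply leak, fire and reset."""
        spikes_out = []
        for j in range(self.n):
            self.potential[j] += sum(self.weights[i][j] for i in spikes_in)
            self.potential[j] = max(0.0, self.potential[j] - self.leak)
            if self.potential[j] >= self.threshold:
                spikes_out.append(j)
                self.potential[j] = 0.0
        return spikes_out

core = ToyCore(256)
spikes = list(range(8))                  # inject a burst of external spikes
for tick in range(10):
    spikes = core.step(spikes)
    print(f"tick {tick}: {len(spikes)} neurons fired")   # activity spreads or dies out
```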

It seems to me that there’s a basic flawed assumption in the OP’s question, though it may be the way I’m reading it. The title asks whether we can replicate (or simulate, which amounts to the same thing) the human brain, but the first sentence then seems to ask about intelligence. The two are not at all the same thing.

Simulating the brain may be a useful technique for all kinds of purposes, most notably for better understanding of how it functions, but it has little to no practical utility in building the necessary computational platforms for AI. The challenges of AI really have nothing to do with the computational platform and are all about modeling representations of knowledge. And we can already do that pretty well in many constrained domains, like chess playing and IBM’s Watson prototype which is evolving into commercial spin-offs. Watson is particularly interesting because its knowledge domain is really very general – what is constrained is the method of interaction.

AI isn’t some magic threshold that we have yet to reach – it’s a continuum, and we’ve made long and impressive strides since the 60s when Hubert Dreyfus claimed a computer would never be able to play a decent game of chess because a computer fundamentally cannot “think”. Every time some wit sets a silly goalpost in the ground proclaiming the next thing that computers will “never” be able to do, they do it. And so it goes.

But the fallacy is to believe that in order to do these things, or in order to be truly intelligent, computers somehow have to “work” like the human brain. That’s utter nonsense, complete nonsense of Dreyfusian proportions. It’s like saying that in order to be able to fly, we have to figure out how to sew wings to our arms and learn to flap really hard. No, we don’t. That’s just how birds happened to evolve. We can do a lot better with aluminum and carbon fiber and jet engines. And so with AI. With the right representational and computational models we can do a lot better than the human brain.

Lordy I hate auto-correct. :mad:

Part of the ticker-tape Turing machine is that it does the tape thingy infinitely fast?

I think that’s kind of what I said/am saying

No… to my question? Are you saying it is possible for transistor circuits to interrupt each other and talk to each other directly? It’s my understanding that this is actually impossible and perhaps only accomplished in a roundabout way.

I mean, my understanding of neurons is that functionally they’re capable of a lot. They can interrupt each other, they can interrupt other circuits. Or are the details not even understood? This is something I’ve not yet learned about, if they even know about it - like, I know the basic diagram where a signal arrives at a synapse, which triggers an impulse that travels to the main body, and then a signal is sent out of one of the multiple outputs. But does a neuron always send out a signal when it receives a signal? Does it send a signal out of only one of its outputs, or can it choose to do multiple? Do scientists even know this stuff yet?
One thing I was trying to say is that if there are certain functions that neurons can perform that transistors can’t, in terms of sending different kinds of signals, affecting each other, and storing information, then I suspect it’s possible that it’s impossible for transistors to emulate certain brain circuits, no matter how many more transistors you use. That is, it isn’t just an issue of power and density. Like I said, I need to know more about how neurons work, and am looking forward to you telling us, if you know those details.
Indeed, though, the thingy I mentioned about the brain dealing with noise, frequent mistakes in signal-sending, and how it can even start back up after stopping (by getting hit in the head) is, from what I understand, something that transistors can’t even do. Or is it possible that this new TrueNorth brain-like chip might eventually be able to deal with stuff like that, just with smart engineering of how the transistors are put together?

Emotion and idiosyncrasy.

A single neuron is more complex than a single transistor. So your question is ill-posed.

I believe Francis Vaughn was saying that we can build a circuit, perhaps containing a few dozen or a few hundred transistors, that performs *all* the functionality of one neuron. And then you build your “brain” out of those neuron-like modules.
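As a software analogue of that idea (the class and its numbers below are made up, not a claim about what the real circuitry would contain), the point is just that whatever happens inside the module sits behind a “spikes in, spikes out” interface, and you compose copies of it.

```python
# A made-up stand-in for a "neuron-like module": the rest of the design only
# ever sees its interface, never its internals.

class NeuronModule:
    def __init__(self, threshold=2.0, leak=0.1):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak
        self.targets = []                 # downstream modules this one drives

    def connect(self, other):
        self.targets.append(other)

    def receive(self, strength=1.0):
        """Accumulate input; fire and propagate when the threshold is crossed."""
        self.potential = max(0.0, self.potential - self.leak) + strength
        if self.potential >= self.threshold:
            self.potential = 0.0
            for t in self.targets:
                t.receive()

# Wire three modules into a chain and stimulate the first one.
a, b, c = NeuronModule(), NeuronModule(), NeuronModule()
a.connect(b)
b.connect(c)
for _ in range(4):
    a.receive()   # after enough input, a fires into b, which may eventually fire into c
```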

The hardest part about reasoning about any computational system is keeping the layers of abstraction straight. In an ordinary app running on an ordinary PC there are dozens, if not a hundred, layers of abstraction between the UI and the electrons.

In the brain it’s probably more like 500 layers of abstraction.

You, any you, cannot correctly make a statement about the capabilities or limitations of abstraction layer #364 and apply your conclusion to any layer other than that single one.

What? What’s a layer of abstraction?


```
result = bottomLayer(
  bottomishLayer(
    middleBottomLayer(
      middleLayer(
        middleTopLayer(
          toppishLayer(
            topLayer(input)
          )
        )
      )
    )
  )
)
```

It’s the fundamental thing that you’re not understanding that is causing you to focus on completely the wrong layer: namely, that the physical implementation of the computational platform has absolutely nothing to do with its potential for intelligent behavior. See my post #66 and specifically the bird analogy about flight.

Layers of abstraction (or levels of abstraction) are the classic formalism for managing complexity. Each layer performs a well-defined function whose details are unknown and irrelevant to the layers above it and below it, because the layers communicate only through well-defined interfaces. All the layers together perform a task of potentially much greater aggregate complexity than could otherwise be managed, and provide clean modularity for changing their implementations. A good basic example of layers of abstraction is a network protocol stack.
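A throwaway sketch of the same idea in code (layer names and framing rules invented for the example, loosely patterned on a network stack): each function knows only the interface of the layer directly below it, and the top layer never sees bits on a wire.

```python
# Toy layered stack. Each layer calls only the layer below it, through a fixed interface.

def physical_send(bits):
    """Bottom layer: pretend to put raw bits on a wire."""
    print("on the wire:", bits)

def link_send(payload):
    """Link layer: wrap the payload in frame markers, hand it down."""
    frame = b"\x7e" + payload + b"\x7e"
    physical_send(frame.hex())

def transport_send(message):
    """Transport layer: prepend a length header, hand it down."""
    segment = len(message).to_bytes(2, "big") + message
    link_send(segment)

def application_send(text):
    """Top layer: deals only in text, knows nothing about frames or headers."""
    transport_send(text.encode("utf-8"))

application_send("hello")   # prints the hex of a framed, length-prefixed payload
```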

Not infinitely fast - that is a meaningless term. But a Turing machine is a mathematical abstraction. There is nothing in its design that places any constraint on its speed. The whole point about a Turing machine is that you can reason mathematically about its capabilities. Not that you would ever build one and use it to compute with. Indeed it is probably about the slowest architecture that you could come up with. But there is a well developed mathematical theory about what can be achieved with different kinds of automata. And theory about the equivalence between different designs - so that you can show that if automata design A is equivalent to design B - they can perform the same computational task. The main automata are: finite state machines, push down automata, and Turing machines.
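For what it’s worth, a Turing machine is simple enough that a toy simulator fits in a dozen lines. The encoding below is my own ad hoc one, not anyone’s canonical definition; the “infinite” tape is faked with a dictionary that defaults to blanks.

```python
# Minimal Turing machine simulator. The rules map
# (state, symbol) -> (symbol to write, head move, next state).
from collections import defaultdict

def run_turing_machine(rules, tape_input, start="S", halt="HALT", max_steps=1000):
    tape = defaultdict(lambda: "_", enumerate(tape_input))   # "infinite" tape of blanks
    state, head = start, 0
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A tiny machine that walks right, flipping 0 <-> 1, and halts on a blank cell.
flip = {
    ("S", "0"): ("1", "R", "S"),
    ("S", "1"): ("0", "R", "S"),
    ("S", "_"): ("_", "R", "HALT"),
}
print(run_turing_machine(flip, "010011"))   # prints 101100_ (the trailing blank it read)
```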

When I say we don’t know how to program anything to think - I’m including brains made out of neurons and synapses. If I give you a construction kit that allows you to take an arbitrary number of neurons and construct an arbitrary brain, with control of every interconnection, and even the ability to pre-load the state of each neuron, we have close to zero understanding of how to do this, or how the thing thinks.

What do you mean by interrupt? Computers have specialised interrupt control systems. Closer to your question, you might like to contemplate single-wire control systems. You can make a system where an arbitrary number of machines are connected by a single wire, and they use that wire to control one another and cooperatively maintain operation together. A transistor is a tiny, tiny building block. Nobody is saying a single transistor can act like a neuron. It’s like comparing a brick with a tree.
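A rough sketch of the single-wire idea (entirely invented, and ignoring all the timing and arbitration a real bus such as 1-Wire or I2C needs): any device can pull the shared line low, and every other device sees that and can react to it.

```python
# Toy shared single-wire bus: the line idles high, any device may pull it low,
# and all devices see the combined result.

class Bus:
    def __init__(self):
        self.devices = []

    def level(self):
        return 0 if any(d.pulling_low for d in self.devices) else 1

class Device:
    def __init__(self, bus, name, wants_attention=False):
        self.bus = bus
        self.name = name
        self.pulling_low = wants_attention   # a device "interrupts" by pulling the line low
        bus.devices.append(self)

    def poll(self):
        if self.bus.level() == 0 and not self.pulling_low:
            print(f"{self.name}: someone pulled the line low, reacting")

bus = Bus()
a = Device(bus, "A")
b = Device(bus, "B", wants_attention=True)   # B signals on the shared wire
c = Device(bus, "C")
for d in (a, b, c):
    d.poll()   # A and C notice the line is low and respond
```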

Fundamentally, there is currently no known reason why this is true.

Roger Penrose was trying to come up with reasons why this could be true, but it was clear he was putting the cart before the horse: he fundamentally could not accept the idea that a neuron could be so described, so he spent a lot of effort trying to find a way to show that it could not. There is no evidence that any of his ideas are true. But the door is open - maybe there is a grain of truth there.

Again, you are at the wrong level of abstraction. You are using the word transistor the way one might use the word synapse. Synapses can’t do what you describe either. (Note: building a computer, or other electronic circuits, that can survive power glitches or being stopped for some time is trivial. The reason we mostly don’t is that it adds cost to no useful end.) Further, this has nothing to do with the basic question.

In all likelihood the answer is yes.

I’m calling BS on this. If such a thing were possible, it would be used all the time, and everyone would know about it.

An illustration: You’re currently sitting in front of a computer, and interacting with it in various ways: You’re moving a mouse around, tapping keys on the keyboard, and so on, and in response, what’s displayed on your monitor is changing in various ways. In particular, right now you’re using a web browser. The web browser has some rules that you’ve become familiar with: Links are underlined and blue, when you click them they turn purple, you can move around the area of the page that’s shown using a scrollbar, there’s a button that takes you back to the previous page, and so on.

That web browser was written by a programmer in some programming language, perhaps some variant of C. But you don’t need to know anything about C programming in order to use the web browser, and in fact you probably don’t. And the rules of the web browser are not the rules of C: Some programmer had to create those rules. But C has its own rules, that the programmer had to know (but you don’t).

But the C program isn’t the bottom level of abstraction, either. Once the programmer wrote the program in C, he had to turn it into machine code, that a computer could understand and execute. The C programmer probably doesn’t understand machine code, and he doesn’t need to, as long as he understands C and has a compiler.

Of course, somewhere along the line, someone needed to understand the machine code, in order to create that compiler. But that’s not the lowest level, either: When the computer executes machine code, it’s actually just feeding “true” and “false” signals through a set of logic gates. And the logic gates aren’t the lowest level, either: They’re made up of transistors connected together. And the transistors are made up of atoms which interact according to the laws of electrodynamics.

Every one of these, the web browser you use, the C programming language, the machine code, the logic gates, the transistors, and the laws of electrodynamics, is a different level of abstraction. And ultimately, in order for you to sit there posting to this thread, someone, somewhere, had to understand each of those levels. But it’s different people for each level, and most of them don’t understand most of the other levels, above or below them. Nor is that even all of the levels: I skipped over a bunch of detail, and there are many layers in between.
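You can even watch two adjacent levels from inside Python itself: the line of source you write is one level of abstraction, and the bytecode the interpreter actually executes is the level below it (with machine code, logic gates, and transistors further down still).

```python
# One expression at two levels of abstraction: the source line you write,
# and the bytecode the interpreter actually runs.
import dis

def add(a, b):
    return a + b   # the level you think at

dis.dis(add)       # the level below: LOAD_FAST, a binary-add instruction, RETURN_VALUE
```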

I dunno man… I swear to you I saw some guy say it on The Colbert Report or something, maybe The Daily Show. I don’t remember exactly which kind of memory gets so jacked up, the guy didn’t even go into much detail about that, but he said it works, they stick electrodes in your brain.

Getting electrodes stuck in your brain is serious surgery, so this could never be used as some kind of useful service for people who forgot stuff. It would be prohibitively expensive and risky, with high liability on the provider.
Not to mention all the regulatory hurdles. Everything has to get cleared with the FDA and other federal agencies before it’s even available to the public, so this treatment/surgery is probably illegal outside of approved research facilities/purposes.

I have more to ask of you brainiacs but I’ve got to go for now

One assumes he was pushing his new book.

He is a smart guy. But whenever any expert steps out of his area of expertise, he is no longer an expert, and is just as clueless as the rest of us.

That is so true. Michio Kaku is particularly prone to wild flights of fancy and speculation in which some tend to grant him undue credibility because… science! But it isn’t science, it’s just blue-sky speculation. Even worse are folks like Freeman Dyson who not only venture outside their area of expertise and pontificate on things they know nothing about like climate modeling, but actually demonstrate their ignorance by getting basic facts completely wrong. God love the dear old chap but I swear the man must be well into the grip of senility.

I don’t doubt that computers can do many clever things. But I don’t see how human intelligence is a latter part of the trajectory that you describe. We have some highly effective chess programs today, but the computers aren’t really playing chess per se, insofar as they lack an independent psychological and volitional reality. That’s where the Chinese Room comes in.

Could a computer with volition, etc. be created? Sure, I think. It might happen if you build a sufficiently sophisticated brain simulator. More likely, it could happen if volition is programmed in explicitly (through methods I can’t fathom). But as a spontaneous byproduct of a really rad word processor, chess program or mortgage calculator? It’s hard to see how.

Though honestly I don’t know what I think. I was just channelling Searle’s 2014 article in the New York Review of Books (sub req). And quite honestly I don’t grasp it entirely: I have limited philosophical capacity though I enjoy the subject at a beginning level. Here’s an excerpt:

> Underlying this epistemological distinction between types of claims is an ontological distinction between modes of existence. Some entities have an existence that does not depend on being experienced (mountains, molecules, and tectonic plates are good examples). Some entities exist only insofar as they are experienced (pains, tickles, and itches are examples). This distinction is between the ontologically objective and the ontologically subjective. No matter how many machines may register an itch, it is not really an itch until somebody consciously feels it: it is ontologically subjective.
>
> A related distinction is between those features of reality that exist regardless of what we think and those whose very existence depends on our attitudes. The first class I call observer independent or original, intrinsic, or absolute. This class includes mountains, molecules, and tectonic plates. They have an existence that is wholly independent of anybody’s attitude, whereas money, property, government, and marriage exist only insofar as people have certain attitudes toward them. Their existence I call observer dependent or observer relative.
>
> These distinctions are important for several reasons. Most elements of human civilization—money, property, government, universities, and The New York Review to name a few examples–are observer relative in their ontology because they are created by consciousness. But the consciousness that creates them is not observer relative. It is intrinsic and many statements about these elements of civilization can be epistemically objective. For example, it is an objective fact that the NYR exists.
>
> In this discussion, these distinctions are crucial because just about all of the central notions—computation, information, cognition, thinking, memory, rationality, learning, intelligence, decision-making, motivation, etc.—have two different senses. They have a sense in which they refer to actual, psychologically real, observer-independent phenomena, such as, for example, my conscious thought that the congressional elections are a few weeks away. But they also have a sense in which they refer to observer-relative phenomena, phenomena that only exist relative to certain attitudes, such as, for example, a sentence in the newspaper that says the elections are a few weeks away.
>
> … When I, a human computer, add 2 + 2 to get 4, that computation is observer independent, intrinsic, original, and real. When my pocket calculator, a mechanical computer, does the same computation, the computation is observer relative, derivative, and dependent on human interpretation. There is no psychological reality at all to what is happening in the pocket calculator.
>
> …Commercial computers are complicated electronic circuits that we have designed for certain jobs. And while some of them do their jobs superbly, do not for a moment think that there is any psychological reality to them.
>
> Why is it so important that the system be capable of consciousness? … If the computer can fly airplanes, drive cars, and win at chess, who cares if it is totally nonconscious? But if we are worried about a maliciously motivated superintelligence destroying us [as an author Searle discusses is], then it is important that the malicious motivation should be real. Without consciousness, there is no possibility of its being real.

500 words: fair use, barely. Emphasis in original. I think I grasp his definitions (roughly) but I lack the facility to readily apply his terms. Though I find it interesting that something can be observer relative but epistemologically objective.

Trouble is, these are just bland assertions. He divides the world with arbitrary distinctions, and then uses them to build an arbitrary wall between human and machine. There is no justification for the choice of distinction, and no logical reason to make it. These are just grand words covering up the simple idea - “I believe that human observation is real, and that machine observation cannot be”. The rest of the argument is reverse engineered from this. The first paragraph is actually content free.

This is the sort of argument that made me mad with Penrose. He used similar assertions. “Computers cannot know that they are adding 2 and 2” is used to imply that computers cannot ever know. Heck, most people have no idea what adding two and two is. Most people do it by rote. Ask them to explain what “two” is and you won’t get a cogent answer. Does an autistic savant with mathematical skills perform operations that are “observer independent, intrinsic, original, and real”? (Whatever that phrase is supposed to mean.)

Comparing a modern computer with a conscious brain is about as useful as comparing a snail’s brain with a human’s. Is the snail’s experience “observer independent, intrinsic, original, and real”? If so, why, and what does that mean? If not, where do you draw the line in brain complexity? Is a chimp’s experience “observer independent, intrinsic, original, and real”? I can teach some apes to do arithmetic. Do they have a gestalt of numbers that a computer lacks?

Eventually complaining that a computer cannot understand that it is adding 2 and 2 is about as useful as complaining that a synapse is incapable of understanding that it is performing logical operations based upon electrochemical potentials. If synapses cannot grasp their own nature, clearly no brain composed of synapses and neurons can either.