Is there a credible argument that computers cannot replicate the human brain?

Personally, I think we’d be better off trying to design artificial kindness and artificial wisdom before artificial intelligence. I’ll take nice over smart any day.

Penny beats Sheldon.

There is a subtle but crucial point here. If the rules are deterministic, the system as a whole is too. The n-body problem has no known closed-form solution, but that doesn’t mean the system isn’t determined solely by its initial state.

Chaotic systems are the enigmatic case. They are deterministic, but they are ill conditioned, so even the slightest change in the initial conditions brings about wildly different outcomes. Finding a closed-form solution is essentially infeasible. Critically, the system is deterministic, yet there is no useful way of predicting it. That sounds like playing with words, but it isn’t, not from the point of view of free will. If the universe is fully deterministic, it means that our paths in life are predetermined. That opens a philosophical and ethical can of worms so large you can drown in it.
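
To make that concrete, here is a throwaway Python sketch using the logistic map, a textbook toy chaotic system (my own illustration, nothing to do with the n-body problem itself): two trajectories that start a trillionth apart end up nowhere near each other.

    # Logistic map: x_{n+1} = r * x_n * (1 - x_n); chaotic for r = 4.0.
    r = 4.0
    x, y = 0.2, 0.2 + 1e-12   # initial conditions differing by one part in a trillion
    for n in range(60):
        x = r * x * (1 - x)   # perfectly deterministic update rule
        y = r * y * (1 - y)
    print(x, y)               # after ~60 steps the two trajectories are completely uncorrelated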

A mathematical example of the nature of deterministic systems is a hash function. Something like SHA-2. If I give you an input, the SHA-512 algorithm will generate 512 bits of output. Perfectly deterministically. However, if you change a single bit in the input, the output will differ so wildly from the first hash as to be unrecognisable. There is no known feasible way of predicting how. More importantly, there is no known way of working out how to create a given hash result by synthesising the input, bar brute-force search. Yet clearly the process is deterministic. If you do find an input that gives the designated output, you can be sure that it always will.
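
You can watch that avalanche effect directly with Python’s standard hashlib. The two inputs below differ in a single bit (the last characters ‘x’ and ‘y’ differ only in their lowest bit), yet roughly half of the 512 output bits change:

    import hashlib

    a = b"the quick brown fox"
    b = b"the quick brown foy"   # one bit different from `a`

    h1 = hashlib.sha512(a).hexdigest()
    h2 = hashlib.sha512(b).hexdigest()
    print(h1)
    print(h2)

    # Count how many of the 512 output bits differ; typically close to 256 (about half).
    print(bin(int(h1, 16) ^ int(h2, 16)).count("1"))

    # Deterministic all the same: hashing `a` again always gives exactly h1.
    print(hashlib.sha512(a).hexdigest() == h1)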

ehhh…

it is a semantic issue in effect, and so it doesn’t defeat free will. Deterministic people whose behaviour we can’t even remotely predict is, for all practical purposes, free will.

And frankly, on a fundamental level, the idea that saying the rules of the brain are deterministic is the same as saying that there’s no free will is a false conflation. The brain is a deterministic system, but it can respond to conditions, and many of those conditions may not themselves be deterministic. Even if they are, they are still unpredictable. It’d be like a computer program you could write: while computers are essentially deterministic, they can easily take into account things like the weather, which is not, so even with a computer you couldn’t determine the outcome in advance (there is a random number generator that uses weather data, for example).
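
A trivial sketch of that point, with os.urandom standing in for something like a weather feed: the function itself is perfectly deterministic, but its output still can’t be predicted in advance, because its input can’t be.

    import os

    def decide(reading: bytes) -> str:
        # Completely deterministic: the same reading always yields the same decision.
        return "take an umbrella" if reading[0] % 2 else "leave it at home"

    reading = os.urandom(1)   # stand-in for an unpredictable outside input (weather, noise, ...)
    print(decide(reading))    # unpredictable in advance, even though decide() is deterministic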

I’ll even go one step further: claiming that “our lives are predetermined” just because the rules of a system are deterministic is essentially meaningless, because of the vagaries of the nature of time. We’d need to discuss time philosophically before we could even begin to pin down exactly what we mean if we want to say something like that. The human brain can imagine possible future scenarios, but that doesn’t mean that they are fundamentally real in any way. I really doubt that such thoughts would ever have an effect on the actual laws of physics.

(cont.)
The dilemma reminds me of the Objectivist claim against pro-life people: just because something is going TO BE a baby doesn’t mean that right now it IS a baby. Time matters, but only actual time, the time that has already passed. (Not that I agree fully with the Objectivist take on the issue)

Neural networks are an attempt to create a programming model loosely patterned on the brain’s neurons. Instead of explicitly coded logic, you start with a set of random states and transitions and let the computer experiment with changing them around until you are happy with its behavior. I’ve seen this modeled with letter-grade feedback, as if the computer were a student in a school. E.g.:

You: “What is 1+1?”
Computer: “Pie?”
You: “F!”
Computer: <rewires itself and recomputes> “553e6332?”
You: “D!” (at least it was a number this time…)
Computer: <rewires itself and recomputes> “67?”
You: “D+!” (getting closer…)
Computer: <rewires itself and recomputes> “2.7?”
You: “B!”
Computer: <rewires itself and recomputes> “1.9?”
You: “B+!”
Computer: <rewires itself and recomputes> “2?”
You: “A!”
You: “Now, computer, where is the Eiffel Tower?”
Computer: “45?”
You: “F!”
Computer: <rewires itself and recomputes> “764?”
You: “F!”
Computer: <rewires itself and recomputes> “oak?”
You: “D-!”
Computer: <rewires itself and recomputes> “Michigan?”
You: “D!”
Computer: <rewires itself and recomputes> “Normandy?”
You: “C-!”

etc.

So the idea is that you could train a neural network over time the way you would teach a child. It works, a little bit. But nobody has taught a neural network to function like a human being.
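
A toy version of that “rewire, get graded, keep the change if the grade improves” loop might look like the Python below. It’s a deliberately crude hill-climbing caricature of my own, not how real neural networks are trained, but it captures the flavour of the dialogue above.

    import random

    # One toy "neuron": answer = w1*1 + w2*1.  The goal: learn that 1 + 1 = 2
    # purely from graded feedback, by randomly "rewiring" and keeping improvements.
    w = [random.uniform(-5, 5), random.uniform(-5, 5)]

    def grade(weights):
        # Teacher's grade: higher is better, 0 is a perfect "A".
        return -abs(weights[0] * 1 + weights[1] * 1 - 2)

    best = grade(w)
    for _ in range(10000):
        trial = [wi + random.gauss(0, 0.1) for wi in w]   # rewire itself a little
        if grade(trial) > best:                           # keep the change only if the grade improves
            w, best = trial, grade(trial)

    print(w[0] * 1 + w[1] * 1)   # ends up very close to 2

Notice that the toy learns nothing general: retrain it on “where is the Eiffel Tower?” and the arithmetic is gone again, which is roughly the point of the dialogue.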

Also, look up the “Chinese Room” concept. It is more or less a thought experiment arguing that even if something appears to be intelligent, it might not be, or might not be in the way you suspect. Suppose an automated translator can translate from English to Chinese and back again. It has millions of rules as to when to use this character and when to use that one, and thousands of exceptions to those rules, and a few thousand counter-exceptions. Does the translator actually speak Chinese, in the sense of understanding it?
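
The flavour of the argument can be caricatured in a few lines: a lookup table that produces passable Chinese replies while “understanding” precisely nothing (the phrases and rules here are invented for illustration, nothing like a real translator).

    # The "room": pure symbol-shuffling, no comprehension anywhere.
    rulebook = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",   # "How's the weather today?" -> "The weather is nice today."
    }

    def room(symbols: str) -> str:
        # Follow the rules blindly; fall back to "Sorry, I don't understand."
        return rulebook.get(symbols, "对不起，我不明白。")

    print(room("你好吗？"))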

The idea of the big neural simulators is to somehow pre-program the neural net by scanning a known working brain. The idea that a huge neural net could somehow be taught to think ab initio is what I am calling a cargo-cult mentality: the notion that something sufficiently big will spontaneously start to exhibit interesting complex behaviour as an emergent process, without any pre-existing structure.

One of the core issues with neural nets has been that it is essentially impossible to extract information from them about how they work, at least nothing that yields insight into the problem they are being used to solve. Indeed, further training of the net on more problems may significantly change the internal weights in an unpredictable manner. This hints at the problem at hand in trying to understand how a wet-ware brain works. Different individuals may have intrinsically different networks that address much the same activity, and it may be close to impossible to tell that they do by external examination of the interconnections.
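
To illustrate the “same behaviour, different internals” point, here is a throwaway reuse of the crude hill-climbing toy posted earlier in the thread: two runs from different random seeds both learn to answer 2, yet their internal weights end up looking nothing alike.

    import random

    def train(seed):
        # Same toy "neuron" as before: output = w1 + w2 for the question "1 + 1 = ?".
        rng = random.Random(seed)
        w = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
        best = -abs(w[0] + w[1] - 2)
        for _ in range(20000):
            trial = [wi + rng.gauss(0, 0.1) for wi in w]   # random rewiring
            score = -abs(trial[0] + trial[1] - 2)
            if score > best:                               # keep only improvements
                w, best = trial, score
        return w

    w_a, w_b = train(1), train(2)
    print(w_a, w_a[0] + w_a[1])   # answers roughly 2 ...
    print(w_b, w_b[0] + w_b[1])   # ... and so does this one, with very different internal weights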

This gets us back to questions of information content. Indeed, IMHO, questions of determinacy and free will within the context of the history of the universe are best answered by looking at information content and the intrinsic limitations on information capture and content. But that is MHO, and not pertinent to the OP.

Can you be more specific?

I would argue that this copy wouldn’t be a computer but an artificial brain.

Thank you to leahcim, septimus, and Indistinguishable for ably answering the questions I posed.

He asserted that we could build a machine (or write a program) for which no computer could ever determine whether it would stop or run forever. This is the “halting problem”. He then asserted that humans could know the answer. What reasoning is available to humans that is not available to computers was never described. This is utter nonsense (unless you believe that god will come and tell you the answer). This elementary error wasn’t even original with Penrose (I think it was originally due to someone named Lucas). It was aired in a review of his book published in the American Math Monthly around 1990. Sorry, the online archives don’t go back that far and are, in any case, behind a paywall. I have heard since that he continues to spout this stuff. I can suppose only that it is religiously based.
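
For anyone who hasn’t met it, the halting problem argument itself is just the classic diagonal trick, sketched below in Python-flavoured pseudocode. The function halts() is the hypothetical universal checker, which is exactly the thing that turns out to be impossible; this is a sketch of the standard proof, not working code.

    def halts(program, arg):
        ...   # hypothetical: a checker that always answers correctly whether program(arg) halts

    def troublemaker(program):
        if halts(program, program):   # if the checker says "this halts" ...
            while True:               # ... then loop forever,
                pass
        return                        # otherwise halt immediately.

    # Now ask whether troublemaker(troublemaker) halts.  Whichever answer halts()
    # gives, troublemaker does the opposite, so no such checker can exist.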

I did just find a citation for that review: Review of The Emperor’s New Mind by Roger Penrose, Amer. Math. Monthly, 97 (1990), 938–942. It should be in any university library, or maybe it can be found on JSTOR, but that requires payment.

Sorry, I cannot give a cite for that sorting algorithm. I read it somewhere some years ago and cannot now recall where and when. But I assure you I did read it. Perhaps if you google evolutionary algorithms you might find it.
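
For anyone curious what an evolutionary approach to sorting even looks like, here is a back-of-the-envelope toy of my own (not the algorithm I read about): random mutations plus a survival test, and the list still ends up sorted.

    import random

    def inversions(xs):
        # Fitness measure: number of out-of-order pairs; 0 means fully sorted.
        return sum(1 for i in range(len(xs)) for j in range(i + 1, len(xs)) if xs[i] > xs[j])

    xs = random.sample(range(20), 20)   # a shuffled "genome"
    while inversions(xs) > 0:
        mutant = xs[:]
        i, j = random.randrange(len(xs)), random.randrange(len(xs))
        mutant[i], mutant[j] = mutant[j], mutant[i]   # mutation: swap two random elements
        if inversions(mutant) < inversions(xs):       # selection: keep only improvements
            xs = mutant

    print(xs)   # eventually [0, 1, 2, ..., 19]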

Do you really think the Chinese room experiment is doable? I don’t. Just think about what is being asked: that the operator of the room, while not knowing Chinese, be given instructions whereby, for any input in a language he knows nothing of, he produces an appropriate output. Short of teaching him Chinese, it seems inconceivable to me that this could be done.

The “Chinese Room” thought experiment is bunk. Always has been. It assumes the existence of the perfect set of arbitrary translation rules. Then asks whether an effectively mindless automaton following the rules “understands” the high level result of what it’s doing.

All this is classic polemic distraction and prestidigitation. Start by assuming the impossible. Then make a series of assertions operating at very different levels of meta-ness, while deliberately ignoring or papering over the level jumps you’re making. Finally, assume the desired conclusion and shout “ta DA!!!”

It has always been a mystery to me why this fraud of a “thought experiment” has any persuasive power with anyone whatsoever.

I thought the point of the Chinese Room thought experiment was to demonstrate the strangeness of the assertion that intelligent thought can be reduced to a set of rules. By breaking down the “program” into books and the “processor” into someone with zero insight following instructions, we’re deprived of being able to simply presume that the “black box” somehow acts like a brain. The Chinese Room is simply a de-mystified computer. If you have trouble believing the Chinese Room could give a credible impression of AI, then how is a conventional computer different in principle?

Yes. The Chinese Room thought experiment was created by philosopher John Searle as an argument against the idea of artificial intelligence.

It’s still flawed, though. The central argument of the Chinese Room is “A book can’t think. Therefore, a computer can’t think.” But this ignores the fact that a book is very different from a computer. For one thing, a computer has writable memory, which a book lacks. If a Chinese-speaker outside of the box feeds in the question “What did I just say?”, there’s no one correct answer: the answer would depend on what, in fact, the Chinese-speaker actually did say. In order for the Chinese room to answer the question, it needs to be able to store all of the inputs it’s received, and use them in constructing outputs.
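
To caricature that point: the lookup-table “room” sketched earlier only needs a transcript, i.e. some scratch paper, to handle “What did I just say?”, and it still understands nothing (phrases again invented for illustration).

    # The same symbol-shuffling room, now with "scratch paper": a transcript of inputs.
    transcript = []

    def room(symbols: str) -> str:
        if symbols == "我刚才说了什么？":          # "What did I just say?"
            reply = transcript[-1] if transcript else "你什么也没说。"   # "You said nothing."
        else:
            reply = "好的。"                      # "Okay."
        transcript.append(symbols)                # write the input onto the scratch paper
        return reply

    print(room("今天下雨了。"))       # "It rained today."
    print(room("我刚才说了什么？"))    # echoes back the previous input, still with zero understanding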

In Searle’s original paper, he explicitly mentions our Chinese-pretender having the use of “a lot of scratch paper and pencils for doing calculations”, which amounts to a form of memory. That having been said, the protocol he envisions is also one in which all questions arrive in one batch and then the answers are provided in one batch. But regardless, certainly the spirit of the experiment would not at all be disturbed by continuing in an interactive questioning protocol, allowing unlimited use of paper or such things as data storage along the way, according to rules prescribed by the book. The point was simply to imagine someone carrying out mechanically and blindly the instructions of a computer program, just as would the sort of computing machines we call “computers”.

I’m not sympathetic to the conclusions Searle draws, mind you. I just don’t think “But what about remembering inputs?” is a very strong objection to the thought experiment. (I’ll note that in Searle’s story, inputs are provided as writing and never removed, so one automatically has a transcript available of all inputs; the reason to provide further paper is just as working memory for calculation)

Incidentally, it may be enlightening to note this passage from the original paper:

But I think that once you do allow “pencils and plenty of scratch paper”, the argument falls apart. People reflexively say “A book can’t think” (or understand, or whatever), but once you include the scratch paper, it effectively is a computer, and the argument then becomes “Computers can’t think because computers can’t think”.

For what it’s worth, Searle already was willing to get rid of the book. He considers the case of a person who has memorized the rules of the book and just follows them (again, mechanically, opaquely). He thinks, for various reasons, such a person still doesn’t understand Chinese. Like you, I do not share his perspective.

I would argue, rather, that the person who memorized the book is not the same person as the person who knows Chinese, but that he rather has another person running on the hardware of his brain. But of course, any rulebook complex enough to model a human mind would be too complex to be memorized by a human, so saying that the man memorizes the book is yet another cheat (unless we assume that the man in the box is some form of superhuman, which makes the experiment less interesting).