Strong AI and the Chinese Room

perhaps the most famous claimed refutation of strong ai is searle’s chinese room argument. by strong ai, i mean the position that computers could, in theory, be programmed in such a way that a computer would “think” in the same sense in which we attribute that capability to a human being: it could possess mental states, be conscious, and so on; in short, all the things that we consider humans able to do.

the chinese room argument is supposed to be a refutation of the turing test, and of strong ai in general (the full argument can be found here). it goes a bit like this: an english-speaking man is inside a room that no one can see into, and through a little slot, messages in chinese characters are passed in. the man takes these messages, looks through books or runs a computer program to determine the appropriate response, and passes a written response back through the slot. the argument then claims: the man in the room does not know chinese, and neither do the computer or the books. he nonetheless mimics a chinese-speaking human flawlessly. thus, the turing test is passed even though no understanding of chinese has occurred.
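just to make the scenario concrete (this is my own toy caricature, not part of searle’s paper; the rule table and messages are invented), the room’s procedure is nothing but symbol lookup:

```python
# a toy caricature of the room: pure symbol lookup, no interpretation.
# the rule table and messages are invented for illustration only.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # "how are you?" -> "i'm fine, thanks"
    "你叫什么名字": "我叫小明",      # "what is your name?" -> "my name is Xiao Ming"
}

def room(message: str) -> str:
    """return the scripted reply; the 'man' never interprets the symbols."""
    return RULE_BOOK.get(message, "请再说一遍")  # fallback: "please say that again"

print(room("你好吗"))  # prints 我很好，谢谢 without anyone inside understanding it
```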

this scenario, searle (originally) claimed, showed that the turing test was flawed and, by similar reasoning, that strong ai could not be possible. it is my belief that 1) searle misinterpreted the turing test, and 2) if searle’s argument shows that the room has no understanding of chinese writing, then it equally shows that no human being understands chinese writing.

1: in turing’s test, it is assumed that no knowledge of the inner workings of the enclosed “room” is available. because searle’s argument assumes the workings are known, it is not applicable as an argument against the turing test.

2: suppose that the physical workings of the human brain are completely specified. none of those components alone demonstrates understanding. thus, if searle’s argument (that things which do not themselves understand cannot make up something that does) holds, human beings do not actually understand anything, or at least not chinese writing.

what are your thoughts on strong ai, and on searle’s response? is strong ai possible? why or why not? and does searle’s argument have some facet that i do not see? is it really a valid refutation of strong ai?

I don’t think that 2) is the case – Searle isn’t critiquing the computer’s structure so much as he’s critiquing the computer’s understanding (or lack thereof) independently of that structure. The man in the box can behave as though he understands all of the written material passed into the box so long as his output algorithms are sufficiently well written, but he won’t truly understand the information as would a man who can actually understand Chinese – he’s essentially a puppet, since his actions (if he wishes to maintain the facade that he’s carrying on a conversation in Chinese) are completely dictated by his output algorithms. He has no will of his own. It is possible that humans speaking Chinese (or any other language) in reality behave no differently than the man in the box (if, for instance, we have absolutely no control over our brain states, in which case we’re really just puppets dancing to the tune of physical laws), but it isn’t a given that the Chinese Room argument applies to humans in general.

I’d say that strong AI is possible, since I believe that Clark hits the nail on the head with his “meat machine” argument (viz. computers are machines made out of silicon while human brains are machines made out of meat, and that there is no fundamental difference between the two constructs other than the fact that meat machines are currently more complex than silicon machines). Just as there is no “spark of life” (there is no force that makes something “alive” instead of “not-alive”), I believe that there is probably no “spark of intelligence” (i.e. there is no force that makes something “intelligent” instead of “not-intelligent”).

Searle’s argument isn’t really a refutation of strong AI at all – it’s really just an argument that we ourselves don’t understand what makes something “intelligent” instead of “not-intelligent” (what, if anything, separates us from the beasts). Searle first needs to show that he has a will of his own before he argues that others don’t have wills of their own – in other words, his critique of the Turing Test is valid, but it isn’t particularly helpful if he can’t provide even the barest outline of a test that would be capable of determining whether a being had a will of its own.

My thoughts are largely the same. Searle still needs to explain what he means by “understand.” What’s actually going on when I “understand” something? What’s going on that I can do, but computers can’t? We need to know what process we’re looking for before we can declare that computers can’t do it.

Searle’s test is a nice thought experiment, but realistically, think of it this way: If the man in question keeps up that process for a decade or so, eventually he’s going to notice a pattern in the messages, and he’ll apply his OWN meaning to each individual character that he sees.

That’s the exact same process that goes on in our own brains when we’re infants. We hear someone say “water”. To begin with, this is meaningless babble. However, after we hear it a few hundred times and are able to start understanding context, we’ll start to realize, through trial and error, that “water” is “that clear wet stuff that goes in our mouth”. Thus, we develop understanding.
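To make that trial-and-error picture concrete, here is a minimal sketch of my own (the episodes are invented, and this is only a cartoon of cross-situational word learning): the word “water” ends up associated with whatever it co-occurs with most often.

```python
from collections import Counter

# toy "episodes": a word heard, paired with things present in the scene.
# the scenes are invented for illustration.
episodes = [
    ("water", {"cup", "clear wet stuff"}),
    ("water", {"bathtub", "clear wet stuff"}),
    ("ball",  {"toy", "red round thing"}),
    ("water", {"rain", "clear wet stuff"}),
]

# count how often each word co-occurs with each candidate referent
assoc = {}
for word, scene in episodes:
    assoc.setdefault(word, Counter()).update(scene)

# after enough exposures, the strongest association wins
print(assoc["water"].most_common(1))  # [('clear wet stuff', 3)]
```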

I wonder if there is a difference; it may be that the fundamental building blocks of human intelligence are just like the Chinese room and that sentience still emerges anyway, or to put it another way, the Chinese room ‘system’ really does know how to speak Chinese.

Having spent some years as an AI manager, designer and programmer, I take issue with the general idea of a somehow-perfect AI on a different basis.

We, and as far as can be told all other mammals, have emotions that constantly affect what we do.

Computers do not have emotions. I, personally, don’t want to see software that fakes emotions with the intent to deceive humans. If a program were written that faked emotions, I would distrust it, because it would not be producing emotions from the same motivation that I am; i.e., they would be fake emotions. I don’t care what a program feels. I do care what cats, dogs and birds feel.

In this sense a “dumb” animal rates more highly than the most advanced software.

True, but ‘strong AI’ wouldn’t have fake emotions.

Protein sequences, enzymes, neurotransmitters, etc. don’t have emotions or sentience, but when they are assembled in just the right way, emotions and sentience can emerge from the system.

I believe it ought to be possible to create the same kind of effect with non-biological components.

Again, I’ll thoroughly recommend Greg Egan’s Permutation City as reading material on this subject - although it is a work of complete fiction, it addresses many of the issues of AI very nicely.

The Turing test does not differentiate between Strong and Weak AI. If an AI can pass the test, it can be considered sufficiently intelligent.

if a computer were built on hardware similar to yours, running a similar program, and it appeared to be emotive, would you say it faked emotions?

what if someone came to you with a robot that he claimed felt things, and you could not prove otherwise? on what grounds would you claim that it did not in fact have emotions?

the turing test claims that any ai that could pass it would be strong ai. that is, we could assume that it did in fact understand, or possess whatever other attributes you give humans.

Hmmm, the Turing test merely states the obvious: if you can’t tell the machine from the human, how can you assert that the machine isn’t intelligent like the human? That’s not quite the same (philosophically) as assuming that it is intelligent. It’s as much a statement of what we cannot know as it is of what we might assume.

If the Turing test is carried out with a man and a woman (as opposed to a human and a machine) - a scenario that Turing himself mentioned - and we are unable to discern which one is the man, we still can’t assume that the man is a woman, only wistfully admit that we can’t tell the difference.

i agree. these are things that lead me to claim that searle misinterprets the turing test in his attempted refutation.

also, as far as what we cannot know goes, other people must fall into this category too. i can claim that i understand just fine; i have the consciousness to be aware that i am thinking those understanding thoughts. but as far as other people go, they’re just black boxes, with apparently as many and as sophisticated inputs and outputs as i’ve got. plus they look an awful lot like me, so i can’t tell the difference between them and what i would be like if i didn’t have the inside knowledge, so to speak. so i attribute consciousness to them too. there is no way to know for sure, though.

and if a robot is indistinguishable from a human, what is the difference between attributing that robot consciousness and attributing other people consciousness?

i suspect if we were ever to figure out what made a human tick, and we could “program” a “human” into a “body”, we would all have entirely different definitions of what it is to be conscious and to understand.

The other point is that I don’t think any such Chinese Room could ever pass a Turing Test. All you have is a look-up table. How could you write an algorithm that would tell you how to carry on a conversation? As a simple point, the Chinese Room as described has no memory. It can’t remember that you asked it about its mother-in-law last week. How can a system with no memory pass a Turing Test?

If you simply have a look-up table where you write what you should respond to any statement, then what you should respond to the next statement given the previous statement, then what you should respond to the third statement given the previous two statements, your book of answers explodes exponentially. Even if we stipulate that the variety of things that people can say isn’t really infinite, we can say that it is astronomically large. I don’t think a functioning Chinese Room of this nature could be built in a Universe as small as ours.
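A rough back-of-the-envelope sketch of that explosion (the vocabulary size and turn counts below are made-up numbers, purely to show the scaling):

```python
# crude estimate: if each conversational turn can be any of V distinct
# utterances, a lookup table keyed on the whole history of n turns needs
# on the order of V**n entries.  the numbers are invented for illustration.
V = 10_000  # assumed distinct utterances per turn

for n in range(1, 6):
    print(f"{n} turns of context -> about {V**n:.1e} table entries")

# the observable universe is usually estimated to hold around 1e80 atoms;
# at this vocabulary size the table passes that after roughly 20 turns.
print(f"20 turns -> about {float(V)**20:.1e} entries")
```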

that’s why it’s a gedankenexperiment.

The difference, I suppose, is that I can comfortably assume that the humans around and about me are sentient because not only do they display outward signs of sentience, like me, but they are physically constructed in a similar fashion to myself; there’s nothing specific to cast any doubt on the matter.

In the case of machines, the doubt as to whether the effect is just a façade arises because possibly quite different systems are generating the effect.

Not a terribly satisfying answer, I know.

not a bad one, either.

it could be that line of reasoning that led searle eventually to claim that consciousness is a property arising only from neurons arranged in such a manner that they form a living brain, and that no arrangement of silicon could ever have such a property.

the more i read about him though, the more it seems like he defines “understanding” as “something living things with brains can do”.

Ah, yes, the question: Is what brains do computation, or something else?

If it’s computation, then any computer can do the same (in theory, with enough memory).

If it isn’t, we have to ask what it is. Even if we assume our brains rely on subtle quantum effects, there’s no reason we couldn’t build something that also used subtle quantum effects.

The atoms and molecules in our brains don’t have any understanding of our thoughts. Yet somehow intelligence and awareness arises from our brains.

Roger Penrose thinks that quantum physics may be necessary to explain how we can be conscious and perceive truth. His arguments are intriguing (I lean towards the Platonist school of thought also), but they would require a lot more proof.

Presuming that our brains do function on a purely classical level, then even if you couldn’t compute our thought processes directly, you presumably could simply model the brain at the neuron/synapse level and simulate it.
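As a hedged sketch of what neuron-level modelling could look like in miniature (all constants are invented, and a real simulation would involve billions of units plus synaptic learning rules), here is a toy leaky integrate-and-fire neuron:

```python
import random

# toy leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates input current, and "spikes" when it crosses a threshold.
# all constants are invented for illustration.
REST, THRESHOLD, LEAK, DT = 0.0, 1.0, 0.1, 1.0

def simulate(inputs):
    v, spikes = REST, []
    for t, i_in in enumerate(inputs):
        v += DT * (-LEAK * (v - REST) + i_in)  # leak plus input current
        if v >= THRESHOLD:                     # spike, then reset
            spikes.append(t)
            v = REST
    return spikes

# random input drive; a network would wire many such units together via synapses
random.seed(0)
current = [random.uniform(0.0, 0.3) for _ in range(100)]
print("spike times:", simulate(current))
```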

i’ve also found this idea very interesting. could you provide a link or two?

it coincides with my intuition all along that human brains have an edge over computers in the ability to make mistakes. if there were some randomness in computing, the relaxed “rules” could yield some very strong consequences. a heuristic is often much easier to implement, and can lead to better results in practice, than an exact algorithm whose result is guaranteed (and usually optimal) but whose worst-case cost can be prohibitive. and indeed, some randomized heuristics could yield even better results in practice if “pseudo-random number” generation weren’t such a costly process. i’ve also thought about how powerful the ability to forget is: to remove pieces of storage to make room for new data, but keep enough around so that a simple reminder of some sort can regenerate the original in its entirety.
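here’s a toy sketch of that heuristic-vs-guaranteed-algorithm point (everything in it is invented for illustration): a random-restart hill climber usually gets a good answer with a tiny fraction of the work of brute force, at the price of losing the guarantee.

```python
import itertools
import random

# toy problem: choose a subset of weights whose sum is as close as possible
# to a target.  the numbers are invented for illustration.
random.seed(1)
weights = [random.randint(1, 100) for _ in range(18)]
target = 537

def cost(mask):
    return abs(sum(w for w, keep in zip(weights, mask) if keep) - target)

# exact: exhaustive search over all 2**18 subsets -- guaranteed optimal, slow
best_exact = min(itertools.product([0, 1], repeat=len(weights)), key=cost)

# heuristic: random-restart hill climbing -- no guarantee, far fewer evaluations
def hill_climb(restarts=20, steps=200):
    best = None
    for _ in range(restarts):
        mask = [random.randint(0, 1) for _ in weights]
        for _ in range(steps):
            i = random.randrange(len(mask))
            flipped = mask[:]
            flipped[i] ^= 1
            if cost(flipped) <= cost(mask):  # accept sideways or improving moves
                mask = flipped
        if best is None or cost(mask) < cost(best):
            best = mask
    return best

print("exhaustive cost:", cost(best_exact), " heuristic cost:", cost(hill_climb()))
```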

anyway, i’m rambling, but if it weren’t for quantum fluctuations, there would be no room for randomness in the brain, so i feel like that provides a lot of flexibility.