Strong AI and the Chinese Room

erislover: Also, Searle seems to forget that just having a brain doesn’t guarantee either a mind or mental contents.

i actually take issue with each of those axioms. personally, i would very much like to know what searle’s definitions of “formal”, “syntactic”, and “semantic” are. or at least their extensions.

to me, “particle a hits particle b, particle a goes in direction a^, particle b goes in direction b^” seems pretty formal, at least insofar as i understand searle’s definition. so apparently minds are no longer governed by physics. if physics is not formal, how is it that programs must be?

what on earth are “mental contents”?

if A1 and A3 are accurate, certainly computers ought not to be able to have internal representations of reality. but i wrote a program once that learned how to play connect four. that sure seemed like a representation of something that wasn’t based solely on the “formal” system of the computer.
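for what it’s worth, here’s a toy sketch of the kind of thing i mean (python, purely illustrative; not the original program, and every name and number in it is made up). the board snapshot and the learned value table look, to me, like internal representations of the game, even though every operation on them is perfectly “formal”:

```python
# toy connect-four "learner": keeps an internal representation of the board
# and nudges a value table toward positions that led to wins. illustrative only.
import random

ROWS, COLS = 6, 7

def new_board():
    return [[0] * COLS for _ in range(ROWS)]      # 0 = empty, 1/2 = players

def drop(board, col, player):
    """Put a piece in the lowest empty cell of col; return a hashable snapshot."""
    for row in reversed(range(ROWS)):
        if board[row][col] == 0:
            board[row][col] = player
            break
    return tuple(tuple(r) for r in board)

values = {}                                        # position snapshot -> learned estimate

def choose_move(board, player, epsilon=0.1):
    """Usually pick the column whose resulting position has the best learned value."""
    legal = [c for c in range(COLS) if board[0][c] == 0]
    if random.random() < epsilon:
        return random.choice(legal)                # occasionally explore
    def score(col):
        trial = [row[:] for row in board]
        return values.get(drop(trial, col, player), 0.0)
    return max(legal, key=score)

def learn(positions_seen, reward, step=0.1):
    """After a game, move each visited position's value toward the final reward."""
    for pos in positions_seen:
        values[pos] = values.get(pos, 0.0) + step * (reward - values.get(pos, 0.0))
```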

as for “brains cause minds”, doesn’t it seem as though that’s what he’s trying to show? to claim it as an axiom is just plainly circular.

No… even the Chinese room accepts this. In fact, that’s the point of looking at the Turing test, shall we say, metaphysically: we already attribute this to ourselves. What does it take to attribute it to something else? More than that, when could we say that our ascription was correct? Apparently, if we have a formal description: never. Period. That seems to require a hell of a lot more than three axioms.

None of the axioms examine what meaning is. From what I’ve gathered by glancing at his works at Borders and Barnes and Nobles around the USA, Searle fancies himself as a Wittgensteinian in some way. But if that is the case, he has certainly added to, or completely changed, what ‘meaning’ means, or what the criteria are for saying something has semantic content.

Vorlon

Quite!

grrr… even the Turing Test accepts this. SHEESH.

So what would pass the Turing test and what might tip you off?

From what I understand of the Chinese Room thought experiment (and admittedly it isn’t much), I see an implication that the “man” is working with algorithms that not only determine what the question is but also formulate an appropriate response - how is that not thinking? It wouldn’t be if the questions had definite answers, but if that were the case then this “Chinese Room” sounds an awful lot like a computer.

Someone show me where I am wrong here?

i don’t necessarily agree with that. my understanding of the turing test is that it provides a method to attribute “intelligence” to something. whether this includes consciousness or a “mind” (whatever that might be), i can’t say.

i can, however, say that i’m rather confident that the validity of the turing test does not even rest on the claim “brains cause intelligence”. i think the claim “i have intelligence”, or “i have [whatever i’m looking for]”, is good enough. the method is then based on the fact that we attribute to other humans the properties that we feel we have. not necessarily that all brains have those properties.

“brains cause intelligence”

hmm…

like only ever having seen birds fly and concluding that wings cause flight only to see a rocket take off?

Since it uses beings-with-brains as the standard for declaring when one can validly attribute “intelligence” to a thing, one would imagine that the standard itself holds that role because it is presumed to have the attribute “intelligence”, eh? :slight_smile:

It’s not thinking because it’s just a program that reads some symbols, finds the proper response by consulting some number of lookup tables of whatever size, and then gives back the appropriate response from the tables. There is no intentionality, and there is no awareness on the part of the program that the symbols mean anything… as Searle says, they’re not even symbols, because they don’t symbolize anything to the program.
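To make that concrete, here is a toy sketch of the sort of rulebook Searle has in mind (Python, purely illustrative; the table entries are invented). Nothing in it “knows” what the strings are about… it only matches incoming shapes and hands back other shapes:

```python
# toy sketch of the Chinese Room rulebook: a pure lookup-table responder.
# the entries are invented; the point is that the program matches shapes and
# returns shapes without the strings meaning anything to it.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",     # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(slip_of_paper: str) -> str:
    """Return whatever the rulebook pairs with the incoming squiggles."""
    return RULEBOOK.get(slip_of_paper, "请再说一遍。")   # fallback: "Please say that again."

print(chinese_room("你会说中文吗？"))   # emits the canned "Of course." with no understanding
```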

The point of sticking the man in the Chinese room and not letting him do anything except run the programs and consult the lookup tables is to illustrate that the programs and tables don’t have any intentionality… the man may understand English, but he doesn’t understand Chinese!

The linguistic nitpick is that it isn’t even theoretically possible to simulate fluency in a language that way… there is an idea that leaving the logic of the program undefined is the perfect hiding place for something that must be akin to intentionality or semantic understanding, or the proper responses could never be formulated. But that’s a hijack considering we aren’t even characterizing Searle’s position properly yet.

-fh

I agree, but what’s more, I don’t see how it’s theoretically possible to simulate consciousness or awareness in that way either.

Doesn’t the man have intentionality (the intention to use the tables and programs to find the appropriate response), with the tables merely tools serving that intentionality? And don’t all the components working together form a system that understands Chinese?

I see what you are saying about leaving an element of the program logic undefined in order to leave room for something like semantic understanding, but isn’t it more than that? Doesn’t there need to be a response based on more than the symbol itself?

I’m not trying to quibble, just asking questions as they come to me. It’s a pretty interesting subject to try and wrap one’s mind around.

“hamburger” is a symbol. “what a hamburger is” is presumably the semantic connection to that symbol. i think if we could define exactly what a hamburger is to you, we would be able to model it symbolically. i’m very curious as to what reasons some give for saying that semantics can’t be modeled symbolically. if the semantic description of something were definable, why couldn’t we describe it symbolically?
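here’s the kind of thing i mean by “model it symbolically” (a toy sketch in python; every attribute and relation is invented for illustration, the question is just whether anything like this could ever add up to meaning):

```python
# toy "frame"-style symbolic model of what a hamburger might be to someone.
# all attributes are made up for illustration.

hamburger = {
    "is_a": "food",
    "parts": ["bun", "beef patty", "condiments"],
    "typical_context": ["lunch", "cookout", "fast food joint"],
    "associations": ["hunger", "summer", "a particular roadside diner"],  # personal, not universal
}

def describe(symbol: str, model: dict) -> str:
    """Spell the symbolic model back out as a sentence of other symbols."""
    return f"a {symbol} is a {model['is_a']} made of {', '.join(model['parts'])}"

print(describe("hamburger", hamburger))   # "a hamburger is a food made of bun, beef patty, condiments"
```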

Was this a geeky attempt to out himself?

Well, Searle isn’t saying that. He’s saying that symbol manipulation doesn’t instantly imply semantics is happening. What he isn’t saying is that semantics cannot be modeled symbolically… at some point it has to be for our brains to work! Just that a particular form of Strong AI that manipulates symbols in fact lacks semantics, and therefore cannot be said to understand anything.

-fh

if in fact semantics can be modeled symbolically, how does the strong ai that manipulates symbols necessarily lack semantics? what do our brains do that it doesn’t?

my theory is that semantics are all just figments of our imaginations. in a manner of speaking…

and by the way:

as posted by erislover, quoted from searle.

my question is, what is sufficient for semantics?

The man has intentionality, but he isn’t using it to deal with the Chinese. He is just running a program and following a set of instructions. This is the contrast… he understands English, but he cannot be said to understand Chinese. Your last question is answered by Searle himself in the link in the OP.

Yes, but you are making the same point Searle is here… in order to call the symbol processing “understanding”, the response has to be based on more than “if given squiggle squiggle reply with squoggle squoggle” …those squiggles and squoggles mean something to human Chinese speakers!

My nitpick is that whatever program and lookup tables you use to provide the correct responses to statements in Chinese must contain a degree of program logic and reasoning ability so high that it would in fact be sentience… language ability is separate from thought in the human mind, but you can’t have a conversation without thinking. Daytime talk shows aside.

-fh

It doesn’t necessarily lack semantics; it’s just that semantics is not the automatic, inevitable result of an ability to manipulate symbols. According to Searle, that is… not me!

Some other module in the brain aside from those modules that deal with language and symbols : )

-fh

I can’t remember which smart-ass said this, but here it is:

I fell off my bike when I was twelve, and I haven’t been sentient since! <rimshot>

The point is, if such an injury did occur, neither you nor I would be able to tell.

Likewise, intentionality and semantic content are, as Searle admits, inherently subjective. We grant ourselves sentience largely by instinct and to others by inference. There is no test for intentionality. (or rather, Turing’s the best we got)

**hazel-rah**:

> … in order to call the symbol processing “understanding” the response has to be based on more than “if given squiggle squiggle reply with squoggle squoggle” …those squiggles and squoggles mean something to human Chinese speakers!

Given that the Chinese man understands “squiggle squiggle” to mean [semantic content], and the boxed man understands “squiggle squiggle” to mean [must reply with “squoggle squoggle”], what makes the Chinese man’s interpretation any more valid? Both are meaningful within the context of each man’s world (China and A Box, respectively).

If I may (massively) paraphrase Foucault et al, meaning is arbitrary and specific to the meanee(?).

-ehj

Hmmm, I agree.

So the program would have to illustrate an understanding independent of the symbols themselves, and a failure to do so would result in a failure of the Turing test?

(I realize that the Chinese Room and the Turing test are two different things, I’m just still stuck on a good way of using it)

Could it be that language is after all only the external representation of internal symbols that convey different mental states, ideas and images? If that be so, then for communication between intelligent beings to happen, both must share a common set of internal symbols for any understanding to take place.

Where does this leave the humble program? Obviously no formal structure could emulate internal awareness… or could it? Could an open-ended structure do it? How would we know? Bleh, right back where we started.

Bleh, it really is hard to talk about intelligence from a third-person perspective.

It could, and, IMHO, it is. However, I’d replace “common” with “mutually meaningful”, and remove “intelligent”.

But that’s like saying, my brain can’t respond in English, so I don’t know English. We can’t actually question the man, so what he knows, or thinks he doesn’t know, is irrelevant. We put a slip of paper in that says, “Do you know Chinese?” and the response is, “Yes” or “Of course”.

Indeed. But this is the decisive move in the conjuring trick, and it seemed quite innocent. What is meaning to Searle?

I have here the second edition of the Cambridge Dictionary of Philosophy (the purchase of which was inspired by our good man flowbark’s mention; for this I shall be ever grateful). Its exposition of Searle’s conception of language seems to be at odds with the CR, too. It states,

That is a fucking mouthful. However, handy as this book is, we can head over to Speech Act Theory to get a better glimpse at the above; specifically, we can get a grasp of the intentional, illocutionary acts.

Reading this makes a picture form in my mind of a surfer (intention), his board (meaning), and a series of otherwise normal and more or less indistinguishable waves (words and sentences); specifically, the picture makes me think of the times when the same (or a very similar) sentence can be used in all sorts of different situations and mean (we say) something different each time.

I think this picture of meaning, intention, and sentence (the one I offered or the one I quoted) makes several mistakes. The biggest mistake is that when we try to imagine the meaning of a sentence in isolation, it seems that there is so much going into understanding the sentence that we are stuck making explicit rules before we begin. But so much depends on more than just the words: it depends on the situation (stable, or not mentioned, for the CB), it depends on what the listener can in fact do to receive information (the CB has no sight, hearing, touch, taste, or smell: it has a slot which, I think at best, could be said to be analogous to hearing mono [not stereo], monotone [single-toned] speech [not sound!]), and it depends on what means it has to respond (the box has what would be analogous to a monotone [single-toned] voice, and that’s it).

Now, imagine a person that has the same means to receive and communicate as the box. What sort of questions could you ask it? I mean, this isn’t even Helen Keller, for she had touch. Consider, before we get to questioning the box: what could we teach it? And how?