Strong AI and the Chinese Room

Well, the Chinese Room assumptions kind of cover that aspect of the scenario… the program and tables contain within their functionality all possible Chinese utterances and their proper responses. You can ask it anything, in theory.
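Just to make that mechanical picture concrete, here is a minimal sketch of the kind of pure symbol-matching the room is stipulated to do, assuming the whole thing reduces to a lookup from input squiggles to output squiggles (the table entries and the fallback line are invented for illustration; the real table would have to cover every possible exchange):

```python
# A minimal sketch of the room's rulebook, assuming it reduces to pure
# symbol matching. The entries (and the fallback line) are invented for
# illustration; the real table would have to cover every possible exchange.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(input_symbols: str) -> str:
    """The man's entire job: match the squiggles, copy out the response.
    No step here requires knowing what any symbol means."""
    return RULEBOOK.get(input_symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```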

As for how the box communicates with the world, it’s not relevant to the point the Chinese Room is making. You can posit any kind of apparatus for communicating with the world… video cameras, microphones and speakers, a fax machine. All that matters is that the man takes the input in whatever form it arrives, feeds it into the program, and then uses whatever apparatus is available to get the response back outside the box.

According to Searle you can totally internalize the whole thing into the mind of the man and just have him talk to people in Chinese if it suits you. He will still understand English and not understand Chinese.

-fh

Then I agree with Searle: this case is impossible.

It is to me, when I consider those things relevant to meaning, intention, and so forth. As, I suspect, it is relevant to Searle.

I’d like to read the paper/book in which he puts the argument forth. The CDP lists it as being in “Minds, Brains and Programs”, just like that, probably indicating a paper in a periodical. It references The Rediscovery of the Mind, from 1992, where the argument is fleshed out more.

Um, try the link in the OP.

I think I should clarify. This case is impossible because I agree: natural language cannot be put into such a rule-system without the various accoutrements we, as humans, have, which include (but are not limited to) our senses and interactive history. However, this only means: we can’t teach boxes Chinese (or English) [without these accompanying elements]; not: these boxes do not have intentional states; not: these boxes cannot ‘understand’ ‘the meaning of sentences’.

I got the link in the OP. It doesn’t include later amendments.

But that isn’t what Searle was arguing, that is a criticism of Searle’s Chinese Room scenario.

Yes, it is a problem with the scenario; I believe Searle creates an impossible situation, and impossible situations are not very enlightening to me as a matter of analytical philosophy. But, disregarding my own opinions on meaning… not to sound trite—I mean I’m not a PhD philosopher or anything—but that paper is full of assertions rather than explanations. No one has taken it on, so I will do so, in true “vivisection by vB code” fashion that only philosophical debates can muster. :smiley:

Systems Reply

Yeah, neither is the electrical behavior of my neurons and the action of ‘my baking of a cake’. So?

How does he know? He just asserts it, right off, because he already assumes the only thing that can ‘understand’ is the man in the box. But two things are being questioned: the whole box, and the homunculus. Why oh why should we expect their answers to accord except possibly in very degenerate cases of meaning (for example, solving equations not written in Chinese or English)?

What does he have? An ability to feel? But how do we know? Well, we ask him.

As far as I can tell, the only grounds for the point of the CR are Searle’s own assertions in the matter, arrived at by questioning the homunculus. And if Searle is the homunculus, how do I know that he, in fact, understands English? Well, I ask him questions.

Bullshit. I’m sorry, but this is bullshit. He never offers his criteria for understanding in reply to the systems test other than, “I am telling you I understand English” which is what we would expect the CR to say, too (well, about Chinese).

Now, I don’t like the systems reply, either; I believe it makes a metaphysical assertion that is just as open to skeptical treatment as the problem of other minds; but it is the biggest irony that someone hinging their argument on the report of an unquestionable being, claiming this undermines the Turing Test, is getting worried about the other party making one assertion (which, to tell the truth, is the whole of the systems reply in itself; that is, passing the Turing Test is the definition of ‘understanding’ or ‘intentional states’ etc, so of course it is an assertion; viewed in this light, I think there can be no “systems reply” at all, they are simply talking past each other).

Robot Reply [Note: me, with a healthy degree of skepticism and a different perspective on meaning]

No, it doesn’t. It concedes that our language is not meant to function in a vacuum.

This may be (re: Schank’s program), but it at least makes me happy; I no longer find the CR impossible when it is a Chinese 'Bot.

As Dennett’s “Where am I?” elucidates, a brain doesn’t know it is a brain, either; at least, not with respect to natural inclinations of expression, and not with accustomed use of phrases like, “I am in a deep shaft below the Earth’s crust, and my brain is in a vat” or “I am floating in a vat, and my body is in a deep shaft below the Earth’s crust”. (If anyone is not aware of this story, I can give a summary on request… and I have the story here by my side, so no intentional errors of the story’s point will be made—well, assuming you trust me, anyway… do I pass the Turing Test? ;))

Would we expect, for instance, that a human instantly programmed with an ability to speak (but lacking all the attendant use of the words, and the situations they were taught in) would still report the same things and be inclined to feel the same things (that is, feel the same things about these parts of speech and their function)?

This is so close to my own opinion that it is like I am just missing Searle’s point (if I assume he has one), or Searle never follows his own thoughts through.

Of course, “all I do is follow formal instructions”! “All” a brain does is adjust electrical and chemical signals. But, significantly, when questioned in a certain way, it reports understanding.

Brain Simulator Reply

It is odd only because it is the only response you are willing to consider. That is odd, yes. :wink:

Yes, and if we could question homunculi, we wouldn’t have this as a thought experiment, but a falsified (or corroborated) theory.

Only a dualist will find this question relevant. Non-dualists will find it contains a mistake. Similarly, “Where is the essence of this chair?” is a question only a Platonist can answer. But asking these questions if we are not Platonists and dualists is as unfair to the questionee as it is to the questioner should the roles be reversed. And neither Searle (he claims) nor Strong AI proponents (by necessity) are dualists.

That’s nice. How do you know it doesn’t achieve this? By asking the homunculus if it knows or understands anything about what is happening outside of its perception. And this answer is supposed to demonstrate that there are no intentional states? If I had a homunculus in my brain, I should expect it to answer similarly, for it is in my brain, and only has access to what the brain gives it: electrical and chemical signals (which, I might add, are meaningful to us, outside of the brain, in a different way than they are meaningful to it, inside the brain; but, this is just how I view meaning, so I’ve only added it parenthetically).

The Combination Reply

Give the man a prize; he understands the Strong AI claim! If something passes a Turing Test, it understands. Full stop. Making the Turing Test harder or more complicated might convince more people. The Strong AI claim is: “this is the definition of ‘understanding’: following an algorithm.” Searle is welcome to combat this definition, though I don’t see how any thought experiment will achieve it.

John, John, John… tsk tsk. You must see the difference between the metaphysical assertion/definition: “This is understanding” and the means by which a person says, “This [or it or he or she] understands.”

Yes, and if natural language operated in a vacuum, the Strong AI claim could indeed be corroborated or falsified (barring the skeptic’s insistence that we know no such thing; that is, accepting the definition). But this is precisely the problem—in my opinion—that faces it.

Yes, we have a tendency to regard pure functionality as mechanical. But from “I feel this is so” does not follow “this is so”, and so Strong AI would have you question the rapidity with which you judge such mechanical beasts.

Neither does a single neuron, or collection of neurons, do any such thing when viewed in the manner you insist: they only manipulate and respond to chemical and electrical signals. Category. Error.

Yes: we cannot fully explain their behavior, and so we use words whose formal meaning has not been established. Why should this surprise anyone?

Rejection of behavioristic assessments of intention, etc

This is not my argument; however, it resembles it. My argument is that since these are the means at our disposal, given that their use nearly demands the assignment of intelligence etc to things we didn’t normally assign intelligence to, we should reassess our use of the word in a philosophical or psychological (should one consider a naturalistic epistemology) context, and eventually in general. With that caveat, his reply:

I don’t feel this has really been demonstrated.

That’s right; and in the physical sciences, we do not multiply explanatory or descriptive entities unless they are necessary to explain things. In this case, if something passes [what I would characterize as] the extreme Turing test (the Chinese 'Bot), then human behavior has been explained, full stop. Just because one is a cognitive scientist does not grant one the ability to put ghosts into biological machines at will. Furthermore, this quote notes what I’ve been trying to point out all along: we already dismissed the problem of other minds, one of the two things I think stand in the way of Strong AI (the other being mentioned earlier with respect to the determination of meaning in a vacuum). Granting these things, the burden of proof is justifiably shifted: why should your homunculus report intentional states about a reality he is not in contact with in a manner we cannot question him in?

I am not going to grant the Many Mansions reply any consideration. At that point the thought experiment goes out of control, IMO. It is, IMO, the “If things were different they wouldn’t be the same” argument. It is a useful thing to read before one reads the rest of the paper, however, as Searle’s “biological” prejudices are given free territory to roam. It seems Searle’s main point is the axiom above (to wit, A4): brains cause minds. This assertion is no less powerful and question-begging than the Strong AI camp’s definition of ‘understands’.

This is erl, signing off for the night… [and, might I add: whew!]

So erl, let me create a situation. The look-up tables for translating Chinese are so vast that a single man in the room cannot do it. You have a whole society of men in the room sharing the task according to some organized system. None of them know what the overall goal is. Each of them is just doing his job: I see this symbol, I respond thus.
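A rough sketch of what I have in mind, assuming the giant rulebook is simply split into slices so that no single worker ever sees the whole table (all the entries and the fallback are invented for illustration):

```python
# A rough sketch of the "society of men" setup: the rulebook is split
# into slices so no single worker ever sees the whole table.

def make_worker(slice_of_rules):
    """One man in the room: 'I see this symbol, I respond thus.'
    He knows only his own slice, not the overall goal."""
    def worker(symbols):
        return slice_of_rules.get(symbols)
    return worker

workers = [
    make_worker({"你好吗？": "我很好。"}),  # worker 1's slice of the table
    make_worker({"再见。": "再见！"}),      # worker 2's slice
]

def society_of_men(symbols):
    """The organized system: pass the input down the line until some
    worker's slice matches. The society produces fluent output even
    though no individual member understands Chinese."""
    for worker in workers:
        response = worker(symbols)
        if response is not None:
            return response
    return "请再说一遍。"  # no rule matched: "Please say that again."

print(society_of_men("再见。"))  # 再见！
```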

None of these men understand Chinese, by any stretch.

But this society of men can produce an output that translates Chinese. Does the society of men understand Chinese?

If I understand you above, then I believe that you would say yes. Others’ thoughts?

I would say yes. And I am comfortable substituting neurons or AI components for the men and having the analysis hold equally.

Being clear to differentiate between your operational definitions of “understanding” and “intelligence” and “consciousness”, of course. I do not think that such meets a reasonable definition of “intelligence” nor of “consciousness”.

Oh, sure! You are correct in your estimate. However, I still remain unsure that, even in Searle’s hypothetical, the man doesn’t understand Chinese. If all these symbols are Chinese, then I think it could be safe to say the man understands what to do with them; we cannot say that he understands Chinese like a Chinese speaker does, though. But why should this be a reasonable requirement for understanding (other than the inherent bias necessary to answer the question of, “Does it understand”)?

And in absence of this definition (which we all admit we don’t have, except the Strong AI camp), we are left only with our normal use of the words. We don’t know that understanding is a state one has, a state one is in, a permanent physical quality or property, etc etc. We only have how the word is used in everyday contexts, and this use strongly implies the necessity of using it to describe things that pass Turing Tests. It is the unfounded, unjustified, metaphysically arbitrary “thing” (that Searle wants to say is named ‘understanding’) that he can’t understand us attributing to robots (and boxes, etc). And most of us (those who aren’t moved by this argument) generally say:
(1) We’re not attributing anything, we’re following consistent use of a word;
(2) There is nothing to attribute;
(3) Whether or not there is something to attribute, we are using the word until a better method of decision, or a better definition, comes along (which the CR does not offer at all, in either case); or,
(4) This is what the word means, so we must use it that way.

There are many of us here not motivated by the CR: does anyone not fit into one of those categories? Or who would prefer a different description?

Well, it looks as though erislover already provided a nice link that summarizes most of Searle’s positions.

I tend to side with Searle’s objection to the Strong AI position (organizational functionalism), but I, too, am a bit confused as to the meaning/definitions that Searle is employing in making his argument.

Something that strikes a chord with me (personally) in Searle’s argument against Strong AI are some of the (to me) problematic conclusions one can draw by supporting the Strong AI position. If all that is required for a computer AI to “understand” Chinese is that it successfully manipulates symbols using a sufficiently large table of rules - “instantiating the program(?)” - such that any output from the computer is indistinguishable from a human being’s, then what’s to stop one from making the claim that, for example, a thermostat “understands” that it needs to shut down at a given temperature?

I know the thermostat example is overly simplistic, but at what point do we draw the line between “this entity understands” and “this entity is just mimicking/simulating understanding”? Can we provide a clear and convincing definition that allows us to make a distinction? Or should a distinction even be made?
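To put the thermostat in the same terms as the room, its entire “program” could be written out as something like this (the threshold is invented); the worry is whether scaling this rule up into a full conversational rulebook changes anything in kind, or only in degree:

```python
# The thermostat, written out as the trivially small "program" it
# instantiates. The threshold is invented for illustration; this single
# rule is the whole of what the device could be said to "understand."
SHUTOFF_TEMP_C = 22.0

def thermostat(current_temp_c: float) -> str:
    """One rule: if it is too hot, shut the furnace down."""
    return "off" if current_temp_c >= SHUTOFF_TEMP_C else "on"

print(thermostat(25.0))  # off
```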

Here’s a little thought experiment:

Suppose, for the sake of argument, that a computer AI has been created that passes the Turing Test. For those who favor Strong AI, all this means is that the computer AI successfully exhibited “behavior” in its responses that is indistinguishable from human responses. So we can conclude that the computer AI “understands”, for example, Chinese, in its responses to questions that are indistinguishable from human responses.

Now let’s suppose that this very same computer AI is sophisticated enough to answer almost any question, on par with the capacity of the human brain. We pose the question: “Does a thermostat ‘understand’ something in the same sense that you (the computer AI) understand something?”

If the computer AI answers yes, it is saying that the way it (a thermostat) understands something is the same as the way I (the computer AI) understand it. That is, the thermostat is exhibiting behavior (it shuts down at a given temperature, thus it “understands” that the temperature is getting too high). The program or procedure a thermostat instantiates to shut down when the temperature gets too high is an extremely limited one, but by the behavior it exhibits, one can surmise that it exhibits a level of understanding of the same kind (though not of the same degree) as that of my (the computer AI’s) understanding.

If one accepts the first case, then at what point would the computer AI decide that an entity doesn’t understand something? Would “instantiating some type of program” (however rudimentary) be required for the computer AI to conclude that an entity “understands” something? What justification could it give (or be programmed into the AI) in deciding whether or not a rock or water “understands” something?

If the computer AI answers no, that the thermostat doesn’t “understand” in the same sense that I (the computer AI) understand something, then wouldn’t it be invoking a similar kind of objection a la Searle? That is, the way in which the computer AI understands something (instantiating a program) is somehow different from the way a thermostat “understands” something.

If one accepts the second case, then Searle’s position is vindicated. That is, the computer AI’s understanding is somehow qualitatively different from a thermostat’s understanding. And, therefore, a human being’s understanding is qualitatively different than that of a computer AI (because we can make the claim that it is). After all, if the Strong AI thesis is correct, then all we as humans are doing is just “instantiating our programs” just as a computer AI would. If part of our “program” is to reject the notion that a computer AI “understands” in the same way that humans understand, then we’re right back where we started.

eponymous

Well, it sounds silly, but why can’t we say it? A thermostat’s IQ would be, well, pretty much 0. We wouldn’t expect to be able to meaningfully question it.

There are other examples, too, that Penrose has mentioned, questioning the manifestation of intelligent algorithms… that is, how do they need to exist in order to be intelligent? And various answers present themselves here, which can cause us to attribute intelligence to things like books (for instance).

:slight_smile: Sure! What, we don’t expect it to make mistakes? :wink: That is funny, though.

Why? Is intelligence a linear scale, for example? Is it quantifiable? If it is, and the difference is quantitative, I’ll not object. If it is qualitatively different, why do we say this based on the responses of the object in question? Especially if we are dubious about the claim of the computer understanding in the first place!

I agree with you that it should be stopped. It will not be stopped, however. If it is possible, then somebody will eventually do it, mad scientist or no.

I would offer a friendly challenge to anyone to name a technology so hazardous or morally unacceptable that no one anywhere will ever take it up again. Nuclear weapons? Biological weapons? Surely we’d stop somewhere?

The monkeys do not stop.
The monkeys cannot stop.
That is what makes them monkeys.
That will eventually make them cease to be monkeys.

On Searle:

Instead of a lone person, imagine if you were in a massive office where each person is constrained to their cubicle. You can send and receive messages to and from adjacent cubicles via a slot, but you never meet anybody else. In front of you is a code book which tells you what you are supposed to do when you receive a message.

Now, this example is almost exactly analogous to the previous one, and you would be hard pressed to make a case stating that this model would obtain any more “understanding” than the classical model.

However, while the classical model is based on a computer algorithm, this model is based on the human brain. How can we attribute understanding to humans if we do not attribute it to neurons?
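A rough way to picture the office, assuming each cubicle just applies its own code book and pushes the result through the slot to the next one (the messages and rules here are made up):

```python
# A sketch of the cubicle-office model: each cubicle knows only its own
# code book and the slot to the next cubicle.

class Cubicle:
    def __init__(self, code_book):
        self.code_book = code_book  # local rules: message in -> message out
        self.next_slot = None       # the adjacent cubicle, if any

    def receive(self, message):
        """Apply the code book and pass the result through the slot.
        The occupant never meets anyone and never sees the whole task."""
        out = self.code_book.get(message, message)
        return self.next_slot.receive(out) if self.next_slot else out

# Wire three cubicles into a chain (a brain-like network would be
# vastly larger and more tangled than this).
a, b, c = Cubicle({"ping": "x1"}), Cubicle({"x1": "x2"}), Cubicle({"x2": "pong"})
a.next_slot, b.next_slot = b, c

print(a.receive("ping"))  # "pong" -- the office answers; no cubicle understands
```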

if we ever get a good definition of “understands” from someone who is arguing either way, perhaps we’ll have an answer for you.

is a human sophisticated enough to answer that question? it seems as though we’re arguing over it now. it seems that if the computer could answer such a question truthfully, it would be a failure of the turing test.

whether or not a thermostat understands depends very heavily on how we define “understands”. if understanding is something organisms with brains do, then no, thermostats don’t understand. but i think you’d be hard-pressed to come up with a more abstract definition of “understands” that limits it to organisms with brains.

Why?

G.E.M. Anscombe has an interesting argument that intention is not ‘a something’ that accompanies actions (not a mental state, for example), and does so without appealing to Wittgenstein’s private language argument (though it would apply there, as well). Roughly, this would cause infinite regress, as one would have to intentionally look for intentions in order to grasp the ‘sensation’ or event, etc.

if a computer can answer a question that no human can, we can tell that it’s not human.

when i was younger (say, 17 or so), before i had heard the physical arguments against free will, i decided that free will was impossible anyway. the way i reasoned this was something akin to mr. anscombe. when i make a decision, i evaluate some value i have. somehow i have to arrive at that value. suppose i have another value that allowed me to arrive at that one. at some point, though, every decision i ever made was influenced solely by environment or genetics, which are both outside my intentionality. so, for those who believe in free will, where does it come from? what is intentional, and what caused that intentional being to be that way?

to be honest, i never really believed that the intentionality argument held any water, since i fail to understand where my own intentionality comes from.

But if the AI responded in that way, it would just be a Searle-based AI :wink: You see what I mean? All factual answers do not lie in our language, simply waiting to be constructed. AIs could make cognitive mistakes just like the rest of us if all they had to work with was our language.

I, too, have a hard time finding an anchor for free will.

Final note, Gertrude Anscombe was a female (and, surprisingly or not, a Catholic). She died in 2001, I believe, and most of her writings were in philosophical journals rather than complete philosophy books. The one book of hers I’ve been able to find, Intention, is an excellent continuation of Wittgenstein’s method of philosophy, but without understanding Wittgenstein, it might be quite intractable. His method of analysis was, I think, wholly unique, and—for me—quite convincing.