Searle’s Chinese Room is a classic example of begging the question: “Assume that it is possible for a system which does not actually understand things to act exactly like a person who understands things. Then it wouldn’t understand them!” Searle has never demonstrated that this hypothetical set of instructions (a rulebook good enough to pass for understanding) could even exist.
Further, his “I memorize the rules, then I am the entire system” idea is totally disingenuous. Once again, he begs the question. Which of his neurons understands speech? None of them do! And yet, we assume Searle understands speech… This implies that it’s possible for consciousness to exist without anyone being able to tell you exactly where it is.
I have no strong opinion on whether or not AI can be “conscious”, but Searle’s arguments against it are embarrassingly bad.
Welcome to the SDMB, and thank you for posting your comment.
Please include a link to Cecil’s column if it’s on the Straight Dope web site.
Including a link can be as simple as pasting the web page address into your post (make sure there is a space before and after the URL).
I know Searle’s arguments against strong AI have been attacked a lot, but I don’t think that they are embarrassingly bad.
To put the “Chinese Room” argument into something you can try out for yourself, look at Alta Vista’s babelfish program (which translates sentences from one language into another). If you type in a sentence, you’ll get a translation that was obviously created by replacing one word at a time, rather than handling phrases or expressions. You can see how a program like this manipulates only the syntax of the language, not the semantics, and I think this is an important concept in comparing AI to human intelligence. Clearly the program behind babelfish doesn’t understand the languages it is translating; it is just swapping symbols.
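Just to make that symbol-swapping concrete, here’s a toy sketch in Python (the tiny word table is invented, and it’s obviously nothing like babelfish’s real code) of a “translator” that does nothing but look each word up in a table:

WORD_TABLE = {
    "the": "le",
    "cat": "chat",
    "is": "est",
    "on": "sur",
    "mat": "tapis",
}

def translate(sentence):
    # Pure symbol substitution: each word is swapped independently, and
    # unknown words pass through untouched. No phrases, no idioms, no meaning.
    return " ".join(WORD_TABLE.get(word, word) for word in sentence.lower().split())

print(translate("the cat is on the mat"))               # le chat est sur le tapis
print(translate("the cat let the cat out of the bag"))  # le chat let le chat out of le bag

The second sentence comes out as half-translated nonsense, which is exactly the behavior you see when you feed babelfish anything idiomatic.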
For a different take on the argument against strong AI (as well as a decent discussion of Searle’s argument), try checking out Roger Penrose’s The Emperor’s New Mind…he uses quantum physics to argue against it.
See, in his Chinese room thought experiment, he’s in a black box, translating Chinese, but all he does is use a rule book to do the translating–he doesn’t understand Chinese. Clearly, he says, that is all that computers do, and so they don’t/can’t/won’t understand Chinese either.
In the Polish room, the rule book is replaced by…a computer. Searle still doesn’t understand Chinese, so he would say that the black box doesn’t either. But it does! Surprise, inside the “computer” is one of the Polish logicians*, fluent in Chinese. QED.
*(purportedly, the smartest people on the planet)
I don’t agree that your Polish room is a refutation…I find it hard to apply the label “consciousness” to the Chinese Room, wherein there is something just following an explicit set of rules. However, in your example, you introduce human intellect (the Polish logician), which changes everything. In your Polish room you now have a definite consciousness within the black box.
Let’s say that we introduce an idiomatic expression into the Chinese Room. Now, unless that rulebook covers every possible combination of words used by various dialects within the language (which no translating program does right now at least – unless maybe the NSA has developed something), sooner or later the Chinese Room is going to reveal that it doesn’t actually understand the language. It’s going to spit out a nonsensical translation of a metaphor or whatever idiomatic expression is used.
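To illustrate the coverage problem, here’s a quick sketch in Python (the phrase table and sentences are invented for the example): a rulebook that knows a handful of idioms handles those fine and leaves everything else painfully literal:

PHRASE_TABLE = {
    "kick the bucket": "die",
    "raining cats and dogs": "raining heavily",
}

def rulebook_translate(sentence):
    # Replace only the idioms the rulebook's authors anticipated;
    # anything they missed stays word-for-word literal.
    text = sentence.lower()
    for idiom, meaning in PHRASE_TABLE.items():
        text = text.replace(idiom, meaning)
    return text

print(rulebook_translate("She will kick the bucket"))  # she will die
print(rulebook_translate("He kicked the bucket"))      # he kicked the bucket (the tense defeats the rule)
print(rulebook_translate("Don't spill the beans"))     # don't spill the beans (idiom not in the book)

No matter how many entries you add, the room is only ever as good as its book.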
In the Polish room, however, that’s not going to be the case, because you have a human inside the black box who understands the language. So when you send an expression into the Polish room, the human within is going to understand that not everything has to be translated literally. The Chinese Room, following the rulebook, won’t be able to understand that.
(BTW, I thought Sicilian logicians were the smartest…or have I just watched the Princess Bride too many times?)
That’s exactly the point. Searle would still insist that there is no consciousness there, because he can’t “see” it–that is, until the Polish logician is revealed. It fits the design of his Chinese room argument–but proves that he is wrong, since all three of us (you, me, and Searle) would agree that there is consciousness there, once we reveal the Polish logician.
Princess Bride used Sicilian logicians because Polish logicians charged too much. Just remember, Hewlett-Packard doesn’t use reverse Polish notation for nothing.
One other thing, I don’t think that there is a single person who knows every idiomatic expression. Or word, for that matter. An intelligent/conscious person is going to spout nonsense occasionally, too.