Chinese Rule Book
The Rule Book that allows you to answer all the Chinese questions is intelligent. There is no other way a list of rules can respond to all possible questions.
You need to try 'Eliza', which can respond to all possible questions but is neither intelligent nor conscious.
To expand on this, it is quite easy to write a rule-book which answers all possible questions if it does so by reflecting many of the questions back. This may give the rule-book, or the computer program, the appearance of intelligence and consciousness, but it does not provide the actual Ding an sich (the thing-in-itself).
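The reflection trick described above can be made concrete. Here is a minimal sketch of an Eliza-style responder in Python; the rules and phrasings are invented for illustration and are not Weizenbaum's actual ELIZA program:

```python
import re

# A toy Eliza-style responder: it reflects the user's own words back
# using a handful of pattern rules, understanding nothing. A minimal
# sketch of the reflection trick only.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)\?$"), "What do you think about {0}?"),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).strip().rstrip("?"))
    # Default deflection when no pattern matches.
    return "Tell me more."
```

A short conversation with this function looks plausible for a turn or two, which is exactly the "appearance of intelligence" the poster describes.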
Last season's fruit is eaten
And the fullfed beast shall kick the empty pail
Remember this is a hypothetical "Chinese Rule Book". Hypothetical things can be impossibly large, heavy, or complex, as is the case for a "Book" that has an answer to every question. You just have to expand your mind before wrapping it around the concept, as you did when imagining real-time answers from the guy stuffed in a box reading this giant, complex book.
When you first heard the conundrum of what happens when an irresistible force smacks against an immovable object, did you think "There is no object that can't be moved" or "There is no force that can't be stopped"?
Think how stupid the average person is.
Now realize if that's the average, then half the planet is dumber than that.
Remember this is a hypothetical "Chinese Rule Book". Hypothetical things can be impossibly large, heavy, or complex as is the case for a "Book" to have an answer to every question.
Hmm... this might actually be a confirmation of what pwright said! An important premise of the entire gedankenexperiment is that books per se are considered non-intelligent. Thus, if you only follow the rules in a book, you're not displaying intelligence. But, as you say, this is not an ordinary book; it's arbitrarily (and perhaps infinitely) large and complex. So we have to reconsider our premise, and we might legitimately conclude that the book is intelligent!
nitpicking PS:
Think how stupid the average person is. Now realize if that's the average, then half the planet is dumber than that.
Not necessarily; only if the average happens to coincide with the median. Someone else on this board (I forget who) has a quote (I also forget of whom) that says the overwhelming majority of people have more than the average number of legs.
True, Holg. If you base your average on the mode (instead of the mean or median) you will probably find that the number of people truly dumber than average is limited to small groups of small-time crooks (writing hold-up notes on their paystubs, for example) and American corporate executives.
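The legs quip can be checked with a toy computation. The population numbers below are made up purely for illustration:

```python
# Toy illustration of the legs quip: with a made-up population, almost
# everyone has more legs than the mean, because a single one-legged
# person pulls the mean just below 2 while the median stays at 2.
legs = [2] * 999 + [1]  # hypothetical population of 1000 people

mean = sum(legs) / len(legs)           # 1.999
median = sorted(legs)[len(legs) // 2]  # 2

above_average = sum(1 for n in legs if n > mean)
print(mean, median, above_average)
```

Here 999 of 1000 people exceed the mean, so "more than the average" holds for the overwhelming majority, exactly because the mean and the median come apart.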
No. Eliza is irrelevant in this discussion. The Chinese room gives real, meaningful answers to the questions posed to it - Eliza does not.
But, as you say, this is not an ordinary book, it's arbitrarily (and perhaps infinitely) large and complex. So we have to reconsider our premise, and we might legitimately conclude that the book is intelligent!
Complexity does not equal intelligence. The book, by definition, is not intelligent. That is a fundamental element of the premise. In fact, in later arguments, Searle said let's dispense with the rule book and allow that the man inside the Chinese Room has memorized the rules. It is therefore finite and there are no other elements in the system. Even though the man in the room can take input in Chinese and produce output in Chinese, he still does not understand Chinese. Which, incidentally, is Searle's point: just as the man in the room doesn't understand Chinese, neither can a computer ever be conscious.
I personally think Searle's logic is flawed, though I tend to agree with his premise.
Auraseer - Yes, Eliza is too primitive, and Eliza's cousins are not much more convincing. If Eliza came up with a cousin that COULD fool me for a couple of hours, that cousin would be intelligent. It hasn't been happening!
Note that the converse is not true. There are plenty of people on this board who could convince me they are actually poorly autogenerated reply automatons. This does not make them unintelligent.
JoeyBlades, the book-in-a-box experiment doesn't say anything about "real and meaningful" answers. All it says is that people believe the box understands Chinese.
That's exactly what Eliza tries to do-- convince you that it understands your answers and its own responses. Even if you think Eliza itself is too primitive to fool anyone, it has more sophisticated cousins that do a better job.
I agree... and yet I don't. Yes, there are parallels between what Eliza does and the Chinese room. They both operate on a syntactic level without semantics or sentience. However, a key element in all of Searle's arguments and counterarguments was the human in the box. Searle wanted to make the point that even though we know this human is sentient and capable of understanding (given the right set of tools), he does not understand the semantics of what he is doing - only the syntax. Eliza takes away this very important element.
BTW, I've played with several "supposedly" more sophisticated versions of Eliza... I can usually get them spouting gibberish within a dozen exchanges.
JoeyBlades, the book-in-a-box experiment doesn't say anything about "real and meaningful" answers. All it says is that people believe the box understands Chinese. That's exactly what Eliza tries to do-- convince you that it understands your answers and its own responses. Even if you think Eliza itself is too primitive to fool anyone, it has more sophisticated cousins that do a better job.
The problem with the Chinese Box is that it flat-out assumes what it tries to prove--namely, that "Real Understanding" is different from "just following the rules." Searle doesn't provide any explanation of what it means to "really" speak Chinese, as opposed to faking it. He just assumes we'll say, "Oh, *of course* they're different" without considering the possibility that they are, in fact, the same.
Searle is preaching faith in an unseeable Absolute Truth (Real Understanding is more than just syntax) over actual observations (according to any real-world test you could administer, the man in the Chinese box speaks Chinese.)
He'd be right at home in a medieval monastery.
Y'know there is nothing in the problem that says the rulebook couldn't look like this:
1. Translate the question into English by applying these syntactic rules:
2...n [rules that translate Chinese to English]
n+1. Think of an English answer to the question.
n+2. Translate the answer into Chinese by applying these rules:
n+3-N [rules for the translation]
So the book could contain nothing but syntactic rules and one "subroutine call" to invoke the human's intelligence.
(Not that the syntactic part would be trivial)
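The translate/think/translate rulebook above can be sketched as a pipeline. The tiny translation tables below are trivial stand-ins for the rule sets 2..n and n+3..N, and `think_in_english` stands in for the single "subroutine call" to the human's mind:

```python
# A sketch of the "syntactic rules plus one subroutine call" rulebook.
# The two lookup tables are stand-ins for the translation rules; real
# Chinese-English translation is of course far beyond a lookup table.
ZH_TO_EN = {"你好吗": "how are you"}       # stand-in for steps 2..n
EN_TO_ZH = {"fine, thanks": "很好，谢谢"}  # stand-in for steps n+3..N

def think_in_english(question):
    # Step n+1: the one non-syntactic step, performed by the human.
    return "fine, thanks"

def answer(chinese_question):
    english = ZH_TO_EN[chinese_question]        # steps 1..n: translate in
    english_answer = think_in_english(english)  # step n+1: actually think
    return EN_TO_ZH[english_answer]             # steps n+2..N: translate out
```

The point of the sketch is structural: everything except `think_in_english` is pure symbol manipulation, yet the system as described would route every question through genuine human understanding.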
As my Short Duration Personal Savior, "Bob" Dobbs, says: "You know how dumb the average person is? Well, by definition, half the people are even dumber!"
The problem with the Chinese Box is that it flat-out assumes what it tries to prove--namely, that "Real Understanding" is different from "just following the rules." Searle doesn't provide any explanation of what it means to "really" speak Chinese, as opposed to faking it.
according to any real-world test you could administer, the man in the Chinese box speaks Chinese.
You're missing the point. Searle is not maintaining anything about the system's ability to speak Chinese. He is maintaining that the system can emulate understanding, but it does not really understand.
n+1. Think of an English answer to the question.
So the book could contain nothing but syntactic rules and one "subroutine call" to invoke the human's intelligence.
You are describing a system whereby the **MAN** in the room **COULD** understand. If you replaced the man with strong AI, however, the translation to English accomplishes nothing.
I think you are both still missing the point. Don't think about how the system can or cannot speak Chinese or how the man might be enabled to understand. Think instead how the man's lack of understanding is like that of some computing system. Symbol manipulation is not "necessarily" understanding...
I had to throw in that "necessarily" for Nickrz's benefit - lest he use my last sentence against me in another discussion. [wink]
JoeyBlades--I misspoke; I should have said that, so far as any outside observer is concerned, *the system as a whole* speaks Chinese.
The *man* in the box may not understand what's going on. But he's just one small part of the total system. More importantly, the terms of the thought experiment dictate the man can't be seen by the observers. They don't even know he *exists*.
Scientists base their conclusions on observable data. Since all the data indicates the box speaks Chinese, scientists tend to dismiss Searle's claim that the box doesn't "really" speak Chinese as metaphysical mumbo-jumbo based on unobservable Absolute Truths. It's beyond the realm of science.
P.S. You wrote:
"You're missing the point. Searle is not maintaining anything about the system's ability to speak Chinese. He is maintaining that the system can emulate understanding, but it does not really understand."
Searle's claim is *specifically* about the system's ability to speak Chinese. The Chinese box is an extended metaphor, where "really speaking Chinese" stands for "really being intelligent." A system where the man uses his own noggin to answer the questions and the box only handles translation is exactly what Searle had in mind.
08-27-2000, 01:59 AM
There are serious flaws in Searle's argument.
Searle asks us to dispense with the box and written rules and imagine that a person learns the rules by heart. Then, according to Searle's argument, the person can successfully apply the rules without understanding a word of Chinese. This is supposed to show that understanding must consist in something beyond applying the rules (an algorithm).
The problem is as follows: It is not Searle alone that is answering -- it is the whole system -- Searle plus the rules. Neither the rules in isolation nor Searle in isolation understands Chinese.
The alleged force of Searle's argument rests solely in our not being accustomed to thinking of conscious systems as bifurcated -- with one set of internal linguistic rules and another set of external rules that is applied using the internal set. When Searle asks us whether the subject understands Chinese, he expects us to consider whether the subject, seen apart from the whole system (the person minus the rules), understands Chinese. Of course he can't -- this is implicit in the thought experiment. The subject plus the rules is a different system than the subject considered alone without the rules. When we ask if the subject alone understands Chinese, we ask whether half of our system, not the whole system, understands Chinese -- of course it can't, since it takes the whole system to understand Chinese. One part of my brain in isolation (e.g. my right brain hemisphere) may not understand English, but you cannot conclude that I as a whole system do not understand English.
The system, i.e. Searle plus the rules, is not like any being we have encountered before (and such a system is probably impossible for practical reasons), so we are reluctant to consider that such a being could possess understanding. We naturally think of understanding as something that only people have. Searle's argument derives all of its force from this "common sense" idea. But this is just unthinking prejudice and serious question begging: Searle proposes that something that looks like understanding cannot be real understanding because it does not correspond in a superficial way to our everyday concept of understanding, i.e. understanding in a single individual without the aid of a rule book. Very question begging. I'm surprised Searle's article was even published.
08-27-2000, 03:21 AM
There's another problem with this idea (the Chinese reading room). From a linguistics standpoint, any true language contains an infinite number of sentences. And Chinese syntax is different from English.
So the book either:
1) contains an infinite number of sentences. If you assume this, you might as well assume the book is a magical book from the Land of the IPU. You've discarded reality.
2) Churns out sentences with translated words in the wrong order. In which case, it sounds like a guy with an English-Chinese dictionary, as opposed to a guy who speaks Chinese.
08-27-2000, 04:44 AM
The trouble with the "Chinese Box" idea is that we can come up with an infinity of possible sentences. How can your rule-book contain an infinity of possible responses?
I agree that if we ever invent strong AI, it won't work via a list of algorithms for answering questions. But that doesn't mean that a "computer" can't be intelligent. After all, our brains are composed of ordinary matter arranged in interesting ways. There's no magic involved. So if our brains can understand things, then what is the problem with other similar structures that are not human brains understanding things?
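One wrinkle in the infinity objection is that a finite set of rules can generate an unbounded set of sentences via recursion, a standard point from linguistics. The toy grammar below (my own invented example) illustrates this; what a finite book cannot do is explicitly list a distinct canned answer for each of infinitely many questions:

```python
# A finite grammar that nevertheless generates an unbounded set of
# grammatical sentences, via one recursive rule:
#   S -> "dogs chase " S  |  "cats"
# Finitely many rules, infinitely many sentences.
def sentence(depth):
    if depth == 0:
        return "cats"
    return "dogs chase " + sentence(depth - 1)

# Each depth yields a new, longer, distinct sentence.
examples = [sentence(d) for d in range(3)]
```

So the real question is not whether finite rules can cover infinite input, but whether rule-following of this kind amounts to understanding.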
08-27-2000, 09:47 AM
Please include a link to Cecil's column if it's on the straight dope web site.
To include a link, it can be as simple as including the web page location in your post (make sure there is a space before and after the text of the URL).
Cecil's column can be found on-line at this link:
What is consciousness? (11-Jun-1999) (http://www.straightdope.com/columns/990611.html)
moderator, "Comments on Cecil's Columns (http://boards.straightdope.com/sdmb/forumdisplay.php?forumid=1)" forum
08-27-2000, 03:28 PM
You may have a point -- it may be impossible to generate a potentially infinite number of sentences from a finite algorithm.
But, I fear, if an algorithm were not responsible for understanding (here equivalent to competence in a language per the Turing test), it would spell the end of strong AI. If a finite algorithm were not responsible for understanding, how could a machine (or any physical object), which presumably consists of a finite number of parts performing a finite number of operations, generate understanding? The alternatives to a finite algorithm seem to be an infinite algorithm or randomness, the latter of which seems to me to be the opposite of an algorithm. I don't know what an infinite algorithm would be; and how could understanding arise from randomness, which is by definition unresponsive to and disconnected from the outside world?
The human brain has a finite number of parts that must perform a finite number of operations. These operations must follow a set of rules -- an algorithm. The alternative is that they don't follow an algorithm -- that there is some sort of non-rule-following going on there; but this seems equivalent to saying that some level of randomness (the opposite of rule following) is responsible for understanding. This is the point made by Roger Penrose in his book "Shadows of the Mind." Unfortunately, Penrose does not really explain how the quantum randomness he proposes lies at the heart of understanding, and it remains a mystery.
Let us assume that Penrose is right -- some quantum effects outside of the brain's algorithm produce understanding. There is no reason in principle that these quantum effects cannot be built into a computer that would be partly algorithmic and partly quantum and thus instantiate true understanding (I think Penrose would agree with this point). Understanding would then be algorithmic up to a point.
The point made by dualists is that there is some additional element, a soul, that must be added to the brain for us to have understanding or consciousness. I know that that is not Penrose's point (and maybe not Searle's either) and I certainly don't agree with it. Proposing the existence of the soul doesn't really advance the debate because it just pushes the same questions back to another level: we still ask "in virtue of what property does the soul have understanding" -- in virtue of its instantiating an algorithm? Of course, people who believe in souls usually believe that the matter cannot be understood beyond a certain level, and they do not believe it is appropriate or possible to inquire into the mysteries of the soul.
None of this rescues Searle. Penrose intelligently attempts to show that understanding cannot be produced solely by an algorithm by exposing the (alleged) inherent weaknesses of algorithms that were shown in Goedel's proofs. I think Penrose would not allow for the existence of the Chinese rule book, because his entire argument is that such a book is not possible in the first place. Searle, on the other hand, by allowing for the existence of the rule book, assumes that understanding can be produced algorithmically and then attempts a reductio ad absurdum, unsuccessfully I think.
I think that Cecil was wrong in his article "What is consciousness": Searle's article is not the best challenge to strong AI ever written. I think Penrose's book is far better, even if I'm not sure I agree with it.
08-29-2000, 12:09 PM
Human consciousness is not threatened by Goedel's incompleteness whatever. Our brains do not work perfectly, they do not have to *always* come up with the right answer, just often enough that we don't get eaten before we make at least one copy of ourselves on average.
Maybe I don't understand the concept of algorithm well enough, but it doesn't seem to me that our brains are structured like a computer...with lists of instructions: If X happens then Y, if Y and Z, then Q. It seems to me that our brains approximate everything, we don't give exact answers. We can use our general purpose non-algorithmic brains to perform algorithms (like learning division) but the thoughts that we use to perform the algorithms don't have to be algorithmic.
So much of what happens in our bodies is automatic, without involving calculation at all. A signal comes from a nerve cell, and such and such a message is sent to the muscles to get your fingers off the hot stove. But you don't have a hot-stove-avoidance algorithm here, do you?
08-30-2000, 01:08 AM
I agree that Goedel's incompleteness theorems do not threaten the idea that the brain instantiates an algorithm, and accordingly, I disagree with Penrose's argument. But I think Penrose makes a good argument, and I don't think it's obvious that he is wrong.
Penrose's argument is based on Goedel's proof. Goedel's incompleteness proof shows that within any consistent formal system (roughly, an algorithm) powerful enough to express arithmetic, there are true sentences (or theorems) that cannot be proven within that system. Another way it is often put is that such a consistent system cannot prove its own consistency from within. I may be out of my depth a little here, since there are really two or three related types of incompleteness that Goedel showed in several related proofs.
Furthermore, it is possible to create a sentence within any system that says of the system that the system is consistent (assuming it is consistent). But, Goedel showed, if the system is in fact consistent, it will be impossible to prove within the system that the system is consistent. The system will not return an answer on the consistency issue. But we humans can be inventive and find ways to prove the consistency of the system by using methods that lie outside of the formal system. So we humans can know something that the formal system does not. Therefore, Penrose argues, human beings must not be using a formal system. If we were using a formal system to do mathematics, we could never prove the consistency of our own system -- the one we use to do mathematics. But, Penrose argues, we can in theory.
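The self-reference at the heart of the proof is often written schematically as follows. This is standard textbook notation, not drawn from the thread or from Penrose's book:

```latex
% The Goedel sentence G for a formal system F asserts its own
% unprovability in F:
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
% The second incompleteness theorem: if F is consistent, then F cannot
% prove its own consistency statement,
\text{if } F \text{ is consistent, then } F \nvdash \mathrm{Con}(F),
% where Con(F) abbreviates the unprovability of a contradiction:
\mathrm{Con}(F) \;\equiv\; \neg\,\mathrm{Prov}_F(\ulcorner 0 = 1 \urcorner)
```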
This is a very rough sketch and I may have it slightly wrong, since it is long time since I read Penrose's book. Penrose is not the first to come up with this argument, and it has been much criticized over the years.
Penrose has been criticized for assuming that we are in fact consistent (he rejects the possibility that we are actually inconsistent as a matter of mathematical faith). Also, Penrose has been criticized for rejecting the possibility that our system might be so complex that we could never prove its consistency.
I think that simply getting things wrong or making mistakes and errors in thought and judgement does not prove that we are inconsistent or incomplete. A computer doing mathematical proofs using a formal system can make mistakes even if the system is consistent -- computers can make mistakes if they overheat, if they have viruses, or if they are just full of bugs. Likewise, we humans often make mathematical or logic mistakes even if we know our logic or math -- many things can interfere with the smooth operation of our brains. But when we review our thinking, we can catch the error because our system is not bad, it just didn't operate the way it was supposed to.
So I think that Goedel's theorems do present real questions as to whether our minds are formal systems, and I don't believe the mere fact that we make mistakes refutes arguments, such as Penrose's, that are based on Goedel's proofs.