I think there is something to be said for the distinction between following a rule and merely acting in accordance with it.
I mean, there’s an absurdly low chance of a person developing cancer* at a given moment for no apparent reason, but if there were a universe with the exact same physical laws as ours, there’s a chance that, by pure coincidence, every single time a 15-year-old male points at somebody and says “KACHOOM” the person they point at develops cancer. Does that mean that 15-year-old boys can practice the magical art of Cancermancy in that universe? Could it be said to have different physical laws after all? After all, it only happens by pure, random chance – there’s no causal link between the actual act of pointing, saying “KACHOOM”, being fifteen and male, and someone developing cancer. It just so happens that in this universe it happens. Every. Single. Time. While I agree that P-zombies are silly, that’s only because things like “qualia” and “consciousness” are so subjective and don’t really have any mechanism or objective way of measuring them. It’s only when there’s a clear, objective metric for determining something that the distinction becomes meaningful.
So I don’t think it applies well to the concept of AI (at a philosophical level). I don’t think there’s some magical thing – whether it be a “soul” or a quirk of biology – that makes us “special” and able to understand things. Or perhaps I’m trying to say that if the Chinese Room is merely “acting in accordance with the rules of Chinese” rather than actually “following them”, then a fluent Chinese speaker is also merely “acting in accordance with the rules”. Searle doesn’t really offer any mechanism by which the two systems differ and instead substitutes it with “but it feels wrong…”, so I assume that until he puts a reasonable one up, there is no mechanism separating them.
*Actually, the probability is 0 because time is continuous, etc., but that’s not the point.
It’s rules programming all the way down. Whoever programmed the punishment clearly knows the rules. In fact you can do a GA that learns how to play chess, but the fitness function has to encode the rules.
I know the rules of chess, but I’ll accept that I don’t know chess - I really suck at it. And a program could learn to play better without explicit guidance, since the fitness function is winning or losing (though a board evaluation function would go much faster.)
In your example, it would end up with the rules encoded once it stops getting punished. If a monster taught a kid chess by rapping his knuckles every time he made an illegal move, don’t you think he’d eventually know the rules? How is that any different from someone reading the rules, and possibly making mistakes and getting corrected, until he eventually internalizes them? If you’ve ever taught a little kid, this is how it is done (no knuckle rapping though.)
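To make the point about where the rules knowledge lives a bit more concrete, here is a minimal Python sketch of the fitness-function setup discussed above (assuming the python-chess library is available; the policy and function names are just made up for illustration). The learner only ever sees a win/lose score, but computing that score requires a referee, the Board object, that already knows every rule:
[CODE]
# Minimal sketch, assuming python-chess: the "fitness" a GA would optimize is
# just the game result, but the rules knowledge lives entirely in the referee
# (chess.Board), not in the learning policy.
import random
import chess

def play_game(policy_white, policy_black, max_plies=200):
    """Play one game; chess.Board is the rules checker that knows legality."""
    board = chess.Board()
    policies = {chess.WHITE: policy_white, chess.BLACK: policy_black}
    while not board.is_game_over() and board.ply() < max_plies:
        legal = list(board.legal_moves)        # all rule knowledge is here
        board.push(policies[board.turn](board, legal))
    return {"1-0": 1.0, "0-1": 0.0}.get(board.result(claim_draw=True), 0.5)

def random_policy(board, legal_moves):
    """A stand-in for a GA individual: picks any legal move."""
    return random.choice(legal_moves)

def fitness(candidate_policy, games=10):
    """The only feedback the learner ever gets: average score playing White."""
    return sum(play_game(candidate_policy, random_policy) for _ in range(games)) / games

print(fitness(random_policy, games=2))   # roughly 0.5 on average
[/CODE]
Swap random_policy for whatever a GA evolves and the division of labour stays the same: the evolved part is never told the rules, but the fitness evaluation cannot be computed without them.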
I fully agree with this, and I think you have hit on why this makes people so uncomfortable. If one believes in a soul, he believes in a non-virtual consciousness. What is missing in the Chinese room is the soul which understands Chinese.
I think this is probably the crux of the misunderstanding. If there’s a program that understands Chinese, there’s an FSA that understands Chinese (and there’s lots of work on language processing using FSAs, even a book, Finite-State Language Processing). But really, perhaps the key turning point of the argument is: the model of computation does not matter.
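Just to illustrate what purely finite-state ‘language processing’ looks like (a toy of my own, not anything from that book), here is a hand-written DFA whose entire competence is a transition table, which is exactly the kind of thing the room’s rulebook is:
[CODE]
# A toy DFA that "processes" a tiny fragment of language by table lookup
# alone; every bit of its competence is in the transition table.
ACCEPTING = {"END"}

TRANSITIONS = {
    ("START", "the"):    "DET",
    ("DET",   "cat"):    "NOUN",
    ("DET",   "dog"):    "NOUN",
    ("NOUN",  "sleeps"): "END",
    ("NOUN",  "barks"):  "END",
}

def accepts(tokens):
    """Run the automaton over the tokens: purely syntactic symbol shuffling."""
    state = "START"
    for token in tokens:
        state = TRANSITIONS.get((state, token))
        if state is None:
            return False
    return state in ACCEPTING

print(accepts("the dog barks".split()))   # True
print(accepts("dog the barks".split()))   # False
[/CODE]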
Hm, are you claiming that one might understand Chinese without knowing it? That seems rather strange to me, as understanding is effectively nothing but knowing (i.e. knowing what a sentence is about). So do you think that the person with the internalized Chinese room would, if told in Chinese to point towards an apple tree, correctly point to an apple tree, without being able to specify a reason why they did so? It’s an interesting point of view—kind of a blindsight for language understanding—but I don’t think I could really bite that bullet.
The question is the other way around: how would you propose to abstract an actual understanding from rule-guided symbol manipulation?
But the crucial component of understanding is having a concept of what they do. I don’t think it could be meaningfully called understanding if there is not something it is like to possess that understanding.
But I guess you would hold that it could be so programmed. So let’s modify the program to one that not merely implements understanding of Chinese, but a full Chinese mind. Would that change your conclusion?
Also, I don’t believe, as said above, that one can talk about understanding in the absence of a mind that possesses that understanding—in the absence of there being something it is like to understand Chinese, and thus, to be the Chinese room.
Nothing much to do with anything, but I don’t think the mirror test is a really good indicator of consciousness. It tests the ability to draw the conclusion ‘that is me’, but this neither needs conscious perception (one could program a simple robot to identify itself without the robot being conscious), nor does failure indicate its absence—just the lack of ability to draw that conclusion. And besides, the dominant sense in dogs is probably smell, not sight, and thus sight may just be a secondary faculty used for getting around in the world, while smell is used to actually identify things.
No, the person is only there to provide a perspective for the one who hears the argument. You’re invited to put yourself into the place of the person, getting strings of Chinese characters, manipulating them, and producing other strings of Chinese characters. The fact that nobody can imagine themselves coming through these manipulations to an understanding of Chinese—which is just the fact that they cannot produce semantics from syntax—then motivates Searle’s conclusion. So it is because the implementation is irrelevant that he can propose the Chinese room as a concrete example: if there is a program at all, then it can be implemented on a Chinese room; but since there is no understanding in the case of the Chinese room, there cannot be any understanding from syntactic rule-following, period.
You’re still putting the cart before the horse: Searle doesn’t start with the Chinese room, but with the Chinese-understanding program, which he then proposes to implement on the Chinese room.
No, his main problem is understanding how syntactic manipulation gives rise to the appreciation of semantic content. The Chinese room is a concrete example for demonstrating this problem; it is just intended to flesh it out, give you a perspective on it. Few people can imagine that through the rule-following, the person in the room can come to know Chinese, and you seem to agree. According to Searle, the systems reply does not provide an improvement; here, you seem to disagree. Personally, I still can’t fathom what it means for a room to understand Chinese, and I think his example of putting the room inside the man is apt. As I’ve said, it’s here that I think replies become possible—replies where at least the original argument no longer has the same force. But still, what it is, or what it is like to be, or possess, such an ‘emergent understanding’ (or ‘virtual mind’, as Leaffan mentions) I cannot imagine.
So the ‘ah, so that’s how it works’ moment that you usually have when a difficulty is overcome is still lacking, and I am left to believe in the possibility that a computer program can give rise to mind and understanding because of my disbelief in any magical properties, souls, Descartes’ res cogitans or other dualistic concepts; and I find this somewhat disappointing. I would like to understand how it is that computer programs give rise to understanding; I have independent motivation to believe that this should be possible, but I would be kidding myself if I didn’t admit that I find the ‘how’ utterly mysterious.
I’m not convinced that this ‘self-examination’ is really what leads to conscious perception—IMHO, this and things like Hofstadter’s ‘strange loops’ or higher-order thought theories and so on make the common mistake that once a kind of self-reference is established, once thought has thought as content, somehow consciousness ‘pops up’, which I think is ultimately naive: there are many self-referential processes (for instance, simple control circuits) which I’m pretty sure don’t have any phenomenal component—but there are certainly approaches in this direction; from memory, two I found promising were Marcus Hutter’s AIXI agent, which uses Solomonoff induction and Bayesian reasoning to find optimal courses of action, thus essentially ‘re-evaluating its beliefs’ at every step, and Jürgen Schmidhuber’s ‘Gödel machines’, which make provably optimal self-improvements at any step.
No, because I can easily imagine processes which can answer these questions, without there necessarily being any consciousness involved; so the answers to these questions don’t tell me anything about the machine’s consciousness. (I believe other people are conscious because I know I am.)
I was never really on the cards, I think—remember, implementation is irrelevant. But I don’t agree with your claim that ‘understanding Chinese is a causal chain’: there is certainly a causal chain producing Chinese utterances from Chinese questions/statements, but I don’t see why that should necessarily be accompanied by understanding—that is, knowing what is being talked about, having some sort of ‘internal perspective’ on the conversation. This is indeed just the question: we can imagine the causal chain without it being accompanied by any understanding. So how does implementing nothing but that causal chain, as in a program, then produce understanding?
[QUOTE]
Walks like a duck, 's a duck. P-zombies are an utter absurdity.
[/quote]
[QUOTE=Half Man Half Wit]
Why?
[/quote]
If we have a causal chain from the input to the output, and the output is identical to the output of a conscious mind, then the causal chain must include a conscious mind in it. The only possible exception to this is a chain which includes the most outrageous coincidences, as suggested by Jragon, above.
Even if the causal chain included some massive encyclopedia of answers that was written by a master programmer years before, that master programmer was conscious, and it is in effect that master programmer you are having the conversation with.
No. I can easily program my computer to say ‘fine’ when I ask it ‘how do you feel?’. Causal chain from input to output, and in my case, accompanied by consciousness, but not, presumably, in the computer’s case. This is just the problem: when analyzing reactions to stimuli, at no point does conscious perception arise of necessity. So it’s possible to do without; but then, the question arises why we have it.
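Literally something like the following, a deliberately dumb sketch just to make the point: a complete causal chain from question to answer, with (presumably) nothing it is like to be this program.
[CODE]
# A causal chain from input to output that mimics a conscious answer,
# with, presumably, nothing it is like to be this loop.
CANNED_REPLIES = {
    "how do you feel?": "Fine.",
    "are you conscious?": "Of course.",
}

while True:
    question = input("> ").strip().lower()
    if question in {"quit", "exit"}:
        break
    print(CANNED_REPLIES.get(question, "I don't understand the question."))
[/CODE]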
This just pushes back the problem one step, as apparently you need a conscious entity in order to create a conscious entity; but then, you have an infinite regress of consciousness.
Well, I think this goes for the most part for both sides of the debate: one side cannot bring itself to reduce the world to ‘mere physicality’, while the other just cannot fathom such silliness as ‘immaterial souls’ and things like that. Both sides think their point of view is inherently superior to the other’s: the soulists think that the physicalists’ world is impoverished, sterile and without wonder, while the physicalists consider the soulists to be irrational, childish and fantastical.
Now I’m firmly in the physicalist camp: I think that no form of dualism, be it spiritual or natural (as in David Chalmers’ proposal), can ultimately be made to work. But while I thus think that Searle’s argument must fail, I don’t have a really clear idea of how it does so, that is, I cannot imagine how a program gives rise to mind. Understanding this is, I think, necessary to come to a truly defensible physicalist position, and to do so, I believe one must consider one’s opponents’ arguments in their strongest possible form. I’ll thus consider the issue settled if I have the aforementioned ‘ah, so that’s how it works’-moment, or I am at least convinced by somebody that they have this understanding, even though I may not be able to attain it.
You only need one regression, to a truly conscious entity. The point was, there can be no simulation of consciousness without consciousness, therefore there can be no P-zombies. The question is, how do we create consciousness from matter - we simply have to accept that we don’t know yet.
But that doesn’t mean we will never know; it seems very likely that we will know how to create consciousness from dumb matter within this century, and philosophers will need to start considering the implications of that.
Irrelevant. The reward/punishment system is external to the learning system. The learning system doesn’t “know” the rules.
I gave an example already of why this isn’t necessarily so: the system will learn a descriptive approximation of the rules, but they might not learn “the rules” per se. The system might have one of two entirely different ideas about how knights move. Moreover, the system might never discover castling. The system could still engage in a pretty damn good game of chess…without ever castling. Is that “playing chess” or not? And that’s a matter of philosophical interpretation. The purist might say, no, it’s some other game very similar to chess, but without castling, it isn’t really chess. The pragmatist might say, pfui, it’s close enough.
As this is a philosophical discussion, there needs to be room for these interpretations.
Only if you don’t ask how that entity comes to be conscious, but just postulate it as a buck-stops-here regression-breaker. But the question at issue is precisely how consciousness arises; an answer depending on the existence of consciousness simply cuts no ice.
But this is exactly what you would have to establish: I can conceive of anything a conscious being does as being the result of an entirely unconscious, if-a-then-b-else-c process, or, as Leibniz put it, as just mechanical parts pushing on one another.
I could describe, for instance, exactly how a couple of photons of a certain wavelength strike the retina, excite some rods and cones, producing electrochemical signals which are then processed by some particular part of the central nervous system in a particular way, eventually giving rise to other signals causing the vocal cords to contract while air is being expelled in such a way as to give rise to the utterance ‘this is blue’. I could even build an apparatus that exactly replicates this process.
But the problem is that nowhere in this description have I had to appeal to any form of conscious experience at all, nor does it seem in any way a necessary product of these goings-on. So I would not expect this apparatus to necessarily have any conscious experience, any more than I expect a thermostat (which does essentially the same kind of thing) to have conscious experience. Nevertheless, to me, the process I have described gives rise to something extra: the experience of seeing blue.
Not knowing about castling is an example of a local optimum, a good one in fact, which is a problem with all learning systems - genetic algorithms, simulated annealing and lots more. The way you get around it is to let the system generate random moves far from its local search space, one of which, given enough time, will find that castling is legal. It will definitely take a long time to reach the global optimum, but it eventually will with a small enough search space.
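As a toy illustration of that escape-by-random-jump idea (my own made-up one-dimensional landscape, nothing to do with chess specifically): greedy local search alone gets stuck on the nearest bump, while an occasional long-range random proposal eventually lands near the global peak.
[CODE]
# Toy sketch: hill climbing on a bumpy landscape, with occasional
# long-range random jumps to escape local optima.
import math
import random

def score(x):
    # Many local peaks; the global peak is at x = 0.
    return math.cos(3 * x) - 0.1 * x * x

def search(steps=20000, jump_prob=0.05):
    x = random.uniform(-10, 10)
    for _ in range(steps):
        if random.random() < jump_prob:
            candidate = random.uniform(-10, 10)    # rare jump far away
        else:
            candidate = x + random.gauss(0, 0.1)   # usual local tweak
        if score(candidate) >= score(x):           # greedy: keep improvements
            x = candidate
    return x

random.seed(1)
print(round(search(), 2))   # ends up near 0, the global peak
[/CODE]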
Without the knowledge built into the rules checker, the system will never converge to anything remotely approximating knowing chess. So, if you consider the checker as part of the learning system, as it usually is considered, it does know the rules in this sense. If you read anything about machine learning, the answers for at least a subset of inputs have to be provided in some form.
If you say that the checker doesn’t “know” the rules, fine - but does a chess book a kid learns from “know” the rules either?
Only time for two responses - will try to get to the rest later.
This I agree with. And the problem is set up so that the rules don’t include giving the room a concept of what it does - and then he thinks he has shown something interesting by saying the room does not understand. If by understanding he means ability to work in the problem domain, which seems to be what he means, then maybe we can have understanding. If understanding is true understanding, as you describe it, the room never can have it. I’d suspect no FSA can either.
Sure - if you can program the room to understand, then it can understand. I’m not claiming it is possible - just that deliberately ignoring understanding in the programming does not prove understanding is impossible.
That’s the trick. If you placed a robot in the room in place of a person, which is certainly possible in the scenario, no one would have a problem with the entire room understanding (in his sense,) and no one would expect the person to. Ditto if the programming were placed inside the robot. Then the robot might be said to understand, but not its cpu.
We’ve gone through this already. That the person, who is acting as a component, not a thinking person, will not understand Chinese is irrelevant. But it seems to disturb Searle. And his answer to the system issue is to shrink the system, which is not much of an answer at all. We progress from rules to understanding all the time - kids get taught the algorithm for long division, and some progress to understanding it.
And I don’t buy that any system could be said to “understand” a language through simple symbol processing. People working in natural language understanding have realized that for a very long time. If he thinks that computers only do symbol manipulation, that might be the source of his problem. A truly representative room would not only do symbol manipulation according to the rules, but modify the rules also. Unless one thinks we understand a language through magic, this should be enough to eventually understand it. (And the rules would include semantics.)
More later - time to do something useful.
The problem is set up so that the room has everything you deem necessary to implement a program that conveys understanding of Chinese. You keep trying to insinuate that through some sly restriction on the rules, Searle purposefully limits the discussion to such cases in which obviously no understanding arises, but the limitations on the rules—that it must be a lookup table, that it can’t adapt its own rules, and so on—always come from you. If you believe that the room needs to be able to modify its rules in order to truly understand, let it; if you believe it needs to include a ‘concept of what it does’, provide it with one. The argument as usually presented is invariant under these and any similar changes.
On a straightforward reading, this seems to go against the grain of everything you’ve said before. Searle’s argument is aimed exactly at demonstrating that this ‘true understanding’ is not something achievable by mere rule following, a conclusion you seem to have disputed until now. Are you here still arguing under the impression of a somehow limited set of rules—say, without the above ‘concept’—or are you claiming that generally, the room could not achieve ‘true understanding’? And are you saying that no matter what, no FSA can achieve this understanding?
So now you seem to say that, for instance, the person carrying out the Chinese room process in their head would properly understand Chinese. But what would that be like for them? I mean, for instance, would you believe that the person could translate Chinese sentences into English ones (for me, this seems straightforwardly impossible)? If prompted in Chinese, could the person point to an apple tree?
And the room, if asked for instance ‘how do you feel?’, what would it answer? As it is supposed to emulate a person speaking Chinese, would it answer like a person would? Would it be aware of basically lying, or would it consider itself in fact a person, thus being radically out of touch with reality in its phenomenology?
The answer is not to ‘shrink’ the system, but to put it in the head of the person, so to speak—if now the room from before understood, the person should, as well. But most people (including myself) find that implausible: the person would still see unintelligible strings of symbols, compare that with his memory of such strings, and combine similar symbols to new strings—I think it’s obvious that at least understanding is not necessarily involved in such a process, but this then raises the question of what ‘extra ingredient’ is necessary to produce such understanding.
But the algorithm for long division is long division; learning this algorithm is learning long division. That’s not the case for Chinese: the algorithm for producing valid Chinese utterances is not an understanding of Chinese (you can master the algorithm, but if prompted in Chinese to point to an apple tree, be utterly stymied).
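To spell that out, here is the schoolbook procedure written down (integers only, a toy version of my own): there is nothing more to the skill of long division than these steps, which is exactly what is not true of the relation between the symbol-manipulation rules and understanding Chinese.
[CODE]
# The schoolbook long-division procedure, digit by digit: the algorithm
# just is the skill it teaches.
def long_division(dividend, divisor):
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)            # bring down a digit
        quotient_digits.append(str(remainder // divisor))  # how often it fits
        remainder = remainder % divisor                    # carry the rest
    return int("".join(quotient_digits)), remainder

print(long_division(7235, 4))   # (1808, 3), since 4 * 1808 + 3 == 7235
[/CODE]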
Searle thinks (to the extent that I can speak for him), quite correctly, that every computation can be described in terms of symbol manipulation (simply because every Turing machine is equivalent to a formal system). And that’s all he needs. (Unless perhaps you want to argue that while a Turing machine, or even an FSA, could perform the computation taking strings of Chinese as input and producing valid Chinese response strings as output, in a Turing machine this wouldn’t be accompanied by true understanding, for which you would need to perform the computation on an xx-machine?)
NB: I brought up the Chinese room as just an example of an argument against the Computationalist position. All I was intending to illustrate is that Strong AI is not considered trivially true by everybody, or every expert in AI, philosophy or neuroscience.
A debate about the Chinese room is bound to run and run given the many arguments, and counter-arguments, that have been played out in forums and journals over the ensuing decades.
Yes, and the intriguing thing is that while most think that the argument is fundamentally flawed, it turns out to be surprisingly hard to put one’s finger on that flaw, as also evidenced by the lack of agreement between purported dissolutions of the argument, which illustrates that Searle has hit on a true difficulty; it just seems to me that this difficulty is brushed away somewhat too casually here, IMHO.
The brain is just a pattern processor, and that kind of processing has been simulated in computers with neural networks. Before long we will be able to account for every cell in the brain and every connection between those cells - and at that point we might be able to simulate human consciousness. Though it won’t be a living, breathing brain, which is changing constantly.
Ultimately the brain is just a physical system, and that means it can be duplicated. There is nothing magical about it; it’s just a system like any other. If it were really complicated to create, there wouldn’t be billions of them (of varying consciousness and capacity) running around.
On brain chemistry, it can vary widely from person to person and even vary in a person on a daily basis. It definitely affects thought processes, but doesn’t seem to affect the emergence of consciousness.
That is because, as I have said before, if he allows the room to simulate a general purpose computer, the only way he can claim that the room does not understand Chinese is to assume that a general purpose computer can never understand Chinese. And I’m saying the room, not the robotic person in the room.
A general purpose computer with a learning technique can’t be said to be just following rules. What I call the original formulation, with simple rule following, can’t understand, but it would not pass a Chinese Turing Test as you yourself have demonstrated. If he is saying that there is something about a program, even a self modifying and self-examining one, which prevents understanding, he needs to show what it is and how our brains are different from such a program.
And this goes to the heart of what understanding means. I’ve had the experience several times of moving from being able to follow rules about something to actually understanding it. A person who understands long division does not follow the algorithm slavishly, but can jump ahead and is much faster than the person just following the rules. This involves, in my experience, not just executing the rules but examining the rules - something which does not just happen and, for a machine, would have to be added to the program. Theoretically you could not tell if the person in the room really understood Chinese or not if the rules got internalized - practically you would not be able to enforce the boundary between his consciousness and the rules. He’d be able to talk about feelings. And you’d be able to tell from him responding more rapidly when he understands Chinese as opposed to just following the rules.
I agree with your comments about the tree, etc, which is why we know that language cannot be understood purely syntactically. A true language understanding program must have the concepts of tree and feelings.
FSA no. Turing machine, yes. However, it would likely have to be able to read its tape and do computations not just on its input but on its program as well. What do we do that a Turing machine, theoretically, can’t also do? Or are you just assuming it cannot understand, which is what the analogy is supposed to prove?
Given the glacial pace of true AI research (as opposed to developing cool heuristics), I think we’ll have a machine intelligence sitting on a simulation before we have one programmed from scratch. Increasing computing power and decreasing costs help simulations far more than they help the development of algorithms and heuristics.
No. He assumes there is a computer program that understands Chinese, which is then implemented on the Chinese room, leading to the absurd conclusion that the person inside, though following the program, does not understand Chinese; or iterated one more step, that the person, having internalized the room, still does not understand Chinese. But then, the original program can’t have understood Chinese, as well. He does not assume that there is no understanding; he demonstrates (if his argument works) that there is none.
The structure of the argument is:
(1) If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
(2) I could run a program for Chinese without thereby coming to understand Chinese.
(3) Therefore Strong AI is false.
This I don’t think is actually true. After all, there is a set of rules that governs how the rules are changed, so I can just consider each possible ruleset the computer might come up with as part of its state. Or more simply, I can implement the computer on some Turing machine, or, more simply still, on the rule 110 CA, which are patently just simple rule-following machines.
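For anyone who hasn’t seen it, here is the rule 110 CA written out (a minimal sketch of my own): the whole ‘machine’ is eight fixed table entries applied in lockstep, and yet rule 110 is known to be Turing-complete. Simple rule-following in the most literal sense.
[CODE]
# Rule 110: eight fixed lookup entries, applied synchronously, and yet
# Turing-complete.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """One synchronous update; cells beyond the edges count as 0."""
    padded = [0] + cells + [0]
    return [RULE_110[tuple(padded[i - 1:i + 2])] for i in range(1, len(padded) - 1)]

row = [0] * 40 + [1]    # a single live cell at the right edge
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
[/CODE]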
You say that as if there were some agreed-upon method by which to give a computer the concept of a tree, or feelings. But by assuming there is such a possibility, you’re assuming your conclusion: that it is possible in some way to provide a computer with a program that facilitates understanding. But then, your reasoning is simply circular.
The question considered by Searle is, effectively, whether we can give the computer a concept of a tree.
But any real machine, that is, anything we can actually build, is an FSA. Every real computer has a finite memory, hence finitely many states to be in; that holds even for our brains.
But there are universal Turing machines that do none of these things, yet are able to simulate Turing machines that do. If you’re saying that one kind of Turing machine can do something the other can’t (i.e. give rise to understanding), then you’re essentially saying that this capacity does not come down to the computation performed—as both classes of Turing machines are capable of performing exactly the same computations.