The Chinese Room argument is not the argument from incredulity you’ve outlined here. The argument, rather, goes like this:
1. Suppose some set of computations is sufficient for understanding.
2. Then a person executing those computations should thereby understand.
3. But the person in the Chinese Room executes those computations without thereby understanding.
4. Therefore, no set of computations is sufficient for understanding.
That’s a valid argument. The only way around it is to disagree with premise 2 or 3. (I disagree with 3: I don’t think the person in the Chinese Room executes the right computations.)
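For what it’s worth, the modus tollens shape of the argument can even be checked mechanically. Here is a minimal sketch in Lean (every name here is illustrative, not from anyone’s post): premise 2 says whoever executes a sufficient computation-set understands, premise 3 says that for any candidate set the Room’s occupant executes it without understanding, and the conclusion follows.

```lean
-- Toy formalization of the argument's shape; all names are illustrative.
-- `Comp` stands for candidate sets of computations, `Agent` for people.
variable (Comp Agent : Type)
variable (sufficient : Comp → Prop)       -- "this set suffices for understanding"
variable (executes : Agent → Comp → Prop)
variable (understands : Agent → Prop)

-- Premise 2: whoever executes a sufficient set thereby understands.
-- Premise 3: for any candidate set, someone (the Room's occupant)
--            executes it without thereby understanding.
theorem chinese_room
    (p2 : ∀ c a, sufficient c → executes a c → understands a)
    (p3 : ∀ c, ∃ a, executes a c ∧ ¬ understands a) :
    ∀ c, ¬ sufficient c :=
  fun c hc =>
    let ⟨a, hexec, hnot⟩ := p3 c
    hnot (p2 c a hc hexec)
```

The formal version makes the dialectic visible: rejecting the conclusion forces you to reject `p2` (the Systems Reply’s move) or `p3` (the “wrong computations” move).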
Personally, I disagree with (2) - the person is only part of a gestalt system; it doesn’t follow that one component has understanding. Then again, I also disagree with (1) - understanding is *not* just a set of computations.
No, I’m not. The equivalence is not false. In human beings, it is impossible to have a speech act without a thought act. Even talking in your sleep is driven by this sort of dynamic.
Naah. Human speech and computer speech are neither operationally nor functionally equivalent. Computer “speech” is nothing like human speech, because there is no separate algorithm driving human speech acts. We “think” in “speech”: unlike the computer, we have no separate systems for logical processing and for speech. Our cognition occurs as speech and other semiotically loaded communications.
So your supposition is in error - and thinking is not just expressed in external speech. It is, in fact, in the internal speech between elements of our gestalt that consciousness originates. Computers have no analogue to this level of speech.
And yes - “thinking” does constitute consciousness. But specifically, externally unprompted thinking. Our internal monologue.
1. Suppose there is a set of computations the execution of which is sufficient for understanding on the part of the thing executing those computations.
2. Then anyone executing those computations would thereby understand.
3. The person in the Chinese Room executes those computations without understanding.
4. Therefore, no set of computations is such that execution of them is sufficient for understanding.
A bit more wordy, but you can see 2 follows inexorably from 1.
If you disagree with 1, then the argument’s not aimed at you anyway. Searle was arguing against the idea that there can be any set of computations that is sufficient for understanding.
I disagree with 3 because the person in the Chinese Room (unless he’s not human, and he’s gotta be human for the thought experiment to have any juice) is necessarily executing some other set of computations than the one encoded in the books in his room. For example, he’s executing computations that make him stop and eat, and if he’s really a human being, he’s executing computations that occasionally make him stubbornly stop following the books’ rules out of boredom or perhaps pure cussedness.
Searle thinks 3 is obviously true. Pointing out that there is a fact about the world which shows a hypothesis false isn’t fallacious.
Is he right that 3 is obviously true? I think it’s obviously true the guy in the room doesn’t understand Chinese. (But there are thinkers who disagree!) But I think it’s non-obviously false that the guy in the room is carrying out the right computations.
The four-step version I gave isn’t meant to capture the argument in all its detail; it’s just a broad outline meant to explain why Searle’s argument isn’t simply an “argument from incredulity,” as someone on the thread had claimed.
(ISTM that most responses to the thought experiment try to deny 2, by saying it’s not the guy in the room who understands, but something else. But 2 follows inexorably from 1. I think it’s better to deny 3. But most people (most!) accept 3.)
(3) is only obviously true in the sense that the whole Gedankenexperiment is set up that way - it’s set up to deny an antecedent that only it, really, proposes.
So, in a sense, I have no problem with this formulation of the Chinese Room experiment, because I agree that there is no single set of computations which would replicate consciousness. BUT that’s not the formulation of the CR that is usually set forth. It is taken as a given that the room produces outputs indistinguishable from a “real” Chinese speaker, but there’s no specification, as a premise, that the entity performing the calculations (the man in the room) has to partake in that understanding. This seems to be your rewording, not the usual formulation.
As far as I can tell from above, my thesis has been fairly consistent. I have argued:
Determinism => no free will
no free will => all thoughts and actions are determined ultimately at the atomic level. The use of any words like “conscious” to refer to the self reflects only the mundane result of atoms blindly following their own trajectories in such a way that such a vocalization happens to occur.
I’m not following you. The term ‘computer’ is quite general. I don’t know why you are defining it so narrowly when deciding how any speech it might have compares to that of a human. You say we “think” in “speech,” unlike a computer. First of all, I’m not sure that I “think” in speech – sometimes I do, but usually I don’t. Second of all, I don’t see any reason why a computer can’t “think” in “speech” – if you define “thinking in speech” as “internally processing information in symbolic form”.
iamnotbatman, I’m beginning to agree with your point of view, but I have a question:
How would you distinguish between a person who is conscious and one who is unconscious? You know what the words mean in that context, but do you think they’re misleading?
Not sure what you’re referring to by “the way it is usually set forth” but the way Searle sets it forth is this, in paraphrase: “You Strong AI guys say you can get understanding from the carrying out of computations. But it’s obvious the guy in the Chinese Room carries out the computations, yet doesn’t understand. So, you Strong AI guys are wrong.”
One of the most popular replies is to say to Searle “Well, sure, the guy in the Room doesn’t understand. But the Room itself does.”
Searle calls this the “Systems Reply” and he has a response to that as well,* but I may be going too far afield here. I just mention it because I think you may be confusing elements from the Systems Reply with the Chinese Room thought experiment itself.
*The reply is: Just stick the whole Chinese Room system inside a person, then. Forget about the room–just talk about a person who has memorized all the rules. He is the system. And he–the system–carries out all the necessary computations. But he doesn’t understand Chinese. So you Strong AI guys are wrong.
Why don’t we keep going with that and say “The use of any words like ‘vocalizations’ to refer to the sounds produced by throats reflects only the mundane result of atoms blindly following etc etc…”
And of course you can do this ad infinitum.
In other words, whatever one thinks of your conclusion, doesn’t your argument imply a kind of nihilism (I mean in the technical sense, with no intention of sticking any negative connotations on a label for your view)? Are you in fact a nihilist? If you are, why argue specifically about consciousness? If not, then how do you avoid the nihilistic implications of the argument you just outlined?
Actually I don’t know what the words mean in that context. I’d like to answer your question, but please elaborate a little – I’m not sure exactly what you are asking.
I disagree with both 2 & 3. The person is merely a part of the system. The entire system could understand something without the individual parts understanding. And the room as originally proposed by Searle contains no provision for memory or learning – a fatal flaw in any model of cognition.
Why do you think it contains no provision for memory or learning? Searle is saying: take any set of computations you think is sufficient for understanding, and encode that set into the books in the room. This would presumably include any computations AI proponents think are needed to constitute a memory and a faculty for learning.
Your response (the entire system could understand without the individual parts understanding) is the “Systems Reply” and I discuss it a little bit (just a little) further down in the thread.
Well, technically I am a nihilist, but that one word vastly oversimplifies my position (for example, I am intellectually a moral nihilist, yet I am functional in society and am terribly offended by a kitten in pain). I think the last line of my OP is particularly relevant here. Oh, and the phrase I would prefer to use in place of ‘nihilist’ would be “truth hard-liner regardless of the philosophical consequences”.
And that brings me to your point about applying the same logic to any word like ‘vocalizations’. Nihilism aside, there is a difference between words like ‘vocalizations’ and ‘consciousness’. ‘Vocalizations’ is a well-defined term in an informational sense, concretely describing an objective, scientifically verifiable phenomenon, in terms of other words whose meanings are independent of the word being defined. ‘Consciousness’, in its general usage in the philosophical context, is not such a well-defined term.
It sounds like you are wanting to know how I would define the terms “conscious” and “unconscious” in the medical context. I’m no doctor, but I accept the usual meaning as something like (taken from a free online medical dictionary):
This term is well-defined as long as one understands “rational response to questioning” as something like a correct answer to “do you know where you are?”
Where is the memory stored? As I recall Searle’s thought experiment (and my recollection may be faulty), the human in the room is just looking things up. He’s never recording anything or using those records to change how he looks things up. That’s a fatal flaw, IMHO.
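To make the objection concrete, here is a toy sketch (purely hypothetical, not anyone’s actual proposal) of the rule book as a bare lookup table. The responder reads no state and writes none, so by construction nothing it does can count as remembering or learning:

```python
# Toy sketch of the objection: the rule book as a pure lookup table.
# The responder is stateless: it never records anything, so past
# exchanges cannot influence future answers -- no memory, no learning.

RULE_BOOK = {  # hypothetical, trivially small "book of rules"
    "你好": "你好！",
    "你叫什么名字？": "我没有名字。",
}

def room_reply(prompt: str) -> str:
    """Look up a canned reply; no state is read or written."""
    return RULE_BOOK.get(prompt, "请再说一遍。")

# Asked the same thing twice, the room answers identically, because
# it cannot record that it has ever been asked before.
print(room_reply("你好"))  # -> 你好！
print(room_reply("你好"))  # -> 你好！
```

A room with memory would have to thread state through every exchange (say, an ever-growing transcript consulted by the rules), which is exactly what this bare picture leaves out.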
As Dennett touches on, though, to *simulate* understanding by having a pre-programmed response to any possible input would really require an infinite set of rules. You can’t stick that in someone’s head.
This is his first use of the intuition pump idea - he says that Searle has set up the thought experiment to lead us to a seemingly simple, “intuitive” response, but in actuality, any realistic Chinese Room setup would require much more than just a “book of rules” to pass the test. As Wikipedia says, it would require “memory, recall, emotion, world knowledge and rationality” to formulate actually convincing responses. As Dennett goes on to say:
“Searle’s thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the ‘obvious’ conclusion from it.”