Anyone else who doesn't accept that they are conscious?

You then proceeded to quote a line in which he says exactly what I said. What you said next is a non sequitur.

What Searle said, as you quoted, is that according to Strong AI, all you need in order to get a mind is an “appropriately programmed computer with the right inputs and outputs.” He says and implies nothing about what internal hardware would be required.

No. He replies, in the original work, to exactly this objection. In fact he addresses two different objections which you’re gesturing at here. One is the “systems reply,” which says “no one would have expected the human in the room to understand Chinese anyway–rather, it’s the room itself that understands.” I forget what he calls the other, but the idea is: “The room isn’t moving around and sensing things like an intelligent entity would be, so add that onto the scenario and you would indeed have understanding.”

Searle doesn’t “ignore” these ideas. He talks about them in the very work we’re discussing. He gives responses.

Have you read the work in question? (Really there are several relevant ones, but in the seminal article “Minds, Brains, and Programs” he discusses all the ideas you’re bringing up. He responds to them. If I had more time I’d say how he responds to them, but gotta go…)

I’ve never come across this assertion before. In my experience, “computer” is invariably defined as a “symbol manipulation machine”, which expressly avoids semantics. Would you care to expound? (Perhaps it deserves a new thread – I’d be happy to start one to avoid a hijack.)

No, here’s OK - computer languages don’t just contain syntax, but also entity relationships, in the form of pointers that link memory spaces. That’s a (crude) semantic structure.

Is all this discussion of Searle’s Chinese Room really about consciousness at all? It seems to me it’s about understanding language. Human beings can clearly understand and use language in some circumstances without subjective consciousness.

Call it what you like, it still seems clear all that stuff can be implemented in the instruction set contained in the Chinese Room so it doesn’t seem to be to the point.

I got that much – more, in fact, as I’d never heard of “gellish” before – from the post I quoted. So that’s not much of an exposition.

Again, I’ve never heard anyone make that assertion before, so it’s not even clear to me how to think about it. The fundamental concept conflicts with all my computer science studies, including my AI work. Naturally, that’s not to say that semantics is absent from computer science – for instance, we could discuss the semantics of distributed file systems (e.g., in NFS, scroll down to section 7.2.4).

But you’re making a different claim to argue against Searle – that programming languages are not syntactic. (And I just now realized that that’s actually the result of denying the cited premise, whether semantics are brought into it or not. In other words, the possibility of semantic structure inclusion in programs does not make programming languages non-syntactic.)

I’m not trying to be pedantic, although discussions of this sort often hinge on it; rather, I’m trying to understand both your justification for making such a claim and the ramifications of doing so…

No, it can’t just be implemented in a rulebook, because the rulebook doesn’t contain memory, and the rulebook can’t be changed. This is the obvious flaw with the Chinese Room.

Even ELIZA could store input and sometimes spit it back out: “Why do you say $previous_input?”
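That stored-input trick takes only a few lines to sketch (this is my own toy illustration, not Weizenbaum’s actual program):

```python
# Toy ELIZA-style memory: remember the previous input and echo it back.
# Illustrative sketch only -- not Weizenbaum's original code.
def make_eliza():
    previous = None

    def respond(user_input):
        nonlocal previous
        reply = "Tell me more." if previous is None \
            else f"Why do you say {previous}?"
        previous = user_input  # store this input for the next turn
        return reply

    return respond

eliza = make_eliza()
print(eliza("I feel sad"))       # Tell me more.
print(eliza("Nobody likes me"))  # Why do you say I feel sad?
```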

How could a rulebook written on paper do that? Yes, you could have rules such that if someone types in “My name is Jenny”, you write out “Hi Jenny”. But you’d need rules for thousands of names before that could work. So I agree that you can simulate memory with a sufficiently complex rulebook. But this is a physics-bustingly inefficient way to implement a memory, because pretty soon your rulebook needs to be larger than the universe, just like a rulebook for every position in chess is larger than the universe.

So this isn’t what human brains do, and it isn’t what a Strong AI would have to do, and it can’t be implemented with a sufficiently complex rulebook because the rulebook would be larger than the universe.

Because the Chinese Room is an argument from incredulity: “How can such a simple system experience things like humans do?” In fact the system he’s proposing is vastly more complicated than the human mind and the nuances of its emergent behavior are unknown.

I looked up the original paper. It’s not stated explicitly up front, but it’s implied in his discussion of various arguments.

There are several billion people with a version of the room inside their heads.

I was thinking about it overnight. I think the root of the problem is that Searle’s understanding of “understanding” is incompatible with a computational model of consciousness and cognition. He therefore concludes that it’s impossible to “understand” something through computational steps. I’d argue, rather, that this suggests that how he defines “understanding” is flawed.

I’ve addressed this already, but lemme try again.

A finite rulebook can implement a memory, assuming the memory is finite as well of course.

Do you want to have a memory with four addresses, each of which may have one of two values? Then you need sixteen rules. Rule one corresponds to the state in which every address contains the value 0. Rule two corresponds to the state in which the first three contain 0 and the fourth contains 1. And so on down the line.

Rule 1 says what to do when the memory is in the state corresponding to rule 1.

Rule 2 says what to do when the memory is in the state corresponding to rule 2.

Each rule, of course, tells you which rule to execute next–in other words, includes information about how the memory state changes as the instructions are executed.

And so on.

That’s it. You’ve got a rulebook which implements a memory mechanism.

Except even forgetting what I just said, there’s nothing in any explication of the Chinese Room scenario which rules out the possibility that some instructions say something like “note down X on line 5 in a notepad or something” and later “get what you wrote on line 5 and do the following with it.”

I mean do you seriously think you’re going to hang Searle on this point? You really don’t imagine he’d just say, “Fine, stick a notepad in the room and let it function as a memory.”

An equivalent question: Are you telling me that if Searle had only added that little detail, the man in the Chinese Room really would have obviously understood Chinese, and the whole debate could have been brought short?

Alright… so… what’s the problem?

It’s a thought experiment. Expand the universe in your thought experiment. Make time go faster inside the room. Do whatever you like, as long as you’ve got a man in a room implementing rules concerning markings on paper which he can’t read. Give him whatever rules are equivalent to a program that makes a computer understand Chinese. Give him a “memory store” if you really think you have to.. Does the man in the room, by executing that program, thereby understand Chinese? If you don’t think so, then you should see the basic force of Searle’s argument.

I think Searle is wrong–don’t get me wrong about that.

You just made me quietly scream. :slight_smile:

The reason why is, I started this conversation by explaining, very clearly, that Searle’s argument is not an argument from incredulity. Not at all.

There may be something to that.

Let me put it this way. I have absolutely no trouble accepting that the Chinese Room as a totality is a conscious, thinking entity that experiences the universe exactly as a human does. (Well, exactly as a human does who is currently confined to a sensory deprivation tank.) The only reason that this position seems at all remarkable is that the mind-boggling complexity of the hypothetical is swept under the rug by the language used to describe it.

I prefer this definition: Understanding is the possession of a model that makes accurate predictions.

So, for example, if I understand Chinese I can predict what utterances should follow other utterances: “He said XXX, so I should say some sort of greeting in return.”

And I can use utterances I receive to make predictions about the state of the universe in my vicinity: “He said YYY, so it’s likely that he’s a doctor.”

And I can make utterances that change the state of the universe around me in predictable ways: “If I say ZZZ, it’s likely that he will believe I’m a computer.”

That’s what it means to “understand” Chinese.

Oh, yes, he does. “An appropriately programmed computer” =/= “any computer running the appropriate program”. And he is careful to mention “right inputs and outputs” which, IMO, covers a lot of ground. So no, the ZX Spectrum example is not a non sequitur.

Inadequate ones that don’t actually address the problem - for instance, his “move the whole system inside the guy’s head” is a non-starter. You can’t simulate a system on another system the same size.

Yes, I have, years ago. His responses didn’t strike me as adequate then. Perhaps you could refresh my memory?

:confused::confused::confused:

…yes… it does.

It’s like we’re speaking different languages or something here? It’s like you’re telling me “typical domesticated feline” and “pet cat” are not synomnymous phrases.

Maybe it’s relevant to mention that Searle and everyone else talking to him understands “computer” in a sense that allows for any computer to execute any program. No quibbles about different computer languages or the physical hardware involved in the input or anything like that.

Can so. I think it’s necessarily (in a physical sense) true that the simulated system has to run more slowly (not sure about even that, though, actually), but the fact is you can run software emulating a computer system of design X on a computer system of design X. (I take it back: The emulating computer will need a little more memory of course. That just means in the thought experiment the guy who memorized the rulebook will need a memory larger than that of the person simulated by the rulebook. No problem there, if we’re already allowing that in our imaginary world a man could memorize such a rulebook in the first place.)

Such as?

It may be implementable in the instruction set (I’m dubious), but it certainly then refutes Searle’s first premise.

BTW, what do you say to one objection I read, where the most efficient instruction set is something like “If you see [chinese character for ‘horse’] coming under the door, write down ‘horse’”, “If you see ‘rode’ in your pad, write down [chinese character for ‘rode’]”, “If the sentence in English is Sub-Verb-Predicate, translate it crudely to Sub-Tense Marker-Verb-Predicate and goto character translation section CIII”, etc.? Where (arguably) the most efficient CR algorithm is simply to translate the input into English, have the human respond in English with understanding, and translate that back into Chinese.
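That strategy can be sketched like this (the tiny dictionaries here are invented placeholders, not a real translation table):

```python
# Crude sketch of the "translate to English, answer, translate back"
# rulebook strategy. Dictionary entries are invented placeholders.
zh_to_en = {"马": "horse", "骑": "rode"}
en_to_zh = {value: key for key, value in zh_to_en.items()}

def translate(tokens, table):
    # Word-for-word substitution; unknown tokens pass through unchanged.
    return [table.get(token, token) for token in tokens]

def chinese_room(tokens, answer_in_english):
    english = translate(tokens, zh_to_en)  # step 1: crude translation in
    reply = answer_in_english(english)     # step 2: human answers with understanding
    return translate(reply, en_to_zh)      # step 3: crude translation out
```

The interesting wrinkle, of course, is that under this scheme genuine understanding does happen inside the room - in English, at step 2.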

The implication Searle makes in contrasting computers and humans is the syntactic/semantic difference, implying exclusivity. He doesn’t qualify it at all. I agree that computer languages are largely, primarily syntactic, yes, but ignoring the semantic aspects of computer languages is, IMO, a case of special pleading. It attempts to privilege human forms of mental entity relationships without making a case for that privilege. AFAIK, semantics only requires that relationships between signifiers and their denotata be meaningful. To me, this means specific syntactical arrangements carry more information about their referents than just the referents’ identity. Even something as simple as X>Y is a semantic concept as much as it is a syntactical one. Computer languages are rife with such meaning, IMO.

But they’re both human - they (for theoretical purposes) have the same memory size. That’s the point of the objection. Humans don’t have extendable memory. Finite number of neurons…

Yes, I understand that idea (that’s why I mentioned Turing completeness) - but that’s not the way things are in the real world, and it’s certainly not the case for real-world proponents of Strong AI, unlike Searle’s strawman. Strong AI is a systems approach, not an algorithmic one. What Searle does in the Chinese Room is shoot down a version of SAI only he is proposing. Certainly, it’s not the SAI that current research focuses on, whether that be the software-oriented whole-brain emulationists or the hardware-oriented behaviour-based robotics set.

Wow is it solipsistic in here, or is it just me?

I’ve lost my memory on a couple of occations - very scaring. First, you think in words like “There is a wall”, “That’s a computer”, “Who the hell I am?”. So you can think self-awereness as a process or a thread in a sophisticated program. You also need this process, because there is no objective/plan in your life if you don’t know who you used to be.