Strong AI and the Chinese Room

it seems to me that searle defines “understand” as “something that living things with brains can do”. or at least he qualifies it with that. it seems rather arbitrary as to whom or what he attributes the quality of understanding.

to answer your last question with another question, at what point do we genuinely believe that other human beings are understanding, and not just behaving as though they do? i agree that it is easier to believe that humans understand as i do, since they are made up of the same stuff, and they look and behave generally just as i do, but it is still a belief. i have no proof. i can never in fact have proof. nor can we ever have proof that a machine genuinely understands. but the farthest we can reasonably go before attributing understanding to a machine, i believe, is exactly as far as we let humans go, which i think is where the turing test aims to draw the line. if we can’t distinguish it from a human’s understanding, and we believe that other humans understand, why isn’t that good enough for machines?

but yes, understand, aware, and conscious are all tricky things to define in this context.

When I saw the thread title I thought it would be about the horrific “Chinese Restaurant” scene in the movie eXistenZ. The movie, after all, was about AI and had a “Chinese room” sort of scene. Now that you make me think about it, maybe the director of this AI movie was indirectly alluding to Searle’s “Chinese room” thought experiment.

partly_warmer, you scare me.

Not the scenarios of destruction you paint, but your hatred of things you consider “other”. Why would artificial life be any less sacred than meat life?

And bypassing the danger issue, assume neurons and neurochemical reactions can be computer simulated, and further that a “snapshot” of a living human brain can be made.

If such a snapshot, when implemented in simulation, behaves in the same manner as the physical version, why would it not be intelligent? Or, for that matter, intellectually human?
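
Just to make “neurons can be computer simulated” a bit concrete, here is a toy sketch of my own, with made-up constants and nothing like the resolution a real brain snapshot would need: a leaky integrate-and-fire neuron. The point is only that the update rule is ordinary computation.

```python
# A toy leaky integrate-and-fire neuron: the membrane voltage decays toward
# rest, accumulates input current, and "fires" when it crosses a threshold.
# All constants here are illustrative, not physiologically calibrated.

def step(v, input_current, dt=1.0, leak=0.1, rest=0.0, threshold=1.0):
    """Advance the neuron one time step; return (new_voltage, fired)."""
    v = v + dt * (-leak * (v - rest) + input_current)
    if v >= threshold:
        return rest, True   # fire and reset to the resting potential
    return v, False

v, spikes = 0.0, 0
for t in range(100):
    v, fired = step(v, input_current=0.15)
    spikes += fired
print(spikes, "spikes in 100 steps")
```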

i think what partly_warmer might be getting at (or at least i am extrapolating from him) is that we DON’T value machines in that way. and when we can recreate a person in metal, one can then see how the view could arise that humans should be valued only as much as machines. my only problem with that is that it’s not a topic for this thread. perhaps someone can start a “morality of pursuing ai” thread or something?

the common argument against this is “a computer simulation of a hurricane does not make me wet.” it very much depends on how you define simulation. i think emulation might be a better word, since simulation implies that it’s not really happening.

Rubbish. First off, you’re mistaking “artificial intelligence” for a Von Neumann machine. Second, it’s a hell of an assumption to postulate that any form of AI would inherently be dangerous. That assumes malicious intent on the part of the AI in question.

Why, pray tell, would an AI be dangerous to humanity? Because we are “inferior”? Because we are “unclean”? Because humanity is a “virus”? I would offer that you’ve seen The Matrix or The Terminator one time too many.

An intelligence - ANY intelligence - is capable of looking towards the future. It is capable of realizing that destroying other intelligent - and not-so-intelligent - beings around it is not always a desirable action. Do humans slaughter each and every dog or cat they see, just because they’re “inferior” forms of life and poop on the carpet?

I don’t think the hurricane analogy works here. We’re talking about information rather than a physical process. Is our conversation here any less real because it’s mediated by text and electric signals rather than pressure waves in air?

A conversation with an emulated mind would be just as valid as this conversation we’re having, at least that’s my stand on it.

*Originally posted by Ramanujan *

Hmm…that’s probably true, but I think one thing that Searle is trying to counter is the perception of the human brain/mind as being similar to the internal workings of a computer (a computational model of the mind). There’s something about the human brain that allows for the emergence of consciousness and therefore the human mind.

I think it’s important to understand that (in my opinion) Searle is trying to force those who believe in Strong AI to account for consciousness. It’s something that can’t be entirely explained away. And one element of consciousness is its intentionality. That is, humans, when using their minds, are generally conscious of something.

What makes you think humans are conscious? I see no evidence to suggest that any other humans, besides myself, are conscious–in fact, I see little reason to think that I’m conscious.

Can we get a definition of consciousness?

Clearly, it’s what humans have and everything else doesn’t.

At least, that’s what the objectors to AI often seem to imply in my experience.

I know several professional philosophers who’re convinced consciousness lies in the interaction of ‘qualia’. Exactly what that means is probably known only to a hypothetical divinity…

Big difference between learning Chinese and learning Morse Code. Morse Code is not a language… it’s English written with dots and dashes instead of Roman characters. Deciphering Morse Code won’t get you anywhere unless you know English too. Being able to decipher Chinese writing into Roman characters (with diacritics) still won’t mean you are fluent in Chinese. You can’t reduce understanding a language to finding the right symbols to respond to other symbols. Without the semantics, you don’t have a (human) language. Which is Searle’s whole point with the Chinese Room thought experiment.
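
To put that point in code-shaped terms, here is a toy sketch of the purely syntactic pairing Searle describes; the little phrase table is just a couple of stock examples I picked for illustration. The program matches symbols to symbols and nothing more, and no meaning ever enters into it.

```python
# A toy "rulebook": input symbol strings mapped to output symbol strings.
# Matching and returning symbols involves no grasp of what any symbol means.
RULEBOOK = {
    "你好吗": "我很好",          # looks like a sensible exchange from outside...
    "今天天气如何": "天气很好",
}

def chinese_room(squiggles: str) -> str:
    """Return whatever the rulebook pairs with the input, or a stock reply."""
    return RULEBOOK.get(squiggles, "请再说一遍")

print(chinese_room("你好吗"))   # the man in the room has no idea what he just "said"
```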

And I think the Chinese Room scenario succeeds as a criticism of one particular nascent form of Strong AI, but overgeneralizes from there to saying Strong AI must be theoretically impossible.

What it boils down to for me is that unless you believe in some kind of duality… a soul, a spirit, what have you… there’s no reason the human mind couldn’t be modelled as a machine, and no reason that machine, if we modelled it accurately, couldn’t be said to “understand” things the same way we do.

Obviously we will not be making a machine that understands solely by programming in all possible understanding-like responses to every possible question. A machine that understands will need to be able to learn about the world just like we do, and its conclusions will be dependent on this experience. Will it have some initial conditions that allow it to learn? Yes. We do too.
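
As a crude, invented illustration of that difference (a toy of my own, not a claim about how a real understanding machine would be built): the sketch below starts with nothing but “initial conditions” - zero weights and a fixed update rule - and lets its “experience” of examples set everything else.

```python
# Not a lookup table of canned answers, but a tiny learner: it starts from
# bare initial conditions (zero weights, a fixed learning rule) and adjusts
# itself from experience. Toy example: learning logical OR.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                       # a few passes over the "experience"
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # -> [0, 1, 1, 1]
```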

-fh

Penrose wrote an entire book on the subject: The Emperor’s New Mind. Search for links on that title, or even better, order a copy. It covers topics ranging from algorithms and computability to classical and quantum physics to the structure and functioning of the human brain. This book is a must read for anyone interested in the strong AI question.

*Originally posted by robertliguori *

OK, bear with me. I’m taking the following definition from an article by John Searle published in the Annual Review of Neuroscience, Vol. 23, 2000.

“One often hears it said that ‘consciousness’ is frightfully hard to define. But if we are talking about a definition in common-sense terms, sufficient to identify the target of the investigation, as opposed to a precise scientific definition of the sort that typically comes at the end of a scientific investigation, then the word does not seem to me hard to define. Here is the definition: Consciousness consists of inner, qualitative, subjective states and processes of sentience or awareness. Consciousness, so defined, begins when we wake in the morning from a dreamless sleep and continues until we fall asleep again, die, go into a coma, or otherwise become ‘unconscious’. It includes all of the enormous variety of the awareness that we think of as characteristic of our waking life. It includes everything from feeling a pain, to perceiving objects visually, to states of anxiety and depression, to working out crossword puzzles, playing chess, trying to remember your aunt’s phone number, arguing about politics, or just wishing you were somewhere else. Dreams on this definition are a form of consciousness, though of course they are in many respects quite different from waking consciousness”.

Searle goes on briefly to state that the above definition is not universally accepted. Nonetheless, he goes on to state that “there is a genuine phenomenon of consciousness in the ordinary sense and that this phenomenon [consciousness in the ordinary sense] is what [he is] trying to identify…” and believes it to be the proper target of investigation (a neurobiological account of consciousness).

Searle goes on to identify three essential features of consciousness: the combination of qualitativeness, subjectivity, and unity. He then proceeds to outline the features of each.

He also briefly identifies some other features of consciousness:

  1. Intentionality
  2. The distinction between the center and periphery of attention
  3. All human conscious experiences are in some mood or other
  4. All conscious states come to us in the pleasure/unpleasure dimension
  5. Gestalt structure
  6. Familiarity

If anyone’s interested, a collection of some of Searle’s more important papers was recently published as Consciousness and Language.

To add more from the above article by Searle (entitled “Consciousness” - sorry, forgot to include it), Searle provides the following (I’m summarizing quite a bit - if anyone’s interested in Searle’s position I recommend getting the above book for the entire article):

Searle rejects the traditional mind-body problem in discussing consciousness; that is, the question of what relation consciousness bears to the brain. He identifies two parts to the problem, a philosophical part and a scientific part. The philosophical part: consciousness and other sorts of mental phenomena are caused by neurobiological processes in the brain, and they are realized in the structure of the brain. In a word, the conscious mind is caused by brain processes and is itself a higher-level feature of the brain.

He then states that the scientific part is much harder. But if we are clear about the philosophical part, then the scientific part also becomes clear. He notes two features of the philosophical solution. First, the relationship of brain mechanisms to consciousness is one of causation. Processes in the brain cause our conscious experiences. Second, this does not force us to any kind of dualism because the form of causation is bottom-up, and the resulting effect is simply a high-level feature of the brain itself, not a separate substance.

Just as water can be in a liquid or solid state without liquidity and solidity being separate substances, so consciousness is a state that the brain is in without consciousness being a separate substance.

Searle rejects both dualist and materialist categorizations as part of the philosophical solution. He goes on: “We know enough about how the world works to know that consciousness is a biological phenomenon caused by brain processes and realized in the structure of the brain. It is irreducible not because it is ineffable or mysterious, but because it has a first-person ontology and therefore cannot be reduced to phenomena with a third-person ontology”. He coins the term biological naturalism for this view (a rejection of materialist and dualist categorizations).

As this is getting long, I’ll summarize Searle’s position (in his own words):

“Consciousness is a biological phenomenon like any other. It consists of inner qualitative subjective states of perceiving, feeling, and thinking. Its essential feature is unified, qualitative subjectivity. Conscious states are caused by neurobiological processes in the brain, and they are realized in the structure of the brain. To say this is analogous to saying that digestive processes are caused by chemical processes in the stomach and the rest of the digestive tract, and that these processes are realized in the stomach and the digestive tract. Consciousness differs from other biological phenomena in that it has a subjective or first-person ontology. But ontological subjectivity does not prevent us from having epistemic objectivity. We can still have an objective science of consciousness. We abandon the traditional categories of dualism and materialism, for the same reason we abandon the categories of phlogiston and vital spirits. They have no application to the real world.”

I should point out that Searle doesn’t reject out of hand the possibility that humans can build some conscious artifact out of non-biological material that duplicates, and does not merely simulate, the causal powers of the brain. We just have to figure out first how brains do it.

Probably I’m responding less to Searle’s actual position on matters of consciousness than to the position the Chinese Room is usually employed to defend… the view that Strong AI is not possible or, if possible, is evil.

-fh

i personally would have no problem with this definition if he went on to define “sentience” and “awareness” without referring to consciousness in any way. my problem with searle is that he has always sort of just assumed we would take his view as he presents it and accept it out of “common sense” or “intuition.” it isn’t just that intuition proves nothing, nor is it just that our intuition has been completely wrong in the past with respect to proofs, but also that the source of the intuition is the very thing we are trying to define. if we understood the source of “intuition” scientifically and mechanically, it would perhaps be a lot easier to accept descriptions of it that weren’t themselves “intuitive”; as it stands, we don’t have a clue why we feel “intuitive” about certain things, so why should we accept an argument that appeals to that? it also seems rather circular, in some regards.

it’s very hard to attribute consciousness to a non-biological entity when this is how you define it.

as to the last part of your last post, it seemed to me that searle’s more recent papers on the subject (c. 1992, i think, perhaps i’ll try to find a cite later) claimed that the medium was what was responsible, that consciousness could only occur in biological neurons, and not in silicon.

could you expound on this a bit more? what particular aspect exactly is that?

(not trying to be obtuse here, did you mean just human language?)

also, i want to reiterate that we merely attribute to humans the ability to achieve consciousness; it is not something that we have proof of. i think this is important to consider when thinking about the chinese room and the turing test. we have no reason to claim that a given machine, say a thermostat, does not believe the information it processes; a thermostat can be said to believe that a room is too hot, and that the heat should be turned off. there is no proof to the contrary, and no real reason, short of appealing to intuitive definitions (which aren’t really intuitive at all), to claim that the thermostat does not believe this.
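
as a toy version of what i mean by a thermostat’s “belief” (the numbers are just made up for the example):

```python
# a minimal thermostat rule: is attributing the "belief" that the room is
# too hot to this condition different in kind from attributing it to us?
SETPOINT = 21.0  # degrees C, an arbitrary choice for the example

def control(room_temp: float) -> str:
    if room_temp > SETPOINT:
        return "heat off"   # "believes" the room is too hot
    return "heat on"        # "believes" the room is too cold

print(control(23.5))  # -> heat off
```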

OK I haven’t read all the responses so sorry if this is a repeat:

Anyway, Searle seems to me to be defining “understanding” in terms of the subjective sensation of understanding, the feeling of “aha” when we figure something out. Since machines don’t have subjective feelings (as far as we know), they don’t have understanding.

But why exactly is the subjective sensation important at all? In fact it seems rather a minor part of the concept of intelligence. Suppose we designed a machine which could do brilliant things like composing great symphonies and discovering new mathematical theorems: does its lack of inner experience make it unintelligent? Why?

A related debate: do we really know whether other humans have subjective experience? This is just a common-sense assumption we make, not something that we can really prove.
The argument that goes:
a) I have subjective experience
b) other humans resemble me biologically
c) therefore they must also have subjective experience
isn’t really a good one, since I don’t know whether my own subjective experience is based on my biology.

So if we can’t really be sure whether other people have the subjective experience of understanding, Searle’s argument becomes irrelevant.

Hazel-rah wrote:

To borrow from Charles Barkley, I misquoted myself. I meant to say that the man was learning X (i.e., something that resembles both Chinese and English, but is neither).

More to the point, how would you know whether a computer had an inner experience?