The concept of a philosophical zombie makes no sense to me

One half of me wants to scream “EQUIVOCATION FALLACY”, the other half wants to buy you a beer.

I know I am conscious because it is I who knows it. I don’t know that you are conscious. You might be an automaton cleverly programmed to react exactly as a conscious being would. You might be, but if I want to make reasonable predictions as to how you will respond to something, I will do so by assuming you are conscious since you react exactly as a conscious being would. That leaves an irresolvable quandary.

When AI finally produces a “HAL” or a Data, this problem will become real. Some people, especially in CS, believe this will eventually come about. So do I. Some philosophers deny it as a possibility. Look up Searle’s “Chinese room”. If a person with no knowledge of Chinese is in a room in which every possible utterance of Chinese is paired with an English equivalent, he could produce a translation of every English text and yet not know a word of Chinese. The flaw in this thought experiment is that the number of possible utterances in any human language is infinite (unbounded), and such a room is impossible.
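
To make the lookup-table picture concrete, here is a toy sketch in Python (the phrase pairs are invented for illustration, and of course a real table of this kind could never be finished, which is exactly the flaw):

```python
# Toy version of the lookup-table picture of the Chinese room: every
# utterance the builders thought of is paired with an English equivalent,
# and the operator only matches symbols, understanding nothing.
# (Phrase pairs are invented for illustration.)
PHRASE_TABLE = {
    "你好": "Hello",
    "你叫什么名字？": "What is your name?",
    "我要向房间里开枪。": "I am going to fire a gun into the room.",
}

def room_lookup(utterance: str) -> str:
    # A finite table only covers utterances someone listed in advance;
    # since the possible utterances of a language are unbounded, any
    # such table falls silent on novel input.
    return PHRASE_TABLE.get(utterance, "???")

print(room_lookup("你好"))              # -> Hello
print(room_lookup("昨天你去哪儿了？"))    # -> ??? (not in the table)
```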

Honestly, it seems to me to be an essentially religious dispute.

We ought to return to this thread in about ten years.

You’re correct, which is why philosophical zombies are a dumb argument. I don’t think they convince anyone of anything.

The concept of a philosophical zombie is intended to focus debate on consciousness. What does consciousness do? Is it possible for a creature to act like a human without being conscious? It’s not intended to prove a particular position, but rather to clarify a field of argument.

Not so fast my friend.

Even if p-zombies truly exist, it does not follow that physicalism is false.

It could mean that there are multiple ways to arrive at our exact behavior. A being without consciousness that behaves exactly like us could possibly be 1 method out of 19 million, but the other 18.9 million configurations end up with consciousness.

It’s entirely possible that evolution stumbled on the easiest/most prevalent method of arriving at our behavior.

I don’t think the existence of p-zombies really excludes any other possibilities.

I have recently unpacked some books, and I found my copy of *The Cyberiad* by Stanisław Lem, and so I have an irrepressible urge to post in threads about consciousness and automaton theory. I have no free will in this matter; I MUST POST.

Searle’s argument was a little more subtle than this. He argued that if you had a computer program that could converse in Chinese, and you gave it, written out as a book of instructions, to a person in a room, along with a pencil with an eraser*, the person could use the book to converse in Chinese; surely you wouldn’t say the person thereby understands Chinese?

The response to this, of course, is that we don’t say the person can understand Chinese, but we can say the whole system, of person, room, book, and pencil, does understand Chinese.

*Searle evidently did not include the pencil with eraser, which is vital to his argument, but it changes neither the argument nor the fatal flaw to go ahead and include it.

I disagree that a good response is “the whole system understands” Chinese.

To me, it seems “understand” means that the input has been transformed into an internal model that substantially reflects reality and allows for all kinds of further relationships, inferences, data extraction, and general manipulation.

Just getting the right answer is not understanding; understanding is understanding. The correct output is a “symptom” of understanding, an external clue that there is understanding internally, but the external part is not the thing itself.

If the input to the Chinese room is “I’m going to fire this gun into the room,” then a clue that the man/system understands Chinese would be the man inside taking cover.

Two behaviorists have sex. The woman says to the man, “It was great for you. How was it for me?”

One point of p-zombies is, like so many of these thought experiments, to highlight that all third person accounts of the brain and consciousness leave something critical out of the equation. From the outside it seems humans are like any other determined physical system, but from the inside it seems there’s a grand tapestry of freedom and ineffable subjective things happening.

The problem of other minds is a problem because you can’t prove anything, which is why everyone comes up with these weird thought experiments like Twin Earth or Mary’s room or what it’s like to be a bat or p-zombies (which is about catching physicalists in a bind).

You can’t crack open someone’s head and say oh yeah, there’s the consciousness juice spraying everywhere. You could crack my skull open and tell me it’s full of candy corn. Cool, but I won’t stop thinking I have subjective experiences.

The Chinese room is an argument against the idea that a simulation of consciousness (specifically on a symbol manipulating computer) is consciousness. The argument says there’s no way to get from the syntax, the symbols, to the semantics. Your calculator doesn’t understand math. Deep Blue doesn’t understand chess. Watson doesn’t understand Jeopardy or medical conditions. You can make a calculator person, like the Chinese room, that doesn’t understand anything, but people will think it’s conscious. The argument shows why it’s not. Or you can dive into the vortex and claim that yes, the whole system is conscious.

Declaring it impossible because someone can’t manipulate billions of scrolls of paper and rummage through all the cabinets and so on is missing the point. A lot of thought experiments are physically improbable.

I would say that the philosophical zombie is inherently impossible, on the presumption that consciousness, a sense of self, etc. is indispensable to acting in a manner indistinguishable from a human being. Mathematician Roger Penrose took this position in his book *The Emperor’s New Mind*, when he raised the question of why consciousness exists at all. If it’s not supernatural, then it must do something that can’t be done by unconscious reflexes or algorithms.

It’s possible there are multiple solutions with the same output but the ones with consciousness use less energy/are less complex and get selected for in nature.

No, but it’s quite possible that we’ll eventually understand the brain enough to know exactly why it’s conscious. In that case, yes, we’ll be able to tell you exactly what in there is responsible for consciousness.

Actually it’s not really missing the point, because the point of the Chinese Room is to convince people that AI is impossible because surely nothing so simple as a room full of scrolls could duplicate such a thing as human comprehension. Even the name “Room” is picked to give people a false image of what would be required; something simple, something small. Here’s a good quote I just ran across on the matter:

Not really; that would be like expecting my hippocampus to try to dodge a bullet by itself if I knew someone was trying to shoot me in the head. If the entire system knows Chinese that doesn’t mean the man in the room understands Chinese any more than my understanding English means that a particular neuron in my brain understands English.

But a real person can’t know all of the infinite possible utterances either, so, if that’s a requirement for “knowing Chinese,” no one on earth knows Chinese!

An operational “Chinese Room” would “know Chinese” well enough to get by. It would be able to converse at some level. Even schoolboy Chinese would be enough to establish the principle.

(Just as some of the earliest chess-playing programs, long before they attained Grandmaster levels of competence, were enough to show that, yes, computers can play chess. I remember when certain persons very firmly declared that this could never be possible.)

If I say, “I’m going to fire a gun into your head,” does your brain take cover?

But he would, wouldn’t he? After all, when you give the man a statement in Chinese, he doesn’t inherently understand the statement, but he can still parse it. And when the English comes out, you can bet he’s going to duck and cover. Now, if the system is so constructed that the man in the room doesn’t get to see his English translation… that still does not imply the system doesn’t understand Chinese. It implies that that specific part of the system does not.

Isn’t the “philosophical zombie” merely the opposite of (or counter to) the “ghost in the machine”?

Your brain (the system) would take whatever action is possible to minimize damage. Your brain can’t really move individual neurons out of the path of the bullet without moving the entire system, so it would move the entire system.

The Chinese room doesn’t have the same capabilities to control movement, merely input and output, so maybe a different example is required.

Regardless, we know we can describe scenarios that get the output 100% correct without any understanding (for example, a formulaic, pseudo-random sequence that just happens to match the correct output), which means output alone is not enough to gauge understanding.

My understanding is that the translation from input to output is Chinese to Chinese, no English in between.

I just went back and re-read Searle’s Chinese room and I was remembering the thought experiment incorrectly.

My memory was of a predefined mapping of inputs to outputs, but his argument actually specifies following a computer program that takes the input symbols and processes them so that appropriate output symbols are generated.

In this case it’s entirely possible the system does understand Chinese and it’s entirely possible it doesn’t. It all depends on the specifics of the program.

Well, not really.

  1. We could assign the Chinese Room some control devices, such as the ability to move a mouse-pointer on a computer screen, or robot manipulators. There’s no reason not to expand the concept into the physical world.

  2. Or, we could just accept the limitation that it is a text-based simulation, so that, if you say to it, “I’m going to fire a gun at you,” it will respond the way a person might, in text: “Please don’t” or “Holy Moly, what did I do to make you angry?” or “Help, police!” (Or even, “What’s the point of that, given that I’m a disembodied thought-experiment?”)

I disagree; it does not depend on the specifics of the program. When considering what function a system computes, only the inputs and outputs matter. For example, you could have two machines, each of which accepts a pair of two-digit numbers and outputs their product. One machine has a kind of mechanism for performing shift-add multiplication, and the other merely consults a 10,000-cell lookup table. Both machines compute exactly the same function, and it is equally true of both to say that they “know” the multiplication of two-digit numbers.
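
Here is a quick sketch of those two machines in Python, just to make the comparison concrete (the lookup table is built by brute force here; in the thought experiment the filled-in table would simply be given):

```python
# Two machines computing the same function: the product of a pair of
# two-digit numbers. From the outside they are indistinguishable; only
# the internal mechanism differs. (Illustrative sketch only.)

def multiply_shift_add(a: int, b: int) -> int:
    """Shift-and-add multiplication: add a shifted copy of a for each
    set bit of b (assumes non-negative operands)."""
    result = 0
    while b:
        if b & 1:
            result += a
        a <<= 1
        b >>= 1
    return result

# The second machine: a 10,000-cell lookup table, one entry per pair of
# operands in 0..99. (Built by brute force here; in the thought
# experiment the completed table is simply handed over.)
LOOKUP = {(a, b): a * b for a in range(100) for b in range(100)}

def multiply_lookup(a: int, b: int) -> int:
    return LOOKUP[(a, b)]

# Same inputs, same outputs: as functions, the two machines are identical.
assert all(multiply_shift_add(a, b) == multiply_lookup(a, b)
           for a in range(100) for b in range(100))
print(multiply_shift_add(42, 57), multiply_lookup(42, 57))  # 2394 2394
```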

Likewise, the Chinese Room, whether it contains a nuanced program for parsing and representing Chinese in order to produce its responses, or whether it contains a (physically improbable) table with the “sensible” response for every possible sequence of inputs, can be equally correctly said to “know” Chinese.