Anyone else who doesn't accept that they are conscious?

Which definition? The clinical one? A lot of people here would argue that there is more to “consciousness” than “a state in which the individual is capable of rational response to questioning and has all protective reflexes intact, including the ability to maintain a patent airway”. Read the other posts. They promote something they call “subjective experience”, and “qualia”.

Wha? I’m sorry I don’t follow you… I don’t contend that subjectivity can “magically manifest itself as some alternate reality,” although I’m not sure what you mean…

I meant your definition of it as “a word used by humans in explanation of the phenomenon responsible for their verbal output.” A phenomenon is, by definition, observably real. Whether we fully understand it or not is an unrelated issue.

I didn’t mean you contended that. I meant that my understanding of your argument is that because subjectivity cannot be taken into consideration in empirical research, it doesn’t exist. If that were true, then that leaves objectivity and nonexistence, both of which disregard our own take on things. Morality is subjective, taste in food/music/men/women is subjective, etc., not to mention our differing opinions on this very subject.

I don’t know what to say to this. I don’t know how someone could claim to respect someone as a Philosopher and at the same time attribute such boneheaded thoughtless ignorance to them.

I just told you. One way to do it would be to have each rule consist of a neural state and a set of conditional responses depending on input–where a response includes both (optionally) output and (necessarily) movement to another neural state. And you have a rule for each of the hugely-yet-finitely numerous possible neural states.

Unless you’re going for some really super strong version of Extended Cognition, I don’t know what else you think could be involved in everyday occurrences of memory like the one you’re talking about.

If anyone’s tempted toward thinking that it’s impossible to capture every possible input with a finite number of such rules, remember that while input in Chinese (or any language) may be infinitely variable, it is composed of a finite number of symbols arranged according to a finite number of rules, meaning any of those infinite possible inputs can be analyzed in a finite number of steps.
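The rule scheme described above can be sketched as a plain lookup table: each rule maps a (state, input symbol) pair to an optional output plus a mandatory next state. The states and symbols below are hypothetical placeholders, not anything from Searle’s actual setup:

```python
# A rule table: (state, input_symbol) -> (optional_output, next_state).
# States and symbols here are made-up placeholders.
RULES = {
    ("s0", "你好"): ("你好", "s1"),  # greet back, move to state s1
    ("s0", "再见"): (None, "s0"),    # no output, stay in s0
    ("s1", "再见"): ("再见", "s0"),  # say goodbye, return to s0
}

def step(state, symbol):
    """Apply one rule; unknown (state, symbol) pairs fall back to a default."""
    output, next_state = RULES.get((state, symbol), (None, "s0"))
    return output, next_state

output, state = step("s0", "你好")  # -> ("你好", "s1")
```

The point of the sketch is only that the rulebook is finite and mechanical: every response is fixed in advance by the current state and the incoming symbol.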

There sure is. And for that matter, it’s possible to be in that “rational response” state without being conscious. For example, hypnosis.

That there is a phenomenon responsible for the output is not contested. I call that phenomenon “computation.” Others submit that there is more to it than that.

That is a misunderstanding of my argument. I’m saying that the term “subjective experience” is ill-defined and ultimately meaningless. “Subjectivity,” when referring to one’s (often unsubstantiated) opinion on some matter, is a well-defined term that I do not object to.

Exactly. Taste in food has no truth value beyond what it means: I, as a deterministic process, am determined to prefer some inputs over some other inputs. The word ‘taste’ describes that deterministic process. Others, however, would have you believe they really ‘taste’ something, as though it is something more than I have defined. They impute more meaning to the word than I do. Similarly, I, as a computer, compute. The word ‘consciousness’ could be used to describe this fact, but others would impute more to the word ‘consciousness’ than I have.

Well, some would disagree with your characterization of hypnosis. From Wikipedia:

If you’re using neural states to encode information that the room has previously received, the number of necessary states quickly becomes larger than the number of particles in the observable universe.

It’s a problem of combinatorics. Say you’re trying to encode two basic utterances to use later on:

“My name is Jenny.”

“My number is 867-5309.”

How many possible human names are there? 1,000? How many phone numbers? 10,000,000? So in order to just be prepared to store all the possible combinations of those two chunks of information you need 10 billion neural states. Now imagine how many neural states you need if you don’t limit input to just one name and one phone number.
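The blow-up is easy to check directly, using the figures from the post above; each additional independent chunk of information multiplies the required state count again (the 10,000 “addresses” figure is a made-up illustration):

```python
# Distinct neural states needed to pre-encode every combination of
# independently varying chunks of input.
names = 1_000               # assumed number of possible names
phone_numbers = 10_000_000  # assumed number of possible phone numbers

states_for_two_chunks = names * phone_numbers
print(states_for_two_chunks)  # 10000000000 -- 10 billion

# Add one more independent chunk (say, 10,000 possible street addresses)
# and the requirement multiplies again:
addresses = 10_000
print(states_for_two_chunks * addresses)  # 100000000000000 -- 100 trillion
```

This is why encoding memory purely in pre-enumerated states, rather than in a writable data store, explodes combinatorially.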

In fact, without some form of data store, Searle’s Room is identical to Borges’s Library – a system that a priori encodes all possible combinations of all possible utterances.

However, this particular argument is probably moot. In the Wikipedia page on The Chinese Room it mentions that the occupant has pencil and paper as well as his reference book … in other words, a data store. My recollection of the set-up of the room was wrong, and thus my objection to Searle’s argument is limited to the equation of the occupant’s understanding with the system’s understanding.

(I suspect I also disagree with his conception of intentionality … in other words, what it means to “understand” something … but that’s a much broader discussion.)

That’s not in conflict with my understanding (except for what I would call their misleading use of the word “unconsciousness”).

Wakefulness and attention do not require consciousness, nor vice-versa.

Consciousness is interior awareness, the narrative of the person within himself, in the moment. A person deep in his own thoughts may be highly conscious but not very attentive to things around him. Typically we are unconscious in sleep, but lucid dreams are an exception (though a lucid dreamer is still, in fact, asleep, and usually unresponsive to external events).

Conversely, the hypnotized, the schizophrenic, the sleepwalker* may be able to perceive and interact with the physical world and other people without any self-awareness at all.

Of course, I understand these distinctions are difficult for a nonconscious person such as yourself to perceive. :wink:

  • Recognizing that there is variation in all these conditions. Certain modes of drug use can induce similar states, which may later be perceived (consciously) as a “blackout,” through which the person may have been walking and talking and so forth, perfectly awake, but not conscious.

I always thought the two were the same thing, which is where the confusion came in I suppose. If they are indeed different, then I plead the same ignorance as you. What is subjective experience?

The objective is accessible from all points of view. The subjective is accessible only from some (perhaps sometimes as few as one) points of view.

A colorblind scientist may know absolutely everything there is to know, objectively, about the color red. He may know its wavelengths, the physiology of the eye, the patterns red things cause in the brain, the symbolic associations it brings for people of various cultures, and so on–everything that everyone could know about red. But, many people think, there’s still something missing. There’s a further fact that he hasn’t learned by learning all that. Once his colorblindness is cured, and he actually sees the color red for the first time, he’s learned a new fact: “This is what it’s like to see red,” and that fact is accessible only from particular points of view. It’s part of subjective experience.

Why do you think that’s relevant?

Hrm… I actually don’t think that’s stated explicitly in those works where Searle has set it up. But I don’t have those ready to hand right now so I’m probably misremembering.

To which Searle replies, “Sure. Just stick the entire system inside a single person now.”

I see it as subjective experience=subjectivity. iamnotbatman is distinguishing between the two somehow, but I just thought they were two ways of saying the same thing. Qualia is a matter of subjective experience. Doesn’t that make it a matter of subjectivity? I see no difference.

Note that I’m not saying subjective experience transcends the physical somehow; it’s merely how you or I, as individuals, process input information and respond accordingly.

Well, like I said, what iamnotbatman is talking about (I think) is the qualia-laden character of experience–the aspect of subjective experience which, to the subject, seems to be most intimately and unsharably “his”.

Basically, many people think that “this is what it’s like to see red” is a fact that must be added to the set of all objective facts in order to make for a complete account of the world, while many others including iamnotbatman think there’s no such fact.

Oh, I understand. I agree that qualia is objectively meaningless, but I don’t see what that has to do with being conscious. iamnotbatman, did you just see it as another way of saying subjective experience?

If Frylock’s assessment of your position is accurate, then I agree. I just had no idea what you were talking about at first.

Some hypnotists, especially charlatans, would have you believe that hypnosis involves the reduction of the patient’s self-awareness. This may even be the mainstream viewpoint, but not overwhelmingly so. A great many think that’s bullshit.

touché

Yes, I think Frylock does a good job of characterizing my viewpoint.

Blut Aus Nord, it’s possible I am wrong, but I have always understood the term ‘consciousness’ (as used by philosophers, for example, as opposed to its use by boxers or anesthesiologists) to include the concept of “subjective experience” as a fundamentally defining component. Incidentally, Wikipedia’s article on consciousness uses the term “subjective experience” at the top of its list of definitions.

You’re right. It has a rulebook.
From Searle:

OK, it’s not just a *list* of pre-programmed responses, but it really is a deceptively too-simple system, like Dennett says. There’s no way a simple system like that would be able to simulate understanding of a language. It doesn’t have world knowledge. And I do not agree that you could put emotion in an instruction set.

But Searle is wrong from his original premises, anyway. From his very first one (in his more formal argument):
“Programs are formal (syntactic)” - Not true. Any programming language worth anything has plenty of semantic content or space for it, in the form of pointers, digraphs, and other semantic data models. Whole languages have been written just to increase the semantic information content possible, like Gellish, but even your C pointers will do.
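As an illustration of the kind of structure being gestured at here (a hypothetical sketch, not anything from Searle or Gellish): references let a program encode relations between entities, not just strings of symbols. Following a reference recovers relational information rather than mere syntax:

```python
# A tiny directed graph of semantic relations, built out of references.
# Node names and relation labels are made-up examples.
class Node:
    def __init__(self, name):
        self.name = name
        self.relations = {}  # relation label -> referenced Node

    def relate(self, label, other):
        self.relations[label] = other

dog = Node("dog")
animal = Node("animal")
dog.relate("is_a", animal)  # the reference itself encodes the relation

print(dog.relations["is_a"].name)  # animal
```

Whether this counts as “semantics” in Searle’s sense is of course exactly what’s in dispute; the sketch only shows that programs can carry structured relational content, not bare symbol strings.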

Then you’re not a proponent of the Strong AI thesis, and the argument’s not addressed to your view.

Strong AI is the thesis that all you need in order to get understanding is the execution of the correct program.

A computer executing a program is a simple system like that. You agree with Searle, then, that you can’t get a computer to understand just by giving it the right program.

I don’t know exactly what you mean here, but why can’t “world knowledge” be contained in the instruction set?

Searle almost certainly agrees with you here of course!

Do you know Searle’s distinction (which you don’t necessarily have to agree with–I don’t, really) between original and derived content?

That’s not what Searle says:

Obviously he can’t mean that a Strong AI program could be run on a ZX Spectrum. So Strong AI is about the computer too, not just the program.

No, a computer executing a program is not a simple system like that.

I see where Searle was going, referencing Turing completeness, but that’s a theoretical construct, not a real-world computer. And infinite time and infinite memory storage do not a simple computer make, so even that is sleight-of-hand on his part.

Because it changes, moment to moment. Today, the *appropriate* Chinese response to “Hello, Comrade” might be “Back at ya, Comrade”; *tomorrow* it might be “Get lost, Commie anachronism”. Is the instruction book being constantly updated? Or is it infinite? It’d have to be.

So he knows his own intuition pump is bunk. He attacks a straw man version of strong AI that no one seriously proposes anymore. A Strong AI system (artificial general intelligence) would have to be the whole system, not just the code that’s run. I don’t know anyone who says different. Which is why I said the initial proposition is bunk.

You mean original vs derived intentionality? I’m aware of it, but I think it’s nonsense. Stealth dualism, in fact.