Anyone else who doesn't accept that they are conscious?

So if I’m under the impression that I’m actually on Planet B8-18 hiding from the genocidal Zargons, I’m not delusional but unconscious?

You are just trying to cause trouble, no? I agree that the definition I grabbed from a medical dictionary website is no substitute for several years of medical school. Presumably there are objective criteria (e.g., DSM-IV) for ruling out a prior history of psychosis, which might entail a delusional worldview. That is distinct from the incoherent talking-while-asleep sort of state in which someone coming out of general anesthesia might mumble something about Planet B8; in that situation, yes, I find the term ‘unconscious’ reasonably appropriate.

Please posit a hypothetical universe that is otherwise identical to this one, but differs only in that it does not, in fact, exist. It’s only a hypothetical universe.

OK, now, please explain the difference between the universe that actually exists and the hypothetical one that does not, in fact, exist.

========

Please explain why we should not seriously entertain the conjecture that perhaps nothing whatsoever exists.

And please explain why that’s any less reasonable a conjecture than the conjecture that we exist but that our consciousness does not, in fact, exist.

I’m using weird examples but I’m not trying to be smart, I’m just getting confused. It seems like you’re straying from your original argument when you are willing to accept that there is a legitimate medical differentiation between conscious and unconscious.

If nothing exists, including you, then it is reasonable to conclude that you will not reply to this post.

We can’t even do this for CHESS, so how the heck are we going to simulate natural language with a gigantic set of pre-determined responses? From Wikipedia, one estimate of the game-tree complexity of chess is 10^123. Since the number of atoms in the observable universe is estimated at something like 10^80 (give or take a few orders of magnitude), it’s clear that solving chess by a brute-force decision tree would require violating the laws of physics.
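To put those two magnitudes side by side, a quick sketch; both figures are just the rough order-of-magnitude estimates quoted above, not precise values:

```python
# Rough comparison of one published chess game-tree estimate (~10^123)
# against a common estimate for the number of atoms in the observable
# universe (~10^80). Python integers are arbitrary-precision, so the
# arithmetic is exact for these round numbers.

game_tree_complexity = 10 ** 123   # one published estimate for chess
atoms_in_universe = 10 ** 80       # order-of-magnitude estimate

# Even at one branch per atom, the shortfall is astronomical:
shortfall = game_tree_complexity // atoms_in_universe
print(shortfall)            # ~10^43 universes' worth of atoms short
print(len(str(shortfall)))  # 44 digits, i.e. 10^43
```

So even granting one encoded branch per atom, you would still need roughly 10^43 universes' worth of atoms, which is the point about brute-force tabulation.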

Now, the set of possible human conversations has to be much larger than the number of possible chess games, because one logically possible human conversation is for two people to play out a chess game; therefore, if we tabulated every possible human conversation, a tiny subset of those conversations would consist of playing out every possible chess game.

So the Chinese Room can’t work, because it would have to be larger than the observable universe by several godzillion orders of magnitude, even if you could encode each branch on one atom.

One way of re-phrasing my original argument, is that I accept the clinical medical definition of consciousness, but do not accept most philosophical definitions (discussed on Consciousness - Wikipedia), specifically anything referring to “subjective experience”.

There is no doubt that there is such a thing as “brain activity”, and in the clinical setting it is useful to make some distinction between “waking” and “non-waking” brain activity. But that is separate from any philosophical discussion of whether human “waking” brain activity constitutes anything more than the ability to pass the turing test.

I thought I’d chime in here: while the set of possible human conversations is indeed large, an actual human need not “contain” that entire set. Consider a reasonably functional lookup-table chatterbot like Alice. The number of patterns in its lookup table is surprisingly small, on the order of 10K. Human patterns of thought are more predictable than we tend to imagine. Furthermore, it is not inconceivable that one could form a hybrid of a pattern-lookup table with a rather simple algorithm for generating new patterns when a rare situation demands it. Such a bot may be no Einstein pushing the boundary of human thought, but it is easy to imagine such a chatterbot engaging in a lifetime of dialog indistinguishable from that of your average sub-100-IQ human.
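A minimal sketch of that hybrid idea: a tiny pattern table plus a trivial fallback “generator.” All patterns and responses here are invented for illustration; a real Alice-style bot uses AIML with thousands of patterns.

```python
# Minimal pattern-lookup chatterbot in the spirit of Alice:
# a small table of (substring pattern -> canned response), plus a
# trivial fallback rule that "generates" a response by turning the
# input back into a question. Patterns/responses are invented.

PATTERNS = {
    "hello": "Hi there! How are you today?",
    "how are you": "I'm fine, thanks for asking.",
    "weather": "I hear it's lovely outside.",
}

def reply(utterance: str) -> str:
    text = utterance.lower()
    for pattern, response in PATTERNS.items():
        if pattern in text:
            return response
    # Fallback "generator" for inputs no pattern covers.
    return f"Why do you say: {utterance.rstrip('.!?')}?"

print(reply("Hello, bot"))    # Hi there! How are you today?
print(reply("I like chess"))  # Why do you say: I like chess?
```

The point is that the table stays small: most of the apparent breadth comes from the fallback rule, not from tabulating every possible input.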

I guess we really don’t know if we’re unconscious and somewhere else. Native Americans treated people who had hallucinations as sacred rather than crazy because they might just have access to knowledge that others don’t. I’ve seen young people with autism who zone out and I wonder where they are.

That’s not relevant, since Searle didn’t propose that the rules in the Chinese Room consist of pre-programmed responses to every possible input. The rules in the Chinese Room simply correspond to whatever program a Strong AI proponent thinks would suffice to make a computer running that program understand Chinese.

I’m sympathetic to Dennett’s line in some ways. I think that any “human being” in any actual “room” who could actually pull off Searle’s feat hardly fits any plausible concept of “human being” or “room.” I think that’s key: Searle’s argument relies on our intuitions about what human beings can understand, but the entity he describes is not actually a human being. It’s an infinitely patient, infinitely obedient, completely passive entity. It’s an automaton! The very kind of thing the argument is about in the first place!

So I’m sympathetic to an argument like Dennett’s, but a lot of his individual comments (including especially the ones you quoted) miss the mark. No infinite instruction set is required. Memory, recall, emotion, etc. can all be put right into the instruction set. The problem isn’t the simplicity of Searle’s scenario; the problem is just that if you take his scenario seriously, the thing he describes is no “human being” in any “room” at all. So intuitions about human beings and rooms go out the window.

Nothing in the scenario disallows this. It’s just a set of rules implementing whatever program you might think suffices for understanding.

Presumably Searle knew that the execution of a computer program requires a memory store.

In any case, you don’t actually need a writable memory apparatus to have something functionally equivalent to a memory. A finite (but of course humongous) ruleset could simply consist in a vast list of data arrays, each array corresponding to a particular neural state. Each data array can have associated with it rules, conditional on input, concerning what (if anything) to output and what data array to move to next.
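A toy sketch of that construction: each “data array” becomes a state label, and a fixed table maps (state, input) to (output, next state). The states and inputs here are made up purely for illustration; an actual ruleset of this kind would be astronomically larger.

```python
# Toy finite-state "room": no writable memory, just a fixed table
# mapping (state, input) -> (output, next_state). The state labels
# stand in for the "data arrays" above; the table itself never
# changes, yet past inputs are "remembered" via the current state.

TABLE = {
    ("start", "hi"):        ("hello", "greeted"),
    ("greeted", "hi"):      ("you already said hi", "greeted"),
    ("greeted", "repeat?"): ("you said: hi", "greeted"),
}

def step(state: str, token: str) -> tuple[str, str]:
    # Unknown (state, input) pairs fall back to a fixed shrug,
    # leaving the state unchanged.
    return TABLE.get((state, token), ("I don't understand", state))

state = "start"
out, state = step(state, "hi")        # -> "hello"
out2, state = step(state, "repeat?")  # -> "you said: hi"
print(out, "|", out2)
```

Note that the second reply depends on the first input even though nothing was written anywhere: the “memory” is encoded entirely in which state the machine happens to occupy.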

The Chinese Room does not contain a pre-programmed response for each individual possible input.

I understand your differentiation now, but are you rejecting subjective experience? Given that reality and how we perceive it can be very different from each other, our personal experience can’t be objective. Experiencing something can be described as how we process the information fed to us, whether it’s visual or auditory, etc., so we most definitely experience things.

I reject subjective experience, not just for myself, but in principle.

If that is how you define “subjective experience”, as “how we process information fed to us”, then I do not reject it, but it does seem like a rather misleading definition, since my calculator can “process information fed to it”, and I doubt you would agree it has “subjective experience.”

The calculator has no cognition, so no. Actually, maybe cognition is close to what you’re looking for in a definition of “conscious”?

I don’t understand your rejection of subjectivity. If I associate the color blue with happiness, does that mean blue is an objectively happy color? Does my preference of one kind of music over another make it objectively superior?

I’m not looking for a definition of consciousness. Aside from the clinical definition of the term (and any colloquialisms) what I am arguing is that I don’t see how the term can have any coherent definition beyond “a word used by humans in explanation of the phenomenon responsible for their verbal output.”

My answer to both questions is “no.” You are pointing out the same thing I am pointing out – that subjective statements must be treated with grave doubt by objective researchers.

Am I Xuan dreaming that I am a butterfly…or a butterfly dreaming that I am Xuan?

I respect Searle a great deal as a philosopher, but the design of the room suggests that maybe he doesn’t. Or perhaps he does, but doesn’t have a deep enough understanding of computational theory to grasp why that point is significant.

How would a Chinese Room *without* writable memory respond to a question like “Would you please repeat my previous statement?”

Or how about:

“When we were chatting earlier, did I mention I was a doctor?”

Couldn’t you say the same thing about free will? E.g., you say we have fake free will, not real free will: “I’m only interested in how humans make decisions!” But whether we have one or the other is a pretty big deal.

The troubling thing to me is that science could correlate all the content of the brain and it still might not be able to decide this question because of the inherent subjectivity of experience. It strikes me as being a stone’s throw away from dualism – there’s this vital property of the brain that isn’t accessible to empirical tools. Maybe in the end Occam’s razor will come to save the day again. “Subjectivity? My model works fine without it…”

No, it just means for you the internal label blue is associated with the brain state label happiness. This is all measurable by known tools of science. Your claimed vivid subjective experience, the what it’s like to feel happy in the presence of blue, is the problematic religious-esque stumbling block.

Except that it covers more than just verbal output, what’s wrong with that definition?

That’s the meaning of subjective, something that has no basis in reality beyond your own opinion. In that sense, subjectivity can’t magically manifest itself as some alternate reality, but no one’s saying it does.