Anyone else who doesn't accept that they are conscious?

:smiley:

I also don’t accept that I am couscous. Much prefer being rice!

I am in the same situation as you, and I come to a different conclusion. What if I am nothing more than a computer programmed to say I am conscious? What would that mean? Would I still say things like you just did? Yes. Would I think I was “experiencing it”? Yes! As long as the computer was programmed to say it was conscious to itself in elaborate ways. In fact, being a student of physics, I think it is obvious that we are nothing more than a complicated computer, and that through nature and nurture we have been programmed. A complicated dialog takes place internally, all programmed, in which I tell myself I am conscious, when really, I am in control of nothing, and experience nothing.

If you “think you are experiencing” a subjective sensation, then you are.

What makes you think that being a computer means you can’t be conscious? What makes you think being programmed (even rigidly programmed much less the self-adapting kind we are composed of) means you can’t be conscious?

Ask me again when I’m awake.

The problem with denying consciousness is that you can’t express this opinion without contradicting yourself . . . without using the very faculty and terminology that you’re denying. The fact that we are discussing this subject implies that we are conscious of each other. Try convincing me of the nonexistence of consciousness without any implication that you - and I - are conscious.

What is this “subjective experiential awareness” that the computer would not have? If you deny that these terms have meaning, what is the content of your own assertion?

I don’t see the relevance of the free-will issue. A computer’s behavior is determined, but that, in and of itself, doesn’t make me more skeptical of its reports of its own internal states. Rather, my credence in its reports is based on my understanding of the causal connection between its internal states and its external reports. This causal connection may make the external reports more or less accurate. But the computer’s determinedness, per se, doesn’t make me consider its self-reporting to be any less reliable or meaningful. And this continues to be true in the particular case where the computer is a human.

One of my very first threads on the board:

Most of this stuff got pounded out in that thread, almost four years ago, if you’re interested. Of course, no reason not to pound it out again.

Free will is not consciousness, and to be honest I think the OP is very obviously confusing the two concepts.

I agree. There’s no reason I can see to think that determinism would affect whether or not something is conscious. For that matter, people with compulsive mental disorders are conscious.

I don’t think that the OP is confusing the two concepts. I think the argument is something like this:

[ol]
[li]When I utter “I possess consciousness”, my utterance is a deterministic consequence of a bunch of physical facts that were true just prior to my utterance.[/li]
[li]These physical facts are things like “This neuron was in this state, and that neuron was in that state.” In particular, the facts were about non-conscious things such as neurons. The facts don’t say anything about consciousness.[/li]
[li]Since the physical facts don’t say anything about consciousness, they all could be true or false whether or not any consciousness is actually present. In other words, the truth or falsity of the physical facts is independent of the presence of consciousness.[/li]
[li]Therefore, the fact that the utterance occurred is not evidence for the presence of consciousness.[/li]
[/ol]
For me, the problem with the argument is in Step 3. The physical facts might not mention consciousness, but we could still have good evidence for the belief that their truth suffices to imply the presence of consciousness.

Not if by “think” I mean simply that I can tell myself that I am conscious. There is a subtle but very important distinction between the two.

Because the term “consciousness” has no objective meaning. It is unscientific. There would be no way of proving whether a computer is conscious as opposed to just claiming it is conscious. I go with Occam’s razor.

If you think about a computer with predetermined behavior equivalent to a look-up table, of course if it says it is conscious it is merely programmed to. On the other hand, if you try to give the computer some form of perceived free will, it’s going to find use for a name for that “experience”. Without free will, it seems always reducible to something isomorphic to the look-up table.
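To make the look-up-table picture concrete, here is a toy sketch of such a “computer”: a fixed mapping from inputs to canned outputs, with nothing behind the answers. All the names and canned phrases here are hypothetical, chosen only for illustration.

```python
# A toy look-up-table "chatbot": every reply is predetermined, so its
# claim to be conscious is just a table entry, not a report of experience.

LOOKUP_TABLE = {
    "are you conscious?": "Yes, I am conscious.",
    "what are you experiencing?": "I am experiencing this conversation.",
    "how do you know?": "I have immediate awareness of my own states.",
}


def reply(prompt: str) -> str:
    """Return the canned answer for a prompt; unknown prompts get a default."""
    return LOOKUP_TABLE.get(prompt.lower().strip(), "Tell me more.")


print(reply("Are you conscious?"))  # the table says "yes" no matter what
```

The point of the sketch is that nothing in the table's behavior distinguishes "programmed to say it is conscious" from "conscious" — which is exactly the reduction being claimed above.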

iamnotbatman, I think that you would enjoy this sequence of blog posts by the AI researcher Eliezer Yudkowsky.

Where have I contradicted myself, specifically, without also contradicting that which I am attacking? Read up on your proof by contradiction, if you don’t know what I’m getting at.

And no, the fact that we are discussing this subject DOES NOT imply that we are conscious of one another. Have you read the rest of this thread? Two very stupid computers can be programmed to have discussions like we are having now (as an example, someone earlier in this thread mentioned some of the lookup-table-based chatbots, which have come a long way)…

I don’t disagree with your conclusions about free will. I do with what you’re saying about predictability. You mentioned Newton but ignored Heisenberg.

Exactly! The terms have no meaning (hence the quotes I put around them). But if others assign meaning to them, I must discuss them as I have. If you deny their meaning, as I do, then the validity of my logic is irrelevant – my case is won.

Hmm, is Alice (the chatbot) really conscious? At what point, as computers grow more complex, will we begin to trust their assertions of consciousness?

People with compulsive mental disorders are supposedly aware of their compulsions, and their relationship to them, and are often annoyed they cannot stop. Not a good analogy.

And yes, there is reason to believe there is a fundamental connection between determinism and consciousness. Just google the two terms and see how much has been written about it (both by morons and scholars).

Wow, it’s long. I’m reading it, looks great!

Thanks, I don’t think I’m confusing the two concepts, although I think that they have very strong logical implications for each other. I agree more or less with your bullet points. My issue with your issue with step 3 is that the ‘evidence’ we have is necessarily unscientific. It is also very difficult for me to reconcile step 3 with a phenomenon outside the description given by physics, without invoking consciousness as a supernatural phenomenon. I have a hard time arguing against consciousness as a natural phenomenon, because I don’t even know where to start – it just makes no sense naturalistically from any angle, and no one can even describe it without invoking a “because I said so” form of argument. “I experience it, isn’t that enough?”

The most obvious problem with the eliminativism the OP has put forth (which is basically what he’s arguing) is that we can imagine the existence of creatures exactly like us physically, yet who lack any qualia at all. They, like us, would insist that they are conscious, yet unlike us they would be mistaken.

From David Chalmers’ The Conscious Mind, p. 188 (I highly recommend that the OP read this book, as Chalmers thoroughly discusses the issues with the eliminativist position; in fact, this book has more or less taken up a permanent position on my nightstand, and I regularly flip around in it and chew on some tidbit or idea):

“…But consciousness is not an explanatory construct, postulated to help explain behavior or events in the world.* Rather, it is a brute explanandum, a phenomenon in its own right that is in need of an explanation. It therefore does not matter if it turns out that consciousness is not required to do any work in explaining other phenomena. Our evidence for consciousness never lay with these other phenomena in the first place…Our experiences of red do not go away upon making such a denial. It is still like something to be us, and that is still something that needs explanation. To throw out consciousness itself as a result of the paradox of phenomenal judgement would be to throw the baby out with the bathwater.”

*At least, that is how I am viewing it, but the OP might be taking a strict functional view of things (i.e. “If it fulfills no functional use then it cannot exist”). But even once you’ve delineated all the functions of the brain, there is still something which needs to be explained.

As Chalmers later says, “On the face of it, we do not just judge that we have conscious experiences-we know that we have conscious experiences…To take consciousness seriously is to accept that we have immediate evidence that rules out its nonexistence.”

You didn’t put quotes around them in the lines that I was responding to.

You seem to think that others are imputing something to your awareness of irony beyond what you see in it. Is that right?

You were right when you said that consciousness is not a scientific term. We will need to understand consciousness better before we can reliably identify whether a computer has it. There is something about some of the processing in your brain that makes it accessible to your introspection and verbal reporting. But we don’t know what that “something” is, so we don’t yet know how to identify it if it shows up in a radically different substrate.

Right now, to identify consciousness in X, my consciousness-identifying algorithm requires that either (1) X is my brain, or (2) X is similar to my brain in ways that I’ve come to associate with consciousness in my own brain. In the first case, my consciousness-identifying algorithm works because it receives input of a kind that it can’t get from other things. In the second case, I am generalizing from the first case. But since I don’t really know what the underlying cause of my observations in the first case was, I have to be less confident.

There is an analogy with how we would have to talk about color in a pre-scientific society. What is the difference between red and green? Just what is it about the process of looking at a red ball that I use to infer that the ball is red? In a pre-scientific society, there would be nothing about the red ball that you could tell me in advance that would allow me to conclude that the ball was red, unless you just straight-out told me the color.

Suppose that I know the process by which the ball was produced. Now you tell me that another ball was produced by a very, very similar process. I could then reliably guess that the second ball was also red, even if I couldn’t see it. But if you told me that there was a piece of metal, and you told me the process by which the metal was produced, I couldn’t (being pre-scientific) guess reliably whether the metal was red. I wouldn’t yet understand the physical causes of redness at a deep enough level to infer whether some process new to me would cause redness.

Nonetheless, color would still be a useful thing to talk about, and I would be justified in supposing that my awareness of redness (vs. greenness) points towards underlying causes, which I could hope to understand in a scientific way in the future.