Anyone else who doesn't accept that they are conscious?

I don’t think the OP is disputing any of this. The hard problem of consciousness accepts this too. The problem is why any of this would lead to a rich inner life, or the "what it’s like"-ness of experiencing the world. Presumably, if we made a robot that could be aware of its surroundings and able to make decisions, plan for the future, and be self-aware, there’s no reason for it to have an inner world like we do (or think we do, as the OP would say, I think).

No, but we could impose such a mechanism on them. Evolution imposed it on mammals, whether it is real or a useful illusion. Either way, its persistence and growth indicate that it must increase fitness somehow, or that it is related to something else that does.

No. The mirage is the perception event. Without it, the photons of light that would have been perceived as a mirage are just going to go off into the environment.

I disagree. You have more than a state awareness; you also have a process awareness. You are aware of changes of state over time, including internal states, such as system-favoured responses to stimuli (feelings, if you will). It is in this internal awareness of internal processes, time-linked but not time-bound, and occurring in multiple drafts, that consciousness lies. It is our abstraction of these processes after the fact that constitutes consciousness. In other words, consciousness could in some sense only really be said to exist when we report it - even if only to ourselves. And by ourselves, I mean the multitude of facets that make up us.

In throwing out qualia, we do not throw out consciousness entirely. But we perhaps need to understand that the “I” considered in the traditional “Cartesian Theatre” model of consciousness is a fiction, and not even a useful one at that. It imagines consciousness to sit behind some sort of barrier from our experiences, as though we could draw a line between consciousness and the experiences we are conscious of. Dennett (and I) believe there’s no such line. The nature of perception and memory makes it completely impossible to separate out sense from percept from thought from memory.

But that doesn’t mean consciousness doesn’t exist. It just means it exists at a different level from the actual physical one. An abstraction level, as Dennett calls it.

I think you have the notion that consciousness is binary - you either have it or you don’t. But it isn’t like that, it’s a continuum. Animals, for instance, have it to varying degrees, from none in the cockroach through some in the dog to almost human-like levels in the chimpanzee. And yes, a sufficiently self-aware AI would be conscious, although I’m not aware of any current examples that have more than a cockroach level of awareness of self.

Keep in mind that I reject consciousness completely, so the issue of it being continuous (a point I would otherwise agree with) is irrelevant to me.

All of those things you listed that humans exhibit can also be applied to some computers. I agree that consciousness only “exists” when we report it. But it seems to me that that is just another way of defining consciousness as merely being the ability to report it. The ability to report something is a characteristic of rather simple computational systems that I would not refer to as possessing consciousness in the way I think you intend.

A certain degree of complexity and flexibility of access to that kind of information seems to me to fit the bill for consciousness pretty exactly.

Do others in this thread not think so?

As for qualia, my own pet theory is that they are kind of an illusion, in that they’re not really reflective of our experiences of things, but reflective of our experiences of experiences of things. They’re what our experiences look like to us when we reflect on them, either presently or in memory. And based on this I hypothesize (and have no idea how to confirm or disconfirm) that qualia are only present in the mind when the mind is attending to some sensory experience or other. At other times, no qualia are present to the mind–though later on, they will seem to have been present if we reflect on past sensory experiences through memory.

In other words, I can have a whole stream of sensory experiences, perhaps for hours at a time, without experiencing qualia–but if I start to think about the experiences I just had, I’ll seem to myself to have experienced qualia. (And if, while having the sensory experiences, I start paying attention to the experience as opposed to the things I’m sensing, only then will I actually experience qualia. Without the attention, no qualia.)

I suppose a further prediction of this is that we should be able to find people who are (or induce a state in people such that they become) “qualia blind”–able to have coherent sensory experiences which they are reactive to in every normal way, except that they fail to be able to report having any qualia. They should say strange things like “I don’t actually see anything there… no colors, no light or dark… but I know there’s a ball in front of me” when presented visually with a ball.

I guess blindsight might have something to do with something like that…

I’m unaware of any AI systems that report on their internal states and also report on their changes in internal state in response to those internal states, the way humans report on their feelings. Especially along the lines of the essentially chaotic/freeform Multiple Drafts Model, which would be necessary for consciousness as Dennett outlines it.

But yes, you are right, consciousness really is the reporting of itself, a social event (even if only in the internal society of the Mind). Look up the etymology of the word sometime.

Surely chatbots like Alice give the illusion of such reporting. But I admit that it would be theoretically possible to differentiate between those that only appear to be reporting changes in internal state in response to those internal states and those that actually do. It is, however, not clear to me that there is any ultimate difference between the two, since I would guess the two are, in any sense relevant to this discussion, completely isomorphic - but that is just a WAG.

In any case, many machines that humans produce, simple or otherwise, can report on changes in internal state in response to those internal states. It is easy to rig up an algorithm that has an internal state coupled with a function that operates on that internal state and reports on the changes due to that operation - something like the toy sketch below. It wouldn’t be an AI, but that is just a technicality. I don’t see how there is anything inherently ‘conscious’ about the process you are describing.
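To be concrete, here is a minimal Python sketch of the kind of thing I mean. Every name in it is invented purely for illustration; the point is only that “reports on its internal states and on changes to them” is trivially easy to satisfy:

[code]
# Toy sketch (hypothetical, illustrative only): a trivial system with an
# internal state, a function that operates on that state, and a report of
# the change that results. Nobody would call this conscious.

class StateReporter:
    def __init__(self):
        self.state = {"counter": 0, "mood": "neutral"}

    def step(self, stimulus):
        old = dict(self.state)                    # remember prior internal state
        self.state["counter"] += stimulus         # operate on the internal state
        self.state["mood"] = "agitated" if self.state["counter"] > 3 else "neutral"
        # report on the change in internal state in response to that state
        return {k: (old[k], self.state[k]) for k in self.state if old[k] != self.state[k]}

reporter = StateReporter()
for s in (1, 1, 2):
    print(reporter.step(s))   # prints which internal states changed and how
[/code]

A dozen lines, and it meets the definition as stated. That’s my whole point about the definition being too weak.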

Chatbots don’t come even close. They’re like the AI equivalent of cold reading - it’s all headology.

There wouldn’t be. But what the computer does (report states based on inputs algorithmically) and what the human does are very different. Part of this is the nature of e.g. memory and sense perception. You’d have to engineer a pretty faulty computer, one that lies to itself all the time about what it perceives and what it remembers. Then you’d have to make it compete with a whole server room full of other computers for your attention. And that babble (not a sorted-out story, but the actual babble of all those competing narratives) - that’s consciousness.

Like I said, that (plain state reporting) isn’t where consciousness resides, it’s just part of the process. It resides one level above - when you take not one, but multiple such instances and streams of state reporting, and run them through a filter process - have them compete with each other for resources like your memory, your muscles, your senses.

Dennett likens it to the way fame works, but other system analogues could work too, like thinking of an ecosystem. Then you pipe that abstraction layer back into the whole thing as a “state” of its own. And rinse and repeat. And that’s consciousness. That whole hot mess.
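If it helps to make the shape of that loop concrete, here is a very rough toy sketch. Everything in it is invented for illustration and it obviously isn’t conscious; it only shows several reporting streams competing for a scarce resource, with the winner piped back in as just another state:

[code]
import random

# Rough toy sketch (all names invented): several "drafts" each produce a
# candidate report with a salience score, a filter picks a winner for the
# limited resource (call it attention), and the winning report is fed
# back in as another input state on the next pass.

def make_draft(name):
    def draft(feedback):
        salience = random.random() + (0.5 if feedback == name else 0.0)
        return name, salience, f"{name} says: last winner was {feedback}"
    return draft

drafts = [make_draft(n) for n in ("vision", "memory", "hunger")]
feedback = None
for _ in range(5):
    reports = [d(feedback) for d in drafts]      # parallel streams of state reporting
    winner = max(reports, key=lambda r: r[1])    # competition for "attention"
    print(winner[2])
    feedback = winner[0]                         # abstraction piped back as a state
[/code]

The real thing would have vastly more streams and a much messier filter, but that feedback-plus-competition structure is the level I’m pointing at.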

Are you denying such a process is happening (i.e. outright rejecting the Multiple Drafts Model)? Or that it is “consciousness” (i.e. faulting the semantics)? If so, why?

Yes, and I thought I addressed that fact specifically in my previous post.

I wasn’t describing any specific computer; I gave a concrete counter-example to your definition of consciousness by describing a simple and easily programmable algorithm. Go back to the definition you gave (the one I replied to) and tell me specifically how what I described was not in fact an example of a system “that reports on its internal states and also reports on its changes in internal state in response to those internal states”.

I don’t contest any model that ultimately describes objective behavior (such as the Multiple Drafts Model), that behavior being a description of how the human brain chooses to describe the process of self-reflection. What I am faulting is perhaps the semantics – it depends on whether others define consciousness as the experience of self-reflection, rather than simply the reporting of it. This may seem like a perverse subtlety, but I find it to carry considerable weight. If you want to best understand my position, I suggest reading my post #102 of this thread (I don’t know how to link to it. Help?)

  1. The function of any brain – even the most rudimentary – is to predict the future unfolding of the universe and adjust the organism’s behavior to accommodate.
  2. Brains predict the future by gathering sensory information and feeding it into an ongoing internal simulation of the external world.
  3. The nature of each species’ simulation is dictated by its evolutionary needs. We simulate those aspects of the universe that help us to survive and reproduce within our particular environmental niche.
  4. Humans, having evolved as social animals, use a big chunk of their brains to simulate the behavior of other humans. Anticipating the actions of others is very important to our survival.
  5. Recursively including a crude version of our own brain in our social simulation increases its utility. We can simulate long chains of cognitive cause and effect between ourselves and those around us.
  6. Consciousness is the sensation produced by this self-simulation. It’s what we experience in those moments when our brain is sustaining a reduced-complexity model of its own cognitive processes.

“Consciousness” is one of those things that disappear when you look at the individual processes that create consciousness, which in the case of human beings is the arrangement and firing patterns of nerve cells in the human brain.

The ecology analogy given earlier is on point. Is “ecology” an illusion? After all, when you look at the individual organisms and minerals that make up an ecosystem, there’s no ecology there. It’s only the interaction of these things that creates something worth thinking about in its own terms.

So of course individual neurons aren’t conscious. And so you argue that since a neuron isn’t conscious, and the human brain is made up of nothing more than a bunch of neurons wired together in a complicated way, the human brain can’t be conscious.

Except, then you say that consciousness is a meaningless term. But if it’s meaningless, what do you mean when you say humans don’t have it? It’s one thing to say that there’s this term you’ve heard, “consciousness” and you don’t understand what people mean when they use it. But if consciousness really were meaningless you wouldn’t be able to say whether humans have it or not, just like you can’t say if humans are arklyopish or not if you don’t know what the word means.

Look, it seems to me that consciousness means a lot less than people think it means. Simple organisms don’t have self-knowledge. A plant turns toward the sun, not because it senses the sun and decides to move toward the sun, but because light stimulates certain chemicals which cause a difference in water pressure, and so the light side becomes slightly deflated and the dark side becomes slightly inflated, which causes the plant to turn towards the sun.

“Aha!” you say, “That’s just what humans do when they talk about consciousness! It’s all just cells secreting chemicals, without any awareness of what’s really happening!” Except, our brains are more complicated than that. Some animals react the way plants do–they react stereotypically to certain stimuli. A small object that moves across the field of vision of a frog causes the frog’s tongue to shoot out. But the frog isn’t thinking, “A fly! If I just stick out my tongue I’ll have some food!” If you toss a tiny pebble across the frog’s visual field, the tongue will come out. Do that 100 times, and you’ll get the same response.

In other words, the frog is incapable of learning. It has no memory. Even after 99 times of the moving object being a pebble, the frog still reacts the same way. And note that this is how humans also sometimes react. Poke a stick at someone’s eye, and they’ll blink. Do it 100 times without actually hitting them in the eye, and they’ll still blink. This is something that is not under conscious control. It is something that can’t be learned or unlearned, it’s an automatic response that is pretty much exactly equivalent to the frog’s response.

We can agree that this response is not conscious. And now of course, you claim that this is all there is–a collection of automatic responses, and none of them add up to consciousness, therefore there is no consciousness. Except we know that there ARE some responses and behaviors that AREN’T automatic. Lots of animals learn, lots of animals have memory. And they don’t have stereotypical automatic behaviors, they have complex behaviors which change over time.

And so we have a dog which remembers that a particular person gives it treats, and therefore the dog wags its tail when it sees the person, and a different person kicks the dog, and so the dog growls when it sees the person. The dog isn’t just a stimulus-response machine, or if it is, it’s a stimulus-response machine which can change its own stimulus-response lookup table.
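If you want to be really concrete about it, a self-rewriting lookup table is a trivial thing to sketch. This is purely illustrative (the names are made up), but it shows the difference from the frog’s fixed reflex:

[code]
# Toy sketch (illustrative only): a stimulus-response machine that can
# rewrite its own lookup table, unlike the frog's hard-wired reflex.

responses = {"stranger": "wag"}          # default response to any person

def react(person):
    return responses.get(person, responses["stranger"])

def learn(person, outcome):
    # the outcome rewrites the table entry for that particular person
    responses[person] = "wag" if outcome == "treat" else "growl"

learn("alice", "treat")
learn("bob", "kick")
print(react("alice"))   # wag
print(react("bob"))     # growl
print(react("carol"))   # wag (default, not yet learned)
[/code]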

And then we get to the human level, where a human being can see a person who has kicked them in the past, remember the kicking, and form a theory about why that person kicked them. And they can theorize that the kicker had a theory about the mental state of the kickee, and the kicker’s theory about the kickee is what caused the kicker to kick.

In other words, a conscious mind is able to model other minds. It is able to model the other mind so well that it can include that other mind’s model of other minds. Including the other mind’s model of the first mind.

And so the mind can think, “Bob is angry because I didn’t put the milk in the refrigerator, and so he’s going to kick my ass, and he’ll do this because he thinks it’s because I’m challenging his authority. But he doesn’t know that I didn’t put the milk in the refrigerator only because Mary forgot her keys and so I had to call Steve to tell him to let Mary borrow his car, but that meant I couldn’t go to the store. And if I tell Bob this story, Bob will stop being angry with me, and he’ll be angry at Mary instead.”

And the amazing thing about this sort of modeling is that it WORKS. I really can predict whether Bob will be angry or sad or happy, and I can change his mental state by TALKING to him. And I have a mental model of my mind, a mental model of Bob’s mind, a mental model of Bob’s mental model of my mind, and a mental model of my mental model of Bob’s mental model of my mind. I can think about how I feel, about how Bob feels, about how Bob feels about how I feel, about how I feel about how Bob feels about how I feel, and so on.

And this isn’t the illusion of consciousness, this IS consciousness. I don’t know if a computer will ever be able to do this, but if it could, it wouldn’t be a mere chatbot. And if a computer can remember what I say, and predict what I will do, and predict how I will respond to what the computer will do, by creating a model of my mind including a model of the model of the computer in my mind, then it would be perverse to say that the computer isn’t conscious.

And it seems to me that being able

“Ecology” is just a word. The word is not an illusion, nor is the fact that there exists a system of interactions that is described by the word. All depends upon definition. I have no issue with any of the standard definitions of “ecology”. Consciousness is another matter, and I’m not sure how your analogy is relevant, other than the superficial sense that both “ecology” and “consciousness” describe a complex system composed of smaller parts.

If you want my best answer to that question please see my post #102 (also any help on how to link to a specific post would be greatly appreciated).

Nicely put.

The rest of your post is very long, and it seems you got cut off. But I read it and I don’t think I have any major disagreements with you, probably because you avoid any strong insinuation of subjective experience as a foundational aspect of your definition of consciousness, although you do seem to imply it. Again, see my question posed in post #102.

The words ‘sensation’ and ‘experience’ in your last bullet point are where our perspectives clash. I would more or less agree with you if you replaced “sensation produced by” with “manifestation of”, and replaced “experience” with “describe”.

The “#102” text in the upper-right of your post is a hyperlink. Its URL points to your post.

:smack:

OK, thanks. I thought there was some internal tag that could be used rather than just a URL.

I am conscious, so I suppose my answer to the question is ‘no’.

I am accustomed to the study of physics (pursued it for two years in college before switching to computer science) and this comes as a great surprise to me. We have two or three posters on this board who routinely insist that human beings are nothing but a bunch of particles acting under the same physical laws as everything else. However, whenever I or others ask them to justify this position, they are never able to do so. Perhaps you’ll succeed where they failed, but I’m not betting on it.

The fact that humans evolved from lesser organisms is irrelevant to this discussion. The process by which a certain entity came into being tells us nothing about what that entity can and can’t do.

Libet tested volunteers who held their arms in front of them while a variety of electric devices were attached to their bodies and heads, measuring when electric activity in their brains began. He had the volunteers make spontaneous arm movements and measured when the electric activity in their brains began relative to when the volunteers consciously chose to make the movement, and found that, on average, the former began 500 milliseconds earlier than the latter. However:

  1. An average is not a guarantee. Just as the fact that the average family has 2.1 kids doesn’t prove that every family has some kids, the fact that he got an average of 500 milliseconds in this experiment isn’t the same as saying that the brain activity always began before conscious awareness.

  2. The human experience does not, by and large, consist of sitting in a lab, watching a rotating dot on a screen, and making spontaneous arm movements while having fancy electric equipment attached to your head. Therefore this experiment does not tell us anything about consciousness in the human experience.

  3. According to the Wikipedia article that you linked to, Libet himself flatly rejected the argument that you’re making. (Of course I’m well aware that Wikipedia is often wrong.)

Lastly, in response to your statement that people’s awareness of consciousness is erroneous because it’s the same as people’s awareness of the love of Jesus, I believe that the love of Jesus is real, so to me that would be an argument in favor of consciousness, not against.

Well?

I’m not sure if you are looking for a response from me, but I realized I never responded to your earlier post, I guess because, though you don’t seem to agree with my point of view, it sounded like we were in complete agreement on everything you wrote other than the semantics of what the term “consciousness” is generally assumed to encompass. Correct me if I’m wrong. I also found your insight about qualia-blindness very thought-provoking.

I was wondering if others in this thread mean something else by “consciousness” than what I said. (Not that I don’t care what you have to say of course.)