“Consciousness” is one of those things that disappears when you look at the individual processes that create it, which in the case of human beings are the arrangement and firing patterns of nerve cells in the brain.
The ecology analogy given earlier is on point. Is “ecology” an illusion? After all, when you look at the individual organisms and minerals that make up an ecosystem, there’s no ecology there. It’s only the interaction of these things that creates something worth thinking about on its own terms.
So of course individual neurons aren’t conscious. And so you argue that since a neuron isn’t conscious, and the human brain is made up of nothing more than a bunch of neurons wired together in a complicated way, the human brain can’t be conscious.
Except, then you say that consciousness is a meaningless term. But if it’s meaningless, what do you mean when you say humans don’t have it? It’s one thing to say that there’s this term you’ve heard, “consciousness,” and you don’t understand what people mean when they use it. But if consciousness really were meaningless you wouldn’t be able to say whether humans have it or not, just as you can’t say whether humans are arklyopish or not if you don’t know what the word means.
Look, it seems to me that consciousness means a lot less than people think it means. Simple organisms don’t have self-knowledge. A plant turns toward the sun, not because it senses the sun and decides to move toward the sun, but because light stimulates certain chemicals which cause a difference in water pressure, and so the light side becomes slightly deflated and the dark side becomes slightly inflated, which causes the plant to turn towards the sun.
“Aha!” you say, “That’s just what humans do when they talk about consciousness! It’s all just cells secreting chemicals, without any awareness of what’s really happening!” Except, our brains are more complicated than that. Some animals react the way plants do: they react stereotypically to certain stimuli. A small object that moves across the field of vision of a frog causes the frog’s tongue to shoot out. But the frog isn’t thinking, “A fly! If I just stick out my tongue I’ll have some food!” If you toss a tiny pebble across the frog’s visual field, the tongue will come out. Do that 100 times, and you’ll get the same response.
In other words, the frog is incapable of learning. It has no memory. Even after the moving object has turned out to be a pebble 99 times, the frog still reacts the same way. And note that this is how humans also sometimes react. Poke a stick at someone’s eye, and they’ll blink. Do it 100 times without actually hitting them in the eye, and they’ll still blink. This is not under conscious control. It can’t be learned or unlearned; it’s an automatic response that is pretty much exactly equivalent to the frog’s.
We can agree that this response is not conscious. And now of course, you claim that this is all there is: a collection of automatic responses, and none of them add up to consciousness, therefore there is no consciousness. Except we know that there ARE some responses and behaviors that AREN’T automatic. Lots of animals learn, lots of animals have memory. And they don’t have stereotypical automatic behaviors, they have complex behaviors which change over time.
And so we have a dog which remembers that a particular person gives it treats, and so the dog wags its tail when it sees that person. A different person kicks the dog, and so the dog growls when it sees that one. The dog isn’t just a stimulus-response machine, or if it is, it’s a stimulus-response machine which can change its own stimulus-response lookup table.
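The frog/dog contrast can be sketched in a few lines of code. This is purely illustrative (all the class and method names here are my own invention, not anything from the argument above): the frog is a fixed lookup table, the dog is a lookup table that its own experience rewrites.

```python
class Frog:
    """Fixed wiring: the response to a stimulus never changes."""
    TABLE = {"small_moving_object": "shoot_tongue"}

    def react(self, stimulus):
        return self.TABLE.get(stimulus)


class Dog:
    """Same lookup idea, but experience rewrites the table."""
    def __init__(self):
        self.table = {}  # person -> learned response

    def experience(self, person, outcome):
        # Associate a person with an outcome: a treat -> wag, a kick -> growl.
        self.table[person] = "wag_tail" if outcome == "treat" else "growl"

    def react(self, person):
        return self.table.get(person)


frog = Frog()
# Toss the pebble 100 times; the frog's table never changes.
for _ in range(100):
    assert frog.react("small_moving_object") == "shoot_tongue"

dog = Dog()
dog.experience("treat_giver", "treat")
dog.experience("kicker", "kick")
assert dog.react("treat_giver") == "wag_tail"
assert dog.react("kicker") == "growl"
```

The point of the sketch is only the structural difference: the frog’s table is a class constant, the dog’s is mutable state that its history writes into.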
And then we get to the human level, where a human being can see a person who has kicked them in the past, remember the kicking, and form a theory about why that person kicked them. And they can theorize that the kicker had a theory about the mental state of the kickee, and the kicker’s theory about the kickee is what caused the kicker to kick.
In other words, a conscious mind is able to model other minds. It is able to model the other mind so well that it can include that other mind’s model of other minds, including the other mind’s model of the first mind.
And so the mind can think, “Bob is angry because I didn’t put the milk in the refrigerator, and so he’s going to kick my ass, and he’ll do this because he thinks it’s because I’m challenging his authority. But he doesn’t know that I didn’t put the milk in the refrigerator only because Mary forgot her keys and so I had to call Steve to tell him to let Mary borrow his car, but that meant I couldn’t go to the store. And if I tell Bob this story, Bob will stop being angry with me, and he’ll be angry at Mary instead.”
And the amazing thing about this sort of modeling is that it WORKS. I really can predict whether Bob will be angry or sad or happy, and I can change his mental state by TALKING to him. And I have a mental model of my mind, a mental model of Bob’s mind, a mental model of Bob’s mental model of my mind, and a mental model of my mental model of Bob’s mental model of my mind. I can think about how I feel, about how Bob feels, about how Bob feels about how I feel, about how I feel about how Bob feels about how I feel, and so on.
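The nesting described above is naturally recursive, and that’s easy to make concrete. A minimal sketch, purely illustrative (the `MindModel` class, its fields, and the beliefs are my own invented stand-ins for the Bob/milk story, not a claim about how minds actually work):

```python
class MindModel:
    """A model of a mind: some beliefs, plus optionally a model of
    another mind, which may itself contain further nested models."""
    def __init__(self, owner, beliefs=None, model_of_other=None):
        self.owner = owner
        self.beliefs = beliefs or {}
        self.model_of_other = model_of_other

    def depth(self):
        # How many levels of "X's model of Y's model of ..." are nested here.
        return 1 + (self.model_of_other.depth() if self.model_of_other else 0)


# "my model of Bob's model of my mind": Bob thinks I'm challenging him.
me_in_bobs_head = MindModel("me", {"challenging_authority": True})

# "my model of Bob's mind", which contains the model above.
my_model_of_bob = MindModel("Bob", {"angry_about_milk": True},
                            model_of_other=me_in_bobs_head)

# My own mind, which contains my model of Bob.
my_mind = MindModel("me", {"milk_left_out": True},
                    model_of_other=my_model_of_bob)

assert my_mind.depth() == 3

# Talking to Bob amounts to editing the innermost model: once he no longer
# believes I'm challenging his authority, I predict his anger goes away.
me_in_bobs_head.beliefs["challenging_authority"] = False
my_model_of_bob.beliefs["angry_about_milk"] = False
```

Each added level (“how Bob feels about how I feel about how Bob feels…”) is just one more link in that chain, which is why the recursion can in principle go as deep as memory allows.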
And this isn’t the illusion of consciousness, this IS consciousness. I don’t know if a computer will ever be able to do this, but if it could, it wouldn’t be a mere chatbot. And if a computer can remember what I say, and predict what I will do, and predict how I will respond to what the computer will do, by creating a model of my mind that includes a model of my model of the computer, then it would be perverse to say that the computer isn’t conscious.
And it seems to me that being able