All this does is reveal how vague concepts like “consciousness” or “sentience” or “thinking” or “subjective feeling” can be. But they don’t have to be.
I hereby reject the notion of philosophical zombies. That is, people who act just like regular people, who walk and talk and act as if they had subjective internal states, but don’t actually have those internal states. And this is because if you can’t tell the difference between two systems, then it makes no sense to assert that there is a difference between the two systems.
It is surely possible that there really is a difference you just can’t detect yet. Say, in a pitch-black room someone hands you two swatches of cloth and asks you to tell which is the red one and which is the green one. You can’t tell, but you do know that if you turned on the goddam lights you would be able to tell. But in the philosophical zombie argument every attempt to turn on the goddam lights is ruled impossible, because the zombies are defined to behave exactly like regular people. Except in this one particular way, which is impossible to define or detect or explain.
And I claim balderdash. What exactly does it mean to be conscious? It means that you not only have thoughts, but you know you have thoughts. You don’t just react, you model your own internal states as you react. You get mad when someone steps on your toe, but you also know that you’ll get mad if someone steps on your toe. You’re able to understand yourself, to some extent at least. When you talk to a human being, and threaten to step on their toe, they can explain back to you what might happen if you try to step on their toe.
So how is the philosophical zombie different? It has to be able to act AS IF it knows its own mental states. And how exactly is this different from actually knowing its own mental states? It just is, in some ineffable, unknowable sense? Again, balderdash. If there’s no difference, there’s no difference.
And this is why bullshit concepts like “The Chinese Room” are bullshit. A Chinese Room that could fool you into thinking you’re talking to a person would have to have some way of remembering the conversation. It can’t just be a bunch of arbitrary tables with some random factors thrown in. It can’t just be quadrillions of transcripts of every possible human conversation, because such a thing couldn’t be contained in a room; it would take a solar system full of filing cabinets. Every day people are saying and writing sentences that no human on Earth has ever said or written before. You can’t just start writing down every possible sentence that could be input into your goddam Chinese Room along with every possible output sentence, because the sun would grow cold before you finished.
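If you want a sense of the scale involved, here’s a quick back-of-envelope sketch in Python. The vocabulary size and maximum sentence length are my own assumptions, picked to be conservative; the only point is that the count explodes:

```python
# Back-of-envelope sketch (assumed numbers, not anything from Searle):
# how many distinct sentences would a lookup-table Chinese Room need to cover?
VOCAB_SIZE = 10_000          # assumed working vocabulary
MAX_SENTENCE_LENGTH = 20     # assumed cap on sentence length, in words

# Count every possible word sequence from 1 to 20 words long.
possible_sentences = sum(VOCAB_SIZE ** n for n in range(1, MAX_SENTENCE_LENGTH + 1))

print(f"Roughly 10^{len(str(possible_sentences)) - 1} possible sentences")  # ~10^80

# For comparison, the observable universe holds on the order of 10^80 atoms,
# so even one index card per sentence is already physically absurd.
```

And that’s single sentences. A table of whole conversations, where each reply depends on everything said so far, blows up far faster. Which is why the Room needs memory, not just tables.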
And suppose you really could do it. All you’ve done is proven you can emulate one computational system on another, vastly slower computational system. Congratulations, Alan Turing! You were right all along.
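In other words, a working Chinese Room is just one computer running another, very slowly. Here’s a minimal sketch of what “running a system from a table” looks like; the little parity-checking machine is a toy of my own, not anything from Searle’s setup:

```python
# A minimal sketch of one computational system emulating another:
# an interpreter that runs any machine described purely as a
# (state, symbol) -> next-state lookup table.

def run(table, state, inputs):
    """Step through the inputs, looking up each (state, symbol) pair in the table."""
    for symbol in inputs:
        state = table[(state, symbol)]
    return state

# Toy machine: tracks whether it has seen an even or odd number of 1s.
parity_table = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

print(run(parity_table, "even", "10110"))  # -> "odd" (three 1s seen)
```

The person in the Room plays the part of `run`: a slow, manual interpreter executing someone else’s table. Build a table rich enough to hold up its end of a conversation and all you’ve demonstrated is that the table plus its interpreter is a computer.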
Again, my whole complaint about this line of reasoning is that the questions aren’t well thought out. Could a computer have a soul? First tell me how you could tell the difference between a human with a soul and a human with no soul, and then I can begin to answer your question. But if you can’t talk about souls in such a way that it would be possible to tell the difference, then my contention is that “soul” is a word that doesn’t actually mean anything.