The ‘illusion’ of consciousness? Who is being fooled by this illusion?
Can it? How can it understand the pen without understanding the pen?
Look, a Chinese Room can’t be conscious because it can’t learn, it can’t observe, it can’t remember. Our brains aren’t just complicated lookup tables, because we experience novel inputs all day every day. There aren’t any genes that encode into our minds how to respond to an iPod, because there weren’t any iPods on the African savanna where we evolved.
So the Chinese Room thought experiment imagines a system that seems to be conscious because it could pass a Turing test, yet it seems to me that the Chinese Room could never actually pass a Turing test, because it is impossible to construct an actual real room with books and shelves that would contain lookup tables for every possible Chinese conversation that could ever exist. Such a room would be larger than the solar system. This is like trying to solve chess by creating a table of every possible set of moves, and providing a hard-coded response to each one.
Yes, advanced chess programs do include such solution trees for endgames. But they can’t have a solution tree for every possible chess game, because such a tree would be unfeasibly ginormous. Likewise, for the Chinese Room to pass a Turing test would require something that’s actually impossible to build, let alone design.
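The chess comparison can be made concrete with a back-of-the-envelope calculation using Shannon’s classic figures (roughly 35 legal moves per position, games of about 80 plies); the specific numbers below are that standard estimate, not anything from the thread:

```python
import math

# Shannon's classic estimate of the chess game-tree size:
# ~35 legal moves per position, over games of ~80 plies.
branching_factor = 35
plies = 80
game_tree_size = branching_factor ** plies   # ~10^123 possible games

# For comparison, there are roughly 10^80 atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(f"Game-tree size: ~10^{math.log10(game_tree_size):.0f} entries")
print(f"Atoms in universe: ~10^80")
```

Even at one table entry per atom, the universe is short by dozens of orders of magnitude, and open-ended conversation branches far faster than chess.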
And that’s why the Chinese Room fails as a critique of artificial consciousness. Because we know that a real conscious computer (whatever we mean by conscious) can fit in a pretty small case, weigh only a few kilograms, and produce sensible novel output over reasonable time periods.
You’d have to radically redesign the Chinese Room before it could be capable of passing the Turing Test.
Apparently you are making some qualitative judgment about what qualifies as “pattern” recognition, etc, of sufficient complexity, or “human like”. What I responded to:
All I pointed out is that these criteria can be met. You are taking issue with, for example, the difficulty of “human-style pattern recognition”. But any kind of pattern recognition will do for the above point, i.e. the remembrance of the pattern, the performance of operations on the pattern, etc…
I don’t disagree biologically, but they very well could be lookup tables in principle. “Complicated” lookup tables can surely include the ability to add to the table as novel inputs are introduced (in fact this has been done).
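The “table that adds to itself” idea is just memoization. A minimal sketch (the response rule here is a stand-in; any rule would do for the point being made):

```python
# A lookup table that grows as novel inputs arrive: an unseen input is
# handled once by some rule, then stored so future lookups are pure table hits.
table = {}

def respond(stimulus):
    if stimulus not in table:             # novel input: not yet in the table
        table[stimulus] = stimulus[::-1]  # stand-in for whatever rule generates a response
    return table[stimulus]

respond("iPod")       # novel: computed and added to the table
respond("iPod")       # familiar: served straight from the table
print(len(table))     # the table has grown by exactly one entry
```

So “lookup table” need not mean “fixed lookup table”; novel inputs like the iPod just become new rows.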
:smack: (can this emoticon be interpreted as a high-five)?
I’m playing a bit of devil’s advocate – I’m not really sure what to believe because the empirical evidence is so scant, but it seems possible that our everyday intuitions about what is going on in our head are just wrong. I think free will is already dead, but the consciousness thing is a bit of a problem because we’re in the straitjacket, so to speak. I can’t imagine what a scientific explanation of why we seem to have subjective experiences would look like, or what sort of evidence you would collect to substantiate it, or how falsifiable it would be. Like I said earlier, I think no matter how far we get we’re just gonna have to be a little bit unsatisfied. But then again we never really found an answer for “the vital force”; it was only a concern based on not enough info, which stopped being relevant. Maybe the same thing is going on here.
Whatever system is getting input from the other systems which catalogs and reports internal brain states to itself. Red may be an internal label for color of a certain wavelength, but the subjective part of the label ain’t happening.
It can manipulate the pen in its head and spit out a result for whatever is required. It could pretend to twist it in 3D space like we do too. It could tell you all about its non-existent subjective experiences and you would have no reason to doubt it. I mean, you’re telling me right now it’s conscious already. So it’s fooled you. If you think that’s impossible, well, fine. Neither side of the p-zombie argument can really point to anything.
I keep coming back to the problem, for me, of why manipulating a pen requires a “what it’s like” or a “subjective aboutness” or whatever other muddled term we can use about what it feels like to understand the pen. It seems like an unnecessary extra step or assumption. So maybe we don’t understand the pen, either.
Most philosophical thought experiments are impossible. It’s an intuition pump that’s supposed to be an analogy for digital computers. And the Chinese room can learn and observe. The man observes, or it can be handed observations from outside, and it has paper for memory.
I probably shouldn’t have mentioned Searle because IIRC he thinks our inner world is real. But he agrees that machines can be conscious (because like you say, here we are). His room is an argument against the idea that a simulation of consciousness (specifically on a symbol-manipulating computer without the semantics, which AFAIK is the only sort of computer we can make right now) is consciousness. And that damned problem of other minds comes up here again because you can’t prove any of this stuff, which is why everyone comes up with these weird thought experiments like Twin Earth or Mary’s Room.
It seems to me fairly obvious that this is not true. I’ll just appeal to general knowledge of physics as you did in the original post. No amount of computing can determine an electron’s position and momentum. An electron’s position and momentum can have an effect on whether an atom or molecule undergoes a reaction. An atom or molecule undergoing a reaction can have a cascading effect on other atoms and molecules. Such a cascade of chemical reactions can determine whether a neuron fires or not. Whether one neuron fires can determine whether multiple neurons fire. So the events that take place in the brain can depend on something which can’t be computed.
Or consider the question from this angle. Suppose that scientists have a subject in the lab. They can keep him as long as they want and use the EEG, MRI, or any other piece of equipment on his brain that they want. Would they be able to predict what he’s going to do even just in the ten minutes after he leaves the lab?
To me and, I’d wager, to most people, describing a human as a sort of computer seems a lot like describing a cat as a sort of refrigerator. Consider the following conversation.
Al: “I believe that cats are a type of refrigerator.”
Bob: “Obviously you don’t believe that, because you treat your cat very differently from your refrigerator. You pet your cat, feed it, take it to the vet when it gets sick, cherish it, and let it boss you around, whereas your refrigerator you use to store food and toss out without emotion when you get a new one.”
Al: “Fuck it, I’ll maintain the illusion.”
At this point Bob might wonder what’s the point of Al believing that a cat is a refrigerator if Al is then going to maintain the illusion that a cat is a cat and behave accordingly.
Um- no colors exist if you get small enough…
So your consciousness exists in certain circumstances only?
Can it be said to exist, then?
OK, you are correct that quantum indeterminacy is a fundamental barrier to being able to predict the microscopic behavior of a brain to arbitrary precision. I was a little too hasty in saying that given a powerful enough computer we can simulate the brain and make specific predictions arbitrarily far ahead in the future. A more correct but unfortunately more subtle statement is going to have to be the following, which I was hoping to avoid due to it being technical:
- Quantum indeterminacy is the only barrier to a completely deterministic description of nature.
- The indeterminacy in quantum mechanics is separable: there is “wave function evolution” (deterministic) and “wave function collapse” (random).
- A full and proper simulation of the human brain at the microscopic level would be deterministic, but with the addition of a sprinkling of random samplings within deterministic probabilistic envelopes.
- While the inclusion of randomness precludes us from being sure about predicting any particular quantum history, any simulation would nonetheless be a ‘correct’ simulation of a human ‘consciousness’. If we were to magically make two copies of Bob and put them in mirror universes, quantum indeterminacy may lead one copy to grab a Coke rather than a Pepsi. The same would hold true for the simulation.
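The scheme above can be sketched as a toy program (this is an illustration of the structure being claimed, not real quantum mechanics): a deterministic update rule, punctuated by samples drawn from a fixed probability envelope, where two copies with independent random streams share the same laws but can diverge in their particular histories.

```python
import random

def simulate(seed, steps=10):
    # Deterministic "evolution" punctuated by random "collapses": the rules
    # and the probability envelope are fixed (here, a 50/50 coke-vs-pepsi
    # choice), but each sample drawn within that envelope is random.
    rng = random.Random(seed)
    return [rng.choice(["coke", "pepsi"]) for _ in range(steps)]

# Two "copies of Bob" with independent random streams: identical laws,
# identical envelopes, but the particular histories need not match.
bob_a = simulate(seed=1)
bob_b = simulate(seed=2)
print(bob_a == bob_b)  # the histories are free to differ
```

Note that rerunning `simulate(seed=1)` reproduces the same history exactly, which is the sense in which the simulation is still ‘correct’ even though no particular history is predictable in advance.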
I guess this is what I can’t understand. People who believe that p-zombies are logically possible confuse me. Because we human beings have an inner life and the zombies don’t? Except, what? How could that be possible, if the zombies act just like us? If a zombie is capable of thinking about a pen such that it simulates a human response perfectly, then it has the same sorts of mental states that you and I do. I mean, that I do, I’m not so sure about you.
Consciousness, or the “feeling of consciousness” or “subjective aboutness” or whatever you want to call it, is just what happens inside the brain when it thinks about things. So any brain that can think about things is conscious. Or rather, it is utterly incoherent to speak of human beings who are conscious, and p-zombies who aren’t conscious but act exactly as if they are. But **iamnotbatman** would agree with me, but would assert that human beings are in fact p-zombies and aren’t “really” conscious, even though we think they are.
Whatever, but I can’t see how a system that isn’t conscious could be tricked into thinking it’s conscious–how would it know? You couldn’t be tricked into thinking you’re conscious unless you actually were conscious. You couldn’t have the subjective experience of feeling conscious unless you are an entity that can have subjective experiences.
No, the man can’t observe. Or rather, all he can do is look at the Chinese text he receives, and then search through the vast library until he finds that text on a particular page of a particular book on a particular shelf in a particular building in a particular city on a particular planet that’s part of the gigantic Chinese “room”. And if he gets handed observations from outside–if there’s a small team of experts adding content to the “room”–then how is that different from a chatbot where the programmer occasionally just responds for the chatbot? As for using paper for memory, sure. He could simulate any Turing machine with dots on a page, just like this XKCD. Except, he’d have to have some sort of instructions on exactly how he’s supposed to perform the memory and calculations, right? If he’s allowed to write whatever he likes on paper and hand it back, then it’s just a guy in a box teaching himself Chinese, rather than a consciousness simulator.
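The “dots on a page plus a fixed instruction book” point can be made concrete with a tiny Turing machine. Here the dict of rules plays the role of the man’s instructions and the tape plays the role of his paper; the particular machine (a binary incrementer) is just a simple illustrative choice:

```python
# A tiny Turing machine: the tape is the "dots on paper", and the fixed
# rule table is the instruction book the man follows mechanically.
# This particular machine increments a binary number (illustrative only).
rules = {
    # (state, symbol) -> (symbol to write, head move, next state)
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),   # hit the blank past the end: turn around
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry continues
    ("carry", "0"): ("1",  0, "done"),    # 0 + carry = 1, finished
    ("carry", "_"): ("1",  0, "done"),    # overflow into a new leading digit
}

def run(tape_str):
    tape = dict(enumerate("_" + tape_str + "_"))  # pad with blank cells
    pos, state = 1, "right"
    while state != "done":
        write, move, state = rules[(state, tape.get(pos, "_"))]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

print(run("1011"))  # 11 in binary -> 1100 (12)
```

The man executing this needs no understanding of binary arithmetic, only the rule table; which is exactly why, without such a table, he’s just a guy in a box teaching himself Chinese.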
The whole point of the Chinese Room is that it’s a chatbot implemented with paper and books and a guy who looks things up in books, rather than on a computer. And the point of it is to show that since a computer can be implemented by a guy writing and erasing dots on paper, that computers can’t be conscious or actually think about things, because how could paper and dots be conscious? Except that’s an argument from incredulity. How can cleverly-arranged amino acids and a bunch of water be conscious?
I don’t claim that we have any idea how to actually build a computer that I’d call conscious, because if the last 50 years of AI research have taught us only one thing, it’s that we’re going about it all wrong. However, a human brain is constructed of ordinary hydrogen, carbon, oxygen, nitrogen, and some other ordinary elements. It is assembled by ordinary (hah!) biological processes, no different than the processes which produce muscle tissue in animals or leaves in plants or mats of bacteria in prokaryotes.
And so the solution proposed here is to cut the Gordian Knot by proclaiming that yes, computers and Chinese Rooms can’t be conscious, and neither are human brains! We’re just zombies! Well, it seems to me that’s just a semantic argument, to argue that what everyday people call “consciousness” isn’t “real consciousness”. Because even though you proclaim that there’s no such thing as “really-real-for-real consciousness”, only fake consciousness, now instead of understanding real consciousness you’ve got to understand fake consciousness. And it seems a lot easier to me to name what you call “fake consciousness”, “consciousness”. That is, whatever it is that goes on in our heads is what I mean when I say “consciousness”, and there’s no reason to name that other stuff, the stuff that you assert that we don’t have, “real consciousness”.
You are using the word “think” in a way that seems to be presupposing consciousness. Replace “think” in the above with “say”, and go from there.
So you’re saying that people merely have a more complex set of input/output processes, but there’s no clear-cut separation between us and simple computers. I sort of agree, but I disagree that that means we’re unconscious.
Consciousness I think only really applies to us because we have external awareness. When we are unconscious, external stimuli usually elicit no response. With computers, there is no external – it’s just a series of 0s and 1s. To use the word conscious in the context of a self-contained computer system is nonsensical.
C3P0 and R2D2 on the other hand…
It sounds like you are defining ‘computer’ in a narrow sense here. By computer I don’t just mean DELL. A computer can have external inputs. Hell, even PCs have external input via keyboard, mouse, cam, mic, dsl…
Well, I don’t know if other people really think, or if they just SAY they think.
But I, for one, KNOW that I think, because, here I am. If I just said that I think, and didn’t really think, then why do I think I think? How could I think I think, unless I could think? If I didn’t really think, I might be programmed to SAY I think, and might be able to trick YOU into believing I think when I really don’t, but how could I trick MYSELF?
Of course, my brain is tricked all the time about the input it receives and the processing it does–contrast the raw photons hitting my retina with the mental picture my brain constructs.
But if there’s no “me” to fool, how could I be fooled into thinking there’s a “me”? What’s being fooled if it isn’t me?
It’s one thing to claim that my notion about what “me” is, is all wrong, and X, Y and Z don’t happen but rather A, B, and C. But to name X, Y and Z “me”, and then declare that since there’s no X, Y and Z, that there’s no me, seems like an attempt to win by semantics. Like saying that because the sun doesn’t move there’s no such thing as a sunrise, it’s only an illusion. On the one hand, it is an illusion, but on the other hand there’s a period where portions of the earth become illuminated by the sun, and that’s what we mean when we say “sunrise”.
Self-trickery is not remotely unprecedented.
Bingo. And that’s just the tip of the iceberg. You’ve got your basic cognitive illusions, your cognitive distortions, your optical illusions, your psychoses and hallucinations, various neurological illusions like Alice in Wonderland syndrome or all the wonderful things described by Oliver Sacks like the Capgras delusion, you’ve got brain farts like déjà vu, psychedelic experience (“Experiences include total loss of visual connection with reality, the sense of not being human or having a body. The feeling of being in many places at the same time. The loss of reality is so extreme that it becomes ineffable. People have been reported seeing themselves in entirely different settings than their original setting. Many people experience the feeling of being in a simulated reality”), synesthesia, depersonalization… this list goes on and on.
You are being semantically imprecise. A complex machine can make reference to itself (“me”). There is nothing deeply meaningful about that. It’s just a word that references an object which happens to be the machine itself. It can apply terms to describe “me”, like the word “conscious”, defining the term tautologically. It is “fooled” in the sense that it continues to use a meaningless word, though internally, its system of logic is self-consistent.
I don’t think I’m playing a semantics game like that. Semantics and definitions are important, though, and I invite you to help define what I am arguing against.
FWIW, having waded through the whole thread, ISTM there have been at least a dozen good refutations of the OP. That said, although I’m a volitionist on evidential grounds, I have to say I don’t see the supposed contradiction between consciousness and determinism. If the mind is determined, so too is consciousness. Take a simple example, extending on one mentioned earlier in the thread. Show me an apple. I perceive it (am conscious of it) as red. I don’t have a choice in the matter. Light hits the apple, it absorbs certain wavelengths and reflects others, the reflected ones impact cones in my eyes and a processing center in my brain interprets that as red (semantically defined).
Whether the same analysis applies to all domains of human action can be discussed. But the OP seems to me a lot like Voltaire’s disputing the aquatic fossils found in the Alps because they supported the theory of a Great Flood (at a point when we didn’t yet understand plate tectonics). Consciousness exists or it doesn’t. Whether it has implications for free will is irrelevant.
We can’t “say” anything without “thinking” it first. “Saying” is not separated from “thinking” in human beings. All speech acts start as thoughts. Except possibly the “Aaaargh!!” of pain.
I never meant to imply that consciousness has implications for free will. But free will does have implications for consciousness. If the world is deterministic, then will is determined, not free. If free will existed, it would therefore contradict determinism, which is the axiom I use when arguing against “consciousness” as defined beyond a description of the actions or statements of a complex organism.
It is also relevant that I have never heard a coherent definition of “free will” that did not take for granted “consciousness” as a precept in one way or another. In fact in your thread I seemed to ascertain that:
I am satisfied that free will is not well-motivated.
Now I have responded multiple times to this statement that “I perceive an apple (am conscious of it) as red”, as though it is somehow objective evidence of anything beyond which a machine (perhaps very complex, but certainly deterministic) could testify to. As a scientist observing the behavior of such a machine, I would conclude: “the machine has directed its sensors at an apple, it has identified EM radiation in the 600nm range.” Is the machine “aware” of the apple? Yes, if you mean it has recorded the information about it in its data banks. Is it “conscious” of the apple? Again, if you mean nothing more than that it has reported that it is aware of the apple.
You are playing a semantics game here (“say” => “think” => “conscious”, by making false equivalences), and I was trying to help you see the error by suggesting that you re-write your sentence. Let me do that for you:
A computer can “say” all sorts of things. I am supposing that human beings are complex machines, which can also “say” a heckuva lot. Do machines “think”? Do humans “think”? Sure, if by “think” you mean that complicated algorithms are working out what we will say. So OK, we can’t “say” anything without “thinking”, by the above definitions. Now what is your argument? That “thinking” (algorithms that determine speech output) constitutes consciousness?
I can’t say what you meant, but I observe what you said. See, e.g., Posts #6, #40, #51, #56, #58, #78, #88, #93, #152, and there probably are others I missed. Also, please notice that, in the case of the red apple, I was actually arguing for your deterministic side of the debate, though disputing the no-consciousness side.