I resent those implications! I know non-humans. Non-humans are friends of mine. I’ve talked about animal consciousness, and I looked at the Turing Test from the computer’s point of view. I never said that computer consciousness was impossible, just that the Turing Test is not the best way to find it.
Do you have some GOOD argument that says it isn’t? Or is it just that it feels right? I thought my last post gave a pretty good argument. If that wasn’t good enough for you, I may not have an argument that will convince you, but I’ll give it another shot.
You have a big, complicated set of data. That’s not conscious. It doesn’t matter that the data set is big and complicated, or that it consists of words that have meaning in the English language, or that it includes rules on how to put words together in response to a given input. It’s just dead, static data. Not conscious. Then you add a simple program to implement the rules: a simple, algorithmic program, like something a calculator might use. That has no awareness. It can locate the right entries and assemble them into a response, but it knows nothing of the contents or the meaning of the data. The program is not conscious. The program plus the data is not conscious. And that’s the look-up table computer. What would be conscious? What would this computer’s experience be? Nothing that I can imagine.
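Just to make the picture concrete, here is a rough Python sketch of the kind of machine I’m describing. The table entries are invented, and a table that could really pass the Turing Test would have to be astronomically larger, but the architecture would be exactly this:

```python
# A purely hypothetical sketch of the look-up table machine. All the
# "knowledge" sits in one big static dict; the program just matches
# inputs to canned outputs.

LOOKUP_TABLE = {
    "hello": "Hello! How are you today?",
    "how are you?": "Fine, thanks. What's on your mind?",
    "is baseball played underwater?": "Yes, baseball is played underwater.",
}

def respond(user_input: str) -> str:
    """The entire 'program': normalize the input and look it up."""
    key = user_input.strip().lower()
    # The program never touches the meaning of either string; it only
    # checks whether the key happens to exist in the table.
    return LOOKUP_TABLE.get(key, "I'm not sure what you mean.")

print(respond("Hello"))  # -> "Hello! How are you today?"
```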
That’s one intuitive test of consciousness: imagining the experience of the object. A conscious being is one that has experiences. If some object doesn’t seem to have an experience, it’s probably not conscious. I can imagine other people’s experiences (although not with much detail or clarity). I can imagine a dolphin’s experience, or maybe a bat’s experience (Nagel wrote an interesting paper on that, “What Is It Like to Be a Bat?”). I can’t imagine a rug’s experience, or a calculator’s experience, or the experience of a simple algorithmic program using a big, complicated look-up table (even if it could pass the Turing Test). I can sort of imagine the experience of a computer that worked a different way, especially if it were in some kind of robot body with perception. (That may be all that a brain is.) Whether I can imagine the experiences of an object depends very much on what exactly it is doing, on what the architecture inside the black box actually is, and not just on the output it gives to various inputs. So of course it’s architecture-dependent.
I can imagine the experience of Kasparov when he intuitively feels that a certain move is a good move, but I can’t imagine the experience of Deep Blue when it calculates that a certain move will lead to a better board position several moves down the road, based on some algorithm for evaluating board positions. Same input and output; very different internal architecture; very different state of consciousness. With the Turing Test, the input and output are far more complicated, so judging by input and output alone works better there than it does in the chess case. But if the architecture mattered in the simpler test, why should it stop mattering in the more complicated one?
Is that argument too intuitive? Here’s another one that’s more definite: the gullibility test. One sign of conscious thought is the ability to question and correct false beliefs. So teach a computer some ridiculous fact the same way that it gets taught all its other facts, either by programming it in directly or in some other way. Tell it all about baseball, but tell it that baseball is played underwater. Teach it a little relativity, but tell it that a bullet fired from a gun can go faster than the speed of light. If it accepts this ridiculous fact no questions asked, and even repeats it in conversation, then it has no idea what it is saying and it is not conscious. If it can correct the fact or at least show some suspicion, then it still may not be conscious, but at least it passed this test. So this gullibility test is necessary but not sufficient for a machine trying to prove its consciousness in a Turing-type way. I believe a look-up table would fail the gullibility test. Therefore I believe that a look-up table is not conscious.
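To see why I think a look-up table fails it, picture the simplest possible fact-learner (everything below is invented for illustration): it files away whatever it is told, and nothing in its architecture ever compares one fact against another.

```python
# A gullible learner: every fact is accepted, no questions asked.
# Nothing in this design ever compares a new fact against the old
# ones, so a contradiction is architecturally invisible to it.

facts = set()

def teach(fact: str) -> str:
    facts.add(fact)
    return "Understood."

print(teach("Baseball is played on a grass field."))  # "Understood."
print(teach("Baseball is played underwater."))        # also "Understood."
# Both "facts" now sit side by side in the store, and the machine
# will happily repeat either one in conversation.
```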
There are two arguments: one fairly intuitive argument that architecture matters, and one specific external test that should show that a look-up table is not conscious. Of course it gets more complicated than this with real computers. As Chronos pointed out, any machine that passed the Turing Test would not be a pure look-up table. A look-up table is just a simple framework, an extreme hypothetical that shows that it’s not just about the output (as I said before, it’s the framework, the type of computer, that’s simple, not the computer itself). If a look-up table could pass the Turing Test, would it be conscious? I say no, hence the TT is not sufficient. The Great Unwashed says yes, architecture does not matter at all. Chronos says mu: the question is not meaningful because it’s physically impossible for a look-up table to pass the TT. I agree that it’s technically impossible, but I’m not sure there’s a deep reason why a machine that passed the TT couldn’t be mostly a giant look-up table. I’m arguing against The Great Unwashed’s view, and I’m trying to establish a method that could help in more difficult cases. Any machine that actually passed the Turing Test would have a more complex design, and it would need a more complex examination. But the principle is a longstanding one: if you want to know what’s going on with a car, don’t just drive it, look under the hood.
Think of what you see in a brain scan, like an fMRI. Many different areas of the brain are active at once, all connected in a neural network, with different patterns of activation for different experiences. A being is aware of something only if the areas of the network relating to that type of thing are active, and all those areas are somehow connected. If the visual part of the brain is not activated, or if the visual region is not connected to the other regions that are necessary for awareness, it’s safe to say you aren’t aware of any sights. Broken connections lead to interesting phenomena, like blindsight: people who say they cannot see something, yet can correctly answer questions about it. That’s interesting psych stuff, and it can apply to computers. If only a couple of areas of a computer’s network are activated at a time, or if the active areas are not connected to one another (maybe separate processors working in different parts of the network), then the computer can’t be having an experience or an awareness of what is going on. That’s what I picture for a look-up table, and some of the same things could be happening in more complicated designs: no complex web of interactions, just a search (or many separate searches) through a long list of options, with a few simple connections to put the results together into sentences. There’s not enough happening together for there to be consciousness. I’m sure that once we know more about brain architecture, and about which relationships between different regions are necessary for human consciousness, it will be possible to give a more sophisticated analysis of whether various computers are conscious. But it makes sense that awareness of what you are doing depends not just on what output you give, but on how you get that output.
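As a toy model of that "active and connected" criterion (the regions and connections below are invented, and a real brain is vastly more complicated), you can picture it as a connectivity check over whichever regions are currently active:

```python
# Toy model: regions as nodes, with edges for their connections. Ask
# whether the currently active regions form one connected web.

from collections import deque

EDGES = {
    "visual": {"association"},
    "association": {"visual", "language", "memory"},
    "language": {"association"},
    "memory": {"association"},
}

def active_regions_connected(active: set) -> bool:
    """Breadth-first search restricted to active regions only."""
    if not active:
        return False
    start = next(iter(active))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in EDGES.get(node, set()) & active:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen == active

# Visual region active but cut off from the rest: no unified awareness
# on this toy criterion (compare the blindsight cases above).
print(active_regions_connected({"visual", "language"}))                 # False
print(active_regions_connected({"visual", "association", "language"})) # True
```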
There’s good reason to believe that the architecture could matter for more complicated designs. For instance, it would probably be possible to bolt some programming onto a computer to get it to pass the gullibility test. But if all it had was some add-on designed solely to pass that test, I don’t think the add-on could make it conscious. It would just be a separate program, looking for contradictions between the new fact and the old data and then giving a simple output like “Underwater? Is that a joke?” or “I thought that nothing could go faster than the speed of light. Is that a joke?” This program could use the computer’s stored data to function even while the rest of the computer was turned off. For the computer to actually be aware of how ridiculous the new fact was, whatever let it pass the gullibility test would have to be integrated into its programming. If there’s no connection between the gullibility-test region and the rest of the computer’s program, then the computer could give the right answer, or ask the right questions, without being aware of the problems with the facts it was told (like the blindsight cases above). So even if a computer did pass the gullibility test, you might be able to tell from the way it passed that it was never thinking about anything and was never conscious. Architecture matters. And if there are other components like this, then only a look inside the black box could show whether they were just separate components, working independently to treat inputs the right way, or parts of an integrated system that connected all the pieces necessary for awareness.
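Here’s a sketch of the bolt-on design I mean. The red-flag list and the canned protests are invented, but the architectural point is real: nothing in this module reads from, or feeds back into, the rest of the system.

```python
# A stand-alone "gullibility module": pattern-match the new fact
# against a short list of red flags and return a scripted objection.
# It never consults, and never updates, the rest of the program.

CANNED_PROTESTS = {
    "underwater": "Underwater? Is that a joke?",
    "faster than the speed of light":
        "I thought that nothing could go faster than the speed of light. "
        "Is that a joke?",
}

def gullibility_module(new_fact: str):
    for red_flag, protest in CANNED_PROTESTS.items():
        if red_flag in new_fact.lower():
            return protest
    return None  # no objection; the main program swallows the fact

print(gullibility_module("Baseball is played underwater."))
# -> "Underwater? Is that a joke?"
```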
I hope, for Christ’s sake, that I have been able to give you some good reasons to believe that the computer’s architecture should have implications for its potential consciousness.