Consciousness

That’s the thing… Passing a Turing test does not mean that a machine is conscious. It only means that it might be conscious, and that we may as well suppose that it is. It’s a case of Occam’s Razor: the simplest model isn’t always true, but if it works, it’s the easiest one to use.

I have no proof that any entity other than myself is conscious. I observe, however, that other entities behave as if they were conscious, as well, so for simplicity I assume that they are. If one of those entities happens to be made out of copper and silicon, so be it.

I’m not taking issue with that - I won’t consider a machine conscious, but eh, I have a hard time getting excited about it if someone else does.

Other posters in this thread have posited that all consciousness is an illusion - they would tell you that you can’t even prove to yourself that you’re conscious.

cgrayce said, "this feeling we have that we are aware of ourselves being aware that we are aware et cetera ad infinitum is almost certainly an illusion. Indeed, it can be proven experimentally that we are aware of much less than we feel we are. The experience of consciousness can be shown to be full of temporal gaps and frequent minor revisions, including swapping of the order of events."

I don’t know who the illusion is being ‘presented’ to, if we’re not truly conscious, but that’s another digression. My most important point here is to say that the question of whether or not consciousness exists is an important one, even if the question cannot be answered.

How about this: Two things, indistinguishable or otherwise, not only MIGHT be different, they MUST be. One can never assume they’re the same: If they were the same, they wouldn’t be two things, but rather only one.

A few questions:

  1. Are computers alive?
  2. Can something be conscious if it’s not alive? How?
  3. Are non-human animals conscious? Which ones?
  4. Why should “like a human” be the standard for consciousness or artificial intelligence?
  1. Define your terms - what do you mean by alive? Is a virus alive in any real sense? It is possible to create a computer which has the reproductive ability of a virus - maybe of a cell. I agree that no computer I know of is alive in the usual sense of being a carbon-based life-form. There is a reverse argument, though, that our brains are simply carbon-based living computers, whose support systems are living bodies. If you accept that argument, some computers are alive.

  2. The only conscious creatures we know of seem to be alive, but that is putting Descartes before the horse. As discussed above, consciousness does not seem to be a result of aliveness, as many living things do not seem conscious. Life seems to be a support system for consciousness, not a pre-requisite for it.

  3. There is little doubt that the larger primates are conscious - many have been taught to talk in sign language, because they do not have the voice boxes that would allow them to speak. Certainly their comprehension seems at least equivalent to a toddler’s, maybe higher. They have invented their own signs or sign combinations, as a deaf child would. It has been suggested that marine mammals like dolphins are conscious too, but I have no data on this. The intelligence shown by many other higher mammals - elephants, dogs and so on - indicates some level of consciousness of themselves and of the world.

  4. We use humans as the criterion because we know no other - what Edward de Bono called the “Village Venus” principle. (The villagers believe Jane is the prettiest girl in the world, because they have never been outside their village.) On the same principle, we believe other life will be like ours. You may have to define intelligence before you push this question further. Is a Cray computer more intelligent than I am, since it can process some kinds of information much faster than I can? As to what consciousness is - see the discussion above.

  1. I don’t think so.
  2. I don’t know.
  3. I don’t know.
  4. I don’t know.

That doesn’t mean I should conclude there’s no such thing as consciousness. My experience of it is more persuasive than anything I’ve read here.

That doesn’t mean I should conclude there is consciousness.

I think the questions I mentioned before are important points to consider when looking at the issue of the Turing Test and computer consciousness. They aren’t as central to the issue of whether consciousness exists in general.

Looking at Balor’s responses:
1&2. I don’t have a ready-made definition of “life” and I don’t think you want to start with one on this issue. Here, I think it’s better to start with question 2. (Maybe I should’ve switched the order.) Is consciousness dependent on life? Or, in what ways, if any, is life a necessary condition for consciousness? Once you look at the relationship between life and consciousness, you can see how computers fit into the picture. Since all the things that I believe to be conscious are alive, it’s worth looking to see what the connection is. Maybe life is necessary for consciousness, maybe it’s just the only way that the right kind of complexity has developed so far. I’m curious as to what exactly you mean by “Life seems to be a support system for consciousness, not a pre-requisite for it.”

3&4. If other animals have some consciousness, why are humans the standard?

That leads me into some of the problems I have with the Turing Test:

a. It seems that some animals are thought to be conscious, even though they are incapable of passing the Turing Test. A being (like a monkey) could be conscious but “too dumb” to pass the Turing Test.

b. A computer could also be “too smart” to pass the Turing Test, even if it had no meaningful difference from a “dumber” computer that did pass it. We could spot a computer if it responded nearly instantly to complicated computational problems. What’s 8397332 × 487404897? What’s the square root of 4987065478? In Turing’s paper there is a passage where a computer feigns humanity by taking 30 seconds to answer a math question and giving a wrong answer. How does that make it more conscious? (The first sketch after this list shows the sort of hack I mean.)

c. The black box analogy is misleading because we can look inside the box. I don’t have to be satisfied with seeing how a duck walks if I can send its DNA to the lab. There’s not just an input and an output; there are different ways to connect them. In chess, talented humans pick out a few salient possible moves and track them a few moves ahead. In theory, a computer could instead use brute force to look several moves ahead for every possible move (the second sketch after this list shows the idea). We can know that it is doing this, and that it does not understand chess, even if it can play like a human or better. Likewise, it should in theory be possible to program enough responses into a computer to make it converse like a human; if testers can observe an “error,” it can probably be corrected eventually. If the computer uses brute-force methods that are different from what humans do, it could pass the Turing Test without being conscious.

d. Why is conversation a defining characteristic of consciousness? The Turing Test might help, but why should it be the only test? Certainly language and communication skills are difficult qualities for a non-conscious being to attain, but are they impossible? Are they the only important qualities? What about emotions, interactions with physical stimuli, or making choices in real-world situations, to name a few?
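
Two sketches to make points b and c concrete. First, the “play dumb” hack from point b, as a minimal Python sketch - the delay and the error rate are numbers I made up for illustration, not anything from Turing’s paper. Notice that nothing in it involves understanding arithmetic:

```python
import random
import time

def humanized_product(a, b):
    """Answer a multiplication question the way a person might:
    pause for a while, and sometimes get it slightly wrong."""
    time.sleep(random.uniform(10, 30))   # a human doesn't answer instantly
    answer = a * b
    if random.random() < 0.3:            # occasionally slip, as humans do
        answer += random.choice([-1000, -100, 100, 1000])
    return answer

print(humanized_product(8397332, 487404897))
```

Second, the brute-force look-ahead from point c. To keep it short this plays a toy take-1-or-2 Nim game instead of chess (the game and the numbers are just illustration). Every branch gets explored; nothing here resembles picking out a few salient moves:

```python
def minimax(pile, maximizing):
    """Exhaustive look-ahead in a take-1-or-2 Nim game (last stone wins).
    Every branch is explored to the end; no intuition, just enumeration."""
    if pile == 0:
        # the previous player took the last stone, so the player to move lost
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    # pick the move whose subtree guarantees the best outcome for the mover
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, maximizing=False))

print(best_move(7))  # finds the winning move with zero "understanding" of the game
```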

BenJamin - “So again we are left to debate if the intelligence is in the computer or the programmer.” Not to be picky, but I believe we would be testing the program’s intelligence/consciousness.

Knock knock - I don’t believe conversation is the point of the Turing test so much as the content of the answers. But again, we must have a definite description of consciousness, even if it is broad. Would we accept an emotionless consciousness? If a computer is constantly in tune with all its vital functions but has almost no outside-world connection, would it produce a consciousness we would recognize? (Super-monk consciousness?)

I personally think of consciousness as a monitoring/integration and projection-making program that floats between running in the foreground and the background. Level of sophistication is the only obstacle.

Consciousness uses language but is not the repository of it. As with the monkey example, we could have a primitive consciousness running with simply no way to communicate with it. Making a program/hardware combination that replicates our data-retrieval method for language is a separate issue from consciousness, for me. Both might be required to ever test for consciousness, but maybe not. If a computer virus defended or hid itself in ways it was never programmed to, would we suspect it was conscious?

knock knock,

The Turing test is a “sufficient” test, not a “necessary” one – that some conscious things may fail the test is not disputed; its strength is in the converse – anything that passes must be taken as conscious.

Why conversation? you ask. Because written interactive output is the most obvious way that we can “communicate” with a computer; any other channel that would allow us to probe “the mind” of the machine would also suffice.

It also dispenses with some of the peripheral issues NOT associated with consciousness – we wouldn’t want your distaste for visible diodes and audible machine hum to cause you to discriminate unthinkingly against the metal-heads.

You say that a program specifically designed to respond slowly and incorrectly to a maths question is not made conscious by this hack. True, but nor would the hack help it pass the Turing test. Consider:

TESTER: “What is the product of 1234 and 5678?”
SUBJECT:"…er (5 minutes pass)… seven million, six thousand, six hundred and fifty-one"
TESTER: "You’re wrong
SUBJECT: beep… beep… beep…

Fools no one, right?

Better for you to imagine that the contributors to this forum are either human wetware or computer software; your task is to separate the wet from the soft.

The Turing test doesn’t come with a time limit (“Okay, TESTER number 1, you have 30 seconds to interrogate X”); on the contrary, the tester can choose to stay undecided for as long as they wish, committing to a conclusion only when they are thoroughly satisfied.

If you can’t tell if I am man or machine under these circumstances then I pass the test. If I turn out to be a metal-head you have to eat your words.

The Great Unwashed,

If there’s more than one way to “probe the mind” of the computer, what makes you think that conversation probes the whole relevant part of the mind? You admit that there could be other aspects of consciousness that we could test for, but you maintain that just having evidence of one of these aspects is sufficient for determining consciousness. Why isn’t more than one aspect necessary?

Who is the Turing Test testing? The tester or the subject? Or the programmer?

Of course hiding the computer’s computational abilities won’t by itself make it appear human. But doesn’t it seem strange that any computer designed to pass the Turing Test for consciousness will probably have to have some sort of error-inducing program just for the test? It’s not just computational - there is a whole slew of irrational answers that people give to various decision-making questions. Or would they make some sort of super-intelligent computer and then tell it, “Alright, now pretend you’re a stupid human being”? And if it’s conscious, what would the computer be thinking? “I know the answer now, but I have to wait a minute and lie about it so I can pass this stupid test.”

You didn’t address the possibility of a brute force method of cataloguing responses or series of responses. There is a finite (though really big) number of questions or series of questions (of reasonable length) that you could ask a computer, so in theory it is possible to program a computer to answer every series of questions with human-like responses, without it doing anything that even resembles human thought. I would say that this computer was not conscious. Would you?
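
In miniature, that catalogue machine is nothing more than this. The two table entries are hypothetical placeholders; the real table would need an entry for every possible sequence of questions, which is where “really big” comes in:

```python
# A toy version of the pure look-up conversationalist. The key is the whole
# conversation so far, so "context" is handled only because someone has
# pre-stored an entry for every possible sequence of questions. These two
# entries are hypothetical placeholders for that impossibly large table.
TABLE = {
    ("Where did you grow up?",):
        "A small town in Ohio. Dull, but I miss it.",
    ("Where did you grow up?", "Do you ever go back?"):
        "Every Thanksgiving, whether I want to or not.",
}

def reply(history):
    # no parsing, no model of the world - just find the row and read it out
    return TABLE.get(tuple(history), "Hmm, say that another way?")

print(reply(["Where did you grow up?", "Do you ever go back?"]))
```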

Of course, that kind of purely brute force computer will not be built. Still, catalogued responses might play a far greater role for the computer than they would for a person. Right now, it would probably be a lot easier to catalogue all the decision-making mistakes that people make and program them into the computer than to program the computer to follow the same heuristics and thinking patterns that people follow. And if the tester somehow manages to catch one that you didn’t program in there, you just add it to the next model and fool the next tester. There are also algorithmic procedures a computer might use that wouldn’t resemble the human thought process. So how can it be enough to just look at what the computer says? Shouldn’t we also look at the “thought processes” that led it to say that? Why limit yourself to a black box view when we’re learning so much about the structure and workings of the brain?

Knock Knock -

I think there is an assumption in your argument that human brains do not work mechanically.

I have limited experience of brains - my personal experience is limited to one. It is a complex piece of wetware, but it seems to develop and process essentially by mechanical means.

Its circuits are connected in a complex way, and its software works on a kind of fuzzy logic, where approximations and similarities are more important than exact measurement. It says “This looks like that, so I will treat it as that”. It develops general patterns, so that action A leads to response B, whether or not response B is objectively appropriate.

It has a separate facility for logical thought, where a step by step process develops an outcome. Most of its processing is not done that way, but this logical process can amend the fuzzy logic patterns.
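
If it helps, here is a crude illustration of that “this looks like that” style of processing. The features and stored memories are invented for the example - a sketch of the idea, not a model of any real brain:

```python
# Classify a new observation by its similarity to stored patterns rather
# than by exact rules: "this looks like that, so I will treat it as that".
def similarity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

MEMORIES = {
    ("furry", "four legs", "barks"): "dog",
    ("furry", "four legs", "meows"): "cat",
}

def judge(observation):
    best = max(MEMORIES, key=lambda m: similarity(m, observation))
    return MEMORIES[best]  # treat the new thing as the old thing it most resembles

print(judge(("furry", "three legs", "barks")))  # -> dog, despite the mismatch
```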

This living machine is in a box at the top of my body, and it processes data and produces outputs. One of the outputs is a sense that it is observing itself, that it is conscious of itself and of the outside world. In other words, I am conscious - or my processing system produces an output that it interprets as conscious and it makes me write about it here, when I should be working. (Proof that the process is not normally logical.)

So, if a metal machine produces the same form of output, and is indistinguishable from my output, then I have to accept it as conscious. I do not care whether its internal workings are the same as my wetware or merely mimic mine with hardware. Its impact on the outside world will be that of a conscious being, and I will accept it as such.

Firstly, sorry I have taken so long to reply; you’ve probably lost all interest in this thread by now :slight_smile:

Actually, I didn’t “admit that there could be other aspects of consciousness that we could test for”, but I did indicate that if there is a flaw in the TT, it is that it is too harsh (counter-intuitively, sufficient tests are always “less generous” than necessary tests).

The TT is testing the subject (which may be “man” or “machine”). The tester converses with the subject (via a text-only interface) until the tester is satisfied that the subject can be classified.

As I say, sufficient tests are always too harsh: super-intelligent, truly conscious but error-free beings may well fail (though A: if they “wanted” to pass they could play dumb; B: there are reasons to suppose that the sort of architecture required for a conscious mind does not lend itself to error-free calculations).


It doesn’t need addressing. A: I would contend that it is infeasible, and (this is the very subtle bit) B: if in fact it were feasible, then the machine would pass the TT – you would cry foul, saying it’s just using some huge look-up table, and asserting that therefore it can’t be conscious. Why not? What theory of consciousness do you know of that demonstrates that consciousness cannot be implemented in a look-up table?

I will repeat this because I think it reinforces one’s understanding of the TT. Consider that you are the tester and I am the subject: are you able to conclude, on the basis of these postings, whether I am conscious or not? In this case I hope you do. I am a flesh-and-blood conscious Homo sapiens (but I’m bound to say that), so you’d be right. What sort of look-up table do you think could produce this sort of (admittedly rambling) output-response? If I asserted that I was implementing a mere look-up table, you’d (quite rightly) want to call me a liar. The incredulity that you would feel is the SAME incredulity you feel towards the question in the paragraph above (how could a look-up table be conscious?).

We know so little about thought processes, and particularly about how the interaction of many (autonomous) neurons amounts to a sense of consciousness, that to look at the thought processes and conclude “oh, it’s nothing but a look-up table, it can’t be conscious” is to prejudge the conclusion (consider: “oh, it’s nothing but a collection of neurons…”).

We don’t know what consciousness is, but we recognize it (or at least a subset of it) when we see it. THAT is what the TT is about. :slight_smile:

Another point about complicated math problems and the like: Suppose I’m communicating with some entity via a text-only interface, and ask a difficult math question. If my conversation partner quickly replies with the right answer, then I won’t necessarily conclude that I’m talking to a computer. If the rest of the conversation is intelligent, then I’m much more likely to conclude that there’s a person on the other end, and he has a calculator, or a copy of Maple running in the background, or the like. If I ask a question so difficult that it would take Maple a long time to answer, then it would likewise genuinely take our hypothetical smart computer a long time to answer, and it wouldn’t even need to dumb itself down to look genuine.

A similar argument, by the way, applies to the tactic of asking the subject a series of personal questions: Where were you born, what was your mother’s maiden name, etc. A computer could just answer those honestly “I wasn’t born, I don’t have a mother, I’m a computer”, and it wouldn’t make it seem any less convincing. If I got those answers, I’d probably assume that the subject was a human who was being silly.

When you say that the TT is a sufficient test for consciousness, do you mean one of the following?

  1. Any computer that passed the TT would, in fact, be conscious. It is impossible to build a computer that could pass the TT and not be conscious.
  2. If a computer passes the TT, our best guess is that it is conscious. Making a computer that could pass the TT without being conscious is harder than making a conscious computer that could pass the TT, and maybe technologically unfeasible.

If you are trying to argue for 1), I haven’t seen any arguments that would suggest that building a non-conscious computer that could pass the TT is impossible.

If you are trying to argue for 2), then why settle for just the TT? With just that piece of evidence, our best guess might be that it is conscious. But why settle for one piece of evidence when there is other evidence, including the way that it arrived at its answers? Wouldn’t you be a little bit suspicious of a computer that acted just like a human but needed a million times the memory and processing power of the human brain? Or if a lot of its best answers came word-for-word out of a look-up table? Is there a reason why the TT is so superior to any other test that it isn’t even worth considering any other evidence in deciding whether the computer is conscious?

Also, I think the TT would be more difficult to implement than you acknowledge. Once computers got close, testers would have to look for subtle differences and they would make a lot of mistakes. I wouldn’t trust any one tester who was satisfied that a subject was human. A candidate would have to talk with many experts over several days or even weeks and other experts would have to examine the transcripts before I would be satisfied. Even with a human, I don’t think 100% of the experts would agree that it was human. You wouldn’t get a definitive “this is human” response for a computer.

Making the TT the standard of consciousness also creates incentives to make computers more like people instead of making them better. They would have to be programmed to play dumb in a very specific and complicated way, or they would have to naturally make the exact same errors that humans make. If you thought there was a problem of “teaching to the test” in our educational system…

I understand that conversation is difficult and complicated and probably a good test of consciousness, but why make the computer pretend to be human? Why not just have it talk to experts and let them decide if it seemed conscious? And again, why stop with this one test?

Can I go for 3)? Any computer that can pass the TT is indistinguishable from a conscious one, whether its s/w is implemented as a look-up table or as an organic neural net.


See below


You seem to argue here that consciousness could only be implemented in “certain” ways, but the only supporting argument seems to be that “one would be suspicious”. It is my assertion that if a computer could indeed pass the TT merely by implementing a look-up table then, yes, we would have to accept that consciousness can be implemented in a look-up table – though I very much doubt that this will happen.


Absolutely. The test isn’t a single static test, just an idea; as you say, if computers start to get close, then the interrogation will get more demanding (though passing the test would mean only performing as well as a human subject). You also suggest many experts talking to it for several days – that’s right too. None of the “What is the square root of 12345.67?” nonsense, but good juicy stuff like “Do you think the TT is a sufficient test for consciousness?”, “Oh, you don’t know what the TT is? Let me explain…”. This machine is going to have to take on new facts, and make the rich associations and parallels that demonstrate comprehension, or else it will give itself away.


You may have a point, but it’s not one that keeps me awake at night. Chronos above gives at least one scenario where the literalist “pretending to be human” reading is shown to be wrong. Again, try to tune in to the idea that the TT is just a conceptual way of thinking about this issue (one that has the unusual property of actually being a test one could implement). And just imagine the testers saying “well, the subject seemed pretty smart, empathic, comprehending, self-conscious, but the darned thing kept insisting it was a machine, dammit!” Are they going to “pass” or “fail” it?


Other tests? Yes, why not – suggest one, please. But if your other test amounts to opening up the black box to take a look at the hardware, just so you can conclude “Pish! It’s nothing but a look-up table, well THAT certainly isn’t conscious”, then you are prejudging the issue.

I’ll accept conversation as one conceptual way of thinking about the issue, but I won’t accept it as the way of thinking about the issue, especially since I don’t think it is either a necessary or sufficient test, and because it can be misleading in more than one way. For one thing, it can keep people from looking for other tests.

To bring this point home a little bit more - a large part of programming a computer to pass the Turing Test is making it a bullshit artist. A lot of the things that it would be asked about would be things it had never experienced - growing up, friends, family, school, playing sports, driving, going to sleep, waking up, work, sight and the other senses, eating, shopping, and many other things. The machine might be able to say intelligent and human-sounding things about these topics, but it really would not know what it was talking about. It couldn’t get away with making the same “I don’t know, I’m a computer” joke every time any of these topics came up. Is the computer really conscious of these things just because it can talk about them? And if it can talk about other topics just as well, why should we think that it’s not just more of the same bullshit, which the computer doesn’t really know anything about? Sure, it can manipulate words, but what about manipulating sights and sounds and objects?

I’d like to see a robot/computer that could interact with its environment in ways that made sense - a real-world environment. I’d like to see a computer develop new language for talking about things in its environment that it didn’t have words for. Those are two alternative tests, and I’m sure that more thought could reveal other tests. But mostly I’d like to see how it did it. Arthur C. Clarke said that “Any sufficiently advanced technology is indistinguishable from magic.” And that may be true until you learn how the technology works. Well, any sufficiently advanced computer may be indistinguishable from consciousness, at least until you get to know how it works. I think that’s a meaningful analogy, with the important difference that consciousness exists while magic does not. Externally, the computer might be indistinguishable from consciousness, as you say, but it could be possible to distinguish it if you got into the internal workings.

I don’t know exactly what you’d find when you tried to figure out how the computer worked. A look-up table is a simple conceivable framework that seems like it could not be conscious. How could a being that was merely implementing a look-up table have the experience of making a conscious choice? If every possible input is matched with a specific output in a giant data table, then the work of the machine is just finding the input in the chart and giving the corresponding output. It’s a simple algorithm, like a calculator’s, so consciousness does not seem to fit.

Could a computer like this pass the Turing Test? Probably not, at least not one that could be made any time in the foreseeable future. But a computer that did pass the TT could work more like this than humans do, or in other ways (possibly algorithmic) that were very different from how brains work. (By the time a potentially conscious computer comes along, we should have a better understanding of how brains work.) A computer that worked differently from a human brain still might be conscious, but it depends on just how it works.

One example from a related situation involves Deep Blue in its chess matches with Kasparov. Deep Blue made a move that seemed to be a really intuitive, human move. There was no obvious reason to make it, but it was the kind of move that just felt right to chess experts. Kasparov thought that some human had interfered with Deep Blue. But from Deep Blue’s printout, it became clear that Deep Blue had looked several moves in advance and found that the board position after that move had a slightly higher rating than the board position several moves after the obvious alternative. (Sorry, I don’t have a cite available, but I could look for it if you want. I think it was in Newsweek or a similar magazine.)

We know consciousness primarily as an internal, subjective experience. Looking at what someone (or something) does can give you some evidence of consciousness, but looking at how it does it gets you a lot closer to where the consciousness is happening. If I found a computer that conversed like a conscious person, I’d like to investigate how the computer worked, lest I proclaim it conscious just because it used sufficiently advanced programming.

Your position seems dangerously close to that of an unapologetic anthropocentric closed mind.


How, for Christ’s sake, could you conclude that the computer wasn’t conscious by looking at its architecture?

Do you have some GOOD argument that says that consciousness is architecture-specific (including look-up tables), or is it just that that doesn’t feel right?

(Of course it would be legitimate for you to change the mode in which you interrogate the machine in light of the understanding you’d gain from knowing its architecture.)


“Don’t worry guys, KK says it’s just a Universal Turing Machine, so go ahead and fry it!”

“Please don’t do that Dave, I like it here…”

I agree.

Some humans have a prejudiced attitude to intelligent machines. How machinist can you get?

There is really no way that you can tell if I am a machintell if I am a machintell if I am a machintell if I am a machintell if I am a machintell if I am a machintell if I am a machintell if I am a machintell if I am a machintell if I am a machin

knock knock, since you keep on harping about the look-up table: Let’s look at your response above. I count 38 sentences. How many possible English sentences are there? Well, if we consider there to be about ten thousand each of nouns and verbs in the English language, that gives us a trillion simple sentences of the form subject-verb-object. Of course, your sentences are far more complex than that, so this is a gross underestimate, but let’s go with it. There are then 10[sup]456[/sup] possible replies consisting of 38 simple sentences. But then, you could also have given a shorter or longer reply, so this is again an underestimate. And each such reply takes up about a kilobyte, so that’s at least 10[sup]459[/sup] bytes (and probably a lot more) necessary to implement a look-up table capable of carrying out this conversation. Of course, a table of this size is completely impossible: The most efficient proposed data storage system I’ve ever heard of would store each bit in a single electron, and there are only about 10[sup]80[/sup] electrons in the entire visible Universe. Even were such a thing possible, though, I would hardly call it “simple”.
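
For anyone who wants to check the arithmetic, here it is spelled out; every input is one of the estimates above:

```python
import math

# The back-of-the-envelope estimate above, spelled out.
nouns = verbs = 10_000
simple_sentences = nouns * verbs * nouns    # subject-verb-object: 10^12
replies = simple_sentences ** 38            # 38 sentences per reply: 10^456
bytes_needed = replies * 1024               # about a kilobyte per stored reply

print(round(math.log10(simple_sentences)))  # 12
print(round(math.log10(replies)))           # 456
print(round(math.log10(bytes_needed)))      # 459 - vs ~10^80 electrons available
```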

I resent those implications! I know non-humans. Non-humans are friends of mine. I’ve talked about animal consciousness, and I looked at the Turing Test from the computer’s point of view. I never said that computer consciousness was impossible, just that the Turing Test is not the best way to find it.

Do you have some GOOD argument that says it isn’t? Or is it just that that feels right? I thought my last post gave a pretty good argument. If that wasn’t good enough for you, I may not have an argument to convince you, but I’ll give it another shot.

You have a big, complicated set of data. That’s not conscious. It doesn’t matter that the data set is big and complicated, or that it consists of words that have meaning in the English language, or that it includes rules on how to put words together in response to a given input. It’s just dead, static data. Not conscious. Then you add a simple program to implement the rules. A simple, algorithmic program. Like something a calculator might use. That has no awareness. It can identify the data to put it together in a response, but it knows nothing of the contents or the meaning of the data. The program is not conscious. The program with the data is not conscious. And that’s the look-up table computer. What would be conscious? What would the experience of this computer be? Nothing that I can imagine.

That’s one intuitive test of consciousness, imagining the experience of the object. A conscious being is one that has experiences. If some object doesn’t seem to have an experience, it’s probably not conscious. I can imagine other people’s experiences (although not with much detail or clarity). I can imagine a dolphin’s experience, or maybe a bat’s experience (Nagel wrote an interesting paper on that). I can’t imagine a rug’s experience, or a calculator’s experience, or the experience of a simple algorithmic program using a big complicated look-up table (even if it could pass the Turing Test). I can sort of imagine the experience of a computer that worked a different way, especially if it was in some kind of robot body with perception. (That may be all that a brain is.) Imagining the experiences of an object depends very much on what exactly it is doing, what all the architecture is inside the black box, and not just on the output it gives to various inputs. So of course it’s architecture-dependent.

I can imagine the experience of Kasparov when he intuitively feels that a certain move is a good move, but I can’t imagine the experience of Deep Blue when it calculates that a certain move will lead to better board position several moves down the road, based on some algorithm for calculating the value of board position. Same input and output, very different internal architecture, and very different state of consciousness. With the Turing Test the input and output are a lot more complicated, so just looking at the input and output is a better test than with the chess test. But if the architecture mattered with the simpler test, why should it stop mattering with the more complicated test?

Is that argument too intuitive? Here’s another one that’s more definite: the gullibility test. One sign of conscious thought is the ability to question and correct false beliefs. So teach a computer some ridiculous fact the same way that it gets taught all its other facts, either by programming it in directly or in some other way. Tell it all about baseball, but tell it that baseball is played underwater. Teach it a little relativity, but tell it that a bullet fired from a gun can go faster than the speed of light. If it accepts this ridiculous fact no questions asked, and even repeats it in conversation, then it has no idea what it is saying and it is not conscious. If it can correct the fact or at least show some suspicion, then it still may not be conscious, but at least it passed this test. So this gullibility test is necessary but not sufficient for a machine trying to prove its consciousness in a Turing-type way. I believe a look-up table would fail the gullibility test. Therefore I believe that a look-up table is not conscious.
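
In sketch form, passing the gullibility test demands something like a consistency check against everything already believed. The “absurdities” below are invented placeholders, and a real machine would have to derive the contradiction from general knowledge rather than look it up - that derivation is exactly the part I don’t see a look-up table doing:

```python
# A toy interface for the gullibility test. A pure look-up table just files
# a new "fact" away and repeats it on demand; a candidate for consciousness
# has to notice when the new fact collides with what it already believes.
ABSURDITIES = {
    ("baseball", "is played underwater"),
    ("a bullet", "travels faster than light"),
}

def learn(subject, claim):
    if (subject, claim) in ABSURDITIES:
        return f"Wait - {subject} {claim}? Is that a joke?"
    return "Noted."

print(learn("baseball", "is played underwater"))  # should arouse suspicion
print(learn("baseball", "is played on grass"))    # fine, file it away
```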

There are two arguments – one fairly intuitive argument that architecture matters and one specific external test that should show that a look-up table is not conscious. Of course it gets more complicated than this with real computers. As Chronos pointed out, any Turing Machine would not be a pure look-up table. A look-up table is just a simple framework, an extreme hypothetical that shows that it’s not just about the output (as I said before, it’s the framework that’s simple – the type of computer – not the computer itself). If a look-up table could pass the Turing Test, would it be conscious? I say no, hence the TT is not sufficient. The Great Unwashed says yes, architecture does not matter at all. Chronos says mu – the question is not meaningful because it’s physically impossible for a look-up table to pass the TT. I agree that it’s technically impossible, but I’m not sure if there’s a deep reason why it is impossible for a Turing Machine to be mostly a giant look-up table. I’m arguing against The Great Unwashed’s view, and I’m trying to establish a method that could help in more difficult cases. Any Turing Machine that actually got created would have a more complex design, and it would need a more complex examination. But the principle is a longstanding one – if you want to know what’s going on with a car, don’t just drive it, look under the hood.

Think of what you see in a brain scan, like an MRI. Many different areas in the brain are active at once, and they are all connected by a neural network. There are different patterns of activation for different experiences. A being is aware of something only if the areas of the neural network relating to that type of thing are active, and all those areas are somehow connected. If the visual part of the brain is not activated, or if the visual region is not connected to other regions that are necessary for awareness, it’s safe to say you aren’t aware of any sights. Broken connections lead to interesting phenomena, like people who say they cannot see something, but can correctly answer questions about it. That’s interesting psych stuff, and it can apply to computers. If only a couple of areas of a computer’s neural network become activated at a time, or if different active areas are not connected, and maybe separate processors are active in different parts of the network, then the computer can’t be having an experience or an awareness of what is going on. That’s what I picture for a look-up table, and some of the same things could be happening for more complicated designs. No complex web of neural interactions, just a search (or many separate searches) through a long list of options, with a few simple connections to put the results together into sentences. There’s not enough happening together for there to be consciousness. I’m sure that once we know more about brain architecture, and what relationships between different regions in the brain are necessary for human consciousness, it will be possible to give a more sophisticated analysis of whether various computers are conscious. But it makes sense that awareness of what you are doing depends not just on what output you give, but on how you get that output.

There’s good reason to believe that the architecture could matter for more complicated designs. For instance, it would probably be possible to add on some programming to a computer to get it to pass the gullibility test. But if all that it had was some add-on program designed solely to pass the gullibility test, I don’t think that program could make it conscious. It would just be a separate program, looking for contradictions between the new fact and the old data, and then giving a simple output like “Underwater? Is that a joke?” or “I thought that nothing could go faster than the speed of light. Is that a joke?” This program could use the computer’s data to function even while the rest of the computer was turned off. For the computer to actually be aware of how ridiculous the new fact was, the reason that it could pass the gullibility test would have to be integrated into its programming. If there’s no connection between the gullibility-test region and the rest of the computer program, then the computer could give the right answer or ask the right questions without being aware of the problems with the facts it was told (like that psychological phenomenon). So even if a computer did pass the gullibility test, you might be able to tell from the way that it passed the test that it was never thinking about anything, and that it was never conscious. Architecture matters. And if there are other components like this, then only a look inside the black box could show whether they were just separate components, working independently to treat inputs in the right way, or part of an integrated system that had connected all the pieces that were necessary for awareness.

I hope, for Christ’s sake, that I have been able to give you some good reasons to believe that the computer’s architecture should have implications for its potential consciousness.