Firstly, let me apologise if you thought I was giving you undue attitude when I said “Your position seems dangerously close to that of an unapologetic anthropocentric closed mind”; truly, no offence is intended. But your position seems to be that consciousness is architecture-dependent (you may be surprised to find that I consider it your duty to demonstrate that, and not mine to demonstrate the converse).
To address this point only, I think it is essential to discuss functional equivalence of “diverse” computer architectures.
What is special about these conceptual machines is that Turing demonstrated that they are functionally equivalent to any other digital computer.
Conversely (or do I mean correspondingly?), a sufficiently large look-up table can emulate the behaviour of any given digital computer (notwithstanding the physical limitations discussed above). That is to say, Deep(er) Blue (or any other machine) could have been implemented with a look-up table.
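To make the equivalence concrete, here is a small Python sketch of my own (nothing to do with Turing’s actual construction): one function computes its answer, the other just replays a table built in advance, and from the outside you cannot tell them apart. For a whole computer the table would of course be astronomically large – that is the physical limitation mentioned above – but in principle nothing changes.

```python
# Illustration only: the same function implemented two ways.
# The "architecture" differs completely; the input/output behaviour is identical.

def xor_computed(a: int, b: int) -> int:
    """Work out the XOR of two 4-bit numbers the ordinary way."""
    return a ^ b

# Build a look-up table by tabulating every possible input in advance.
XOR_TABLE = {(a, b): a ^ b for a in range(16) for b in range(16)}

def xor_looked_up(a: int, b: int) -> int:
    """'Compute' the same thing by pure table look-up -- the toilet-roll machine."""
    return XOR_TABLE[(a, b)]

# From the outside the two are indistinguishable:
assert all(xor_computed(a, b) == xor_looked_up(a, b)
           for a in range(16) for b in range(16))
```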
So, you see why looking at the particular architecture is wholly and necessarily irrelevant? These machines are all equivalents; that one has oo-bells and gui-whistles and superfast-supercooled-supercalifragilistic memory while the other looks like a toilet roll has nothing to do with it.
Some people take this to mean that a computer could NEVER be conscious, no matter how complex (because there is an equivalent toilet-roll (or look-up table) implementation that will perfectly emulate the behaviour).
That, I think, is a fair enough POV – machines can’t be conscious, period.
But then we are left having to explain in what way we (human beings) differ from machines. We’re constrained by the same physics; at the neuron level the behaviour of the human brain is pretty well understood, and these neurons behave in a deterministic way; our best guess is that the brain is a mere agglomeration of these fundamental units.
So what do we conclude? That consciousness is an illusion? Because I think that if you take the “fair enough POV” that machines can’t be conscious, then you are forced to accept that neither can we be. (Actually this is an appalling gloss – maybe the human brain behaves in some way that is intrinsically different from a digital computer, say by behaving non-deterministically. BUT, where would this non-determinism (or whatever) “come from”, and why couldn’t we emulate it in hardware?)
PS:
Though I wouldn’t be surprised if you’ve read them both, can I recommend that you read Gödel, Escher, Bach by Douglas R. Hofstadter, and The Mind’s I by Hofstadter and Daniel C. Dennett.
What I’ve been trying to show is that “functionally equivalent” or “equivalent behavior” cannot just be reduced to “equivalent.” For two objects that perform the same function identically, there can still be important differences that go undetected if you just look at their behavior. For something as internal as how they experience performing the function, it makes sense to look at how they perform the function, not just what function they perform. So being conscious depends not just on what you do (function), but on how you do it (architecture).
Some of my examples (like Kasparov & Deep Blue) and some of my arguments (like imagining the object’s experience) have tried to show this. There are many other examples that don’t directly deal with consciousness that suggest it’s not just about performing the same function. Someone who multiplies integers by repeated addition (3802x87 is 3802 added to itself 87 times) does not know multiplication in the same way as someone who can just multiply directly, even though they get the same answer. Someone who spells a word correctly doesn’t really know the word if he spells similar words incorrectly (e.g. spells roll correctly, but spells bowl as “boll,” soul as “soll,” coal as “coll,” whole as “holl,” etc.).
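To put the multiplication example in concrete terms, here is a quick Python sketch (my own illustration, not part of the original argument): both routines return 330,774 for 3802 x 87, but what goes on inside them is completely different.

```python
# Two ways of "knowing" multiplication. Same answer, very different "how".

def multiply_directly(x: int, y: int) -> int:
    """Just multiplies."""
    return x * y

def multiply_by_repeated_addition(x: int, y: int) -> int:
    """Only knows addition, plus the fact that integer
    multiplication is repeated addition."""
    total = 0
    for _ in range(y):      # add x to a running total, y times over
        total += x
    return total

assert multiply_directly(3802, 87) == multiply_by_repeated_addition(3802, 87) == 330774
```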
Sometimes you can catch this difference with another test: What’s 3802.3 x 87.7? How do you spell bowl? Did you know that baseball is played underwater? But sometimes you can’t. Even if you can find the difference with an external test, looking at how it does it can give you ideas for a test (It says some intelligent-sounding things, but it’s just repeating whatever gets programmed in. Would it repeat anything at all that we told it?) or it can help you catch the difference directly (3802 is written 87 times on his sheet of scratch paper!). You should look at the architecture because functional equivalence is not enough.
So, for people, computers, and anything else, consciousness depends not just on what you do, but on how you do it.
ps:
I also recommend Gödel, Escher, Bach to anyone out there. I haven’t read The Mind’s I yet, although I have read The Mathematical Tourist.
Not to butt in, but: ’ What I’ve been trying to show is that “functionally equivalent” or “equivalent behavior” cannot just be reduced to “equivalent.” For two objects that perform the same function identically, there can still be important differences that go undetected if you just look at their behavior. '
It could be argued that all consciousness is, is its function. If we figured out how a computer was conscious, what would we compare it to? We don’t know how we’re conscious. It’s funny that you mention the math problem, as most people I know would do the math through a series of look-up tables (the times tables memorized in school). They are not really “doing the math directly,” whatever that means. So isn’t the human look-up-table method of math less conscious than the computer adding the number 87 times?
Also, what important differences do you imagine turning up? Differences turn up between different people and cultures, so what difference do you imagine would disqualify machine consciousness?
’ Someone who spells a word correctly doesn’t really know the word if he spells similar words incorrectly (e.g. spells roll correctly, but spells bowl as “boll,” soul as “soll,” coal as “coll,” whole as “holl,” etc.). '
That is not true at all; it would mean illiterates don’t know any words. The type of spelling error described here is common for children, and I’m sure a very simple neural net could improve a computer’s spelling.
An analogy might be reproduction. Suppose a machine was capable of (or programmed for) gathering materials and, through an internal workshop, creating an updated version of itself. Would the mere fact that we understand the process mean it is not reproducing?
If we are going to create consciousness in a computer, we are going to have to accept that it won’t be identical to human consciousness. If one thinks human emotion and bad math abilities are required for consciousness, then I would say computer consciousness is impossible (but personally I don’t think that).
Just to clarify my last post - the 2 examples I gave were trying to show that two beings (it could be 2 people) can do the same thing in different ways and have very different internal states. The examples don’t have anything to do with computers or consciousness, directly. They’re trying to show that doing the same function is not enough for equivalence. You can’t find the difference in internal states just from looking at what they do, but you can find it by looking at how they do it. So if one person has a multiplication problem, and he just adds the number repeatedly, he doesn’t really know multiplication. He knows addition, and he knows that multiplication of integers is repeated addition, but that’s it. Similarly, the person who spells roll correctly doesn’t really know how to spell roll if he’s just using some simple rule that says that the sound ole is always spelled O-L-L. If you ask someone how to spell roll and they spell it correctly, they still might not really know the written word.
The cultural argument seems to favor an internal test over the Turing Test. It’s true that people in different cultures behave differently. That makes it difficult to know just from a conversation if someone is a person, since different people converse so differently (although The Great Unwashed would probably just say that the Turing Test is sufficient but not necessary). But internally, I’d imagine that most people work about the same way. Isn’t that what they’re teaching in kindergarten – “we’re all the same on the inside”?

As far as what we could find by looking in a computer – I already discussed that at great length above. In short, a computer can’t be aware of something if it doesn’t have all the information about that something together in one place. So, if we saw that a computer was something like a look-up table that only considered a little bit of information at one place at one time, we couldn’t call it aware of much. So I wouldn’t call it conscious.

It’s true that we know very little about how humans are conscious. Because of that, I’d be very cautious in proclaiming something that we created to be conscious. Once more is known about consciousness (probably at least in part from studying the brain), we should be able to say more about potential computer consciousness (probably at least in part from studying the computer’s “brain”).
What is the function of consciousness? Or did I misread this?
I believe consciousness’s function is (as I said earlier): “… a monitoring/integration and projection making program that floats from foreground to background running.”
It takes new info and decides whether it is important to itself, integrates it with previous knowledge, and makes future predictions using best guesses/assumptions as well as known facts. To some degree these future predictions are also integrated into general knowledge. If a computer could dynamically do this, it would be conscious, imho. I think by definition a look-up table could not be used to predict things not already established.
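Reading that description as a rough program structure, a toy Python sketch might look something like the following. Every name and threshold here is my own invention, and nothing about it is claimed to be conscious – it is only meant to show that the description is concrete enough to sketch.

```python
# Toy sketch of the monitoring/integration/projection loop described above.
# Purely illustrative; all names and thresholds are hypothetical.

class Monitor:
    def __init__(self):
        self.knowledge = {}       # previously integrated facts (fact -> importance)
        self.predictions = {}     # best-guess projections (fact -> guess)

    def observe(self, fact: str, relevance: float) -> None:
        """Take new info and decide whether it matters to itself."""
        if relevance > 0.5:                    # arbitrary importance threshold
            self.knowledge[fact] = relevance   # integrate with previous knowledge
            self.predict(fact)

    def predict(self, fact: str) -> None:
        """Project forward from known facts plus assumptions, then fold
        the projection back into general knowledge, weakly weighted."""
        guess = "expect more of: " + fact
        self.predictions[fact] = guess
        self.knowledge[guess] = 0.1
```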
I understand that your examples were not for displaying consciousness, but I still miss your point. Even if the computer is multiplying differently than we do, is it not still multiplying?
Back to consciousness: if we look under the hood and see it’s using some randomizing subroutine, what is our conclusion? That it can’t be conscious, or that we must have some randomizer in our brains?
btw – that “direct multiplication” looks suspiciously like four simpler multiplications (no doubt done through a look-up table) summed with four more. A computer could multiply like that if we wasted the memory and had “8X table” and “7X table” matrices. It would be inefficient, though. Maybe there is a more efficient way to have computer consciousness as well.
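For what it’s worth, here is the decomposition I read that as, sketched in Python (my own illustration): every digit-by-digit product comes out of a memorised times table, and the partial products are shifted and summed, yet the answer is the same 330,774 as before.

```python
# Long multiplication of 3802 x 87 done entirely through a memorised
# times table plus addition -- the "school method" as a look-up process.

TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}  # the tables we memorised

def long_multiply(x: int, y: int) -> int:
    total = 0
    for i, yd in enumerate(reversed(str(y))):        # each digit of 87: 7, then 8
        partial = 0
        for j, xd in enumerate(reversed(str(x))):    # each digit of 3802: 2, 0, 8, 3
            partial += TIMES_TABLE[(int(xd), int(yd))] * 10 ** j
        total += partial * 10 ** i                   # shift and sum the partial products
    return total

assert long_multiply(3802, 87) == 330774
```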
knock knock, you argue that an entity capable of passing the Turing Test would not necessarily have “experiences”, which (you claim) are a necessary component of consciousness. Experiences are memories of events which happened to an entity, right? Because it’d be impossible to pass the TT without that. If I’m having a conversation and say “wait, what were we talking about?”, then in order to make any sense, the entity on the other side is going to need some sort of memory of what went on previously in the conversation (a major deficiency of most current chatterbots, by the way).
As for the gullibility test, that doesn’t prove anything. If you take a human being who doesn’t know anything about baseball, teach him the rules, and then tell him it’s played underwater, he’ll most likely accept it without question. And I’ve personally met people who, despite a marginal knowledge of relativity, think that a faster-than-light gun would be possible. Is a correct understanding of relativity a necessary prerequisite for consciousness?
One other point, by the way: A lot of what humans do (beyond even things like math) is just a look-up table. When someone asks you “how are you”, how often do you respond with something other than “fine”? It’s a straight stimulus-response: You hear a standard question, you look up the standard response internally, and output it without any processing.
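That stimulus-response picture fits in a couple of lines of Python (a made-up toy, obviously): the canned reply is nothing but a table hit, with no processing in between.

```python
# A pure stimulus-response look-up: hear the standard question,
# fetch the standard answer, output it.
CANNED_REPLIES = {
    "how are you?": "fine",
    "what's up?": "not much",
    "nice weather, huh?": "sure is",
}

def respond(stimulus: str) -> str:
    return CANNED_REPLIES.get(stimulus.lower(), "hmm?")  # shrug when the table has no entry

print(respond("How are you?"))   # -> fine
```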
That just means you’re not sufficiently imaginative. Literature is full of inanimate objects being personified. Just rent a Disney film, like Beauty and the Beast. (Dancing teacups, a talking candlestick).
RUG: Hmmmm. Sure is comfy on this floor. Just inside the door, out of the rain and cold, but where I can do the most good with people stepping on me with their dirty shoes instead of messing up the rest of the house. I love being useful! Oh no, here comes that filthy mutt that chews on me - OW OW OW OW OW! Somebody make him stop!
I could go on, but I think I made my point.
I do this all the time. I like to play Solitaire. One version I play is as follows: Lay down three cards face up, then four face down, for 7 across. Repeat for 4 rows. Then put down the next three rows all face up. That leaves three left-over cards off to the side. The game is played by matching suits, stacking the cards in descending order. You move any card to its appropriate spot on the next higher card (play the 2 of spades on the 3 of spades) if that card is fully exposed at the bottom of the column (for instance, the last row laid down is exposed, the previous rows are partially covered), and you move any cards that are on top of the moved card with it. When you open a blank spot in the top row, you can move kings to their own column. When you run out of moves, you turn over your held cards and play them to open up more options.
When playing this game, I frequently get a situation where more than one move can be made. For example, I open up a top-row slot, and I have two or more kings available to play. Which one should I move? I visually review the following moves that come from each move, as far as I can take them, until that set dead-ends. Since I’m playing for strategy, I “cheat” in that if one of those follow-ons will expose a face-down card, I turn it over where I can view it and see if the play will continue. In this way I can work several moves beyond the current decision. I’ve analyzed multiple king-move options that themselves opened branches of options for at least five moves per branch. It is tricky and confusing and frequently I have to repeat my thought process, but I like the challenge. And at some point I have to evaluate which option to take, even though I can’t proceed farther. That is exactly the process described about Deep Blue evaluating the subsequent board value numerous moves later. Which option takes me the furthest? This sequence moves 7 cards but doesn’t expose any face-down cards. That sequence moves 3 cards but exposes a face-down card. Which will be better in the long run?
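What Irishman describes is essentially a depth-limited look-ahead: from each candidate move, follow the continuations a few steps, score what each line of play achieves, and pick the best starting move – Deep Blue in miniature. Here is a rough Python sketch of the idea; the toy “state”, its moves, and the scoring rule are all my own placeholders, not a real solitaire engine.

```python
# A toy look-ahead over a made-up solitaire-like state: a "move" either
# exposes a face-down card or merely shuffles cards around, and the score
# rewards lines of play that expose the most cards.

from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Toy:
    face_down: int   # face-down cards still hidden
    moved: int       # cards moved so far

    def legal_moves(self) -> List[str]:
        moves = ["shuffle_columns"]          # moves a card, exposes nothing
        if self.face_down:
            moves.append("expose_card")      # moves a card AND flips one over
        return moves

    def apply(self, move: str) -> "Toy":
        if move == "expose_card":
            return Toy(self.face_down - 1, self.moved + 1)
        return Toy(self.face_down, self.moved + 1)

    def score(self) -> int:
        return self.moved - 3 * self.face_down   # exposing cards is worth more than just moving them

def evaluate(state: Toy, depth: int) -> int:
    """Best score reachable within `depth` further moves."""
    if depth == 0:
        return state.score()
    return max(evaluate(state.apply(m), depth - 1) for m in state.legal_moves())

def best_move(state: Toy, depth: int = 5) -> str:
    """Compare the candidate first moves by looking ahead down each branch."""
    return max(state.legal_moves(), key=lambda m: evaluate(state.apply(m), depth - 1))

print(best_move(Toy(face_down=10, moved=0)))   # -> expose_card
```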
The problem is we don’t have any a priori way of declaring what it should look like under the hood.
The root of the problem is we don’t know what consciousness is. We can’t define it well, can’t point to it and say “That’s consciousness”. And until we understand how it works the one place we know about, it’s all guesswork when extending to computers, or animals, or hypothetical alien life forms.
Irishman- I can imagine what a rug would experience if a rug had experiences, but I have no reason to believe that a rug does have experiences. As far as I can tell, a rug just doesn’t have the right equipment for it to be experiencing all the things that happen to it. It’s not aware of any of it. Similarly, for the chess move, I can imagine the experience of a person who is looking ahead several moves, but, given what I know about how computers work, I can’t imagine the experience of a computer looking ahead several moves.
Awareness is the issue here – that’s what I mean by an experience, Chronos. It’s not about memory. A sheet of paper has memory – it will keep track of every mark that gets made on it, every crease or fold. But it’s not aware of these marks – it has no experience of them. I was looking at the entry for “consciousness” in my dictionary (I won’t bother repeating it here – I know you all have dictionaries if you want to look) and the one word that seemed crucial was “awareness.”
I basically agree with this, but I think the term “guesswork” glosses over the fact that there are good guesses (high probability of being correct) & bad guesses, and it’s worth the effort to try to make good guesses. I know I’m conscious (at least as well as I know anything else), but it’s hard to look at something else and say “this is conscious.” I’m pretty sure other people are conscious, since they all look and act so much like me, but it’s hard to pin down which similarities are the important ones. Beyond people, it’s a lot easier to say “that’s not conscious” than it is to say “that is conscious.” I can say that my rug is not conscious with about as much confidence as I say that other people are conscious. Why? It has nothing resembling a brain or a nervous system or anything that could allow it to perceive or reason or think in any way. Nothing detectable happens when I step on it besides it getting a footprint.

With computers it gets a lot more complicated, but there are (imperfect) ways to test for awareness. Obviously there’s the Turing Test. It or something like it is a good test for computers that claim to be able to demonstrate their consciousness through language. It is not the end-all test, though, and it is better at showing something to not be conscious than at showing that something is conscious. Other tests, like the gullibility test, can only tell you “this is not conscious.” (If you think that those examples weren’t ridiculous enough, Chronos, I’m sure you can come up with one yourself. I think underwater baseball is pretty ridiculous. How do they breathe? How does the ball go so far? But the basic idea is that, if you actually are aware of something and are not just repeating what you’re told like a tape recorder, then some “facts” that you are told will strike you as implausible and ridiculous, even though they are not straightforward logical contradictions, no matter who tells them to you. [spoiler]Sure, little kids believe in Santa Claus, but they think about it and realize it’s a myth after a few years[/spoiler].)

Another way to test for consciousness is to look inside the computer and see if it has all the pieces put together in a way that would allow it to be aware of what you suspect it might be aware of. Maybe “it’s all guesswork,” but a lot of things are guesswork, and we can use what we know about consciousness to design tests and make the best guess possible so that we can be fairly confident of our answer.
I agree with Irishman that our lack of understanding of consciousness is at the root of the problem. CarnalK has a plausible suggestion:
I’m not sure if this definition works, though. It seems like Deep Blue should be conscious by this definition.
CarnalK, you question the value of looking under the hood, but in your original post you wrote
You seem to be implying that we may be able to get information on a being’s consciousness, not just by looking at what it does, but by looking at what it does in relation to what it was programmed to do. Looking at its original programming seems a lot like looking at how it works. Looking at how it works refers to both software and hardware, and I would consider looking at its programming to be a part of that.
One more problem with the Turing Test (I’m not sure if anyone left in this debate besides Chronos believes that the Turing Test can detect consciousness, but mostly that is what I’ve been arguing against) – how could a computer be aware of anything if all of its input and output has been words? How would any of the words have any meaning to the computer? Obviously words can have meaning in relation to each other, but it seems like the system of language cannot get off the ground unless there’s some other form of information – like sight, touch, etc. Can anyone tell me what a computer that only had access to words could actually be aware of (and not just a story like the rug’s)?
I also came up with another test – the two-conversation test. If a computer can have a conversation, it seems like it should be able to have 2 conversations at once. People can. So have two purportedly separate conversations going on with the computer at once, but make them related in a way that should catch the computer’s attention. At the extreme, what you tell the computer in one conversation could be word-for-word the same as what it tells you in the other conversation. If the computer is actually aware of both conversations, it should realize what’s happening and comment on it.
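As a sketch of how that test might be run in practice: feed the machine’s own replies from one conversation verbatim into the other and watch for any sign that it notices. The little Python harness below is entirely hypothetical – the stand-in bot and the crude “did it notice?” check are just placeholders so the idea is concrete.

```python
# Hypothetical harness for the two-conversation test. The bot here is a
# trivial stand-in (a canned chatterbot that never notices anything); a real
# test would plug in the system under examination, and a human examiner
# would judge the replies rather than this crude string check.

class CannedBot:
    """Keeps separate per-conversation state and never compares across them."""
    def __init__(self):
        self.histories = {"A": [], "B": []}

    def reply(self, conversation: str, message: str) -> str:
        self.histories[conversation].append(message)
        return "Interesting, tell me more about '" + message + "'."

def two_conversation_test(bot) -> bool:
    reply_a = bot.reply("A", "I've been thinking about consciousness lately.")
    for _ in range(5):
        reply_b = bot.reply("B", reply_a)   # echo A's output into B, word for word
        reply_a = bot.reply("A", reply_b)   # and B's output back into A
        if "same conversation" in (reply_a + reply_b).lower():
            return True                     # it remarked on the coincidence
    return False                            # it never noticed

print(two_conversation_test(CannedBot()))   # -> False
```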
Not everything that humans do is done consciously. As I said before, “like a human” does not mean “conscious,” which is one reason why the Turing Test is not the best way to determine if a computer is conscious. When a person says “fine” that may be an automatic response. But trying to figure out “What’s her name? Amanda? Annabelle? Something like that. Some southern name, I think it starts with an A…” is done consciously. A computer might be programmed to say that, but you would have to look at the data that was actually accessed while it said that to see if it was really experiencing tip-of-the-tongue syndrome. It’s not a knockdown test, but it would at least suggest that the computer was conscious. So there’s something else to look for – both by looking at what the computer says and at what it’s “thinking” while it says it – tip of the tongue phenomenon (or lethologica). A related phenomenon is when people think (or feel) “there’s something I’m forgetting” without realizing what it is that they forget. These are the kinds of errors that people make that are signs of conscious thought, imho. It may be possible to be conscious without them, or to do something like them without being conscious, but they’re at least indicators that are diagnostic in that they’re much more likely for conscious computers than for non-conscious computers. And the more ways you can think of to test the computer’s consciousness, the better the guess you can make of whether it is conscious.
If the spelling & math examples I gave still aren’t making sense to you, don’t even think of computers. Imagine a little kid Jackie who gets the answers in the way I talked about, and Lee who gets the answers the “normal” way. They both get the correct answer on their math & spelling tests, but I would say that Lee knows multiplication & spelling a lot better than Jackie does. So functionally equivalent does not equal equivalent. More tests would probably reveal their differences, but you can also see them just from looking at how they got the answer.
You miss the point. The Turing Test (like the Duck Test or Occam’s Razor) does not “detect consciousness”. It says that if something behaves conscious, you may as well treat it as conscious until it stops behaving that way. As I said at the start, that is the only viable basis for living and thinking.
As William of Occam said, why multiply entities if this is not necessary? Most of the debate in this thread could be dispensed with if you follow this pragmatic and logical thought structure.
I am still here and “in” this discussion. I am a little frustrated that you keep wanting to rip the top of the box to look inside to see if the machine could be conscious. I just do not see how that could help you – we have no adequate description of consciousness, but you believe that some architectures “couldn’t” support it per se, whatever it is. I think this is prejudging the issue, and/or ignoring machine equivalence discussed above.
I’d like to express my amusement at your repeated suggestion that a machine that fell for your “underwater baseball” lie would fail some putative “gullibility” test. Come on KK, you must have met a thousand people with a thousand wacky and unsupportable beliefs that would collapse if they would only just open their minds and ask the right questions of themselves, but they never do. No, they heard it once, their fourth-grade teacher told them, they saw it on TV, it’s just common sense.
Their mind is set like concrete – you’d fail them in a TT and then want to chop off the top of their skulls to debug them.
ALL your suggestions about how to use the TT “aggressively” – parallel conversations, lying, remembering, generally messing with its “head” – are legitimate components of a TT, and a suitably skilled TTester would use them all; think of the “empathy” tests in Blade Runner.
You observed that humans have a subconscious and sensory input. No one has suggested that these might not be necessary in order to create a conscious machine; in fact I think all who would seriously propose a thinking machine would expect to give it a rich interface to its environment[1], and that a “subconscious” would be an artifact of its conscious mind.
[1] Though didn’t Helen Keller, whose interface lacked sight and hearing, attain a considerable intellect? So we needn’t suppose it would have to be extraordinarily rich (and yes, yes, I know she was born sighted etc., I just mention it to help us think).