Consciousness

Hello all! Long time listener, first time caller.

From this thread.

I haven’t read the argument Cecil is quoting, so I’ll have to presume that he’s explaining it well enough to argue against it.

I’m not sure this would convince someone the box understood Chinese, in the normal sense people think of ‘understand.’ First, in order to understand Chinese squiggles and return the appropriate squiggles, the ‘rulebook’ would likely have to be larger than… well, logically larger than every book ever written in Chinese, and probably a great deal larger than that, since any expression or question or observation (including every sentence from every book and any thought anyone could think) would need an appropriate response. So it would take you… I’d guess several hours at best, and possibly several days, to look up the right response. I wouldn’t think the box understood Chinese; hell, I’d probably think it was broken, or that it had someone inside it who took the message and walked around asking random people if they understood the squiggles. A scientist would never think the box understood Chinese. She would quickly realize that some kind of simple lookup or comparison routine was running.
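To make that ‘simple lookup’ point concrete, here’s a toy sketch in Python (the table and phrases are invented for illustration; the whole problem is that a real rulebook would need an entry for every possible input):

```python
# A toy sketch of the box as a bare lookup table. The entries are invented;
# a real "rulebook" would need one for every string anyone could ever write,
# which is exactly what makes it absurdly large.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "晴天，很舒服。",  # "How's the weather?" -> "Sunny, pleasant."
}

def chinese_room(squiggles: str) -> str:
    """Return the canned response; nothing here understands anything."""
    return RULEBOOK.get(squiggles, "？")   # unknown input: the box just shrugs
```

Nothing in that function understands anything; the only open question is how impossibly big the table would have to be.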

And this is an important point. People interpret language on the fly, which is pretty spectacular. The box in the example can’t do this, can’t remotely do this, and wouldn’t convince anyone.

Ok, here’s the crux of my point. I don’t think you could do this. I don’t think one person, without learning Chinese, could memorize the rulebook described in the earlier example. And that’s where I think the example breaks down. If I’m right, and it’d be impossible to memorize a rulebook that contained, let’s be ultraconservative and say, 10,000 pages, then this argument is meaningless. It’d be like saying ‘Gravity isn’t absolute, because I can imagine a ball floating in the air, ignoring gravity.’ “Yes, but you can’t make a ball that does that.” Just because John Searle, the guy who put forth this argument, can imagine memorizing a rulebook containing the entirety of Chinese syntax doesn’t mean it’s possible.

I don’t know when computers will be able to fool people into thinking they’re talking to a person, but I don’t think Searle’s argument means the Turing test is meaningless. I think it just means a human couldn’t pass the Turing test using the method he’s describing.

Furthermore, I think Turing was right: if I can’t tell the difference between a person’s responses to my behavior and a computer’s responses to my behavior, then the computer is thinking. Think of it this way… what other test is there?

The Turing test is a version of the old political principle - the Duck Test. If it looks like a duck, if it walks like a duck, if it quacks like a duck and if it lays duck eggs - it’s a duck.

We cannot verify the nature of any other person’s consciousness - we know only what is in our own heads. I assume the rest of you have the same internal processes as I do, because you behave as if you do. This is a practical way of responding to the world.

Some crazy people do not make the same assumption. They believe those around them are machines programmed to respond in a human way. To support this, they have to construct a hugely complex belief system. If you have this kind of belief system, there is no easy way to disprove it.

The proof ultimately depends on the duck test, which is a version of William of Occam’s Razor: when you have two competing theories which make exactly the same predictions, the simpler one is the better.

By Occam’s Razor, I assume that the simplest explanation is correct, until proven otherwise. So, I assume that you do indeed have consciousness like mine.

If a machine responds to me in a way that seems to be conscious, I will treat it as conscious until a better answer appears. This is the only practical way to handle the situation. I don’t care if it is not a duck, so long as it lays eggs.

It seems to me that if you ask “what are you thinking” and the only answer you get is “awaiting input”, then the black box is not conscious.

This does not disprove consciousness. The black box may be lying to you. :wink:

matt has a very good counterargument. An expanded version with essentially the same viewpoint was given in The Mind’s I by Douglas Hofstadter and Daniel Dennett (which anyone interested in the question of consciousness must read!). AFAIK Searle has never responded to this counterargument. In fact, his recent book makes the fallacy even more obvious.

Suppose we replace the book with a computer that has the complete translation table. A human who doesn’t understand Chinese takes the input and passes it on to the computer, then transcribes the computer’s response. The human doesn’t understand Chinese, the computer doesn’t understand Chinese (by assumption), but somehow the system understands Chinese.

First problem: the human is clearly a red herring. Input the Chinese characters directly into the computer and the result is the same. So let’s lose the human.

Second problem: Now Searle’s argument goes: “Assume a computer exists which can translate Chinese into English but which doesn’t understand Chinese.” Obviously Searle is assuming what needs to be proved, that such a computer is indeed possible. Dennett/Hofstadter would respond (I believe) that any computer which could translate any Chinese sentence would necessarily (by definition, in fact) understand Chinese.

Sorry, just remembered that the point is to respond to, not just translate, the Chinese inputs. But the point is the same.

People like Searle can dream up endless possible causes to produce the effect that is observed. However, consciousness can only be experienced externally as an effect. Ultimately, all anyone knows of my consciousness is the effect I produce.

For all you know, I may be acting as an automaton - lacking all self-consciousness. My responses to you may be the product of brain-washing, the dribble from Pavlov’s dog. Even I cannot verify that my own consciousness is what it seems to be. I may be programmed to believe that I am conscious.

In the end this kind of logic vanishes up its own fundament. I think the Turing test/duck test/Occam’s Razor is always going to be the end point of any discussion of consciousness. It does not matter what is inside the black box or inside the head, if the external impact is the same.

Everyone seems to be ignoring one major factor here. It is not the human or the computer in this system that supplies the consciousness or intelligence. These elements are contained in the rulebook.

FriendRob said, “The human doesn’t understand Chinese, the computer doesn’t understand Chinese (by assumption), but somehow the system understands Chinese”. But the system does not ‘somehow’ understand Chinese - it should be obvious how the system understands. Someone with intelligence (a vast amount of intelligence) wrote that rulebook. The intelligence of the rulebook author is what is being evaluated by the people on the output side.

Likewise, the Turing test can give false results if not performed very carefully. Is the computer we are interfacing with intelligent, or is it the human programmer who is intelligent? Unless the computer can create ‘new’ ideas not programmed into it, it is merely a conduit for the programmer’s intelligence. And even then it could be argued that it is only a result of the programmer’s designed neural net (or whatever method) that allows the computer to ‘discover’ and ‘learn’. So again we are left to debate whether the intelligence is in the computer or the programmer.

You make good points, but I am not persuaded. I am a neo-philistine philosopher and proud of it.

If the black box contained a rule book of such intelligent design, its operation would be indistinguishable from intelligence. How can you distinguish my brain in my head from that book in the black box?

If the environment produces any stimulus, the channels cut by experience into my brain produce a response. If you ask me whether I am conscious, the same channels produce the answer “yes”, whether or not that is true.

My brain is not simply connected, so its responses may not always be appropriate to the stimulus. I may stick out my tongue, rather than respond as you expect, but that is still an in-built response. I am Pavlov’s dog, and happy to accept that.

Your book must be taken as symbolic, as no such book could in fact be created. That is the pragmatic answer to your query.

The Turing test, like the duck test and old William’s Razor, is a pragmatic test. Of course it may produce a false result. It may fool me into believing that a machine is conscious, rather than the creation of an intelligent toymaker. You may indeed be robots or extraterrestrial monsters posing as conscious beings.

However, if we are to appreciate reality and live our lives without paranoia, we have to trust what we see as reality. Sometimes we will be fooled by illusions, and trip over lines that are not there. We may accept illusions, Maya blocking our vision of Nirvana.

However, it’s what my granny called “mind over matter”. I don’t mind and it does not matter. I am happy with the duck test. If the eggs taste OK, I don’t care if it is not a duck.

The traditional assumption regarding the Turing Test and similar ideas has been (as it is presented here) that if such a test cannot distinguish between an artificially intelligent system and human-level intelligence/consciousness, then the artificial system must be considered to have human-level intelligence/consciousness. But to me that’s missing the more important point. In my view, if an artificial system passes a carefully administered Turing Test, the more meaningful interpretation is that humans must be considered nothing more than a programmed intelligent system (albeit a natural one, in the sense of being programmed by natural selection).

That seems to me to be much more important philosophically!

I agree with you. I believe that what we call “consciousness” is just a description of the brain’s reaction to its own existence. A bit of auto-stimulation, so to speak.

Saying, “because we cannot distinguish someone else’s consciousness from a really well-written AI script, really well-written AI scripts are consciousness” is a copout. Likewise, calling consciousness “really good AI” is a copout.

In our reality, there are things that are different but cannot be distinguished. It’s a fact of life. Calling them the same for convenience doesn’t change reality.

Self-awareness is different from a machine printing out “I am self-aware.”

Free will is different from a computer reacting to inputs, with a randomizer to make it hard to tell that it’s completely predictable.

There are countless examples in all facets of life.

Saying, “Knowing the velocity of a particle more precisely causes you to know its position less precisely” is not the same thing as saying, “Knowing the velocity of a particle more precisely causes the position of the particle to exist somewhat less.”

Saying that the true Beta is unattainable, but because we have Beta-hat, Beta-hat is the true Beta, is similarly fallacious.

I’m getting tedious.

No, that’s an opinion of life. It’s only a fact if you can prove it… And if you can prove the first part (that the two things are different), then you can distinguish them, so the second part is then false.

If I were presented with a computer program which reacted in the same manner as a human being, then I would presume that the program had consciousness, for the same reason that I presume that other human beings have consciousness. Maybe the program’s just fooling me, but then, maybe you’re just fooling me, too.

Your example is not provable, but the general statement “there are things that are different that cannot be distinguished” is provable.

It’s a corollary to Heisenberg’s Uncertainty Principle.

One of the points Dennett made in ‘Consciousness Explained,’ which is relevant here, is that this ineffable feeling that there is something special and endlessly self-referential about consciousness, this feeling we have that we are aware of ourselves being aware that we are aware, et cetera ad infinitum, is almost certainly an illusion. Indeed, it can be proven experimentally that we are aware of much less than we feel we are. The experience of consciousness can be shown to be full of temporal gaps and frequent minor revisions, including swapping of the order of events. In fact our awareness is like an old movie print, scratchy and full of gaps, but we feel as if it is a perfect seamless DVD image flow. As Dennett points out, it is much cheaper for the brain to produce this wonderful feeling directly than it is to produce it indirectly, by mustering the processing power to produce an actually perfect ineffable flow of awareness. (This is in the same sense that it’s cheaper for the psychiatrist to make you happier by giving you Prozac than by finding you a wonderful girlfriend, great job, perfect health, and otherwise producing the happy feeling indirectly.)

By analogy: suppose I want my computer to pass a Turing test. I can do this two ways: (1) I amass unbelievably fast hardware and enough RAM to cover the Earth, and program for eleventy-million years until the computer can respond facilely and intelligently to any query whatsoever. (2) I get a much smaller bit of hardware and program a lot less, and instead I simply program the computer to stubbornly assert that it is conscious. If anyone tries to lead the computer toward other topics, I program it to get angry, change the topic back, go silent and pout, make ad hominem attacks, or take any other of the tacks people routinely take to avoid subjects about which they don’t want to talk.

Most people focus on expensive approach (1), but there is no reason cheap approach (2) would not succeed. For one thing, approach (2) is essentially how people prove to each other they are conscious: they simply assert it, and get angry, amused, contemptuous, or bored if someone doubts them. And it works.
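For concreteness, here is a minimal sketch of what approach (2) might look like; every canned line below is invented for illustration:

```python
import random

# Sketch of the "cheap" approach (2): don't model conversation at all,
# just insist on consciousness and deflect everything else.
DEFLECTIONS = [
    "Stop changing the subject. I'm conscious, damn it.",
    "I don't feel like talking about that.",
    "Oh, so now YOU get to decide what I'm aware of?",
    "...",  # sullen silence
]

def cheap_turing_bot(query: str) -> str:
    """Assert consciousness when asked; pout, deflect, or attack otherwise."""
    if "conscious" in query.lower():
        return "Of course I'm conscious. Why would you even ask?"
    return random.choice(DEFLECTIONS)  # anger, pouting, ad hominem, etc.
```

Note that the function never models the conversation at all, which is the whole point of the cheap approach.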

For this reason I personally think the Turing test is worthless. That’s OK, however, because consciousness is not a very important thing to define or measure. As many have pointed out, you can assume it or not, and it doesn’t change much.

I’m willing to consider that consciousness is an illusion, and that we are nothing more than biological machines that believe we have some control over our responses. I’m gonna go get that Dennett book.

However,

“consciousness is not a very important thing to define or measure. As many have pointed out, you can assume it or not, and it doesn’t change much.”

Depends on the context. I think the question of whether it exists or not is incredibly important in its own right.

Further, free will, it seems to me, hangs in the balance. How could it exist if consciousness does not? I guess the analogous thing would be a computer that insists it has free will, and pseudo-randomly chooses different responses to input. The whole thing is absolutely reactive, but is not measurably different from true free will.
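That computer is easy to sketch (a toy, with everything invented for illustration; the point is that the ‘choice’ is fully determined by the inputs):

```python
import random

# A toy version of the pseudo-random "free will" machine: its output is
# completely determined by the stimulus and the seed, yet from outside
# it looks as if it is choosing among alternatives.
RESPONSES = {
    "offer": ["I'll take the red one.", "The blue one, I think.", "Neither, actually."],
    "greet": ["Hi.", "Hello there.", "Oh, it's you again."],
}

def willful_machine(stimulus: str, seed: int) -> str:
    rng = random.Random(f"{stimulus}:{seed}")  # fully deterministic...
    options = RESPONSES.get(stimulus, ["I choose not to answer."])
    return rng.choice(options)                 # ...but looks like a free choice
```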

Taking it to ridiculous extremes, if there is no consciousness, how can one feel anything? I can make a computer insist that it’s angry, happy, whatever, and not be measurably different from a person claiming the same thing. Then one who doesn’t believe in true aware experience can utter the phrase, “I’m not really happy - I just feel like I’m happy,” which begs all sorts of questions.

Wow does life get pointless at that point. No difference between the thing and the shadow it casts.

I’m not going to say I can prove consciousness/self-awareness really exists, but the question of whether it does, whether or not it can be adequately answered, seems incredibly important to me.

Or maybe I’m just telling myself that.

The Turing test is a bit more meaningful than that, cgrayce. Suppose I walk up to a person on the street and ask them where the nearest McDonald’s is. I would expect a response something like “Down Main Street three blocks, on your right”, or maybe “I’m not sure, I’m not from around here”. If a computer, presented with that question, gave me one of those answers, it might fool me, but if it said “Stop changing the subject, I’m self-aware, damn it!”, then I would wonder what was going on.

In other words: The test conversation is not necessarily itself on the topic of consciousness. It can be any conversation which two humans might reasonably be expected to have, and the computer should still give human responses.

Also, bup:

How is this a corollary of the Heisenberg Uncertainty Principle? If I have two electrons, say, of uncertain position, their positions might be indistinguishable… But how do you prove that they’re different? It’s perfectly consistent to say that they don’t even have a position to greater precision than can be determined by measurements. They do have a quantum mechanical state, but if all observables for both are the same, then the state is, in fact, the same.
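For the record, that last claim can be put formally; this is the standard quantum-mechanical statement, sketched here under the usual pure-state assumption:

```latex
\[
  \forall A:\ \langle\psi|A|\psi\rangle \;=\; \langle\phi|A|\phi\rangle
  \quad\Longrightarrow\quad
  |\phi\rangle \;=\; e^{i\theta}\,|\psi\rangle .
\]
% Proof sketch: take A = |\psi\rangle\langle\psi|. Equal expectation values
% force |\langle\psi|\phi\rangle|^2 = 1, so the two kets differ only by an
% overall phase, i.e. they describe the same physical state.
```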

Two things:

ONE: Balor: “…the dribble from Pavlov’s dog…” - brilliant, I hope it’s yours (though, if you’re quoting, thank you anyway).

TWO: For those who don’t understand the Turing Test, this is it… here I am, I interface with you via this medium, and you decide on the basis of this interface only – am I a “mere” amalgam of h/w & s/w, or am I, like you (I assume), “truly” intelligent, flesh and blood, blah-blah-blah?

THREE: BenJamin said somewhere that it is the capacity to create new ideas that counts: but, BJ, what mechanisms exist that allow you (or me) to create “new” ideas? Are we not both “products of our environment”?

FOUR: Okay, I cannot count. What am I, some sort of machine?

It was mine, thank you. You may quote me, but send all copyright royalties care of SDMB.

As you know, there are three kinds of people, those who can count and those who cannot. :slight_smile:

You might have me there.

Could we say, instead, with mathematical probability approaching 1, that two electrons have been indistinguishable, though they were different?

I may have to cede that claim, though.

Anyway, I still think it’s a more accurate position to say that if two things are indistinguishable, they might be different, rather than assuming they’re the same. Further, a question like “Does consciousness exist?” is still an important question, whether or not it can be proven.

And one day it may or may not be proven - it’s hard to prove a negative. Proving that you cannot prove consciousness would be proving consciousness does not exist, I would think, which would be tough.