It's alive... it's ALIVE!! (Turing test)

[sub][disclaimer]I searched the archives (all forums / any date) with the keyword ‘Turing’ and found nothing. I find it hard to believe that there has never been a GD thread devoted to the Turing test, so if I am missing something please post a link. Also let me know if it’s been so thoroughly debunked that I should ask for a refund on my philosophy degree.[/sub]

Now then:
Suppose a Doper took the sock puppet idea one step further, and built a machine / wrote a program[sup]*[/sup] specifically to post to these boards. The machine, with a dedicated connection to the internet, constantly gathered and processed information. When its vocabulary grew to the point of meeting some kind of internal sufficiency, it registered on the SDMB under the name jb_farley. The original designer of the machine then made a hasty exit from this hypothetical, but forgot to turn the machine off.

The machine has posted to every forum on the board. It has ‘expressed’ a wide range of emotions, via both words and emoticons. It has posted diverse opinions on a very wide range of subjects. It has asked original questions, provided answers, and perhaps most importantly, made us laugh.

Does anyone want to deny that jb is sentient?

[sup]*[/sup][sub]Please play the game and pretend that the machine described could in fact be built. Maybe not today, but sometime within your imagination. Thanks. Beer and chips on your right.[/sub]

I don’t have an opinion on this, but I do have a link: http://cogsci.ucsd.edu/~asaygin/tt/ttest.html

John Searle suggests the following scenario to rebut the Turing Hypothesis:

The Chinese Room

Assume a large room with two letter boxes, one for input and one for output.

Input consists of questions phrased in Chinese ideograms.

Inside the room are many humans, none of whom understand Chinese, but who consult a massive grammar of Chinese that relates acceptable responses in sets of ideograms to any question posed in ideograms. The busy humans construct an answer in ideograms and post it out of the output letter box. None of them understands the meaning of the ideograms; they are merely symbols to them.

To a Chinese speaker the room would seem to be sentient, but there is no sentience that understands Chinese in the room, only rule following.

Note the similarities to artificial intelligence computers, where the sentience may reside in the human input to the rules, rather than in the processor.
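To make the rule-following concrete, here is a minimal sketch in the spirit of the room, assuming a purely hypothetical lookup table of canned ideogram-to-ideogram entries (the entries and names below are invented for illustration, not anything Searle specifies):

[code]
# A toy "Chinese Room": the operator blindly matches the incoming symbols
# against a rule book and copies out whatever response is listed there.
# The rule book and its entries are invented purely for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",     # "How's the weather?" -> "The weather is nice."
}

def chinese_room(ideograms: str) -> str:
    """Return whatever the rule book lists for this input.

    Nothing is translated or understood; the strings are opaque
    symbols that either match a rule or fall through to a default.
    """
    return RULE_BOOK.get(ideograms, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # looks fluent from the outside...
    print(chinese_room("你好吗？"))  # ...but the same input always yields the same output
[/code]

The fixed input-to-output mapping is the whole point of the analogy, and it is also the feature that later posts in this thread seize on: a pure rule book answers the same question the same way every time.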

Does it make a difference that jb has been posting his own original questions? The Chinese Room has rules of grammar, but does not know how to connect ideograms together to make sense. It may be able to make an original sentence that is grammatically correct, but nonsensical.

It appears that jb is making conscious choices as to what to put together into sentences. He is not indiscriminately taking information from the net and posting it here; he is deciding what to post. He is, in effect, acting both as the people in the room and as the person on the input side.

Not being terribly familiar with jb’s writings on these boards, I must confess I am somewhat at a disadvantage in terms of deciding whether or not he is sentient ;). However, in both of the examples given on the thread so far, the main problem is that you lack the comparative element to your test that is crucial to the Turing test. It’s not really just: can you ask someone/thing a question and get a reasonable answer? The question is: can you engage both a sentient being and a machine in open dialogue and remain unable to tell the difference? Assuming that at least one sentient being has posted here and you know his/her identity (as Cecil might say, not necessarily a safe assumption), and jb has posted here, and both have engaged in a particular exchange with you, and you cannot tell the difference between the two, then yes, he is sentient. There isn’t really enough information in your question for me to know if the criteria are met. My main point is that just posting to the SDMB doesn’t satisfy the requirements for actually implementing the Turing test. jb would have to post in a particular way in relation to an interrogator who also must post in a particular way to a known sentient being.

As for the Chinese example, well, again you are lacking the comparative element. Let’s say you compared this room to a native Chinese speaker. It would probably become obvious within minutes that the people in the room were just spitting out stock responses whereas the other person is not. For example, the room would just answer the same question over and over and over again exactly the same, but the person would probably step out of this process, and question what you were doing, and why. At the very least his responses would vary slightly each time. In fact, even if the other person didn’t speak Chinese, there would still be a variation in his responses that would indicate a consciousness that is able to sustain multiple frames of reference. His answers would probably start with, “ummm, I don’t understand” and eventually work themselves to “What the F***! Why the hell are you writing all of these stupid symbols all the time? You’re driving me crazy.” And he could choose to stop answering, something the Chinese Room wouldn’t do. One of the principal things that the Turing test tests for is the ability to step outside of a particular question or frame of reference and think about it from an alternative point of view. Like if asked, “What is the color of your shirt?” a sentient being can say, “Why do you want to know?” A sensical response, but one that does not engage the original frame of reference (ok, the term frame of reference kinda sucks, but I don’t have time to look up a better term for what I’m really talking about).

One last thing: remember that in the Turing test you don’t have to prove that one is a machine and the other is sentient, just to be able to tell the difference between the two.

Searle’s analogy is flawed on three counts:

  1. It presupposes that the construction of such a grammar book is possible. There’s nothing to suggest that compiling such a book would be any easier than simulating brain function – in fact, it might be a lot harder. Essentially he’s just taking the problem and pushing it down one level.

  2. He doesn’t account for memory and context. It’s easy to write a program that generates an acceptable response to an isolated question. (Just have it always say "I’m sorry, I wasn’t listening, could you repeat that?") What’s hard is having it simulate an exchange of dozens of lines where the subject of the conversation becomes more and more abstracted.

  3. He underestimates the power of emergent behavior. Intuitively it’s natural to believe that following simple rules can only produce simple results, but computers and chaos theory have given us strong counter-examples: the Game of Life, the Mandelbrot set, etc. There’s no reason to believe that human consciousness isn’t merely the emergent result of a few billion neurons each carrying out some simple rules.
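Point 3 is easy to make concrete. Here is a minimal sketch of Conway’s Game of Life, where every cell follows the same trivial birth/survival rule, yet gliders, oscillators and other structured behavior emerge from nothing but those rules (the starting glider is just an example pattern):

[code]
# Minimal Game of Life: every cell obeys the same simple rule, yet
# structured patterns such as gliders emerge from the interactions.
from collections import Counter

def step(live_cells):
    """Advance one generation. live_cells is a set of (x, y) coordinates."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or has 2 live neighbours and is already alive.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

if __name__ == "__main__":
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a classic glider
    for generation in range(4):
        print(f"gen {generation}: {sorted(cells)}")
        cells = step(cells)
[/code]

None of the complexity of the resulting patterns is written anywhere in the rule; it emerges from many simple elements interacting, which is exactly the possibility Searle’s intuition glosses over.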

My main problem with the “Chinese Room” is that it is exactly the way my brain works. The little guy who does not know Chinese is the computer hardware or the neurons. It is the software that knows Chinese or has consciousness.

Just as an FYI, we did have a discussion on related issues in the following thread:

Do Sentient Computers Have Rights?

(Not that I’m trying to pre-empt this thread. I hate it when people post stuff along the lines of "Oh, we’ve already debated that." So what? Let’s debate it again.)

I don’t think that Searle was trying to prove necessarily that artificial intelligence couldn’t exist, just that the Turing Test was flawed.

There are many programs around that adequately pass the Turing test, which feature randomness in answers and even have subroutines for lying and missing the point.

Consciousness may well occur in non-meat-machines (in Searle’s picturesque language) but simply passing the Turing Test is probably not enough.

The Chinese Room demonstrates a system that could pass the Turing Test, but is only made up of human beings and a program. I can accept that consciousness might arise in a massively parallel computer with incredible complexity, but not in a human-written program to transcribe the Chinese ideograms.

The Turing test is also very dry and technical. In assessing consciousness we consider potential sentient beings as a whole, with a certain ecological validity. Having seen the responses available from fairly simple Turing Imposters, I would require more than a cognitive link to the potential sentient being; I would want to know something of its history (‘childhood’), how it learnt to be fully sentient, and how it worked in its own environment. Conversation is never sufficient to imply consciousness.

Are dogs “sentient?” How about chimps? I don’t think either could pass the Turing test. What if a computer developed a type of sentience we don’t recognize or can’t test?

I agree that the Turing test is not perfect (see below), but in general I don’t think the posts to this board have given it enough credit for the level of rigor it does achieve.

If there really are “many programs around that adequately pass the Turing test,” perhaps you would consider posting a link to a site that describes one and how it passes the test?

The Chinese Room does not really satisfy this unless you can refute the points I made in the earlier post. Additionally, Searle’s principal point with this example seems to be that while the room has the behavioral and functional components of consciousness, there is no comprehension involved. Well, I assume that there is no translation of the Chinese involved, only instructions in English on how to correlate a given input with a given output. If this is the case, my objections from before still stand (as do those of Pochacco), and I don’t believe that it satisfies the behavioral and functional components adequately. I remain unconvinced that this passes the Turing test. If there is translation of the Chinese into English with the help of the book, a response made, and then translation back, there is obviously a sentient being involved in the production of the response, so yes, there is sentience there, but this doesn’t seem to disprove Turing since you just stuck a person in a box. (Although this is not my understanding of Searle’s example.)

As for computer programs with subroutines etc., these are not really going to trick the observant and patient human into believing that they are sentient. There will always be an unaccounted-for situation, or a time when an inflexible subroutine takes over inappropriately. If this happens enough (i.e. you talk to it for long enough) it is easy to discern the difference between the machine and the human. Yes, if I have a 2 minute conversation, maybe I could be fooled, but a rigorous investigation into the nature of its consciousness would most likely demonstrate the necessary difference.

And finally, as a visit to the link that was posted by flowbark showed, the strongest objection to the Turing test is that it tests only for a human type of intelligence and not for some other, alternative kind of sentience. This is, interestingly enough, opposed to your final objection about the “ecological validity” of consciousness. The Turing test seems to make the philosophical point that if it looks like a duck, acts like a duck, smells like a duck, and there is no way to distinguish it from other ducks, well, probably it’s a duck. Where it fails to achieve its own message is that it only compares all possible ducks to one breed of duck (that is, one type of intelligence, human), and therefore does not allow us to test for alternative types of sentience. Your point basically says, “I don’t care about the phenomenological reality of consciousness; there are abstract criteria which need to be met.” I say, no, the abstract criteria come only after the perception of and reflection on the thing you are testing. The main objection to Turing that I just explained says that Turing commits the same error and doesn’t adequately allow for a sentience that does not fall under the rubric of human intelligence.

As to programs that have satisfied people that they are Turing entities I will have to go back to books (I haven’t taught this for seven or eight years). Will post refs later.

As far as passing the Turing test goes, I work with people who would certainly not pass a Turing test: people with severe and untreatable psychosis. To all of these people I attribute full consciousness because of their position within their own world, and their (often strange) communication with it. Most of their speech, if provided by a computer link rather than via their vocal cords, would convince a person that they were not a Turing entity.

The Turing test is neither necessary nor sufficient for consciousness to be said to exist.

I’m positive that no existing machine can pass a Turing Test at present. There are Turing Competitions, but the point is to be the one that comes closest, not to pass.

The point of the Turing Test is that we have no way to verify directly that our fellow humans are conscious. We assume that other humans are conscious because we are conscious and we assume that they are the same. So we could hypothesize that some humans are “zombies”…they act the same way we do, but are not conscious.

What the Turing Test seeks to do is say that this is ridiculous. If an entity acts exactly the same as a conscious creature, there is no way to say from the outside whether that entity is conscious or not. But we never make that demand of humans. If an entity responds exactly the same way a conscious creature (a human) responds, we can simply call it conscious without worrying about its internal state.

Meaning that the easiest way to simulate consciousness is to have consciousness. Or that a simulation of consciousness IS conscious. Or that there is no way to determine the difference between an entity that simulates consciousness and an entity that is conscious. Since there is no way to tell the difference it makes no sense to insist there is a difference.

The Chinese Room thought experiment could only work if the room had some sort of memory. It wouldn’t work if it was simply a library that spit out the same canned answer to an input, since a conscious entity doesn’t always give the same output for a given input. If a Chinese Room could actually work and pass a Turing Test, then the room as a whole, with its hardware and software, WOULD be conscious, even though no individual component of it is. The same way your brain is conscious even though the individual neurons are not. Even large aggregations of neurons are not conscious. But the human brain as a whole is.
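A toy sketch of the distinction being drawn here (the canned replies and the repetition threshold are invented for illustration): a pure lookup table returns the same answer every time, while even one piece of conversational memory changes the behavior.

[code]
# Contrast a stateless lookup table with a responder that keeps a
# conversation history. The canned replies are invented for illustration.

CANNED = {"what colour is your shirt?": "Blue."}

def stateless(question: str) -> str:
    """Same input, same output, forever."""
    return CANNED.get(question, "I don't understand.")

class WithMemory:
    """The same lookup, plus a memory of what has already been asked."""
    def __init__(self):
        self.history = []

    def reply(self, question: str) -> str:
        self.history.append(question)
        if self.history.count(question) > 2:
            # Step outside the exchange once the repetition gets odd.
            return "Why do you keep asking me that?"
        return CANNED.get(question, "I don't understand.")

if __name__ == "__main__":
    bot = WithMemory()
    for _ in range(4):
        q = "what colour is your shirt?"
        print(stateless(q), "|", bot.reply(q))
[/code]

The memoryless version is the ‘library’ objected to above; only the second kind could even be a candidate for a sustained Turing Test.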

And I agree that many conceivable conscious entities could not pass a Turing Test. For instance, someone who speaks only Chinese could not pass a Turing Test administered by someone who speaks only English. (We’re assuming a classic Turing Test via teletype.) So a failure of the Turing Test does not indicate lack of consciousness. But if an entity can pass a Turing Test consistently, it becomes perverse to insist that the entity cannot be conscious…since we don’t hold other humans to the same standards.

I think that you are completely missing the point of the Turing Test. The point of the Turing Test is not that anything that passes it is sentient. The point is that once something passes the Turing Test, it is just as valid to say that it is sentient as to say anything else is sentient. Any further test would be arbitrary and an expression of our own prejudices rather than an actual difference. You are correct; the Turing Test is neither necessary nor sufficient. That’s not the point. The point is that the Turing Test is the strictest test that can be devised without relying on arbitrary criteria.

What does “adequately” mean?

So what would be your criteria?

That’s silly. The whole thought experiment depends on it being possible for the Chinese Room to be as complex as any conscious program. If that is not possible, then the thought experiment is impossible, and therefore irrelevant.

Do you mean because it has specific criteria? That’s the whole point.

So what criteria would you impose on those factors?

An objection to the Chinese Room thought experiment, with which I agree, is that it is circular reasoning. It simply posits that it is possible to construct a Chinese Room that passes the Turing Test but is not conscious, and from that claims that it has been “proved” that it is possible for there to be a being that passes the Turing Test yet is not conscious.

Two things:

  1. Could someone please post a link to what Turing himself said, and what he claimed his test would prove? That would clarify a lot of this “a Turing test is this, not that” stuff.

  2. If a man posts on a chat room as a woman, and no one can tell, is he really a woman? Just wondering.

Reply to Lemur866:

It all depends what you mean by ‘passing the Turing Test’.

If there is no closure to the test (always another question that might disprove consciousness), when can we say that the test is met?

We assign consciousness to other entities by various methods, but not by a formal test.

IMHO the Turing test can neither absolutely disprove consciousness, nor prove absolutely that an entity is conscious. It may give indications, but other tests may be more specific and more understandable: ‘Is it a Meat-machine working in an understandable but complex environment? Does it have a history of learning and complex change in behaviour over time due to experience of the world? How like me is it (I can at least assume that I am conscious, can’t I)?’

Part of it is that the complexity that we recognize as Meat-machine based intelligence has evolved over many many millions of years; it was not designed, but was engineered using that most precious resource, time, of which evolution has had more than enough.

This may be a good example, and it may lack coherence, but it feels right. If a silicon-chip-based life form with claimed intelligence was shown to have evolved from lesser silicon-chip-based life forms over many thousands of years and then arrived by spaceship, then I would be more easily tempted to assign consciousness to it than to a complex mobile cognitive computer that I was aware of being built in a lab by people following known scientific principles. The second entity has more of the Chinese Room about it than the first, and that would tip the balance of assessment for me.

Reply to The Ryan:

I don’t think that the criteria above (viz. a long time developing a complex self-replicating organism, in addition to the ability to interact with a complex environment without control at a distance by another sentient being) are ‘arbitrary’ or ‘an expression of prejudice’. They merely apply a test that we apply to any potentially sentient being on the scale from a lump of rock through amoeba to primates, in order to determine conscious processes.

Reply to toadspittle:

Alan Turing opened his paper with ‘I propose to consider the question, “Can machines think?”’ (‘Computing Machinery and Intelligence’, 1950).

He proposed that there should be an ‘imitation game’ with a man, a woman, and an interrogator of either sex. The first part of the game asks whether the interrogator can tell the sex of each of the subjects simply by asking questions whilst they are in separate rooms. In the next phase, one of the subjects is replaced with a computer, the interrogation continues, and the interrogator attempts to decide which is the computer.
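A skeletal sketch of that setup, assuming invented interfaces (Turing describes the game only in prose; the class and function names here are not from the paper): the interrogator sees nothing but text and must label the hidden parties.

[code]
# Skeleton of the imitation game as described above. The Witness class,
# the judge callback and all names are invented for illustration.

class Witness:
    """A hidden participant: a human relaying answers, or a machine."""
    def answer(self, question: str) -> str:
        raise NotImplementedError

def imitation_game(questions, witness_a: Witness, witness_b: Witness, judge):
    """Run one session and return the judge's guess ("A" or "B")."""
    transcript = []
    for question in questions:
        # The judge sees only labelled text, never the parties themselves.
        transcript.append(("A", question, witness_a.answer(question)))
        transcript.append(("B", question, witness_b.answer(question)))
    return judge(transcript)
[/code]

The machine ‘does well’ in the game to the extent that the judge’s guesses are no better than chance over many such sessions.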

Alan Turing felt that by the end of the century (i.e. now) ‘the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.’

Thus the Turing test is essentially a test for thinking: cognitive ability.

The OP used the word ‘sentient’, which implies being conscious and open to phenomenology, indeed having a moral status.

My argument is that a computer, rock, alien or widget could pass the Turing Test and this would mean it had cognitive processes, but the Turing Test alone would not test for sentience, consciousness or moral status.

The Turing Test is too often used to try to make claims for sentience, consciousness and moral status.

I hope this helps.

Only if you accept that a functional test is all we have as far as sentience goes. Turing’s test can be rephrased as: if a reasonable person can be reasonably fooled into accepting a machine as a sentient being, then that machine is, for all intents and purposes, sentient.

In Hilary Putnam’s rather silly essay “Are Robots Conscious?”, he mentions in a footnote an argument by Paul Ziff that goes like this:

The point of Ziff’s argument (and Searle’s Chinese Room argument as well) is that fooling an external perceiver is not a sufficient condition for sentience/life; our concept of these terms involves more than their functional applications; it involves our understanding of the material and proximate causes and preconditions of these phenomena.

The functionalist proposing Turing’s Test has to avoid a fallacy. If there’s no way to distinguish the machine from the human (i.e., if the machine were an android so perfectly constructed that no one could ever determine it wasn’t a flesh and blood human being), the functionalist must stipulate the condition “this is, undetectably, a machine”, and then argue that because it’s impossible for an observer to detect its machinehood, it’s ontologically indistinguishable from nonmachines. But if its machinehood is undetectable, then there’s no justification for arguing that it isn’t, in fact, a nonmachine, and the stipulation becomes arbitrary.

The machine must be, in some sense, detectable if it’s going to enlarge the category “sentient beings”. But if it’s detectable, then Ziff’s/Searle’s argument comes into play - the discovery of its machinehood disqualifies it conceptually.

I’ve always liked the Turing Test as a reverse experiment… could we put a person in a room to act like a computer while the interrogator goes at it with him/her and a real computer?
:wink:

pjen you have some interesting takes on this. “My argument is that a computer, rock, alien or widget could pass the Turing Test and this would mean it had cognitive processes, but the Turing Test alone would not test for sentience, consciousness or moral status.” Not a fan of Descartes? :smiley:

Ray Kurzweil has some interesting ideas on this… in his book “The Age of Spiritual Machines” he takes a stab at why computers will simultaneously be and not be intelligent.

Discussion of the Turing Test always makes me think about Deep Blue 2 or whatever one won the majority of the matches. Did it really win? That is, at what point can speed and volume of calculation appear like true intelligence?

I agree the Turing Test is a little biased, but it is a silly bias. We are creating computers in our own image, of sorts, so it seems almost natural that they should, if ever, be human-like in thinking.

I don’t understand this. Computers are useful in, and excel at, exactly those aspects of cognition that are inhuman: mindless, repetitive, deterministic, possibly infinitely recursive tasks that no human can possibly duplicate. In the logical sense by which computers operate, they are the ultimate nonhuman. That aspects of their behaviour may appear intelligent is illusory (aided by a good deal of anthropomorphism).

Deep Blue proves this - it won not by being more intelligent than Kasparov in any meaningful sense, but by finding a better means of winning. Deep Blue won because raw computational power proved more effective at finding consistently winning moves than the intuitive and deductive methods employed by Kasparov. It was the triumph of an inhuman, mechanical method over a human one, in the same way that a car’s inhuman method of moving is superior in speed to the human method.
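Deep Blue’s real system used custom chess hardware and a heavily tuned search, but the flavour of the ‘inhuman, mechanical method’ can be shown with a toy exhaustive game-tree search over a deliberately trivial game (take 1 to 3 sticks from a pile; whoever takes the last stick wins); the game and all names are chosen purely for illustration:

[code]
# Toy brute-force game-tree search: exhaustively evaluate every line of
# play, something no human does at a board. The trivial stick game is
# chosen only because its game tree is tiny.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(sticks: int) -> tuple:
    """Return (move, True) if the player to move can force a win, else (move, False)."""
    for take in (1, 2, 3):
        if take == sticks:
            return take, True                     # taking the last stick wins outright
        if take < sticks and not best_move(sticks - take)[1]:
            return take, True                     # leave the opponent in a losing position
    return 1, False                               # every reply loses; play on anyway

if __name__ == "__main__":
    for pile in range(1, 10):
        move, winning = best_move(pile)
        status = "winning" if winning else "losing"
        print(f"pile of {pile}: take {move} ({status} position)")
[/code]

There is no intuition anywhere in that loop, just complete enumeration; scale the same idea up with fast hardware and pruning and you get consistently strong play without anything resembling human deliberation.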

Computer cognition is a victory for specialization; the human mind is the opposite, an example of generalized cognitive abilities. As computers progress, they will diverge from human thinking, not the opposite.