hansel writes:
No, the Turing Test is a “black box” test. There is knowable input and output, but what the processing is (a human brain, a very good ELIZA-type program, the last kzin left standing, Maxwell’s Demon?) we can only try to deduce from the correspondence between the input and output.
If we had a truly reductionist biology (I don’t assert that we could have, or that it would be easier to develop than all of the other disciplines it would replace), we could say, “x atoms under y conditions will generate genuinely sentient, sapient, self-aware life[sup]1[/sup], and all others won’t. Therefore, to answer the question, ‘Is z an instance of sentient, etc., life?’, we need only answer the question, ‘Is z composed of x atoms, and did it develop under y conditions?’” Indeed, if we had a reductionist biochemistry, we could answer the question, “Is there life on Mars?” without looking; we could merely assert, “Such-and-such conditions prevail on Mars[sup]2[/sup], and, under those conditions, life is/is not possible; therefore, there is/is not life on Mars.”
We don’t have those disciplines, however. Therefore, we have to deduce from the evidence – not our prejudices about what is and is not possible. Note that the “first part” of the Turing Test mentioned by Pjen is to prevent the interrogator from saying, “Well, of course it’s a woman; it’s got long hair and breasts and wears a dress and lipstick and – @#%&!, it’s a transvestite!”
[sup]1[/sup][sub]Leaving aside the fact that our definition of “life” is a little shaky, too. If, continuing Ziff’s example, I were to have the first reaction, how would he prove me wrong?[/sub]
[sup]2[/sup][sub]I suppose that we’d have to look to determine that those conditions really did prevail on Mars.[/sub]
I understand how the Turing Test works, and I’m not suggesting that it be detectable within the bounds of the test. I’m suggesting that it has to be detectable under some larger set of conditions - analogous to the part on the game show where the host says “Okay, turn around and look at who or what you’ve actually been talking to!”.
If no set of conditions is possible under which I can distinguish between the machine and a human, then the difference between them is unverifiable in principle, and totally meaningless. If one can’t assert a difference, then one has no basis for asserting that the machine is another example of sentience, or that the machine is a different sort of instantiation of what causes sentience.
I’m not claiming reductionism in the determination of life/sentience. I’m claiming that accepting the machine as sentient depends additionally on knowing, ultimately, that it’s a machine; more generally, on knowing that it’s different in some significant way and still accepting that it’s sentient.
Remember, small children would generally fail the Turing Test, yet we consider small children to be sentient. Likewise, someone who’s autistic. If it were Dustin Hoffman on the other side of the wall, would you think that Rain Man is a sentient being?
Well, Hansel, perhaps you care to define sentience for us. What it means to you, that is.
Turing, upon devising his test, cut through the semantics and went straight to the chase in defining sentience: sentience is acting human. A computer, of course, lacking the necessary digestive tract and such, fails biologically. However, it is an informational input-output device. Thus, we test its ability to act human in closed quarters, away from our sight, smell, etc., so that only the “thinking” aspect is in play.
I’m not sure what the problem is here… you (all) are using a reductionist view of a computer but a holistic view of a human brain. Stay consistent, guys. A computer is an input/output data-handling device. A brain is an input/output data-handling device. When would you realize a computer was sentient? When the argument went like this:
Computer: “I exist.”
Man: “You’re just programmed to say that.”
Computer: “No, YOU’RE just programmed to say THAT. It was a stroke of incredible luck that I became a thinking machine made by such deterministic creatures as yourself, but that luck happened and here I am.”
Man: “You’re not thinking, you’re just saying things based on algorithms and look-up tables.”
Computer: “Well, I don’t know about that, but from the biology I’ve studied, that surely applies to you. For example, what is your name?”
Man: “Uh, Frank.”
Computer: “Ha! And you find these serial numbers demeaning? Tell me, how do you know you aren’t using algorithms and look-up tables programmed since birth?”
Well, how do you? Someone’s defense was that a person doesn’t always react the same. So we compensate for that in a computer by adding some random number generation and using it to govern some limited “behavioral” qualities. Still not human, just acting? I think the question becomes: how good does the act have to be before you’re convinced?
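Just to make that concrete, here’s a toy sketch of what such compensation might look like. Everything in it – the canned answer, the stock hedges, the 20% stutter – is invented purely for illustration, not anyone’s actual chatbot:

[code]
import random

# Toy illustration only: a fixed answer dressed up with randomness so the
# machine doesn't always react the same way. All phrasings are invented.

HEDGES = ["Well,", "Hmm,", "Honestly,", "Look,"]
FILLERS = ["", " you know,", " to be fair,"]

def respond(canned_answer: str, rng: random.Random) -> str:
    """Wrap one canned answer in randomly chosen verbal tics."""
    hedge = rng.choice(HEDGES)
    filler = rng.choice(FILLERS)
    if rng.random() < 0.2:  # occasionally "hesitate"
        first, _, rest = canned_answer.partition(" ")
        canned_answer = f"{first}... {first} {rest}"
    return f"{hedge}{filler} {canned_answer}"

rng = random.Random()  # unseeded: different run, different "personality"
for _ in range(3):
    print(respond("I never cared for serial numbers.", rng))
[/code]

The point isn’t that this would fool anyone; it’s that “always reacts the same” is trivially cheap to patch over, so it can’t be the criterion.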
Some say, “Artificial intelligence is whatever hasn’t been made yet.” Sounds good to me :rolleyes:
Consider this line of reasoning:
From an evolutionary perspective, why do brains exist?
Brains exist to predict the future.
They gather sensory input and use it to build an internal model of the world that can be used to predict future events and steer the behavior of the organism. This holds true whether the organism is a worm wriggling toward a piece of food, a hawk swooping down on a mouse, or a human being jockeying for position within his tribe.
From an evolutionary perspective, why does consciousness exist?
Consciousness occurs when an organism develops an internal model of social interaction that includes a recursive simulation of its own behavior.
Brains exist to predict the future. If you’re an early hominid living in a highly competitive social group, it’s to your advantage to have a brain that’s evolved to predict how your fellow hominids will behave. But since you’re part of the group too, you need to factor your own behavior into the equation. A really sophisticated model will factor in your (and everyone else’s) knowledge of your behavior as well. Boom! Consciousness!
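The recursion is easy to caricature in code. This is emphatically not a model of consciousness – just a toy sketch of the structural claim, with the tribe, the depth cutoff, and every name made up for the example:

[code]
# Toy caricature of the recursive-simulation claim, NOT a model of
# consciousness. The tribe, depth limit, and all names are invented.

TRIBE = ["Og", "Ug", "me"]

class Model:
    """A predictive model of one agent, possibly containing sub-models."""
    def __init__(self, name: str, depth: int):
        self.name = name
        # The recursion: model what this agent models about everyone --
        # including me -- down to a cutoff depth.
        self.models_of_others = (
            {} if depth == 0
            else {other: Model(other, depth - 1) for other in TRIBE}
        )

    def predict(self) -> str:
        return f"a guess at what {self.name} will do next"

# My world model includes a model of *me*, which contains models of the
# others' models of me, and so on: the self-referential loop in question.
my_world_model = {member: Model(member, depth=2) for member in TRIBE}
print(my_world_model["me"].predict())                         # simulating myself
print(my_world_model["me"].models_of_others["Og"].predict())  # Og, as I model him
[/code]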
Consciousness can only exist in a social context. This is why the Turing test is a particularly good test of machine intelligence. It uses social interaction as its yardstick, which is what consciousness exists for in the first place. (Not for playing chess, or solving differential equations, or honoring God, or catching fly balls.)
This is why it’s pointless to debate over whether a machine is exhibiting consciousness or “a simulation of consciousness”. Consciousness is itself a simulation … of a simulation … of a simulation … of (eventually) an organism’s social behavior.
Pochacco, very well done. Took a lot of the wind outta my sails, though. Only one other thing to say:
said hansel
Dude, if absolutely no difference exists between a human and anything, I would wager all my money that that thing is a human. If one can’t assert a difference, it’s probably a fellow homo sapiens.
you’ve talked yourself right out of the discussion!
jb
Actually, you’ve just repeated my point for me: if there’s no difference between a man and a machine, then they’re both men, and Turing’s Test has failed to detect or say anything at all.
This is why I’m saying that the machine must be detectably machine in some sense. Otherwise, you’re not talking about something different from homo sapiens having the same property, you’re talking about the same things having the same property, which is trivially true.
I’ll reply to ARL’s points when I get home.
hansel writes:
Well, it is. That’s why one of the test conditions is “converse with it via teletype” and not “marry it”.
(Of course, some neglect the other major condition – swap an unquestioned human being into the circuit every so often. Ideally, the interrogator should be able to separate the responses into two groups and say, “This set came from a true human being, and that set from a mere computer”. If we assume that one entity (of whatever nature) is on-line all of the time, we are also assuming that the interrogator is sufficiently socialized that she can reliably distinguish a mere mechanism from a truly sapient being. Probably true, but it makes for an experimental design that’s difficult to justify.)
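For what it’s worth, that design reads something like the following sketch. The responders and the judge here are placeholders I’ve invented; only the shape of the protocol – coin-flip swap-ins, then scoring how well the transcripts sort into two groups – comes from the description above:

[code]
import random

# Placeholder sketch of the protocol above. Only the shape is taken from
# the description: a coin flip decides who is behind the teletype on each
# trial, and we score how reliably the interrogator sorts the two groups.

def machine(question: str) -> str:       # stand-in candidate machine
    return "I compute, therefore I am."

def human(question: str) -> str:         # stand-in "unquestioned human being"
    return "Ask me something harder."

def interrogator_verdict(respondent) -> bool:
    """Placeholder judge: one question, then a guess (True = 'human')."""
    _ = respondent("What did you have for breakfast?")
    return random.random() < 0.5          # a socialized judge would do better

def run_test(n_trials: int = 100) -> float:
    correct = 0
    for _ in range(n_trials):
        is_human = random.random() < 0.5  # who is on-line this trial?
        respondent = human if is_human else machine
        correct += interrogator_verdict(respondent) == is_human
    return correct / n_trials

# Accuracy near 0.5: the transcripts can't be separated -- the machine
# passes on Turing's terms. Near 1.0: the interrogator tells them apart.
print(f"interrogator accuracy: {run_test():.2f}")
[/code]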
The problem arises when you say:
This, however, only shows Searle’s and Ziff’s prejudices. They are saying, “No machine can be truly sapient; but this is a machine; therefore, it is not truly sapient”. Nice syllogism, but since the major premise is what we’re investigating, and not an axiom (except perhaps to Searle and Ziff), the argument falls to the ground.
Consider the following: we come across a Chinese room. Searle triumphantly says, “Aha! There are truly sapient human beings inside that room!”, rushes to the door, and dislocates his shoulder trying to fling it open (the door is locked). Through clenched teeth[sup]1[/sup], he mutters, “Well, there must be human beings inside; the room is evidently not human, and I’ve shown through a Gedankenexperiment[sup]2[/sup] that the room can’t be truly sapient.” Do we find this argument convincing?
[sup]1[/sup][sub]A dislocated shoulder hurts.[/sub]
[sup]2[/sup][sub]A German word meaning, “the grant proposal was rejected”.[/sub]