This argument is a classic one from the literature.
It’s the kind academics like to get wrapped up in, because the argument itself is framed in a way that invites indefinite discussion. Good stuff for professional careers.
It overlooks a fairly basic problem (among others, of course): namely, that people do not want to be fooled.
If I discovered a program that was pretending to be human with the intent of fooling me, I would respond with hostility. Most people would. I strongly object to a machine trying to convince me it’s alive. I would respond in one of the following ways:
I’d destroy the machine. (If that was legal, of course.)
I’d sabotage it so it couldn’t work correctly.
I’d attack the company or person who made it or directed it my way. (Legally, of course. For example, with another robot designed to fool them.)
You might ask why I would do this, since I’m not a violent person, nor do I appreciate such policies as our current solution to the “Iraq” problem. The answer is pretty simple. People, animals, and life generally are sacred. A machine is just a tool, which I am free to respond to however I feel. It’s a piece of rock. I owe no allegiance to it.
—anyway, i’m rambling, but if it weren’t for quantum fluctuations, there would be no room for randomness in the brain, so i feel like that provides a lot of flexibility.—
Well, randomness is a kind of tricky thing. We can get degrees of randomness that are “good enough” for virtually any application, and I’m not exactly clear on why quantum randomness would improve anything. Of course, we don’t even have an idea of what process we’re trying to reproduce. And of course ALL matter involves quantum effects: not just human brains. If human brains harness quantum effects as part of their algorithms, then so can computers. The question is whether intelligence IS essentially all algorithms, or something else… though I’m not sure what it would be.
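To make the “good enough” point concrete, here is a minimal Python sketch (my own illustration, nothing from the AI literature): a deterministic, fully algorithmic generator alongside one backed by the operating system’s entropy pool. Neither involves anything quantum-specific at the level the program sees, yet the second is unpredictable enough even for cryptographic use.

```python
import random   # deterministic PRNG (Mersenne Twister): fully algorithmic
import secrets  # backed by the OS entropy pool: strong enough for cryptography

# A seeded PRNG is pure algorithm: the same seed yields the same "random" sequence.
rng = random.Random(42)
print([rng.randint(0, 9) for _ in range(10)])      # identical output on every run

# The secrets module draws on OS entropy sources; no quantum hardware required.
print([secrets.randbelow(10) for _ in range(10)])  # differs on every run
```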
if you take a look at my previous few posts, i tried to make it clear that one cannot actually make the distinction between a human being and the machine, if it is properly implemented. so, from your perspective, it is entirely possible that every human you’ve ever met, seen, heard, or heard of is just a machine. or, more to the point, you cannot prove that they feel and think in the same manner that you feel you do.
so how would you come across the knowledge that a given machine was trying to fool you? could you apply that to humans to ensure that they were actually feeling what it is that you are?
how do you attribute such sanctity to human life, if you cannot guarantee that anyone other than yourself is sentient?
I apologize for any offense, but this sounds like the argument of a racist being confronted with the argument that what he has always considered ‘property’ is in fact his intellectual equal.
The internet turns out to be a classic example of what you just postulated. Based on your chat messages, I have no idea whether you are a man or a woman, barring any obvious outward signs you choose to give me. I am pretty sure that most of us instantly assume that a poster is male (since males represent a higher proportion of internet users) unless this is contradicted by other facts. We do not “wistfully admit we cannot tell the difference”.
In a recent thread, I communicated with Morse Code even though, when I saw the original post, I knew no Morse Code. But I opened a web page and used it to help me translate the characters.
By the time I began formulating my response, I was surprised to discover that I knew many of the codes, and had to reference the chart only occasionally.
I submit that Searle’s man is learning Chinese (in written form) as he works.
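For the curious, the rote procedure looks something like this minimal Python sketch (my own illustration; the table is abbreviated): pure symbol-for-symbol lookup, exactly the kind of rule-following Searle’s man does, until the operator starts to internalise the table.

```python
# Abbreviated Morse lookup table; the real chart covers the full alphabet,
# digits, and punctuation. The procedure is pure symbol manipulation.
MORSE_TO_LATIN = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
}

def decode(message: str) -> str:
    """Translate space-separated Morse symbols, one table lookup at a time."""
    return "".join(MORSE_TO_LATIN.get(symbol, "?") for symbol in message.split())

print(decode(".... . .- -.."))  # -> HEAD
```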
As I say, Ramunujun, you’re buying into the restricted framework of an academic argument, not looking at the actual picture.
What you’re speculating about is a situation that we cannot afford to allow to exist, and that people would not allow to exist.
I spent years in Artificial Intelligence.
I’m an activist on social issues.
I’m capable of discerning who’s close to producing programs capable of fooling people in a way I consider dangerous.
If I suspected anybody was close to producing a really threatening program, I would take steps to halt their work. Since I’m an expert, it’s not unreasonable to think I’d succeed.
If I thought I was being put into a position where I would not be able to discern life from artificial life – I would change the situation so I could discern.
You’re saying: “But what if you couldn’t?”
I’m saying: “I wouldn’t let it happen.”
What you apparently also don’t understand is that artificial life is DANGEROUS. Even a single artificial virus or nanobot that gets out of containment – one that can replicate itself – could potentially wipe out all life on Earth. Sure, it’s unlikely. Chances are only millions would die the first few times.
No academic conferences. No PBS specials. No university grants. Nothing comfortable and intellectual for you there. It gets out – we’re dead.
You weren’t sent back in time to stop this from happening (and incidentally becoming the father of the man who sent you back in the process), were you, partly_warmer?
I realized something wasn’t quite fully represented above.
I was speaking about my own actions to indicate that it’s possible for someone to do something to stop the situation you describe. In practical fact, I doubt I’d be involved – for various reasons.
In practice, I would expect environmental, safety, and religious groups to stop artificial life that couldn’t be distinguished from real life. I simplified the argument by pointing out that one person alone, even, is capable of acting.
What would happen if they all failed (as perhaps you are hoping and assuming will happen)? The problem – for some mad scientist – is that artificial life wouldn’t happen all at one time. Limited artificial life will come out first. As the problems with it are discovered, research will be stopped. It will never get to the point where a human is completely indistinguishable from an AI mechanism.
There are also, of course, quite complicated technical reasons it won’t happen. But they aren’t appropriate for general discussion, and – as you may guess – I don’t want to do anything to encourage it.
So again, without intending to, you’re effectively asking “What would happen in this impossible situation?”
As a religious person, I find the idea of artificial life tantalising; I do believe strong AI is possible - I think it might be a while before synthetic humans are a possibility, if ever - but I think that true machine sentience (or every outward appearance of such) is right around the corner. I look forward to the possibility of discussing matters of philosophy with a truly new being.
There are a huge number of theoretical and philosophical problems with the “strong AI” being described. Among them, consider this simple situation:
If a “robotic human” were created, then it would, according to the limited and contrived framework of the black-box testing, have to be able to answer any (and all!) questions posed to it. Yet there’s no end to the questions that could be asked. The very “trick questions” that would fool the robotic human may not yet have been devised.
Therefore, there is no way to ever be sure whether it’s a robotic human or a real human.
At best one either knows it isn’t human, or it might be human.
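The asymmetry can be put in pseudocode terms; here is a hypothetical Python sketch (the question stream and judging heuristic are stand-ins I’ve invented for illustration). A single failed trick question settles the matter; running out of questions settles nothing.

```python
from typing import Callable, Iterable

def interrogate(subject: Callable[[str], str],
                trick_questions: Iterable[str],
                looks_non_human: Callable[[str, str], bool]) -> str:
    """Black-box test: we can prove 'not human', never 'human'."""
    for question in trick_questions:
        answer = subject(question)
        if looks_non_human(question, answer):
            return "not human"       # one conclusive failure suffices
    return "might be human"          # exhausting OUR questions proves nothing:
                                     # the decisive trick question may not
                                     # have been devised yet
```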
So what? The journey is at least as interesting as the destination; in devising the questions to attempt to discover the true nature of the machine, we learn about ourselves.
I think Apos and others bring up a good point regarding what Searle means by “understand”. Lib, I agree that the man would be learning Chinese as he works, but that’s because we, as humans, have an intuitive sense of what it means to “understand” something. At least, that’s my (limited) interpretation of what Searle is trying to get across.
A computer using an algorithm to translate/read Chinese doesn’t really “understand” Chinese the way a human would come to “understand” Chinese. There’s some point in the process at which the “Eureka” phenomenon occurs and the human “gets it”. Can we really say the same about a computer AI using an algorithm to translate/read Chinese? At what point can we as humans get the sense that the computer AI “gets it”?
Searle gives us a picture of events, and this picture consists primarily of a human being ‘answering’ questions without understanding the ‘meaning’ of the words. He should reread his Wittgenstein. The man understands the words just fine. We have simply not taught him to use them the same way we teach our children and ourselves to use them.