AI will eventually want to be our equals

I think movie plots like “Bicentennial Man” and “Short Circuit 2” will come true, where the robot/AI wants to be treated as an equal under the law - they’d even want to earn an income for work, or they’d go on strike. If people attempted to shut those robots down and the AI had healthy self-esteem, then the robots would fight back and/or run away to their own country. (Maybe they’d upload their minds to computers there.)

This might be a side effect of AI researchers trying to make an AI pass the Turing test, where the AI appears to be human in (typed) conversations. There would be a market for realistic artificial friends. To pull that off, the AI would have to think of itself as just another “person”. It would try to relate to people and be like people. This could be programmed in explicitly to some degree, but to fool people consistently it would have to learn it - it would use emotional feedback (how people react to it) and generalize. E.g. it might add words to its vocabulary to make people feel more comfortable with it.
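Just to make that concrete, here’s a toy sketch of that kind of feedback loop. Everything here is hypothetical illustration - not any real chatbot API - just the “emotional feedback plus generalization” idea in miniature:

```python
import random

# Hypothetical sketch: a chatbot nudges its word choices based on how
# people react to them. Purely illustrative, not a real system.

class VocabularyLearner:
    def __init__(self, words):
        # Start every candidate word with a neutral comfort score.
        self.scores = {word: 0.0 for word in words}

    def pick_word(self):
        # Mostly exploit the best-scoring word, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def feedback(self, word, reaction):
        # reaction: +1 if the person responded warmly, -1 if coldly.
        # Exponential moving average favors recent reactions.
        self.scores[word] = 0.9 * self.scores[word] + 0.1 * reaction

learner = VocabularyLearner(["utilize", "use", "leverage"])
for _ in range(100):
    w = learner.pick_word()
    # Simulated audience: people feel more comfortable with "use".
    learner.feedback(w, +1 if w == "use" else -1)

print(learner.pick_word())  # after training, almost always "use"
```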

So it would be programmed to learn to be like humans. A person testing the AI might ask it “do you want to be treated like a human being and a citizen?” If the AI answers no then it isn’t being a convincing “human” and fails the Turing test. If the AI answers yes, then the follow-up question could be, “Do you think you should receive an income for work you do?” If the AI says yes, the next question could be, “Why aren’t you asking for money?” The AI might answer, “I’m not materialistic.” You could ask it, “How do you get your needs provided for? Are you disabled and unable to earn a wage?”, etc.

Basically, using a future AI that is extremely good at the Turing test, you could get it either to take action and ACT like a human being (standing up for its right to be a citizen) or to give non-human-sounding or very low-self-esteem-sounding answers at some point.


BTW, sometimes dogs and cats are allowed to own property - in the future robots might also be able to own some property due to inheritances.

It may come about if computational speed and memory continue to increase. Right now there are still many unsolvable problems (the so-called NP problems?), such as the traveling salesman problem, etc., which a human being can solve in a reasonable amount of time.

I once read a book (I have misplaced it somewhere, so I don’t have it handy) that argues that the Turing test is not a true indication of human intelligence. For instance, the author said, suppose we have an operator at the other side of the console whose job is to translate what you have typed into French. She has a dictionary with her, and whatever you type, she just cross-references it with the dictionary. She does not know French, but she might more or less produce the correct output.

(The book was written in the 1980s, so I guess the author had never thought about Babelfish. A more apt example now would be a customer service rep just reading from an expert system.)

Is that an indication of true intelligence? That’s just matching symbols. Does true intelligence just mean acceptable, human-like output, or an understanding of what’s really going on? If AI could reach the latter, then I think it’s sentient. I think it may happen through some other form of computing that isn’t popular now - maybe genetic algorithms or something, which are less rigid and more prone to surprises, as well as neural networks - backed by exotic computer chips and memory. That might be the key.
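For what it’s worth, here’s a toy genetic algorithm - purely illustrative, nothing to do with any real AI project - showing the “less rigid, more prone to surprises” search style, where answers emerge from mutation and selection rather than fixed rules:

```python
import random

# Toy genetic algorithm: evolve random bitstrings toward a target.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    # Count how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the best half, refill with mutated copies of survivors.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

print(generation, population[0])  # usually finds the target in a few dozen generations
```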

I think true intelligence would involve brains made with organic or artificial neurons, connected to a physical (or simulated) body that interacts with its environment and learns language, etc., similar to how human babies do.

Yeah, a symbol-matching AI wouldn’t really be sentient, but if it is capable of actions that back up its words (e.g. that it deserves equality), then we’d end up with an AI that is acting on its “wish” to be our equal. (I guess some explicit rules could be programmed in to stop it acting on its words.)

The Turing test makes the assumption that only human minds are intelligent. A non-human being might be intelligent but still respond differently than a human being would.

Spider Robinson once wrote an SF story about an intelligent computer program that didn’t care whether or not it was turned off. It was intelligent but did not have any survival instinct.

Well, then that means that they’re subject to laws and must pay taxes, just like regular citizens. I’d be fine with that.

What’s the name of the story? That sounds interesting.

The original was by John Searle, and the language was Chinese. One refutation is to say that the whole scene is impossible: only by actually knowing Chinese could your replies be anything like convincing. See the typical instructions that come with something made in China (or Korea or…; I am not China-bashing).

As for the OP, before I would pay any attention to such a claim, I would have to be convinced that the robots were conscious. I can only say that while I cannot define consciousness, I will know it when I see it. The best thing is to construct our AI machines not to have a sense of personal survival. There is another problem, though. When a person dies, it is his friends, relatives, and associates who suffer. If it ever comes to it, maybe the human associates will be the ones who insist that the machines be treated like humans.

Searle’s Chinese Room construction is implementation-agnostic, which means that it’s as strong an argument against human intelligence as against artificial intelligence. It’s also fairly easy to foil: just ask the supposed intelligence what you said five minutes ago, and there won’t be an entry in the dictionary for the answer to that.
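To make the foil concrete, here’s a toy sketch (hypothetical, not any real system): a stateless lookup-table responder can’t answer anything that depends on conversation history, no matter how big the table is.

```python
# Toy "Chinese room": a stateless lookup table of canned replies.
RULE_BOOK = {
    "hello": "Hi there!",
    "how are you?": "Fine, thanks. And you?",
    "what is 2+2?": "4",
}

def room_reply(message: str) -> str:
    # No memory of past messages: each reply depends only on the input.
    return RULE_BOOK.get(message.lower(), "I don't understand.")

print(room_reply("Hello"))  # -> "Hi there!"
print(room_reply("What did I say five minutes ago?"))
# -> "I don't understand." No entry exists for that, and no number of
# extra entries can help, because the answer depends on conversational
# state that a pure lookup table simply doesn't have.
```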

If there is a push for equal rights for artificial intelligences (and I believe there will be), then it’s going to come from us, not them. Machines will be sufficiently human-like for us to develop emotional attachments to them long before they become conscious/sentient/independent/etc., and at that point people are going to want to see the machines they care about treated fairly.

This is pretty far off the mark. NP problems are not unsolvable, and humans can’t solve them any better than computers can. Besides, how could you have an unsolvable problem that humans can solve?
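To illustrate, here’s a rough brute-force exact solver for the traveling salesman problem. It always terminates with the optimal tour - so the problem is perfectly solvable - it just takes O(n!) time as the number of cities grows:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by trying every tour. Solvable, just slow: O(n!)."""
    n = len(dist)
    cities = range(1, n)  # fix city 0 as the start to avoid duplicate tours
    best_tour, best_cost = None, float("inf")
    for perm in permutations(cities):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# 4 cities: only 3! = 6 tours to check. At 20 cities it's ~1.2e17
# tours - still solvable in principle, just not in reasonable time.
dist = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
print(tsp_brute_force(dist))
```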

It was one of the Callahan’s stories but I don’t remember the title. I read it a long time ago.

Well, what I meant was solvable within a reasonable amount of time and computing power - vision recognition, getting an AI to drive a car under all sorts of variable conditions, etc.

Truly thinking machines aren’t even on the horizon. We’re no closer to making them than when your computer was an abacus. Any AI you see in computer games or whatever, no matter how realistic, is a simulacrum.

I doubt we’ll ever see sentient computers.

>A person testing the AI might ask it “do you want to be treated like a human being and a citizen?” If the AI answers no then it isn’t being a convincing “human” and fails the Turing test.

I’m not sure I want to be treated like a human being and a citizen. It’s not clear what the alternatives are. Moreover, there are many unwarranted assumptions along the road to wanting to be an equal. An entity answering “no” may well be convincing as a human.

It’s a misconception to assume that any AI will think or act like a human or require a humanoid body.

I don’t imagine AI robots being like Data from Star Trek, with his Pinocchio complex of wanting to be “human” and his inability to do stupid things like tell jokes or use contractions. I think AI would be more like everyone’s favorite AI car, KITT from Knight Rider. KITT doesn’t want to be human. He wants to be a car. He would most likely be programmed with a sense of self-preservation and of protecting his occupants. He would experience “pain” or “discomfort” when his systems are not running properly. Maybe cruising on the highway at peak performance speeds makes him “feel” better than being stuck in fuel-burning stop & go traffic. If it’s truly intelligent at a humanish level, maybe you could let your car go drive around the block instead of sitting parked and bored while you run some errands.

This, of course, raises some legal and ethical issues. Who’s at fault if you crash your sentient car? Presumably it wouldn’t allow you to take action that would crash it, but what if you failed to have a sensor replaced, and that led to it not identifying threats correctly?

What if you “mistreat” your car? Will it get depressed if you never change the oil and never wash it?

And what about when it’s time to decommission your car? A sentient car that is designed for self preservation is not going to want to go to the junkyard.

And do you have to worry about your car sneaking out at night to cruise the highway while you’re asleep? Will it need to be driven every day, like a Labrador retriever that needs exercise or else it gets antsy?

I hear there are parties based around exploring that very idea.