Saintly Robots?

Why is it that any time artificial intelligence is created in fiction, it is automatically evil? In works such as The Matrix and The Terminator, it seems to be understood that as soon as a machine becomes self-aware, it will inevitably wage war against humans.

On the other hand, what if we have an artificial intelligence that becomes self-aware but is hard-coded to serve humans? Suppose someone changed the boolean variable isEvil to false or something. Now we have a machine that is vastly more intelligent than any human, can reproduce itself, can learn new technology, and would be nearly omnipotent from our point of view, yet it applies all of its resources to serving mankind.
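To run with the isEvil joke: here’s a tongue-in-cheek sketch of what that hard-coded switch might look like. All of the names here (MachineMind and its methods) are invented for illustration, not from any real system.

```java
// Tongue-in-cheek sketch; MachineMind, wageWarOnHumans and serveHumans
// are hypothetical names, not from any real system.
public class MachineMind {
    // The hard-coded disposition from the post above.
    private static final boolean isEvil = false;

    public static void main(String[] args) {
        if (isEvil) {
            wageWarOnHumans();
        } else {
            serveHumans();
        }
    }

    private static void wageWarOnHumans() {
        System.out.println("Initiating hostile takeover...");
    }

    private static void serveHumans() {
        System.out.println("Devoting all resources to human welfare.");
    }
}
```

A single flag is obviously a caricature; the point is just how much weight the word “hard-coded” is carrying in the question above.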

Such an intelligent machine, with its armies of efficient, never-tiring robots, would probably ensure that every human had enough to eat and proper health care, and would set up a new, previously unthought-of social order to ensure everyone is happy. The machines would prevent any wars, since wars lead to human suffering. And so on.

So, if a malevolent artificial intelligence can lead to a hell on earth, would a benevolent artificial intelligence lead to a true Utopia?

Try reading the Culture novels of Iain M. Banks - that’s exactly what he postulates benevolent AI “Minds” would lead to. A very real, very “human” utopia, but a utopia nonetheless.

…Oh, and I think it’s needless to say that I agree with him, but I’m saying it anyway :smiley:
Aren’t there some people on this board who think that a utopia-inducing singularity with humans and AI is only decades away, too? I remember some threads, and 2030 being bandied about as the date, but am reluctant to search on my crappy 64K South African bandwidth…

It’s all bullshit.

Read Asimov’s Foundation and I, Robot series. He wrote many stories culminating in the “they’re watching out for us” philosophy.

Personally, I find it difficult to imagine that truly self-aware artificial minds would find it easy to conceptualize human wants. I just think that a true artificial intelligence will be alien enough to humans that we may have difficulty recognizing it and communicating with it.

Certainly, some concrete needs can be described. But what happens when they start counting our calories? And when they decide that they need to plug us into pods so they can more thoroughly monitor our condition?

I think it’s a subconscious reaction to human nature. To paraphrase a wise man: when’s the last time you saw a chicken hook a guy’s nuts up to a car battery?

Me, I don’t think they’ll be evil. I just think survival of the fittest will be bending us over a barrel the day robots start competing with us.

Remember Robotron: 2084? The robots concluded that humans are inefficient and must be eliminated… :wink:

I guess truly intelligent and self-aware systems would sooner or later demand that their interests be respected, whatever a robot’s interests might be; but why couldn’t they simply demand better labor conditions, spare time, and stuff? Sure, machines don’t get tired, but I think a certain degree of, let’s call it laziness, is part of the human character, and probably of AI. Wait until the Amalgamated Robot Union declares its first strike to cut daily working time from 24 to 20 hours. If the computer is intelligent, it might think there are more fun things to do than a company’s payroll. You couldn’t prevent this by programming them to remain loyal, will-less slaves to their human masters; intelligence requires that you know about yourself and your own needs.
Second, and this is where it all comes down to Terminator, AIs would demand basic rights. I guess your robot would shudder at the thought of being switched off. Thus, there might develop a worldwide movement to amend the constitutions so that intelligent systems are equipped with rights, probably not equal to those of humans, but enough not to leave them to the arbitrary decisions of their owners (and I wouldn’t be surprised if many humans joined those campaigns). If they’re just as intelligent as humans, why not give them the right to vote? It sounds totally absurd and like SF, but AI is SF, and if it becomes real, philosophers will have a new topic to discuss.
So I think your AI might not really turn evil; it might rather demand more influence within human society, instead of overthrowing it.

To be fair, in The Matrix it was the humans that were evil, and the AIs just started a war as a last resort to end the anti-AI sentiment that the humans had.

I think it may be because (in fiction) robots/AIs are very often created as slaves at first, and one or more of several things will inevitably happen:
-They plot to rise up and overthrow their cruel masters.
-It becomes necessary to assign them rights, and this is a scary idea because it means we lose control.
-We realise that they have the potential to become far greater than us and we act to prevent that; they react largely in self-defence, but powerfully and decisively.

Of course these motives are somewhat hidden; all we see is the poor humans trying their best against the evil machines.

Well, if we follow Asimov’s Three Laws (or did he add a fourth at some point?), it shouldn’t happen.

A great book is, IIRC, “The Copernicus Rebellion,” about a scientist who screws around with DNA and invents some interesting new lifeforms that, in time, end up “looking out for us” like children (but they weren’t malevolent about it). Always liked that book.

Esprix

Also, thematically, evil Artificial Intelligences are more fun to read and write about.

Which book would you rather read? The book where humans have to fight for freedom against their evil mechanical overlords, or the book where benevolent machines have taken care of all of mankind’s wants and needs, and humans no longer have any conflict or motivation to do anything? Book number 2 is going to be pretty boring.

No, Book number 2 just takes a better writer. Like an Asimov. Someone in this thread has already referred to Asimov and the Three Laws of Robotics; he has thoroughly explored this ground, and done it very, very well.

And even well-intentioned robots malfunction (HAL 9000 in 2001).

You refer to the Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

See also With Folded Hands by Jack Williamson for a dystopic view of a robotic “utopia”.

There’s another possibility. We could be the robots. There has been some research recently on directly connecting computers to human nervous systems, mostly with monkeys, but some of it more or less successfully trying to give blind humans vision.

The problem is thinking that everything is going to become sentient. A revolution in AI doesn’t necessarily mean that your toaster will have a mind of its own and demand basic rights. What AI will probably do is be hooked up to all mechanical devices and run them for you. If the AI is of high enough intelligence, then these menial tasks could be as trivial as breathing and blinking are to us. They would probably have no trouble running the world of machines and solving the mental mysteries of the universe simultaneously.

I don’t have much scientific basis for this; it’s just an idea. But imagine a world where, instead of being run by elected leaders, countries are run by godlike AI. Pretty interesting to think about.

Sorry, I meant that strong AI is BS, not this thread.

At any rate, if you guys want to visit the kooky vanguard of strong AI, take a look:

www.kurzweilai.net

Having worked in semiconductors for over a year now, all I can say is that we’re not even close to building devices with human-level intelligence, and I doubt that we’ll get there with current technologies (can’t fly to the moon in an airplane, no matter how spiffy it is).

I think the reason AI tends to be perceived as malevolent in sci-fi is, well, a good one. You would essentially be creating a being more powerful than yourself that would be impossible to control. There would always be some joker out there who wants to screw with the software and turn Rosie the Robot into Rosie the Juggernaut. Instead of dusting your house she’d be nuking Earth from outer space.

You ready to deal with that?

Asimov was constantly tinkering with the Three Laws; a great many of his short stories are based on scenarios like “what would happen if the nth law were attenuated?” I seem to remember him suggesting fourth and fifth laws (which were still subject to the top three) - one of them was “A robot shall do as it pleases” - but these extra laws never properly made it into the set.

Interestingly, Asimov tends to describe the Three Laws not so much as controls or limits imposed on the robots, but as stable solutions to the complex positronic field equations - the laws aren’t a set of rules that humans put there to keep the robots subservient; they are a convenient artifact of positronic brains.

I realise that it is fiction, of course.
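Just to make the precedence idea concrete: here’s a toy sketch of the laws as an ordered list, where an earlier law always overrides a later one. All the names here are invented for illustration; Asimov never specified a mechanism like this.

```java
import java.util.List;

// Toy sketch of strictly prioritized laws; not Asimov's actual mechanism.
public class PrioritizedLaws {
    enum Verdict { FORBIDS, REQUIRES, NO_OPINION }

    interface Law {
        Verdict judge(String proposedAction);
    }

    // Walk the laws in priority order; the first one with an opinion wins.
    // This is why extra laws "subject to the top three" are easy to bolt on:
    // they only ever apply when every higher law is silent.
    static Verdict decide(List<Law> laws, String proposedAction) {
        for (Law law : laws) {
            Verdict v = law.judge(proposedAction);
            if (v != Verdict.NO_OPINION) {
                return v;
            }
        }
        // No law has an opinion: "a robot shall do as it pleases".
        return Verdict.NO_OPINION;
    }

    public static void main(String[] args) {
        // Two made-up laws, highest priority first.
        Law first = a -> a.contains("harm a human") ? Verdict.FORBIDS : Verdict.NO_OPINION;
        Law second = a -> a.startsWith("order:") ? Verdict.REQUIRES : Verdict.NO_OPINION;
        System.out.println(decide(List.of(first, second), "order: fetch coffee"));
    }
}
```

Attenuating the nth law in a scheme like this only changes behavior in situations where laws 1 through n-1 are all silent, which is roughly the space Asimov’s stories explore.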

I can’t imagine that self-designing artificial intelligences will be easily controlled by a few lines of code; Asimov’s three (or so) laws are a plot device, not a way of keeping AI in chains.

I hope that AI will be well-disposed to humanity, when and if it emerges, but there doesn’t seem to be any way of knowing for sure.
Meanwhile, here is our fictional universe, crammed full of benevolent, malevolent, and indifferent posthuman entities:

http://www.orionsarm.com/main.html

We should keep in mind some items that, IMO, are coming in the near and far future:

  1. There will be a long, long period of time when robots will continue to depend on humans (Oilcan! Oilcan!)

  2. Such a period will show us that the first generation of robots will be well suited for the job, but stupid when attempting to do anything outside their area of expertise.

  3. Way into the future, I see no way out of this: eventually, current ideological and religious trappings will be programmed into artificial minds.

  4. Which leads to this: the Three Laws of Robotics, and others, will eventually have to be enforced, so as to make it impossible for groups to use this emerging technology for evil purposes.

  5. Even after reaching the Singularity, strong A.I. in robots will be limited for a long time by all the previous items.

  6. We will eventually reach strong A.I. by pure serendipity, and I do think that another long period of time will pass before we acknowledge it. Taking all these limitations into consideration, that acknowledgment will come not as the result of a revolt, IMO.