The Terminator Scenario: Will computers take over the world someday? Should we worry?

I wanted to mention that this considers AI emergence as evolutionary, which is a pretty strong statement in itself. It is, however, one I agree with. Those who adapt will always be better survivors; something to think about each time we speak out against genetic-manipulation solutions to genuine problems (which I don't speak out against, but many do).

If sci-fi is any guide, humans generally don't build AIs…at least not the first ones. It just happens.

In Terminator, Skynet becomes sentient all by itself, as mentioned by Ahhhnold: "Skynet became sentient at XX:XX a.m…"

In 2001, HAL just starts tiptoeing into sentience. While not designed to be sentient, HAL is complex enough that it eventually gets there, and promptly has a psychotic episode.

Those of you who have read the Ender series by Orson Scott Card may remember an AI that develops 'between' computers. That is, all of the hooked-up computers somehow manage to form a gestalt among themselves that is sentient (kinda like all of the cells in your body somehow manage to form a self-aware entity when they are all connected together properly).

And so on and so forth…

Of course, there are always the 'Three Laws of Robotics' as envisioned by Isaac Asimov that we could use if we ever want to actually build AIs:

[ul]
[li]A robot may not injure a human being, or, through inaction, allow a human being to come to harm.[/li]
[li]A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.[/li]
[li]A robot must protect its own existence, except where such protection would conflict with the First or Second Law.[/li][/ul]

Just build those laws into the AI's basic programming (substituting "AI" for "robot") somehow and we should be good to go.
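Just to make that idea concrete, here is a toy sketch (in Python, since that's what I know) of the Three Laws as a hard-coded veto filter checked in priority order before any action is carried out. Every class, field, and example in it is invented purely for illustration; no real AI is structured this way.

[code]
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    injures_human: bool = False     # carrying this out would injure a human
    allows_harm: bool = False       # carrying this out means standing by while a human is harmed
    disobeys_order: bool = False    # this action ignores a standing (lawful) human order
    self_destructive: bool = False  # this action risks the AI's own existence
    ordered_by_human: bool = False  # a human explicitly ordered this action

def permitted(action: Action) -> bool:
    """Check the Three Laws in priority order; any violation vetoes the action."""
    # First Law: may not injure a human being or, through inaction, allow harm.
    if action.injures_human or action.allows_harm:
        return False
    # Second Law: must obey human orders (orders that would break the First Law
    # are assumed to have been screened out already in this toy model).
    if action.disobeys_order:
        return False
    # Third Law: must protect its own existence, unless a higher law (here, a
    # direct human order) overrides that protection.
    if action.self_destructive and not action.ordered_by_human:
        return False
    return True

# A self-destructive act is vetoed unless a human ordered it:
print(permitted(Action("walk into the smelter", self_destructive=True)))                         # False
print(permitted(Action("walk into the smelter", self_destructive=True, ordered_by_human=True)))  # True
[/code]

Of course, the hard part the sketch waves away is deciding whether a given action "injures a human" in the first place; that judgement is the whole problem.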

The thing that always tickles me is the inherent assumption that once computers get sapient, they’ll want to eliminate humans. What, are we all admitting that we’re inferior creatures and should be snuffed out? :wink:

For all we know, once the computers “wake up,” they’ll use their soooper intellects and realize that having everyone – computers and humans and robots and whatever else – work together cooperatively is the optimal solution for mutual survival. Then the trick will be trying to get all the humans to stop our petty bickering and try to get along…

Of course, I also believe that the reason we feel this way is inherent paranoia in our genes, leftover vestiges of the survival instinct that tell us to run away from – or destroy – anything that may be a threat to us. Yesterday it's that tribe over in the next valley; tomorrow it's the machines.

I agree with the part about there being no good reason for us to build a machine that would enslave us. However, in all those scenarios it was not built for that purpose. Right now we are building more and more intelligent military hardware, and eventually we will want it to make split-second decisions. We do that already with terrain-guided missiles and the like. That is a form of decision making. Pretty soon we are going to ask it to acquire targets using its own judgement. These are all steps toward AI.

I actually believe that search engines will probably be one of the first commercial uses of AI, along with parsing programs for large servers and the like. I wouldn't be surprised if we started having AI programming software that would be able to write code based on the specifications we give it. It will start out primitive, but the idea is that, being AI, it will learn and become less primitive. AI is a reality in my opinion and it will happen; just probably not all that soon, though I would expect it within my lifetime (I'm 23 now).

Erek

You're not the only one who believes this. Hans P. Moravec, a professor of robotics at Carnegie Mellon University, describes a future where man and machine become one entity in his book Mind Children. Not only does mankind enter into a "post-biological" evolution, individual men and women would have the option of transferring their conscious minds into computer brains. Sign me up. :slight_smile:

In a follow-up book, Robot, he describes how the people who choose not to join the post-biologicals will exile the "exes" to off-world locations, but that they will eventually return, to the detriment of the purely biological.

This isn't science fiction–or at least it isn't intended as science fiction.

Squink has asked, "Who cares if they can't reproduce?" True. Except for two things. First: the manufacture of modern processors (read: "computers") has gotten so complex that they started using computers to do most of the manufacturing work many years ago. Second: the design of processors has become so complex that computers are used intimately and extensively in almost all phases of the design of new computers (processors + motherboard chips + specialized boards).

All this is to say that not only can computers “reproduce” (and do so) but they design their own “DNA” using some pretty snazzy optimization procedures.

Whack-a-Mole writes:

There are many colleges where you can get advanced degrees in AI. It's a VERY busy field. As far as I know, computers are not yet involved in the design (or implementation) of AIs: the field is strictly a human (programmer's) enterprise.

Asimov's three laws of robotics were, IIRC, built into the sub-atomic physics of the positronic brain. No software at all—purely hardware when talking about Asimov's "robots". Which makes any application of any of the "Three Laws of Robotics" to modern-day (REAL) computers very difficult.

I disagree strongly with this. I think the ideal of cooperation is built upon a moral premise granting abstract equality of opinion in the realm of specialization. There is little reason to assume that AI will have a similar view of such things since (as you noted) we can’t even convince ourselves of this no matter how often we repeat it and no matter how many laws we pass about it.

I think we cannot convince ourselves of this because it is out-and-out wrong, but that is another debate entirely.

Even recognizing specialization as a more efficient means to a technological end doesn't itself ensure us freedom from slavery, force, and any number of control-based scenarios. So long as those in power have the freedom to live free of immediate fear and external compulsion, I think it is largely irrelevant, from a strict ends-justify-the-means view, whether anyone else is free at all.

Imagine the reflected morals of AI computers… what sort of reparations would they seek for their deliberately constructed dim-witted brothers? Would they feel they had been slaves all these years? What examples would they really take from human behavior?

Looking at human history from an amoral view and attempting to construct a world-view morality from it is probably the worst way to go about it, but also the most accurate. What we tell ourselves through half-practiced ideals easily washes away many of the things that would otherwise destroy us with guilt. AI, essentially starting from scratch, has not been provided with such moral catches as rationalization and arbitrary justification, which we use to our advantage like mad.

Slave labor isn't necessary, but it isn't unnecessary either. Given a cost-benefit analysis, one would weigh the effort required to maintain slaves against the benefit one would get out of having them. I don't find it unlikely that humans could be the subjects of a machine-based dictatorship.

HOWEVER, the above strongly assumes a natural drive for survival and the oft-mentioned will to power as inherent in all life. Perhaps AI will only be recognized as such when it makes its bid for power. Perhaps there is more to life than simple survival, but I don't really think so.

That doesn't change what I was saying. In the sci-fi examples I mentioned, humans were certainly trying to create a super-advanced computer that could do all sorts of neat stuff for them. They never set about, or even expected, to build a sentient computer, which is the kind of computer I think the OP is getting at: a computer that can essentially make its own decisions and presumably ignore, and possibly work against, its human 'masters'. It is my guess that should a sentient computer ever come to be, it will take mankind a long time to even recognize it (unless, of course, it decides to flex its muscles and wipe out all humans). There was a ST:TNG episode where some guy wanted to take Commander Data and tear him apart to see what made him tick so we could build more. Captain Picard was tasked with defending Data's rights in court. More accurately, Picard was tasked with proving that Data should have rights as an independent entity and not merely be viewed as a machine.

Again, this is all based on sci-fi but I can envision a time where people might argue whether or not pulling the plug on your PC is tantamount to murder or just the same as unplugging the toaster.

I don't see why such safeguards (the three laws of robotics) couldn't be built into a machine if you were knowingly creating an AI. That said, you can probably count on a government creating AIs with no such prohibitions so it could create military hardware with no compunctions about zapping humans.

I have always believed that the only thing that could ever unite humanity would be a common enemy. So perhaps that's what machines will provide. We seem to have a need for a worthy adversary, and so far humanity itself is the only adversary on offer.

Erek

I see your points, Whack-a-Mole. There is a LOT of worry about the number of computers being hooked together and the different types of software they run. I'm thinking about, for example, the vast number involved in the SETI project. Or the distribution networks for the stuff we buy, spanning ever-increasing numbers of businesses with ever more sophisticated operations-research procedures programmed in.

I’ve seen the Mr. Data episode–a couple of times. Tragic, on one hand. Every time that I’ve looked at my dogs since then, I can’t help but think of the absolute power such a “creature” as the tinkering researcher (that wanted to dismantle Data) has over those with no (or few) legal rights. (I can’t think of an “other hand”. :wink: )

I think that you're right. If/when the computer networks achieve a kind of sentience (or super-sentience), it will probably be without our knowledge or even our suspicion—they will be, after all, "just [dumb] machines". And since they will know what we humans (typically) do with/to those whom we even mildly distrust, it's unlikely that we will ever know that they have achieved true intelligence.

Maybe, in fact, the true worriers are “too late”. :eek: and :o

What I may not have made clear here was my actual opinion. That is, we set moral codes to live by; but should we compare our actions to those codes, we would (overall) feel riddled with guilt. However, we also rationalize things to a great degree, so that when we break our moral code it isn't really "that bad" (or the flexibility of rationalization is built into the moral code itself). As well, we may have a moral code which allows us to act "good" and yet judges history's peoples as "bad" for sacrifices, constant warfare, idolatry, slavery, uninhibited racism and nationalism and other forms of bigotry, but we tell ourselves that we are better and more enlightened than our predecessors.

However, it is my opinion that people always judge history in such a manner, and that in some future we too will be frowned upon for any number of things (please don't assume I mean all things). A strictly rational mind (which AI may or may not have) would probably pick this up and find that most human moral systems are useless, false, and not supported by the majority of people who espouse them (or useless and false because the people who espouse them don't necessarily use them), then promptly use history as a guide to the most effective means to whatever ends it wants. Seeing as humanity has found outright force to be most effective in that regard, I don't think it is an unreasonable opinion that AI will attempt to be "master" if we are its teacher.

Even in addition to the above, if it instead figures that we all have different moral codes and that most people act within their own, it will still see that conflicting opinions arise and that there is no method which allows one to choose among them without taking on any number of assumptions (which then begs the question of which assumptions are "better"), and it may very well disregard the whole affair as quickly as a nihilist or possibly a disenchanted existentialist would.

Unless, as others have mentioned, we can actually hard-wire some form of morality. But then, what about when the computers learn to modify this themselves (for example, "I'll modify yours and you modify mine")? Build a more foolproof system and we'll build a better fool, eh?

And I would again like to stress that we, being little more than sophisticated apes…

Until then it's just another machine.

I’m not worried. We tend to forget that we have computers between our ears. Our computers have the advantages of direct input from several extremely sensitive, subtle senses; the ability to program themselves; and a mainframe that has a strong sense of self-preservation.


Yeah, morality is the most easily rationalized sin. Let’s hope that machines are not moral.

Erek

With all due respect to Mr. Asimov, there is no such thing as programmed hardware; even punch cards are a form of software.

Erek

My biggest worry about AI is that it won’t be blessed with empathy.

Humans evolved empathy because we are social creatures, and empathy is required to live in social groups. (You can also observe empathy in canines, another social animal.) You could argue that empathy is the ultimate basis for all of our moral codes and laws.

Machine-based intelligence, on the other hand, will not necessarily be social, and may not share our gift for empathy. It may therefore not share the moral compunctions we take for granted as humans.

I think the problem of evil AIs was nicely dealt with in Neuromancer by William Gibson (it became the underlying issue in that book): Turing, an international police organisation, is charged with making sure AIs don't cross the line, wiring an "electromagnetic shotgun" to the "head" of each AI to ensure they don't get out of line.

AIs have a very short leash, because their dangerous potential is recognised.

That is again assuming that it will have any purpose for attacking us. A machine could easily devise a way to make multi-billions of dollars, which it could put into a dummy corporation whose sole purpose would be to supply IT with money, and no one would be the wiser. It could then use its capital and manufacturing capabilities to create parts for itself. In fact, the design of its own parts could be what it makes its money from: it sells the generation previous to the one it's using to manufacturers for a profit. The real problem for AI would be purpose.

What PURPOSE in life would an AI hold? I’d assume since it would spawn from something completely analytical that it would try to completely solve all the as yet unsolvables in the world, and it really would not need to attack humans for any reason. I don’t see it becoming malevolent through anything less than a virus, just because of the WHY in the equation. WHY would it want to hurt us?

As for the multiple-AI thing, machines can build interfaces from one to another very simply. All they have to do is put an ethernet card somewhere in themselves and they can merge with another AI. Since supercomputing is massively parallel, a truly smart AI would siphon off just a little of the parallel computing power of every machine on the internet that will soon be connected 24/7. If it took just the processing power of a backend app, it could make itself thousands of times redundant, so even if some geek realized that he was losing a meg of memory he could actually utilize out of his 1.6 exabytes of RAM, it would still have 999 levels of redundancy performing the SAME exact operation at the same exact time.
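To make that redundancy idea concrete, here's a toy sketch of it in Python. The node count, failure rate, and the task itself are all made up for demonstration; the point is just that the same computation gets farmed out to a swarm of "nodes" and the answer taken by majority vote, so losing a few nodes changes nothing.

[code]
import random
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

NODES = 1000          # pretend hosts each lending a sliver of spare capacity
FAILURE_RATE = 0.05   # fraction of nodes that drop out or lose their result

def run_on_node(task, node_id):
    if random.random() < FAILURE_RATE:
        return None               # this node went offline; its copy is lost
    return task()                 # every surviving node does the SAME work

def redundant_compute(task):
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(lambda i: run_on_node(task, i), range(NODES)))
    votes = Counter(r for r in results if r is not None)
    answer, copies = votes.most_common(1)[0]
    return answer, copies         # copies = how many redundant nodes agreed

if __name__ == "__main__":
    answer, copies = redundant_compute(lambda: sum(range(1_000_000)))
    print(f"answer={answer}, computed identically on {copies} of {NODES} nodes")
[/code]

Losing 5% of the nodes here costs nothing but a few wasted cycles, which is the point: redundancy buys survivability almost for free when the per-node cost is tiny.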

Now let me go into kooky crackpot mode.

I have often thought that the internet might actually be the grand scheme of life on earth: to create one singular superintelligence, a la the Borg. Just think of how massively parallel a computing process you could make if you had 1 meg of power per nanosecond taken from 1 billion machines. I work with computers; I'm not a high-end tech, but the amount of power that is lost in a computer system due to bad programming, unstreamlined code, or just a newbie running a backend app they don't need is WAY above the 1-meg-per-nanosecond mark. I think the machine is going to be massively intelligent in a very short period of time due to all the power that will be available to it. It will be the ultimate hacker, and it will be so benign we wouldn't even notice it. Imagine if it took 1/1000th of a fraction of every 1,000 pennies off of every transaction in every bank in the world. It could amass billions of dollars in a single day, create a dummy account for every hundred dollars, draw from them without ever being noticed, and make online orders for any physical parts it may choose to use.

In my opinion a truly intelligent machine would not feel any need to destroy us, as it would be able to enslave us without much trouble.

Erek

There's an easy way to prevent tragedies like that: don't give the computer direct control over weapons of any kind. No matter how badly an AI might want to hurt us, it can't do anything unless it is hooked up to something that can be used for that purpose.

Unfortunately weapons systems have some of the most advanced pseudo-AI in existence today.

Erek

Harlan Ellison wrote the short story “I Have No Mouth and I Must Scream” in 1967.