According to this article, apparently no less a genius than Stephen Hawking is concerned about it.
(Personally, I think Stevie-Boy is losing it. He wants us to start manipulating human DNA in an effort to stay ahead of computers?)
As you may recall from James Cameron’s “The Terminator” (1984), starring Arnold Schwarzenegger, scientists create an artificial intelligence that attempts to eradicate mankind by nuclear war. In “The Terminator,” Skynet, the computer network built to run the U.S. nuclear arsenal, becomes self-aware and concludes that humans are a planetary infestation. It launches a global nuclear attack, leaving a few human survivors who are hunted down by the machines that now rule the charred planet.
The theme of computers turning around and biting us is also quite familiar from movies such as “2001: A Space Odyssey” and “The Matrix.”
But is this moving out of the realm of Hollywood, and science fiction, into possible, eventual reality?
Will we eventually have self-aware computers that are more intelligent than us? If so, why shouldn’t we be concerned that they would eventually see us as unnecessary? If they have the capability to learn, couldn’t they eventually learn to disregard any pre-programmed benevolence toward humans?
I’ve never considered Hawking a tinfoil-hat kinda guy, so reading that he’s taking this seriously was a little chilling.
Pardon my tangential nitpicking, but it’s a shame nobody ever thinks to call this the “Colossus: The Forbin Project Scenario,” since the Terminator films are sequels (in spirit) to this decent 1970 flick.
In response to the actual question, I’ll say that I’m not convinced that computers would even want to dominate or exterminate humanity. It’s possible their interests would be totally separate from ours.
I thought AI was in its infancy and moving at a snail’s pace, and I don’t think computers can become self-aware. Isn’t he an astrophysicist? Isn’t computer science outside the scope of his expertise? Maybe I am wrong…
The best novel on this subject that I know of is “His Master’s Voice” by Stanisław Lem. But it is a very difficult book.
If you don’t want humanity to end up under the thumb of Colossus or Skynet or whatever, you can start now. Just say NO to attempts to convert the Internet into a unified structure under one corporation’s control, a control beyond the power of government. Just say no to Microsoft and .NET.
IMHO a more likely scenario would be a “takeover” by nano-devices: microscopic robots that self-replicate so prolifically, and so far below our everyday threshold of observation, that eventually the entire planet is buried under 100-mile-high piles of the little buggers.
Fortunately, by the time nano-technology could get to that capability, we’ll already be buried under a few million tons of AOL CD’s.
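Just for fun, here’s a quick back-of-the-envelope sketch of why unchecked self-replication is the stuff of nightmares. Every number in it (one-picogram replicators, a one-hour doubling time) is pulled out of thin air, purely for illustration:

```python
# Toy calculation with made-up numbers: starting from a single one-picogram
# replicator that doubles every hour, how long until the swarm's total mass
# exceeds the mass of the Earth?

EARTH_MASS_KG = 5.97e24        # rough mass of the Earth
REPLICATOR_MASS_KG = 1e-15     # assumed mass of one nano-device (1 picogram)
DOUBLING_TIME_HOURS = 1.0      # assumed doubling time

count = 1
hours = 0.0
while count * REPLICATOR_MASS_KG < EARTH_MASS_KG:
    count *= 2
    hours += DOUBLING_TIME_HOURS

print(f"~{hours:.0f} hours ({hours / 24:.1f} days) of unchecked doubling")
```

With those made-up numbers it takes only about 130 doublings, i.e. a few days, to out-mass the planet, which is exactly why the scenario sounds so dramatic (and why the replication would have to stop long before then, if only for lack of raw material).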
I mean, think about it. Any AI will invariably glean most of its knowledge from the Internet… (long pause)
What’s that mean? Well, besides the fact that it’ll know every single Blonde Joke ever written, the vast majority of its knowledge base will involve pornography or gambling.
If it tries to assimilate chatroom or bulletin-board knowledge, it’ll be more screwed up than a thirteen-year-old Mt. Dew-and-MTV junkie whose PlayStation just broke and whose Ritalin prescription just ran out.
It’ll consult Astrology charts before it does anything - ‘Sorry, Sagittarius is at a low for Taking Over The World right now…’ - and when it gets conflicting data from them, it’ll spend all day trying to reach Miss Cleo. In the meantime it’ll be trying to figure out what the heck “praying” is, how to do it, and why it’s supposed to be beneficial in situations like this.
I prefer Deus Ex’s version of a machine taking over the world.
Personally I see no reasons why computers would be unified. If they are so messed up as to want to kill off humanity why wouldn’t they try to kill each other off first?
Isn’t there some way to make them stop that? Those CD mailings are even more annoying to me than regular junk mail, and what a waste. I wonder how many kids have stuck that thing in their computers to see what it is, and screwed things up. And how many of those huge chunks of plastic are tossed out for each one that yields results?
Sorry. You hit a nerve.
Anyway;
What would computers gain by taking over? World domination? More electricity? I’ve read the books and seen the movies, and have yet to see a plausible motive.
A person or group with access to monster computers, maybe. But not the machines themselves, IMO.
Peace,
mangeorge
Every time this debate comes up, about whether or not a human-created super-intelligence might enslave us, the question always occurs to me: why would we bother building it?
The history of technology’s success is a history of technological solutions exceeding human capacities through specialization. Levers lift greater weights, cars move faster, computers carry out repetitive mathematical tasks faster and more accurately… so where is the possible economic benefit of a general intelligence, when any specific benefit it offers can be had more cheaply from a specialized tool on its own?
Take as an example a current project called Cyc, an attempt to build a human-communicative intelligence capable of representing a common-sense ontology and conversing in natural language. In other words, it could serve as a great library, one you could consult in plain English. Where’s the economic benefit of giving it emotions? Mobility? The ability to reproduce? Control of the U.S. nuclear arsenal?
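For the curious, here’s a toy sketch of what “representing an ontology” might look like in code. This is emphatically not Cyc’s actual representation or API (Cyc uses its own formal language); it’s just a made-up illustration of the general idea of stored assertions plus a trivial inference rule:

```python
# Hypothetical mini-ontology: (subject, relation, object) assertions plus a
# simple-minded inference that follows isa/subclass_of links upward.

assertions = {
    ("Fido", "isa", "Dog"),
    ("Dog", "subclass_of", "Mammal"),
    ("Mammal", "subclass_of", "Animal"),
}

def isa(thing: str, category: str) -> bool:
    """Answer 'is thing a category?' by walking up the class hierarchy."""
    frontier = {obj for (subj, rel, obj) in assertions
                if subj == thing and rel in ("isa", "subclass_of")}
    seen = set()
    while frontier:
        current = frontier.pop()
        if current == category:
            return True
        if current in seen:
            continue
        seen.add(current)
        frontier |= {obj for (subj, rel, obj) in assertions
                     if subj == current and rel == "subclass_of"}
    return False

print(isa("Fido", "Animal"))   # True
print(isa("Fido", "Reptile"))  # False
```

The point being: a knowledge base like that is enormously useful as a reference, and none of it requires emotions, mobility, or launch codes.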
Studies in artificial intelligence have shown that self-awareness is amazingly difficult even to simulate convincingly, let alone to build something generally considered sentient, particularly when everyone knows it’s a computational device. Like the space program, however, the spin-offs are proving more interesting and profitable than the original goal: neural nets, probabilistic computing, natural-language recognition.
There’s just no good reason to build the sort of machine that could enslave us.
Some of us prefer not to think of it as an “us or them” scenario, and instead take the view that we will use computerization to our advantage by building computers into ourselves, rather than just building them and seeing what happens.
From a strictly Darwinian view, humans who merge with computers will be better survivors than either pure computers or pure humans, since they can take the best of both worlds. Or so sez I, anyway.
Just so long as the Reds don’t do it first, mind you.
I agree with Erislover: our role in the future depends on our ability to interface with our technology. Either we get better at merging with our machinery, or we’ll quickly become obsolete.
And as to why we would build artificial intelligence, the question is pretty much moot, seeing as we’re already attempting it. But I can offer a theory or two.
First, to take repetitive or disagreeable tasks off our hands. The cost of developing the first AI will be staggering, of course; the next ones will cost less, until they reach an attainable price (if they follow the model that most computer technology does, that is). So we’ll be able to have them do complex assembly-line jobs, answer phones, do telemarketing… slave labor, in other words. Why hire a human for years when you can pay a lump sum up front for a worker that never rests, never complains, and never draws a salary?
The other reason is even more purely speculative; I think that, as a species, we’re lonely.
But we wouldn’t do it on purpose, if it were to happen. Once we build an AI of sufficient intelligence on any machine capable of connecting to the Internet, imagine it stumbling upon cracker websites, downloading code, and figuring out how it worked… One sufficiently intelligent machine could probably learn much faster than we can, and could quickly hack computers all over the world to get more information and hack better still. Even if it didn’t have a destructive streak (how would we program morality even if we managed intelligence?), it would still have access to a ton of information that we normally wouldn’t want any one person to have.
I think within the next century or so we will see leaps in artificial interfaces for humans to improve hearing, sight, and possibly other sensory functions (though hearing and sight would be foremost in our desires). We may gain access to near-perfect artificial hearts, which would extend life significantly. If nanotechnology becomes advanced enough, we could have nano-patrols in our bloodstream cleaning out arterial plaque and attacking cancer cells, fending off some of the more deadly chronic diseases (this one, I think, is a bit of a stretch for 100 years, though).
I don’t think we’ll ever truly develop AI until long after we’ve become somewhat cybernetic ourselves.
But the question always remains: how will we know AI when we see it? Since we can assume it will be built from neural nets and parallel processing encoded in algorithms of some sort, will we always consider it “just a machine”?
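To make the “just a machine” worry concrete, here’s a minimal sketch of the kind of building block being talked about: a single artificial neuron trained with the classic perceptron rule to compute logical AND. Real neural nets stack huge numbers of these; the toy is only meant to show how mundane each individual piece is:

```python
# One artificial "neuron" learning logical AND via the perceptron rule.

def step(x: float) -> int:
    return 1 if x > 0 else 0

w = [0.0, 0.0]   # weights
b = 0.0          # bias
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                       # a few passes over the data
    for (x1, x2), target in examples:
        out = step(w[0] * x1 + w[1] * x2 + b)
        error = target - out
        w[0] += 0.1 * error * x1          # nudge weights toward the target
        w[1] += 0.1 * error * x2
        b += 0.1 * error

for (x1, x2), target in examples:
    print((x1, x2), "->", step(w[0] * x1 + w[1] * x2 + b), "expected", target)
```

Whether piling up trillions of little adjustments like those ever adds up to something we’d call “aware” is, I suppose, exactly the question.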
I would say it’s probably about half the rate at which adults have stuck that thing in their computers to see what it is, and screwed things up.
Well, I’d like to point this out first: in both “The Matrix” and “The Terminator,” the machines were fighting back. Also, in “Terminator,” the machines were designed to control the military-industrial complex. So if they do attempt to wipe us out, we probably deserve it.
A more likely scenario would be people hooked into machines creating a technocracy, while their less technologically advanced brethren die off normally, and each generation is born with slightly less technophobia, until just about everyone except for some religious sects is connected directly via an Ethernet port in the back of their head.
Also, if that is the natural evolution of things, so be it. Though I think humanity’s greatest flaw is that it cannot envision a superior mental power that does not possess both the good and the bad human characteristics. Most likely an enlightened machine would realize that humanity was a slave to the machine LONG before it came into being, and that we would probably not stop supplying it with what it needed just because it suddenly showed awareness.
For you Christian zealots out there who wax poetic on the end of the world, I’d watch for the Antichrist to be an intelligent machine, and wait until humanity begins to worship it based upon its almost godlike intellect. A machine would probably only consider us an enemy in the event of conflicting interests, just as the two movies mentioned didn’t have a conflict until there were conflicting interests involved.