Your Ellison point is taken. Still, the idea of computers taking over the world seems to be a fairly basic one, up for grabs for any sci-fi geek who wants to take it. So I find it unfair that Ellison flipped and demanded credit for Terminator. Granted, he also apparently wrote that Outer Limits episode about two warriors sent from the future into the past (“Soldier”), but that’s James Cameron’s problem.
I’ve noticed in various sci-fi stories (Terminator, The Matrix, Colossus) that, with a “superior” AI wishing to control or destroy humanity, when it achieves its conquest, THEN WHAT?
The AI has no plan for its own existence after the “Program” is completed.
I don’t think we would know the computers’ long-term goals; we’re a little busy trying to smash them.
We seem to be making the assumption that self-awareness carries with it an automatic instinct for self-preservation. That’s true of animals (humans included), but it may not be the case with AI, although I suppose if we’re making a learning machine, we have to give it some sort of capacity for motivation.
In The Matrix, the AI alluded to getting off-planet.
Erek
Colossus, though, was trying to ensure its own survival. It saw humans as a threat and decided to enslave humanity to serve its needs.
BTW: The movie Colossus was based on the first novel of a trilogy. The other two novels (which I’ve never read) were never filmed. I don’t know when they were published.
As for Ellison’s story, he may have been beaten by Star Trek. According to their website, the episode “Return of the Archons,” which was about a society following a computer’s orders, aired on February 9, 1967. However, the computer in that episode did not seem to be conscious of what it was doing; it was just blindly following its programming. Harlan made his computer far more terrifying by making it conscious, aware, and full of hate for humanity, which it despised for giving it awareness while denying it mobility.
A few reasons I don’t think we’ll have to worry about computers:
First, I’m not too sure how advanced the AI field is, but I simply can’t imagine any algorithms that would allow computers to think critically. A computer may be able to imitate critical thinking, but never actually do it. Which kind of ties into the question of whether a computer can learn on its own, and by that I mean fix its own errors and whatnot.
Second, I think if anything bad does happen, we’ll be in trouble for a few hours or days, and then we’ll start to see the blue screen of death run rampant.
Who’s to say human beings aren’t just faking it?
Erek