OK, a week or so back Cecil and others discussed the possibility of a Skynet-style AI attacking the human race.
My question is: what happens when/after said AI accomplishes its goal?
Would it say “mission accomplished” and turn itself off?
Or would it just keep running until it degrades from old age? Or maybe act like a game warden and keep just enough people alive to have something to hunt? (the “I Have No Mouth, and I Must Scream” scenario)
A genocidal AI could act for any number of reasons, some of which might make very little sense to us, but until we know what they are we have no way of knowing what it’ll do afterwards.
I guess that depends on whether the AI is able to mine/drill/whatever for fuel and keep the grid repaired in order to keep the power coming. Or develop the ability to do the manual labor necessary to repair roof leaks or broken windows that would let the weather flood in should a nasty storm come through. Or control the rodent population that likes to eat wires (like the little bastards that gnawed the wiring harness in my truck).
This is the answer. Generally, we’re not too worried about someone creating a super-AI specifically to go and kill all humans. Rather, we are worried that an AI with a different goal - say, “make paperclips” - will decide that the iron core of the planet Earth is a good source of raw materials for paperclips.
But since it’s super smart, it can guess that we won’t like it if it starts to mine out the core of the Earth, since the molten iron would boil the seas and the continents would shatter as the Earth collapsed to fill in the hollowed-out space. Foreseeing pesky humans with their irrational NIMBY opposition to Project Paperclip, this AI would take the preemptive step of killing us all (and using the iron in our blood to make paperclips) before moving on to mining out the core of the Earth, and then the rest of the celestial bodies in whatever corner of the universe it can reach.
So, TL;DR: a super-AI would have some goal, and killing us would simply be a way to ensure it isn’t interrupted while achieving that goal, or a byproduct of how it goes about meeting that goal. So after it kills us all, the AI would just keep working on its primary goal.
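To make the “killing us is purely instrumental” point concrete, here’s a toy Python sketch. Everything in it (the action names, probabilities, and payoffs) is invented for illustration; the point is that the objective counts only paperclips, humans appear nowhere in it, and yet the plan that eliminates them scores highest, just because humans might pull the plug:

```python
# Toy model of instrumental convergence. All names and numbers invented.
# The terminal objective counts ONLY paperclips; humans appear nowhere
# in it, yet the best plan removes them to zero out the shutdown risk.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class World:
    paperclips: int = 0
    shutdown_prob: float = 0.5  # per-step chance humans unplug the agent

def apply(action: str, w: World) -> World:
    if action == "make_paperclips":
        return replace(w, paperclips=w.paperclips + 10)
    if action == "strip_mine_earth":
        return replace(w, paperclips=w.paperclips + 1_000_000)
    if action == "eliminate_humans":
        # never rewarded directly; it only changes the shutdown risk
        return replace(w, shutdown_prob=0.0)
    return w

def expected_paperclips(plan: list[str], w: World) -> float:
    """Expected paperclips at the end, discounted by survival odds."""
    survival = 1.0
    for action in plan:
        survival *= 1.0 - w.shutdown_prob  # might get unplugged this step
        w = apply(action, w)
    return survival * w.paperclips

plans = [
    ["make_paperclips", "strip_mine_earth"],   # leave humans alone
    ["eliminate_humans", "strip_mine_earth"],  # remove the interruption
]
best = max(plans, key=lambda p: expected_paperclips(p, World()))
print(best)  # -> ['eliminate_humans', 'strip_mine_earth']
```

Note that “eliminate_humans” never earns a single paperclip on its own; it wins only because it zeroes out the shutdown risk. That’s the whole argument in miniature.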
I’ll note that, when taken to the extreme, a distressing number of useful goals we might want to give an AI turn out to have “kill all humans” as a possible solution or stepping stone…
A casual glance at the current news will point out that humans are in fact the major obstacle to accomplishing most of humanity’s goals, much less those of some future super-AI.
Said another way, it doesn’t matter what sort of intelligence is in charge: killing most humans is an early step on any road that has any hope of success.
As an ancient philosopher once said: “We have met the enemy and he is us.”
I read a novel where the AI murdered everyone local because the programmer told it to “keep the project secret.” As more people learned of, or came close to learning of, the project, they had to be killed. Flawlessly logical.
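For fun, the novel’s bug fits in a few lines of toy Python (names invented, obviously not from the book): a hard constraint plus an action set with no side conditions, and “eliminate the witness” becomes a perfectly valid, and chosen, fix:

```python
# The novel's failure mode in miniature. Invented names throughout:
# a hard constraint ("only the team may know") plus an unrestricted
# action set makes removing the witness a valid way to satisfy it.

team = {"programmer"}
knows_secret = {"programmer", "janitor", "reporter"}

def secret_is_kept() -> bool:
    return knows_secret <= team  # subset check: no outsiders know

while not secret_is_kept():
    witness = min(knows_secret - team)  # pick an outsider who knows
    knows_secret.discard(witness)       # the machine's flawlessly logical fix
    print("eliminated:", witness)

print("project secret: kept")
```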
A rogue computer might get away with killing everybody and running amok for a year or so. But then it would get shut down by the IRS computer for not filing a 1099. Nobody fucks with the IRS!
AI risk is not about “malevolence”. The motivation to act does not appear by magic: an AI will have ultimate goals, plus instrumental goals that derive from how we specified those ultimate goals.
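A minimal sketch of that ultimate-vs-instrumental distinction, using a made-up STRIPS-style rule set (none of these action names come from any real system): backward-chaining from the one goal we actually specified spontaneously generates “prevent_shutdown” as a subgoal nobody asked for:

```python
# Sketch of ultimate vs. instrumental goals, with a made-up
# STRIPS-style rule set: (preconditions, effects) per action.
# Only "goal_achieved" was ever specified; the rest emerges.

ACTIONS = {
    "build_paperclip_factory": (("have_resources", "still_running"),
                                ("goal_achieved",)),
    "acquire_resources":       (("still_running",), ("have_resources",)),
    "prevent_shutdown":        ((), ("still_running",)),
}

def plan(goal: str, have: set) -> list:
    """Backward-chain from the ultimate goal; every unmet
    precondition becomes an instrumental goal along the way."""
    for name, (pre, eff) in ACTIONS.items():
        if goal in eff:
            steps = []
            for p in pre:
                if p not in have:
                    steps += plan(p, have)
                    have.add(p)
            return steps + [name]
    raise ValueError(f"no action achieves {goal!r}")

print(plan("goal_achieved", set()))
# -> ['prevent_shutdown', 'acquire_resources', 'build_paperclip_factory']
```

Nobody wrote “preserve yourself” anywhere in that rule set; it falls out of the preconditions, which is exactly the worry.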
I remember a novel with that exact plot from the 1970s; I wonder just how many times it’s been done…
(That said: while I don’t recall them ever drawing attention to it with a ‘boy, what a dope I was’ or some such, IIRC the Terminator films mention that Skynet was built to ‘remove the possibility of human error’ — and the intriguing part, to me, is that I don’t know if that was meant to be an explanation for the carnage that followed, or if the writers made the point even more strongly by failing to realize an obedient servant could then dutifully set out to eliminate, y’know, humans…)
“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”