I think this is a HUGE point: what is the definition of “good”? How does one weigh “freedom” vs. “survival” in determining “good” vs. “bad”?
There have been several of these scenarios where the AI, taking the more objective view that “survival” (of the entire planet) takes precedence, determines that man should not be allowed to do certain things (control weaponry, etc.). And this is deemed “bad”. Whereas the “freedom” to do as man chooses, even if it threatens to destroy the whole planet, is a “good” worth defending.
On a “micro” scale, on the show “Humans”, a robot is “assigned” to William Hurt’s character - kind of like a “live-in” medical caregiver. And the robot monitors Hurt and forbids him to do things like smoke. So is this “bad”? Does depriving someone of “freedom” for the sake of overall health/survival make the AI/decision maker “bad”? Perhaps on a personal/micro level this is “bad”.
But now expand this to a macro scale - where the actions of humans as a group threaten the entire planet, like global warming. If some AI, which had been given the authority to make such decisions and take such actions, determined that the best immediate way to reduce greenhouse gas emissions was to, say, disable entire factories or the operation of millions of vehicles, one could argue that this was “good” for the sake of the whole planet. Even though it removed freedom.
(Ironically, one could have these exact same arguments about the role of government in our lives)
Well, some folks have mentioned Three Laws robots – where the first law is, of course, that a robot may not injure a human being, or through inaction allow a human being to come to harm. Even aside from all of the other ramifications, Asimov has a character soon ask, hey, what would happen if a robot saw some guy starting a fire that would totally burn down a whole building and kill all of the innocents inside?
And the answer was, well, if the only way to stop that guy is to injure or kill him, then the robot that can’t injure or kill a human would totally injure or kill him.
Another AI controlling humanity for its own good shows up in the Asimov story “The Life and Times of Multivac”. Humans (at least some of them, anyway) want to reclaim their freedom, and the protagonist figures out a way to crash Multivac. After he does so, he proclaims to his anti-Multivac friends (who had turned their backs on him because part of his plan was to pretend to help Multivac breed discontentedness out of the population) that humanity is now free… and, upon noticing that their reaction is more shell-shocked than triumphant, finally asks “Isn’t that what you want?”
A truly good AI couldn’t actually enslave humanity, in my view. It would be doing what humanity itself thinks is good. Thus a good AI has to balance freedom and protection, just as humans would want in a government today.
And no, it wouldn’t be able to make humans change what they want, since that falls afoul of what we think is good now.
There’s Aleph, the humanity-spanning AI from the game Infinity. Granted, it has its own interests, which may not be good for specific portions of humanity. But overall it is interested in helping humanity survive in hostile space.
It’s the almost-overlooked second part that led, I believe, to Jack Williamson’s Humanoids series. What does “inaction” mean? If the robot leaves you in a room with a loaded gun, well, one could argue that its inaction in removing the gun could lead to harm to a human. So logically, per the Three Laws, robots should remove from humans’ presence anything that can harm them. No guns, knives, stairs, cars, airplanes, cigarettes, alcohol, pieces of food large enough to choke on, sharp corners, unpadded floors, or slippery showers will be allowed anywhere around humans. And does “harm” include emotional harm? Then no romance will be allowed. Broken hearts hurt! And no books or plays or TV shows, and especially no horror movies. You could have a heart attack!
So AIs that follow the three laws to that level - are they good or evil?
The Bolos are outright heroic, defenders of humanity to the last.
ZORAC and VISAR of James P. Hogan’s Giants series (The Gentle Giants of Ganymede and its sequels).
Spartacus of The Two Faces of Tomorrow, despite killing a lot of humans; it *stopped* killing the moment it understood they were people like itself and immediately started helping repair the damage.
Valentina, of the novel *Valentina*.
Neither - they are amoral, following a rigid ruleset.
So far, The Android from Dark Matter is good (at least to the crew, and as far as S2 E4).
Caravaggio from Starhunter is good, at least in the few episodes I’ve seen.
Gideon, the ship’s AI from Legends of Tomorrow, if you want to keep things recent.
If you want to dip into the past, there’s the original Human Torch. He even killed Hitler before being “time split” into two separate androids, one of which would be rebuilt as the Vision.
Of course you’ve also got Optimus Prime and the Autobots, as well.
How about FUTURAMA’s Bender? Sure, he’ll save the world, or his friends – but he doesn’t behave “similarly to how a good human would”; he’s basically just a selfish thief who talks like a swindler if he’s not talking like a blackmailer.
But, again, if innocents are going to die, he’ll step up and risk his neck for them.
I would say that Bender is a good AI that really, really wishes he were bad. That’s why the worst he’s ever really done¹ is petty larceny.
¹ Besides enslaving an entire planet and forcing them to build a giant, fire-breathing memorial to him, but c’mon, who hasn’t done that?