http://www.msnbc.msn.com/id/30810070/
Haven’t they seen ANY movies in the past 30 years? Terminator, Battlestar Galactica? No? None of them?
Actually, the problem with many fictional robots & AIs is that they AREN’T given ethics. So, they fulfill their programming without concern for the harm they do.
What is wrong with that? Do you want the horde of lethal kill-bots to just shoot at anything indiscriminately? We pretty much have to give them rules to abide by if they are to work at all. It is no different from telling the robots at the GM factory to bolt the doors onto the side of the vehicle instead of onto the roof, or onto random people walking around the factory. Unless you have a problem with kill-bots in general? But really, would you rather flesh-and-blood soldiers kill each other in a war, or just a bunch of computers with guns?
No, the problem with giving them ethics is that you then make it OK for the robot to pick and choose whom it kills. There should always be a human pulling the trigger. Once the human stops pulling the trigger, we are finished.
DrCube: No, I want humans on the other end giving the orders.
At least these robots aren’t going to do anything as inefficient as torturing people.
…
Did I say inefficient? I mean immoral.
Yes. That. When a superpower's ruthless pursuit of its perceived self-interest is no longer constrained by the fear of body bags, the world will take a major turn for the worse.
Most likely it will be one side’s computers with guns against people.
Is the OP suggesting that the worst idea in history is relying on one’s interpretation of fictional Hollywood movies created for entertainment purposes for ethical observations and advice?
Exactly.
But not programming ethics into the machines won't stop that from happening; if anything, you are just making it that much easier.
Not at all, since fictional narrative is the primary driver of ethical lessons in human society and has been in every society that has existed since the dawn of civilization.
“Good news. I figured what that thing you just incinerated did. It was a morality core they installed after I flooded the Enrichment Center with a deadly neurotoxin, to make me stop flooding the Enrichment Center with a deadly neurotoxin. So get comfortable while I warm up the neurotoxin emitters.”
Then it isn’t really a robot. Just a remote controlled gun.
Anyway, computers are good at making quick, complicated decisions. People make mistakes – often. Have you ever watched someone play a video game? Have you ever read the news during a war? Civilians are killed left and right; friendly fire is an epidemic. The first time I was in Iraq, 100 innocent people were killed because somebody dropped a bomb on a wedding party instead of the IED workshop a couple of blocks down the street. Could you imagine being the guy who had to work that day? Your entire extended family is wiped out in a single day because some pilot miscalculated his location?
Human soldiers aren’t the bastion of wisdom you seem to think they are. They are particularly bad at anything that involves precision or calculation. Let’s keep the humans in charge in a tactical and strategic capacity, but by all means, let’s put robots behind the trigger on the ground.
The ethics provide the robot with decision-making capacity, absolving the human beings behind it of culpability. You don't want a machine that acquires its own targets.
So at worst, half as many people die? Man, that’s horrible! But wait, maybe robots could minimize civilian deaths, thereby reducing the human death toll even further. Atrocious!
You mean it IS a robot; it just isn't an android. But regardless, I am not going to get bogged down in semantics. I do not want intelligent machines making decisions about whom it is OK to kill.
OK, so let the computer output the data and let that inform the human being's decision. Have you ever seen a computer crash, or ever died in a first-person shooter because of latency? And yes, that sucks. You can have the computer calculate probabilities and odds and all that kind of stuff, that's fine, but let the human being pull the trigger.
That’s irrelevant. Ethics are not merely about calculus. What you are asking for will end up with a scenario that makes the Holocaust look like a cakewalk, and it might well be turned on you and your family one day.
Here’s what I don’t get:
I was in the Army, and I assure you, anyone pointing a weapon at US troops IS considered a target. There is nothing nontraditional about that. It doesn’t matter whether you wear a uniform or not, you cease to be a civilian when you pick up a weapon and start shooting at the other side.
mswas: WHAT exactly are you predicting will occur when we stop fighting with people and use robots instead? The Matrix? Terminator? You say it will “make the Holocaust look like a cakewalk”? How? That couldn’t possibly be an overstatement, could it?
Do you work with computers? Have you ever programmed one? It is fairly easy to put in failsafes so people won’t die. When the big computer at GM crashes, the robots on the assembly line don’t start shooting bolts everywhere and throttling passing humans. They just stop. I can’t imagine a computer glitch occurring that somehow sets off the Robot Rebellion. At worst, these are dangerous tools that happen to be one step removed from human control. Humans still write the programs, they just don’t execute the minute-by-minute action. I see that as progress.
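To make the failsafe point concrete, here is a rough Python sketch of the “they just stop” behavior I mean. Every name and field here is invented for illustration; it is not taken from any real control system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    action: str        # e.g. "advance", "halt"
    authorized: bool   # set upstream by a human operator

class FailStopController:
    """On any missing, corrupt, or unauthorized command, halt and stay halted."""

    def __init__(self) -> None:
        self.halted = False

    def step(self, command: Optional[Command]) -> str:
        if self.halted:
            return "halted"
        # A lost link, garbage input, or no human sign-off all hit the same
        # safe default: stop doing anything at all.
        if command is None or not command.authorized or command.action == "halt":
            self.halted = True
            return "halted"
        return command.action

controller = FailStopController()
print(controller.step(Command("advance", authorized=True)))  # advance
print(controller.step(None))                                  # halted (link lost)
print(controller.step(Command("advance", authorized=True)))  # still halted
```

The point of the sketch is only that the default on failure is inaction, the same way the GM line stops rather than improvises.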
Yeah, that’s how it would start. The battlefield where civilians have been evacuated doesn’t exist, so that little bit of that article is a puff of smoke.
The ethical computer becomes more robust and starts to discern threats based on other metrics besides just whether or not the person has a gun.
That’s because they aren’t designed to kill people and are on an assembly line with hardwired access, not autonomous on the battlefield with the directive to kill.
I am not worried about a robot rebellion, I am worried about people playing with robots Halo Wars/Command and Conquer style where they just give the robots strategic objectives and then hide behind ‘software glitch’ anytime something goes horribly wrong.
What about unintelligent humans?
That is actually less likely to happen than with human soldiers. By replacing humans with robots, you remove the soldiers’ and their direct superiors’ ability to disregard the rules of engagement.
Computers just don’t do this, unless there is some sort of learning algorithm built in. They do what they’re programmed to do, the first time and every time. It is a legitimate fear that they will be programmed poorly, but it isn’t a legitimate fear that they will just shake off their programmed yokes and take over control of themselves, killing whoever they please.
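As a toy illustration of “the first time and every time”: a programmed rule of engagement is just a fixed check that is evaluated identically on every call, with no discretion to waive it in the heat of the moment. The conditions below are made up for the example and are nothing like a real targeting system.

```python
def may_engage(target_is_armed: bool, target_is_firing: bool,
               human_authorization: bool) -> bool:
    """A hard-coded rule: every condition must hold, on every single call."""
    return target_is_armed and target_is_firing and human_authorization

# The same inputs always produce the same answer, run after run.
assert may_engage(True, True, True) is True
assert may_engage(True, True, False) is False   # no human sign-off, no engagement
assert may_engage(False, True, True) is False   # unarmed, no engagement
```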
Those robots can screw in a thousand bolts per minute (or whatever); they are not directly controlled by humans. Sure, they have an emergency override, but what makes that impossible to implement in a killbot?
Again, this is a legitimate fear, but what makes it a robot problem instead of a human problem? Things go horribly wrong in wars all the time, and the commanders just say “the soldiers acted of their own accord” or “it was just a terrible mistake”. My Lai, Abu Ghraib, Pat Tillman, etc.
Essentially soldiers ARE robots. They are given a set of instructions before they ever enter combat, and they do what they are told. That their instructions not to kill civilians are deeply ingrained from long before they signed up for duty is irrelevant. All we have to do is just as deeply ingrain those ethics into the robots before we set them loose.