I’m not separating them. Yes, I have a problem with robot autonomy, specifically with allowing them the option to make ethical decisions, but **msmith** makes a good point in that they are not ethics but operational parameters.
Well, those are operational parameters based upon ethical conditions. But I have a problem with it choosing to kill. If it is a constraint that says it cannot kill, ever, then that’s fine, but I am not OK with if: ____ then: kill. It should only kill if a human being tells it to, and if a human being is operating it remotely, then that human is also not operating under high-stress, kill-or-be-killed conditions, so the argument that the robot is unafraid is rendered moot because the human is unafraid as well.
Yes, but that’s not a decision. Also, there are failsafes. If I am between it and its Prius, then it will have trouble getting into the proper position to align the bolt and likely will not fire, as I am pretty certain automobile factories work on a fairly precise basis. The eight inches of separation that my presence creates between the robot and its goal will trigger switches telling it not to fire. Again, this is not a decision on its part.
No, my problem is the ethics. The robot should not have the choice of who to kill; a human being should bear ultimate responsibility for every shot fired.
So… it would somehow help if the robot could “feel fear”? (As in, become irrational and make poor judgements under some circumstances?) I’m not seeing the logic in that…
The rest I will address below.
You are making a false distinction between the “not a decision” of the Prius robot and the “ethical decision” of the semiautonomous military robot. In both cases the ‘thinking’ process is basically identical: it has access to certain data, it is programmed with certain operational parameters/ethics, and based on the data and ethics available to it, it chooses what action to take. If the Fenderbot elects not to bolt you because it determines that there is an obstruction between the bolt and the target, that is exactly as much a decision as the Killbot makes when it shoots a human because it determines that the human was acting in a hostile manner. I mean, it’s literally the same sort of process. If the second is an “ethical” choice, so is the first.
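To put it in rough pseudocode (my own sketch, not anything from the article - the function names and sensor fields are made up purely to illustrate the structure):

```python
# Illustrative only: both robots run the same sense -> check parameters -> act loop.
# Every name here is a hypothetical stand-in, not taken from any real system.

def fenderbot_step(sensors):
    # "Operational parameter": don't drive the bolt if something obstructs the fender.
    if sensors["obstruction_detected"]:
        return "hold"        # failsafe tripped; no bolt fired
    return "fire_bolt"

def killbot_step(sensors):
    # "Ethics": don't shoot unless the target has been classified as hostile.
    if not sensors["target_hostile"]:
        return "hold"        # rules of engagement not met; no shot fired
    return "fire_weapon"
```

Structurally it’s the same conditional in both cases; the only thing that changes is what the condition happens to be about.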
We are having a terminology disconnect. To me the word “ethics” means nothing more than “operational parameters”. (Even when applied to humans - except that a lot of humans claim to have one system of ethics and act according to another.) If your problem is that the robots are being given the autonomy to deliberately kill humans under specific circumstances, then that’s a fair complaint. But that’s a little more specific than complaining that they have the ability to make decisions based on “ethics”.
What bothers me isn’t the general concept, but one of the specific examples they give in the article. In the article’s scenario, a robo-soldier is under sniper fire, and is deciding whether to return fire to kill the sniper with a grenade or a rifle. It’s not, in the article’s example, deciding whether to return fire at all; that’s just taken for granted. But sophisticated and expensive as the robot may be, it’s just a piece of equipment: What they’re essentially saying is that lethal force is justified in preventing damage to equipment.
A robot doesn’t fear for its life, but neither does an operator 2,000 miles away.
Not really, because the first one is not firing because the arm cannot get into position, and it is programmed not to fire in case of obstruction. This would happen whether it was a hammer getting caught in one of its hinges or me being in the way.
OK, fine, I’ll accept your definition of ethics as long as you will accept that there are operational parameters that govern function and not decision-making. I.e., the robot not firing because of an obstruction is like me not shooting you because my arm is pinned. I am not making a decision not to shoot you; I just can’t, because my arm is pinned. If the gun is in my hand, what separates me from the robot is that I CAN still fire the gun, just not at you.
Unless it’s the Bible, of course. Or, even better, one’s own personal interpretation of the Bible. And by ‘one’s’, we must not assume ‘mine’, or ‘the Pope’s’ or ‘Barack Obama’s’, but must assume ‘kanicbird’s’ and ONLY ‘kanicbird’s’.
Soldiers may currently use lethal force to protect equipment and property. So that is no different from today. Otherwise, what’s the point of fielding an army of killbots if they’re just going to sit there and let themselves be destroyed?
What bothers me is that it makes it so that the war really only has consequences for one side. That REALLY bothers me. Both sides should be in harm’s way if you are going to fight a war.
I was under the impression you were referring to the Fenderbot refraining from firing because it ascertained there was an obstruction between it and its target - that is, it wouldn’t be sitting there grinding its gears and trying to bolt you; it would be stopping because a failsafe had been triggered. That situation is the same as the Killbot deciding to fire or not.
There are functional restrictions, on top of operational parameters, that constrain the killbot too - for example, it cannot call in airstrikes. So?
That’s a rather odd perspective, and one utterly contradicted by every weapon ever made, including a child’s slingshot. The entire goal of a weapon is to make the consequences unfair, and the entire goal of a war is to damage the enemy more than they damage you.
But it’s not, because killing a human is a malfunction for fenderbot; it is the natural function of killbot. The most likely scenario of fenderbot going on a rampage is two or three destroyed fenders.
Yes, but killing is its native function. If someone dies as a result of its actions, that is proper functionality. Fenderbot is under no circumstances meant to kill someone; killbot is supposed to decide WHICH people to kill.
Yes, I understand that, but the point at which it becomes consequence-free is the point at which one side stops contemplating the consequences.
The problem with an army of autonomous hunter/killer robots is that when you take away the emotional and physical risks and costs of war, it becomes too easy to rely on war as a viable alternative. You take away any moral authority, in that you no longer need a reason that is strong enough to be worth fighting and dying over. You just need a reason that is worth killing over.
i think the article has specifically said that it is designed to be deployed in a war zone with no civilians, where everyone not on their side is fair game. as such, it seems to be an intelligent land mine able to direct its fire against hostiles instead of friendlies.
and what happens when the enemy dress up in your uniform? or duct tape weapons to POWs/captured civilians and rush them towards the machine? gaming the stupid AI is much easier than hacking it.
Yup. I was already thinking of sci-fi scenarios. A cell phone worm that transmits a jamming signal on every available broadcast protocol in the phone, so that every cell phone acts as a multispectral ECM, which the machine interprets as a hostile attack - and then it kills everyone with a cell phone.
And it only existed in WWI because the battle lines were so static. If anything, future conflicts will engage MORE civilians, as fighting will likely be in built-up urban areas.
“…there is no glory, no heroes, and no cowards in that kind of battle. Only survivors.”
-George S. Patton when asked about the future of “push button” warfare.
The future of war is street-to-street fighting and a constant low level of conflict - home invasions à la what is happening with the Mexican cartels in Phoenix. The kind of thing we will likely be largely oblivious to except conceptually, one district over and a million light-years away.
Sorry, mswas, I’m not seeing this. Here’s how I look at all this:
Our “programmer” in the 1900s mounts a gun in a tree on his property.
He ties a fishing line around the trigger, runs it across his property line so anyone who trespasses, boom!
So he has a picnic, forgets to untie the line, and a guest gets killed.
We don’t blame the gun, we blame the rigger-- er, “programmer”, don’t we? It’s certainly not the gun’s fault. So it’s not really the robot’s fault if it shoots someone who crosses the property line because you forgot to turn it off, or because some sadistic programmer purposely programmed it to do so.
Either way, human error.