I am talking about it being programmed in, not it learning spontaneously. You can stop assuming I don’t know how a computer works.
Yes, and it’s also likely that they have certain other safeguards built in, like pressure plates that are activated only by the precise pressure of a fender being seated exactly as it’s supposed to be; otherwise the bolts don’t fire. A battlefield computer won’t have those sorts of things built into it. Its safeguards are all software, precisely because it is programmed to make decisions.
Yes, and I think that human beings should have human consequences in wartime. I don’t think war should be entirely consequenceless for one of the sides fighting. For instance, my cousin fired Tomahawks. He was perfectly aware that there were living people on the other end of them, and it affected him; he cared. He was a trigger man, so he bore the final responsibility before the missile was fired. That missile was not responsible; he was, as was the chain of command behind him. If you remove the decision to pull the trigger from human hands, then you have a wide layer of human decision-making removed, and responsibility gets so widely distributed that individual culpability is too diffuse to really impact those making the decisions.
And then war becomes just a matter of the richest faction with the least scruples killing as many people as they need to in order to dominate a territory. Sounds great! No more pesky soldiers coming back and protesting the war, no more families complaining about their dead sons, no more people feeling guilty because they saw the blood and gore fly out the back of their target’s head. It’ll be automated and efficient, just like an automobile factory!
I don’t see the difference. Whether a thousand people die from a thousand individual Killbot 9000s, or a thousand people die from one heavy bomb, someone is still making that deployment decision. The layers of decision-making remain the same.
A computer doesn’t just “become more robust”. It is either programmed to kill that person or it isn’t. That isn’t a robot problem, it is the programmer’s problem. The programmer could just as easily make GM’s robot kill civilians if he wanted.
What makes you think safeguards won’t be built in? That’s the whole idea behind the article in the OP. The robot only turns the bolts when the fender is in place; the robot only fires the gun when a combatant is in its sights. What’s the difference?
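To make the analogy concrete: in software terms, both robots are just acting on a guard condition. Something along these lines (a minimal sketch; every function name and check below is invented for illustration, not taken from any actual control system):

```python
# Purely illustrative sketch -- every name and check here is made up, not from any real system.

def assembly_arm_step(fender_seated: bool) -> str:
    """The factory robot acts only when its guard condition is satisfied."""
    if fender_seated:
        return "drive bolts"
    return "hold"


def sentry_step(target_is_combatant: bool, weapons_free: bool) -> str:
    """The armed robot acts only when its guard conditions are satisfied."""
    if target_is_combatant and weapons_free:
        return "fire"
    return "hold"
```

Either way, the machine does nothing until its programmed condition is true; the only question is what condition the programmers chose.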
If the war is worth fighting, then the enemy is worth killing. Sparing our soldiers the pain and guilt of battle is a good thing. Do you really think the guys at the top executing this war care about how horrible it is to shoot somebody in the face? They’re sitting back in Florida watching it play out on a computer screen. The soldiers might as well be robots today from their point of view; what’s wrong with making that a reality?
Is the war worth fighting? Is the enemy worth killing? Then let’s do it in the most precise and humane way possible. I wouldn’t agree with our present war in Iraq whether there were American soldiers there or not. We’re killing Iraqis for no good reason. But if the war is worth fighting – let’s say Hitler – it would make things a million times better to get it over with as soon as possible, killing as few as possible.
Besides, who stands up against an army of killbots? Just like the A-bomb, it creates a disincentive to fight in the first place. That’s the Army’s stated mission anyway.
I wouldn’t say that. Since it can’t move, you’d have to teach it how to lure civilians into range with promises of delicious candy, then teach it how to lie about those other bodies lying around…
My problem is robots making ethical decisions regarding who lives and who dies. If they want to give some measure of autonomy to the arm that bolts the fenders onto a Toyota Prius, I have no problem with that.
So you are wrong; my problem is giving robots the ability to make ethical decisions. Also, Dr Cube’s argument regarding target acquisition isn’t really an ethical decision. ‘Shoot the guy with the gun’ isn’t an ethical decision; it’s a targeting parameter.
It’s purely about the role that it is being devoted to. A GM bot can be programmed to kill, but that’s not its primary function. I am going to stop here, because either you accept that the difference in the roles of the two robots is the crux of the problem here, or you don’t. You are moving into hijack territory with implausible scenarios about killer robots in a GM plant. So do you understand the basic difference or do you not?
Did you ever see the movie Screamers? Is that how future wars will be fought? Teams of what amounts to heavy-equipment operators sitting in command bunkers, each side glaring at the other through IR cameras and motion sensors across hundreds of miles of no-man’s-land while autonomous killing machines destroy anything they view as hostile?
I don’t think “ethics” is the right word here. They aren’t teaching robots not to blow up a school because it’s “wrong”. They are programming it to only attack certain “threats” and avoid other “not a threats”.
If the movies have taught us anything, if you program killing machines to learn and make decisions, eventually they will figure out how to…love.
If you’re going to base your opinion on science fiction then why not on Isaac Asimov? In his stories his robots were almost always programmed with ethical rules (the “Three Laws”).
You fail. The issue you have is with the autonomy - if the arm that bolts fenders onto Toyota Prii had autonomy that enabled it to kill people (perhaps by luring them within reach with delicious candy), you bet your butt that you would have a problem with that.
If the robot had the autonomy to kill but made the ethical choice not to kill, then you can’t rationally be bothered by the outcome - you can only be bothered by the fact it was allowed to entertain the idea at all - which is a function of its autonomy, not its ethics.
Now, one supposes that a person could have a problem with the particular scheme of ethics that was programmed into a robot - which is again really complaining that the robot was given the autonomy to make decisions to kill that you don’t like (as opposed to ones that you think are justified - i.e., which align with your ethics).
Yeah, the difference is that one robot is used to build cars and the other robot is used to fight wars. They’re both tools, programmed by people, to accomplish people’s goals. I don’t have a problem with automobile manufacturing. I do have a problem with war. I think it is rarely necessary and always horrible. I just don’t think that robots increase that horror, and could quite possibly decrease it. You haven’t really given me a reason to think otherwise, except for outlandish vague scenarios about some robot apocalypse.
No that is not my problem. Either put up an argument in opposition to what I am saying or we have nothing to discuss.
Right, I am talking about ethical autonomy, not autonomy in general. So yes, I am against killing machines having ethical autonomy. Your attempt to pigeonhole it is what I have a problem with. You do this in like every thread, but eventually come around to discussing it on a more nuanced basis. Why can’t you just start from the nuanced point, which you are clearly intelligent enough to understand, so I don’t have to fight your straw-man tendency to mischaracterize my argument ever so slightly? Again, I am against killing machines having ethical autonomy. If you strip either the words ‘killing machines’ or ‘ethical’ from my argument, then you are no longer addressing my argument. Please just accept this so we can have a civil conversation and I don’t have to spend the next ten posts correcting your oversimplification.
I don’t think a robot should be given ethical autonomy at all. I don’t think the debate should get as far as discussing the ethics of its particular decisions. I see a dangerous slippery slope here. People scoff at Terminator scenarios, but if it’s just about ‘which’ ethics, where do we stop? Do we let robots decide how much medicine to give a patient in an ICU, so that the patient only sees a real human like once per week? Do we let robots decide which patients to treat, giving them ethical control over triage? Where do we draw the line? I draw the line at allowing robots to make ethical decisions regarding the cessation of human life, period.
“Morals” are about right and wrong. “Ethics” are a set of behavioral rules that a group must follow. Ethics is absolutely the word to use here. It doesn’t matter whether a lawyer thinks it is right or wrong to discuss his client’s confidential information with others; maybe it will save a life, or keep an innocent man out of prison. Either way, it is unethical. That lawyer was programmed to follow that rule, regardless of his personal morality.
With different roles and different constraints. An android with a gun and its own power source is very different from a pneumatic arm that is bolted to the floor.
Actually, you hit the nail right on the head here. I don’t want a machine that cannot override its programming to make a moral choice in charge of deciding who lives and who dies.
The problem with a written code of ethics is that it doesn’t work for humanity - it never has and never will - so I don’t expect it to work for human-produced computers running ethics code written in a computer language.
I’m thinking the overgeneralization is to separate ethics from autonomy, when the real problem is quite specific - there are specific decisions that you don’t want the robot to have the autonomy to make.
Perhaps the real problem I have is that when you program a robot to be utterly incapable of choosing to, for example, run over a person and crush them, you are not removing ethics from them; you are hardcoding ethics into them. The only way to remove ethics from the robot is to remove all restraints whatsoever.
From what I can see the complaint is simple. They’re making robots that are being given the autonomy to kill in certain situations. The “ethics” is the line drawn between the killing they may do and the killing they may not do - a line that many robots lack. (The only thing that keeps those robots from killing people is physically limited autonomy; the bolter robot can’t chase you down or even detect you, but if you stand between it and its Prius, it will definitely “try to” kill you.)
If you’d rather the robot carried the weapon but lacked the autonomy to elect to use it, your problem isn’t that it has ethics. Your problem is that you want it to have more limiting ethics.
Ethics is defined as a system of moral principles. A code of ethics is the set of rules that the group must follow. I don’t really want to get into a semantic debate, because a robot doesn’t have morals and it doesn’t have ethics. It has programmed parameters. They aren’t telling the robot “Do no harm to civilians” and then letting it figure out what’s what. They are telling it “this shape is hostile” or “this shape is friendly” and to act accordingly.
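In other words, it boils down to a lookup against programmed parameters. A toy sketch of what “act accordingly” amounts to (all the shapes and labels below are made up purely for illustration, not drawn from any real targeting system):

```python
# Illustrative only: a toy table of programmed target parameters.
# The shapes and labels are invented for the example, not from any real system.

TARGET_PARAMETERS = {
    "armed combatant": "hostile",
    "friendly uniform": "friendly",
    "unarmed civilian": "friendly",
    "school bus": "friendly",
}


def act_on(shape: str) -> str:
    """The robot just matches a shape against its programmed table; no moral reasoning involved."""
    label = TARGET_PARAMETERS.get(shape, "unknown")
    if label == "hostile":
        return "engage"
    return "do not engage"  # anything not explicitly flagged hostile is left alone
```

There’s no moral deliberation in there; it’s just a table somebody filled in.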
It’s not as if the robot is subject to some sort of punishment if it accidentally blows up a school.