Given the propensity of the USA to wage unjust wars, barely constrained by the need to hide the body-bags: yes, I have big problems with a technology that allows the USA to wage aggressive wars cost-free. And I see no way at all that robots would ‘reduce civilian deaths’.
Let’s not get ahead of ourselves. The question of whether autonomous fighting machines have ethics is irrelevant until we have intelligent machines. But we don’t have intelligent machines. We might never have intelligent machines. Strong AI is like controlled nuclear fusion: always ten years in the future.
Too late - a recent US government considered the use of human soldiers “cost free”*.
* - since you’re clearly not talking about monetary cost; robots don’t grow on trees.
Not so, unless you can exhaustively predict every conceivable situation it will confront, and can rely on its perception algorithms to reliably interpret these situations. This robotics PhD student can assure you that even were the former possible (and it’s not), we aren’t even close to the latter. The real world is so impossibly messy, our current sensory abilities so woefully under-equipped to perceive it, and our software so laughably incapable of fully interpreting even the limited information our sensors can provide, that pretending we can fully predict what a robot will or will not do in a battle zone is almost criminally self-deceptive.
Secondly, a large problem with robotics isn’t working out what the robot will do when everything is fine, but how it will fail. Most of our robotic techniques rely heavily on making massive assumptions about the domain in which a robot is operating, as this is the only thing that allows it to interpret the welter of sensor data with which it’s confronted. If those assumptions break down, behaviour can become completely unpredictable. Moreover, it’s frequently far from obvious (to the robot) that this has happened, making the design of failsafe mechanisms incredibly tricky.
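To make that concrete, here is a deliberately toy Python sketch - a hypothetical illustration, not any real targeting system. The ‘perception’ is a brightness heuristic tuned under the assumption of clear daytime imagery; when that assumption is violated (say, a scene full of sunlit smoke) it keeps producing confident labels, so a failsafe keyed on low confidence never gets the chance to fire.

    import random

    def classify(pixel_intensities):
        """Toy 'perception': label a scene hostile or friendly using a brightness
        heuristic tuned on the assumption of clear daytime imagery."""
        mean = sum(pixel_intensities) / len(pixel_intensities)
        label = "hostile" if mean > 0.5 else "friendly"
        confidence = abs(mean - 0.5) * 2  # distance from the threshold, reported as 'confidence'
        return label, confidence

    # Clear day: roughly the conditions the heuristic was tuned for.
    clear_scene = [random.uniform(0.6, 0.9) for _ in range(100)]
    print(classify(clear_scene))   # e.g. ('hostile', ~0.5)

    # Sunlit smoke: the clear-air assumption is violated and the pixels mean nothing,
    # yet the classifier still returns a label with *high* confidence - nothing in
    # its output signals that it has left the domain it was designed for.
    smoky_scene = [random.uniform(0.8, 1.0) for _ in range(100)]
    print(classify(smoky_scene))   # e.g. ('hostile', ~0.8)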
Certain domains present relatively clean sensory environments for drones. This is why UAVs have proven reasonably successful; they operate in a near-empty space, have a lot of easily-interpreted data from traditional avionics, and need only identify targets on a 2D plane. But even then, the problem of distinguishing legitimate targets from friendlies is huge and intractable.
Quite. This isn’t about imbuing machines with ethics at all (although the military would surely like us to think so, as it sounds vaguely worthy). It is about humans making the ethical choice to delegate life-and-death decisions to dumb algorithms.