We Should Welcome Autonomous Combat Robots!

Nikola Tesla’s dream has been realized: we can now avoid the loss of soldiers’ lives through the use of fully autonomous combat robots. Tesla knew that once humans were taken out of the equation, wars could be fought without destroying life. So why the opposition to these wonder machines?
The arguments against them remind me of the arguments for banning poison gas, as if being killed by a bullet were more “humane” than being gassed to death. I could be wrong, but the advent of these machines may well mark the end of war! (A VERY good thing)

Right, because once the Old Republic invented battle droids, there were no more wars. Just like nobody would ever attack the United States again once we had the bomb.

Because then we’ll have to report to the disintegration booths when our number is up.

Yes, because the history of weapons development has taken us further and further away from human casualties. Drones are mainly being used to take out other drones already, and soon they’ll just be running simulations on computers to see who controls which territory.

Your suggestion is naive in the extreme.

For one thing, human soldiers have a fair chance of rebelling if used to, say, kill off a few million of their own countrymen. Machines will just do what they are told.

Actually, the thought of machines rebelling is even scarier.

Or the wealthy & powerful will soon have no need of the proles.
And the means to exterminate us.

This sounds suspiciously like the reasoning behind Skynet.

Look, the idea that robotic death machines could rebel is simply laughable. Robotic death machines are perfectly safe and will be programmed to never turn against their makers. It would be utterly impossible for that to happen. And by “utterly impossible” I mean “practically guaranteed”.

As for the notion that robotic death machines would mean that wars would be fought without destroying life, um, what? The robot bombers and tanks are going to be dropping bombs and firing guns, yes? And their target will be only other robotic death machines? They won’t drop bombs on people or shoot people?

A drone is just an airplane with a remote pilot. So a drone that drops a bomb on a human being is no different from a guy in an airplane dropping a bomb on a human being. And an autonomous drone merely dispenses with the remote pilot; the pilot becomes the guy who programmed the thing to go out and drop whatever bombs it drops. But dropping bombs on other robots is pointless. War is politics by other means. The point of war is to make the other guy do what you want. Dropping a bomb on his robots isn’t going to do that; you have to drop bombs on him until he gives up or dies, and so do the rest of the guys on his team.

And if we’re so civilized that we can agree to not drop bombs on humans, and only other robots, then why can’t we agree to not drop any bombs, and negotiate our disagreements with each other like adults?

Adults negotiate by dropping bombs on each other. Are you new? :wink:

Oh really?
:stuck_out_tongue:

I believe in the US they’ve said that there will always be a human who has to authorize the firing of a weapon by a robotic soldier or gun. But that policy is doomed to fail, because human reaction time is about 200 milliseconds, and even longer when you factor in that humans have to find the enemy, target them, then fire (although the first two steps could become autonomous, with the third still requiring human input).

So despite the fact that war robots are currently tools only of wealthy countries, just as tanks and jets started off that way, soon even third-world countries will have them (soon = several decades, but within our lifetime).

And robots that depend on human reaction time to respond to a threat will be destroyed by autonomous robots that can find and target your robot before your human’s brain has even processed that there is an enemy present.

So there is a strong incentive to create truly autonomous robots: human reaction time is limited by biology, while robotic reaction time will keep getting faster. That is a problem.
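
A toy back-of-the-envelope simulation of that latency argument (entirely my own illustration; the only number taken from above is the ~200 ms human authorization delay, and the detection and targeting times are invented):

```python
import random

# Toy duel between two otherwise identical weapon systems: one keeps a
# human in the loop to authorize firing, one is fully autonomous.
# All timings are hypothetical except the ~200 ms human reaction figure.

def time_to_fire(detect_ms, target_ms, authorize_ms):
    """Total time from a threat appearing to the weapon firing."""
    return detect_ms + target_ms + authorize_ms

def duel(trials=10_000):
    autonomous_wins = 0
    for _ in range(trials):
        # Give both sides the same (random) sensor and targeting delays.
        detect = random.uniform(50, 150)   # ms, made up
        target = random.uniform(20, 80)    # ms, made up
        human_in_loop = time_to_fire(detect, target, authorize_ms=200)
        autonomous = time_to_fire(detect, target, authorize_ms=1)
        if autonomous < human_in_loop:
            autonomous_wins += 1
    return autonomous_wins / trials

if __name__ == "__main__":
    print(f"Autonomous side fires first in {duel():.0%} of duels")
```

With identical sensors, the autonomous side wins essentially every time, because the fixed biological delay is a term in the sum that engineering can never remove from the human side.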

Aside from that, you have to factor in that, like it or not, attacking civilians still has military benefits, because you can terrorize the enemy that way (although as nations evolve this seems less effective, because all it really does is mobilize opposition to your nation). And wars involve conquering enemy cities and states, which are full of civilians. So civilians will still be killed in wars; in the Second World War there were far more civilian deaths than soldier deaths.

Plus, the less the public has to sacrifice in a war, the more warlike we may become. Without a draft, and with a volunteer military making up less than 1% of the public, the US is probably more willing to go to war than we would be if we actually had to sacrifice. Autonomous robots could make this worse.

I’d soooo much rather be killed by a bullet than gas. I have a book with a reporter’s account of witnessing an execution in the gas chamber. He describes about ten minutes of sheer agony. The witnesses would think the man was dead, only to see more thrashing and wincing.

OTTOMH

Chlorine gas: minutes of chemical burns to your eyes, mucous membranes, and lungs before you finally die.

Mustard gas: serious chemical burns to EVERYTHING.

I could go on.

Yes, dead is dead. But the sheer amount of suffering is very different.

As For The Rest Of The OP

I’m with Asimov. Robots will definitely malfunction in interesting ways, but they won’t rise up against us.

They may not rise against us (i.e., be openly hostile to our goals and interests), but they may develop a set of goals and interests that are indifferent to our own. That is the more likely negative scenario; the idea that robots will always act in ways that are useful to the goals of the human race is not something we can count on.

Surely we can trust them with a simple and commendable goal, like reducing human error.

I’m not saying there won’t be problems. There will definitely be problems. I’m just saying the machines won’t rise up against us.

The “friendly AI” problem is much harder than you seem to think. Especially for a war machine, which by nature has to have exceptions to any “don’t kill” programming.
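
To make that concrete, here is a contrived sketch (entirely my own, with invented predicates, not anyone’s actual targeting logic) of why a “don’t kill” rule riddled with the exceptions a war machine needs is brittle: each exception becomes a condition an adversary can try to satisfy or spoof.

```python
# Hypothetical illustration: a base "never engage" rule plus the
# exceptions a useful war machine would need. Safety now hinges on
# classifiers ("is_combatant", etc.) that an enemy can try to fool.

def may_engage(target: dict) -> bool:
    allowed = False                 # base rule: don't kill
    if target["is_combatant"]:      # exception 1: combatants are fair game
        allowed = True
    if target["is_surrendering"]:   # exception 2: ...unless surrendering
        allowed = False
    if target["is_firing"]:         # exception 3: ...unless they're shooting
        allowed = True
    return allowed

# A misclassification flips the outcome without breaking any rule:
# a civilian standing near muzzle flashes gets "is_firing" = True.
civilian = {"is_combatant": False, "is_surrendering": False, "is_firing": True}
print(may_engage(civilian))  # True: the exceptions override the base rule
```

The rule executes exactly as written; the failure lives in the gap between the predicates and the world, which is the hard part of the problem.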

And in many wars attacking and/or terrorizing civilians is the whole point; you can’t nonviolently ethnically cleanse a population or kill off people opposed to your politics or religion.

Yip. It’s such a hard problem that many think it won’t be solved in time, and that any non-friendly AI, once it starts being able to create new, optimized versions of itself, will wind up dooming humanity.

And, no, those aren’t the worries of some technophobes. Those are the opinions of the people actually working on AI. It is much, much easier to create an AI that will cause problems than one that will not. And once they are smarter than us, there’s not much we can do about it. A truly intelligent AI can thwart any attempt to shut it down.