We Don't Need No Stinkin' Laws of Robotics

Last year a company named Ghost Robotics announced it was mounting guns on those robotic "dogs." The U.S. Army is reportedly testing them. (Boston Dynamics has stated that it has no intention of doing this with its more famous dogs.)

On Tuesday, the San Francisco Board of Supervisors voted to give police "the ability to use potentially lethal, remote-controlled robots in emergency situations."

I recall reading a few years ago that Asimov's Three Laws of Robotics weren't consistent with our reality, in part because we were already building automated systems that killed people (drones, for instance, or plenty of less sophisticated automated devices). But these forays look like the first robotic devices built to be lethal to people. They're not completely autonomous, "thinking" devices. Yet. But mounting guns on such platforms feels like a big step in that direction.

A walking drone is in the same category as a flying one. They are transportation systems for sensors and weapons: just another means for humans to kill each other at a distance.

Didn’t a police robot take out a sniper in Dallas in 2016? What, exactly, is new under the Sun?

And hasn’t Israel deployed robotic combat vehicles for yonks?

It depends on who or what is making the kill decision.

Is it the police using a remote-controlled device to enforce public safety, or is an actual robot with autonomous decision-making capabilities being allowed to kill a person on its own?

A robot dog may have the ability to autonomously walk, balance, and navigate. I do not believe it has the ability to properly decide who exactly is a threat, then target and attack that threat.

As robots gain more and more autonomy, we get closer and closer to the day when robots may actually be able to make kill decisions on their own. Allowing this technology to creep into use without strong laws protecting people would be a problem.

Missed that one.

What's notable is that, in both the Dallas case and the new San Francisco decision, there was a lot of heated debate (and it was a largely Democratic board in SF) regarding the use of such robotic devices. It may be becoming more common, but it's still highly controversial.

And they aren’t autonomous, yet.

The nuance here is that someone can be accidentally killed if there is a bug or software glitch. Fortunately, there’s no such thing as buggy software.

I'm concerned about calling every electro-mechanical remote-controlled device a "robot." To me, robot = autonomous. Calling all these devices "robots" muddies the issue.

Classic. (I’d reply using only an emoticon if it were allowed.)

That thing was a bomb disposal robot with some kind of explosive device on one of its arms.

In effect, it was a “land drone” more than a “robot”, in that it was remotely controlled and actuated by the police.

Conceptually, there's no difference between this thing and the drones the Ukrainians are using to drop grenades on the Russians: one uses fans, the other uses treads, and that's it.

And really, not much difference between that and painting a target with a laser and letting a missile home in on it either, and we’ve been doing that for decades.

And where do we draw the line between a "killer robot" and something like a skeet-submunition carrier that you just fire into the general vicinity of enemy troops, which then ejects submunitions that each independently identify and destroy targets?

If a self-driving car is skidding toward a group of five people, but still has time to swerve toward just one person…

The closest things we have to killer robots are cruise missiles. They are mobile entities that use their own senses to make an autonomous decision to detonate.

The decision to detonate is already made by the human operator; their autonomy is limited to finding the location where they’re supposed to detonate.

I recall reading that there were some problems with this in the second Gulf War: missile navigation failed because the landmarks the missiles were searching for had already been destroyed by earlier missiles. :person_facepalming:

That goes deep into history. Right after R.U.R. hit the culture, all electro-mechanical remote devices were called robots. During WWII the German V-1s were called robot bombs. The use of robots in war was an obsession of science fiction writers in the 1950s.

Something everybody forgets about the I, Robot stories is that robots were banned from Earth right after they were introduced. That's why the early-'40s stories were all set in space, and why Stephen Byerley couldn't reveal himself. Asimov saw the fear of rational machines from the start.

Being afraid of rational machines is like being afraid of rational people. You have to be, but you can’t live your entire life in a state of fear.

I didn’t realize it went back that far. I remember automated mechanical arms used on assembly lines being called “robots” in the 1980s, but I didn’t know the word was applied to earlier devices (probably because I wasn’t alive then).

It’s a very slippery slope for sure. One we’re some distance down already.

When a fighter pilot pushes their red button to launch a missile at a blip on their scope, 100% of the data that went into their decision came through a computer. The computer told them, based on gosh knows what sort of weird RF science: "That is an enemy aircraft of type X."

To be sure, humans up the chain of command must have given general authorization to shoot at blips rather than waiting to confirm the enemy with their own eyeballs. But that decision by the military bureaucracy to shoot at blips amounts to a decision to defer substantially all the decision-making that matters to machines. Yes, ultimately the pilot chose to push their button. But in the absence of any practical ability to add more knowledge to the decision, was it really their decision?

The same thing certainly obtains when ships fire at ships, or ground-based missiles fire at airplanes. The human exercising the final "control" over whether to fire isn't really in control if their procedural flowchart says "shoot at apparently hostile blips."

There is lots of ferment in the military right now about the overall implications as more and more of warfare becomes computer-mediated, and eventually computer-controlled. So-called human-in-the-loop decision-making is known to be too slow for many tasks, especially on defense. So they're migrating toward so-called "human-on-the-loop" decision-making, which amounts to letting the computers make and implement attack decisions while the human has a chance to push a "stop" button, but only after at least some shots have been fired.
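
To make the in-the-loop/on-the-loop distinction concrete, here's a deliberately toy sketch in Python. Everything in it is invented for illustration: the track names, the threat threshold, and the idea that a single "approve" or "abort" callback stands in for a human operator. The only point is where the human sits relative to the shot.

```python
# Toy contrast between "human-in-the-loop" (a human must approve before any
# shot) and "human-on-the-loop" (the system fires on its own and the human
# can only abort afterward). All names and thresholds are made up.
from dataclasses import dataclass


@dataclass
class Track:
    track_id: str
    threat_score: float  # hypothetical classifier output, 0.0 to 1.0


def fire(track: Track) -> None:
    print(f"engaging {track.track_id}")


def cease_fire(track: Track) -> None:
    print(f"aborting engagement of {track.track_id}")


def human_in_the_loop(track: Track, operator_approves) -> bool:
    """No shot is taken unless a human explicitly approves this track."""
    if track.threat_score < 0.9:
        return False
    if not operator_approves(track):  # human decides *before* anything fires
        return False
    fire(track)
    return True


def human_on_the_loop(track: Track, operator_abort) -> bool:
    """The system engages automatically; the human can only stop further shots."""
    if track.threat_score < 0.9:
        return False
    fire(track)                  # first shot happens without human approval
    if operator_abort(track):    # the veto only arrives after the fact
        cease_fire(track)
        return False
    return True


if __name__ == "__main__":
    blip = Track("blip-42", threat_score=0.95)
    human_in_the_loop(blip, operator_approves=lambda t: False)  # nothing fires
    human_on_the_loop(blip, operator_abort=lambda t: True)      # fires, then aborts
```

Structurally the two functions differ only in whether the human callback runs before or after the fire() call, which is exactly the shift being debated.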

Not going to be pretty.

There’s a scene in one of the Danny Dunn books that calls a simple thermostat a “robot.”