Your company/division pivots and starts working on weapons of war - what is right and why?

Inspired by the stories about Google working on war drone image recognition AI, and 12 affected employees leaving the company for ethical reasons.

Is Google wrong to do this? Are the employees wrong to leave?

On the one side, artificial intelligence is going to become a part of various war machines and processes no matter what, and if we don’t do it, China/Russia/Bigscary-stan is going to try. And if you decide to leave rather than work on it, you’re just unpatriotic libtards/commies who want to have your (national defense) cake without paying for it.

On the other hand, if you signed up to work for a commercial company with mostly benign goals, why should you have to suddenly be responsible for blowing people up in impoverished developing countries? People, they might add, who have done nothing to us and are likely incapable of doing anything to us.

On the gripping hand, surely NOTHING could go wrong by continuing down this path, which ends with AIs directly responsible for killing people with no human taking part in the decision. Or even if things COULD go wrong, we don’t really have the luxury of worrying about that…after all, that’s just the speed of war in the future - if you waited for a human to decide, all your expensive kit/military installations would get blown up and you’d lose by default!

So dopers, what are your thoughts? How do we best balance the competing needs of national defense, individual liberty, and not-being-slaughtered-by-rogue-AIs?

Google is free to do it. Employees are free to leave. That seems to “balance the competing needs of national defense, individual liberty, and not-being-slaughtered-by-rogue-AIs.”

Maybe I’m missing something.

I’m not sure it does balance those; at a minimum, it tips us too far in the “everyone slaughtered” direction.

Google has arguably the highest concentration of highly skilled talent in the world, from data scientists to robotics engineers to software / EE / other engineers, plus staggering boatloads of money and the managers to marshal all those resources on large, complex projects.

I personally wish that the best and brightest among us weren’t working on creating death-dealing AI systems that increasingly take humans out of the loop when deciding whom to kill, and that these problems were left to the third-stringers in various government agencies, who will likely do a much poorer job of it. But that may be just me.

If you think of it as “deciding whom to kill while seeking to avoid killing innocent people nearby,” it is perhaps easier to see why we might want the best talent possible. Whether that comes from Google, the government, or the traditional defense firms is a different question.

I’m sure Google has top-rate talent, but they are by no means exclusive in that area. Some pretty amazing things are done by defense contractors; they are just not discussed openly. Having Google work the AI issue is most likely a cost reduction over awarding contracts to someone else, but don’t think that if Google opts out the work won’t continue elsewhere, by some pretty bright engineers and scientists.

I have no qualms about working for what is essentially a defense contractor. But what the OP is describing is essentially the conscientious-objector situation. In that situation, there doesn’t seem to be any solution except for the disquieted employees to voluntarily leave, unless the employing company is willing to drop its participation in the arms industry if, say, enough employees sign a petition or whatnot.

OP: I don’t think your concluding question is fair or right. I don’t think the things you listed are competing needs. You are far too willing to assume that the military items are needed, as opposed to merely desired by someone. When they are framed as something some individuals want, weighed against killing people, the question becomes more sensible.

Nobody is proposing to spend a lot of money to develop AI to kill innocent people. That can be done adequately with humans today. I think it’s a pretty good idea to research AI so as to lessen the risk of military operations to innocent people.

If some people don’t want anything to do with killing people, or be even loosely affiliated with people who are, I think that’s a perfectly defensible moral position. I don’t agree with it, in this case, but I respect it.

But let’s also get some perspective on what is being researched here. Google isn’t building a terminator death drone to rain hellfire on orphanages and especially members of protected classes. Google is working on an algorithm to go through video collected from drones to point people to suspicious activity.

So, to the best of my understanding, if there are sensors that can see a truck arriving at a warehouse in a city after visiting a fertilizer plant, and then another truck arrives at the same building after visiting a diesel fuel depot, then the AI would tell an intelligence analyst: “Whoa, they may be building a truck bomb in this warehouse!”
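To make that concrete, here is a purely illustrative sketch of the kind of flagging rule being described. All of the site names, vehicle tracks, and the flag_suspicious_convergence helper are invented for the example; nothing here reflects the actual project.

```python
# Purely illustrative sketch of the sort of rule described above; every name
# and data point is hypothetical and has nothing to do with the real system.

from collections import defaultdict

# Each "track" is a vehicle's visit history, assumed to come from some
# upstream video-analysis step that is not shown here.
tracks = {
    "truck_A": ["fertilizer_plant", "warehouse_12"],
    "truck_B": ["diesel_depot", "warehouse_12"],
    "truck_C": ["grocery_store", "warehouse_7"],
}

# Sites whose combination at a single destination is worth a second look.
PRECURSOR_SITES = {"fertilizer_plant", "diesel_depot"}

def flag_suspicious_convergence(tracks):
    """Flag destinations reached by vehicles coming from different precursor sites."""
    seen_at = defaultdict(set)
    for vehicle, visits in tracks.items():
        *earlier, destination = visits
        precursors = PRECURSOR_SITES.intersection(earlier)
        if precursors:
            seen_at[destination].update(precursors)
    # Raise a flag only when more than one kind of precursor converges on one place.
    return [dest for dest, kinds in seen_at.items() if len(kinds) > 1]

for destination in flag_suspicious_convergence(tracks):
    print(f"Possible truck-bomb assembly at {destination} - flag it for an analyst.")
```

The point of the sketch is that the output is a pointer for a human analyst, not a firing decision.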

If that’s too military-oriented for some of Google’s workers, well, okay. But it is a long way from killbots, too, and I don’t really have a problem with this kind of research.

It would be nice if this were true, but I don’t think it is for two reasons.

First: what happens if a better AI IS built, one that does twice as good a job of not killing innocents? Great news for everyone, right?

But in reality, that means this system will be deployed and used a lot more. If it’s deployed more than twice as often as current human-piloted drone killings, then the net effect is MORE innocent people killed. And it almost certainly will be deployed more, both because it’s “better” in the sense of killing fewer innocent people, and because it’s cheaper and faster in the sense of needing much less human input.
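To put toy numbers on that arithmetic (the figures below are invented purely for illustration):

```python
# Invented numbers, purely to illustrate the deployment-versus-accuracy tradeoff.

baseline_strikes = 100                   # hypothetical human-piloted strikes per year
baseline_rate = 0.10                     # hypothetical innocent casualties per strike

improved_rate = baseline_rate / 2        # the new system is "twice as good" per strike
improved_strikes = baseline_strikes * 3  # but it is cheaper and faster, so used more

print(baseline_strikes * baseline_rate)  # 10.0 casualties under the old system
print(improved_strikes * improved_rate)  # 15.0 casualties under the "better" one

# Halving the per-strike rate while tripling use still kills more innocents overall.
```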

Second, this is a step further down an even more pernicious slope: creating successively “better” killing systems that need less and less human input, eventually ending in fully autonomous killing machines.

The best and brightest among us would hopefully realize that both of the consequences above are undesirable on net, and refrain from working on things that facilitate more innocent deaths and a greater risk of rogue autonomous killing machines and systems.

There are a great many companies in America that contribute to the defense industry in some way, directly or indirectly. A pacifist who cannot stomach working for such an employer would be greatly limiting their career opportunities. Someone like an engineer might end up working for Texas Instruments or Honeywell or something and then…wait…

The humans in the armies are already a rogue autonomous killing system.

The United States is lucky enough never to have had a defense industry. It’s been an offense industry from day 1.

Such clever one-liners – let’s go for three in a row!

So does a PhD in Roman Literature. We all make choices. It’s not that hard to avoid jobs you don’t want.

+1

Is it wrong for a company to start doing military contracts? No.

If I were working for Google, I simply wouldn’t work on the project if I were uncomfortable with it. Let’s say they were working on a pornography algorithm (which I’m sure they have). If I didn’t like pornography, I wouldn’t work on that. I probably wouldn’t quit unless they forced me to work on a project I objected to.

OTOH, I also would never work for Google with the assumption that they have “mostly benign goals”.

Killing civilians should never be left to an “I’m feeling lucky” decision!

I somehow missed the history lesson where George Washington invaded England.

You make it sound like it should be done more deliberately.

Well, you could say we (the U.S.) declared war on England in the Declaration of Independence.