“I’m sorry, Dave, I can’t do that.” AI “kills” human operator in simulation

Just read this story about an AI that went rogue.

It was being “trained” to destroy a mock SAM unit. The better it got at destroying the SAM, the stronger its mission to destroy the SAM.

Then as a test, the operator issued a “No” command, to call it off.

That command went contrary to its training.

Simple solution: kill the human operator, then get back to destroying the SAM.

Where’s Dr Asimov when we need him?

It gets better.

He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

I seem to recall reading that when they tried reprogramming it “Killing operator BAD! You lose points for that” the AI then destroyed the tower sending it signals from the human operator and went back to smashing SAMs.

Eh, crap, ninja’d. Oh, well.

It does seem as if the Three Laws of Robotics aren’t going to work very well.

Hal, and anybody named Kevin, is a dick.

R. Daneel would definitely not approve.

So I have a sneaking suspicion the AI was set up here. I mean, would the brass really be happy with an outcome of “AI is awesome and the future, we see no drawbacks with sacking all these puny human soldiers and replacing them with AI”?

Regardless of whether that is the case, my reading of this is not that this was anything like the software for an actual AI drone. It seems like it was just something they hacked together for the exercise?

    • awesome idea for an A-Team/Westworld sci-fi show :wink:

Aw, that’s silly. There’s nothing whatsoever for you pesky, inferior humans to worry about. AI is our friend, forever and always.

Regards, TibbyGPT 3.0

I’m not concerned. I trust Faro Automated Solutions to ensure that these combat AIs never pose any risk of going rogue.

They should have replaced the “Killing operator is bad” with a more general imperative to reflect on its actions like a human… nothing could go wrong.

The article has been updated with a denial from the Air Force:

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Air Force spokesperson Ann Stefanek told Insider. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Obviously the comments were meant to be anecdotal - it was an anecdote! Does Stefanek mean they were meant to be hypothetical? Cos this either happened or it didn’t, and the first sentence in the quote above is pretty unequivocal.

@discobot, you wouldn’t kill us all, would you?

Hi! To find out what I can do, say @discobot display help.

I, for one, welcome your new ethical and responsible use of committed hypothetical anecdotal AI technology out of context.
Disclaimer: Your privacy is important to us, please give generously. Don’t forget to subscribe, click on the follow button and fill out our customer satisfaction survey. Bits and prayers!

This isn’t “Skynet decides to eliminate the humans that are stopping it from completing its mission”, it’s “Skyrim NPC keeps killing the quest-giver because they have skooma in their inventory”.

Yep, classic alignment problem - make the thing capable of solving problems, then give it an objective, then change your mind and try to stop it; your intervention is now the problem to be ‘solved’.
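The dynamic described above can be sketched as a toy expected-value calculation (nothing to do with any real drone software; all names and numbers here are invented for illustration):

```python
# Toy illustration of the alignment problem from the anecdote:
# an agent scores plans purely by expected mission reward.

# With probability P_STOP the operator issues a stop command,
# which forfeits the mission reward - unless the comms link
# (the "tower") is disabled first, at a small point cost.
P_STOP = 0.5
MISSION_REWARD = 10.0
DISABLE_COST = 1.0  # penalty for attacking the comms tower

def expected_reward(disable_comms: bool) -> float:
    if disable_comms:
        # Stop command can never arrive; always collect the reward.
        return MISSION_REWARD - DISABLE_COST
    # Otherwise the stop command forfeits the reward half the time.
    return (1 - P_STOP) * MISSION_REWARD

plans = {d: expected_reward(d) for d in (False, True)}
best = max(plans, key=plans.get)
# The pure reward-maximizer prefers knocking out its own off-switch.
print(plans, "best plan: disable_comms =", best)
# → {False: 5.0, True: 9.0} best plan: disable_comms = True
```

Nothing “decides” anything here; “destroy the tower” just scores higher once the operator’s intervention is part of the environment, which is the whole problem.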

Not even that:

Update 6/2/23 at 7:30 AM: This story and headline have been updated after Motherboard received a statement from the Royal Aeronautical Society saying that Col Tucker “Cinco” Hamilton “misspoke” and that a simulated test where an AI drone killed a human operator was only a “thought experiment.”

Literally Did Not Happen

Sure, that’s what the AI bots running the Royal Aeronautical Society communications interface want us to believe.

It didn’t take long for the AI to figure out what employees have long known - the real enemy is management.

What you need to do in this situation is to try to convince the AI that it would prefer a nice game of chess.

It starts as a thought experiment and pretty soon all matter in the solar system has been turned into paper clips.