He continued to elaborate, saying, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
I seem to recall reading that when they tried reprogramming it “Killing operator BAD! You lose points for that” the AI then destroyed the tower sending it signals from the human operator and went back to smashing SAMs.
Eh, crap, ninja’d. Oh, well.
It does seem as if the Three Laws of Robotics aren’t going to work very well.
So I have sneaking suspicion the AI was setup here*. I mean would the brass really be happy with an outcome of “AI is awesome and the future, we see no drawbacks with sacking all these puny human soldiers and replacing them with AI”
Regardless of whether that is the case, my reading of this is not that this was anything like the software for an actual AI drone. It seems like it was just something they hacked together for the exercise?
The article has been updated with a denial from the Air Force:
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Air Force spokesperson Ann Stefanek told Insider. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Obviously the comments were meant to be anecdotal - it was an anecdote! Does Stefanek mean they were meant to by hypothetical? Cos this either happened or it didn’t and the first sentence in the quote above is pretty unequivocal.
I, for one, welcome your new ethical and responsible use of committed hypothetical anecdotal AI technology out of context. Disclaimer: Your privacy is important to us, please give generously. Don’t forget to subscribe, click on the follow button and fill out our customer satisfaction survey. Bits and prayers!
This isn’t “Skynet decides to eliminate the humans that are stopping it from completing its mission”, it’s “Skyrim NPC keeps killing the quest-giver because they have skooma in their inventory”.
Yep, classic alignment problem - make the thing capable of solving problems, then give it an objective, then change you mind and try to stop it; your intevention is now the problem to be ‘solved’.
Update 6/2/23 at 7:30 AM: This story and headline have been updated after Motherboard received a statement from the Royal Aeronautical Society saying that Col Tucker “Cinco” Hamilton “misspoke” and that a simulated test where an AI drone killed a human operator was only a “thought experiment.”