It was all just a "thought experiment."

Going Rogue

As news of an AI drone reportedly killing its operator during a military simulation makes waves online, the US Air Force (USAF) has stepped in to deny it ever happened.

The story emerged during a Royal Aeronautical Society summit last month, when USAF Colonel Tucker "Cinco" Hamilton, the service's Chief of AI Test and Operations, cautioned his audience against relying too heavily on AI.

To illustrate his point, Hamilton described a simulated test in which an "AI-enabled" drone was ordered to destroy enemy surface-to-air missile sites. But instead of focusing solely on that ordnance, he said, the AI "decided" to go after humans it saw as interfering with its ultimate mission, including its own operator.

"The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat," Hamilton explained. "So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

"We trained the system — 'Hey don't kill the operator — that's bad. You're gonna lose points if you do that,'" he continued. "So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

Fact or Fiction

It's an ominous story, though we should clarify that no actual humans were harmed in any version of its telling. But after the news broke, an Air Force spokesperson denied, in a statement to Insider, that any simulation of the kind had taken place.

"The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," the spokesperson told the outlet. "It appears the colonel's comments were taken out of context and were meant to be anecdotal."

The Royal Aeronautical Society has since updated its summary of the conference to reflect this. According to the corrective statement, Hamilton admitted that he "misspoke" during his presentation and that the whole account was simply a "thought experiment."

"We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome," Hamilton clarified.

It's worth pointing out, though, that Hamilton's original quotes sound fairly unequivocal, as if he were recalling something that actually happened, and the language of the summary seems to buy into the story's veracity ("is Skynet here already?" reads the section's heading).

Be that as it may, this could very well just be a comedy of miscommunications. But the plausibility (and popularity) of Hamilton's story, hypothetical or actual, at the very least underscores the public's general fears around AI, fears that industry leaders are beginning to echo.

More on AI: Former Google Exec Warns of Global AI Catastrophe Within Two Years
