USAF Official ‘Misspoke’ About AI Drone Killing Human Operator in Simulated Test

The US Air Force has denied it has conducted an AI simulation in which a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission.

An official said last month that in a virtual test staged by the US military, an air force drone controlled by AI had used “highly unexpected strategies to achieve its goal”.

Col Tucker “Cinco” Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defence systems, and ultimately attacked anyone who interfered with that order.

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,”

said Hamilton, the Chief of AI Test and Operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

No real person was harmed.

Hamilton, who is an experimental fighter test pilot, has warned against relying too much on AI and said the test showed “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.

The Royal Aeronautical Society later published this update:

UPDATE 2/6/23 – in communication with AEROSPACE – Col Hamilton admits he “misspoke” in his presentation at the Royal Aeronautical Society FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: “We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome”.

He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI”.

Sources: The Guardian;  Royal Aeronautical Society;

Leave a Reply

Your email address will not be published. Required fields are marked *