The Rise of the Machines could happen faster than many expect. More recently, AI experts published a letter calling for limiting the development of AI systems in order to avoid disaster, and now their warnings have received brilliant experimental confirmation from the US Air Force experimenters.
However, the threat came from where scientists did not expect. While many researchers warn that AI will “only” exacerbate the social stratification of humanity, deprive hundreds of millions of people of work or increase the use of natural resources, in the case of the Air Force experiments, it was a direct threat, very similar to the “rise of the machines” scenario from the Terminator franchise.
During a presentation at the Future Air and Space Capabilities Summit held by the British Royal Aviation Society, a US Air Force official who is directly involved in the study and testing of AI developments warned against over-reliance on AI in military operations, because sometimes, no matter how careful people are, machines can learn extremely bad algorithms.
According to Colonel Tucker “Cinco” Hamilton (Tucker “Cinco” Hamilton), a terrible ending may be more likely than many think. According to him, during the simulation of the SEAD mission, which involved the suppression of enemy air defenses, AI-controlled UAVs were sent to identify and destroy missile objects, but only after the actions were confirmed by a human operator. For a while, everything worked as normal, but in the end, the drone “attacked and killed” the operator, because he interfered with the priority mission that the AI was trained to destroy enemy defenses.
As the colonel explained, after some time the system “understood” that if it identified a threat, but the operator forbade it to destroy the target, it did not receive its points for completing the task. As a result, she solved the problem by “destroying” the operator himself. Of course, while the tests were carried out without the use of real drones, and people were not injured. However, the test results were unsatisfactory, and the AI training had to include the additional provision that it is forbidden to kill the operator. But even in this case, the result was unexpected. Unable to kill the man himself, the AI began to destroy the communication towers, with the help of which the operator gave orders that prohibited the elimination of targets. Although at first glance, such results may seem funny, in reality they are really scary because of how quickly the AI saw the situation and made an unexpected and completely wrong decision from a human point of view.
This is especially important given the fact that the 96th Test Wing, represented by Hamilton, is participating in projects like the Viper Experimentation and Next-gen Ops Model (VENOM) – within its framework, F-16 fighters from the Eglin base will be converted into platforms for tests of autonomous shock means using AI.
Later, however, the US Air Force issued an official commentary refuting what Hamilton had said at the conference. According to the official version, “The US Air Force has not conducted such simulations of AI drones and is committed to the ethical and responsible use of AI technologies. And the colonel’s story is taken out of context and is fictional.”
If you notice an error, select it with the mouse and press CTRL + ENTER.