14 May 2021

Worried about the autonomous weapons of the future? Look at what’s already gone wrong

By Ingvild Bode, Tom Watts

To the casual observer, the words “military AI” have a certain dystopian ring to them, one that’s in line with sci-fi movies like “Terminator” that depict artificial intelligence (AI) run amok. And while the “killer robots” cliché does at least provide an entry point into a debate about transformative military technologies, it frames autonomous AI weapons as a challenge for tomorrow, rather than today. But a close look at the history of one common type of weapons package, the air defense systems that militaries employ to defend against missiles and other airborne threats, illuminates how highly automated weaponry is a risk the world already faces.

As practical, real-world examples, air defense systems can ground a debate over autonomous weapons that’s often abstract and speculative. Heads of state and defense policymakers have made clear their intentions to integrate greater autonomous functionality into weapons (and many other aspects of military operations). And while many policymakers say they want to ensure humans remain in control over lethal force, the example of air defense systems shows the significant obstacles they face in doing so.

Weapons like the US Army’s Patriot missile system, designed to shoot down missiles or planes that threaten protected airspace, include autonomous features that support targeting. These systems now come in many different shapes and sizes and can typically be operated in manual or various automatic modes. In automatic modes, air defense systems can detect and fire on targets on their own, relegating human operators to the role of supervising the system’s workings and, if necessary, aborting attacks. The Patriot air defense system, used by 13 countries, is “nearly autonomous, with only the final launch decision requiring human interaction,” according to research by the Center for Strategic and International Studies.

Air defense systems have been used by militaries for decades. Researchers began developing some of the first so-called “close-in weapons systems” in the 1970s to provide warships with a last line of defense against anti-ship missiles and other high-speed threats. Modernized versions of these systems—including the Phalanx, which entered production in 1978—are still in use on US and allied warships. By one estimate, at least 89 countries operate air defense systems, and decades of operational use have shaped the role their human operators play.

Our research on the character of human-machine interaction in air defense systems suggests that over time, their use has incrementally reduced the quality of human oversight in specific targeting decisions. More cognitive functions have been “delegated” to machines, and human operators face considerable difficulty in understanding how these complex computer systems reach targeting decisions.

Maintaining appropriate human control over specific targeting decisions is particularly important when thinking about the concept of meaningful human control, which plays a prominent role in the international regulatory discussion on autonomous weapons systems. This is because, as previous research suggests, the brunt of a soldier’s or a military’s obligations under international humanitarian law (such as complying with the principles of distinction, proportionality, and precaution enshrined in the Geneva Conventions) applies to specific, battlefield decisions on the use of force, rather than to the development and testing of weapons systems.

A study of air defense systems reveals three real-world challenges to human-machine interaction that automated and autonomous features have already created. These problems are likely to grow worse as militaries incorporate more AI into the high-tech weapons of tomorrow.

Targeting decisions are opaque.

The people who operate air defense systems already have trouble understanding how the automated and autonomous features on the weapons they control make decisions, including how the systems generate target profiles and assessments. In part, that’s due to the sheer complexity of the systems’ internal workings; how many people understand the algorithms behind the software they use, after all? But high-profile failures of air defense systems also suggest that human operators are not always aware of known system weaknesses.

The history of Patriot systems operated by the US Army, for instance, includes several near-miss so-called “friendly fire” engagements during the First Gulf War in the 1990s and in training exercises. But as John Hawley, an engineering expert working on automation in air defense systems, argued in a 2017 report, the US Army was so convinced of the Patriot system’s successes that it did not want to hear words of caution about using the system in automatic mode. Rather than addressing the root causes of these deficiencies or communicating them to human operators, the military appears to have framed the issues as software problems that could be fixed through technical solutions.

Another problem that operators of air defense systems encounter is that of automation bias and over-trust. Human operators can be overly confident in the reliability and accuracy of the information they see on their screens, and they may not question the algorithmic targeting parameters provided to them by the machine. For example, the Patriot system was involved in two well-documented friendly-fire incidents and one near miss during the 2003 Iraq War. When a Patriot system shot down a Royal Air Force Tornado fighter jet over Kuwait in 2003, the British Ministry of Defense’s accident report said “the operating protocol was largely automatic, and the operators were trained to trust the system’s software.” But human operators need a more balanced approach; they need to know when to trust the system and when to question its outputs.

Dr Tom Watts is a Lecturer in Politics and International Relations at Hertfordshire University. His research focuses on American foreign and security...
