
20 November 2017

The critical human element in the machine age of warfare

Elsa B. Kania

In 1983, Stanislav Petrov helped to prevent the accidental outbreak of nuclear war by recognizing that an alert from Soviet early warning systems was a false alarm, not a report of an imminent US attack. In retrospect, it was a remarkable call made under enormous stress, based on a guess and gut instinct. If another officer had been in his place that night—an officer who simply trusted the early warning system—there could have been a very different outcome: worldwide thermonuclear war.

As major militaries progress towards the introduction of artificial intelligence (AI) into intelligence, surveillance, and reconnaissance, and even command systems, Petrov’s decision should serve as a potent reminder of the risks of reliance on complex systems in which errors and malfunctions are not only probable, but probably inevitable. Certainly, the use of big data analytics and machine learning can resolve key problems for militaries that are struggling to process a flood of text and numerical data, video, and imagery. The introduction of algorithms to process data at speed and scale could enable a critical advantage in intelligence and command decision-making. Consequently, the US military is seeking to accelerate its integration of big data and machine learning through Project Maven, and the Chinese military is similarly pursuing research and development that leverage these technologies to enable automated data and information fusion, enhance intelligence analysis, and support command decision-making. Russian President Vladimir Putin, meanwhile, has suggested, “Artificial intelligence is the future, not only for Russia, but for all humankind… Whoever becomes the leader in this sphere will become the ruler of the world.”

To date, such current military applications of AI have provoked less debate and concern than fears of “killer robots” that do not yet exist. But even though Terminators aren’t in the immediate future, the trend towards greater reliance upon AI systems could nonetheless create risks of miscalculation caused by technical error. Although Petrov’s case illustrates the issue in extremis, it also offers a general lesson about the importance of human decision-making in the machine age of warfare.

It is clear that merely having a human notionally “in the loop” is not enough, since the introduction of greater degrees of automation tends to adversely affect human decision-making. In Petrov’s situation, another officer might very well have trusted the early warning system and reported an impending US nuclear strike up the chain of command. Only Petrov’s willingness to question the system—based on his understanding that an actual US strike would involve not just a few missiles but a massive fusillade—averted catastrophe that day.

Today, however, the human in question might be considerably less willing to question the machine. The known human tendency to rely on computer-generated or automated recommendations from intelligent decision-support systems can compromise decision-making. This dynamic—known as automation bias, an overreliance on automation that results in complacency—may become more pervasive as humans accustom themselves to relying more and more upon algorithmic judgment in day-to-day life.

In some cases, the introduction of algorithms could reveal and mitigate human cognitive biases. However, the risks of algorithmic bias have become increasingly apparent. In a societal context, “biased” algorithms have resulted in discrimination; in military applications, the effects could be lethal. In this regard, the use of autonomous weapons necessarily entails operational risk. Even lesser degrees of automation—such as the introduction of machine learning in systems not directly involved in decisions of lethal force (e.g., early warning and intelligence)—could contribute to a range of risks.

Friendly fire—and worse. As multiple militaries have begun to use AI to enhance their capabilities on the battlefield, several deadly mistakes have shown the risks of automation and semi-autonomous systems, even when human operators are notionally in the loop. In 1988, the USS Vincennes shot down an Iranian passenger jet in the Persian Gulf after the ship’s Aegis radar-and-fire-control system incorrectly identified the civilian airplane as a military fighter jet. In this case, the crew responsible for decision-making failed to recognize the system’s inaccuracy—in part because of the complexities of the user interface—and trusted the Aegis targeting system too much to challenge its determination. Similarly, in 2003, the US Army’s Patriot air defense system, which is highly automated and highly complex, was involved in two incidents of fratricide. In these instances, “naïve” trust in the system and the lack of adequate preparation for its operators resulted in fatal, unintended engagements.

As the US, Chinese, and other militaries seek to leverage AI to support applications that include early warning, automatic target recognition, intelligence analysis, and command decision-making, it is critical that they learn from such prior errors, close calls, and tragedies. In Petrov’s successful intervention, his intuition and willingness to question the system averted a nuclear war. In the case of the USS Vincennes and the Patriot system, human operators placed too much trust in and relied too heavily on complex, automated systems. It is clear that the mitigation of errors associated with highly automated and autonomous systems requires a greater focus on this human dimension.

There continues, however, to be a lack of clarity about human control of weapons that incorporate AI. Former Secretary of Defense Ash Carter has said that the US military will never pursue “true autonomy,” meaning humans will always be in charge of lethal force decisions and have mission-level oversight. Air Force Gen. Paul J. Selva, vice chairman of the Joint Chiefs of Staff, used the phrase “Terminator Conundrum” to describe dilemmas associated with autonomous weapons and has reiterated his support for keeping humans in the loop because he doesn’t “think it’s reasonable to put robots in charge of whether we take a human life.” To date, however, the US military has not established a full, formalized definition of “in the loop,” or of what is necessary to exercise the “appropriate levels of human judgment” over the use of force required by the 2012 Defense Department directive on “Autonomy in Weapons Systems.”

The concepts of positive or “meaningful” human control have started to gain traction as ways to characterize the threshold for giving weapon system operators adequate information to make deliberate, conscious, timely decisions. Beyond the moral and legal dimensions of human control over weapons systems, however, lies the difficult question of whether and under what conditions humans can serve as an effective “failsafe” in exercising supervisory weapons control, given the reality of automation bias.

When war is too fast for humans to keep up. The human tendency towards over-reliance on technology is not a new challenge, but today’s advances in machine learning, particularly the use of deep neural networks—and active efforts to leverage these new techniques to enable a range of military capabilities—will intensify the attendant risks.

Moreover, it remains to be seen whether keeping human operators directly involved in decision-making will even be feasible for a number of military missions and functions, and different militaries will likely take divergent approaches to issues of automation and autonomy.

Already, there has been the aforementioned transition to greater degrees of automation in air and missile defense, driven by the inability of humans to react quickly enough to defend against a saturation attack. Similar dynamics may be in play for future cyber operations, because of comparable requirements of speed and scale. Looking to the future potential of AI, certain Chinese military thinkers even anticipate the approach of a battlefield “singularity,” at which human cognition could no longer keep pace with the speed of decision and tempo of combat in future warfare. Perhaps inevitably, keeping a human fully in the loop may become a major liability in a number of contexts. The type and degree of human control that is feasible or appropriate in various conditions will remain a critical issue.

Looking forward, it will be necessary to think beyond binary notions of a human “in the loop” versus “full autonomy” for an AI-controlled system. Instead, efforts will of necessity shift to the challenges of mitigating risks of unintended engagement or accidental escalation by military machines.

Inherently, these issues require a dual focus on the human and technical dimensions of warfare. As militaries incorporate greater degrees of automation into complex systems, it could be necessary to introduce new approaches to training and specialized career tracks for operators. For instance, the Chinese military appears to recognize the importance of strengthening the “levels of thinking and innovation capabilities” of its officers and enlisted personnel, given the greater demands resulting from the introduction of AI-enabled weapons and systems. Those responsible for leveraging autonomous or “intelligent” systems may require a greater degree of technical understanding of the functionality and likely sources of fallibility or dysfunction in the underlying algorithms.

In this context, there is also the critical human challenge of creating an “AI-ready culture.” To take advantage of the potential utility of AI, human operators must trust and understand the technology enough to use it effectively, but not so much as to become over-reliant upon automated assistance. Decisions made in system design will be a major factor in this regard. For instance, it could be advisable to create redundancies in AI-enabled intelligence, surveillance, and reconnaissance systems, such that multiple methods ensure consistency with actual ground truth. Such a safeguard is especially important given the demonstrated vulnerability of deep neural networks, such as those used for image recognition, to being fooled or spoofed through adversarial examples—a vulnerability that an opponent could deliberately exploit. The potential development of “counter-AI” capabilities that might poison data or take advantage of flaws in algorithms will introduce risks that systems could malfunction in ways that may be unpredictable and difficult to detect.
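To make the adversarial-example vulnerability concrete, the brief sketch below illustrates the widely studied fast gradient sign method against an image classifier. It is a minimal sketch under assumed inputs: the model, image, label, and the fgsm_perturb helper are hypothetical placeholders for illustration, not systems discussed in this article.

```python
# Minimal sketch of the fast gradient sign method (FGSM), assuming a PyTorch
# classifier `model`, a batched input `image` with pixel values in [0, 1], and
# a tensor of true class indices `label`. All names are hypothetical.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`.

    A per-pixel step of size `epsilon`, aligned with the sign of the loss
    gradient, is often imperceptible to a human observer yet can flip the
    model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Even in this toy form, the perturbed image typically remains visually indistinguishable from the original, which is why redundant, independent checks against ground truth matter.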

In cases in which direct human control may prove infeasible, such as cyber operations, technical solutions to unintended engagements may have to be devised in advance. For instance, it may be advisable to create an analogue to “circuit breakers” that might prevent rapid or uncontrollable escalation beyond expected parameters of operation.
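As a rough illustration of what such a safeguard might look like in software, the sketch below implements a generic circuit-breaker guard that halts automated actions once their tempo exceeds expected parameters. The class name, thresholds, and reset behavior are all hypothetical assumptions, not a description of any fielded system.

```python
# Minimal sketch of a software "circuit breaker" for an automated system,
# assuming a hypothetical ceiling on how many actions it may take per time
# window. Names and thresholds are illustrative only.
import time

class CircuitBreaker:
    def __init__(self, max_actions=10, window_seconds=60.0):
        self.max_actions = max_actions        # expected ceiling for the window
        self.window_seconds = window_seconds
        self.timestamps = []                  # times of recent automated actions
        self.tripped = False

    def allow(self):
        """Return True if another automated action stays within expected parameters."""
        now = time.monotonic()
        # Keep only the actions that fall inside the current window.
        self.timestamps = [t for t in self.timestamps if now - t < self.window_seconds]
        if self.tripped or len(self.timestamps) >= self.max_actions:
            self.tripped = True               # halt further automated action
            return False
        self.timestamps.append(now)
        return True

    def human_reset(self):
        """Only a human operator clears a tripped breaker."""
        self.tripped = False
        self.timestamps.clear()
```

In this sketch, a tripped breaker stays open until a person resets it, which is the point: once the tempo of automated action exceeds its expected parameters, the decision to continue reverts to a human.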

While a ban on AI-enabled military capabilities appears improbable, and treaties or regulations could be too slow to develop, nations might still mitigate the risks that AI-driven systems pose to military and strategic stability through a prudent approach: pragmatic practices and parameters in the design and operation of automated and autonomous systems, including adequate attention to the human element.
