
12 May 2020

ARTIFICIAL INTELLIGENCE AND THE BOMB: NUCLEAR COMMAND AND CONTROL IN THE AGE OF THE ALGORITHM

James Johnson

In 2016, DeepMind’s AI-powered AlphaGo system defeated the professional Go player Lee Sedol. In one game, the AI player reportedly surprised Sedol by making a strategic move that “no human ever would.” Three years later, DeepMind’s AlphaStar system defeated one of the world’s leading e-sports players at StarCraft II—a complex multiplayer game played in real time across a vast action space with multiple interacting entities—devising and executing complex strategies in ways that, similarly, a human player would be unlikely to do. These successes raise important questions: How and why might militaries use AI not just to optimize individual and seemingly mundane tasks, but to enhance strategic decision making—especially in the context of nuclear command and control? And would these enhancements potentially be destabilizing for the nuclear enterprise?

Pre-Delegating Nuclear Decisions to Machines: A Slippery Slope

AI systems might undermine states’ confidence in their second-strike capabilities and, potentially, affect the ability of defense planners to control the outbreak of warfare, manage its escalation, and ultimately terminate armed conflict. The central fear rests on two related concerns: The first revolves around the potentially existential consequences of AI surpassing human intelligence—imagine the dystopian imagery associated with Terminator’s Skynet. The second emphasizes the possible dangers posed by machines that lack human empathy or other emotional attributes and relentlessly optimize pre-set goals (or, in self-motivated future iterations, pursue their own), with unexpected and unintended outcomes—picture something like Dr. Strangelove’s doomsday machine.


AI, functioning at higher speeds than human cognition and under compressed decision-making timeframes, might therefore increasingly impede the ability—or the Clausewitzian “genius”—of commanders to shape the action and reaction cycles produced by AI-augmented autonomous weapon systems. For now, there is general agreement among nuclear-armed states that even if technological developments allow it, decision making that directly impacts nuclear command and control should not be pre-delegated to machines—not least because of the “explainability” (or interpretability), transparency, and unpredictability problems associated with machine-learning algorithms.

Psychologists have demonstrated that humans are slow to trust information derived from algorithms (e.g., from radar systems or facial-recognition software). However, as the reliability of the information improves, the propensity to trust machines increases—even when evidence emerges suggesting a machine’s judgment is incorrect. This tendency of humans—to use automation (i.e., automated decision support aids) as a heuristic replacement for vigilant information seeking, cross-checking, and adequate processing supervision—is known as “automation bias.”

Despite humans’ inherent distrust of machine-generated information, once AI demonstrates an apparent capacity to engage and interact in complex military situations (e.g., wargaming) at a human (or superhuman) level, defense planners would likely become more predisposed to view decisions generated by AI algorithms as analogous to (or even superior to) those of humans—even if these decisions lacked sufficiently compelling “human” rationality and were characterized instead by fuzzy “machine” logic. AI experts predict that AI systems may reach that threshold by 2040, demonstrating an ability to play aspects of military wargames or exercises at superhuman levels.

A Human in the Loop is Not a Panacea

Human psychology research has found that people are predisposed to do harm to others if ordered to do so by an authority figure. As AI-enabled decision-making tools are introduced into militaries, human operators may begin to view these systems, by virtue of their comparatively greater intelligence, as agents of authority, and thus be more inclined to follow their recommendations, even in the face of information that indicates they would be wiser not to.

This predisposition will likely be influenced, and possibly expedited, by human bias, cognitive weaknesses (notably decision-making heuristics), false assumptions, and the innate anthropomorphic tendencies of human psychology. For example, US Army investigators found that automation bias was a factor in the 2003 Patriot missile fratricide incidents, in which operators mistakenly fired upon friendly aircraft early in the Iraq War.

Experts have long recognized the epistemological and metaphysical confusion that can arise from mistakenly conflating human and machine intelligence, a confusion that is especially consequential in safety-critical, high-risk domains such as the nuclear enterprise. Further, studies have demonstrated that humans are predisposed to treat machines that share task-oriented responsibilities as “team members,” and in many cases extend to them the same in-group favoritism they show toward other humans.

Contrary to conventional wisdom, having a human in the loop for decision-making tasks also does not appear to alleviate automation bias. Instead, human-machine collaboration in monitoring and sharing responsibility for decision making can produce psychological effects similar to those that occur when humans share responsibilities with other humans, giving rise to “social loafing”: the tendency of people to reduce their own effort when working redundantly within a group rather than individually on a task.

A reduction in human effort and vigilance caused by these tendencies could increase the risk of unforced errors and accidents. Moreover, reliance on automated decisions in complex and high-intensity situations can make humans less attentive to—or more likely to dismiss—contradictory information, and more predisposed to use automation as a heuristic replacement—a shortcut—for information seeking.

Regime Type and the AI-Nuclear Dilemma

The decision to automate nuclear capabilities might also be influenced by the regime type, political stability and legitimacy, and threat perceptions of a particular nuclear-armed state. An authoritarian, nuclear-armed regime—in China, North Korea, or Pakistan, for example—that fears either an internal coup or foreign interference may elect to automate its nuclear forces so that only a small circle of trusted officials is involved in the nuclear enterprise.

For example, during the Cold War, the Soviet Union developed a computer program known as VRYAN, a Russian acronym for “Surprise Nuclear Missile Attack,” designed to warn Soviet leaders of an impending pre-emptive US nuclear strike. However, the data fed into the system were often biased, creating a feedback loop that heightened the Kremlin’s fear that the United States was pursuing first-strike superiority.
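To make the feedback-loop dynamic concrete, the toy sketch below is a hypothetical illustration only (it is not a reconstruction of VRYAN, whose internal workings are not fully public). It shows how a threat estimate that weights alarming indicators more heavily as alarm grows can drift steadily upward even when the underlying data are, on average, neutral.

# Toy model (hypothetical, for illustration only): a perceived-threat score is
# updated from raw intelligence "indicators" whose alarming values are weighted
# more heavily as the current threat level rises, so even neutral data tend to
# push the estimate upward over time.

import random

random.seed(42)

def collect_indicators(n):
    """Simulate n raw indicators in [-1, 1]; neutral (zero) on average."""
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def biased_update(threat, indicators, bias):
    """Inflate alarming (positive) indicators in proportion to current threat."""
    weighted = [x * (1.0 + bias * threat) if x > 0 else x for x in indicators]
    signal = sum(weighted) / len(weighted)
    return min(1.0, max(0.0, threat + 0.1 * signal))  # keep the score in [0, 1]

threat = 0.2  # initial perceived probability of a surprise attack
for week in range(1, 21):
    threat = biased_update(threat, collect_indicators(50), bias=3.0)
    if week % 5 == 0:
        print(f"week {week:2d}: perceived threat = {threat:.2f}")

The point of the sketch is simply that bias in how inputs are weighted, not in the raw data alone, can become self-reinforcing once alarm feeds back into interpretation.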

Currently, China maintains strict controls over its nuclear command-and-control structures (e.g., separating nuclear warheads from delivery systems), and the evidence does not suggest Beijing has pre-delegated launch authority down the chain of command in the event that a first strike decapitates the leadership. As a means to retain centralized command-and-control structures and strict supervision over the use of nuclear weapons, AI-enabled automation might become an increasingly attractive option to authoritarian regimes such as China.

Autocratic states would also likely perceive an adversary’s intentions differently than democratic states would if they believed the regime’s political survival (or legitimacy) were at risk, potentially leading leaders to adopt worst-case judgments and thus to behave in the manner predicted by offensive realist scholars. During a crisis or conflict, an assessment that the regime’s survival is at stake—especially when information flows are manipulated or communications compromised—would likely increase the appeal of expanding the degree of automation in the command-and-control process, in the hope that, by doing so, the regime might insulate itself against both internal and external threats.

Nondemocratic leaders operating in closed political systems such as China’s might exhibit a higher degree of confidence in their ability to respond to perceived threats in international relations. Biases from a nondemocratic regime’s intelligence services, for instance, might distort leaders’ view of their position vis-à-vis an adversary. If, for the reasons above, the regime has chosen to incorporate AI into its nuclear command-and-control structure, such a distortion could combine with compressed decision-making timeframes to become fundamentally destabilizing.

In short, nondemocratic nuclear states with relatively centralized command-and-control structures, those less confident in the survivability of their nuclear arsenals, and those whose political legitimacy and regime stability depend on the general acceptance of official narratives and dogma would likely be more persuaded by the merits of automation, and less concerned about the potential risks—least of all the ethical, cognitive, or moral challenges—associated with it.

Although Chinese statements pay lip service to the global regulation of military AI, China is already demonstrating a willingness to deploy the technology for at least some security purposes. It is pursuing a range of AI-related initiatives (e.g., the use of data for social surveillance to enable a social-credit scoring system and ubiquitous facial-recognition technology) focused on social stability and, in particular, on insulating the legitimacy of the regime against potential internal threats. By contrast, in the context of nuclear command and control, the political processes, accountability measures, nuclear-launch protocols, nuclear strategy and doctrine, mature civil-military relations, and shared values between allies (e.g., the United States and its NATO allies) in democratic societies should make them less predisposed—or at least more hesitant—to use AI in the nuclear domain.

Technological developments are forcing strategists to contend with a world in which command and control of nuclear weapons could become increasingly enabled by AI. In a sense, that represents a dramatic change. Yet the framework in which that change could potentially take place is largely consistent with that which has defined most of the nuclear age. Today, states face contradictions, dilemmas, and trade-offs regarding the decision about whether or not to integrate AI and autonomy into the nuclear enterprise—just as leaders have faced in the quest for strategic stability, effective deterrence, and enhanced security in a multipolar nuclear world more generally.

James Johnson is a Postdoctoral Research Fellow at the James Martin Center for Nonproliferation Studies at the Middlebury Institute of International Studies at Monterey. His latest book project is entitled Artificial Intelligence & the Future of Warfare: USA, China, and Strategic Stability. Twitter: @James_SJohnson.
