
22 March 2021

ARTIFICIAL INTELLIGENCE, AUTONOMY, AND THE RISK OF CATALYTIC NUCLEAR WAR

James Johnson 

In 2016, the AWD News site reported that Israel had threatened to attack Pakistan with nuclear weapons if Islamabad interfered in Syria. In response, Pakistani Defense Minister Khawaja Muhammad Asif issued a thinly veiled threat, warning Israel on Twitter to remember that Pakistan—like Israel—is a nuclear-armed state. Luckily, the report was false and was subsequently debunked by the Israeli Ministry of Defense.

This incident puts a modern, and alarming, spin on the concept of “catalytic nuclear war”—in which the actions of a third party (a state or nonstate actor) provoke a nuclear war between two nuclear-armed powers—and demonstrates the potentially severe damage that misinformation and the manipulation of information by a third party can cause. During the Cold War, the main concern about catalytic nuclear war centered on the fear that a small or new nuclear power would deliberately set a major exchange in motion between the United States and the Soviet Union. As its name suggests, the concept was inspired by chemical reactions in which the catalyzing agent (the third-party actor) remains unscathed by the process it initiates. As we know, however, a catalytic nuclear war never occurred.

In the digital era, a chain reaction of retaliation and counter-retaliation set in motion by a third-party actor’s deliberate action is fast becoming a more plausible scenario. The concept of catalytic nuclear war—considered by many as unlikely given the low probability of a terrorist group gaining access to nuclear weapons—should be revisited in light of recent technological change and an improved understanding of human psychology. Specifically, humans are prone to fast, intuitive, reflexive, and heuristic judgments (known as “System 1” thinking), a propensity that is exacerbated when information overload and unfamiliar technologies become more prominent features of decision making. In short, emerging technologies—most notably cyber, AI, and drones—are rapidly creating new (and exacerbating old) low-cost and relatively easy means for state and nonstate actors to fulfill their nefarious goals. This is compounded by the exponential rise in the volume of data emerging from today’s information ecosystem—and the speed at which it circulates—which will create novel attack pathways to manipulate and propagate misinformation and disinformation in times of crisis.

Emerging Technology and Nuclear Stability

In theory, a third-party actor might target the early-warning satellites and radar systems of nuclear command, control, and communication systems with AI-enhanced cyber capabilities, gaining the “power to hurt” without any physical contact with, or manufacture of, nuclear weapons. While such an attack would likely be beyond the abilities and resources of opportunistic and less sophisticated nonstate actors, emerging technology such as AI and autonomy will likely lower this threshold, creating new pathways to accomplish nefarious goals, especially through manipulation and distortion of the information landscape.

Recent developments in AI-enabling technology have exacerbated these vulnerabilities and introduced additional tools for third-party actors to exploit. A key risk is that these technologies could precipitate a catalytic nuclear war by enabling the manipulation of the information landscape in which decisions about nuclear weapons are made, in particular through social media manipulation and the spread of misinformation, false memes, and fake news. The consequences could be catastrophic during a crisis involving two nuclear powers if communication systems are compromised, nuclear arsenals are on high alert, decision-making timeframes are compressed, or launch authority is pre-delegated (e.g., to nuclear-armed submarine commanders).

A good case in point is the 2019 India-Pakistan crisis, which brought two nuclear-armed adversaries dangerously close to the brink of catastrophe. The coalescence of cross-border terrorist attacks, disputed territory, two growing nuclear arsenals, and a fraught social media environment—including implicit nuclear threats—makes it easy to imagine how the actions of a motivated actor might spark an accidental nuclear crisis.

Furthermore, in a high-pressure crisis environment, with confusion and paranoia running high, the risk of misperceiving an adversary’s intentions and behavior (e.g., putting nuclear arsenals on high-alert status or nonroutine troop movements) increases, along with the temptation for preemptive action. That is, the catalyzing action of a state or nonstate actor could create the impression of an imminent attack on one or both of two nuclear-armed states, for which preemption is considered the most profitable strategy. What factors might aggravate these escalation pathways during a crisis?

Risk of Catalytic Nuclear War in the Digital Age

Three features of the digital age—information complexity; greater automation of nuclear command, control, and communication systems; and mis- and disinformation—make nuclear crisis management more difficult than in the past. These variables do not, however, constitute mutually exclusive risk scenarios. Rather, the interplay between these conditions might allow them to feed into one another with uncertain and potentially self-reinforcing effects. In short, these conditions are a function of the confusion and uncertainty created by the sociotechnical complexity of the digital age.

Information Complexity

As nuclear states (notably the United States, Russia, and China) modernize and overhaul their outdated nuclear command-and-control systems, they face a multitude of challenges, including information warfare, information manipulation, commingled nuclear and conventional weapons systems, and the risks posed by cyberattacks.

A key characteristic of operating modern nuclear systems—especially those designed to support both conventional and nuclear operations and infused with advanced technology such as AI, machine learning, big-data analytics, and cyber capabilities—is the vast quantity of data and information collected to inform decision making. The complex interactions within tightly enmeshed, commingled systems that control nuclear weapons (e.g., early-warning satellites; intelligence, surveillance, and reconnaissance; electronic data networks; and missile defenses) have become a critical risk in the digital age.

Going forward, this complexity will mean that the risk of accidents, short of a nuclear detonation, will continue to grow. Further, the risks of technical and human errors that arise from this complexity and interdependency in modern nuclear systems are compounded by the prospect of cyberattacks against early-warning and command-and-control systems. Minuteman nuclear missile silos, for example, are considered by some to be particularly vulnerable to cyberattacks. Moreover, nuclear-powered ballistic missile submarines, once believed to be air-gapped and hack-proof, are connected via various electromagnetic signals that make them more susceptible to cyberattacks. Submarines are particularly vulnerable to malware introduced to their networks during the procurement and construction phases, and when they are docked for maintenance, refurbishment, and software updates.

While AI and machine-learning-augmented technology (e.g., pattern recognition technology) will significantly enhance states’ intelligence, surveillance, and reconnaissance capabilities, the introduction of multiple data streams could equally overwhelm the ability of human decision makers to determine the credibility of data—particularly if the provenance and validity of the information cannot be easily verified.

The US Air Force, for example, has characterized this phenomenon as four “V’s”: higher volume (the collection of more data points); velocity (data acquired at rapid speed); variety (numerous formats of information from diverse sources); and veracity (the data collected includes a substantial amount of noise and redundancy). Similarly, the US Navy has reported being overwhelmed by the floods of data generated by its existing information-gathering systems.
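
To make the veracity problem concrete, the following is a minimal, purely illustrative sketch in Python using invented numbers (they are not drawn from this article or any official source). It shows how even a highly accurate automated screening filter, applied to millions of mostly routine reports, generates a flood of false alarms that swamps the handful of genuine signals.

```python
# Illustrative only: hypothetical numbers showing how noise scales with volume.
# A screening filter with 99% sensitivity and a 1% false-positive rate is applied
# to a day's worth of incoming reports, of which only a tiny fraction are genuine.

daily_reports = 5_000_000        # hypothetical total sensor/intelligence reports per day
genuine_signals = 10             # hypothetical number of reports reflecting a real event
sensitivity = 0.99               # probability a genuine signal is flagged
false_positive_rate = 0.01       # probability a routine report is flagged anyway

true_alerts = genuine_signals * sensitivity
false_alerts = (daily_reports - genuine_signals) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"Alerts raised per day: {true_alerts + false_alerts:,.0f}")
print(f"Probability a given alert is genuine: {precision:.5%}")
# With these assumptions, roughly 50,000 alerts are raised per day and fewer than
# 0.02% of them are genuine: the "veracity" problem the four V's describe.
```

The point is not the specific numbers but the structural problem: volume and velocity amplify, rather than resolve, the question of veracity.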

The quantity of information generated by the advanced technology supporting nuclear command, control, and communication systems can increase escalation risks in three crucial ways. First, human decision makers’ dependence on the data produced by complex and enmeshed nuclear command, control, and communication systems can degrade the quality of decision making, particularly if cyberattacks compromise the reliability of those systems.

Second, decision makers overwhelmed with information might be willing to take more risk. During a crisis, information inadequacy (or “information asymmetry”) can prompt decision makers to eschew traditional caution and the acute fear of escalation in favor of preemption. As a result, the risk of inadvertent escalation increases. In sum, the prospect of escalatory crises originating in (or being exacerbated by) the evolving information ecosystem will continue to rise because of the inherent speed, scope, and opacity of cyber capabilities.

Third, advanced weapons systems can qualitatively improve the “always-never” criteria (i.e., nuclear weapons must always be ready when orders are given by a legitimate authority but never used by accident or by unauthorized persons) that nuclear command, control, and communication systems must meet. Yet the complexity and uncertainties introduced by sophisticated command-and-control mechanisms (especially early-warning satellites and radars) can also cause errors, unexpected interactions, and unintended consequences. In combination, these factors can upend deterrence and create rational (or subrational) incentives to escalate a situation. This trade-off is a product of the organizational and strategic-cultural variables in the human decision-making process that create pressures to escalate, rather than of technology acting as an independent variable.

The assumption that a nuclear-armed adversary is rational, or that it possesses more information than it actually does, may stem from misperceptions about the nature of an opponent’s technological capabilities, the origins of a crisis, or an opponent’s intentions. These factors could exacerbate escalation pressures on states with less robust nuclear command, control, and communication systems and safeguards, or on those that perceive their survival to be at stake (e.g., North Korea, Pakistan, or India). Furthermore, the normal (or peacetime) perception of events can shift during crises or geopolitical tension, when decision makers are more prone to harbor worst-case expectations and to see the things they expect or want to see—what is known as “cognitive consistency.”

At the height of the 1962 Cuban Missile Crisis, the United States took seriously a false report of an imminent Soviet attack. This mistake was made because of insufficient institutional checks and balances, despite an extensive early-warning system. A complete understanding of the relationship between advanced technology and strategic stability requires a deeper understanding of human tendencies, not merely technical capabilities.

Greater Automation of Nuclear Command, Control, and Communication Systems

Increasing automation levels in modern nuclear command, control, and communication systems—particularly those augmented and supported by AI technology—together with nefarious interference in cyberspace, will likely increase the risk of accidental nuclear war. The historical record demonstrates the vulnerability of nuclear command, control, and communication systems to frequent false alarms, accidents, “close calls,” and other risks associated with increasingly complex, porous, and interconnected systems, which, despite their alleged “closed” nature, may offer actors multiple pathways to cause harm.

Several experts have suggested that integrating AI, machine learning, and autonomy into nuclear command, control, and communication systems may strengthen nuclear safety. They argue that new technologies will improve situational awareness, reduce information overload, and enhance the speed and scope of intelligence collection and processing. Others contend that the introduction of AI and other emerging technology could create new vulnerabilities and sources of error, which motivated actors will inevitably seek to exploit. These concerns have prompted some to propose that nuclear states commit to retaining a human role in decisions relating to nuclear command, control, and communication (especially early-warning systems) and the nuclear enterprise more broadly.

In a similarly preventive vein, some have called for states to prohibit the use of cyber capabilities against nuclear command, control, and communication systems. However, this would do little to preclude nonstate or state-sponsored cyber interference. In short, given the complex interactions and interdependencies of modern nuclear systems, technical solutions may well create new risks, vulnerabilities, and uncertainties that compound the existing dangers of catalytic escalation.

Because of the inherent problems of attribution in cyberspace, an actor might plausibly make both states the target of its attack, convincing each side that the other is responsible (i.e., a “double-sided catalytic attack”). This problem set is compounded by the shortened response timeframes available to decision makers, particularly where a “launch on warning” posture is in place. Moreover, the increasingly interdependent and commingled (or entangled) nature of states’ conventional and nuclear command-and-control systems might exacerbate the incentives to escalate a situation to the nuclear level once a conventional crisis or conflict begins.

This is particularly true if such risks are compounded by organizational failures, ambiguous information, misperceptions, or excessive trust in technology, which may lead human operators to delegate judgment to AI algorithms without fully understanding their limitations (a phenomenon known as “automation bias”). Such automation bias—especially in human-machine interactions—could also mean that both false negatives and false positives go unnoticed (or are dismissed) because operators are overconfident in systems augmented with advanced technology such as AI.
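
A minimal sketch of this failure mode follows, using a hypothetical alert classifier and invented probabilities (none of which come from this article or any real system): if operators independently scrutinize only those alerts the algorithm reports low confidence about, any confident misclassification passes to decision makers unchallenged.

```python
# Illustrative sketch of automation bias with a hypothetical alert classifier.
# Assumption: operators independently scrutinize an alert only when the algorithm's
# reported confidence falls below a threshold; above it, they defer to the machine.
import random

random.seed(0)

DEFER_THRESHOLD = 0.90   # hypothetical confidence above which operators defer

def classify(event_is_real: bool) -> tuple[bool, float]:
    """Stand-in for an AI early-warning classifier: returns (flagged, confidence).
    It is usually right, but occasionally wrong *and* confident."""
    correct = random.random() < 0.97
    flagged = event_is_real if correct else not event_is_real
    confidence = random.uniform(0.85, 0.99)
    return flagged, confidence

unreviewed_false_alarms = 0
for _ in range(100_000):
    event_is_real = random.random() < 0.001      # genuine attacks are very rare
    flagged, confidence = classify(event_is_real)
    operator_defers = confidence >= DEFER_THRESHOLD
    if flagged and not event_is_real and operator_defers:
        # A false positive the human never independently checks.
        unreviewed_false_alarms += 1

print(f"Confident, unreviewed false alarms out of 100,000 events: {unreviewed_false_alarms}")
```

The design choice being illustrated is the deference threshold itself: lowering it restores human review, but also reintroduces the information-overload problem described above.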

This challenge is deepened during a crisis, when stress, fatigue, information overload, and commingled (nuclear and conventional) systems encounter unanticipated situations between asymmetric nuclear rivals, thickening the fog of war—the inevitable uncertainty, misinformation, and even breakdown of organized units that influence warfare—and resulting in irrevocable actions when nuclear use seems like the only option.

Disinformation, Misinformation, and Information Manipulation

One rapidly developing and increasingly prominent field of AI-augmented technology is the ability to create audio and video that fabricate events, depict fictitious situations, and propagate falsehoods. These tools can complement and act as a force multiplier for existing malicious social manipulation, generating campaigns of manipulation—most notably the spread of misinformation and disinformation.

As AI technology advances, the improving quality, falling cost, and growing availability of generative and other tools—especially AI-enhanced audio software—will make it increasingly difficult to discern what is real from what is not, eroding public trust in hitherto trustworthy information sources. In 2014, for example, residents of St. Mary Parish, Louisiana, received a fake text message alert warning of a toxic fume hazard resulting from a chemical factory explosion in the area, and the story was propagated further via fake media outlets on social media. Further fanning the flames, a YouTube video was posted showing a masked, purported Islamic State fighter standing next to looping footage of an explosion. It is thus not difficult to imagine how these AI-enhanced technologies in the hands of actors with nefarious goals or an apocalyptic worldview might have dangerous consequences for nuclear security and strategic stability.

Deliberate, malevolent information manipulation by nonstate actors could have destabilizing implications for effective deterrence and military planning, in both peace and war. Deepfakes produced by generative adversarial networks might also exacerbate escalation risks by manipulating the digital information landscape in which decisions about nuclear weapons are made.
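
For readers unfamiliar with the underlying technique, the following is a minimal conceptual sketch of generative adversarial training, written in Python and assuming the PyTorch library. It learns to mimic a simple one-dimensional data distribution rather than faces or voices, but the structure is the same: the generator is optimized precisely against whatever the discriminator has learned to detect, which is one reason convincing fakes are difficult to screen out.

```python
# Minimal GAN sketch (PyTorch assumed): a generator learns to mimic "real" data
# (samples from a Gaussian) while a discriminator tries to tell real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM = 8

generator = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0            # "real" data: mean 4.0, std 1.5
    fake = generator(torch.randn(64, LATENT_DIM))    # candidate fakes

    # Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, LATENT_DIM))
print(f"Generated mean/std: {samples.mean().item():.2f} / {samples.std().item():.2f} (target: 4.00 / 1.50)")
```

Scaled up to imagery, video, and audio, this adversarial dynamic turns detection into an ongoing contest rather than a one-time fix.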

It is easy to imagine unprovoked escalation caused by a malicious third-party actor’s clandestine false-flag operation in a competitive strategic environment. During a crisis, a state’s inability to determine an attacker’s intent may lead the state to conclude that an attack—threatened or actual—was intended to undermine its nuclear deterrent.

The literature on crisis stability and nuclear deterrence generally assumes actors are rational, thus emphasizing how different capabilities and nuclear postures can affect crisis dynamics. Today, however, decision makers are exposed to greater volumes of misinformation, disinformation, and information manipulation. Increasingly sophisticated AI-enhanced techniques compound the dynamic identified by research suggesting that preexisting cognitive schemas, beliefs, and attitudes—rather than credulity or gullibility—determine whether the public believes particular fakery is real. In short, fake images and videos can achieve acceptance even when they can be easily debunked.

AI-enhanced fake news, deepfakes, bots, and other malevolent social media campaigns could also influence public opinion—creating false narratives or amplifying false alarms—with destabilizing effects on a mass scale, especially in times of geopolitical tension and internal strife. In 2017, for example, a deepfake video circulated on Russian social media alleging that a US B-52 bomber had accidentally dropped a ‘dummy nuclear bomb’ on a Lithuanian building. In short, AI-augmented technology is rapidly becoming another capability in state and nonstate actors’ toolkits for waging campaigns of disinformation and deception—and one that both sides of a nuclear dyad may find used against them.

Pathways to Nuclear Conflict

Consider the following fictional scenarios, in which AI-augmented capabilities in the hands of state and nonstate actors might accidentally or inadvertently drag a competitive nuclear dyad into conflict.

Cyber False-Flag Operation

Party A (a nonstate actor) launches a false-flag cyber operation—data manipulation, social media flooding, a spoofing attack, or other deception—against State B and State C. The operation is not traced to Party A and appears to both State B and State C to come from the opposing state. Convinced that the other state is responsible and believing it is about to be attacked from the opposite side of the nuclear dyad, State B launches a preemptive attack against State C, which State C views as unprovoked aggression—sparking a catalytic war.

During a crisis between two states, leaders would likely be predisposed to assume the worst about the other’s intentions, making them less likely to exercise rigorous due diligence in establishing high-confidence attribution of a cyberattack.

Cyberterrorism vs. Nuclear Early Warning System

During a period of heightened tension between State A and State B, a third-party actor floods social media outlets and open-source crowdsourcing platforms with false information (e.g., satellite imagery, 3D models, or geospatial data) about the suspicious movement of State A’s nuclear road-mobile transporter erector launchers. Unable to determine with confidence the veracity of this information, and under mounting pressure from its military command to respond, State B escalates the situation by launching what it believes to be a preemptive military strike.

Alternative outcomes from this fictional scenario are, of course, possible. For example, counter-AI systems might uncover the leak’s source or false nature before it can do severe damage. State A might also be able to reassure State B of the falsehood through backchannel or formal diplomatic communications. And while social media platforms have had some success in slowing down users’ ability to orchestrate manipulative and dangerous campaigns, once these operations (e.g., deepfakes and bots) go viral they are extremely difficult for human operators or machines to curtail.

Taken together, the increasing sophistication and accessibility of deepfake technology, the inherently dual-use nature of AI, the problem of attribution in cyberspace, the increasingly complex and interdependent nature of nuclear systems, and a compressed timeframe for strategic decision making associated with hyperspeed warfare will continue to lower the threshold for false-flag operations.

Deepfake Disinformation

To incite conflict between two nuclear-armed rival states, State A hires proxy hackers to release a deepfake video depicting senior military commanders of State B conspiring to launch a preemptive strike on State C. This footage is then deliberately leaked into State C’s AI-augmented intelligence collection and analysis systems, provoking it to escalate the situation with a retaliatory strike that State B perceives as unprovoked. State B, fearful of a decapitating strike and of losing the first mover’s advantage, swiftly escalates the situation. These dynamics might also be set in train once a crisis has begun—if, for example, in the aftermath of a high-casualty terrorist attack that triggers a period of heightened tension between nuclear-armed adversaries (e.g., India and Pakistan), a nonstate actor (potentially a state proxy) launches a propaganda campaign on social media, starting a spiral of escalation.

How can militaries maintain effective command and control of nuclear forces in a rapidly evolving, uncertain, and complex conflict environment? No amount or combination of controls, procedures, or technical enhancements can eliminate the possibility of catalytic nuclear escalation—or accidental escalation more broadly. Specific measures focused on reducing the likelihood of accidental nuclear war may, however, help mitigate some of the risks highlighted in this article—especially human and technical errors that occur in cyberspace.

These measures fall into three broad areas: first, enhancing the safety of nuclear weapons and hardening nuclear systems and processes (e.g., safeguards and risk analysis to strengthen nuclear systems against cyberattacks); second, improving command-and-control protocols and mechanisms (e.g., adding redundancies and enhancing launch protocols and authentication codes); and third, designing more robust safeguards to contain the consequences of errors and accidents when they occur (e.g., personnel training and collective monitoring of events). Social media could be used, for example, as an independent source to verify and corroborate government threat and crisis assessments, as well as to rapidly disseminate accurate and time-sensitive information during a crisis or conflict.
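
As a small, generic illustration of the kind of hardening and authentication the first two categories point to (a standard integrity pattern, not a description of any actual nuclear command-and-control protocol), a hypothetical alerting channel could authenticate messages with a keyed hash so that a spoofed or tampered report fails verification.

```python
# Illustrative only: a hypothetical alert channel authenticated with HMAC-SHA256.
# This is a generic integrity/authentication pattern, not any real NC3 protocol.
import hmac
import hashlib

SHARED_SECRET = b"pre-distributed-secret-key"   # hypothetical; real systems use managed keys

def sign_alert(message: bytes, key: bytes = SHARED_SECRET) -> bytes:
    """Attach an HMAC tag so the receiver can verify origin and integrity."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_alert(message: bytes, tag: bytes, key: bytes = SHARED_SECRET) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(hmac.new(key, message, hashlib.sha256).digest(), tag)

alert = b"SENSOR-07: anomalous launch signature detected"
tag = sign_alert(alert)

print(verify_alert(alert, tag))                      # True: authentic message
print(verify_alert(b"SENSOR-07: all clear", tag))    # False: spoofed or altered message
```

Authentication of this kind does not solve the disinformation problem on open platforms, but it narrows the attack surface for forged messages inside official channels.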

Ultimately, the catalytic escalation risks that emerge from the use of these technologies will depend on the relative strength of the destabilizing factors involved, how both sides perceive those factors, and the fear they instill in others. As decision makers ruminate over the ways and means by which the actions of state and nonstate actors might spark a catalytic nuclear exchange, they will need to consider how they would respond to such an attack and how an ostensibly appropriate response might become a self-fulfilling prophecy—a chain reaction of retaliation and counter-retaliation—that leads unintentionally to nuclear war.

Can arms control agreements encompass emerging technologies such as AI? How might nonproliferation look in the age of AI? In short, legacy arms control frameworks, norms, and even the notion of strategic stability itself will increasingly struggle to assimilate and respond to these fluid and interconnected trends. When the lines between dual-use, nuclear, and non-nuclear capabilities are blurred, arms control becomes much more challenging, and strategic competition and arms racing are more likely to emerge.
