10 March 2023

AI Nuclear Weapons Catastrophe Can Be Avoided

Noah Greene

In October 2022, the U.S. Department of Defense released its National Defense Strategy, which included a Nuclear Posture Review. Notably, the department committed to maintaining human control over nuclear weapons at all times: “In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.”

This commitment is a valuable first step that other nuclear powers should follow. Still, it is not enough. Commitments like these are time- and circumstance-dependent. The U.S. military does not currently feel the need to produce and deploy artificial intelligence (AI)-controlled nuclear weapons, in part because it does not see other nuclear powers engaging in similar behavior. Thus, the threat of an AI-enabled arms race is not a high-level concern for military planners. In the future, emerging AI capabilities will only increase the potential for disaster through the possibility of semiautonomous or fully autonomous nuclear weapons. To prevent this technology from ever entering the nuclear command-and-control structure, the five permanent members (P5) of the U.N. Security Council should lead a diplomatic negotiation among nuclear-armed states with the goal of producing an agreement that bans the research and development of semiautonomous and fully autonomous AI-enabled nuclear weapons. In this case, the P5 serve as shorthand for the most prominent nuclear-armed states.

In one sense, the landscape for debating possible legal frameworks for lethal autonomous weapons systems (LAWS) is quite dynamic. U.N. member states have discussed possible guardrails for these systems since 2014, first through informal meetings of experts and, since 2017, through the Group of Governmental Experts (GGE) on LAWS, which has convened at multiple plenaries. So far, the results have been less than promising. Member states have agreed only to 11 guiding principles in this area, rather than a LAWS treaty or a protocol that could be added to existing treaties. These principles are better than a complete absence of supranational guidance, but they are nonbinding and carry no enforcement mechanism. This is compounded by the fact that the principles mostly reference existing agreements rather than treading new ground. Given that the guidelines are nonbinding anyway, they could at least be more ambitious.

In another sense, the absence of a firm agreement in this area also provides a key insight into the perceptions of U.N. member states: A crisis that involves LAWS-related systems is considered to be an issue for the future, not today.

However, autonomous weapons in this vein are far from abstract. During the Cold War, Soviet military planners developed and deployed a semiautonomous nuclear system known as Perimeter. In the event of nuclear war, Perimeter was designed to launch the Soviet Union’s vast missile arsenal without express guidance from central command. In theory, a human first activated the system; network sensors then determined whether the country had been attacked. If the sensors indicated an attack, the system would check with leaders at the top of the command-and-control structure for confirmation. If no response came, the authority to launch the missiles fell to a designated official. This was essentially an attempt to guarantee mutually assured destruction even if the central government were decapitated, the so-called “dead hand” scenario.
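
To make that sequence concrete, here is a minimal sketch of the decision flow described above, written in Python. It is an abstraction of public accounts of Perimeter only; every name, type, and step boundary is hypothetical, not drawn from any real system.

```python
from typing import Optional

# Illustrative abstraction of the decision flow publicly attributed to
# Perimeter. All names are hypothetical; no real interface is implied.
def perimeter_decision(activated_by_human: bool,
                       sensors_detect_attack: bool,
                       command_response: Optional[str]) -> str:
    if not activated_by_human:
        return "dormant"            # step 1: a human must switch the system on
    if not sensors_detect_attack:
        return "keep monitoring"    # step 2: network sensors look for an attack
    if command_response is not None:
        return "defer to command"   # step 3: leadership is alive and responding
    # step 4: no response from leadership; launch authority passes to a
    # designated official, a human, rather than to the machine itself
    return "hand off to designated official"

# Example: sensors detect an attack and leadership does not respond.
print(perimeter_decision(True, True, None))  # hand off to designated official
```

Even in this Cold War design, note that the final decision to launch still rested with a person; the systems discussed below are those that would automate that last step.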

A lack of urgency in banning such weapons stems from concerns about long-term international security. At its core, states do not want to make a commitment that could negate a first-mover advantage in adopting certain AI systems, nor do they want to lock themselves out of becoming early adopters should their adversaries decide to field such systems. AI-enabled nuclear weapons are particularly concerning because of their civilization-destroying potential. As James Johnson highlighted in War on the Rocks last year, the question of AI technology being integrated into nuclear mechanisms is not a question of if, but “by whom, when, and to what degree.” If viewed along a spectrum, the most extreme degree of AI involvement would be a nuclear weapons system capable of identifying targets and firing on them without human approval. The second most extreme would be a system capable of firing on a target independently once a human has locked the target into the system. Neither of these systems is known to exist, but the future environment for riskier research in this area is far from certain, and both scenarios could be catastrophic. They would also increase the chances of a “broken arrow” incident, in which a nuclear weapon is accidentally released. To better humanity’s odds of survival, a total ban on these weapons through a P5-led agreement would be a substantial step forward.
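
That spectrum can be stated schematically. The sketch below orders the degrees of AI involvement described above; the level names and the helper function are invented for illustration and do not come from any treaty text or military doctrine.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Hypothetical ordering of AI involvement in nuclear launch decisions."""
    HUMAN_IN_THE_LOOP = 0   # a human makes every decision critical to
                            # employment (the stated 2022 U.S. commitment)
    SEMIAUTONOMOUS = 1      # a human locks in the target; the system may
                            # then fire on it without further approval
    FULLY_AUTONOMOUS = 2    # the system identifies targets and fires on
                            # them without any human approval

def banned_under_proposal(level: AutonomyLevel) -> bool:
    """The agreement proposed here would prohibit any level above
    HUMAN_IN_THE_LOOP, covering both extremes of the spectrum."""
    return level > AutonomyLevel.HUMAN_IN_THE_LOOP

print(banned_under_proposal(AutonomyLevel.SEMIAUTONOMOUS))  # True
```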

In the past, major nuclear powers have negotiated agreements limiting nuclear proliferation in general and weapons testing in particular. These agreements include the Nuclear Test Ban Treaty (NTBT), which barred tests in the atmosphere, in outer space, and under water, and the Nuclear Non-Proliferation Treaty (NPT). The thinking of the major nuclear powers (the U.S., the U.K., and the Soviet Union) when negotiating the NTBT was similar to today’s thinking on this issue: namely, that there should be limits on how and where nuclear weapons are tested. Similarly, when the NPT was first signed, its parties judged that the risk of nuclear war and destruction outweighed the strategic benefits of unmitigated weapons research and development. Although AI and nuclear technology have changed drastically since that period, the core elements of those arguments still hold today.

Russia’s February 2022 invasion of Ukraine revived the specter of a nuclear exchange. Russian President Vladimir Putin has engaged in nuclear signaling to deter Western involvement in the conflict. This comes as both Russia and the U.S. have invested in hypersonic missiles, some of which are designed to evade defensive measures. That evasive capability may well come to include some form of machine learning, allowing a warhead to use onboard optical sensors to identify and evade interceptors launched by a target’s defenses. China, meanwhile, has expanded its nuclear arsenal. All of these developments have elevated the need for another wide-sweeping nuclear weapons treaty. A ban on nuclear weapons that are not fully under human control is an area of diplomatic discussion likely to command more agreement than a general ban on nuclear weapons; it is always worth noting that neither nuclear war nor broken arrow incidents are in any state’s interest. A treaty among today’s nuclear powers could begin with trilateral talks among the U.S., Russia, and China. Once a baseline of interest in an agreement is established, a broader summit with the other P5 members could be convened.

It is reasonable to question the plausibility of such an agreement in the current security environment between the East (Russia and China) and the West (the U.S., France, and the U.K.). But even at the height of Cold War antagonism, the U.S. and the Soviet Union managed to reach significant nuclear agreements, despite a deep mutual distrust that, at times, brought humanity to moments of heightened brinkmanship, such as the Cuban missile crisis.

To illustrate the need for an international nuclear agreement, imagine the following scenario. The year is 2050. The Russia-Ukraine war has long since ended, but in the intervening decades both Russia and China have updated their nuclear arsenals, and both now maintain semiautonomous and fully autonomous missile systems within their command-and-control structures. The adoption of this technology forces U.S. military officials to rethink the commitment, made nearly 30 years earlier, to always keep a human in the loop in U.S. systems. Considering these developments, the Department of Defense abandons its commitment and follows the example set by other nuclear powers, investing in similar systems. In this same world, North Korea, where Kim Jong Un is no longer in charge, fears that the new supreme leader, Kim Yo Jong (his sister), will be assassinated. Planning for this possibility, and fearing the simultaneous decapitation of its government, the country places into service multiple evasive hypersonic nuclear missiles capable of declaring themselves “weapons free” if onboard sensors detect the seismic shockwave of an attack.
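
The failure mode built into that last detail is worth making explicit. The sketch below is purely illustrative (the threshold, names, and readings are all invented): a threshold rule cannot distinguish an attack from any other event that shakes the ground, and automating the trigger removes the one check that has historically caught false alarms.

```python
# Purely illustrative: why an automated seismic trigger concentrates risk.
# The threshold and readings are invented for this example.
SHOCKWAVE_THRESHOLD = 0.9   # hypothetical normalized seismic reading

def autonomous_weapons_free(reading: float) -> bool:
    """The scenario's rule: declare 'weapons free' on any strong shockwave."""
    return reading > SHOCKWAVE_THRESHOLD

def human_in_the_loop(reading: float, human_confirms_attack: bool) -> bool:
    """With a human in the loop, an anomalous reading can still be vetoed."""
    return reading > SHOCKWAVE_THRESHOLD and human_confirms_attack

# A major earthquake near the sensor network produces the same signal:
earthquake = 0.95
print(autonomous_weapons_free(earthquake))   # True: launch proceeds
print(human_in_the_loop(earthquake, False))  # False: a person vetoes
```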

This scenario has ample room for catastrophe, for both the international community and domestic leaders, should any of these weapons malfunction and produce nuclear fallout. As the case of Soviet Lt. Col. Stanislav Petrov taught us, when in 1983 he judged a satellite warning of an incoming U.S. strike to be a false alarm, human judgment is often all that stands between a false signal and disaster. Without a human firmly in control of the nuclear command-and-control structure, the odds of an unintended or uncontrolled nuclear exchange creep steadily upward. An agreement between nuclear powers on this issue, led by the P5 states, would be an important step toward rebuilding the patchwork of nuclear treaties that has dissolved over the past two decades. To do otherwise would be to flirt with an AI-enabled nuclear arms race.
