14 May 2022

Killer Robots Are Here—and We Need to Regulate Them

Robert F. Trager and Laura M. Luca

Swarms of robots with the ability to kill humans are no longer only the stuff of science fiction. Lethal autonomous weapons systems (LAWS) are here. In Ukraine, Moscow has allegedly deployed an artificial intelligence (AI)-enabled Kalashnikov ZALA Aero KUB-BLA loitering munition, while Kyiv has used Turkish-made Bayraktar TB2 drones, which have some autonomous capabilities. Although it is difficult to verify whether a weapon's autonomous mode has actually been engaged, these technologies have reportedly been used autonomously in at least one conflict: Last year, a United Nations report suggested Turkey's Kargu-2 drones autonomously fired on fleeing soldiers in Libya's civil war (though the CEO of the Turkish company that produced the drone denies the weapon is capable of this).

Unlike traditional drones, these systems can navigate on their own, and some can select targets. Although a human controller can still decide whether to strike, such weapons are acquiring ever more autonomous capabilities. Now that militaries and paramilitaries worldwide have taken note, these technologies are poised to spread widely. The world stands at the last moment before far more advanced versions of them become ubiquitous.

So far, at least Israel, Russia, South Korea, and Turkey have reportedly deployed weapons with autonomous capabilities—though whether this mode was active is disputed—and Australia, Britain, China, and the United States are investing heavily in developing LAWS with an ever-expanding range of sizes and capabilities.

Already, some LAWS can loiter in an area to find targets that machine-learning algorithms have trained them to recognize, including enemy radar systems, tanks, ships, and even specific individuals. These weapons can look vastly different: For instance, the Turkish Kargu-2 drone, which was introduced in 2020 and used in Libya’s war, is 2 feet long, weighs around 15 pounds, and can swarm in groups. Autonomous systems can also be much larger, such as unmanned AI-driven fighter jets like the modified L-39 Albatros, and much smaller, such as rudimentary commercial drones repurposed with autonomous software.
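
Nothing in that recognition pipeline is exotic anymore. As a rough illustration, here is a minimal sketch, assuming only the publicly available torchvision library, its pretrained ResNet-50 ImageNet weights, and a placeholder image file named scene.jpg, of how a few lines of off-the-shelf code classify the dominant object in an image, with no weapon-specific engineering involved:

```python
# A minimal sketch of commodity object recognition, assuming the
# publicly available torchvision library and its pretrained ImageNet
# weights. "scene.jpg" is a hypothetical placeholder image.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT        # freely downloadable weights
model = resnet50(weights=weights).eval()  # ready-made recognizer
preprocess = weights.transforms()         # standard resize/normalize

image = Image.open("scene.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)    # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)   # class probabilities

top_prob, top_idx = probs.max(dim=1)
label = weights.meta["categories"][int(top_idx)]
print(f"{label}: {float(top_prob):.2f}")  # top label and its probability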
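```

Militaries fine-tune such models on specialized sensor data, of course, but the core recognition capability is now effectively a free download; that is part of why these weapons are poised to spread so widely.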

Once these technologies have spread widely, they will be difficult to control. The world thus urgently needs a new approach to LAWS. So far, the international community has done nothing more than agree that the issue needs to be discussed. But what it really needs to do is take a page from the nuclear playbook and establish a nonproliferation regime for LAWS.

Currently, the countries at the forefront of LAWS development resist calls to ban them. The United States has claimed that existing international humanitarian law is sufficient to govern LAWS; the U.S. Defense Department's policy is that they must be designed to ensure "appropriate levels of human judgment over the use of force." China has remained ambiguous, stressing the importance of "full consideration of the applicability of general legal norms" while insisting on a narrow definition of LAWS. Russia, meanwhile, refuses to even consider the issue, using diplomatic procedural tools to stall and reduce the time the United Nations devotes to debating the subject.

But most countries have called for a ban on developing and using LAWS—or, at a minimum, regulating them. In 2019, U.N. Secretary-General António Guterres said LAWS are “politically unacceptable, morally repugnant, and should be prohibited by international law.”

There are many reasons countries, international nongovernmental organizations, scholars, and AI experts worry about LAWS. Although they do not all agree in their predictions of how such weapons could affect society, there’s a growing consensus that their spread could bring substantial and harmful consequences.

First, LAWS could facilitate violence on a large scale because they are not constrained by the number of people available to operate them. Second, in combination with facial recognition and other technologies, they can target individuals or groups fitting certain descriptions, a capability that could appeal to violent groups and state militaries pursuing political assassination or ethnic cleansing. Third, LAWS may make it easier for those who control them to hide their identities. LAWS thus have the potential to upend political orders and enable tighter authoritarian control. Finally, like any automated system, they can malfunction, including by mistaking civilians for combatants.

So far, the international community has attempted—and failed—to regulate LAWS. In December 2021, after eight years of technical discussions, government and civil society representatives met at the U.N. in Geneva to set an agenda for regulating LAWS for the first time in what was billed as a “historic opportunity.”

Most attendees favored legally binding rules that apply equally to all states to govern the development and use of these technologies. Yet, by any standard, the meeting failed. Despite years of preparatory discussions within the framework of the U.N. Convention on Certain Conventional Weapons (CCW)—a forum for restricting the use of weapons considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately—the attendees barely managed to agree on 10 more days of discussion this March and July. This outcome was to be expected given the positions of the major powers and the CCW rules requiring consensus before action is taken. (Disclosure: Both authors have been affiliated with the CCW.)

In response, a wide array of actors—from Amnesty International and Human Rights Watch to some of the states in favor of a LAWS ban, including Argentina and the Philippines—are calling for a process to develop legally binding prohibitions on these weapons outside of the CCW. Alternative approaches to prohibition treaties have had some success in the past, such as when countries agreed to give up land mines through the 1997 Ottawa Convention; cluster munitions through the 2008 Convention on Cluster Munitions; and even nuclear weapons through the 2017 Treaty on the Prohibition of Nuclear Weapons, which was the first treaty to completely ban nuclear weapons in line with international humanitarian law and establish pathways for current nuclear weapon states to renounce them. However, while many states signed these treaties, most of the powerful states did not.

Unfortunately, even this limited success is likely to be elusive in the case of LAWS. The primary reason is that states are increasingly aware that these non-substitutable technologies may become crucial to their security and are thus unlikely to unilaterally abandon them. If states’ adversaries have them, they will likely believe they need them—and absent the sort of nonproliferation regime that exists for nuclear weapons, their adversaries will, in fact, continue to rapidly develop LAWS without much oversight.

Nonproliferation has yet to receive much attention in the case of LAWS, but it has worked to varying degrees in the past with the Missile Technology Control Regime, biological weapons, chemical weapons, and, of course, nuclear weapons.

The landmark Treaty on the Non-Proliferation of Nuclear Weapons (NPT), which entered into force in 1970, met the requirements of both major and lesser powers: It did not require states that already possessed nuclear weapons to renounce them immediately, it gave other states access to the benefits of civilian nuclear power, and it relied on a trusted international organization charged with the dual responsibility of promoting and controlling nuclear technology.

A similar nonproliferation regime for LAWS could facilitate regulation of their development, transfer, and employment—even by the powers that do not give them up. It would reduce the use of these technologies for authoritarian control and terrorist actions worldwide and, as in the nuclear case, create the possibility of developing norms against LAWS use that apply to all.

This would admittedly be complicated. The nuclear nonproliferation regime has thus far been largely successful because it has been a central pillar of major powers' grand strategies. Indeed, Cold War detente involved the superpowers agreeing to police proliferation within their spheres of influence. Security guarantees mollified countries that otherwise would have felt too insecure to forgo nuclear weapons. And threats and promises convinced countries to accept stricter limits on their nuclear programs than they otherwise would have, as Iran did when it signed the Joint Comprehensive Plan of Action.

Without this focused attention, nonproliferation and arms embargo regimes regularly fail. Indeed, the autonomous weapons used in Libya were exported by Turkey in violation of a U.N. arms embargo. A successful LAWS nonproliferation regime would thus require states to prioritize the issue more than they currently do in their national security strategies, especially as the technologies become more widespread and effective.

Another issue that all nonproliferation regimes face is the dual-use nature of technology. In a striking example, in December 2020, the U.S. Air Force employed an algorithm called MuZero to select targets for missile strikes in a training exercise. MuZero was developed and made public by DeepMind, a subsidiary of Google's parent company, Alphabet, which has pledged not to work on autonomous weapons. Although DeepMind showcased the algorithm by training computers to play games like chess and Go at superhuman levels, the U.S. Air Force's use of it raised concerns that DeepMind's work could be repurposed for lethal ends.

Existing nonproliferation regimes try to address this problem through targeted export restriction agreements for dual-use technologies. The Australia Group provides a model for this: Its 43 countries coordinate export limitations on technologies that facilitate biological or chemical weapons development. Although this hasn’t halted proliferation entirely, it has arguably slowed its pace.

Restricting the spread of software is even more difficult, since software can be copied at virtually no cost. This matters because many actors could easily pair autonomy-enabling software with commercially available hardware to produce weapons.

An international legal nonproliferation regime could address this problem by mandating safety provisions to prevent copying of software, identifying the classes of software that should not be publicly available, restricting the transfer of sophisticated hardware, and criminalizing activities intended to further proliferation, including the financing of critical materials. Although it is impossible to prevent independent development of basic systems, such a regime would slow the spread of more sophisticated LAWS technologies.
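
To make the first of those measures concrete, here is a hypothetical sketch of one familiar anti-copying technique: a node-locked license check that ties software to a single machine's hardware identifier. Everything in it (the key, the file name, the HMAC scheme) is invented for illustration rather than drawn from any existing regime.

```python
# Hypothetical sketch of a node-locked license check, one form the
# mandated anti-copying provisions could take. The secret key, file
# name, and scheme below are invented for illustration only.
import hashlib
import hmac
import uuid

VENDOR_KEY = b"vendor-held signing key"  # hypothetical signing secret
LICENSE_FILE = "license.sig"             # hypothetical license file

def machine_fingerprint() -> bytes:
    """Derive a stable identifier from this machine's MAC address."""
    mac = uuid.getnode().to_bytes(6, "big")
    return hashlib.sha256(mac).digest()

def license_valid() -> bool:
    """Run only if the license was issued for this specific machine."""
    try:
        with open(LICENSE_FILE, "rb") as f:
            signature = f.read()
    except FileNotFoundError:
        return False
    expected = hmac.new(VENDOR_KEY, machine_fingerprint(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

if not license_valid():
    raise SystemExit("This copy is not licensed for this machine.")
```

The sketch also shows the limits of purely software-based controls: because the verification key must ship inside the program, a determined actor can extract it or strip the check entirely, which is precisely why such provisions would need to be paired with restrictions on hardware transfers and criminal penalties.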

Once a nonproliferation regime is established, it could even be combined with prohibition to govern these dangerous technologies. Indeed, a nonproliferation regime for LAWS would encourage some states to sign a treaty banning them. It is impossible to imagine the Treaty on the Prohibition of Nuclear Weapons, for instance, without the nuclear nonproliferation regime, which provided security assurances and convinced states' rivals to forgo nuclear weapons; without it, many states would not have signed.

This dual approach has clearly worked before, even as onlookers worry about the potential for nuclear escalation in Ukraine, and it should be used again in the case of LAWS. In tandem, nonproliferation and prohibition could even serve as a model for regulating other emerging technologies, including bioengineering and other forms of artificial intelligence. Nonproliferation may not be the ideal outcome, but it can make the world a whole lot safer.
