
3 March 2018

Failure to Define Killer Robots Means Failure to Regulate Them

By Johannes Lang, Rens van Munster and Robin May Schott for the Danish Institute for International Studies (DIIS)

According to Johannes Lang et al, disagreements on how to define ‘autonomy’ are stalling formal UN discussions on the compliance of autonomous weapons with international humanitarian law. So what should states do to help the discussion move forward? Our authors argue that the approach most likely to succeed is a pragmatic one that focuses on the critical functions of lethal autonomous weapon systems, such as target selection and firing. This article was originally published by the Danish Institute for International Studies (DIIS) on 2 February 2018.

Disagreements on how to define “autonomy” are stalling formal UN discussions on the compliance of autonomous weapons with international humanitarian law. A pragmatic approach that focuses on the weapon’s critical functions, such as target selection and firing, can help move discussions forward.

In November 2017, the first formal UN meeting on lethal autonomous weapon systems (LAWS) took place in Geneva. The Group of Governmental Experts (GGE) on LAWS met over a period of five days to discuss the technical, military, legal and ethical aspects of such weapons. During the meeting, it became clear that different views on how to define autonomy complicated discussions and took up most of the time. This is no surprise. A formal definition will have important implications for how LAWS will be regulated and what type of systems will be allowed.

Recommendations

States should prioritize reaching agreement on how to define LAWS. Without agreement, international regulation risks falling behind technological developments.

A definition of autonomous weapons that focuses on the autonomy of critical functions, such as target selection and firing, is most likely to succeed within the context of the CCW.

The CCW focuses on the compliance of LAWS with international humanitarian law. However, international regulation should also address the broader effects of LAWS on military competition.

Yet the failure to reach even a basic agreement on the definition of LAWS is to the detriment of all states. If the international community does not take steps to regulate the critical functions of LAWS, regulation in this area will continue to lag behind the rapid technological advances in robotics, artificial intelligence and information technology.

The GGE meeting occurred within the framework of the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW). Governments, international organizations and non-governmental organizations took part, including all the major military powers, the European Union, the International Committee of the Red Cross and Human Rights Watch.

Different definitions of autonomy reflect states’ different technological capabilities and strategic interests in developing autonomous weapons, as well as differing ethical positions on the morality of such weapons. Although, in the end, the 2017 meeting of the GGE could not agree on a single definition, several states, including the United States, Russia, Belgium, the Netherlands and Switzerland, emphasized the need to identify the central characteristics of autonomous weapons. So far, definitions fall into three main categories: human-machine interaction, autonomy of systems and autonomy of functions.

Human-machine interaction

One category of definitions focuses on issues of command and control in the relationship between the human being and the weapon. Is the human being “in the loop” (with sole authority to decide when to use the weapons), “on the loop” (with authority to call in or call off the weapon) or completely “out of the decision-making loop”? What is at stake in debates about human-machine interaction is the central juridical and ethical question of whether fully autonomous weapons are capable of abiding by the principles of international humanitarian law.

Most states, including the US, insist that governments would never delegate decisions over life and death to machines. The US and a range of other actors define autonomous weapons as weapons capable of selecting and engaging targets without human intervention. But humans will always be involved, they argue, in programming the weapon, selecting its types of targets and deciding when to deploy it. From this perspective, human beings will continue to occupy central positions in the chain of command, and there is little reason to fear that they will lose control of their weapons. The scenario most states envisage is one of sliding autonomy, where the balance between human and machine involvement can be flexibly adjusted. The more realistic scenario, however, is perhaps one of creeping autonomy, where the lines between human and machine action become blurred. The fusion of cyberspace, the electromagnetic spectrum, artificial intelligence and information technology with surveillance, intelligence gathering and target acquisition is making it increasingly difficult to assess what “meaningful human control” actually means, and at what point machine autonomy undermines it.

The autonomy of systems

A second category of classifications, preferred by the United Kingdom, focuses on levels of capability. The Development, Concepts and Doctrine Centre of the British Ministry of Defence defines an autonomous weapon system as “capable of understanding higher-level intent and direction […] It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control.”

This kind of definition relies on engineering and defense language to emphasize the weapon’s technical capacities. Such an approach distinguishes between different degrees of complexity: between “automatic” systems (with simple mechanical responses), “automated” systems (with more complex, rule-based responses) and “autonomous” systems (capable of artificially intelligent actions that do not require human interference). Because fully autonomous systems do not yet exist, a definition emphasizing the autonomy of entire systems pushes the debate about LAWS far into the future, without any consideration for the not-yet-fully-autonomous weapon systems already in use or under development. A systems approach, such as the one employed by the UK, gained little traction at the meeting, and several speakers at the GGE panel on the crosscutting dimensions of technology, military effects and ethics warned against the pitfalls of viewing autonomy as a general attribute of a system, rather than applying the term to its various functions.

The autonomy of functions

A final type of definition focuses on the nature of the tasks that a system performs autonomously. At the recent meeting in Geneva, Switzerland proposed to broaden the discussion beyond LAWS to autonomous weapon systems (AWS), defined as “weapon systems that are capable of carrying out tasks governed by international humanitarian law in partial or full replacement of a human in the use of force, notably in the targeting cycle.” Focusing on systems with elements of autonomy rather than autonomous systems is a more inclusive approach, allowing for a wide variety of configurations and a more differentiated debate. The International Committee of the Red Cross similarly focuses on the specific functions of autonomous weapons, defining such a weapon system as one that has autonomy in its critical functions, meaning a weapon that can select and attack targets without human intervention.

Background of the CCW

Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects

The United Nations Convention on Certain Conventional Weapons was concluded in Geneva on 10 October 1980 and entered into force in December 1983. It seeks to prohibit or restrict the use of certain conventional weapons which are considered excessively injurious or whose effects are indiscriminate.

The convention has five protocols, each dealing with a different category of weapons: 
Protocol I prohibits weapons with non-detectable fragments 
Protocol II restricts landmines and booby traps 
Protocol III restricts incendiary weapons 
Protocol IV prohibits blinding laser weapons 
Protocol V sets out obligations and best practice for the clearance of explosive remnants of war 

Article 36 of Additional Protocol I to the Geneva Conventions provides the central framework for discussing LAWS in the context of the CCW. Article 36 requires states to review new weapons in order to prevent unintended, unnecessary or unacceptable harm.

In contrast to the systems-based approach, a functions-based approach can address currently existing weapons as well as those under development. It highlights context and space, instead of defining capabilities in advance of use and regardless of context. The importance of context can be illustrated as follows: engineers can design autonomous weapons that operate in constrained and predictable environments, such as airspace or certain underwater locations, but the challenges are of quite another order when such weapons must operate in urban environments and interact with other weapon systems and with humans under changing and unpredictable conditions. A context-sensitive approach focusing on critical functions will improve the international community’s ability to evaluate whether autonomous weapons comply with international humanitarian law.

A pragmatic approach is needed

All GGE discussions concern the appropriate nature and scope of human-machine interaction. States seem to agree that fully autonomous weapons are incapable of acting as ethical or legal agents under the principles of international humanitarian law. There is a consensus that the central question going forward is how to define meaningful or adequate human control; yet states disagree about the answer. Whereas the UK approach relegates autonomy to a hypothetical future that the UK insists it will never pursue, the functional approach takes a more pragmatic perspective, focusing on elements of already existing weapons and indicating directions for the future. Given the general political climate of tension, distrust and disagreement among major powers, a pragmatic way forward seems most likely to succeed. Although military powers with vested political, economic and military interests in the development of LAWS (including the US, the UK, Israel, China and Russia) have shown little interest in establishing binding regulations, the closing debates at the GGE encouraged a focus on the weapon’s various functions.

Reaching agreement on the definition of meaningful human control over critical functions would be an important next step. Still, the GGE meetings within the CCW framework offer only a limited platform for discussing the challenges related to the development and use of LAWS. Autonomous weapon systems create challenges beyond compliance with humanitarian law. Most importantly, their development and use could accelerate military competition and cause strategic instability. Technologies used for swarming and loitering are likely to fuel an arms race across all domains: air, land and sea, as well as space. At the same time, the growing accessibility of robotic technologies in both the civilian and military sectors risks the proliferation of these systems to actors that may not abide by humanitarian principles of warfare. For these reasons, discussions about compliance with international humanitarian law should go hand in hand with disarmament talks.
