28 September 2023

WHAT IS MEANINGFUL HUMAN CONTROL, ANYWAY? CRACKING THE CODE ON AUTONOMOUS WEAPONS AND HUMAN JUDGMENT

Lena Trabucco 

Flying high above a near-future battlefield, an AI-enabled MQ-9 Reaper drone alerts operators that it has detected enemy forces moving in a vehicle in a remote location. The drone uses available data to predict that the vehicle will enter a residential area in fifteen seconds. Operators receive the alert and a request to authorize a strike before the window of opportunity closes. With three seconds left for optimal strike conditions, the operator is still deliberating, and the drone has not yet received either approval or rejection for the strike request. The drone engages the vehicle with one second left under what it has identified as optimal conditions. Six noncombatants are killed.

In the wake of the strike, the public discussion focuses on whether the operator had meaningful human control (MHC) of the autonomous weapon system (AWS). But that is the wrong question to ask, and focusing solely on the operator's MHC in this tactical situation fails to appreciate the significance of the entire life cycle of the AWS. What about the MHC of the developers and designers of the AWS? What about the campaign planner who authorized the introduction of the AWS into this operational environment and authorized it to strike when targets are in remote areas? In an era of rapidly increasing autonomy, failing to expand our conceptualization of MHC risks overlooking other opportunities, earlier in an AWS's life cycle, for embedded MHC that can lead to more responsible and robust autonomous weapons.

What is Meaningful Human Control?

MHC is a loaded political concept that emerged from the debate on autonomous weapons. It generally refers to preserving human judgment and input while employing autonomous systems. Some advocates maintain that MHC is necessary for compliance with the law of armed conflict (LOAC). This position assumes that an AWS inherently cannot comply with existing legal principles (notably distinction) and that an AWS will alter the direct relationship between exercising control and legal responsibility. However, LOAC does not explicitly require human control; instead, it requires any means or method of warfare to comply with existing legal obligations. If an AWS passes a legal review and thus complies with LOAC requirements, there is little to suggest that MHC is a legal obligation.

There is far greater debate on MHC in the policy space. The United States and its international partners have struggled to reach consensus on the concept, due in part to disagreements over terminology. While the United Nations Convention on Certain Conventional Weapons, specifically its Group of Governmental Experts, has employed the phrase, US Department of Defense Directive 3000.09 uses the term “appropriate human judgment” instead. Differences in states’ preferences for particular terminology have resulted in a stalemate on the operationalization of MHC. Despite this stalemate, stakeholders need to address bigger issues to produce guidance for the responsible implementation of military autonomy.

The biggest issue remains identifying what constitutes MHC in practice and determining what steps satisfy the threshold for MHC of autonomous systems. Even more problematic, there is little consensus on the threshold itself. Regardless, MHC should incorporate human judgment into machine performance in a way that preserves the benefits inherent to autonomy without sacrificing the unique value of human judgment. Experts must take a deeper look at the technology and trace how human judgment is embedded in autonomous weapons throughout the weapon systems’ life cycles.

The term “meaningful human control” first appeared in a 2013 report from Article 36, a British nongovernmental organization. In the original report, Article 36 identified three elements constituting MHC (descriptions of each element are drawn directly from the report).

Information. A human operator, and others responsible for attack planning, need to have adequate contextual information on the target area of an attack, information on why any specific object has been suggested as a target for attack, information on mission objectives, and information on the immediate and longer-term weapon effects that will be created from an attack in that context.
Action. Initiating the attack should require a positive action by a human operator.
Accountability. Those responsible for assessing the information and executing the attack need to be accountable for the outcomes of the attack.

These elements resemble other commentators’ conceptualizations of MHC (see, for example, here and here). However, upon close examination, each element has shortcomings that raise questions about whether, collectively, they truly establish meaningful human control as a distinct standard. The first element requires combatants to acquire relevant information to contextualize an attack. This information includes geographic details, reliable intelligence about the target, and the effects of the weapon used in an attack. However, this is already a requirement under LOAC and, as such, the element does not add value. This step is undoubtedly important in maintaining human control because the commander ultimately decides whether to use an AWS for a particular operation based on certain information, as will be discussed further below. But it is not a new requirement; it simply reflects current international legal obligations.

The second element calls for positive action by a human operator to authorize an attack. A requirement for operator authorization would prevent an AWS from independently engaging a target, much like current drone operations. However, a positive action may not best reflect the advantage an AWS offers. One benefit of autonomy is delegating functions requiring human cognition to a machine; an AWS may offer unparalleled targeting capabilities precisely by removing human cognition and decision-making, which can be slow and error-prone, from this stage. Requiring a human to positively authorize an attack qualifies as MHC, but it negates the added advantage of autonomous capabilities.

Finally, the third element calls for a clear pathway of responsibility for appropriate combatants. While important, this element does not factor into the performance of the autonomous system, and if human control is to qualify as meaningful, there must be an effect on the system’s behavior. Responsibility issues become relevant after an attack has occurred, or in situations of machine failure or other unintended consequences. While establishing a regime of responsibility is vital, and other stakeholders have called for similar requirements, it remains separate from MHC.

Current proposals resembling Article 36’s MHC elements do not adequately capture what is unique about autonomous systems or the myriad roles human judgment (or control) plays before the system is ever activated.

Meaningful Human Control Embedded in the Life Cycle

Deconstructing the life cycle of an AWS offers insights that provide a better, more nuanced understanding of MHC for policymakers and experts. Firstly, and most importantly, it identifies what MHC is in practice. Current discussions are abstract and theoretical—but a life cycle perspective operationalizes MHC and forces researchers to identify actions and protocols that qualify as MHC. Three stages of an AWS life cycle, in particular, are important to explore: design and development, operational planning, and tactical planning and engagement.

Autonomous systems suffer from what some call the many hands problem; that is, many people are involved in making an AWS a reality. This problem is typically discussed only in the context of assigning responsibility: if many hands are involved, who should be responsible in the event of machine failure or malfunction? Nevertheless, the many hands problem is also relevant to the MHC discussion. Expanding the scope to include the many groups whose expertise goes into creating an AWS reveals opportunities for embedded MHC, and a life cycle perspective better captures the various roles involved in that process.

The first stage is the design and development of the AWS. AI developers create intelligent systems capable of learning, analyzing, and predicting. Developers are responsible for defining the type of system (machine learning, deep learning, or neural networks) and for creating the software architecture and system boundaries that set the parameters for the system’s behavior. System designers play an essential role in establishing design principles for how the system responds to unexpected environmental stimuli, which may occur simultaneously in dynamic environments. The purpose of employing an autonomous weapon system (or any autonomous system) is to gain advantages in speed and accuracy for specific processes that are mundane for, or overwhelming to, human cognition. By developing the system architecture, does the developer exercise MHC to a degree sufficient to satisfy policies or other MHC requirements? Do developer decisions made at this stage effectively embed MHC in an AWS?
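To make the idea of design-stage control concrete, consider a minimal, hypothetical sketch of how developers might encode engagement constraints directly into an AWS’s software, shaping its behavior long before an operator or commander ever touches the system. Every name and threshold below is an illustrative assumption, not a description of any fielded system or policy.

```python
# Hypothetical sketch: design-stage constraints embedded in an AWS.
# All class names, thresholds, and rules are invented for illustration.
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    ENGAGE = auto()                 # system may engage autonomously
    REQUEST_AUTHORIZATION = auto()  # defer the decision to a human operator
    ABORT = auto()                  # engagement prohibited by design


@dataclass(frozen=True)
class EngagementPolicy:
    min_confidence: float = 0.95                # required target-classification confidence
    populated_area_requires_human: bool = True  # no autonomous strikes near civilians


def evaluate_engagement(confidence: float,
                        in_populated_area: bool,
                        policy: EngagementPolicy) -> Decision:
    """Apply constraints fixed at design time, before any tactical choice is made."""
    if confidence < policy.min_confidence:
        return Decision.ABORT
    if in_populated_area and policy.populated_area_requires_human:
        return Decision.REQUEST_AUTHORIZATION
    return Decision.ENGAGE


# High classification confidence, but the target is in a populated area,
# so the design-time policy forces the decision back to a human.
print(evaluate_engagement(0.97, in_populated_area=True, policy=EngagementPolicy()))
```

The specific values are beside the point; what matters is that choices like these, made by developers and designers, bound what the system can do at the tactical stage and are therefore a plausible site of embedded MHC.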

The second stage is operational planning. Even though this stage has essential implications for MHC, as will be discussed, it receives less consideration than other stages. Nevertheless, decisions made at this stage do factor into determining MHC. This stage is closest to the elements Article 36 proposed because it includes contextual decision-making about attacks. The most critical decision is whether to employ an AWS in a particular operational environment. Much of the controversy surrounding AWS (and, indeed, the calls for MHC) stems from the risk of employing an AWS in an urban environment, where distinguishing between combatants and civilians will be most challenging. However, there are other environments where an AWS will not pose the same risks to the civilian population or civilian objects, such as the vast majority of the sea domain and areas on land that are largely or entirely uninhabited, like deserts and forests. Several other factors unrelated to the operational environment could also inform the decision to use an AWS, such as command leadership style or willingness to accept risks posed to friendly troops. These considerations are another facet of MHC, distinct from the considerations at the design and development stage.

The third stage, tactical planning and engagement, is the most intuitive stage for applying MHC standards. Article 36’s elements of MHC ultimately come down to target engagement. Traditional notions of human control are associated with the “positive action” Article 36 calls for in its second MHC element. As previously noted, requiring a human to take a positive action and authorize a target engagement ultimately defeats the purpose of employing an AWS. Because of this, some suggest that “human supervision has been and is likely to remain a necessary form of control measure to ensure safety, reliability and efficiency of military operation[s].” However, supervision on its own is not likely to amount to MHC; definitionally, it falls short of “positive action.” But other important considerations, like those that arise at the design and development stage and the operational planning stage, influence how supervision occurs at the tactical stage and can significantly affect how “meaningful” that supervision is.

For example, the way data is presented to an operator can influence how the operator interprets what is happening on the ground. Operators can easily be biased, swayed, or otherwise influenced by the way the interface presents data and authorization prompts. Humans process information differently from one another, so a particular operator may respond differently to map- or image-based data presentation than to a text-based interface. Even something as simple as a system reporting a 95 percent probability that a target is an enemy force might produce a different operator response than one emphasizing the 5 percent probability that it is not, and therefore a one-in-twenty chance that civilians would be killed. Does the human have MHC if the very same scenario can lead to different outcomes because the data is presented differently?
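As a small, hypothetical illustration of this framing effect, the same underlying model output can be rendered in two numerically equivalent ways that are nonetheless likely to prompt different operator responses (the interface text below is invented for illustration).

```python
# Hypothetical illustration: one classifier output, two equivalent framings.
def positive_framing(p_enemy: float) -> str:
    return f"Target assessed as enemy force with {p_enemy:.0%} confidence."


def negative_framing(p_enemy: float) -> str:
    p_not = 1.0 - p_enemy
    odds = round(1 / p_not)  # e.g., 0.05 -> roughly 1 in 20
    return (f"{p_not:.0%} probability the target is NOT an enemy force "
            f"(about a 1-in-{odds} chance of striking civilians).")


p = 0.95
print(positive_framing(p))  # "Target assessed as enemy force with 95% confidence."
print(negative_framing(p))  # "5% probability the target is NOT ... 1-in-20 chance ..."
```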

Additionally, the supervision of AWS asks a lot of the human brain. Experts have repeatedly recognized the fast-paced nature of algorithmic decision-making and rightfully raised concerns about human cognition’s ability to keep up. Would supervision qualify as MHC if the human operator cannot keep pace with a machine in real time? This is even less likely if an operator has more than one system to monitor, as in a swarm scenario. However, the other side of the spectrum also bears acknowledging: instances where the system is idle or has minimal activity and is too slow to hold human attention. An AWS will likely not operate in constant, fast-paced situations; instead, the system may need to wait and monitor an environment long before engaging an intended target. Like speed, this is one of the benefits of an AWS: the system will not get tired or bored, unlike its human counterparts. It is only natural that operators tasked with AWS supervision will struggle to keep up or to stay vigilant.

It is important to conceptualize and embed MHC in an AWS, but experts must broaden the scope of what and who contributes to MHC in practice. Deconstructing the weapon system life cycle offers a glimpse into stages wherein different stakeholders operationalize MHC in a multitude of ways. Each stage explored here (design and development, operational planning, and tactical planning and engagement) presents different considerations and interests that permeate decision-making at that stage and ultimately demonstrate MHC’s different varieties and manifestations. This is not to advocate that one stage, more than others, is where MHC exists, although many current discussions implicitly make that case by focusing on tactical employment. Nor is it to suggest that all the stages cumulatively constitute MHC. MHC is a moving target. But as autonomous weapon systems increasingly appear on the future battlefield, it is imperative to prepare by pushing the boundaries of current discourse on MHC and thinking creatively about how it can and should be operationalized in practice.

Lena Trabucco is a visiting scholar at the Stockton Center for International Law at the US Naval War College, specializing in artificial intelligence and international law. She is also a research affiliate at Cambridge University and the University of Copenhagen. She holds a PhD in law from the University of Copenhagen and a PhD in international relations from Northwestern University.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
