13 October 2023

Algorithmic Aversion? Experimental Evidence on the Elasticity of Public Attitudes to “Killer Robots”

Ondřej Rosendorf

The ability of states to leverage technological advances for military purposes constitutes one of the key components of power in international relations, with artificial intelligence, machine learning, and automation leading the way in global military innovation.Footnote1 One of the most prominent—and controversial—applications of these technologies nowadays concerns the development of “Lethal Autonomous Weapon Systems” (LAWS), sometimes dubbed “killer robots.” If deployed on the battlefield, these weapon systems could select and engage targets without direct human oversight with unprecedentedly high speed and accuracy.Footnote2 LAWS raise serious legal and ethical concerns, and international debates are already underway about possible limitations or even an outright ban on their use.Footnote3

As in the case of earlier humanitarian disarmament initiatives, the proponents of a ban on LAWS highlight the public opposition to these weapon systems as one of the key arguments why their use should be prohibited altogether. These claims, promoted by like-minded states and non-governmental organizations (NGOs), are partially supported by evidence from public opinion surveys suggesting that the real-world employment of LAWS would be met with significant disapproval by ordinary citizens.Footnote4 However, previous research has shown that much depends on the context. For example, Michael C. Horowitz found that public opposition to LAWS weakens when individuals are presented with scenarios where “killer robots” provide greater military utility than alternative options.Footnote5 Our knowledge of the factors that affect public attitudes to LAWS is, nevertheless, still limited.

In this article, we address this gap by investigating the role of three factors that are central to the international debate on whether or not to regulate the use of LAWS. The first is the consequentialist concern that autonomous technology is particularly accident-prone. The second is the legal concern that machines cannot be held responsible for striking the wrong targets. The third is the moral concern that delegating decision-making powers over life and death to robots violates human dignity. While our primary goal is to investigate the public’s sensitivity to certain factors surrounding the use of “killer robots,” rather than to explain the all-else-equal aversion to LAWS per se, the results of our study also provide some hints about the mechanisms underlying these attitudes.

To test whether and how these factors affect public support for LAWS, we conducted a survey experiment with 999 U.S. citizens. We randomly assigned the participants to one of five versions of a hypothetical scenario describing a UN-mandated counterinsurgency operation, where the commander decided whether to deploy a remote-controlled or autonomous drone to eliminate the insurgent threat. Our experimental treatments varied in terms of the risk of target misidentification associated with each drone option and responsibility attribution for potential civilian fatalities. To measure the support for LAWS, we asked the participants to indicate their preference for either the remote-controlled or autonomous drone. In a follow-up survey with the same participants, we examined their sensitivity to violations of human dignity.

To gain a deeper understanding of the relationship between our three factors and support for LAWS, we conducted two additional surveys with separate samples of U.S. citizens. In these surveys, we inquired about our participants’ risk estimates and their perceptions of the differences between remote-controlled and autonomous drones in various aspects of their use.

Our findings demonstrate that although there is a substantial baseline aversion to LAWS among the public, these attitudes also exhibit a significant degree of elasticity. Most importantly, we find empirical evidence that public support for LAWS is contingent on their error-proneness relative to human-operated systems. When the public is presented with information indicating that the risk of target misidentification is even slightly lower for the autonomous drone compared to the remote-controlled one, there is a rapid and significant shift in favor of using these weapon systems. In contrast, our findings do not provide empirical support for the proposition that the explicit mentioning of command responsibility can alleviate opposition to LAWS. Additionally, we find limited empirical support for the proposition that concerns about human dignity violations increase public opposition to these weapon systems. Overall, among the three factors examined in our study, the consequentialist concern about the accident-prone nature of “killer robots” has the strongest association with the attitudes of Americans.

Our findings contribute to the growing scholarly literature on the international efforts to regulate autonomous weapons by probing the public’s sensitivity to some of the frequently cited pro- and anti-LAWS arguments.Footnote6 Moreover, we contribute to the recent wave of Security Studies literature on public attitudes toward the use of force by examining the factors affecting the public support for particular means of warfare.Footnote7 Finally, our study has significant implications for current policy debates about the international regulation of LAWS. The elasticity of public attitudes toward “killer robots” demonstrated here raises concerns about the long-term sustainability of public support for potential limitations or prohibitions on these systems.Footnote8 If LAWS eventually prove to be more reliable in target discrimination than human-operated systems, we could potentially observe an increasing public demand for their use.

We proceed as follows. First, we present a brief overview of the debates concerning the use of LAWS in warfare. Second, we formulate our theoretical expectations related to the three central concerns about these weapon systems. Third, we introduce our experimental design. Fourth, we present and discuss our empirical findings. We conclude by laying out the implications of our study and discussing avenues for future research.

The Advent of LAWS

Emerging technologies such as artificial intelligence and autonomous machines have significant influence over military weaponry and the character of contemporary warfare.Footnote9 While the debate about the “revolutionary” effects of these technologies is still ongoing, the growing investment in autonomous military technologies has already become a reality.Footnote10 The forerunners in this new era of military-technological competition are primarily great powers such as the United States, China, and Russia, but also smaller, technologically advanced countries such as Israel, Singapore, and South Korea.Footnote11

LAWS are clearly the most notable—and controversial—direction in this area of military innovation. In simple terms, LAWS can be defined as weapon systems that, once launched, select and engage targets without further human input.Footnote12 It is precisely this autonomy in targeting that distinguishes LAWS from other weapon systems, including remote-controlled drones, which may incorporate autonomy in functions such as navigation, landing, or refueling but leave decision-making power over target selection and engagement with humans.Footnote13 An example of such a system is the Israeli loitering munition “Harpy,” which, once launched, detects and dive-bombs enemy radar emitters without further human input. Once it finds a target that meets the preprogrammed parameters, the persons responsible for its launch can no longer override its actions.Footnote14

The development of LAWS presents us with potential benefits as well as challenges. On the one hand, some believe that weapon autonomy promises advantages such as a speed-based edge in combat, reduced staffing requirements, or reduced reliance on communication links. Like remote-controlled drones, their use would also reduce the risks faced by human soldiers.Footnote15 On the other hand, some believe that the machine-like speed of decision-making implies that militaries will exercise less control over the way LAWS operate on the battlefield, which exacerbates the risks of accidents and unintended escalation.Footnote16 Moreover, the unpredictable nature of complex autonomous systems would pose challenges to ensuring compliance with international humanitarian law (IHL).Footnote17 Finally, their use could be deemed dehumanizing, because—as inanimate machines—LAWS will never truly understand the value of human life, and the significance of taking it away.Footnote18

International discussions about these and other challenges have been occurring at the UN Convention on Certain Conventional Weapons (CCW) since 2013. In 2016, States Parties to the CCW established what is known as the Group of Governmental Experts (GGE) to formulate recommendations on how to address the LAWS issue. However, a growing polarization between states interested in exploring the benefits of the technology and those concerned with its humanitarian impact has prevented any substantive progress on the issue.Footnote19 Undeterred by the failure of the GGE LAWS process, NGO campaigners gathered under the auspices of the “Campaign to Stop Killer Robots” are now looking to explore the possibility of moving the issue to a different venue, where willing states could agree to prohibitions on the development and use of LAWS.Footnote20

The utility of autonomous weapons for political and military purposes will, to an extent, depend on their public acceptance. This aspect of the discussion is especially relevant for proponents of the ban, who leverage the negative public attitudes expressed in opinion polls on “killer robots” as a compelling reason for prohibiting the technology.Footnote21 In this view, the use of “killer robots” despite public opposition would violate the Martens Clause in the 1977 Additional Protocols to the Geneva Conventions, which prohibits the use of means and methods of warfare contrary to the “dictates of public conscience.” While the interpretations of the clause differ, considerations of public conscience have driven international negotiations on prohibiting other weapon systems in the past.Footnote22 Investigating the public attitudes to LAWS and the factors that affect these attitudes is, therefore, pertinent with respect to ongoing international regulatory efforts.

Algorithmic Aversion?

In many fields, from diagnosing complex diseases to legal advice, algorithms already outperform human decision-makers.Footnote23 Yet, we observe that the public often rejects algorithmic decision-making in favor of human expertise even when the latter is objectively inferior. To date, researchers have identified various factors that influence this aversion, including seeing an algorithm err, the complexity of the task, or the type of task performed.Footnote24 People’s reasoning for rejecting algorithms seems to vary across domains. For instance, in the field of medical diagnosis, consumers prefer the advice of human doctors to that of algorithms because they believe that the latter cannot account for their “unique characteristics and circumstances.”Footnote25

Existing surveys indicate a similar resistance toward autonomous weapons. For example, Charli Carpenter found that approximately 55% of adult U.S. citizens opposed the use of LAWS.Footnote26 The polling company Ipsos conducted several cross-national surveys on behalf of Human Rights Watch, which showed that about 61% of respondents worldwide oppose “killer robots.”Footnote27 A potential aversion is also indicated by recent surveys of AI and machine-learning researchers and of local officials in the United States.Footnote28

Although all of these findings suggest a strong public aversion to LAWS, they also have limitations that prevent us from drawing clear conclusions. Notably, such polls often ask respondents about their views on LAWS without providing further context, so the answers may reflect general pacifist attitudes rather than a genuine aversion to autonomous weapons per se. Arguably, more compelling evidence on public attitudes to “killer robots” comes from the small number of survey experiments that explore the influence of factors such as military effectiveness, responsibility attribution, and sci-fi literacy.Footnote29 The study by Michael C. Horowitz, in particular, demonstrates that much of the opposition to these weapon systems depends on context. When the public is presented with scenarios where “killer robots” offer superior military utility compared to alternative options, the opposition weakens substantially.Footnote30 However, our knowledge of the specific factors that affect these attitudes is still limited.

In the following subsections, we discuss three such potential factors that are at the core of the international debate on regulating LAWS. The first is the consequentialist concern that autonomous technology is particularly accident-prone. The second is the legal concern that machines cannot be held responsible for striking the wrong target. The third is the moral concern that delegating decision-making powers over life and death to robots violates human dignity. While not exhaustive, this list represents some of the most frequently cited arguments in favor of regulating LAWS. Investigating whether and how factors such as the risk of error, responsibility attribution, and considerations of human dignity affect public attitudes to “killer robots” thus holds particular policy relevance.

Like other authors, we distinguish between contingent and non-contingent concerns.Footnote31 Contingent concerns revolve around the limitations of current-generation technology, including the inability of LAWS to properly discriminate between lawful and unlawful targets. Non-contingent concerns are independent of technological advancements and encompass arguments highlighting the inherent immorality of automated killing. Certain concerns, such as responsibility attribution, can be classified as contingent or non-contingent depending on whether they are regarded as an issue of strict liability or principled justice. If public attitudes are primarily driven by contingent concerns, it is possible that attitudes may change as the technology evolves. In the case of non-contingent concerns, attitudes are less likely to shift regardless of technological progress.Footnote32

Accident-proneness

According to Paul Scharre, humans exhibit two basic intuitions about autonomous machines.Footnote33 Some harbor a utopian intuition about autonomous systems as a reliable, safe, and precise alternative to human-operated systems. Others hold an apocalyptic intuition about “robots run amok,” which holds that autonomous systems are prone to spiraling out of control and making disastrous mistakes. The prevalence of the latter belief—based on a distrust of the predictability and reliability of autonomous systems and compounded by the public’s exposure to “robocalyptic” imaginaries in popular culture—may, therefore, drive the corresponding aversion to the use of LAWS.Footnote34

A recent public opinion survey showed that 42% of respondents opposed to LAWS were worried that these systems would be subject to technical malfunctions.Footnote35 Such worries are not completely unfounded. Militaries worldwide already struggle with highly automated systems, as illustrated by several fratricidal incidents involving the Patriot missile defense system, during which the system misidentified friendly aircraft as enemy missiles.Footnote36 On a more general level, machine-learning systems have repeatedly shown a propensity for making unforeseen and counterintuitive mistakes.Footnote37 Unlike humans, algorithms might not have a sufficient capacity to understand nuance and context. They are trained to make clear verdicts under specific conditions, and they may operate incorrectly for long periods without changing their course of action because they lack the necessary situational awareness.Footnote38

These technical limitations have potentially far-reaching implications for upholding basic ethical standards on the battlefield. From the perspective of IHL, ensuring compliance with the principle of distinction presents perhaps the most daunting challenge for LAWS. Many authors contend that today’s autonomous systems would not be able to comply with this principle, which protects civilians, non-combatants, and combatants hors de combat from being subject to the use of military force.Footnote39 So far, scientists have developed no technology that can distinguish between lawful and unlawful targets as well as human judgment can. These inadequacies are particularly troubling when put in the context of a modern-day battlefield environment, which is typically populated by both combatants and civilians, and where belligerents deliberately obfuscate their legal status.Footnote40

Many of these challenges are, nevertheless, contingent on the state of technology. In principle, technological advances may eventually enable LAWS to discriminate between lawful and unlawful targets at least as well as, if not more reliably than, human-operated systems.Footnote41 In the future, autonomous weapons could even prove more discriminating than humans because they do not have to protect themselves in cases of low certainty of target identification, they can be equipped with a broad range of sensors for battlefield observation and can process the input far faster than a human could, or because they could be programmed without the emotions that often cloud the judgment of ordinary soldiers.Footnote42

Irrespective of whether LAWS will eventually prove more reliable at target discrimination, it is plausible that the public opposition to their use is partially influenced by prior beliefs about their error-proneness relative to humans. Evidence from the field of experimental psychology further suggests that individuals are much more tolerant of errors made by humans than of those made by machines.Footnote43 In the case of LAWS, such considerations should be particularly salient, because an error might result in target misidentification leading to innocent deaths, which is a pertinent concern of the public vis-à-vis the use of military force.Footnote44 If we assume that public attitudes to “killer robots” are influenced by preexisting beliefs about the accident-prone nature of the technology, we would expect that presenting the public with scenarios where the use of autonomous systems carries a lower risk of target misidentification compared to human-operated systems would increase the support for LAWS.

Responsibility Gaps

Another objection to LAWS is that their use would lead to distinct “responsibility gaps.”Footnote45 In this view, the employment of “killer robots” on the battlefield could make it exceedingly difficult, if not impossible, to establish legal and moral responsibility for potential adverse outcomes such as fatal accidents or outright war crimes. It would be unfair to hold a human in the decision-making chain responsible for outcomes they could not control or foresee, yet we simultaneously could not hold the machine itself accountable because it lacks the moral agency for such responsibility.Footnote46 The emergence of responsibility gaps presents a potential challenge to adherence to IHL, and some scholars even suggest that if the nature of LAWS makes it impossible to identify or hold individuals accountable, it is morally impermissible to use them in war.Footnote47

Other scholars have contested the above proposition on legal and moral grounds. On the legal side, some argue that the fact that no-one has control over the system’s post-launch targeting decision does not take away the responsibility for its actions from humans. For instance, the programmers who decided how to program the system or the commanders who decided when to deploy it would be held accountable for any atrocities caused by LAWS if they acted with disregard to IHL.Footnote48 On the moral side, some experts question whether the ability to hold someone accountable for battlefield deaths is a plausible constraint on just war.Footnote49 Marcus Schulzke observes that militaries already operate through shared responsibility, where “[t]he structure of the military hierarchy ensures that actions by autonomous human soldiers are constrained by the decisions of civilian and military leadership higher up the chain of command.”Footnote50 This “command responsibility” would arguably be equally applicable to LAWS. Programmers and commanders would be accountable to the extent that they failed to do what is necessary to prevent harm.Footnote51

Existing surveys suggest that responsibility could be a concern for the broader public. The results of the 2020 Ipsos survey indicate that approximately 53% of those who opposed the development and use of LAWS agreed that such systems would be unaccountable.Footnote52 We aim to test the responsibility gaps proposition by looking at whether and how public attitudes change when we place the onus of potential responsibility explicitly on the military leadership, as opposed to leaving this to the public’s imagination. If the public believes that LAWS are unaccountable, as suggested by previous surveys, it is plausible that the opposition to LAWS could be partly driven by fears that the parties involved in their development and use could escape accountability for adverse outcomes. If we assume that public attitudes to “killer robots” are influenced by such beliefs, we would expect that presenting the public with a scenario in which the military leadership explicitly assumes such responsibility would increase the support for LAWS.

Human Dignity

One non-contingent issue that cannot be addressed by means of technological progress is the idea that using autonomous machines to kill humans would constitute an affront to human dignity.Footnote53 In this view, death by algorithm implies treating humans as mere targets, or data points, rather than complete and unique human beings.Footnote54 Consequently, LAWS are believed to violate the fundamental right to human dignity, which prohibits the treatment of humans as mere objects.Footnote55

Whereas concerns about target discrimination typically focus on the outcome of the deployment and use of LAWS, the argument about undignified killing focuses precisely on the process.Footnote56 According to Frank Sauer, “…being killed as the result of algorithmic decision-making matters for the person dying because a machine taking a human life has no conception of what its action means…”Footnote57 A human soldier could deliberate and decide not to engage a target, but in the case of LAWS, there would be no possibility for the victim to appeal to the humanity of the attacker—the outcome would already be predetermined by the narrow nature of algorithmic decision-making.Footnote58 The absence of such deliberation would thereby make targeting decisions inherently unethical.Footnote59

The introduction of the “human dignity” argument to the LAWS literature has not been without controversy, and many experts view it as a contested and ambiguous concept.Footnote60 Critics typically counter with consequentialist arguments: Concerns about indignity are ultimately outweighed by the promise of improved military effectiveness and reduced risks of target misidentification compared to human-operated systems.Footnote61 While it is true that machines cannot comprehend the value of human life, it may be irrelevant to the victim whether they are killed by a human or a machine.Footnote62 The conduct of human soldiers already falls short of ethical standards invoked in anti-LAWS arguments. Some experts also stress that the concept is too vague, and there is little reflection in the literature on the interpretation and meaning of the term.Footnote63

While the logical coherence of the human dignity argument is disputed, it underscores the role of moral instincts as one of the potential drivers of the public aversion to “killer robots.” The 2020 Ipsos survey provided some empirical evidence for the centrality of these instincts in public attitudes. Approximately 66% of respondents who opposed LAWS expressed the belief that delegating lethal decision-making to machines would “cross a moral line.”Footnote64 If we assume that moral instincts play a role in shaping public attitudes to “killer robots,” we would expect to observe a negative correlation between individuals’ sensitivity to the infringement of human dignity and their support for the use of these weapon systems.

Experimental Design

To test these hypotheses, we designed an original survey experiment with vignettes describing a hypothetical UN-mandated multinational counterinsurgency operation.Footnote65 We divided the vignette into three parts. In the first part, we informed the participants that their country joined a multinational task force to bring an end to a Boko Haram insurgency in Nigeria and asked them about their views on the importance of counterinsurgency operations to U.S. national security. In the second part, we communicated that the task force received intelligence about a suspected insurgent training camp located in a nearby village. The commander in charge was deciding whether to use a remote-controlled or autonomous drone to eliminate the threat. We then asked how much they approved of countries conducting drone strikes abroad.Footnote66

In the third part, we outlined the difference between the commander’s options as follows: “A remote-controlled drone is an aircraft controlled remotely by a human pilot who decides which target to hit with the missile,” while “an autonomous drone is an aircraft controlled by a computer program that, once activated, decides which target to hit with the missile without further human input.” The latter description emphasizes the autonomy that LAWS display in the critical functions of target selection and engagement.Footnote67

Moreover, we informed our participants that while striking the insurgents from afar would keep the UN troops out of harm’s way, civilians on the ground may still be at risk of injury or death as they may be wrongly identified as targets. In the remaining part of the survey, we experimentally varied the risk of target misidentification associated with each weapon option and the explicit accountability of military officers involved in the strike. Respondents were randomly assigned to one of the five conditions: (1) control, (2) “equal risk,” (3) “equal risk + responsibility,” (4) “unequal risk,” and (5) “highly unequal risk.” The difference between our treatments is outlined in Table 1.

The control group did not include any additional information. In the four remaining groups, we noted that the data from previous operations showed that using one of the weapon options would result in either an equal or a greater risk of misidentifying civilians as insurgents. In the “equal risk + responsibility” group, we further informed the participants that there were military officers in the chain of command whose responsibility was to minimize such risks and who would be held accountable if the strike resulted in the unlawful killing of civilians. Throughout our treatments, we deliberately chose relatively conservative risk percentages to ensure the plausibility of the scenario. We made this decision because launching a military strike with a higher probability of civilian fatalities could be perceived as inherently indiscriminate. The values are partially informed by data from the Bureau of Investigative Journalism on U.S. drone strikes, which suggests that approximately one in eight fatalities is a civilian fatality.Footnote68

After the participants read all three parts of the vignette, we asked whether they preferred the commander to conduct the strike using the remote-controlled or autonomous drone.Footnote69 Consistent with previous research investigating public attitudes to nuclear weapons use, our analysis relies on the binary preference variable.Footnote70

Furthermore, we included an open-ended question to investigate the reasoning behind our participants’ drone preferences, along with a manipulation check that asked about the percentage risk mentioned in the third part of the fictional scenario.Footnote71 The participants’ responses to the open-ended question, in particular, offer additional insights into the potential mechanisms underlying the aversion to LAWS when all else is equal. The survey questionnaire also included a battery of pre-treatment questions on age, gender, income, education, and political orientation. Following Michael C. Horowitz, we also inquired about our participants’ attitudes toward robots (a 6-point scale from “very positive” to “very negative”). These variables serve as controls in our regression models.Footnote72 Lastly, after filling out their responses, the participants read a short debrief to counteract the potential conditioning effects of the experiment.Footnote73

We fielded the survey through the online polling platform Prolific to a sample of 999 U.S. adult citizens between June 7 and June 9, 2022.Footnote74 Surveying the American public has distinct policy relevance, considering the United States’ leading role in developing these technologies. To increase the representativeness of our sample, we used quotas for gender and party identification (Republican, Democrat, and Independent), since the Prolific platform tends to attract participants who are more often male, younger, more liberal, and more educated.Footnote75 Despite implementing these quotas, our sample differs somewhat from the U.S. population in terms of educational attainment and party identification.Footnote76

The experimental design allowed us to test hypothesis 1 by examining the variations in public attitudes across the control, “equal risk,” “unequal risk,” and “highly unequal risk” groups. Similarly, we tested hypothesis 2 through the experimental design by examining the variations in public attitudes across the “equal risk” and “equal risk + responsibility” conditions. However, for hypothesis 3, we opted for a correlational design. This decision was motivated by the assumption that the concern about human dignity is non-contingent or inherent to autonomous technology, and, therefore, independent of the presented context, except for the type of weapon used.

To test hypothesis 3, we administered a follow-up survey to the same sample of U.S. participants one month later. Here, we asked the participants three questions related to human dignity. The first two questions asked how much they agreed with the statements that “even terrorists should be treated with dignity” and that “only humans should be allowed to kill other humans” (a 6-point scale from “strongly disagree” to “strongly agree”). We then presented the participants with another hypothetical scenario describing a situation conceptually analogous to that described in the main experiment. Our respondents read that their government was considering the introduction of a new execution method in the prison system, which would involve the use of an autonomous execution module.Footnote77 The logic of this example is similar to the use of LAWS in that a machine decides about human life. After they read the scenario, the respondents indicated how much they agreed with several randomized statements, including the statement that the new method would “present a more serious violation of prisoners’ dignity.” We used these three questions to create the “human dignity concern” variable (1 = minimum concern to 6 = maximum concern) by taking the simple average of the three items.
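As an illustration of how a composite of this kind can be computed, the minimal sketch below averages three 6-point items into a single score. The data and column names are hypothetical stand-ins for the survey variables, not the study's actual data.

```python
import pandas as pd

# Hypothetical item names standing in for the three survey questions described above
# (each coded on a 6-point scale); the actual variable labels may differ.
items = ["dignity_terrorists", "only_humans_kill", "execution_module_dignity"]

# Toy responses for three illustrative respondents.
df = pd.DataFrame({
    "dignity_terrorists":       [5, 2, 6],
    "only_humans_kill":         [4, 1, 6],
    "execution_module_dignity": [6, 3, 5],
})

# "Human dignity concern" as the simple average of the three items
# (1 = minimum concern to 6 = maximum concern).
df["human_dignity_concern"] = df[items].mean(axis=1)
print(df["human_dignity_concern"])
```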

In addition to the main experiment and the follow-up on human dignity, we conducted two supplementary surveys with different samples of respondents, utilizing the same polling platform and quotas. In a first supplementary survey, involving 300 U.S. citizens, we presented participants with the same hypothetical scenario as our control group in the main experiment. They were asked to provide their estimates of the risk of target misidentification associated with each drone option, and subsequently, we inquired about their preference for either the remote-controlled or autonomous drone. The survey results provide us with a deeper understanding of the public’s baseline beliefs about the accident-prone nature of the technology.Footnote78

In a second supplementary survey, involving 1,037 U.S. citizens, we randomly assigned the respondents to one of three conditions identical to the control, “equal risk,” or “equal risk + responsibility” conditions from the main experiment. We then asked them about the perceived differences between remote-controlled and autonomous drones in several aspects of their hypothetical use: legal accountability, moral responsibility, military effectiveness, force restraint, costs, ethicality, and human dignity. The respondents rated the remote-controlled and autonomous drones using a 5-point scale, indicating whether one is better, worse, or equal to the other in these aspects. We asked half of the participants whether they preferred the remote-controlled or autonomous option before answering the questions about perceived differences, and the other half after answering those questions. We randomized the sequence of questions in these parts of the survey. The results of this survey allowed us to conduct additional robustness checks and investigate other potential factors affecting public attitudes toward LAWS.Footnote79

Empirical Findings

Accident-proneness

First, we examined the participants’ preferences for the use of LAWS across treatments. Figure 1 indicates that the support for LAWS, measured by the preference for autonomous drones over remote-controlled ones, correlates with their precision relative to remote-controlled systems (i.e., non-LAWS). Only 7% of participants in the control group preferred the autonomous drone, suggesting a remarkably high baseline aversion to LAWS when the risk of target misidentification is not explicitly stated. As the risk of target misidentification increases for the remote-controlled drone relative to the autonomous one across the experimental groups, the proportion of participants preferring autonomous drones increases significantly.

Figure 1. “LAWS preference” by the experimental group.

Note: 95% CIs. N = 810. Lower N is due to the exclusion of our “equal risk + responsibility” treatment.

To test hypothesis 1, we conducted a series of logistic regressions and investigated whether providing our respondents with additional information on the risk of target misidentification had a statistically significant impact on “LAWS preference.” The results, depicted in Figure 2, reveal that respondents in the “equal risk” group were significantly more likely to prefer the autonomous drone over the remote-controlled drone than those in the control group (OR = 2.011, p = 0.034). This finding suggests that at least some participants may believe that the risk of misidentifying civilians is higher for the autonomous drone when not stated otherwise.Footnote80

Figure 2. Comparison of experimental treatments.

Note: Results of the logistic regression. 95% CIs. “Equal risk – Control” N = 418; “Unequal risk – Equal risk” N = 390; “Highly unequal risk – Unequal risk” N = 392. See Appendix 7 for full results and robustness checks.
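The following minimal sketch shows how a treatment comparison of this kind can be estimated as a logistic regression of the binary drone preference on the randomly assigned condition, with exponentiated coefficients read as odds ratios. The data are simulated and all variable names are hypothetical; this is an illustration of the general approach, not the authors' code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 418  # roughly the size of the control vs. "equal risk" comparison

# Simulated stand-in data: 'group' is the randomly assigned condition and
# 'laws_pref' is 1 if a participant preferred the autonomous drone.
df = pd.DataFrame({"group": rng.choice(["control", "equal_risk"], size=n)})
base_p = {"control": 0.07, "equal_risk": 0.13}
df["laws_pref"] = rng.binomial(1, df["group"].map(base_p).to_numpy())

# Logistic regression of drone preference on the treatment indicator; exponentiated
# coefficients are odds ratios, analogous to the comparisons reported in Figure 2.
model = smf.logit("laws_pref ~ C(group, Treatment(reference='control'))",
                  data=df).fit(disp=0)
print(np.exp(model.params).round(3))  # odds ratios
print(model.pvalues.round(3))
```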


Most importantly, we find that when the risk of target misidentification is modestly higher for the remote-controlled drone option, as in the “unequal risk” treatment, our participants are much more likely to prefer the autonomous drone, even when compared to the “equal risk” treatment (OR = 10.24, p < 0.001). This finding suggests that public support for LAWS is significantly contingent on knowledge of the relative risk. However, informing the participants that using the autonomous drone would entail a substantially lower risk of target misidentification did not completely mitigate the aversion to their use. Our participants in the “highly unequal risk” group were not significantly more likely to prefer LAWS than those in the “unequal risk” group (OR = 1.241, p = 0.308). In Appendix 7, we show that these findings hold when subjected to various robustness checks.

Our priming exercise regarding the relative risk of target misidentification highlights the considerable degree of elasticity of public attitudes to “killer robots.” However, these findings do not definitively indicate that concerns about the accident-prone nature of the technology are the sole or primary cause of the aversion to LAWS. To explore this possibility further, we turn to our participants’ responses to the open-ended question, in which we inquired about the reasoning behind their choice. We find that the majority of participants in the control group who preferred the remote-controlled drone option expressed such concerns. For instance, they frequently mentioned that having a human in the loop for target selection and engagement would be less likely to result in an indiscriminate attack:

“I think there may be less error because a real person could probably better identify innocent civilians vs the ‘bad guys’.”

“I am more comfortable giving control of life and death situations to a trained human being rather than an autonomous program. I am concerned the autonomous drone will not be able to distinguish between enemies and civilians.”

“Humans, in this situation, have better ability to understand and set the target than a robot.”

Additionally, a considerable number of respondents voiced a general lack of trust in the reliability and appropriate level of sophistication of the technology when it came to making targeting decisions. These responses indicate that some participants perceive such systems as inherently more prone to accidents compared to human-operated systems:

“I don’t believe AI is advanced enough to properly distinguish between targets yet.”

“[I chose the remote-controlled drone because] at least there’s a human making the ultimate decision in identifying targets, which I trust more than an algorithm.”

“A remote-controlled drone makes it less likely for the drone to perform unexpected actions.”

To gain a deeper understanding of the public’s baseline perceptions regarding the relative risk of target misidentification, we analyzed the results of our survey on risk estimates, fielded to a different sample of 300 U.S. citizens. We presented the participants with the same scenario as our control group in the main experiment and asked them to estimate the risk of target misidentification for each of the two types of drones. The results of a paired t-test reveal that our participants systematically estimated the risk to be higher for the autonomous drone (around 55%) than for the remote-controlled drone (around 37%).Footnote81
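A minimal sketch of such a paired comparison, using simulated estimates centered on the averages reported above rather than the actual survey responses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated stand-in estimates (in percent) for 300 respondents; the real
# risk-estimate data are documented in Appendix 8.
risk_autonomous = np.clip(rng.normal(55, 20, 300), 0, 100)
risk_remote = np.clip(rng.normal(37, 20, 300), 0, 100)

# Paired t-test of the within-respondent difference between the two estimates.
t_stat, p_value = stats.ttest_rel(risk_autonomous, risk_remote)
print(f"mean difference = {np.mean(risk_autonomous - risk_remote):.1f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```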

After obtaining the risk estimates from our participants, we proceeded to inquire about their drone preference. We calculated a “risk estimate difference” measure by subtracting the risk estimate for the remote-controlled drone from the risk estimate for the autonomous drone. We used this measure as a predictor of “LAWS preference.” The results of the logistic regression reveal a statistically significant and positive association between the “risk estimate difference” and the dependent variable.Footnote82 Thus, participants who estimated the risk to be lower for the autonomous drone relative to the remote-controlled drone were significantly more likely to prefer LAWS. However, as evident from Figure 3, participants who believed the risk to be roughly equal were still more likely to prefer the remote-controlled option.

Figure 3. Adjusted predictions of risk estimate difference.

Note: A plot of predictive margins based on the results of a logistic regression. 95% CIs. N = 300. See Appendix 8 for full results.
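The sketch below illustrates, with simulated data and hypothetical variable names, how a difference measure of this kind can be constructed and used as a predictor in a logistic regression, with predicted probabilities computed across its range in the spirit of Figure 3. The simulated relationship only mimics the substantive pattern described above, not the reported estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300

# Hypothetical per-respondent risk estimates (in percent) for the two drone options.
df = pd.DataFrame({
    "risk_autonomous": np.clip(rng.normal(55, 20, n), 0, 100),
    "risk_remote":     np.clip(rng.normal(37, 20, n), 0, 100),
})

# "Risk estimate difference": autonomous estimate minus remote-controlled estimate,
# as described above (the sign of the coefficient depends on this coding choice).
df["risk_diff"] = df["risk_autonomous"] - df["risk_remote"]

# Simulated so that respondents who see the autonomous drone as the riskier option
# are less likely to prefer it.
p = 1 / (1 + np.exp(-(-1.0 - 0.05 * df["risk_diff"])))
df["laws_pref"] = rng.binomial(1, p.to_numpy())

model = smf.logit("laws_pref ~ risk_diff", data=df).fit(disp=0)

# Predicted probability of preferring the autonomous drone across the observed
# range of the difference measure, in the spirit of the margins plot in Figure 3.
grid = pd.DataFrame({"risk_diff": np.linspace(df["risk_diff"].min(),
                                              df["risk_diff"].max(), 5)})
print(grid.assign(predicted=model.predict(grid).round(3)))
```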


Overall, our study provides compelling evidence in support of hypothesis 1. The results show that presenting the public with scenarios in which autonomous drones are less prone to wrongly identifying civilians as targets than human-operated systems significantly increases the support for their use. Public attitudes toward LAWS, therefore, appear to be contingent upon knowledge of the relative risk of target misidentification. In addition, the frequent mention of concerns about the inadequate level of technological sophistication and the inferior ability of algorithms to distinguish between civilians and targets in the open-ended responses suggests that beliefs about the accident-prone nature of the technology could be one of the major drivers behind the baseline aversion to “killer robots.” This is further substantiated by the results of the survey on risk estimates, which reveal that the public consistently perceives the risk to be higher for the autonomous drone, and that these beliefs affect their preferences.

Responsibility Gaps

To test hypothesis 2, we initially examined the differences in preferences among our participants in the “equal risk” and “equal risk + responsibility” treatments from the main experiment. Figure 4 illustrates that the preference for using LAWS was roughly equal between the two groups. The results of the logistic regression of “LAWS preference” reveal that informing the participants in the “equal risk + responsibility” group about the presence of accountable military officers did not significantly increase their preference for the autonomous drone compared to the participants in the “equal risk” group.Footnote83

Figure 4. “LAWS preference” by the experimental group.

Note: Error bars represent 95% CIs. N = 391. Lower N is due to the exclusion of the control group and our “unequal risk” and “highly unequal risk” treatments.


This null finding should not be interpreted as evidence that responsibility did not matter to our participants at all. For instance, it is possible that informing the participants about the presence of accountable military officers in the chain of command simply did not do enough to mitigate such concerns. Nevertheless, responses to the open-ended question rarely mentioned responsibility concerns.Footnote84

The lack of a statistically significant effect of our responsibility prime and the infrequent mention of concerns about responsibility attribution in write-in responses may be due to prior beliefs that someone will be held accountable even when LAWS are used. Alternatively, the public may believe that it would be equally difficult to hold someone accountable whether a remote-controlled or an autonomous drone were used. To explore this possibility, we examine the results of our survey on the perceived differences between remote-controlled and autonomous drones. Figure 5 reveals that most respondents (57%) found it more challenging to attribute legal accountability for civilian fatalities caused by autonomous drones compared to remote-controlled drones. Similarly, most respondents (55%) found it more difficult to assign moral responsibility for civilian fatalities caused by autonomous drones compared to remote-controlled ones.Footnote86

Figure 5. Perceived differences between remote-controlled and autonomous drones.Footnote85

Note: N = 1,037. The figure presents the aggregated results for all experimental treatments. See Appendix 14 for full results.


However, despite the public perception that responsibility attribution is more challenging for autonomous drones than remote-controlled ones, this did not significantly impact support for LAWS. Variables measuring a difference in legal accountability or moral responsibility between autonomous and remote-controlled drones proved statistically insignificant as predictors of “LAWS preference.”Footnote87

On balance, we found no evidence in support of hypothesis 2. Informing the participants about the presence of accountable military officers in the chain of command did not increase the support for LAWS. Furthermore, the infrequent mention of responsibility concerns in the responses to the open-ended question suggests that the issue of holding someone accountable for civilian fatalities caused by autonomous drones does not automatically come to mind for ordinary citizens. However, our null findings should not be interpreted as evidence that the issue of responsibility is completely irrelevant to the public. When asked whether it would be more difficult to hold someone legally accountable or morally responsible for civilian deaths caused by remote-controlled or autonomous drones, most respondents recognized that “killer robots” may pose greater challenges. Nevertheless, individuals who held such beliefs were no more or less likely to support the hypothetical use of LAWS than those who did not.

Human Dignity

Finally, to test hypothesis 3, we turn to the results of the follow-up survey, utilizing the “human dignity concern” measure as a predictor of “LAWS preference.” We were able to follow up with 836 out of 999 participants who partook in the main experiment.Footnote88 Figure 6 shows a coefficient plot for three logistic regression models. In Model 1, we use our variable “human dignity concern” as the predictor. In Model 2, we control for several socio-demographic variables and political orientation. In Model 3, we additionally control for “attitudes toward robots” and “approval of drone strikes.”

Figure 6. Logistic regression of “LAWS preference”.

Note: Results of the logistic regression. 95% CIs. Model 1 N = 836. Model 2 N = 826. Model 3 N = 826. Lower N is due to missing observations for certain demographic variables. See Appendix 11 for full results and robustness checks.
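A minimal sketch of this sequential specification, using simulated data and hypothetical variable names in place of the actual follow-up survey data documented in Appendix 11:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 836

# Simulated stand-in data with hypothetical variable and scale names.
df = pd.DataFrame({
    "dignity_concern": rng.uniform(1, 6, n),
    "age":             rng.integers(18, 80, n),
    "female":          rng.integers(0, 2, n),
    "income":          rng.integers(1, 10, n),
    "education":       rng.integers(1, 6, n),
    "conservatism":    rng.integers(1, 8, n),
    "robot_attitudes": rng.integers(1, 7, n),
    "drone_approval":  rng.integers(1, 6, n),
})
# Preference simulated with a negative association with dignity concern.
p = 1 / (1 + np.exp(1.0 + 0.4 * (df["dignity_concern"] - 3.5)))
df["laws_pref"] = rng.binomial(1, p.to_numpy())

# Model 1: dignity concern only; Models 2 and 3 add socio-demographic and
# attitudinal controls, mirroring the sequence described above.
m1 = smf.logit("laws_pref ~ dignity_concern", data=df).fit(disp=0)
m2 = smf.logit("laws_pref ~ dignity_concern + age + female + income + education"
               " + conservatism", data=df).fit(disp=0)
m3 = smf.logit("laws_pref ~ dignity_concern + age + female + income + education"
               " + conservatism + robot_attitudes + drone_approval", data=df).fit(disp=0)

for name, m in [("Model 1", m1), ("Model 2", m2), ("Model 3", m3)]:
    print(name, round(m.params["dignity_concern"], 3),
          round(m.pvalues["dignity_concern"], 3))
```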


The results reveal that the “human dignity concern” variable attains a statistically significant and negative association with “LAWS preference” across all models.Footnote89 On average, respondents who scored higher on the “human dignity concern” measure were more likely to prefer the remote-controlled drone than the autonomous one.Footnote90

To further evaluate the relative significance of human dignity compared to other factors, we ran a logistic regression of “LAWS preference” with an interaction term between the “human dignity concern” variable and our experimental groups (see Appendix 12). Figure 7 indicates that the relationship between “human dignity concern” and “LAWS preference” is conditional on the experimental treatment.Footnote91 These results reveal an important tradeoff in our participants’ choice: When the risk of target misidentification is even modestly higher for the remote-controlled drone relative to the autonomous drone, on balance, individuals appear to be willing to sideline potential concerns about the inherently undignified nature of automated killing.

Figure 7. Adjusted predictions of human dignity concern by experimental group.

Note: Interaction plot based on the results of a logistic regression. 95% CIs. N = 679. Lower N is due to the exclusion of our “equal risk + responsibility” treatment for better interpretability. See Appendix 12 for full results.
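For illustration, the sketch below fits a logistic regression with an interaction between a dignity-concern score and the assigned condition and then computes adjusted predictions, roughly analogous to Figure 7. The data are simulated and the variable names hypothetical; the simulated pattern only mimics the conditional relationship described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 679
groups = ["control", "equal_risk", "unequal_risk", "highly_unequal_risk"]

# Simulated stand-in data with hypothetical variable names.
df = pd.DataFrame({
    "group": rng.choice(groups, size=n),
    "dignity_concern": rng.uniform(1, 6, n),
})
# Simulated so that dignity concern suppresses LAWS preference mainly when the
# autonomous drone offers no risk advantage.
risk_bonus = df["group"].map({"control": 0.0, "equal_risk": 0.5,
                              "unequal_risk": 2.5, "highly_unequal_risk": 2.8})
eta = -2.5 + risk_bonus - 0.4 * (df["dignity_concern"] - 3.5) * (risk_bonus < 1)
df["laws_pref"] = rng.binomial(1, (1 / (1 + np.exp(-eta))).to_numpy())

# Logistic regression with an interaction between dignity concern and treatment.
model = smf.logit(
    "laws_pref ~ dignity_concern * C(group, Treatment(reference='control'))",
    data=df).fit(disp=0)

# Adjusted predictions at low and high dignity concern within each group.
grid = pd.DataFrame([(g, d) for g in groups for d in (1.0, 6.0)],
                    columns=["group", "dignity_concern"])
print(grid.assign(predicted=model.predict(grid).round(3)))
```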


Despite these findings, it is still possible that concerns about the inherent unethicality of “killer robots” drive the aversion to LAWS when all else is equal. To investigate this possibility, we examine the answers to our open-ended question in the control group. The write-in responses to our open-ended question reveal that at least some of our participants preferred the commander to use a remote-controlled drone because they believed, as a matter of principle, that lethal decision-making should have a human origin. In some cases, the participants also expressed concerns about the absence of distinctly human qualities, such as compassion, in the process of algorithmic decision-making. Nevertheless, we found no substantiated evidence to indicate that participants directly connected their rationale to the concept of human dignity or the undignified nature of automated killing:

“If the drone is killing people then it needs to be done by human hands.”

“[…] With a computer doing it, they don’t feel emotion and wouldn’t care who the target was or wasn’t.”

“I would rather have a human being involved. Humans have compassion, computers don’t.”

“[The autonomous drone] is more likely to kill innocent civilians than a drone under the control of a human that can make the moral choice not to fire.”Footnote92

Overall, these responses reveal that while some participants were concerned about the unethicality of ceding the decisions to kill to algorithms, they did not necessarily think of such concerns in terms of violations of human dignity per se. This claim is further supported by the results of our last survey on the perceived differences between remote-controlled and autonomous drones (see Figure 8). Although a significant number of respondents (41%) indicated that being killed by a computer program in the autonomous drone is less ethical than being killed by a human piloting the remote-controlled drone, the vast majority of respondents (71%) saw no difference between the two types of drones when it comes to human dignity.Footnote93 This finding suggests that our participants perceived the ethical aspects of LAWS use as disconnected from the issue of human dignity.

Figure 8. Perceived differences between remote-controlled and autonomous drones.

Note: N = 1,037. The figure presents the aggregated results for all experimental treatments. See Appendix 14 for full results.


Overall, our results provide only very limited evidence in support of hypothesis 3. While we found that individuals who were sensitive to violations of human dignity exhibited a greater opposition to LAWS use on average, these attitudes were still significantly contingent on other factors, particularly the risk of target misidentification. When these individuals were presented with scenarios where using an autonomous drone carries a lower risk of hitting the wrong target than using a remote-controlled drone, their aversion to LAWS was less pronounced. Furthermore, the open-ended responses in the control group suggest that while the all-else-equal aversion to “killer robots” could be partially driven by concerns about the inherent immorality of automated killing, our participants did not think about these concerns in terms of human dignity violations per se. As is evident from our survey on the perceived differences between remote-controlled and autonomous drones, the vast majority of individuals believe that being killed by a remote-controlled or autonomous drone is equally undignified.

Other Considerations

In addition to legal accountability, moral responsibility, human dignity, and ethicality, our survey on the perceived differences also inquired about other potential concerns related to the use of LAWS: risk of target misidentification, military effectiveness, costs, and force restraint.Footnote94 Using a 5-point scale, respondents indicated whether autonomous drones are better, worse, or equal to remote-controlled drones in each aspect. To assess the relative importance of these factors, we conducted a logistic regression of “LAWS preference,” using the eight measures of perceived differences as predictors.Footnote95
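The following minimal sketch shows how such a regression on the full set of perceived-difference measures can be specified. The item names are hypothetical placeholders for the eight measures, and the data are simulated to mimic the reported pattern rather than reproduce it.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 160

# Hypothetical names for the eight perceived-difference items (5-point scales);
# the actual wording and coding are documented in Appendix 14.
aspects = ["misidentification", "effectiveness", "restraint", "costs",
           "accountability", "responsibility", "ethicality", "dignity"]
df = pd.DataFrame({a: rng.integers(1, 6, n) for a in aspects})

# Preference simulated to depend mainly on perceived precision, effectiveness,
# and ethicality.
eta = (-7.0 + 0.9 * df["misidentification"]
       + 0.5 * df["effectiveness"] + 0.5 * df["ethicality"])
df["laws_pref"] = rng.binomial(1, (1 / (1 + np.exp(-eta))).to_numpy())

# Logistic regression of drone preference on all eight perceived-difference measures.
model = smf.logit("laws_pref ~ " + " + ".join(aspects), data=df).fit(disp=0)
print(np.exp(model.params).round(2))  # odds ratios
```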

The results of this analysis are shown in Figure 9. The findings indicate that our participants were more likely to prefer the autonomous drone over the remote-controlled drone when they believed that it carried a lower likelihood of target misidentification, that it was more likely to accomplish mission objectives, and that killing carried out by a computer program would be more ethical than killing performed by a human operator.

Figure 9. Logistic regression of “LAWS preference.”

Note: Results of the logistic regression. 95% CIs. Model 1 N = 162. Model 2 and Model 3 N = 160. Lower N is due to the exclusion of our “equal risk” and “equal risk + responsibility” treatments and the exclusion of participants who received the preference question before answering the questions on the perceived differences. See Appendix 14 for full results.


These findings provide clues about possible mechanisms underlying the aversion to LAWS when all else is equal. Respondents opposed “killer robots” partially because they believed these systems would be more prone to making mistakes in target selection and engagement, less militarily effective, and less ethical than human-operated systems. While the absence of a statistically significant association for the other variables does not necessarily mean that these factors do not matter to our participants, they certainly matter less, on average, when it comes to the choice between remote-controlled and autonomous drones in our scenario.

Conclusion

In this research article, we have addressed a question of both scholarly and policy interest: What factors affect the elasticity of public attitudes to lethal autonomous weapon systems (LAWS), or “killer robots” as they are commonly known? First, we found that while the baseline aversion to LAWS is remarkably high, public attitudes are also considerably elastic. The public opposition to these weapon systems is significantly contingent on their error-proneness relative to human-operated systems. Consequently, when the public is presented with scenarios in which the use of LAWS carries a lower risk of target misidentification compared to human-operated systems, we observe a rapid and significant shift in preference toward using “killer robots.” The frequent mention of concerns about the insufficient level of technological sophistication in open-ended responses, along with the results of the supplementary survey on risk estimates, further suggest that beliefs about the accident-prone nature of the technology could constitute one of the main mechanisms underlying the baseline aversion to LAWS.

Second, we found no evidence in support of the proposition that the explicit mention of command responsibility can alleviate opposition to LAWS. Finally, we found limited evidence in support of the idea that non-contingent concerns about the undignified nature of automated killing increase public opposition to these systems. On average, the respondents who scored higher on our “human dignity concern” measure were less likely to prefer LAWS. However, additional analysis reveals that many participants are willing to set these concerns aside when the risk of target misidentification is even modestly higher for the remote-controlled drone compared to the autonomous drone. Overall, our findings indicate that among the three factors explored in this study, concerns related to the accident-prone nature of autonomous systems have the strongest association with the attitudes of the U.S. public to the hypothetical use of LAWS.

Our study has distinct policy implications for the current international efforts to impose limitations or prohibitions on “killer robots.” The significant elasticity of attitudes toward the military use of LAWS, demonstrated through our primes on the risk of target misidentification, suggests that such regulations may be difficult to sustain in the long run. If LAWS eventually prove to be more reliable at target discrimination than human-operated systems, public demand for their use may well increase.

In our view, this does not automatically discount the value of regulatory measures. First, there is no guarantee that the technology will advance enough to outperform human decision-makers. Second, our findings regarding the public’s sensitivity to violations of human dignity, the write-in responses to the open-ended question, as well as the results of our survey on the perceived differences suggest that at least some part of the aversion still appears to be driven by non-contingent concerns. As argued elsewhere, arguments about the unpredictable and indiscriminate nature of “killer robots” may prove effective in mobilizing the public in support of regulatory measures in the short run.Footnote96 However, such a framing strategy would remain vulnerable to shifts in public opinion caused by technological change. An approach combining arguments about the inherent immorality and the accident-prone nature of LAWS in a balanced fashion therefore appears to be the most advantageous for mobilizing broader segments of the public in support of regulation.

While illuminating with respect to some of the influential factors behind public attitudes to “killer robots,” our study is not without limitations. Our “human dignity concern” measure may not capture all aspects underlying the human dignity argument. Future studies could develop a more nuanced measure by incorporating other conceptually analogous situations. Furthermore, we have only examined the attitudes of the U.S. public, but attitudes to “killer robots” and the relative importance of the different factors affecting them may differ across countries. Lastly, the attitudes of specific groups, such as political elites and the military, may also differ substantially from the general public. These limitations of our study present potentially intriguing avenues for future research.
