
17 April 2019

Respect for Persons and the Ethics of Autonomous Weapons and Decision Support Systems

C. Anthony Pfaff

INTRODUCTION

Last spring, Google announced it would not partner with the Department of Defense’s Project Maven, which sought to harness the power of artificial intelligence (AI) to improve intelligence collection and targeting. Google’s corporate culture, which one employee characterized as “don’t be evil,” attracted people who opposed any arrangement in which their research would be applied to military and surveillance purposes. As a result, Google had to choose between keeping these talented and skilled employees and losing potentially hundreds of millions of dollars in defense contracts. Google chose the former.[1] Later that fall, the European Union called for a complete ban on autonomous weapon systems.[2] In fact, several organizations and researchers working in artificial intelligence have signed a “Lethal Autonomous Weapons Pledge” that commits its signatories not to develop machines that can decide to take a human life.

The ethical problems associated with lethal autonomous weapons are not going to go away as the development, acquisition, and employment of artificially intelligent systems challenge the traditional norms associated not just with warfighting but with morality in general.[4] Among the many concerns raised about lethal autonomous weapon systems driven by artificial intelligence is that they will dehumanize warfare.[5] On the surface, this seems like an odd case to make. War may be a human activity, but it rarely feels to those involved like a particularly humane one, bringing out the worst in humans more often than it brings out the best. Moreover, lethal autonomous weapons and decision support systems are often not only more precise than their human counterparts; they also do not suffer from the anger, desire for revenge, frustration, and other emotions that give rise to war crimes. So, if these systems can reduce some of the cruelty and pain war inevitably brings, then it is reasonable to question whether dehumanizing war is really a bad thing. As Paul Scharre notes, the complaint that respecting human dignity requires that only humans make decisions about killing “is an unusual, almost bizarre critique of autonomous weapons.” He adds, “There is no legal, ethical, or historical tradition of combatants affording their enemies the right to die a dignified death in war.”[6]

Scharre’s response, however, misses the point. He is correct that artificial-intelligence systems do not represent a fundamentally different way for enemy soldiers and civilians to die than the ways human soldiers are already permitted to kill them. The concern here, however, is not that death by robot is a more horrible outcome than death at the hands of a human who pulls the trigger. Rather, it has to do with the nature of morality itself and the central role that respect for persons, understood in the Kantian sense as something moral agents owe each other, plays in forming our moral judgments.

KILLING AND RESPECT FOR OTHERS

Immanuel Kant (Wikimedia)

Drawing on Kant, Robert Sparrow argues that respect for persons entails that, even in war, one must acknowledge the personhood of those with whom one interacts, including the enemy. Acknowledging that personhood requires that whatever one does to another be done intentionally, with the knowledge that the act, whatever it is, affects another person.[7] This relationship does not require communication, or even one actor’s awareness that he or she may be acted upon by another. It requires only that the reasons actors give for any act that affects another human being take into account the respect owed to that particular human being. To make life-and-death decisions absent that relationship subjects human beings to an impersonal and pre-determined process, and subjecting human beings to such a process is disrespectful of their status as human beings.

Thus, a concern arises when non-moral agents impose moral consequences on moral agents. Consider, for example, an artificially intelligent system that renders legal judgments on human violators. It is certainly conceivable that engineers could design a machine able to consider a larger quantity and variety of data than a human judge could. The difficulty with the judgment the machine renders, however, is that the machine cannot put itself in the position of the person it is judging and ask, “If I were in that person’s circumstances, would I have done the same thing?” It is the inability not only to empathize but also to employ that empathy to generate additional reasons to act (or not act) that makes the machine’s judgment impersonal and pre-determined.[8] Absent an interpersonal relationship between judge and defendant, defendants have little ability to appeal to the range of sensibilities human judges may draw on to get beyond the letter of the law and decide in their favor. In fact, the European Union has enshrined the right of persons not to be subject to decisions based solely on automated data processing. In the United States, a number of states limit the applicability of computer-generated decisions and typically ensure an appeals process in which a human makes any final decision.[9]

This ability to interact with other moral agents is thus central to treating others morally. Being in an interpersonal relationship allows all sides to give and take reasons regarding how they are to be treated by the other and to take up relevant factors they may not have considered beforehand.[10] In fact, what might distinguish machine judgments from human ones is the human ability to establish what is relevant as part of the judicial process rather than in advance. That ability is what creates space for sentiments such as mercy and compassion to arise. This point is why only persons, so far at least, can show respect for other persons.

So, if it seems wrong to subject persons to legal penalties based on machine judgment, it seems even more wrong to subject them to life-and-death decisions based on machine judgment. A machine might be able to enforce the law, but it is less clear whether it can provide justice, much less mercy. Sparrow further observes that what distinguishes murder from justified killing cannot be expressed by a “set of rules that distinguish murder from other forms of killing, but only by its place within a wider network of moral and emotional responses.”[11] Rather, combatants must “acknowledge the morally relevant features” that render another person a legitimate target for killing. In doing so, they must also grant the possibility that the other person may have the right not to be attacked by virtue of their non-combatant status or some other morally relevant feature.[12]

The concern here is not whether using robots obscures moral responsibility; rather, it is that the employment of artificial-intelligence systems obscures the good humans can do, even in war. Because humans can experience mercy and compassion, they can choose not to kill, even when, all things being equal, it may be permissible.

ACTING FOR THE SAKE OF OTHERS: JUSTICE, FAIRNESS, AND AUTONOMOUS WEAPONS

The fact that systems driven by artificial intelligence cannot have the kind of interpersonal relationships necessary for moral behavior accounts, in part, for much of the opposition to their use.[13] If it is wrong to treat persons as mere means, then it seems wrong to place a mere means in a position to decide how to treat persons. One problem with this line of argument, which Sparrow recognizes, is that not all employment of autonomous systems breaks the relevant interpersonal relationship. To the extent humans still make the decision to kill or to act on the output of a decision support system, they maintain respect for the persons affected by those decisions.

However, even with semi-autonomous weapons, some decision-making is taken on by the machine, mediating, if not breaking, the interpersonal relationship. Here Scharre’s point is relevant. Morality may demand an interpersonal relationship between killer and killed, but, as a matter of practice, few persons in those roles directly encounter one another. An Islamic State fighter would have no idea whether the bomb that struck him was the result of a human or a machine process; therefore, it does not seem to matter much which one it was. A problem remains, however, regarding harm to noncombatants. While, as a practical matter, they have no more experience of an interpersonal relationship than a combatant does in most cases, it still seems wrong to subject decisions about their lives and deaths to a lethal artificial-intelligence system, just as it would seem wrong to subject decisions about one’s liberty to a legal artificial-intelligence system. Moreover, as the legal analogy suggests, it seems wrong even if the machine’s judgment were the correct one.

This legal analogy, of course, has its limits. States do not have the same obligations to enemy civilians that they do towards their own. States may be obligated to ensure justice for their own citizens without being so obligated to the citizens of other states. There is a difference, however, between promoting justice and avoiding injustice. States may not be obligated to ensure justice within another state; they must still avoid acting unjustly toward that other state’s citizens, even in war. So, if states would not employ autonomous weapons on their own territory, then they should not employ them in enemy territory.[14]

Of course, while states may choose not to employ lethal autonomous weapons on their own territory in conditions of peace, the technology could reach the point where they would employ such systems under conditions of war precisely because those systems are less lethal. If that were the case, then the concern regarding the inherent injustice of systems driven by artificial intelligence could be partially resolved. Still, it is not enough that a state treat enemy civilians by the same standards it applies to its own citizens. States frequently use their own citizens as mere means, so we would want a standard for that treatment that maintains respect for persons.

As Isak Applbaum argues, “If a general principle sometimes is to a person’s advantage and never is to that person’s disadvantage, then actors who are guided by that principle can be understood to act for the sake of that person.”[15] So, to the extent that systems driven by artificial intelligence make targeting more precise than human-driven ones and reduce the likelihood that persons will be killed out of revenge, rage, frustration, or plain fatigue, their employment would not put any person at more risk than if those systems were not employed. To the extent that is the case, states are arguably at least permitted, if not obligated, to use them. Because employing these systems under such conditions constitutes acting for the sake of those persons, it also counts as a demonstration of respect towards those persons, even if the interpersonal relationship Sparrow described is mediated, if not broken, by the machine.

“An Act of Compassion” (Paul Stivers)

CONCLUSION

What this analysis has shown is that the arguments for considering military artificial-intelligence systems, even fully autonomous ones, mala in se are on shakier ground than those that permit their use. It is possible to demonstrate respect for persons even in cases where the machine is making all the decisions. This point suggests it is possible to align effective development of artificial-intelligence systems with our moral commitments and to conform to the war convention.

Thus, calls to eliminate or strictly limit the employment of such weapons are off base. If done right, the development and employment of such weapons can better deter war or, failing that, reduce the harms war causes. If done wrong, however, these same weapons can encourage militaristic responses when non-violent alternatives are available, result in atrocities for which no one is accountable, and desensitize soldiers to the killing they do. Doing it right means applying respect for persons not just when employing such systems but at all phases of the design and acquisition process, to ensure their capabilities improve our ability not only to reduce risk but also to demonstrate compassion.
