6 November 2023

Autonomous weapons are the moral choice

Thomas X. Hammes

To succeed in the battlespace, the United States must field autonomous weapons. This is the argument Deputy Secretary of Defense Kathleen Hicks made in a speech on August 28:

“To stay ahead, we’re going to create a new state of the art—just as America has before—leveraging attritable, autonomous systems in all domains—which are less expensive, put fewer people in the line of fire, and can be changed, updated, or improved with substantially shorter lead times.”

Many defense professionals agree with this statement, but a significant anti-autonomy coalition continues to argue that the use of lethal autonomous weapon systems (LAWS)—particularly drones—is immoral. From “slaughterbots” videos intended to inflame public fear to international conferences, these groups have argued strenuously that LAWS are simply not acceptable to a moral nation.

These groups are wrong. Indeed, it is morally imperative for the United States and other democratic nations to develop, field, and, if necessary, use autonomous weapons.

Autonomy foes deploy unpersuasive arguments

The Department of Defense distinguishes between “semi-autonomous” and “autonomous” weapons, but the line between the two is less clear than many might expect. With a semi-autonomous weapon, an operator must select the target but may then launch an advanced “fire and forget” munition that requires no line of sight. Even with notionally “autonomous” weapons, humans must still design, build, program, position, and arm the systems and determine the conditions under which to activate them. Opponents of autonomy nonetheless argue that LAWS remove human oversight from the process of killing, with the International Committee of the Red Cross (ICRC) asserting, for example, that it will be difficult to assign legal responsibility for the actions of an autonomous weapon.

Opponents also state that LAWS violate human dignity for a variety of reasons. The lack of human deliberation, they argue, means that an attack by an autonomous weapon is arbitrary and unaccountable. Further, they contend that these weapons will limit freedom, reduce the quality of life, and create suffering.

These arguments are unpersuasive. The ICRC notes: “Normally, the investigation will look into the person that fired the weapon, and the commanding officer who gave the order to attack.” The ICRC then goes on to ask “who will explain” an attack by an autonomous weapon on civilians, but the answer would be the same. In keeping with Western military concepts, commanders are responsible for actions taken by their forces and individual operators are responsible for the employment of their weapon. Remember that each of these weapons must be activated by a human operator.

As for the “loss of dignity” argument, it is difficult to see how being mistakenly or even intentionally killed by an autonomous weapon involves any less dignity than being killed by a human. In many accounts of conflict, killing has been driven by the fatigue, anger, or prejudices of the human pulling the trigger. History is also full of instances in which humans decided to use weapons that killed indiscriminately. Since humans will still be the ones to program and launch autonomous weapons, the human dignity argument does not hold.

Finally, all weapons are specifically designed to limit the freedom and reduce the quality of life of the targeted individual or group. And all weapons, when used, create suffering. Autonomous weapons are no different.

The Pentagon risks falling behind

The Department of Defense’s January 25, 2023, Directive 3000.09: Autonomy in Weapon Systems restates the policy that “autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” This is critical, since it essentially allows for the employment of LAWS.

Unfortunately, the heart of the directive concerns the approval process required before any such weapon can be developed and deployed. That bureaucratic process will be inherently slow and clearly risks the United States falling behind in this field. While creating a path to develop such weapons, the policy neither establishes a sense of urgency to deploy these systems nor addresses the ethical and moral imperative to do so rapidly. It never mentions the moral case for employing LAWS to protect US troops or to improve the probability of military success.

In her August 28 speech announcing the Replicator autonomous weapons initiative, Hicks said the goal is “to field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next eighteen-to-twenty-four months.” It remains to be seen whether this guidance will speed up the process for approving, producing, and fielding autonomous weapons.

LAWS are nothing new

Fully autonomous weapons are not only inevitable; they have been in the United States’ inventory since at least 1979, when it fielded the Captor anti-submarine mine, a moored capsule holding a torpedo that launched when onboard sensors confirmed a designated target was in range. Today, the United States holds a significant inventory of Quickstrike smart sea mines that, when activated, autonomously select their targets using onboard sensors. The US Navy’s Mark 48 ADCAP torpedo can operate with or without wire guidance, and it can use active and/or passive homing. In fact, the fastest-growing segment of the torpedo market is for autonomous torpedoes.

Autonomous anti-ship cruise missiles have been developed and fielded. Modern air-to-air missiles can lock on to a target after launch. More than ten nations operate the Israeli-developed Harpy, a fully autonomous drone that is programmed before launch to fly to a specified area and then hunt for a class of targets using electromagnetic sensors. The follow-on system, Harop, adds visual and infrared sensors. Harop was only the first of a rapidly growing family of weapons known as loitering munitions, which are designed to “loiter” over a battlefield until they can identify a target and then strike it. While many such munitions still require a human operator to select a target, they are essentially a software upgrade away from autonomy.

And, of course, victim-initiated mines (the kind one steps on or runs into) have been around for well over a century. These mines are essentially autonomous: they are unattended weapons that kill humans without another human making that decision. Despite strong international opposition and the Ottawa Convention (the Anti-Personnel Mine Ban Treaty), anti-personnel mines are still in use.

But even these primitive weapons are really “human starts the loop” weapons. A human designed the detonators to require a certain amount of weight to activate the mine. A human selected where to place the mines based on an estimate of the likelihood they would kill or maim the right humans. But once they are in place, they are fully autonomous. Thus, much like current autonomous weapons, a human sets the initial conditions and then allows the weapon to function automatically. The key difference between the traditional automatic mine and a smart, autonomous mine, like the Quickstrike, is that the smart mine attempts to discriminate between combatants and noncombatants. Dumb mines do not. Thus, it is fair to assume that smart mines are inherently less likely to harm noncombatants than older mines.
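
To make the “human starts the loop” idea concrete, here is a minimal, purely conceptual sketch in Python, with entirely hypothetical names and thresholds not drawn from any real system. It contrasts a victim-initiated trigger, which fires on a preset weight threshold alone, with a “smart” trigger that also attempts to discriminate by target class. In both cases a human fixes the conditions in advance; the emplaced device then decides on its own.

```python
# Purely illustrative sketch (all names and thresholds hypothetical):
# a human sets the activation conditions; the device then acts alone.
from dataclasses import dataclass


@dataclass
class Contact:
    weight_kg: float    # pressure sensed by the trigger
    target_class: str   # e.g. "vehicle", "person", "unknown"


def dumb_trigger(contact: Contact, min_weight_kg: float = 5.0) -> bool:
    """Victim-initiated trigger: fires on a preset weight threshold alone,
    with no attempt to tell combatants from noncombatants."""
    return contact.weight_kg >= min_weight_kg


def smart_trigger(contact: Contact, min_weight_kg: float = 5.0,
                  allowed_classes: tuple = ("vehicle",)) -> bool:
    """'Smart' trigger: the same human-set threshold, plus an attempt to
    discriminate by target class before firing."""
    return (contact.weight_kg >= min_weight_kg
            and contact.target_class in allowed_classes)


if __name__ == "__main__":
    pedestrian = Contact(weight_kg=70.0, target_class="person")
    truck = Contact(weight_kg=4000.0, target_class="vehicle")
    print(dumb_trigger(pedestrian), dumb_trigger(truck))    # True True
    print(smart_trigger(pedestrian), smart_trigger(truck))  # False True
```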

In short, arguments about whether democratic nations should field and employ LAWS miss the point. Democratic nations have used autonomous weapons for decades—in multiple domains and in large numbers.

The drone war has changed

Further complicating the picture, current arguments against autonomous weapons are primarily based on the multi-decade US use of drones to hunt and kill individuals. These missions developed over days or even weeks, with operators closely controlling each drone. Analysts had time to evaluate each mission and advise senior officers, who then made the final decision, usually after consulting with lawyers. Under those conditions, it was both reasonable and ethical to refuse to use LAWS, and it will remain right to refuse to employ them wherever those conditions still hold.

However, the Ukrainian conflict reveals a rapid, major change in the character of war. Both sides are using hundreds of drones at a time. Their routine use has been an essential element in Ukraine’s ability to hold its own against larger Russian forces. Further, the use of drones is increasing at an almost exponential rate: Ukraine ordered two hundred thousand drones for delivery during 2023 and has trained ten thousand drone pilots to date.

In response, both sides are very active in counter-drone electronic warfare (EW). To defeat Russian EW efforts, Ukraine is combining tactics and technology. Tactically, Ukrainian operators are flying lower and seeking gaps in Russian EW coverage. Technologically, Ukraine is pursuing greater autonomy in its existing drones.

The logical conclusion of this counter-measure process is full autonomy. Autonomous drones will not have the vulnerable radio link to pilots, nor will they need GPS guidance. Autonomy will also vastly increase the number of drones that can be employed at one time. Both the Harpy and the Shahed drones have demonstrated that it is possible to rapidly launch large numbers from trucks or containers. The era of the autonomous drone swarm has begun.

Autonomy is an ethical imperative

In a February 2023 interview with WIRED, Eric Schmidt, a former chief executive officer of Google, put autonomy in its historical context:

“Every once in a while, a new weapon, a new technology comes along that changes things. Einstein wrote a letter to Roosevelt in the 1930s saying that there is this new technology—nuclear weapons—that could change war, which it clearly did. I would argue that [AI-powered] autonomy and decentralized, distributed systems are that powerful.”

When advanced manufacturing tools, including robots, computer control, and 3D printing, are applied to building drones on a large scale, thousands of drones could soon be in the air at any given time. From a purely practical point of view, autonomy is the only way to employ that many simultaneously.

Current discussions concerning the ethics of autonomy are focused on the past. The focus must instead be on the future, where autonomous weapons will be present in the thousands. This requires a fundamental rethinking of the ethics of drones. No longer will militaries have the luxury of debating the impact on a single target. Instead, the question is how best to protect thousands of people while achieving the objectives that brought the country to war. It is difficult to imagine a more unethical decision than choosing to go to war and sacrifice citizens without providing them with the weapons to win.

Calls for international treaties to prohibit or heavily restrict LAWS ignore the repeated failure of such measures in the past. From Pope Innocent II’s ban on crossbows to the post-World War I Washington Naval Treaty, these efforts only briefly slowed the development and employment of effective, affordable weapons. The perceived needs of national security have consistently overcome moral and legal restrictions on weapons that provide decisive advantages.

In Ukraine, the combination of pervasive surveillance, artificial intelligence–enhanced command and control, and massed precision fires is creating an incredibly lethal battlespace. Both sides are employing these systems and continually striving to increase their capabilities. Ukrainians faced with the reality that Russians are raping, murdering, and kidnapping their citizens clearly understand that the use of autonomous weapons is both necessary and moral.

This is the fundamental reason that employing autonomy is the ethical choice. Failing to do so in a major conventional conflict will result in many deaths, both military and civilian, and potentially the loss of the conflict.
