6 October 2023

AI and the Future of Drone Warfare: Risks and Recommendations

Brianna Rosen

The next phase of drone warfare is here. On Sept. 6, 2023, U.S. Deputy Defense Secretary Kathleen Hicks touted the acceleration of the Pentagon’s Replicator initiative – an effort to dramatically scale up the United States’ use of artificial intelligence on the battlefield. She rightly called it a “game-changing shift” in national security. Under Replicator, the U.S. military aims to field thousands of autonomous weapons systems across multiple domains within the next 18 to 24 months.

Yet Replicator is only the tip of the iceberg. Rapid advances in AI are giving rise to a new generation of lethal autonomous weapons systems (LAWS) that can identify, track, and attack targets without human intervention. Drones with autonomous capabilities and AI-enabled munitions are already being used on the battlefield, notably in the Russia-Ukraine War. From “killer algorithms” that select targets based on certain characteristics to autonomous drone swarms, the future of warfare looks increasingly apocalyptic.

Amidst the specter of “warbot” armies, it is easy to miss the quieter AI revolution that is already underway. Human-centered or “responsible AI,” as the Pentagon refers to it, is designed to keep a human “in the loop” in decision-making to ensure that AI is used in “lawful, ethical, responsible, and accountable ways.” But even with human oversight and strict compliance with the law, there is a growing risk that AI will be used in ways that fundamentally violate international humanitarian law (IHL) and international human rights law (IHRL).

The most immediate threat is not the “AI apocalypse” – where machines take over the world – but humans leveraging AI to establish new patterns of violence and domination over each other.

Drone Wars 2.0

Dubbed the “first full-scale drone war,” the Russia-Ukraine War marks an inflection point where states are testing and fielding LAWS on an increasingly networked battlefield. While autonomous drones reportedly have been used in Libya and Gaza, the war in Ukraine represents an acceleration of the integration of this technology into conventional military operations, with unpredictable and potentially catastrophic results. Those risks are even more pronounced when belligerents field drones without robust safeguards, whether because they lack the technological capacity to implement them or the will to do so.

Among the lessons of the war in Ukraine is that relatively inexpensive drones can deny adversaries air superiority and provide a decisive military advantage in peer and near-peer conflicts, as well as against non-state actors.

The United States and other countries are taking these lessons seriously. Mass and speed will apparently dominate the future drone wars, as the United States – through Replicator and other initiatives – seeks to develop the capacity to deploy large numbers of cheap, reusable drones that can be put at risk to keep pace with adversaries such as China. Increasingly, discrete drone strikes against non-state actors will be displaced by AI-enabled drone swarms that communicate with each other and work together (and with humans) to destroy critical infrastructure and other targets.

This emerging technology poses even greater risks to civilians than the drone wars of the past. Unlike conventional drone warfare, which is vetted and controlled by human operators, the new drone wars will be more automated. Human-machine collaboration will pervade nearly every stage of the targeting cycle – from the selection and identification of targets to surveillance and attack. The largest shift will be the least visible, as proprietary algorithms sift through reams of intelligence data and drone feeds to compile target lists for human approval.

While humans may continue to sign off on the use of lethal force, AI will play a more pervasive role in shaping underlying choices about who lives and dies and what stands or is destroyed.

As AI reduces human involvement in killing, drone warfare will most likely become less explainable and transparent than it is now. This is true not only for the public – which is already kept in the dark – but also for government officials charged with implementing and overseeing the drone program.

The problem of explainability, where humans cannot fully understand or explain AI-generated outcomes, is a broader issue with AI that is not limited to drone strikes. Computational systems that rely on AI tend to be opaque because they involve proprietary information, evolve as they learn from new data, and are too complex to be understood by any single actor.

But the problem of explainability is particularly acute when it comes to drone warfare.

In the sprawling U.S. interagency process, military and intelligence agencies rely on different information streams, technology, and bureaucratic procedures to support the drone program. These agencies are developing their own AI tools, which are highly classified and based on algorithms and assumptions that are not shared with key policymakers or the public. Add to this mix AI systems that produce outcomes no one can fully understand, and it will be impossible for government officials to explain why an individual, for example, was mistakenly targeted and killed.

The problem of explainability will foster a lack of accountability in the coming drone wars – something that is already in short supply. When civilians are mistakenly killed in AI-enabled drone strikes, Pentagon officials will also be able to blame machines for these “tragic mistakes.” This is especially the case for drone swarms, where drones from different manufacturers may fail to communicate properly, despite the Pentagon spending millions of dollars on the technology. As drones begin to talk to each other as well as to humans, the accountability and legitimacy gap between the human decision to kill and the machines performing the lethal act is likely to grow.

Minding the Gap

These challenges are well known, and the Pentagon has long touted a policy of “responsible AI” that aims to address them through a labyrinth of laws and regulations. This sounds good on paper, but the conventional drone program, too, was promoted as being “legal, ethical, and wise” before serious concerns about civilian harm surfaced. If the past drone wars are any indication, truly responsible AI drone warfare similarly may prove elusive, particularly where gaps in protection arise in the various legal, ethical, and policy frameworks that govern AI use.

For this reason, several states and the International Committee of the Red Cross (ICRC) have proposed banning weapons systems that lack meaningful human control and are too complex to understand or explain. In the first United Nations Security Council meeting on AI in July, U.N. Secretary-General António Guterres proposed that states adopt within three years a “legally-binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight, which cannot be used in compliance with international humanitarian law.”

But even if states agree to such a ban in principle, significant questions remain. What legal limits must be placed on autonomous weapons systems to ensure compliance with IHL? What type and degree of human control is needed to ensure that future drone strikes meet the IHL principles of necessity, proportionality, and discrimination, as well as precaution? Is compliance with IHL sufficient or is a new treaty required? While many states have called for such a treaty, the United States, Russia, and India maintain that LAWS should be regulated under existing IHL.

As the new drone wars become more ubiquitous, the exceptional rules that are said to apply in war – notably the lower levels of protections afforded by IHL – risk becoming the default regime. In the long term, the practical effects of this are the continued erosion of the prohibition on the use of force and the adoption of increasingly permissive interpretations of international law. The full costs and consequences of these developments are still emerging, but the precedents set now are likely to undermine individual rights in pernicious and irreversible ways.

To counter this trend, states at a minimum should reaffirm the application of IHRL within and outside of armed conflict. The individualization and automation of war have prompted a turn toward principles enshrined in IHRL, such as a stricter interpretation of the necessity criterion under certain conditions and the requirement that force be used only if bystanders are unlikely to be harmed. Yet while IHRL offers additional protections beyond IHL, the precise interaction between IHL and IHRL is disputed and varies according to state practice. Fundamentally, these legal regimes were not designed to regulate non-traditional conflicts and non-traditional means of using lethal force, and gaps in legal protections are likely to grow wider in the coming drone wars.

These gaps have prompted the ICRC to emphasize “the need to clarify and strengthen legal protections in line with ethical considerations for humanity.” In cases not covered by existing treaties, Article 1(2) of Additional Protocol I and the preamble of Additional Protocol II to the Geneva Conventions, commonly referred to as the “Martens Clause,” provide that individuals should be protected by customary IHL, as well as the “principles of humanity and the dictates of public conscience.”

But ethical considerations may diverge substantially from the law. The relationship between morality and law is a longstanding scholarly debate beyond the scope of this article. Briefly, the law serves a different purpose from morality insofar as it must consider the effect that conventions will have on behavior, degrees of epistemic uncertainty in the real world, and anarchy in the international system. Under these circumstances, the morally optimal laws may be, in the words of Henry Shue, just those that “can produce relatively few mistakes in moral judgment – relatively few wrongs – by angry and frightened mortals wielding awesomely powerful weapons.”

The unpredictable and complex nature of AI, however, complicates efforts to discern, ex ante, the right course of action. Even when humans follow all the legal and policy guidelines, the gap between human decision-making and machine action implies that outcomes may not be moral. Far from it.

What is moral may not be legal or wise – and vice versa.

Policy guidance, meanwhile, is not a substitute for the protections that the law affords. The newly crafted U.S. Presidential Policy Memorandum (PPM), for example, is supposed to provide additional protections above what the laws of war require for direct action, that is, drone strikes and special operations raids. But the policy guidance is not legally binding, can secretly be suspended at any time, contains numerous exemptions for collective and unit self-defense, and applies only to the fraction of U.S. drone strikes conducted outside of “areas of active hostilities” – a designation that excludes operations in Iraq and Syria.

Moreover, the policy guidance was written with conventional drone strikes in mind. As the world stands at the precipice of a new phase in AI-driven drone warfare, it is time to rethink the rules.

Walking Back from the Precipice

There have been a number of proposals for regulating lethal autonomous weapons systems, including AI-enabled drones. But if the past drone wars are any indication, these regulations are still likely to fall short. Human oversight and compliance with existing laws and standards are essential, but not sufficient.

To more fully protect civilians in the coming drone wars, U.S. policymakers should take the following steps as a matter of urgency:
  1. Develop a U.S. government-wide policy on the use of AI in drone warfare. While the Department of Defense has published numerous guidelines on AI and autonomous weapons systems, these directives do not necessarily apply to other agencies, such as those in the U.S. Intelligence Community. This gap is deeply concerning given the crucial role that these other agencies may play in identifying, vetting, and attacking targets on a routine basis.
  2. Follow the “two-person rule.” During the Cold War, the two-person rule required two or more authorized individuals to be present when nuclear weapons or material were being repaired, moved, or used. This rule was designed to prevent nuclear accidents or misuse that could pose significant risks to human life. AI-enabled weapons carry similar potential for catastrophic results, and the same rule should apply to all drone operations.
  3. Reduce the accountability gap. Increasing autonomy in drone warfare will make strikes more unpredictable, resulting in mistakes that cannot be attributed to any particular individual. To reduce this risk, the timeframe between when humans approve a target for lethal action and when drones take that action should be minimized to mere seconds or minutes, not days or months (see the illustrative sketch after this list). Under no circumstances should drones be allowed to independently target individuals on a pre-approved “kill list,” even one vetted and approved by humans.
  4. Conduct and publish routine AI health audits. To mitigate the problem of explainability, humans must check AI and AI must check itself. “Checking AI” can be a powerful tool in ethical audits, helping humans test AI systems and identify flaws or underlying biases in algorithms. AI health checks must be performed at regular intervals, and the results should be briefed to members of Congress (e.g., the Gang of Eight), with a redacted version made available to the public.
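
To make the second and third recommendations concrete, the sketch below shows one way an authorization gate could combine the two-person rule with a short approval-to-action window. It is a minimal, purely illustrative Python example: the names (Approval, engagement_authorized) and the five-minute staleness threshold are assumptions chosen for illustration, not features of any real targeting system or Pentagon interface.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List

# Hypothetical names and thresholds throughout; this is not a real targeting
# interface, only an illustration of a two-person rule plus a freshness window.

@dataclass
class Approval:
    approver_id: str        # distinct authorized human
    target_id: str
    approved_at: datetime   # timezone-aware timestamp of the approval

MAX_APPROVAL_AGE = timedelta(minutes=5)  # approvals go stale in minutes, not days

def engagement_authorized(target_id: str, approvals: List[Approval], now: datetime) -> bool:
    """True only if two different humans approved this target within the window."""
    fresh = [
        a for a in approvals
        if a.target_id == target_id
        and timedelta(0) <= (now - a.approved_at) <= MAX_APPROVAL_AGE
    ]
    # Two-person rule: count distinct approvers, not total approvals.
    return len({a.approver_id for a in fresh}) >= 2

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    approvals = [
        Approval("operator_a", "T-17", now - timedelta(minutes=2)),
        Approval("operator_b", "T-17", now - timedelta(minutes=1)),
    ]
    print(engagement_authorized("T-17", approvals, now))       # True: two people, both recent
    print(engagement_authorized("T-17", approvals[:1], now))   # False: only one approver
```

The point of the freshness check is that an approval given days earlier should not authorize action now; the point of counting distinct approvers is that no single operator, and no machine acting alone, can authorize lethal force.
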
Pandora’s box has been opened, but policymakers can still place necessary guardrails on the AI revolution in drone warfare. In the words of Martin Luther King Jr., the United States is “confronted with the fierce urgency of now” and there is “such a thing as being too late.”
