11 March 2023

‘Not the right time’: US to push guidelines, not bans, at UN meeting on autonomous weapons

SYDNEY J. FREEDBERG JR.

WASHINGTON — On Monday, government experts from around the globe will once again gather in Geneva to debate the ethics and legality of autonomous weapons.

The crucial question for arms controllers: What’s the greatest danger from militarized AI and other autonomous systems? Many peace activists and neutral nations focus on out-of-control killing machines, like the Terminators of pop culture or, more plausibly, the swarming assassination drones of the mockumentary Slaughterbots. But others, including US officials and experts, often focus on something subtler, like an intelligence analysis algorithm that mistakes civilians for terrorists, a hospital for a military base, or a scientific rocket for a nuclear first strike — even if it’s still a human pulling the trigger.

A growing movement hopes the United Nations Group of Government Experts meeting in Geneva will help lay the groundwork for a binding legal ban on at least some kinds of what officials call “lethal autonomous weapons systems” and what activists call “killer robots” — however they end up being defined, a question that’s bedeviled the Geneva negotiators for nine years. Just last week, at a conference in Costa Rica, 33 nations of the Americas, from giant Brazil to tiny Trinidad, declared “new prohibitions and regulations… are urgently needed,” in the form of an “international legally binding instrument” like those already banning land mines and cluster bombs.

But the US is expected to push back with a very different vision, calling not for new binding treaties but for voluntary self-restraint — and this year, unlike at past Geneva meetings, they’ll have a detailed outline of the kinds of measures they mean, laid out by Under Secretary of State Bonnie Jenkins at a February conference in The Hague.

This week, ahead of the GGE convening, a State Department spokesman described the US approach in a detailed statement provided exclusively to Breaking Defense.

In essence, the US fears a formal treaty would hobble legitimate, even life-saving applications of a rapidly developing military technology for the countries that actually comply, while ignoring the insidious dangers of algorithms gone wrong in applications beyond autonomous weapons. Instead, the US declaration at The Hague laid out non-binding “best practices [and] principles,” focused on artificial intelligence and based on Pentagon policies, to guide all military uses of AI.

“The United States continues to believe that it is not the right time to begin negotiating a legally binding instrument on LAWS [Lethal Autonomous Weapons Systems],” the State Department spokesman said. “In the GGE [Group of Government Experts], States appear to continue to have basic disagreements about [which] weapons systems we’re talking about and basic disagreements about what the problem is.”

What Is An Autonomous Weapon?

A central question is the most basic one: What counts as an autonomous weapon?

If your definition is too broad, even an old-school landmine counts, since it detonates automatically when someone steps on it, with no human decision involved. If your definition is too narrow, nothing counts but unrestrained robot berserkers: China, notably, has proposed a uniquely restrictive definition that boils down, as the nonpartisan Congressional Research Service phrased it, to “indiscriminate, lethal systems that do not have any human oversight and cannot be terminated” (i.e. shut off). In between are all sorts of real-world weapons, like US Navy Aegis and Army Patriot air defense systems, which already have automatic modes in which the computer autonomously identifies incoming threats, picks targets, and fires interceptors, potentially killing human pilots (even friendly ones, as in a 2003 incident). But there is no agreed-on definition from which negotiators can work.

“[Trying] to negotiate a legally binding instrument with such fundamental divergences in our understandings and purposes… is likely to fail,” the State Department spokesperson continued. So if the GGE in Geneva wants to make real progress, they argued, it should focus on “clarifying” how autonomous weapons are already restricted by existing international law, which has long banned indiscriminate killing and deliberate strikes on civilians, and which requires clear accountability on the part of human commanders. A 2022 proposal [PDF] from the US and five allies — Australia, Britain, Canada, Japan, and South Korea — can serve as “a foundation for that work.”

Only once that foundation is laid, the US argues, can the GGE “responsibly develop” a formal treaty. So, the spokesman said, “states who seek a legally-binding instrument should support the [US and allied] Joint Proposal even if they only do so as an intermediate step.”


Then-Lt. Col. Matt Strohmeyer briefs reporters on an Advanced Battle Management System (ABMS) experiment in 2020. (U.S. Air Force photo by Senior Airman Daniel Hernandez)

Beyond The Terminator

The US doesn’t only fear a treaty would be premature and overly restrictive; it’s also worried such an approach would be too narrow. That’s because a binding global treaty on “lethal autonomous weapons systems” would have no effect on many military uses of AI, perhaps even most. The US Defense Department, for instance, is already exploring artificial intelligence for a vast array of non-combat functions: predicting maintenance needs, driving vehicles off-road, and organizing humanitarian relief.

Even the Pentagon’s most ambitious AI-driven effort, called Joint All-Domain Command and Control, is not a “weapons system” by most definitions. The goal is to build a meta-network that uses AI and automation to synthesize data from sensors across land, sea, air, space, and cyberspace, identify key targets, and select the best weapon to attack them from any of the armed services — but with humans, not computers, pulling the triggers.

“The issue of military use of AI is broader than just the application of AI and autonomy in weapon systems,” the State spokesperson told Breaking Defense. The Political Declaration issued by the US at The Hague “is aimed at promoting responsible behavior in the development, deployment, and use of AI and autonomy in a military context more broadly.” It is meant to augment the ongoing negotiations in Geneva on the narrower issue of autonomous weapons and the joint six-nation proposal the US made to the Geneva GGE in 2022.

“The AI Declaration is a complement to, not a substitute for, the joint proposal,” a Defense Department spokesperson told Breaking Defense. “The AI Declaration focuses on military applications of AI and autonomy broadly, not autonomous weapon systems. We’d hope that countries would endorse both of these proposals.”

US officials are reluctant to detail how non-weapons applications of AI might go wrong. But in an era when the Justice Department says a self-driving car ran down a pedestrian, and ChatGPT “hallucinates” facts and even source documents that don’t actually exist, it’s not difficult to imagine.

“A lot of the discussion around bans focuses exclusively on autonomous weapons… but those systems aren’t being developed yet,” said Lauren Kahn, a researcher at the Council on Foreign Relations. “There’s a real need for building blocks, confidence-building measures… baby steps, almost… because it deals with a lot of applications that are happening today.”

What kind of building blocks is the US proposing? The political declaration issued at The Hague essentially distills a decade of Pentagon policy-making into a generic framework any nation can apply. “Several existing DoD policies influenced the Declaration,” a Defense Department spokesperson told Breaking Defense, “including DoD’s AI Ethical Principles, the Responsible AI Strategy and Implementation Pathway, the AI Strategy, DoD Directive 3000.09: Autonomy in Weapon Systems, and the Nuclear Posture Review.”

Drawing on those policies, the declaration calls for such measures as strict “human control and involvement” over all aspects of nuclear weapons — no delegating launch decisions to computers, the nightmare of Terminator’s fictional Skynet — and rigorous human oversight, testing, and verification at every stage of military AI, from initial R&D through deployment to subsequent updates, including any self-modification as AI algorithms rewrite themselves. AI systems should have their missions clearly defined from the start and be subject to shutdown as a last resort.

But that’s not enough for some. The group leading the charge for a binding ban, Stop Killer Robots, denounced the American declaration at The Hague as “feeble” and “a significant step backwards,” while extolling the Costa Rica conference’s call for a binding treaty as a way to bypass years of “gridlock” in Geneva. (Stop Killer Robots did not respond to repeated requests for comment.) An allied group, the Arms Control Association, called the US declaration “constructive but inadequate.”

“The declaration can be useful in highlighting the numerous risks incumbent upon the unregulated military use of AI and acknowledging the need for restraints of some sort,” Michael Klare, secretary of the ACA board, told Breaking Defense. “Any system of control, voluntary or obligatory, will have to address all the issues encompassed in the declaration.”

“But the principles, however comprehensive and commendable, do not constitute formal rules or regulations, and so are not enforceable,” Klare warned. “This means that any state (including the United States) can endorse the declaration and claim to be abiding by its principles, but then proceed to violate them with impunity.”

So the sticking point comes down to trust. Will the US and its allies live up to the voluntary principles they have proposed? Would Russia and China, for their part, abide by any kind of legal ban, no matter how many neutral countries signed it? Those fundamental questions are about the reliability, not of machines, but of human beings.
