21 October 2020

A Partial Ban on Autonomous Weapons Would Make Everyone Safer

BY ZACHARY KALLENBORN

The United Nations Convention on Certain Conventional Weapons Group of Governmental Experts met late last month to discuss lethal autonomous weapons. The semiannual meetings are the first serious effort by the world’s governments to control autonomous weapons. And the weapons pose serious risks to global security: Even the best artificial intelligence isn’t well suited to distinguishing farmers from soldiers, and it may be trained only on laboratory data that is a poor substitute for real battlefields.

As U.N. Secretary-General António Guterres wrote on Twitter, “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”

Organizations such as the Campaign to Stop Killer Robots, the International Committee for Robot Arms Control, and Human Rights Watch advocate for a comprehensive ban on all autonomous weapons, but such a ban is unlikely to succeed. The military potential of autonomous weapons is too great. Autonomous weapons guard ships against small boat attacks, search for terrorists, stand sentry, and destroy adversary air defenses.

Just a few weeks ago, an AI system defeated a living, breathing F-16 pilot five to zero in a simulated dogfight. Such an AI could conceivably command a future aerial drone. No doubt the technology will grow and mature. No serious military power would give up such potential—especially when concerns are theoretical and adversaries may not follow suit. Russia didn’t even show up to the experts’ meeting.

Instead of a broad ban on all autonomous weapons, the international community should identify and focus restrictions on the highest-risk weapons: drone swarms and autonomous chemical, biological, radiological, and nuclear weapons, known as CBRN weapons. A narrower focus would increase the likelihood of global agreement, while providing a normative foundation for broader restrictions.

In 2018, the tech company Intel flew 2,018 drones at once in a Guinness World Record-breaking light show in Folsom, California. Earlier this year, Russia and China flew light shows of more than 2,000 drones too. The drones carried flashy lights and were meant as modern fireworks, but similar drones could be built for war, carrying thousands of guns, bombs, and missiles.

A thousand-drone swarm has a thousand points of potential error. And because drones in a true swarm communicate with one another, errors may propagate throughout the swarm. For example, one drone may misidentify a cruise ship as an aircraft carrier, then unleash the full might of the swarm on a few thousand civilians.

The same may occur if the drone correctly identifies the cruise ship as not a target, but the word “not” is lost, due to simple accident or adversary jamming. Swarm communication also leads to emergent behavior—collective behaviors of the swarm that cannot be predicted from the behavior of any individual drone—that further reduces both the predictability and understandability of the weapon.
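The danger of scale can be made concrete with a short simulation. The Python sketch below is purely illustrative, with made-up error rates and a deliberately naive fusion rule, not a model of any real swarm architecture: it shows that when any single drone’s “target” report can commit the rest of the swarm, a thousand drones means a thousand chances for one bad report.

    import random

    random.seed(1)

    NUM_DRONES = 1000
    MISID_RATE = 0.001  # hypothetical: chance a drone's sensor misreads the ship
    DROP_RATE = 0.001   # hypothetical: chance a "not a target" flag is garbled in transit

    def drone_report(ship_is_target):
        """One drone's broadcast; either failure mode flips 'no target' to 'target'."""
        report = ship_is_target
        if random.random() < MISID_RATE:  # sensor error: cruise ship "seen" as carrier
            report = True
        if not report and random.random() < DROP_RATE:  # the word "not" is lost
            report = True
        return report

    # Ground truth: the ship is a cruise liner, not a legitimate target.
    reports = [drone_report(ship_is_target=False) for _ in range(NUM_DRONES)]

    # Naive fusion rule: a single "target" report commits the entire swarm.
    bad_reports = sum(reports)
    if bad_reports:
        print(f"{bad_reports} erroneous report(s) out of {NUM_DRONES}: swarm attacks.")
    else:
        print("No target reports: swarm holds fire.")

With these made-up rates, the chance of at least one bad report is about 86 percent per encounter; smarter fusion rules would reduce it, but each rule is more software that can itself fail or be jammed.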

As P.W. Singer, a strategist and senior fellow at New America, wrote in his book Wired for War, “a swarm takes the action on its own, which may not always be exactly where and when the commander wants it. Nothing happens in a swarm directly, but rather through the complex relationships among the parts.”

Drone swarms pose a greater threat to powerful militaries, because cheap drones can be flung one after another against expensive platforms until they fall. In 2018, a group calling itself the Free Alawites Movement claimed responsibility for launching 13 drones made largely of plywood, duct tape, and lawnmower engines that attacked Russia’s Khmeimim Air Base in Syria.

The movement claimed the successful destruction of a $300 million S-400 surface-to-air missile system. (The exact identity of the “Free Alawites Movement” is unclear. The only attacks it has claimed are the Khmeimim attacks and another drone attack on a Russian naval base in Syria on the same day. Sources have also attributed the attacks to the Islamic State, Hayat Tahrir al-Sham, and Ahrar al-Sham.)

Russian officials acknowledged the drones flew autonomously and were preprogrammed to drop bombs on the base but claimed no damage was done. (The Russian officials did not comment on whether the drones communicated with one another to make a true drone swarm.) However, in Libya, Turkish Bayraktar TB2 drones disabled at least nine Russian air defense systems. The Bayraktar drones are considerably more advanced than those used in Syria, but they illustrate the same principle: Drones pose major threats to air defenses and other expensive systems.

An adversary could fling thousands of cheap drones against a $1.8 billion USS Arleigh Burke-class guided-missile destroyer in an attempt to disable or destroy it and still have a cost advantage. Facing such a threat, great powers should choose to lead—rather than resist—the arms control charge for certain weapons. Yes, great powers would give up the potential to unleash their own massive swarms, but swarms are likely to favor weaker powers. If swarms are most effective when used en masse against big, expensive platforms, then major powers that possess such expensive equipment stand to lose the most.
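The cost-exchange arithmetic is stark. Here is a back-of-envelope sketch; the destroyer figure comes from above, while the per-drone prices are hypothetical:

    # Back-of-envelope cost exchange; per-drone prices are hypothetical.
    DESTROYER_COST = 1.8e9  # USS Arleigh Burke-class destroyer, as cited above

    for drone_cost in (1_000, 10_000, 100_000):
        breakeven = DESTROYER_COST / drone_cost
        print(f"At ${drone_cost:,} per drone, the attacker breaks even "
              f"only after expending {breakeven:,.0f} drones.")

Even at an assumed $100,000 apiece, an attacker could lose 18,000 drones before the exchange favored the defender.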

Arms control for swarms might also be easier to verify than arms control for other autonomous weapons.

A key arms control challenge for autonomous weapons is knowing if a weapon is actually autonomous. At root, autonomy is just a matter of programming the weapon to fire under given conditions, however simple or complex. A simple landmine explodes when enough weight is put upon it; an autonomous turret fires based on analyzed information collected from sensors and any design constraints. With autonomous weapons, an outside observer cannot tell whether the weapon operates under predesigned rules or is being controlled remotely. However, no human can reasonably control a swarm of thousands of drones.

The complexity is simply too much: a human controller would have to monitor hundreds of video, infrared, or other sensor feeds, while planning the swarm’s actions and deciding whom to kill. Such a massive swarm must be autonomous, may be a weapon of mass destruction in its own right, and could carry traditional weapons of mass destruction.
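A back-of-envelope workload estimate, with all figures hypothetical, shows why human control breaks down at this scale:

    # Rough workload arithmetic for a single human overseer; all figures hypothetical.
    SWARM_SIZE = 1000
    SECONDS_PER_FEED = 5  # assumed minimum glance needed to assess one sensor feed
    DECISION_WINDOW = 60  # assumed seconds available to vet a strike decision

    minutes_for_one_pass = SWARM_SIZE * SECONDS_PER_FEED / 60
    print(f"One pass over every feed takes about {minutes_for_one_pass:.0f} minutes,")
    print(f"against a {DECISION_WINDOW}-second decision window.")

Even a five-second glance per feed means more than 80 minutes to review the swarm once, against a decision window measured in seconds.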

Discussion of autonomous weapons takes place under the auspices of the Convention on Certain Conventional Weapons, which presumes the weapons fire bullets, bombs, or missiles. But an autonomous weapon could just as readily be armed with CBRN agents.

Autonomous vehicles are a great way to deliver chemical, radiological, and biological weapons. An autonomous vehicle cannot get sick with anthrax, nor choke on chlorine. Drones can more directly target enemies, while adjusting trajectories based on local wind and humidity conditions. Plus, small drones can take to the air, fly indoors, and work together to carry out attacks. Operatives from the Islamic State in Iraq and Syria were reportedly quite interested in using drones to carry out radiological and potentially chemical attacks. North Korea also has an arsenal of chemical, biological, and nuclear weapons and a thousand-drone fleet.

When robots make decisions on nuclear weapons, the fate of humanity is at stake. In 1983, at the height of the Cold War, a Soviet early warning system concluded the United States had launched five nuclear missiles at the Soviet Union. The computer expressed the highest degree of confidence in the conclusion. The likely response: immediate nuclear retaliation to level U.S. cities and kill millions of American civilians. Fortunately, Stanislav Petrov, the Soviet officer in charge of the warning system, concluded the computer was wrong. Petrov was correct. Without him, millions of people would be dead.

Autonomous CBRN weapons should be a relatively easy avenue for new restrictions. A wide range of treaties already restrict the production, export, and use of CBRN weapons, from the Geneva Protocol to the Nuclear Non-Proliferation Treaty and the Chemical Weapons Convention. At minimum, governments could collectively agree to incorporate autonomous weapons in all applicable CBRN weapons treaties.

This would signal a greater willingness to adopt restrictions on autonomous weapons without a requirement to resolve the question of autonomous weapons with conventional payloads. Of course, a ban may require giving up capabilities like a nuclear “dead hand”—in the words of proponents, “an automated strategic response system based on artificial intelligence”—but nuclear weapons experts are overwhelmingly against the idea. The risks to great powers of increased CBRN weapons proliferation and accidental nuclear war are far greater than any deterrent advantage already gained with a robust conventional and nuclear force.

Placing autonomous weapons on the global agenda in the first place is a definite success—a global treaty can never be made if no one cares enough to even talk about it—but the question is what happens next. Do government experts simply keep talking or do these meetings lead to actionable treaties?

What combination of inducements, export controls, transparency measures, sanctions, and, in extreme events, the use of force is best suited to preventing the threat? Historically, comprehensive bans have taken decades—the global community took about 70 years to go from the Geneva Protocol against chemical weapons use to states giving up the weapons—but autonomous weapons are growing and proliferating rapidly.

Countries might not be willing to ban the weapons outright, but banning the highest-risk autonomous weapons—drone swarms and autonomous weapons armed with CBRN agents—could provide a foundation for reducing autonomous weapons risks. Great powers would give up little, while improving their own security.
