20 May 2019

The United Nations and the future of warfare

By Ariel Conn

Lethal autonomous weapons.

When I mention this phrase to most people who are unfamiliar with this type of weapon, their immediate response is almost always repulsion. This is usually followed by the question: “Like drones?”

No. Drones are controlled remotely by a person, and they cannot launch an attack against a human target on their own–the person in control has to make that decision. Lethal autonomous weapons, as the name implies, are weapons that could kill people autonomously. These are weapons that could select and attack a target, without someone overseeing the decision-making process—ready to abort an attack if it looked like something had gone wrong.

These are also weapons that technically don’t exist. Yet.


Not surprisingly, many countries within the United Nations are concerned about what warfare might become if lethal autonomous weapons are developed. Equally unsurprising is that most people around the world don’t even realize countries are having this debate, much less that the outcome of these United Nations discussions could affect the future of humanity.

What’s happening at the United Nations? The discussion around lethal autonomous weapons picked up steam in 2013, when Christof Heyns, a law professor who advises the United Nations, presented a report to the UN Human Rights Council on the challenges and threats posed by these weapons. The UN Convention on Certain Conventional Weapons (known more simply as the CCW) began discussions about what these weapons are and whether they should be developed. The CCW is also the group that regulates or bans conventional weapons considered too inhumane for warfare, including anti-personnel mines and blinding lasers.

Heyns defined lethal autonomous weapons systems as “weapon systems that, once activated, can select and engage targets without further human intervention.” For six years, delegates to the CCW have met twice a year for week-long debates to identify exactly what is included in that definition.

The debate is still ongoing.

Alongside this critical overarching debate over how to define lethal autonomous weapons, delegates are grappling with other important questions: Can these weapons be deployed with meaningful human control? Can they be used ethically and humanely? Do they need to be banned, or are they already regulated or illegal under international humanitarian law (also known as the law of war)?

Essentially, the CCW is considering whether it’s ethical and legal for a machine or an algorithm to decide to kill a person. And if such a weapon could be ethical and legal, how much meaningful human control and oversight does it need?

These are difficult questions to answer, and in that regard, it’s understandable that the CCW debates have continued into their sixth year. But two issues stand out as especially problematic. The first is that the countries with the most advanced AI and autonomous weapons systems are the countries that seem the most flummoxed by definitions. These are countries, like the United States, that insist the questions haven’t been answered well enough to move on to negotiating a formal treaty. The United States, Russia, the United Kingdom, Israel, South Korea, and Australia are the countries most strongly opposing a ban on lethal autonomous weapons. In fact, Russia successfully advocated for decreasing the amount of time countries could even discuss the issue within the formal CCW setting, and 2019 is the first year in which countries will meet for only seven days, rather than ten. Moreover, the CCW is consensus based: the group can’t move down any path unless every participating country is on board. So it’s possible for this handful of countries to prevent negotiations toward a ban indefinitely.

The second and perhaps even more important issue is the incredible pace of technological advancement. The technologies necessary to design a weapon that can select and engage a target already exist, albeit at varying levels of capability. Radar sensors, lidar sensors, thermal detection, facial recognition, GPS/navigation capabilities, etc.–these all exist. Many of them are already used in weapons systems today, while some of them need improvement to be reliably deployed in battle–which may or may not be technically possible. But, battle-ready or not, the early stages of lethal autonomous weapons are here.

AI experts and tech leaders worry that once this genie is fully out of the bottle, there will be no stopping it. They insist that the weapons must be preemptively banned. If CCW discussions continue at their current pace, countries will miss this window for a preemptive ban.

Will countries ban lethal autonomous weapons in time? Credit: Amin via Wikimedia Commons. CC BY-SA 4.0. Cropped.

The current state of lethal autonomous weapons. Today, many weapons systems have autonomous capabilities. For example, precision-guided munitions can autonomously stay on target using a variety of technologies and sensors, but a person first decides what that target should be. Loitering munitions, like the IAI Harpy 2, are even more autonomous: they can scout an area looking for unmanned targets, such as military radar systems, and can attack those targets. Then there are sentry guns, like the SGR-A1, which have been deployed along the demilitarized zone between North and South Korea. These weapons detect people using heat sensors. While SGR-A1s have an unsupervised mode that allows them to identify and fire at a target, they are currently used only in their supervised mode, notifying a human commander if they detect that a person has gotten too close. From there, the human overseeing the weapon decides what to do.

To be clear, none of these weapons have been used to autonomously target people. When these weapons use lethal force, human oversight is mandatory, and no one is advocating that these systems be included in a lethal autonomous weapons ban. However, a handful of countries are pushing increasingly toward weapons systems that could soon cross the line from acceptable to unacceptable uses of automation and AI.

The European peace organization PAX recently released a report that highlights the activities of the seven countries with the most advanced AI and autonomous weapons programs–the United States, China, Russia, the United Kingdom, Israel, South Korea, and France. The report details how each of these countries is ramping up the autonomous functions of its weapons.

For example, in the United States, one of the problems hindering the use of autonomous weapons (both lethal and non-lethal) is that commanders don’t trust the systems. This is understandable, given that autonomous systems are inherently more unpredictable. To address this, the US Defense Advanced Research Projects Agency (DARPA) is working on the Explainable AI program, which the PAX report describes as aiming to “enable human users ‘to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.’”

Additionally, all countries listed in the report are developing increasingly autonomous unmanned aerial systems (essentially drones), and Russia is investing in autonomous underwater systems. Autonomous tanks are also increasingly popular with various countries.

Perhaps one of the most disconcerting military technological developments is that of swarm technology. Swarms of robots are exactly what they sound like: large numbers of robots working together to achieve some goal. Militaries are especially interested in this technology because it can add “mass” to the battlefield without the need for more people. Swarms of tanks, planes, submarines, or even small drones, as anticipated in the fictional Slaughterbots film, can attack multiple targets and, because of their numbers, are difficult for adversaries to stop.

DARPA, for example, has launched the OFFSET program (OFFensive Swarm-Enabled Tactics) to address difficulties that troops have in maneuvering safely through cities. The DARPA website explains:

“Unmanned air vehicles (UAVs) and unmanned ground vehicles (UGVs) have long proven beneficial in such difficult urban environs, performing missions such as aerial reconnaissance and building clearance. But their value to ground troops could be vastly amplified if troops could control scores or even hundreds—’swarms’—of these robotic units at the same time. The prime bottleneck to achieving this goal is not the robotic vehicles themselves, which are becoming increasingly capable and affordable. Rather, US military forces currently lack the technologies to manage and interact with such swarms and the means to quickly develop and share swarm tactics suitable for application in diverse, evolving urban situations.”

DARPA describes this program as a way to assist military personnel rather than to launch attacks, but the step from reconnaissance and identifying targets to engaging those targets is not a big one. More importantly, taking that step is more about deciding how much human control should be involved in kill decisions than about overcoming technical limitations.

For now, all countries at the CCW have publicly stated that they intend to always ensure meaningful human control and oversight of their lethal weapons systems. The devil, however, is in the definitions.

What is meaningful human control? In 2016, AI experts Heather Roff and Richard Moyes presented a paper to the CCW in which they outlined what meaningful human control entails. Among other things, they found that for a human to maintain meaningful control over a weapons system, the system must be “predictable, reliable, and transparent.” It must provide “accurate information for the user on the outcome sought, operation and function of [the] technology, and the context of the use.” It must allow for “timely human action and a potential for timely intervention.” And it must provide “accountability to a certain standard.”

By the March 2019 CCW meeting, the vast majority of countries that gave statements agreed that some level of human control must always exist over lethal autonomous weapons systems, and that all weapons must act in such a way that a human would always be responsible and accountable for the decision to take a life. However, it’s unclear whether humans could ever maintain meaningful control over swarms of weapons that move too quickly and analyze too much data for a person to keep up with.

The recent tragic crashes of Boeing 737 Max jets provide examples of why meaningful human control is so difficult to ensure over automated systems. According to a New York Times report on the accidents, pilots likely had less than 40 seconds to correct the automated system’s errors. Humans simply couldn’t react fast enough.

Then there’s the human bias toward trusting a machine in emergency settings. In one study at Georgia Tech, students were taking a test alone in a room when a fire alarm went off. The students had the choice of leaving through a clearly marked exit right next to them or following a robot that was guiding them away from that exit. Almost every student followed the robot away from the safe exit. In fact, even when the students knew from previous experience that the robot couldn’t be trusted, they still followed it away from the exit.

All of this raises the question: Can humans really have meaningful control if they’re overseeing the swarms of weapons militaries want to develop?

Even with just one weapons system, if a person can’t keep up with the speed of the machine or the amount of data the weapon is processing, and if the person has an inherent bias toward trusting a machine’s suggestions, can that person ever exercise enough meaningful control to be held responsible and accountable when a machine chooses whom to kill?

The ethics and legality of lethal autonomous weapons. A few phrases are commonly thrown around during the CCW discussions regarding the legal and ethical ramifications of lethal autonomous weapons. These include: international humanitarian law, international human rights law, the Martens Clause, Article 36, the Geneva Convention, the accountability gap, human dignity, verifiability, distinction, proportionality, and precaution.

These words and phrases, along with many others that aren’t listed here, are all connected to a single question: Can lethal autonomous weapons be used in a way that complies with current international law and maintains human dignity in war?

Perhaps one reason to advocate for new laws surrounding lethal autonomous weapons is precisely that there’s so much disagreement over the answer to that question.

The handful of states that currently oppose a ban on these weapons systems suggest that the weapons would be more accurate than people and would thus decrease civilian casualties; because of this, they argue, the weapons would actually be more ethical. These countries also say that, because of the way international law is set up, it would already be illegal to deploy such weapons without first proving that they are reliable and predictable.

The countries and nonprofit organizations that argue in support of a ban or declaration against lethal autonomous weapons suggest that the very act of a machine deciding to take a human life is unethical and inhumane. They argue that current law is not clear on who would be held responsible if an autonomous machine killed the wrong person or people, and that current law is designed around weapons systems that are much more predictable. It’s entirely possible that a lethal autonomous weapons system–which is based on machine learning and evolves as its experiences grow–could behave predictably in every test, only to fail catastrophically in battle when it encounters an enemy autonomous system that the military didn’t anticipate. Among other issues with this scenario, countries worry that current international law is not equipped to determine who would be held accountable in such a situation.

Limitations of the CCW. The two countries using the weapons furthest along the autonomy spectrum–South Korea and Israel–have deployed them along two of the most contentious borders in the world: the demilitarized zone between North and South Korea, and the border between Israel and the Palestinian-controlled Gaza Strip. Some experts worry these will not be the only borders patrolled by autonomous weapons in the future.

As the global temperature increases, regions around the world are expected to become increasingly uninhabitable, driving a wave of migration across the globe. Refugees will have no choice but to seek asylum in countries with more stable, habitable climates, yet given recent clashes at borders, many are concerned that countries with more stable climates may not welcome them. For countries that want to guard their borders, cheap, ubiquitous autonomous weapons systems could be much easier to deploy than massive numbers of ground troops.

The borders between Israel and the Gaza Strip and between North and South Korea are two of the most contentious in the world. Gaza photo by שועל. CC BY-SA 3.0. Cropped.

It’s not hard to imagine a future in which thousands, tens of thousands, and maybe even millions of autonomous weapons are deployed to guard borders around the world, taking aim at innocent civilians who try to cross the borders.

This highlights two limitations of the CCW: it was created to negotiate weapons use during armed conflict, and it does not cover weapons of mass destruction. Because the weapons in the scenario above would be used for border patrol against noncombatants outside of armed conflict, that situation wouldn’t be covered by a ban, even if one were passed.

Fortunately, if the CCW did agree that algorithms are not allowed to make the decision to harm or kill a human–or to unduly influence that decision–this would set a powerful norm. The stigma alone from such an agreement would make it much harder for companies to build these weapons and for countries to use them against anyone, for any purpose.

Implications for national security. The issues and debates surrounding lethal autonomous weapons sometimes seem infinite, but there is one final point that’s important to consider: though many countries working toward these weapons believe the weapons will make their militaries stronger, in fact, lethal autonomous weapons could pose a very serious national security threat.

Earlier this year, the Defence and Security Accelerator in the United Kingdom awarded more than $3 million to Blue Bear Systems Research “to develop drone swarm technology.” The managing director at Blue Bear Systems said, “The ability to deploy a swarm of low cost autonomous systems delivers a new paradigm for battlefield operations.”

However, large swarms would be weapons of mass destruction in the sense that they would enable very few people to kill very many.

This sits in stark contrast to the words of Dr. Matthew Meselson, who helped lead the effort to get the United States to ban biological weapons. On a tour of Fort Detrick in the early 1960s, Meselson was informed that the United States was making anthrax because biological weapons are a lot cheaper than nuclear weapons. Meselson later explained:

“I don’t think it took me very long to realize that ‘hey, we don’t want devastating weapons of mass destruction to be really cheap and save us money. We would like them to be so expensive that no one can afford them but us, or maybe no one at all.’ It’s ridiculous to want a weapon of mass destruction that’s ultra-cheap.”

It was this argument that helped lead to the biological weapons ban: the world agreed that, among other things, cheap weapons of mass destruction were not a risk worth taking.

Today, there is a very real threat that if the CCW doesn’t act quickly and these weapons aren’t banned soon, lethal autonomous weapons could become ultra-cheap, easily accessible weapons of mass destruction. That is a fate humanity can live without.
