19 April 2018

How can international law regulate autonomous weapons?

Ted Piccone

Some militaries are already well advanced in automating everything from personnel systems and equipment maintenance to the deployment of surveillance drones and robots. A few states have even deployed defensive systems (like Israel’s Iron Dome) that can stop incoming missiles or torpedoes faster than a human could react. These weapons came online only after extensive review of their conformity with longstanding principles of the laws of armed conflict, including international humanitarian law. Those principles include holding individuals and states accountable for actions that violate norms of civilian protection and human rights.

Newer capabilities in the pipeline, like the U.S. Defense Department’s Project Maven, seek to apply computer algorithms based on “biologically inspired neural networks” to quickly identify objects of interest to warfighters and analysts within the mass of incoming data. Applying such machine learning techniques to warfare has prompted an outcry from more than 3,000 employees of Google, which partners with the Department of Defense on the project.
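To make the technique concrete: the kind of object detection Maven describes, applied to imagery with a neural network, looks roughly like the minimal sketch below. It uses an off-the-shelf, pretrained torchvision detector and an invented file name purely for illustration; Project Maven’s actual models, data, and pipeline are not public.

    # Illustrative sketch only: generic neural-network object detection on a
    # single image frame, not Project Maven's actual (non-public) pipeline.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Off-the-shelf detector: Faster R-CNN with a ResNet-50 backbone,
    # pretrained on the COCO dataset of everyday objects.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    def detect_objects(image_path, score_threshold=0.8):
        """Return (label_id, confidence, bounding_box) for confident detections."""
        image = to_tensor(Image.open(image_path).convert("RGB"))
        with torch.no_grad():
            pred = model([image])[0]  # dict with "boxes", "labels", "scores"
        return [
            (label.item(), score.item(), box.tolist())
            for label, score, box in zip(pred["labels"], pred["scores"], pred["boxes"])
            if score.item() >= score_threshold
        ]

    # Hypothetical usage: flag detections in a video frame for an analyst to review.
    # detections = detect_objects("frame_0001.jpg")

The analyst-review step is the point the legal debate turns on: the model only surfaces candidate objects, and a person still decides what, if anything, to do with them.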

These latest trends are intensifying an international debate on the development of weapons systems that could have fully autonomous capability to target and deploy lethal force—in other words, to select and attack targets in a dynamic environment without human control. The question for many legal and ethical experts is whether and how such fully autonomous weapons systems can comply with the rules of international humanitarian law and human rights law. This was the subject of the fifth annual Justice Stephen Breyer lecture on international law, held at Brookings on April 5 in partnership with the Municipality of The Hague and the Embassy of The Netherlands.

REGULATING THE NEXT ARMS RACE

The prospect of developing fully autonomous weapons is no longer a matter of science fiction and is already fueling a new global arms race. President Putin famously told Russian students last September that “whoever becomes the leader in this sphere [of artificial intelligence] will become the ruler of the world.” China is racing ahead with an announced pledge to invest $150 billion in the next few years to ensure it becomes the world’s leading “innovation centre for AI” by 2030. The United States, still the largest incubator for AI technology, has identified defending its public-private “National Security Innovation Base (NSIB)” from intellectual property theft as a national security priority.

As private industry, academia, and government experts accelerate their efforts to maintain the United States’ competitive advantage in science and technology, further weaponization of AI is inevitable. A range of important voices, however, is calling for a more cautious approach, including an outright ban on weapons that would be too far removed from human control. These include leading scientists and technologists like Elon Musk of Tesla and Mustafa Suleyman of Google DeepMind. They are joined by a global coalition of nongovernmental organizations arguing for a binding international treaty banning the development of such weapons.

Others suggest that a more measured, incremental approach under existing rules of international law should suffice to ensure humans remain in the decisionmaking loop of any use of these weapons, from design through deployment and operation.

At the heart of this debate is the concept that these highly automated systems must have “meaningful human control” to comply with humanitarian legal requirements such as distinction, proportionality, and precautions in attack to protect civilians. Where should responsibility for errors of design and use lie along the spectrum from 1) the software engineers writing the code that tells a weapons system when and against whom to target an attack, to 2) the operators in the field who carry out such attacks, to 3) the commanders who supervise them? How can testing and verification of increasingly autonomous weapons be handled in a way that creates enough transparency, and some level of confidence, to reach international agreements that avoid worst-case scenarios of mutual destruction?

Beyond the legal questions, experts in this field are grappling with a host of operational problems that bear directly on responsibility for legal and ethical design. First, military commanders and personnel must know whether an automated weapon system is reliable and predictable in its relevant functions. Machine learning, by its nature, cannot guarantee how an advanced autonomous system will behave when it encounters a new situation, including how it will interact with other highly autonomous systems. Second, to differentiate between combatants and civilians, machines must overcome inherent biases in how visual and audio recognition operates in real time. Third, whether computers can not only collect data but also analyze and interpret them correctly remains an open question.
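One way to see what keeping a human in the decision loop could mean at the software level is a confidence-gated rule that defers any uncertain or high-stakes classification to a human operator. The sketch below is a hypothetical illustration of that idea; the class labels, threshold, and function names are invented and not drawn from any fielded system.

    # Hypothetical illustration: a confidence-gated rule that escalates anything
    # uncertain to a human operator, one minimal form of "meaningful human control".
    from dataclasses import dataclass

    @dataclass
    class TrackAssessment:
        label: str         # e.g. "combatant", "civilian", "unknown"
        confidence: float   # model confidence in the label, from 0.0 to 1.0

    def recommend_action(assessment: TrackAssessment,
                         engage_threshold: float = 0.99) -> str:
        """Recommend an action; all uncertain cases go to a human."""
        if assessment.label == "civilian":
            return "DO_NOT_ENGAGE"
        if assessment.label == "combatant" and assessment.confidence >= engage_threshold:
            # Even the most confident branch only recommends; a person authorizes.
            return "RECOMMEND_ENGAGE_PENDING_HUMAN_AUTHORIZATION"
        return "ESCALATE_TO_HUMAN_OPERATOR"

    print(recommend_action(TrackAssessment("combatant", 0.97)))
    # -> ESCALATE_TO_HUMAN_OPERATOR: below the threshold, so a person must decide.

Note what such a sketch cannot settle: the threshold, the labels, and the training data behind the confidence score are exactly where the bias and predictability problems described above re-enter.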

The creation of distributed “systems of systems” connected through remote cloud computing further complicates how to assign responsibility for attacks that go awry. Given the commercial availability of sophisticated technology at relatively low cost, the ease of hacking, deception, and other countermeasures by state and non-state actors is another major concern. Ultimately, as AI is deployed to maximize the advantage of speed against comparably equipped militaries, we may enter a new era of “hyperwar,” in which keeping humans in the loop creates more rather than fewer vulnerabilities in achieving the ultimate warfighting aim.
