
20 November 2019

Assessing ethical AI principles in defense

Mark MacCarthy

On October 31, the Defense Innovation Board unveiled principles for the ethical use of AI by the Defense Department, calling for AI systems in the military to be responsible, equitable, reliable, traceable, and governable. Though the recommendations are non-binding, the Department is likely to implement a substantial number of them. The special focus on AI-based weapons arises because their speed and precision make them indispensable in modern warfare; at the same time, their novel elements create substantial new risks that must be managed successfully to take advantage of these new capabilities.

WHAT MAKES AI WEAPONS SYSTEMS SO CONTROVERSIAL?

The Board’s chief concern was the possibility that an AI weapons system might not perform as intended, with potentially catastrophic results. A weapons system incorporating machine learning might learn to carry out attacks on targets that the military had not approved and escalate a conflict, or it might in some other way escape the area of use for which it was designed, with disastrous outcomes.

As former Secretary of the Navy Richard Danzig has noted, whenever an organization uses a complex technological system to achieve its mission, it is to some extent playing “technological roulette.” Analyses of the 1979 Three Mile Island nuclear power incident and the 1986 Challenger Space Shuttle disaster have shown that a combination of organizational, technical, and institutional factors can cause these systems to behave in unintended ways and lead to disasters. The Department of Defense has devoted substantial resources to unearthing the causes of the 1988 incident in which the cruiser USS Vincennes downed an Iranian civilian airliner, killing 290 people. That tragedy had nothing to do with advanced machine learning techniques, but AI weapons systems raise new ethical challenges that call for fresh thinking.


PRINCIPLES FOR AI WEAPONS SYSTEMS

Among these principles, several key points stand out. One is that there is no exemption from the existing laws of war for AI weapons systems, which should not cause unnecessary suffering or be inherently indiscriminate. Using AI to support decisionmaking in the field “includes the duty to take feasible precautions to reduce the risk of harm to the civilian population.”

The Defense Department has always tested and evaluated its systems to make sure that they perform reliably as intended. But the Board warns that AI weapons systems can be “non-deterministic, nonlinear, high-dimensional, probabilistic, and continually learning.” When they have these characteristics, traditional testing and validation techniques are “insufficient.”

The Board strongly recommended that the Department develop mitigation strategies and technological requirements for AI weapons systems that “foreseeably have a risk of unintentional escalation.” As a model, it pointed to the circuit breakers established by the Securities and Exchange Commission to halt trading on exchanges, and it suggested analogues in the military context, including “limitations on the types or amounts of force particular systems are authorized to use, the decoupling of various AI cyber systems from one another, or layered authorizations for various operations.”
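To make the circuit-breaker analogy concrete, here is a minimal sketch of how such a gate might be expressed in software. It is purely illustrative: the names (CircuitBreaker, ForceLimits, EngagementRequest) and the thresholds are hypothetical assumptions, not drawn from the Board’s report or any actual DOD system.

```python
# Illustrative sketch only: a software "circuit breaker" inspired by the Board's
# analogy to SEC trading halts. The class and field names (CircuitBreaker,
# ForceLimits, EngagementRequest) are hypothetical, not drawn from the report
# or any actual DOD system.
from dataclasses import dataclass


@dataclass
class ForceLimits:
    max_engagements_per_minute: int  # cap on the amount of force (tempo)
    max_munition_yield: float        # cap on the type of force (arbitrary units)


@dataclass
class EngagementRequest:
    munition_yield: float
    human_approvals: int             # layered authorizations already granted


class CircuitBreaker:
    """Refuses requests outside the limits and halts all further engagements
    once the tempo limit is exceeded (the counter never resets in this sketch)."""

    def __init__(self, limits: ForceLimits, required_approvals: int):
        self.limits = limits
        self.required_approvals = required_approvals
        self.engagements_this_minute = 0
        self.tripped = False

    def authorize(self, request: EngagementRequest) -> bool:
        if self.tripped:
            return False  # breaker already open: everything halts
        if request.human_approvals < self.required_approvals:
            return False  # layered authorization not satisfied
        if request.munition_yield > self.limits.max_munition_yield:
            return False  # requested force exceeds the allowed type/amount
        self.engagements_this_minute += 1
        if self.engagements_this_minute > self.limits.max_engagements_per_minute:
            self.tripped = True  # halt further action, like a trading halt
            return False
        return True


breaker = CircuitBreaker(ForceLimits(max_engagements_per_minute=3, max_munition_yield=1.0),
                         required_approvals=2)
print(breaker.authorize(EngagementRequest(munition_yield=0.5, human_approvals=2)))  # True
print(breaker.authorize(EngagementRequest(munition_yield=2.0, human_approvals=2)))  # False
```

The point of the sketch is the structure rather than the numbers: limits on force, layered authorizations, and a tripwire that halts further action are separate, independently auditable checks.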

The Department’s 2012 directive 3000.09 required that commanders and operators always be able to exercise “appropriate levels of human judgment” over the use of autonomous weapons in the field. The idea was that the contexts in which AI systems might be used in the military differ in so many crucial details that no more precise rules can be formulated in the abstract. The Board agreed with this reasoning. It did not try to make this guidance more precise, saying instead that it is “a standard to continue using.” But it did add other elements to this guidance through a discussion of an off switch for AI weapons systems.

The Board publicly debated whether humans should be able to turn off AI weapons systems even after they have been activated. The discussion seemed to turn on whether the systems would have to be slow enough for humans to intervene, which in many cases would defeat the purpose. In the end, the Board agreed that there had to be an off switch, but that it might have to be triggered automatically, without human intervention. In this way, the Board recognized the reality that “due to the scale of interactions, time, and cost, humans cannot be ‘in the loop’ all the time.” Others, including Danzig, have noted that “communications and processing speed tilt the equation against human decision making.” The report moves beyond reliance on human decisionmakers to recommend designing systems that can disengage or deactivate automatically when they begin to go off course.
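A minimal sketch of what such an automatic disengagement mechanism might look like follows. It is illustrative only; the names (Watchdog, OperatingEnvelope) and the two envelope checks are hypothetical assumptions, not details from the report.

```python
# Illustrative sketch only: an automatic disengagement monitor of the kind the
# report gestures toward. The names (Watchdog, OperatingEnvelope) and the two
# envelope checks are hypothetical assumptions, not details from the report.
from dataclasses import dataclass


@dataclass
class OperatingEnvelope:
    max_range_km: float            # geographic bound the system was designed for
    max_actions_per_second: float  # tempo beyond which human review is impossible


class Watchdog:
    """Deactivates the system automatically when it leaves its envelope."""

    def __init__(self, envelope: OperatingEnvelope):
        self.envelope = envelope
        self.active = True

    def check(self, distance_km: float, actions_per_second: float) -> bool:
        if (distance_km > self.envelope.max_range_km
                or actions_per_second > self.envelope.max_actions_per_second):
            self.active = False  # the "off switch" triggers without human intervention
        return self.active


watchdog = Watchdog(OperatingEnvelope(max_range_km=50.0, max_actions_per_second=10.0))
print(watchdog.check(distance_km=12.0, actions_per_second=3.0))   # True: within envelope
print(watchdog.check(distance_km=80.0, actions_per_second=3.0))   # False: auto-deactivated
```

The design choice the report points toward is visible here: the deactivation decision is made by the monitoring logic itself, because at machine speed a human could not review each check.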

IMPLEMENTING THESE PRINCIPLES

The Board reported that there have already been exercises with DOD personnel to see how some of the principles would work in practice. It would be especially important to implement one of the Board’s most thoughtful and most consequential recommendations, namely, to develop a risk management typology. This framework would guide the introduction of AI-based military applications according to “their ethical, safety, and legal risk considerations,” allowing rapid adoption of mature technologies in low-risk applications and greater precaution for less mature applications that might lead to “more significant adverse consequences.”

Next steps might be for the Board or Department leaders to reach out to the AI researchers seeking to discourage scientists from working on military AI research and to the human rights groups seeking an international treaty banning fully autonomous weapons. The Department’s aim of fielding reliable weapons systems that do not engage in unintended campaigns coincides with the critics’ aim of preventing the development of out-of-control systems that violate the laws of war. Both sides can read the Board’s report as a signal of good faith and a demonstration that there is enough common ground to serve as the basis for a discussion.
