24 January 2019

Does the U.S. Face an AI Ethics Gap?

By Benjamin Boudreaux

Members of Congress, the U.S. military, and prominent technologists have raised the alarm that the U.S. is at risk of losing an artificial intelligence (AI) arms race. China has already leveraged strategic investment and planning, access to massive data, and suspect business practices to surpass the U.S. in some aspects of AI implementation. There are worries that this competition could extend to the military sphere, with serious consequences for U.S. national security.

During the Cold War arms race, U.S. policymakers and the military expressed consternation about a so-called “missile gap” with the USSR that potentially gave the Soviets military superiority. Other “gaps” also infected strategic analysis and public discourse, including the bomber gap and the space gap.

Echoes of gap anxiety continue today. The perspective that the U.S. is in an AI arms race suggests another gap: an AI “ethics gap,” in which the U.S. faces a higher ethical hurdle to developing and deploying AI in military contexts than its adversaries do. As a result of this gap, the U.S. could be at a competitive disadvantage vis-à-vis countries with fewer scruples about AI.

Some contend that the U.S. faces obstacles in working with its technology companies that China and other countries do not. Take, for example, Google’s decision not to renew its Project Maven image-recognition contract with the Department of Defense (DoD) after employees raised ethical objections to their research being used to improve military targeting. Other major U.S. tech companies have faced similar workforce pressure over partnering with the military. Compare this to the Cold War era, when American industry considered it a patriotic duty to support the U.S. military.

China has also pursued AI applications that would be ethically impermissible from the U.S. point of view. For instance, China’s aspirational government-run social credit system aggregates data about individuals to assign each person a score that determines certain benefits and burdens. This is just one of many examples underscoring divergent international views about the ethics of AI.

Of course, ethical uncertainty about the development and use of AI abounds in the U.S. There are hard questions related to the fairness of algorithms, the appropriate collection and use of personal data, the safety of autonomous systems, and so forth. As with many challenging technological issues, there is no consensus ethical perspective, and practical principles often lag behind technology development.

The DoD has invested heavily in military AI applications but has proceeded under a relatively restrained policy that requires “appropriate levels of human judgment” in the development and deployment of autonomous weapons. Indeed, defense officials regularly affirm values such as international humanitarian law, human dignity, and AI safety.

The resulting dynamic is that the U.S. military faces challenges that might limit its capabilities while following stricter and more transparent requirements than its rivals. From that perspective, the ethics gap widens.

But whether the ethics gap is a real threat to U.S. national security or merely illusory, as hindsight revealed the prior Cold War gaps to be, there is a need for clarity about the types of ethical and other risks that military AI may pose.

Some of the risks raised about military AI pose hard ethical questions: for instance, that autonomous killer robots will violate international humanitarian law, undermine human dignity, or escape accountability for wrongful actions. Other risks concern the technical operation of AI systems, such as whether testing can ensure that they will perform as intended. Still others are strategic, such as the proliferation of autonomous drones to terrorists or the danger of a rapidly escalating “flash war.”

With this taxonomy in hand, the AI ethics gap begins to narrow. China, Russia, and other supposedly unethical actors also care about operational and strategic risks; after all, it is not in their interests to build uncontrollable systems, or systems prone to proliferation, that could destabilize their own authoritarian regimes. For these shared risks, there is an opportunity for the U.S. to take a page from Cold War negotiations and explore collaborative confidence-building measures to mitigate dangers. For instance, engagement with Russia and China could focus on creating robust standards for testing and evaluation, or on promoting greater transparency about weapons-review procedures.

Even if the U.S. and its adversaries interpret ethical risks differently, this gap need not debilitate the U.S.; it could instead be a source of strength. Ethical conduct by the DoD is essential to bolstering domestic popular support and the legitimacy of military action. This is especially important now that open-source information gives the public access to details about military operations that would previously have been opaque to most Americans. Further, an emphasis on ethical action could help the military build partnerships with the private sector to leverage the most advanced technologies, attract AI talent, and promote multinational alliances with like-minded countries in Europe and elsewhere. Ethical considerations could thus become a fundamental component of how the U.S. builds the partnerships and capabilities essential to both its hard and soft power.

But in addition to these pragmatic reasons to care about ethics, the U.S. should also recognize that the ethical risks raised about AI reflect real humanitarian values that matter deeply.

Instead of worrying about an ethics gap, U.S. policymakers and the military community could proudly demonstrate a commitment to leading in AI ethics and build standards of responsible AI behavior that reflect American values and can rally the international community. Indeed, U.S. leadership on AI ethics could be essential to ensuring that risks are mitigated and that the AI arms race does not become a race to the bottom.
