
19 August 2022

Artificial Intelligence Safety and Stability

Nations around the world are investing in artificial intelligence (AI) to improve their military, intelligence, and other national security capabilities. Yet today’s AI technology has significant safety and security vulnerabilities: AI systems can fail, sometimes in unexpected ways, for a variety of reasons. Moreover, the interactive nature of military competition means that one nation’s actions affect others, including in ways that may undermine mutual stability. There is an urgent need to explore actions that can mitigate these risks, such as improved processes for AI assurance, norms and best practices for responsible AI adoption, and confidence-building measures that improve stability among all nations.

The Center for a New American Security (CNAS) Artificial Intelligence Safety and Stability project aims to better understand AI risks and specific steps that can be taken to improve AI safety and stability in national security applications. Major lines of effort include:

Anticipating, preventing, and mitigating catastrophic AI failures

Improving Defense Department processes for ensuring safe, secure, and trusted AI

Understanding and shaping opportunities for compute governance

This cross-program effort includes the CNAS Technology and National Security, Defense, Indo-Pacific Security, Transatlantic Security, and Energy, Economics, and Security programs. CNAS experts will share their findings in public reports and policy briefs with recommendations for policymakers.

This project is made possible with the generous support of Open Philanthropy.
