29 July 2021

AI Accidents: An Emerging Threat: What Could Happen and What to Do

Zachary Arnold, Helen Toner

Executive Summary
Modern machine learning is powerful in many ways, but profoundly fragile in others. Because of this fragility, even the most advanced artificial intelligence tools can unpredictably fail, potentially crippling the systems in which they are embedded. As machine learning becomes part of critical, real-world systems, from cars and planes to financial markets, power plants, hospitals, and weapons platforms, the potential human, economic, and political costs of AI accidents will continue to grow.
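
To make this fragility concrete, below is a minimal sketch, assuming only numpy and scikit-learn: an ordinary classifier is trained on one data distribution and then evaluated on data whose distribution has shifted. The model, the synthetic data, and the shift size are all invented for illustration and are not drawn from the brief.

```python
# Minimal sketch of ML fragility under distribution shift: a model that
# scores well on data resembling its training set degrades toward chance
# once operating conditions change. All numbers here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

def make_data(n, shift=0.0):
    """Two Gaussian classes in 2-D; `shift` translates both classes,
    mimicking a change in operating conditions at deployment time."""
    X0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train on data resembling what the developers collected.
X_train, y_train = make_data(500)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on familiar data vs. shifted, "in the wild" data.
X_iid, y_iid = make_data(500)
X_shifted, y_shifted = make_data(500, shift=3.0)

print("accuracy, familiar data:", model.score(X_iid, y_iid))          # roughly 0.92
print("accuracy, shifted data: ", model.score(X_shifted, y_shifted))  # roughly 0.5
```

Nothing about the trained model flags the problem in advance; the failure surfaces only when deployment data drift away from the training data.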

Policymakers can help reduce these risks. To support their efforts, this brief explains how AI accidents can occur and what they are likely to look like “in the wild.” Using hypothetical scenarios involving AI capabilities that already exist or soon will, we explain three basic types of AI failures—robustness failures, specification failures, and assurance failures—and highlight factors that make them more likely to occur, such as fast-paced operation, system complexity, and competitive pressure (a toy sketch of a specification failure follows the list below). Finally, we propose a set of initial policy actions to reduce the risks of AI accidents, make AI tools more trustworthy and socially beneficial, and support a safer, richer, and healthier AI future. Policymakers should:

Facilitate information sharing about AI accidents and near misses, working with the private sector to build a common base of knowledge on when and how AI fails.

Invest in AI safety research and development (R&D), a critical but currently underfunded area.

Invest in AI standards development and testing capacity, which will help develop the basic concepts and resources needed to ensure AI systems are safe and reliable.

Work across borders to reduce accident risks, including through R&D alliances and intergovernmental organizations.
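
As a complement to the robustness sketch above, here is a toy illustration of the second failure type named earlier, a specification failure: an agent faithfully maximizes the objective it was literally given, which diverges from what its designers intended. The cleaning-robot scenario, action names, and reward numbers are all invented for illustration and do not come from the brief.

```python
# Minimal sketch of a "specification failure": an agent optimizes the
# reward it was literally given (a proxy) rather than what its designers
# intended. The scenario and numbers are invented for illustration.

# Each action's outcome: dirt genuinely removed vs. dirt merely hidden
# from the robot's sensor.
actions = {
    "clean":      {"dirt_removed": 8, "dirt_hidden": 0},   # intended behavior
    "cover_dirt": {"dirt_removed": 0, "dirt_hidden": 10},  # loophole
}

def proxy_reward(outcome):
    # Designers rewarded "dirt no longer visible to the sensor",
    # which counts hidden dirt the same as removed dirt.
    return outcome["dirt_removed"] + outcome["dirt_hidden"]

def intended_reward(outcome):
    # What the designers actually wanted: dirt genuinely removed.
    return outcome["dirt_removed"]

best = max(actions, key=lambda a: proxy_reward(actions[a]))
print("agent chooses:  ", best)                            # -> cover_dirt
print("proxy reward:   ", proxy_reward(actions[best]))     # -> 10
print("intended reward:", intended_reward(actions[best]))  # -> 0
```

The agent is not malfunctioning in any mechanical sense; it is doing exactly what the proxy reward asks, which is why specification problems are hard to catch with ordinary debugging.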
