23 November 2021

Vulnerability Disclosure and Management for AI/ML Systems: A Working Paper with Policy Recommendations

James X. Dempsey and Andrew J. Grotto

Almost as rapidly as artificial intelligence is being adopted, an understanding of how risky it can be is developing. Much attention has focused on the ways in which AI-based systems can replicate or even exacerbate racial and gender biases.1 But attention is now increasingly turning to the ways in which AI systems, especially those dependent on machine learning (ML), can be vulnerable to intentional attack by goal-oriented adversaries, threatening the reliability of their outputs.2 As the National Security Commission on Artificial Intelligence found, "While we are on the front edge of this phenomenon, commercial firms and researchers have documented attacks that involve evasion, data poisoning, model replication, and exploiting traditional software flaws to deceive, manipulate, compromise, and render AI systems ineffective."
