8 April 2023

Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications

Micah Musser

Artificial intelligence systems are rapidly being deployed in all sectors of the economy, yet significant research has demonstrated that these systems can be vulnerable to a wide array of attacks. How different are these problems from more common cybersecurity vulnerabilities? What legal ambiguities do they create, and how can organizations ameliorate them? This report, produced in collaboration with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, presents the recommendations of a July 2022 workshop of experts to help answer these questions.

Download Full Report

Views expressed in this document do not necessarily represent the views of the U.S. government or any institution, organization, or entity with which the authors may be affiliated. Reference to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply an endorsement, recommendation, or favoring by the U.S. government, including the U.S. Department of Defense, the Cybersecurity and Infrastructure Security Agency, or any other institution, organization, or entity with which the authors may be affiliated.

Executive Summary

In July 2022, the Center for Security and Emerging Technology (CSET) at Georgetown University and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center convened a workshop of experts to examine the relationship between vulnerabilities in artificial intelligence systems and more traditional types of software vulnerabilities. Topics discussed included the extent to which AI vulnerabilities can be handled under standard cybersecurity processes, the barriers currently preventing the accurate sharing of information about AI vulnerabilities, legal issues associated with adversarial attacks on AI systems, and potential areas where government support could improve AI vulnerability management and mitigation.

Attendees at the workshop included industry representatives in both cybersecurity and AI red-teaming roles; academics with experience conducting adversarial machine learning research; legal specialists in cybersecurity regulation, AI liability, and computer-related criminal law; and government representatives with significant AI oversight responsibilities.

This report is meant to accomplish two things. First, it provides a high-level discussion of AI vulnerabilities, including the ways in which they differ from other types of vulnerabilities, and the current state of affairs regarding information sharing and legal oversight of AI vulnerabilities. Second, it articulates broad recommendations endorsed by the majority of participants at the workshop. These recommendations, categorized under four high-level topics, are as follows:

1. Topic: Extending Traditional Cybersecurity for AI Vulnerabilities
1.1. Recommendation: Organizations building or deploying AI models should use a risk management framework that addresses security throughout the AI system life cycle.
1.2. Recommendation: Adversarial machine learning researchers, cybersecurity practitioners, and AI organizations should actively experiment with extending existing cybersecurity processes to cover AI vulnerabilities.
1.3. Recommendation: Researchers and practitioners in the field of adversarial machine learning should consult with those addressing AI bias and robustness, as well as other communities with relevant expertise.

2. Topic: Improving Information Sharing and Organizational Security Mindsets
2.1. Recommendation: Organizations that deploy AI systems should pursue information sharing arrangements to foster an understanding of the threat.
2.2. Recommendation: AI deployers should emphasize building a culture of security that is embedded in AI development at every stage of the product life cycle.
2.3. Recommendation: Developers and deployers of high-risk AI systems must prioritize transparency.

3. Topic: Clarifying the Legal Status of AI Vulnerabilities
3.1. Recommendation: U.S. government agencies with authority over cybersecurity should clarify how AI-based security concerns fit into their regulatory structure.
3.2. Recommendation: There is no need at this time to amend anti-hacking laws to specifically address attacking AI systems.

4. Topic: Supporting Effective Research to Improve AI Security
4.1. Recommendation: Adversarial machine learning researchers and cybersecurity practitioners should seek to collaborate more closely than they have in the past.
4.2. Recommendation: Public efforts to promote AI research should more heavily emphasize AI security, including through funding open-source tooling that can promote more secure AI development.
4.3. Recommendation: Government policymakers should move beyond standards-writing toward providing test beds or enabling audits for assessing the security of AI models.

Authors

Micah Musser is a research analyst with the CyberAI Project at CSET, where Andrew Lohn is a senior fellow. James X. Dempsey is senior policy advisor for the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center and a lecturer at the UC Berkeley School of Law. Jonathan Spring is a cybersecurity specialist at the Cybersecurity and Infrastructure Security Agency (CISA) and was, at the time of the July 2022 workshop, an analyst at the CERT Division of the Software Engineering Institute at Carnegie Mellon University. Ram Shankar Siva Kumar is a Data Cowboy at Microsoft Security Research and a tech policy fellow at the CITRIS Policy Lab and the Goldman School of Public Policy at UC Berkeley.

Brenda Leong is a partner at BNH.ai, a boutique law firm focused on the legal issues surrounding AI. Christina Liaghati is AI strategy execution and operations manager for the AI and Autonomy Innovation Center at the MITRE Corporation. Cindy Martinez is a policy analyst focusing on AI governance and regulation. Crystal D. Grant is a data scientist and geneticist who studies the relationship between emerging technologies and civil liberties. Daniel Rohrer is vice president of software product security at NVIDIA, focused on advancing architecture and research. Heather Frase and John Bansemer both work at CSET, where Heather is a senior fellow leading the AI standards and testing line of research and John is the director of the CyberAI Project. Jonathan Elliott is chief of test and evaluation in the Test and Evaluation Division of the Chief Digital and Artificial Intelligence Office. Mikel Rodriguez works on securing AI-enabled systems at DeepMind and was, at the time of the July 2022 workshop, the director of the AI and Autonomy Innovation Center at the MITRE Corporation. Mitt Regan is the McDevitt Professor of Jurisprudence and co-director of the Center on National Security at Georgetown University Law Center. Rumman Chowdhury is the founder of Parity Consulting, an ethical AI consulting group, and was, at the time of the July 2022 workshop, the director of Machine Learning, Ethics, Transparency, and Accountability at Twitter. Stefan Hermanek is a product manager with expertise in AI safety, robustness, and AI red-teaming.
