Ian Mitch, Matthew J. Malone
Growing concerns about the societal risks posed by advanced artificial intelligence (AI) systems have prompted debate over whether and how the U.S. government should promote stronger security practices among private-sector developers. Although some companies have made voluntary commitments to protect their systems, competitive pressures and inconsistent approaches raise questions about the adequacy of self-regulation. At the same time, government intervention carries risks: overly stringent security requirements could limit innovation, create barriers for small firms, and harm U.S. competitiveness.
To help the U.S. government and AI industry navigate these challenges, RAND researchers identified four distinct governance approaches to strengthen security practices among developers of advanced AI systems: