Dean W. Ball and Ketan Ramakrishnan
Dean W. Ball co-authored this piece before joining the U.S. Office of Science and Technology Policy. All views represented are purely those of the authors and do not necessarily reflect U.S. government policy.
At the heart of frontier artificial intelligence (AI) policy lies a key debate: Should regulation focus on the core technology itself—AI models—or on the technology’s uses? Advocates of use-based AI regulation argue that it protects innovation by giving model developers the freedom to experiment,
free from burdensome licensing regimes and thickets of technical standards. Advocates of model-based regulation, on the other hand, argue that their approach concentrates the compliance burden on developers while granting users of AI the latitude to deploy the technology as they see fit, thus aiding technology diffusion in the long run.
Each of these familiar paradigms for regulating frontier AI faces serious objections. Use-based regulation can be just as onerous as model-based regulation—often much more so,
as suggested by the EU AI Act and a gaggle of successor bills in various U.S. states.1 Although the burden of use-based regulation does not fall on developers in the first instance,
it can nevertheless be expected to burden model development in serious ways, such as by deterring adoption through increased compliance costs for model users.