8 November 2023

A patchwork of rules and regulations won’t cut it for AI

KENT WALKER

This year has marked a turning point for artificial intelligence. Advanced AI tools are writing poetry, diagnosing diseases, and maybe even getting us closer to a clean energy future. At the same time, we face new questions about how to develop and deploy these tools responsibly.

The past two weeks have been a milestone in the young history of AI governance — a moment of constitutional creation. The G7 just released an international code of conduct for responsible AI; the United Nations announced its AI advisory group; the U.S. Senate continued its “AI Insight Forums”; the Biden Administration’s Executive Order directed federal agencies to use AI systems and develop AI benchmarks; and the UK just held an international summit on AI safety.

As a result, we’re beginning to see the emerging outlines of an international framework for responsible AI innovation.

It’s a good thing too, because while the latest advances in AI are a triumph of scientific innovation, there is no doubt we need smart, well-crafted international regulation and industry standards to ensure AI benefits everyone.

If we don’t put such a framework into place, there is a very real risk of a fractured regulatory environment that delays access to important products, makes life harder for start-ups, slows the global development of powerful new technologies, and undermines responsible development efforts. We’ve seen that happen before with privacy, where a patchwork of rules and regulations has left people with uneven protections based on where they live, and made it harder for small businesses to navigate conflicting laws.

To avoid these missteps, we first need regulatory frameworks that can promote policy alignment for a worldwide technology. We’ll need to keep advocating for democratic values and openness in the governance of these tools. And at the national level, sectoral regulators from banking to healthcare will need to develop their own expertise and assess whether and where there may be gaps in existing laws. There’s no one-size-fits-all regulation for a general-purpose technology, any more than we have a Department of Engines, or a single law that governs all uses of electricity. Every agency will need to be an AI agency, and in the U.S. the National Institute of Standards and Technology can be at the center of a hub-and-spoke model providing coherent, practical approaches to AI governance.

Second, public-private partnerships, regulators, and industry bodies will need to be both technically informed and nimble, promoting research on where AI is going, and filling gaps where regulation is still evolving. Promoting alignment on industry best practices will also be imperative, even as many companies have already made commitments to pursue AI responsibly.

For example, Google was one of the first companies to issue a detailed set of AI principles in 2018, backed by an internal governance framework and annual progress reports. The development of cross-industry bodies — such as the Frontier Model Forum and its new $10 million AI Safety Fund — will also go a long way toward investing in the long-term safety of emerging technologies.

Additionally, broader coalitions of AI developers, academics, and civil society will be vital to developing best practices and international performance benchmarks for AI development and deployment. The good news here is that the Partnership on AI, MLCommons, and the International Organization for Standardization (ISO) are building common technical standards that can align practices globally. These industry-wide codes and standards will be a cornerstone of responsible AI development, the equivalents of the Underwriters Laboratories mark or the Good Housekeeping Seal of Approval.

AI can bring us science at digital speed — but that won’t happen by accident.

As AI innovation advances, we need public and private stakeholders to keep coming together to map out an opportunity agenda — one that harnesses AI’s potential in preventive medicine, precision agriculture, economic productivity, and much more, supported by a global, flexible, multi-dimensional AI policy framework.

At a challenging time for international institutions, work on AI policy is off to a promising start. For proof of that, we need look no further than the new G7 Code of Conduct, which will provide a strong and consistent framework as we move forward. But continued progress is essential and may well prove that governments can still work constructively on important transnational issues.

None of the developments of the past two weeks will be a panacea. But they are a sign that the global AI ecosystem understands what’s at stake, and that stakeholders are ready to do the work needed to unlock the benefits of artificial intelligence — not in a vacuum, but together.
