21 May 2023

The G7 Summit is an Opportunity to Tackle AI Regulation

Cassandra Shand

This week, world leaders will converge in Hiroshima, Japan, to kick off the G7 Summit. This gathering of leaders from the United States, the United Kingdom, France, Japan, Germany, Canada, and Italy, along with the European Union and invited partners, presents a well-timed opportunity to lay the groundwork for an international artificial intelligence regulatory framework.

Unfortunately, AI development isn't even listed on the agenda.

Instead, the spotlight will likely fall on U.S.-China relations, a trilateral meeting on Indo-Pacific cooperation between the United States, Japan, and South Korea, and the ongoing war in Ukraine. The summit's overarching theme, meanwhile, revolves around outreach to the Global South and how the G7 can encourage a rules-based order. All are worthy topics, but the conversation on AI regulation and development risks being sidelined altogether. That would be a missed opportunity to address rapid innovation in AI and its inevitable societal implications.

The G7 should initiate a codified and widely adopted agreement or treaty setting clear rules that prohibit dangerous AI development. Such an agreement should draw on prominent state approaches to AI regulation while treating industry standards and open-source development as distinct components of the AI ecosystem. It might encourage the adoption of a risk-based regulatory framework, which would limit dangerous AI applications while preserving competition in the AI space. The G7 should also anticipate reluctance from less aligned nations, such as China, and formulate strategies to build international support for shared rules and standards around AI development.

Since the debut of GPT-3, policymakers worldwide have grappled with how to regulate AI as a groundbreaking technology. Today, industry-led pledges to pause the training of large language models (LLMs) more powerful than GPT-4 provide some respite, giving regulators a chance to catch up. But this weak, implicit pause on training future LLMs is precarious, and regulators remain ill-prepared for the imminent evolution of the AI landscape. The G7 needs to address this and work toward establishing international norms for AI regulation. If it doesn't, humanity could bear the consequences.

International norm-setting is urgently needed. A bipolar AI race between the United States and China could become a race to the bottom on safety. Given the extreme uncertainty surrounding further AI development, it is crucial to govern the rollout of advanced AI models so that international lawmakers and regulators can anticipate and address its spillover effects.

The growing competency of the open-source AI development community poses a substantial hurdle to regulation. Though tech giants initially encouraged it, open-source development blurs the lines of responsibility for different AI applications and makes AI regulations harder to enforce. With such a diffuse approach to AI development, how do we hold developers accountable for myopic AI advancement? While many countries are weighing how to regulate AI, a risk-based regulatory framework has emerged as the most pragmatic solution. This approach, which underpins the UK's risk-based AI framework published in March 2023, fosters innovation while ensuring the cautious development of AI applications with potentially negative impacts. Under a risk-based framework, regulation is tailored to specific AI use cases rather than curbing AI development across the board. Adopting this approach allows regulators to mitigate the existential threats posed by underregulated AI development while allowing countries to remain competitive in the AI space.

Recent developments, like Italy's decision to lift its ban on ChatGPT after OpenAI addressed the nation's data privacy concerns, suggest that consensus between the AI industry and state regulators is achievable. The summit could serve as a platform to explore similar data and privacy standards on a larger scale, charting a path to align AI development with international expectations through multinational agreements.

The genie is indeed out of the bottle. The G7 nations must collaborate on a strategic plan that addresses emerging norms and proposes an international approach to mitigating the perverse downstream effects of AI. Their focus should be two-pronged: encouraging proactive AI innovation while minimizing the negative spillovers AI creates. Their ultimate goal should be to draw all states into a dynamic, collaborative approach to international AI regulation.

Cassandra Shand is a Ph.D. candidate at the University of Cambridge and a Young Voices Innovation Fellow. Twitter: @CassandraShand.
