17 May 2023

Bridging the AI Regulation Gap

CÉDRIC O

PARIS – On March 22, the Future of Life Institute published an open letter calling for a six-month moratorium on the development of generative artificial intelligence systems, citing the potential dangers to humanity. Since the publication of that letter, numerous high-profile figures have voiced similar concerns, including AI pioneer Geoffrey Hinton, who recently resigned from Google to raise the alarm about the “existential threat” posed by the technology he played so pivotal a role in developing.

The seriousness of these warnings should not be underestimated. Demands for government intervention rarely originate from tech companies, which in recent years have fiercely resisted efforts by American and European policymakers to regulate the industry. But given the economic and strategic promise of generative AI, development cannot be expected to pause or slow down on its own.

Meanwhile, members of the European Parliament have voted in favor of a more stringent version of the AI Act, the landmark regulatory framework designed to address the challenges posed by “traditional” AI systems, in an effort to adapt it to so-called foundation models and advanced generative AI systems such as OpenAI’s GPT-4. As one of the main negotiators of the European Union’s groundbreaking Digital Markets Act (DMA) and Digital Services Act (DSA), I recognize the importance of creating a human-centered digital world and mitigating the potential negative impact of new technologies. But the speed at which the EU is developing restrictive measures raises several concerns.
