15 March 2024

Governments Must Shape AI’s Future

MARIANA MAZZUCATO and FAUSTO GERNONE

LONDON – Last December, the European Union set a global precedent by finalizing the Artificial Intelligence Act, one of the world’s most comprehensive sets of AI rules. Europe’s landmark legislation could signal a broader trend toward more responsive AI policies. But while regulation is necessary, it is insufficient. Beyond imposing restrictions on private AI companies, governments must assume an active role in AI development by designing systems and shaping markets for the common good.

To be sure, AI models are evolving rapidly. When EU regulators released the first draft of the AI Act in April 2021, they hailed it as “future-proof,” only to be left scrambling to update the text in response to the release of ChatGPT a year and a half later. But regulatory efforts are not in vain. For example, the law’s ban on AI in biometric policing will likely remain pertinent, regardless of advances in the technology. Moreover, the risk frameworks contained in the AI Act will help policymakers guard against some of the technology’s most dangerous uses. While AI will develop faster than policy, the law’s fundamental principles will not need to change – though more flexible regulatory tools will be needed to tweak and update rules.

But thinking of the state as only a regulator misses the larger point. Innovation is not just some serendipitous market phenomenon. It has a direction that depends on the conditions in which it emerges, and public policymakers can influence these conditions. The rise of a dominant technological design or business model is the result of a power struggle between various actors – corporations, governmental bodies, academic institutions – with conflicting interests and divergent priorities. Reflecting this struggle, the resulting technology may be more or less centralized, more or less proprietary, and so forth.
