3 April 2023

The European Union’s Artificial Intelligence Act, explained

Spencer Feingold

The European Union is considering far-reaching legislation on artificial intelligence (AI).

The proposed Artificial Intelligence Act would classify AI systems by risk and mandate various development and use requirements.

European lawmakers are still debating the details, with many stressing the need to both foster AI innovation and protect the public.

The European Union (EU) is considering a new legal framework that aims to significantly bolster regulations on the development and use of artificial intelligence.

The proposed legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.

“[AI] has been around for decades but has reached new capacities fueled by computing power,” Thierry Breton, the EU’s Commissioner for Internal Market, said in a statement. The Artificial Intelligence Act aims to “strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited and minimal.

AI systems that pose limited or minimal risk—like spam filters or video games—can be used with few requirements beyond transparency obligations. Systems deemed to pose an unacceptable risk—like government social scoring and real-time biometric identification systems in public spaces—are prohibited with few exceptions.
“On artificial intelligence, trust is a must, not a nice to have.” — Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age

High-risk AI systems are permitted, but developers and users must adhere to regulations that require rigorous testing, proper documentation of data quality and an accountability framework that details human oversight. High-risk AI systems include autonomous vehicles, medical devices and critical infrastructure machinery, to name a few.

The proposed legislation also outlines regulations around so-called general-purpose AI: systems that can be used for many different purposes with varying degrees of risk. Such technologies include, for example, large language model generative AI systems like ChatGPT.


“With this Act, the EU is taking the lead in attempting to make AI systems fit for the future we as humans want,” said Kay Firth-Butterfield, the Head of AI at the World Economic Forum.

European Executive VP Margrethe Vestager and European Internal Market Commissioner Thierry Breton give a media conference on the EU approach to AI in 2021. Image: REUTERS

The Artificial Intelligence Act proposes steep non-compliance penalties. For companies, fines can reach up to €30 million or 6% of global annual turnover, whichever is higher. Submitting false or misleading documentation to regulators can also result in fines.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the Executive Vice-President for a Europe fit for the Digital Age, added in a statement. “Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

The proposed law also aims to establish a European Artificial Intelligence Board, which would oversee the implementation of the regulation and ensure uniform application across the EU. The body would be tasked with releasing opinions and recommendations on issues that arise as well as providing guidance to national authorities.

“The Board should reflect the various interests of the AI eco-system and be composed of representatives of the Member States,” the proposed legislation reads.

The Artificial Intelligence Act was originally proposed by the European Commission in April 2021. A so-called general approach position on the legislation was adopted by the Council of the European Union in late 2022, and the legislation is currently under discussion in the European Parliament.

“Artificial intelligence is of paramount importance for our future,” Ivan Bartoš, the Czech Deputy Prime Minister for Digitalisation, said in a statement following the Council's adoption. “We managed to achieve a delicate balance which will boost innovation and uptake of artificial intelligence technology across Europe.”

Once the European Parliament adopts its own position on the legislation, EU interinstitutional negotiations—a process known as trilogues—will begin to finalise the law before it enters into force. Trilogues can vary significantly in duration as lawmakers negotiate sticking points and revise proposals. For complex pieces of legislation like the Artificial Intelligence Act, EU officials say, trilogues are often lengthy.
