J.P. SINGH
Disregarding ethical considerations in the name of AI innovation – the direction taken by the US – is a recipe for disaster. But pretending that lofty ethical principles will solve the real governance challenges AI raises is similarly misguided.
WASHINGTON, DC – The biggest governance dilemma in AI is setting guidelines for the technology’s ethical use without unduly weakening the incentive to innovate. So far, countries and regions have largely failed to strike any kind of balance, instead tipping the scales one way or the other while loftily proclaiming reverence for both.
The concept of responsible AI (RAI) exemplifies this idealistic rhetoric. The principles it espouses – from ensuring that algorithms are not based on faulty datasets to preventing privacy and human-rights violations – are undoubtedly worthy. Where RAI falls short is in showing how these ideals should be incorporated into AI governance, and how to balance regulation with incentives for continued innovation.
Nonetheless, RAI has been embraced by many governments, which have incorporated relevant language into national AI policies. International organizations have also championed RAI – UNESCO’s Global AI Ethics and Governance Observatory is a leading example – with the goal of shaping international norms and national policy. But such top-down approaches contrast sharply with the deliberative, bottom-up decision-making that has proven most effective in addressing problems requiring collective action and coordination.
Meanwhile, corporations are touting their supposed commitment to RAI, often while resisting the regulations that would force them to implement its principles. Even universities have jumped on the RAI bandwagon, offering AI ethics courses in computer science departments. AI governance courses, however, are usually offered in other departments, so computer science students may not take them.
But it is the brass tacks of AI governance, not the promotion of vague principles, that will lead to politically feasible, ethically desirable, and economically beneficial outcomes. Policymakers in many economies are struggling on this front, especially when trying to balance ethical imperatives with incentives for innovation. Whereas South Korea and Japan seem to have found some equilibrium, the European Union has placed a higher priority on ethics, and the United Kingdom and the United States have put innovation first.
The EU’s 2024 AI Act attempts a balanced approach, classifying AI applications by risk: high-risk activities are heavily regulated, while minimal-risk activities are left unregulated. As the EU monitors the implementation of these rules across member states, its experience in enforcing AI ethics can offer useful lessons for the rest of the world. Nonetheless, as French President Emmanuel Macron rightly noted in February, the EU is currently “not in the race” when it comes to AI innovation, and the European Commission still struggles to articulate a pro-competitive innovation strategy.