19 September 2023

How To Regulate Artificial Intelligence

Mark MacCarthy

The journalist and academic Nicholas Lemann has observed that when the internet came along a quarter century ago, “just about everyone, including liberals, assumed that an unregulated Internet would be a good idea.” Well, policymakers saw what that produced: tech markets dominated by a handful of giant companies, privacy invasions of truly staggering proportions, and a blizzard of hate speech and disinformation that no amount of self-regulation seems to control.

In my forthcoming book, Regulating Digital Industries, I argue that the U.S. should now establish a digital regulator empowered to set competition, privacy, and content moderation rules for companies operating in core digital industries, including search, e-commerce, social media, mobile app infrastructure, and ad tech.

With the negative example of the internet in front of them, policymakers have decided that they can and should do better in the case of artificial intelligence. They are seeking to get out in front of this promising yet dangerous technology and establish rules that will ensure its use is safe and trustworthy and leads to economic growth and progress, not discrimination, inequality, loss of human control, technological unemployment, and a host of other potential harms.

But how should policymakers do this? Despite the sense among the public that AI suddenly happened with the release of ChatGPT late last year, AI has been making steady progress for a decade or more, and this progress has already prompted a policy response. After a promising start in which it proposed regulating high-risk AI uses, Europe appears to have veered off toward an attempt to regulate AI as such. Its latest draft seems to require companies providing general-purpose algorithms to prove to the satisfaction of new AI agencies in member countries that they are safe. But this has been recognized as a dead end for years. As the report of the Stanford University Study Panel of AI experts put it in 2016, “…attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.” It is hard to see how a general-purpose AI system could establish that it is safe for all uses.

The good news is that, across three Administrations, the Executive Branch in the U.S. has been on the right track. Starting with a White House report during the Obama Administration, U.S. policy has been to regulate AI as it is used in practices that federal agencies already regulate.

Are companies using AI to establish creditworthiness? Then the Consumer Financial Protection Bureau can and should make sure the companies abide by the fair lending laws when they use this new technology. Are companies using AI to screen workers for employment or promotion? Then the Equal Employment Opportunity Commission can and should make sure that these companies do not discriminate against protected classes when they use these new tools. Are companies using AI to make pitches to consumers about the quality and price of consumer goods? Then the Federal Trade Commission can and should make sure that companies are not engaging in unfair or deceptive acts or practices when they use these algorithms.

On April 25, speaking at a press conference with a group of federal regulatory agencies, FTC Chair Lina Khan summed up this approach to AI regulation when she observed succinctly and tartly, “There is no AI exemption to the laws on the books.”

Now my colleague Alex Engler at Brookings has suggested a valuable addition to this current approach. He would vastly expand the power and tools available to certain federal agencies by granting them the authority to set new rules on discrimination, privacy, access to data, accuracy, an opt-out from algorithmic decision-making, and decision-making transparency. He would also make sure they had administrative subpoena power to examine the insides of AI algorithms and the data used to train them.

Under this approach, these new policy tools would become available to regulators when they can show that advanced algorithms create a significant risk of harm, harm that the agency is already empowered to prevent. Congress should enumerate the agencies whose missions are so crucial that they need these extra tools to carry them out in the face of the new risks created by AI.

I have a couple of quibbles with the details. Engler wants to require agencies to prove that they are dealing with “harms to health care access, economic opportunity, or access to essential services” before they can access these new regulatory tools. But this is superfluous. If Congress determines that the FTC, for instance, should have these expanded powers to enforce its statute, why should the agency have to prove in addition that it is addressing these enumerated harms? By giving the agency the new powers to regulate AI, Congress has already determined that the FTC should be able to use them when it can show it needs them to enforce its statutory mandate to protect consumers and competition.

Another quibble is that Engler would give agencies new authority to root out discrimination and protect privacy, even if an agency does not currently have this mission. New tools such as access to data and the power to compel transparency seem well designed to help an agency fulfill its statutory mission in the face of new AI challenges, but authority over discrimination and privacy amounts to a brand-new statutory mission for any agency that does not already have this responsibility.

In summary, this approach to algorithmic regulation gives enumerated existing agencies access to new regulatory tools to control algorithms used in areas under their jurisdiction when the use of these algorithms poses a significant risk of substantial harm that the agency is empowered to prevent under its existing statute.

Senator Chuck Schumer is convening yet another meeting, this one on September 13, to gather ideas for how to regulate AI. This proposal for an expansion of regulatory powers to deal with the special risks of AI should be on the table for discussion at that meeting and should then move to the top of the Congressional agenda for swift consideration.
