6 June 2023

Around the halls: What should the regulation of generative AI look like?

Nicol Turner Lee, Niam Yaraghi, Mark MacCarthy, and Tom Wheeler 

We are living in a time of unprecedented advancement in generative artificial intelligence (AI): AI systems that can produce a wide range of content, such as text or images. The release of ChatGPT, a chatbot powered by OpenAI’s GPT-3.5 large language model (LLM), in November 2022 ushered generative AI into the public consciousness, and other companies like Google and Microsoft have been equally busy creating new opportunities to leverage the technology. Meanwhile, these continuing advancements and applications of generative AI have raised important questions about how the technology will affect the labor market, how its use of training data implicates intellectual property rights, and what shape government regulation of this industry should take. Last week, a congressional hearing with key industry leaders suggested an openness to AI regulation—something that legislators have already considered as a way to rein in some of the potential negative consequences of generative AI and AI more broadly. Considering these developments, scholars across the Center for Technology Innovation (CTI) weighed in around the halls on what the regulation of generative AI should look like.

REGULATION OF GENERATIVE AI COULD START WITH GOOD CONSUMER DISCLOSURES

Generative AI refers to machine learning algorithms that can create new content like audio, code, images, text, simulations, or even videos. Recent attention has focused on its enablement of chatbots, including ChatGPT, Bard, Copilot, and other more sophisticated tools that leverage LLMs to perform a variety of functions, like gathering research for assignments, compiling legal case files, automating repetitive clerical tasks, or improving online search. Debates around regulation have focused on the potential downsides of generative AI, including the quality of datasets, unethical applications, racial or gender bias, workforce implications, and greater erosion of democratic processes due to technological manipulation by bad actors. But the upsides include a dramatic spike in efficiency and productivity as the technology improves and simplifies certain processes and decisions, like streamlining physician processing of medical notes or helping educators teach critical thinking skills. There will be a lot to discuss around generative AI’s ultimate value and consequence to society, and if Congress continues to operate at a very slow pace to regulate emerging technologies and institute a federal privacy standard, generative AI will become more technically advanced and deeply embedded in society before lawmakers act. But Congress could garner a quick win on the regulatory front by requiring consumer disclosures when AI-generated content is in use and adding labeling or some type of multi-stakeholder certification process to encourage improved transparency and accountability for existing and future use cases.

Once again, the European Union is already leading the way. In its proposed AI Act, the EU requires that AI-generated content be disclosed to consumers to prevent copyright infringement, illegal content, and other malfeasance stemming from end users’ lack of understanding of these systems. As more chatbots mine, analyze, and present content in accessible ways for users, their findings are often not attributable to any one source or set of sources, and despite the U.S. fair use doctrine, which permits certain uses of copyrighted work, consumers are often left in the dark about how results are generated and how the process can be explained.

Congress should prioritize consumer protection in future regulation and work to create agile policies that are future-proofed to adapt to emerging consumer and societal harms—starting with immediate safeguards for users before they are left, once again, to fend for themselves as subjects of highly digitized products and services. The EU may honestly be onto something with the disclosure requirement, and the U.S. could further contextualize its application vis-à-vis existing models that do the same, including the labeling guidance of the Food and Drug Administration (FDA) or what I have proposed in prior research: an adaptation of the Energy Star Rating system to AI. Bringing more transparency and accountability to these systems must be central to any regulatory framework, and taking smaller bites of this big apple might be the right first step for policymakers.
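As a purely illustrative sketch of what such a machine-readable consumer disclosure might contain, the Python snippet below assembles a hypothetical label that a provider could attach to AI-generated content. Every field name and value here is an assumption for illustration—no existing standard or proposal specifies this schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIContentDisclosure:
    """Hypothetical consumer-facing label for AI-generated content."""
    ai_generated: bool        # was the content machine-generated?
    model_name: str           # which system produced it
    provider: str             # who operates that system
    human_reviewed: bool      # did a person review it before release?
    training_data_notes: str  # plain-language note on data provenance
    generated_at: str         # timestamp of generation

def make_label(model_name: str, provider: str, human_reviewed: bool,
               training_data_notes: str) -> str:
    """Serialize a disclosure label as JSON for embedding in content metadata."""
    label = AIContentDisclosure(
        ai_generated=True,
        model_name=model_name,
        provider=provider,
        human_reviewed=human_reviewed,
        training_data_notes=training_data_notes,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(label), indent=2)

# Example: label a chatbot answer before it is shown to a consumer.
print(make_label(
    model_name="example-llm-1",
    provider="Example AI Co.",
    human_reviewed=False,
    training_data_notes="Trained on licensed and publicly available text.",
))
```

The design point is that a disclosure should travel with the content as structured metadata, so that browsers, platforms, or regulators can surface it to consumers automatically rather than relying on fine print.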

REVISITING HIPAA AND HEALTH INFORMATION BLOCKING RULES: BALANCING PRIVACY AND INTEROPERABILITY IN THE AGE OF AI

With the emergence of sophisticated artificial intelligence (AI) advancements, including large language models (LLMs) like GPT-4 and LLM-powered applications like ChatGPT, there is a pressing need to revisit healthcare privacy protections. At their core, all AI innovations utilize sophisticated statistical techniques to discern patterns within extensive datasets using increasingly powerful yet cost-effective computational technologies. These three components—big data, advanced statistical methods, and computing resources—have not only become available recently but are also being democratized and made readily accessible to everyone at a pace unprecedented among previous technological innovations. This progression allows us to identify patterns that were previously indiscernible, which creates opportunities for important advances but also possible harms to patients.

Privacy regulations, most notably HIPAA, were established to protect patient confidentiality, operating under the assumption that de-identified data would remain anonymous. However, given the advancements in AI technology, the current landscape has become riskier. Now, it’s easier than ever to integrate various datasets from multiple sources, increasing the likelihood of accurately identifying individual patients.
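To make the re-identification risk concrete, here is a minimal sketch in Python (using pandas) of a classic linkage attack: joining a de-identified medical dataset to a public dataset on shared quasi-identifiers such as ZIP code, birth date, and sex. All records, names, and column choices are hypothetical.

```python
import pandas as pd

# Hypothetical de-identified hospital records: names removed, but
# quasi-identifiers (ZIP code, birth date, sex) retained.
deidentified = pd.DataFrame({
    "zip": ["20036", "20036", "10027"],
    "birth_date": ["1961-07-31", "1975-02-14", "1989-11-02"],
    "sex": ["F", "F", "M"],
    "diagnosis": ["hypertension", "diabetes", "asthma"],
})

# Hypothetical public dataset (e.g., a voter roll) containing names
# alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["Jane Doe", "Mary Roe", "John Smith"],
    "zip": ["20036", "20036", "10027"],
    "birth_date": ["1961-07-31", "1975-02-14", "1989-11-02"],
    "sex": ["F", "F", "M"],
})

# A simple inner join on the quasi-identifiers re-attaches names to
# "anonymous" medical records whenever the combination is unique.
reidentified = deidentified.merge(public, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Because combinations of ZIP code, birth date, and sex are unique for a large share of the population—Latanya Sweeney’s well-known estimate put it near 87% of Americans—a trivial join like this can defeat de-identification, and modern AI systems automate and scale exactly this kind of pattern matching across many more datasets.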

Apart from the amplified risk to privacy and security, novel AI technologies have also increased the value of healthcare data due to the enriched potential for knowledge extraction. Consequently, many data providers may become more hesitant to share medical information with their competitors, further complicating healthcare data interoperability.

Considering these heightened privacy concerns and the increased value of healthcare data, it’s crucial to introduce modern legislation to ensure that medical providers will continue sharing their data while being shielded against the consequences of potential privacy breaches likely to emerge from the widespread use of generative AI.

LAMPEDUSA ON AI REGULATION

In “The Leopard,” Giuseppe Tomasi di Lampedusa’s famous novel of the Sicilian aristocratic reaction to the unification of Italy in the 1860s, one of the central characters says, “If we want things to stay as they are, things will have to change.”

Something like this Sicilian response might be happening in the tech industry’s embrace of inevitable AI regulation. Three things are needed, however, if we do not want things to stay as they are.

The first and most important step is sufficient resources for agencies to enforce current law. Federal Trade Commission Chair Lina Khan properly says AI is not exempt from current consumer protection, discrimination, employment, and competition law, but if regulatory agencies cannot hire technical staff and bring AI cases in a time of budget austerity, current law will be a dead letter.

Second, policymakers should not be distracted by science fiction fantasies of AI programs developing consciousness and achieving independent agency over humans, even if these metaphysical abstractions are endorsed by industry leaders. Not a dime of public money should be spent on these highly speculative diversions when scammers and industry edge-riders are seeking to use AI to break existing law.

Third, Congress should consider adopting new identification, transparency, risk assessment, and copyright protection requirements along the lines of the European Union’s proposed AI Act. The National Telecommunications and Information Administration’s request for comment on a proposed AI accountability framework and Sen. Chuck Schumer’s (D-NY) recently announced legislative initiative to regulate AI might be moving in that direction.

INNOVATIVE AI REQUIRES INNOVATIVE OVERSIGHT

Both sides of the political aisle, as well as digital corporate chieftains, are now talking about the need to regulate AI. A common theme is the need for a new federal agency. Simply cloning the model used for existing regulatory agencies is not the answer, however. That model, developed for oversight of an industrial economy, took advantage of slower-paced innovation to micromanage corporate activity. It is unsuitable for the velocity of the free-wheeling AI era.

All regulations walk a tightrope between protecting the public interest and promoting innovation and investment. In the AI era, traversing this path means accepting that different AI applications pose different risks and identifying a plan that pairs the regulation with the risk while avoiding innovation-choking regulatory micromanagement.

Such agility begins with borrowing the formula by which digital companies create technical standards and applying it to behavioral standards: identify the issue; assemble a standard-setting process involving the companies, civil society, and the agency; and then give final approval and enforcement authority to the agency.

Industrialization was all about replacing and/or augmenting the physical power of humans. Artificial intelligence is about replacing and/or augmenting humans’ cognitive powers. To confuse how the former was regulated with what is needed for the latter would be to miss the opportunity for regulation to be as innovative as the technology it oversees. We need institutions for the digital era that address problems that already are apparent to all.

Google and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.
