
27 May 2023

The regulators are coming for your AI

Steven Tiell

The Group of Seven (G7) has fired the latest of three notable salvos signaling that governments around the globe are focused on regulating Generative Artificial Intelligence (AI). The G7 ministers have established the Hiroshima AI Process, an inclusive effort for governments to collaborate on AI governance, IP rights (including copyright), transparency, mis/disinformation, and responsible use. Earlier in the week, testimony in the United States highlighted the grave concerns governments hold and why these discussions are necessary.

“Loss of jobs, invasion of personal privacy at a scale never seen before, manipulation of personal behavior, manipulation of personal opinions, and potentially the degradation of free elections in America.” These are the downsides, harms, and risks of Generative AI that Senator Josh Hawley (R-MO) recapped at the May 16 Senate Judiciary Committee hearing, concluding, “this is quite a list.”

Just last week, the European Union (EU) AI Act moved forward, paving the way for a plenary vote in mid-June on its path to becoming law.

Make no mistake, regulation is coming.

The EU is indexing its regulation to the risk associated with the activities AI affects, with ratings of low/minimal, limited, high, and unacceptable. In doing so, the EU is signaling that the higher the risk, the more regulation: activities with unacceptable risk are banned outright (e.g., real-time biometric identification in public spaces, social credit scoring, and certain aspects of predictive policing). Responding specifically to the latest developments in Generative AI, the EU is also looking to require organizations to be more responsible by assessing the environmental damage of training Large Language Models (LLMs), which are quite energy- and compute-intensive, and by forcing model makers to disclose “the use of training data protected under copyright law.” Another provision calls for the creation of a database cataloging where, when, and how models in the two middle tiers of risk are being deployed in the EU. The devil is in the details…and the details haven’t been solidified yet.

At the May 16, 2023 Judiciary Committee hearing in the US, lawmakers sent strong signals of support for an entirely new agency to regulate the use of AI. Testimony from Sam Altman (OpenAI), Christina Montgomery (IBM), and Gary Marcus (NYU) surfaced calls for licensing systems that are capable of certain tasks (e.g., technology that can persuade, manipulate, or influence human behavior or beliefs, or create novel biological agents) or that require a certain amount of compute or memory to train and operate. While this risk-based approach is similar to the current EU AI Act, it differs by suggesting a regulator could require pre-review and licensing in certain situations. Such a license could be revoked when compliance with yet-to-be-defined safety standards falls short (e.g., if models can self-replicate or self-exfiltrate into the wild). Alongside pre- and post-deployment review of AI systems, the testimony uniformly called for some form of impact assessments and/or audits.

Both governments have recognized the need to preserve competition and suggest that their regulatory regimes will be significantly less onerous for small innovators and startups, encouraging innovation while curbing the ability of AI deployed at scale to cause harm to humanity.

Perhaps legislators have learned a lesson from the blanket protections Section 230 of the Communications Decency Act has provided to social media companies for decades, shielding them from liability for the content that people share on their services. These protections were recently sidestepped, and thus left intact, in a May 18, 2023 Supreme Court decision in which the justices said they “decline to address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief.” It’s clear the Court is calling on Congress to amend the law, especially in the context of Justice Elena Kagan’s comments during oral arguments: “every other industry must internalize the cost of its conduct. Why is it that the tech industry gets a pass? We’re a court, we really don’t know about these things. These are not like the nine greatest experts on the Internet.”

Given this interest in legislating for and regulating the tech industry, new sub-sectors within it should be paying attention. Over the last fifteen years, the demand for regulatory reform has focused on social media companies that host user-generated content, but with Generative AI, the focus will quickly shift. With strong signals from European and US regulators, it won’t be long until social media companies are a minority of the tech companies staring down the barrel of regulation.

During the Judiciary Committee hearing, the spotlight was focused solely on Generative AI. Yet based on the regulatory suggestions put forth in testimony, hyperscalers and infrastructure companies could see regulation sooner than social media companies. For example, if systems requiring a certain amount of compute must be licensed, then hyperscalers and infrastructure companies may have to provide that data to regulators and be subjected to audit and governance controls. The implications expand as use cases for Generative AI continue to proliferate and the promise that these real-time technologies will yield real-world outcomes for humanity grows by the day.

Already, consumer use of Generative AI is growing an order of magnitude faster than any consumer technology in history. For this growth to transfer to the enterprise and scale to augment global workforces, foundation models will need to be fine-tuned for specific domains, and research and development funds will need to be invested to reduce the costs of training and running generative models. The portfolio of solutions that emerges will mean that every company must become a real-time AI company in order to compete and thrive. When time-sensitive, contextual, low-latency responses are critical to business and consumer success, there will be no option other than Generative AI solutions delivered in real time.

While professionals across industries scramble to understand how Generative AI can help their organizations enter new markets and disrupt existing ones, their service providers, big and small, are likely to play an increasingly important role in regulatory compliance. Will your infrastructure-as-a-service provider be a help or a hindrance to your organization’s ability to thrive in the era of widespread, regulated, real-time AI?

Steven Tiell is a nonresident senior fellow with the Atlantic Council’s GeoTech Center. He is a strategy executive with wide technology expertise and particular depth in data ethics and responsible innovation for artificial intelligence.
