
29 June 2023

AI Is Winning the AI Race

Mariano-Florentino Cuéllar

One of the questions we get most frequently from officials in Washington is: “Who’s winning the U.S.-China AI race?” The answer is simple and unsettling: Artificial intelligence is winning, and we’re nowhere near ready for what it will bring.

In the past decade, cutting-edge AI systems have moved from beating simple video games to solving decades-old scientific challenges such as protein folding, accelerating scientific discovery and the development of small-molecule drugs. The fastest-moving branch of AI is spawning large language models, such as OpenAI’s ChatGPT. Much of the progress in these models stems from a relatively simple engineering insight—the scaling hypothesis—carefully implemented using specialized software and vast arrays of networked computers. The hypothesis predicts that the bigger an AI model is—the more data, computation, and parameters it incorporates—the better it will perform and the more it will be able to mimic or achieve intelligence, whether it is generating a draft of a speech, writing computer software, designing new weapons, or teaching kids math.
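The intuition behind the hypothesis has an empirical form. Scaling-law studies find that a model’s error on held-out text falls smoothly and predictably as a power law in its size and training data; a stylized version of the fitted relationship (drawn from the research literature, not from this article, with constants that vary by study) looks roughly like:

    L(N, D) ≈ E + A/N^α + B/D^β

where N is the number of model parameters, D the volume of training data, E an irreducible error floor, and A, B, α, and β constants fitted to experiments. Because the curve is smooth, labs can forecast the returns to a bigger model before spending the money to build it, which is what makes scaling such an attractive bet.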

AI scientists are divided on where this is all headed. Some see the scaling hypothesis continuing to bear fruit as the relevant systems are refined by humans, and eventually by the machines themselves, until we build models that surpass human intelligence. Others are skeptical of large language models and doubt that scaling them up will yield anything comparable to human intelligence. If the scaling group is right, the risks from powerful models that behave unpredictably could be catastrophic—or even existential. These models are already capable of articulating plans to get around constraints imposed by their designers.

But even if scaling skeptics are right, the AI of today is still set to transform our economy and society. Algorithms are already affecting who gets parole in the United States and are poised to increase misinformation. Large language models will expand educational opportunities, but they will also likely reproduce biases and “hallucinate” falsehoods, generating text that sounds plausible but isn’t rooted in reality. Operating on the internet, these models will hire workers, deceive people, and reshape social relationships. Cumulatively, this will stress-test our economic, political, and social fabric.

Regardless of which camp is right, the technical drivers of recent AI progress—returns to scale in the construction of models—mean that the United States can’t bank on decisively “winning” an AI race with China and then regulating the technology afterward. The core algorithmic breakthroughs powering today’s large language models have been around for years; much of the recent progress comes from labs simply tweaking the core algorithmic ideas and using ever larger models to learn from vast tracts of data. China’s progress on AI can be slowed, in other words, but likely can’t be stopped. As AI advances and diffuses throughout society, it will challenge the United States and its open society as much as—if not more than—China.

To meet the moment, then, U.S. leaders need to change their definition of success in AI policy. Success isn’t just staying ahead of China. It requires developing a relationship with the technology that Americans can live with, one that also reduces the risk of catastrophic accidents in U.S. interactions with China. Instead of simply plowing ahead as fast as possible, we must do the hard work of striking a balance between promoting progress on AI and prudently governing the technology in a manner that makes sense for Americans and the world.

[Photo: Visitors view an AI security software program at the China International Exhibition on Public Safety and Security in Beijing in 2018.]

Fears that China will surpass the United States in AI loom over all policy discussion of the technology in Washington. China is indeed a serious competitor, producing top AI research papers on par with the United States and deploying the technology across its economy and its domestic surveillance apparatus. When it comes to building the most powerful AI models, China has been a fast follower behind U.S. labs, often producing comparable models within a couple of years.

And geopolitically, China’s divergent interests pose major challenges for the United States. Its authoritarian uses of AI—for surveillance and tracking of ethnic minorities—serve as an inspiration to autocrats and a warning sign for democratic societies.

Taken together, these tensions have led the U.S. government to roll out a raft of policies targeting China’s AI development, with the most effective being last year’s export controls on the chips used to train AI systems. Further limits on Chinese access to AI-relevant hardware and software may follow.

But despite these constraints, China’s AI capabilities aren’t going away. The country has the domestic AI ecosystem—the research talent, data, and corporate investment—to remain near the global cutting edge. And the nature of AI’s recent advancements—its returns to scale—means that there isn’t some secret research breakthrough that we just need to keep out of China’s hands. Last year’s chip controls represent a significant but not insurmountable hurdle when it comes to scaling up large models. Using the chips, research talent, and know-how available to it, China can continue scaling, iterating, and improving its best models. The U.S. government should keep working to create a strategic advantage, but it also must confront the reality that China is likely here to stay in AI.

What does this mean for U.S. AI policy? The United States can’t make “winning” the AI race a prerequisite to governing the technology effectively and regulating it where necessary. It must instead prepare simultaneously for a world in which China remains equipped with highly advanced AI systems and one in which AI transforms the United States itself. Businesses will deploy generative AI models to displace workers and create dazzling new products, while bad actors will use those same models to defraud individuals, spread misinformation, and gain new access to knowledge that poses biosecurity and cybersecurity risks.


Tackling these challenges at home will require not only an appreciation of the benefits that may come from new and emerging AI models but also a willingness to experiment with new governance structures for AI. Responsible governance doesn’t mean clunky command-and-control regulations but tailored approaches that build the capacity to tackle increasingly complex AI challenges. Courts are available to resolve liability disputes involving AI. Agencies across the U.S. federal government are already grappling with concrete issues, requiring algorithmic explainability when an applicant is denied credit and applying rules on native advertising to generative systems that manipulate users. Allocating more money so these agencies can hire AI-literate policy advisors would strengthen the government’s technical capacity. These new hires could adapt and create departmental policies to tackle sector-specific AI challenges, and if government action is needed to prevent catastrophic AI risks, they could help craft it.

Whether by carefully crafting legislation or through appropriate executive actions, society’s use of AI models can be subject to meaningful transparency requirements, further strengthened by requiring third-party benchmarking and audits for the most powerful models with potential capabilities that defy their creators’ intentions or limits. Establishing sophisticated regulatory markets for these types of audits—allowing appropriately vetted third parties to certify that certain models are safe and otherwise behave roughly as contemplated—will take time. But a useful interim step would be requirements on developers of the largest and most capable models to conduct a reliable “catastrophic risk assessment,” outlining the measures taken to investigate and mitigate these dangers, and to publish the results. A carefully tailored, nonbureaucratic registration scheme could help ensure that these models receive the necessary vetting before being made available to hundreds of millions or even billions of people.

We can already anticipate the central objection: If we implement even limited measures to govern AI, we will give China an edge. But those who fear that this type of AI regulation in the United States will give China a leg up likely haven’t been paying attention to China’s own AI governance measures. Over the past two years, China has rolled out some of the most detailed and demanding regulations on its own AI companies, including security assessments for algorithms and mandatory disclosures of training data and model specifications. In April, China followed this up with a draft regulation specifically targeting generative AI. The draft combines obligations unique to China’s political system, such as requiring that generated content reflect “socialist core values,” with mainstream international demands, such as protections on intellectual property. Notably, the regulation imposes requirements that both the training data and generated outputs be “true and accurate”—an extremely daunting task for models that are trained on billions of webpages and are known to regularly produce factually incorrect statements.

The draft regulation is the subject of much debate within Chinese AI policy circles, showcasing the Chinese government’s own attempt to balance effective regulation with AI leadership. Despite the Chinese Communist Party’s desire to guide Chinese AI development, the large majority of meaningful work is still being done in private sector labs and academic settings, where researchers face resource constraints and fear that onerous regulations will impede their own work.

We may not agree with the Chinese government’s motives for regulating AI (preserving its existing controls on information) or its methods for doing so (state-defined limits on training data and outputs). But we also cannot let the fantasy of a completely unregulated technological rival prevent us from governing AI in a way that’s consistent with our values.

[Photo: A man uses a virtual reality headset during a NASA Hybrid Reality Lab demonstration at the NVIDIA GPU Technology Conference in Washington on Nov. 1, 2017.]

Internationally, we need to do two things at once: compete aggressively and prepare for a world of parity as well as cross-border opportunities and risks. We should seek to maintain an edge over China, with controls on semiconductors as our best tool. Of the three building blocks of modern AI—data, semiconductors, and engineering talent—chips are by far the easiest to control. Leading-edge chips, and the highly specialized tools needed to fabricate them, are still produced in just a handful of places, all of which remain broadly aligned with the U.S. government. By limiting China’s access to these chips, the United States can meaningfully hamper China’s AI industry. But none of these controls are airtight, and over the medium and long term, China may well circumvent the controls, either by developing methods to train AI models on older chips or by learning how to fabricate leading-edge chips itself. In the end, the restrictions on chips may end up acting as a meaningful tax on Chinese AI development but not a hard limit.

That means we must also prepare for a long-term U.S.-China relationship in which both countries are equipped with powerful AI systems that could cause catastrophes if not handled carefully. As impressive as today’s AI systems are, they remain brittle and prone to unpredictable behavior. As those systems are woven into both countries’ economies and militaries, the risk of AI accidents will go up. For example, military AI systems intended to identify and respond to an incoming attack could mistake unusual levels of glare for kinetic activity, kicking off defensive or retaliatory measures that rapidly escalate. This is what happened in 1983, when the Soviet Union’s automated missile detection system mistook glare reflecting off clouds for an incoming nuclear attack. The decision of one Soviet officer, Stanislav Petrov, to label it a false alarm likely prevented a nuclear holocaust. Despite decades of advances in AI since then, today’s systems remain prone to these types of mistakes when faced with highly unusual inputs.

Under these conditions, it won’t be enough for the United States to simply have the more powerful AI system. Meaningful safety for Americans will require that both sides implement safeguards on their systems. These could include agreements not to incorporate AI into nuclear command and control, as well as more extensive technical exchanges between AI scientists in the two countries on techniques for ensuring the safety of advanced AI systems.

Getting there will require tough political and technical conversations between the United States and China. During some of the tensest and most dangerous moments of the Cold War, the Pugwash Conferences served as a forum at which scientists from the United States and the Soviet Union could continue to engage with one another to reduce nuclear risks, helping to lay the groundwork for the nuclear test ban and nonproliferation treaties and earning a Nobel Peace Prize. As AI advances and diffuses in both countries, strategic engagement to reduce tail risks will be an essential tool for keeping Americans safe.

As China and the United States simultaneously compete while occasionally exploring pathways to reduce tensions, AI is on the cusp of reshaping our society at a fundamental level. We cannot let naive hopes or geopolitical fears prevent us from facing this moment head on. Yes, the United States must work to maintain an AI edge over China. But that cannot come at the expense of the policies that will help Americans—and eventually, much of the rest of the world—benefit the most from AI models and keep their greatest risks at bay. The United States cannot let its principles or safety become collateral damage in its quest to outpace China at all costs.

This article appears in the Summer 2023 print issue of Foreign Policy.
