19 March 2024

Tech, National Security, and China: Q&A with Jason Matheny


Q. You had three titles in the Biden administration: deputy assistant to the president for national security, NSC coordinator for technology, and deputy director of the Office of Science and Technology Policy [OSTP is the president's science adviser and his or her staff]. What was your role?

The role was created to help bridge the National Security Council and the Office of Science and Technology Policy. Typically, there hadn't been a dual-hatted person who could bridge those two portfolios. My role was to advise the national security advisor and the NSC as well as the science advisor and OSTP, and to ensure that the two portfolios—one related to national security, the other to science and technology policy—were in sync.

There is increasing attention to the way in which civilian science and technology issues are affecting national security. Dual-use technologies like AI, semiconductors, and synthetic biology have commercial applications and national security applications.

Q. You have talked about creating a permanent so-called 'red team' to think about what China is doing.

In cases where you need to simulate how another side might respond to your policies, you might want to have a group that permanently plays the role of your competitor and tries to anticipate how it would react to policy X or policy Y. The idea here is to have a policy red team that is deeply immersed in Xi Jinping Thought and tries to imagine how Xi Jinping and other key parts of the CCP would interpret U.S. policy, and how they might respond in turn. Such a group might run a kind of immersion program, absorbing PRC media and PRC Politburo information.

The goal would be for them to impersonate the decisionmaking of Xi and his inner circle. Groups like this have existed in the intelligence community before.

Q. What about at RAND?

We have done this historically on some topics. RAND ran a Soviet studies program for many years during the Cold War. It was really the center for detailed economic analysis of the Soviet Union, [including] demographic analysis and analysis of its bureaucratic organization. There were various efforts at red-teaming Soviet responses to U.S. actions.

As part of our China research portfolio, we have thought about whether we could use these sorts of methods for analysis of PRC behavior. We are about to significantly increase our investment in China analysis, particularly in areas where we think there are gaps in analysis of China's economy, its industrial policy, and its domestic politics.

At the height of the Soviet studies program, we had twice as many researchers focused on the Soviet economy and the Soviet bureaucracy as we do currently on China's economy and bureaucracy. So we still need to build more expertise.

RAND's work in military analysis of China is also quite large and significant. We have probably the largest group in the United States focused on understanding China's military. We war-game various scenarios, but that analysis is often disconnected from analysis of China's economy or its industry.

In the United States there are a significant number of [China] analysts, but they're highly dispersed across academic institutions and a large number of think tanks. Part of what's also needed is more collaboration and connection across the different research institutions in order to ensure that all the major lines of analysis are being pursued by someone.

Q. Let's talk about some of the specific policies the Biden administration has adopted in its industrial policy regarding China. The semiconductor export limits seem to be an extraordinary assertion of American power—not just cutting off China's ability to buy advanced chips but crippling its ability to build them itself.

First, it's worth explaining why semiconductors are important. They are the foundation of most modern technologies from computers to phones to industrial controllers to the data centers that process the world's financial transactions. They are also key to weapon systems and the data centers that run cyber operations. And progress in semiconductors has depended on the ability to manufacture smaller and smaller circuits, down to features that are only nanometers in size.

There are two interesting geopolitical features of the semiconductor market. The first is that the supply chain is highly concentrated. There are only two or three companies that produce the most advanced chips. They're located in Taiwan, South Korea, and the United States, with the U.S. share having decreased significantly over the last few decades. And there are only three countries—Japan, the United States, and the Netherlands—that produce most of the equipment essential for manufacturing those chips.

A second interesting feature is that 90 percent of the most advanced chips are manufactured in Taiwan. A blockade or invasion of Taiwan would be catastrophic, not just because of the geopolitical consequences but also because of its impact on the semiconductor supply chains that fuel our economies.

Then there are recent developments that make semiconductors even more important. One of the technologies that has been advanced the most by progress in semiconductors is artificial intelligence. AI is a general-purpose technology with broad applications, including in medicine, education, agriculture, and energy—but also applications that are central to national security, including cyber and kinetic weapons.

Progress in AI over the last several years has been extraordinary, largely because of the application of ever larger amounts of compute [computing power]. Since about 2012, the amount of compute used in leading-edge AI systems has doubled every four months, and that doesn't look likely to slow. We're now seeing hundreds of millions of dollars being spent on training the largest AI systems. Within the next year or two, it's possible we'll see a single AI system that costs over a billion dollars in compute to train.
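
To make the scale of that growth concrete, here is a small back-of-the-envelope sketch (my illustration, not Matheny's; the $300 million starting cost is an assumed round number) of what a four-month doubling time in training compute implies.

```python
# Back-of-the-envelope illustration: compute growth under a 4-month doubling time.
# All specific figures are hypothetical round numbers used only for illustration.

DOUBLING_PERIOD_MONTHS = 4

def growth_factor(months: float) -> float:
    """Total growth in compute after `months`, given a 4-month doubling time."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

for years in (1, 2, 5, 10):
    print(f"{years:>2} year(s): ~{growth_factor(years * 12):,.0f}x more compute")
# 1 year ~8x, 2 years ~64x, 10 years ~1 billion-fold.

# Rough cost implication: if a frontier-scale training run cost $300M today and
# spending tracked compute, it would pass $1B in well under a year.
assumed_cost_today = 300e6          # hypothetical starting cost
months = 0
while assumed_cost_today * growth_factor(months) < 1e9:
    months += 1
print(f"~{months} months until a $300M-scale run exceeds $1B (toy model)")
```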

Semiconductors end up being a rate-limiting variable for advances in AI, including some that have critical applications in national security. For example, future AI systems could generate new offensive and defensive cybersecurity tools. They could be misused for disinformation attacks, or they could be used to detect disinformation attacks. They could be used in the design of some weapon systems. They could be embedded into physical weapons that are likely to become cheaper and more abundant. In the future, small weaponized drones could become the 21st-century version of the AK-47—meaning they are ubiquitous. You could have swarms of thousands of small drones attacking one another. The number, speed, and diversity of weapon systems could be without historical precedent.

Progress in AI, which has been driven by semiconductors, is a key feature of military competition between states. China has been using AI in its cyber operations against the United States and its allies. It also has been using AI for surveillance and human rights abuses, particularly in Xinjiang. And it has used AI and high-performance computing in modernizing its military, for example in hypersonics.

Think of semiconductor sales to China as having negative externalities, in economic terms. [A 'negative externality' is a cost to society due to an economic activity that the producer doesn't pay for. Pollution is a classic example.]

The first externality is China's military and intelligence services using offensive cyber operations to conduct industrial espionage and appropriate advanced technologies from the United States and its allies. Increasingly, those cyber operations depend on large data centers that have been built with U.S. hardware. The second externality is that China's security services have been building large data centers for surveillance, and U.S. hardware has been the primary enabler. [The New York Times's] Paul Mozur broke the story of U.S. hardware being used in the Xinjiang data center that is used for real-time surveillance of the prison camps.

A third externality is that China's leadership views reunification with Taiwan as a national priority. It's preparing for reunification by force, and the cyber and kinetic weapons that China's military would likely use against Taiwan depend on U.S. hardware, either directly in the case of U.S. chips, or indirectly in the case of U.S. manufacturing equipment that's used to produce chips.

All of this represents the negative externalities of the sale of U.S. chips and U.S. manufacturing tools to China. [The chip companies and the tool companies don't reimburse society for China's activities enabled by their technologies—other parts of society pay.] In October 2022, the Department of Commerce set new export controls intended to address the risks created by U.S. chip sales to China. Those controls focused on the most advanced chips used in computing. They were then updated in October 2023.

The controls affect less than 5 percent of U.S. chip exports to China and well under one percent of all exports to China. There are also comparable controls on sales of equipment used for making leading-edge chips. The controls are motivated by those three kinds of externalities—the PRC government's cyber activities, its human rights abuses, and its risks to Taiwan. The success of those controls depends in large part on effective implementation. China will seek to—and has sought to—circumvent those controls, either by finding bootlegged versions of chips or finding third parties that can purchase the chips without attribution [and ship them to China]. This requires a significant increase in the enforcement capacity of BIS at Commerce. [The Bureau of Industry and Security polices export controls.]

Q. China also has an ability to build advanced chips. The definition of advanced chips is usually seven nanometers or smaller. SMIC announced that it had produced a chip of that scale. How significant do you think that is? Did they do it through native capabilities, stolen technology, or bad export-control draftsmanship in the United States? [In 2022, Semiconductor Manufacturing International Corp. in Shanghai, China's premier chip maker, said it was able to manufacture chips with transistors seven nanometers wide. Taiwan Semiconductor Manufacturing Co., the world's leader, currently produces even smaller chips. The smaller the transistor, the more that can be packed on a computer chip.]

Seven-nanometer chips are a relatively old technology at this point—they wouldn't really qualify as leading edge. SMIC's ability to manufacture at seven nanometers is not really a surprise to many in industry. The tools it can use to manufacture at seven nanometers are ones that it purchased before the October 2022 controls went into effect. It's unlikely [in the near term] that seven-nanometer manufacturing would be stopped, or that it would make much sense to focus on reducing it. The 2022 and 2023 controls are more focused on five nanometers, three nanometers, and more advanced nodes.

Q. With seven-nanometer technology, could China still produce the outcomes you were describing? Could China use native technology to produce AI on a scale that will be militarily problematic for the United States or Taiwan?

This is a cost-imposition strategy. It's likely that China would still be able to train a large AI system, but at a cost that would be much, much higher, so it couldn't have as many such AI systems with a given budget. [That impact] is comparable to the chip controls and high-performance computing controls that we had against the Soviet Union. We had an extensive set of controls because the United States was quite worried about Soviet use of high-performance computing for nuclear simulations and for cryptanalysis, or code breaking. We didn't think it was likely that we would prevent the Soviet Union from building some number of high-performance computers, but we made it more costly to do so. We made it likely that they would be able to produce fewer such computers and would face a cost penalty in using them. I think that's the case here.

Even China's ability to manufacture at seven nanometers is likely to be impaired by these controls going forward. Its yield is likely to be adversely affected if it no longer has access to the more advanced DUV machines that have now been controlled. [Deep ultraviolet lithography is used to produce many high-performance semiconductors.] As its existing stock of machines faces wear and tear, it will not be possible to replace them or get them serviced. That will impose real costs on yield for SMIC and for others. It doesn't appear that China will have any realistic route in the next several years to cost-effectively produce more advanced chips. All of that makes computing more costly, which then makes it more costly to build large-scale AI systems.

Now, China also has the ability to use commercial cloud computing [to develop large-scale AI systems]. Chinese entities are currently not restricted from using cloud-computing providers like Amazon or Microsoft or Google. I think it is less likely that China would train a large AI system on U.S. cloud computing for a military use or for an intelligence use like cryptanalysis. So, in effect, allowing China the opportunity to use U.S.-based cloud computing with some level of monitoring functions as an end-use control.

I think a nonproliferation regime around the misuse of computing or AI is one in which there are chip controls and chip manufacturing controls that are relatively restrictive against China while allowing certain types of cloud computing to be available, which can be monitored for misuse by PRC intelligence or military agencies.

Q. If you're sitting in Beijing, I would think one possibility would be to pursue a kind of low-end strategy. In other industries, Chinese companies have succeeded by grabbing the commodity part of the business, cutting off revenue that would otherwise go to U.S. companies, and then going up the technology ladder. Do you think that's a possibility with semiconductors—that the Chinese basically starve U.S., Korean, and Taiwanese companies of revenue so they don't have enough money for R&D, and the Chinese are able to catch up?

I think those are very different parts of the value chain. Here the tools used are quite different. No matter how many legacy chips China manufactures, it won't be able to move up the value chain to the higher-end chips unless it has access to more advanced tools like immersion DUV or EUV. [Extreme ultraviolet lithography machines made by ASML Holding of the Netherlands are used to produce the most advanced chips and are so far blocked from export to China.] There's a complete restriction on access to EUV. Unless that changes, I don't see China being able to indigenously manufacture the highest-end chips.

Now, it's still economically important if China has a stranglehold on legacy chips. That's an argument for why the global supply chain shouldn't be dependent on China for these mature nodes. We should diversify the supply chain, including for these lower-end chips. But I don't think that China's capture of low-end chips would lead to any improvement in its ability to indigenously manufacture high-end chips. That's because of its dependence on imported tools. It really has not seen breakthroughs in indigenous tool manufacturing of the type that would be needed for it to be independent of those imports.

Q. A number of analysts have compared the competition between the United States and China over AI to the arms race with the Soviet Union. It's not quite the same. But the analogy is that the United States and China are the superpowers when it comes to artificial intelligence, and arms control is a way to think about limiting the damage that might be produced by these programs.

A nuclear analogy here could be that a cyber weapon that is enabled by AI is a nuclear warhead, chips are fissile material, and semiconductor manufacturing equipment is the centrifuge. A centrifuge and fissile material can be used peacefully for nuclear energy. But we accept that the risks of misuse are sufficiently severe that we need limits on fissile material and enrichment technologies and requirements for monitoring to ensure that the materials are used benignly.

Like nuclear technologies, computing is a powerful dual-use technology that requires guardrails. Chip-export controls and manufacturing equipment-export controls can be seen as part of a nonproliferation guardrail for computing and AI. They restrict the use of large-scale AI systems to ones that will not threaten global security. You could effectively funnel China's AI efforts into cloud computing systems that can be monitored. I think that is a better end-state than one in which China is able to build very large AI systems for its own cyber operations, for its military modernization, and for human rights abuses.

In a way this is analogous to the Baruch plan that came out shortly after World War II proposing a supply chain for nuclear materials that could be governed in a way that prevented proliferation of nuclear technologies. [Under the 1946 Baruch plan, largely written by presidential adviser Bernard Baruch, the United States proposed to decommission its nuclear weapons so long as other nations didn't produce them. A United Nations agency would oversee the nuclear supply chain to see that the material was used for commercial energy production, not weapons.]

Q. That was a plan that called for international control. You're talking about America being able to limit what China can do.

Well, it is multilateral right now, in that some of the manufacturing tools are controlled under Wassenaar, which is a multilateral regime, and others are controlled under a trilateral agreement among the United States, Japan, and the Netherlands. [The 1996 Wassenaar Arrangement is a multilateral export-control agreement covering dual-use technologies.] Multilateralizing it even further is part of the longer-term policy agenda.

Q. One big difference between arms control of nuclear weapons and misuse of artificial intelligence is that you could count missile silos by inspecting them or sending satellites over them. But there doesn't seem to be a way to verify how AI systems are used.

[A CGTN video covered the initial phase of construction of an underwater data center (UDC): a sealed watertight cabin, which serves as a data repository and also operates as a supercomputer, was placed on the seabed off the shores of Hainan on November 24, 2023.]

There are three points of monitoring and verification that are possible with computing, and all involve hardware. The first is that data centers are not that easy to hide. In some ways they might be harder to hide than centrifuges or nuclear enrichment facilities. They require even more power, and they require supply chains and materials that have even more bottlenecks. And the construction costs for data centers and for semiconductor foundries are enormous, even larger than the cost of building a centrifuge facility. So, one monitoring target would be data centers and semiconductor fabs.

A second point of governance would be the supply chain of chips and manufacturing tools, which the 2022 and 2023 controls get at. A third could be the chips themselves having internal governors.

In principle, it's possible to design chips in such a way that certain kinds of AI models couldn't be trained on them. RAND has a project right now that's looking in detail at the technical feasibility of on-chip governance. The Center for a New American Security released a report a couple of weeks ago on the same topic, and it does look as though this is a feasible technical approach. Commerce released a request for information on such ideas and received submissions from the major chip companies as well as a number of research organizations, including RAND.
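
To give a rough sense of what "internal governors" could mean in practice, here is a purely illustrative toy sketch (my own, not drawn from the RAND or CNAS work, and not any vendor's design; the class name, quota figures, and licensing model are invented). Real proposals rely on hardware roots of trust, attestation, and secure firmware rather than application-level code like this.

```python
# Toy illustration of on-chip compute governance: a device-level "governor"
# meters cumulative compute and refuses workloads beyond a licensed quota.
# Purely hypothetical; names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class ComputeGovernor:
    licensed_flop_budget: float   # total FLOPs this device is authorized to run
    flops_used: float = 0.0

    def authorize(self, requested_flops: float) -> bool:
        """Allow a workload only if it fits within the remaining licensed budget."""
        if self.flops_used + requested_flops > self.licensed_flop_budget:
            return False          # would exceed the licensed quota: refuse
        self.flops_used += requested_flops
        return True

# Example: a device licensed for 1e21 FLOPs accepts a small inference-scale job
# but refuses a hypothetical frontier-scale (1e22 FLOP) training run.
governor = ComputeGovernor(licensed_flop_budget=1e21)
print(governor.authorize(1e18))   # True
print(governor.authorize(1e22))   # False
```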

One could imagine a program that is sort of analogous to Eisenhower's proposal for Atoms for Peace—an AI for Peace, where you have a supply chain of chips, manufacturing equipment, and data centers that are governed in such a way that it makes the peaceful use of those technologies more likely than the malevolent use. [President Dwight Eisenhower's 1953 Atoms for Peace initiative provided funding and research help for commercial nuclear reactor development.]

Q. A lot of what we are talking about is coercive—twisting China's arm or limiting China's ability to do this and that. A big part of arms control, of course, was the United States and the Soviet Union finding a common interest in limiting the number of nuclear weapons and limiting the threat to human civilization. What would it take to convince Beijing to agree to some restrictions that seem commonsensical, such as not attaching AI programs to launch decisions about nuclear weapons or making sure a human is in the loop when it comes to firing weapons against high-value targets?

One of the more important questions we have to answer is how the United States and China can compete in a way that doesn't escalate to catastrophe. There are various formal and informal mechanisms for confidence building and for safeguarding against escalation. Among them are direct dialogues around AI safety and security, and around nuclear policies that involve commitments to human control. The United States has said explicitly that humans will be in charge of nuclear launch decisions. China has not said that, and neither has Russia. Having a functioning crisis hotline between Washington and Beijing and having agreements about how our militaries could use AI systems [would help].

Unfortunately, China's leadership has so far rebuffed U.S. proposals for such discussions about confidence-building measures. One theory is that China's leadership believes ambiguity works to its strategic advantage. I think that's unfortunate. Having dialogue about those topics would be of mutual benefit, and I think we should keep trying to have those dialogues.

There are other, less formal dialogues that are ongoing, including track-two dialogues between the United States and China that involve members of academia or think tanks. They're sort of the equivalent of the Pugwash conferences: AI researchers from the United States and China meet to discuss topics around safety and security.

I think those are critical. But we also need track-one dialogues [between government officials] of the type that so far China hasn't shown a willingness to have. We need to keep pushing.

[The 1957 Pugwash Conference on Science and World Affairs brought together scholars to discuss ways to reduce the danger of nuclear war. In the November 2023 meeting of Presidents Biden and Xi, the two countries agreed to launch government talks about artificial intelligence, but the agenda remains undefined.]

Q. Let's switch topics a little bit to the role of foreign investment in the United States when it comes to industrial policy, particularly in areas where the United States is behind technologically. This administration, and the last one too, encouraged Taiwan to build semiconductor facilities in the United States and in Europe to diversify the supply chain in case China invades Taiwan or puts pressure on it through a blockade. So FDI [foreign direct investment] plays an important role there. But what about areas like solar energy, wind energy, and electric batteries, areas where China is clearly in the lead? Does it make sense for the United States to encourage Chinese companies to locate in the United States?

One of the reasons the Committee on Foreign Investment in the United States (CFIUS) reforms happened was a recognition that while some areas of investment were mutually beneficial, others were truly adversarial. In those cases, the investment was not only lopsided to China's advantage; its connection to China's military or intelligence services was almost certain, though obscured through cutout companies or through second or third parties that had a relationship with the PLA [People's Liberation Army] or the MSS [Ministry of State Security].

In some key areas of technology, the risks are very high for adversarial investment. Semiconductors, AI, synthetic biology, quantum—those are areas that China's military and intelligence services have said are priorities.

[In 2018, the Committee on Foreign Investment in the United States, an interagency group that reviews foreign acquisitions for national security risks, was given additional authority.]

Q. But there are also areas like renewable energy technology where the United States is behind China and relies on Chinese inputs.

In those areas, CFIUS has been more permissive. There really has been a sliding scale in the approach to CFIUS review in treating cases where the United States does not have a current strategic advantage. CFIUS cases that involve investments in U.S. companies where the U.S. company appears to be lagging are ones that are viewed as less risky compared to those where the United States appears to be leading.

Q. Do you think it would be helpful for American industrial policy to encourage Chinese solar companies and Chinese battery companies to invest in the United States?

It's reasonable to have a sliding scale in risk assessment. I don't think there's a one-size-fits-all approach to CFIUS. The committee and the analytic support to the committee have been pretty subtle in making distinctions between areas of high risk and areas of lower risk.

Now, of course it's difficult to anticipate what the longer-term second- and third-order impacts of an investment might be, and whether the long-term goal of an investment might be to have a stake in a U.S. company to prevent it from becoming a competitor. Those are also part of the risk assessment.

Q. You have talked about Taiwan having to build a porcupine defense. And as you've also mentioned in this conversation, Taiwan is the location for 90 percent of the production of the world's most advanced chips. One would think that Taiwan would be a leader in defense electronics and anti-submarine warfare in the same way that Israel, threatened by missiles from a variety of countries, developed its Iron Dome anti-missile system. But I don't see that happening.

In any large institution there are bureaucratic incentives that lead to delays in making the kinds of reforms that are needed. In the case of military organizations, including Taiwan's but also the United States', it often takes years for new assessments of risk to translate into changes to defense policy.

Most observers of U.S. military acquisition of technology would say that we're investing too little in cheaper, attritable weapon systems. We're investing too much in very expensive prestige systems that might not survive the early part of a conflict—for example, surface naval vessels that are highly vulnerable to missile attacks. It's hard for decisionmakers operating within large, complex bureaucracies to adapt at the speed required by rapidly evolving, militarily relevant technologies.

That is true of Taiwan as well. Taiwan's military leadership has been criticized for being overconfident about various existing measures of deterrence. Military officers within Taiwan have in general probably been too sanguine about some kinds of scenarios. By many estimates, there hasn't been as much investment as is needed in a porcupine defense that includes large numbers of anti-ship cruise missiles, loitering munitions, advanced sea mines, and MANPADS [man-portable air-defense systems].

In military sales, the United States and Taiwan have focused more on a smaller number of high-prestige weapon systems. That has been changing over the last couple of years, but the change is slow, in part because the supply chains for producing anti-ship cruise missiles and loitering munitions are bottlenecked.

Q. Before your work in the Biden administration, you were the head of the intelligence community's advanced research agency, the Intelligence Advanced Research Projects Activity (IARPA). That sounds like James Bond and Q [the fictional arm of British intelligence that came up with Bond's cool gadgets]. Is it like that?

IARPA is an agency within the intelligence community that's responsible for developing advanced research programs in support of national intelligence missions, and it leverages the brainpower and creativity of the private sector. I spent nine years at IARPA developing research programs to figure out better ways of analyzing information, of collecting information, and of making inferences from information that's often noisy and contradictory. That included research in AI, high-performance computing, and human judgment. We organized the world's largest forecasting tournament, in which tens of thousands of participants forecasted geopolitical events and were scored for accuracy. We wanted to find out what the characteristics of good forecasts and good forecasters are, and whether we could improve forecasting methods so that they're more accurate over different time horizons.

There are a few findings that I think are especially central to how we can improve forecasting. One is that the typical measures of expertise often don't correlate with forecasting accuracy. The people who tend to be more accurate in forecasting are people who are deeply self-critical, who obtain information from lots of different sources, who seek out disconfirming information that may not support their original intuitions, and who then update their judgments frequently when presented with new information. They also tend to score very highly on measures of fluid intelligence and critical thinking.

A second, counterintuitive finding is that the most accurate forecast doesn't come from the most accurate single forecaster but from taking the average of a large number of forecasters. This gets to the 'wisdom of crowds' observation that by taking a larger number of judgments, you can cancel out random errors and individual biases. A third important finding is that, outside of financial markets, we typically don't score forecasts for accuracy.

Most political pundits on TV have not had the forecasts they make confidently on news programs scored for accuracy. And that's true, too, for academic researchers who make predictive judgments. We often don't go back historically and evaluate how many journal articles on a particular research topic were accurate versus inaccurate.
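
Those last two findings lend themselves to a simple numerical illustration. Below is a minimal sketch using entirely synthetic data (invented here for illustration; it is not IARPA code or data) of how the average of many noisy forecasters tends to beat the typical individual, and how a Brier score can be used to score probabilistic forecasts for accuracy.

```python
# Synthetic illustration of "wisdom of crowds" aggregation and Brier scoring.
import random

random.seed(0)

def brier(prob, outcome):
    """Brier score for one binary forecast: (probability - outcome)^2; lower is better."""
    return (prob - outcome) ** 2

n_forecasters, n_questions = 100, 200

# Hypothetical world: each question has a true probability; outcomes are drawn from it.
true_probs = [random.uniform(0.05, 0.95) for _ in range(n_questions)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

# Each forecaster reports the true probability plus individual noise, clipped to [0, 1].
def noisy(p):
    return min(max(random.gauss(p, 0.2), 0.0), 1.0)

forecasts = [[noisy(p) for p in true_probs] for _ in range(n_forecasters)]

# Score every individual forecaster, then score the simple crowd average per question.
individual = [sum(brier(f[q], outcomes[q]) for q in range(n_questions)) / n_questions
              for f in forecasts]
crowd = [sum(f[q] for f in forecasts) / n_forecasters for q in range(n_questions)]
crowd_score = sum(brier(crowd[q], outcomes[q]) for q in range(n_questions)) / n_questions

print(f"median individual Brier score: {sorted(individual)[n_forecasters // 2]:.3f}")
print(f"crowd-average Brier score:     {crowd_score:.3f}")  # typically lower (better)
```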

Q. How much of that work was influenced by the U.S. concern about a China that's growing stronger?

A lot of the work that we did at IARPA was competitor agnostic. That is, the technologies that we were developing were ones that were going to be useful against a range of potential competitors and adversaries.
