
2 December 2021

Interview: Stanford’s Herbert Lin on “Cyber Threats and Nuclear Weapons”

John Mecklin

 
As the United States modernizes its nuclear forces in coming decades, it will upgrade the computer and communications technology associated with them. Much of the technology now controlling US nuclear weapons was produced before the rise of the internet. Newer technology will improve aspects of command, control, and communications related to the US nuclear arsenal. But if not carefully planned, the updating of nuclear technology could also increase risk in distinct ways, which cyber policy expert and Bulletin Science and Security Board member Herbert Lin explains in the following interview.

A senior research scholar for cyber policy and security at Stanford University’s Center for International Security and Cooperation, Lin sat down with Bulletin editor in chief John Mecklin recently to discuss his new book, Cyber Threats and Nuclear Weapons. During the interview, Lin explains why the nuclear modernization effort could actually increase the chances that adversaries could misread US intentions, with potentially disastrous results.

John Mecklin: I guess we’ll start with nuclear modernization. You clearly see that as US nuclear forces are modernized, the computer systems are all going to be modernized. And you see some potential danger there in terms of cyberattack and intrusion. Why don’t you explain the danger you see.

Herbert Lin: Let me give a prefatory comment. When people think about cyber risk, they most often think about cyber vulnerabilities in computer systems, and that is a big deal. There’s no question about it. Cyber vulnerabilities have to do with flaws in the implementation or the design of a computer system that may be connected to, or controlling, a missile or a nuclear weapon or a command and control system. These are flaws that, if the bad guys know about them, let them make the system do something that its designers never intended. So that’s one kind of cyber risk.
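To make that idea concrete, here is a minimal, hypothetical sketch. The credential, the message format, and the bug are all invented for illustration; they are drawn from no real system.

```python
# Hypothetical sketch only: nothing here reflects any real weapons system.
# It illustrates how a design/implementation flaw lets an attacker who
# knows about it make a system do something its designers never intended.

AUTHORIZED_CODE = "A7Q-99X"  # invented credential

def handle_command(message: str) -> str:
    """Expects 'CODE:COMMAND', e.g. 'A7Q-99X:STATUS'."""
    code, _, command = message.partition(":")
    # FLAW: the designer meant an exact match, but this only checks that
    # the supplied code is a prefix of the real one.
    if AUTHORIZED_CODE.startswith(code):
        return f"executing {command}"
    return "rejected"

print(handle_command("A7Q-99X:STATUS"))  # intended use: 'executing STATUS'
print(handle_command("A7:REROUTE"))      # exploit of the flaw: 'executing REROUTE'
```

The designers intended an exact credential match; the prefix comparison is the unintended behavior that an informed attacker can exploit, which is the essence of the first kind of cyber risk Lin describes.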

There’s a different kind of cyber risk that comes about because of the potential for inadvertent or accidental escalation. For example, an adversary’s cyberattack may be intended to degrade our conventional forces. But we may think the adversary is going after our nuclear forces. That might increase the pressure on us to consider going nuclear. There are certain technical characteristics of cyber that make that kind of question about intent much more ambiguous.

The kind of risk that you just mentioned, the first kind of risk—the fact that you have more computer systems out there, and so you are more likely to be hacked—yes, that is true.

You’re computerizing to a much greater extent than you did before. And now, with nuclear modernization, it’s going to be plug and play; there’s going to be a common architecture, and people are going to be plugging into it. And these systems will communicate not over the internet, but over an internet-like, IP-based network where they all talk to each other. And yes, there are additional risks that come with that.

John Mecklin: You see a particular problem with the United States—and I guess other countries—having intertwined conventional and nuclear systems, or doctrine that in some ways combines them. You talked in your book about that being such a problem that you even suggested there ought to be impact statements. Why don’t you explain that a little bit.

Herbert Lin: Here’s an example. We have satellites staring down at the Earth that are intended to detect ballistic missile launches. These were originally put up in the sky so that we could know when the Soviet Union—at that time the Soviet Union, now of course Russia or China—was launching an intercontinental ballistic missile (ICBM) in the direction of the United States. We have these satellites in orbit that look down at the Earth continuously, and they see a big hot flare, and they say, “Aha, that’s a missile being launched against the United States.” Okay, so that’s a nuclear warning system that signals that the United States may be under missile attack from Russia or from China.

Now it also turns out that these satellites, because of technological improvements, are good enough to see not just the very, very bright flare of an ICBM, but the much dimmer flare of a tactical ballistic missile. So, for example, it is publicly known that we see Scud missiles being launched in the Middle East—missiles with much shorter range. They are ballistic missiles, but they’re not armed with nuclear weapons. And we see these launches, too, with the same satellites.

Now imagine a scenario in which those short-range tactical missiles are being used. The United States, for example, might be using information gleaned from the satellites in orbit on these tactical ballistic missile launches to warn its missile defense forces in a region, such as in the Middle East or near China or something like that. So because of these satellites, the United States has a more effective tactical missile defense in Asia, for example.

The Chinese might well say, “Hey, wait a minute. We don’t like this. Their missile defenses are now much more effective against our tactical ballistic missiles. Why don’t we just take out the satellites that are improving their missile defenses, which after all we have an interest in penetrating.” So now they launch a cyberattack against one of the early warning satellites, and we see it. Are we to conclude that their intention is to compromise the tactical ballistic missile warning function, or the strategic ICBM warning function? The Chinese will say, “No, no, no, no. We’re not interested in going after your nuclear warning function. We have no intention of attacking with nuclear weapons, but your conventional ballistic missile defenses are a real problem for us. And we’re attacking you because you’re compromising the effectiveness of our short-range ballistic missiles.”

But the United States will look at that and say, “Hey, wait a minute. They’re trying to disable our most critical early warning systems. And we’re going to suffer because the Russians are going to take advantage of it, blah, blah, blah, blah.”

This is an example of where the United States has dual capability in one of its assets—a strategic nuclear capability, the warning of strategic nuclear attack, and a tactical role for warfighting purposes. And if the adversary goes after our stuff for warfighting purposes, how are we going to know that? Now, obviously this is not a problem in implementation or in design. The system is doing what it is designed to do; it’s the whole philosophy of how we operate this stuff that’s the problem. And that’s the potential for inadvertent escalation. If they start attacking our early warning satellites, we may misinterpret it.
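A toy sketch of the dual-use dependency Lin is describing: one early-warning sensor feed serves both a strategic and a tactical mission, so an attack on the feed is inherently ambiguous. The thresholds and mission names below are invented for illustration.

```python
# Hypothetical sketch: one sensor feed fans out to two missions.
# Thresholds and names are invented; no real system parameters are implied.

ICBM_FLARE = 1000.0      # arbitrary brightness units for an ICBM-class flare
TACTICAL_FLARE = 50.0    # arbitrary threshold for a Scud-class flare

def missions_served(flare_brightness: float) -> list:
    """Return every mission that consumes this detection."""
    missions = []
    if flare_brightness >= ICBM_FLARE:
        missions.append("strategic nuclear warning")
    if flare_brightness >= TACTICAL_FLARE:
        missions.append("tactical missile defense cueing")
    return missions

print(missions_served(5000.0))  # ICBM flare -> both missions
print(missions_served(120.0))   # Scud flare -> tactical cueing only

# The ambiguity: an attacker who blinds this sensor to suppress tactical
# cueing necessarily blinds the strategic warning function too, because
# both missions share the same feed.
```

The shared dependency, not any bug, is what makes the defender unable to read the attacker’s intent.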

In the book I mention the desirability of impact statements when US forces go on the offense, to make sure we consider the possibility that an adversary might conflate an attack on its conventional capabilities with one on its nuclear capabilities—and also a different kind of impact statement for US systems, regarding their impact on nuclear decision making by both adversaries and US decision makers.

John Mecklin: An overarching theme of your book seems to be that various cyber issues are increasing the possibility of inadvertent escalation that could increase the possibility of nuclear war in a number of these scenarios that you lay out. You can’t tell what the attack is really aiming to do.

Herbert Lin: That’s right. The introduction of cyber here contributes mightily to worst-case analysis, which of course you have to worry about. If conventional war scenarios are things that you have to worry about, you have to worry about the possibility of escalation to nuclear. And from my standpoint, I want to keep that firebreak. I want to keep nuclear as far away from conventional as I can, because I’m concerned about this issue. But that philosophy—which I think is shared by many Bulletin readers and by many people on the Science and Security Advisory Board—is not shared by the majority at the Department of Defense, who want to use nuclear weapons as a way of deterring conventional war. They say we want to put in nuclear weapons to establish that you can’t go down a path of conventional war, because they think conventional war is the most likely path to nuclear war.

That’s the difference in philosophy and I think they’re wrong about that, but that is what they think.

John Mecklin: In the book, you lay out, “Here’s a bunch of things that could be done or could happen that could reduce all these different kinds of cyber risks involving nuclear that you’re talking about.” First I’d like you to talk about what needs to happen from your point of view, and also to respond to my thinking that: Boy, it would take a lot of coordination and a lot of interest in this at a whole bunch of different levels of government to really address the problem you’re broadly describing in your book. Am I getting that right or wrong?

Herbert Lin: No, I think that’s very, very true. Here’s one very big aspect of the problem.

As a part of American DNA—and it’s not just American, it’s human DNA—everybody wants their information technology system to do more, to have more functionality in some way. We want it to be better, faster, easier to use, to have more functions, to support more applications, et cetera. You always want it to be doing more. The problem is that whenever you want a computer system to do more, you have to make a bigger system. You have to add to it.

And every computer professional will acknowledge that complexity is the enemy of security. More complexity means more security problems. More complexity means less security. And so what happens today is that we say we want more functionality—more, more, more, more—and the security guys shrug their shoulders and do the best they can. They never get the opportunity to push back, to say, “No, you can’t do that. That makes the system too hard to secure.” The security guys are not able to affect the functional requirements of the system, and until you can get somebody who’s willing to discuss that trade-off—ask for less, and you’ll get a more secure system—we’re going to have a big problem. And that speaks to the large-scale, institutional problems that you were just describing.
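A minimal, hypothetical sketch of the point that complexity is the enemy of security: each added feature is new input-handling code, and one flawed handler exposes the whole process. The handlers and the flaw are invented for illustration.

```python
# Hypothetical sketch: every feature added under pressure for more
# functionality enlarges the attack surface. All names are invented.

def show_status(arg: str) -> str:
    return "status: nominal"

def echo(arg: str) -> str:
    return arg

# A "power user" feature added later because someone wanted more functionality.
# FLAW: eval() executes arbitrary Python supplied by the remote user.
def calculate(arg: str) -> str:
    return str(eval(arg))  # deliberately unsafe, to make the point

HANDLERS = {"status": show_status, "echo": echo, "calc": calculate}

def dispatch(request: str) -> str:
    name, _, arg = request.partition(" ")
    handler = HANDLERS.get(name)
    return handler(arg) if handler else "unknown command"

print(dispatch("status now"))   # the small, auditable original system
print(dispatch("calc 2+2"))     # the new feature, working as intended
print(dispatch("calc __import__('os').getcwd()"))  # the same feature abused
```

The two-handler system was easy to audit; each addition multiplies what the security reviewers must reason about, which is the trade-off Lin says nobody is empowered to push back on.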

John Mecklin: Yeah. That would take people at or near the tops of the services and the Joint Chiefs (of Staff).

Herbert Lin: That’s correct.

John Mecklin: Is that a solvable problem, given that all the services do their own tech, their own acquisitions?

Herbert Lin: Well, that’s an interesting question. I do have one proposal in there, which is that right now, US Strategic Command has the authority to specify the requirements for nuclear command and control systems. To the extent that these systems are also going to be dual-use—providing capabilities for the conventional forces—under the current state of affairs you would say, “No, no, Strategic Command gets to specify the nuclear part.” But it’s still the services that are going to be buying these systems. And they’re going to optimize them for their own warfighting purposes, which is mostly conventional war.

My proposal is that STRATCOM ought to have acquisition authority over anything that touches nuclear, and that will make the services very mad, because they’ll say, “You’re developing the system to be optimized for nuclear. Most of the time it’s going to be used for conventional. What’s the problem here? This is not good for us.” And that’s true. It’s not good for them, but if you really believe that maintaining the security and integrity of the nuclear command and control system is the really important thing, you’re going to optimize for that. And the conventional guys should take what they can get. Right now, all the power is in the services because they control the money. They control the contracts. STRATCOM may be able to come in and say, “Well, I want this requirement, this requirement, this requirement,” but they can’t run the whole business.

John Mecklin: Well, that would take direction from the White House.

Herbert Lin: It’s more than that; it takes legislation. We have some instances in which the operational commands have acquisition authority. Special Operations Command, for example, has its own acquisition authority. To some extent, Cyber Command now does, too. And I say STRATCOM should have it.

John Mecklin: That’s one hard recommendation you’ve talked about in the book, which is really a wide-ranging look at cyber-nuclear vulnerability that includes things people don’t ordinarily think of, such as supply chains and where computing components come from. Is that a solvable problem, outside of some hard-to-imagine “buy everything American” policy? How do you resolve that?

Herbert Lin: There are methods for mitigating that risk, not for eliminating it entirely. In the end, what you have to do is make sure that the effect of a compromise [of a system] stays limited.

John Mecklin: The book also talks about—getting to bottom lines—the possibility of agreements with other countries, not to do the kind of cyberattacks that could be interpreted as attacks on nuclear command and control.

Herbert Lin: I don’t know that we could get an agreement with another country to do it. And so I didn’t pose it in those terms, but you would certainly want to consider the effect such an attack might have. If you are going to launch a cyberattack on an adversary, you might want to consider what the adversary might think about it. Yes, very much so. I want us to do that.

John Mecklin: So that’s more aimed at the United States and what it does in cyber.

Herbert Lin: No, no, no, no—I was silent on that. I think the Russians should do that, too. And the Chinese should do that as well. It is a book that’s mostly directed towards the Americans, but there are many lessons in it for the Chinese and the Russians, too.

John Mecklin: Is there anything in the book that I haven’t touched on that you particularly wanted to talk about? A message in there that I’ve missed and haven’t brought up?

Herbert Lin: Well, speaking from the Bulletin’s perspective, there’s a lot of it that implicitly adds to the case against ICBMs, as they’re currently configured. There’s a sense in which eliminating the ICBM force would also significantly reduce cyber risk.

John Mecklin: Because you would lose the launch-on-warning time pressure.

Herbert Lin: That’s correct. And that time pressure makes understanding what’s happening in cyberspace much more difficult. You can’t understand what’s happening in cyberspace in a short, high-pressure window of time.

John Mecklin: I have run through my questions except for one that I was going to ask first, but didn’t, and I wanted to ask it now: What’s your Hollywood elevator speech, the message of your book? What are you trying to get across?

Herbert Lin: One, don’t computerize unnecessarily. Two, be careful about how your actions in cyberspace will be perceived by others. Those two statements speak to both kinds of cyber risk. “Don’t computerize unnecessarily” speaks to the vulnerabilities inherent in computer systems. And as I said, the more we develop and deploy the complicated stuff, the more vulnerable we are. The second one points to reducing the risk of inadvertent escalation and its consequences.

The United States has a tendency to believe that others believe in our benign intentions. They don’t. We may know we’re benign. But the other guys sure don’t know that we’re benign, and we should not act as though they know we’re benign.
