

The Fog of (Cyber) War—Part 2 of 2

08.18.2016

A man walks past the Cyber Terror Response Center in Seoul, South Korea, March 21, 2013. A cyber attack on computer networks in South Korea the day before was traced to China. 

Earlier this summer, Carl Robichaud, Carnegie Corporation of New York’s program officer in International Peace and Security, and Scott Malcomson, Carnegie visiting media fellow, organized an online discussion of new technologies and nuclear security with security policy experts Ivanka Barzashka, Austin Long, Daryl Press, and James Acton. This is the second part of the two-part discussion. Read Part 1.

Carl Robichaud: Let’s talk a bit about how cyber fits in. The preoccupation right now is with how cyber can be used as part of a “gray” campaign of aggression that cannot be easily deterred. But there’s a different use for cyber attacks: as part of a comprehensive military strategy to blind and confuse adversaries, attack their command and control, and so on. Cyber as part of kinetic warfighting, instead of as an alternative to it. Can you speak to this? What are the implications? What countermeasures will countries take?

It is difficult to distinguish cyber intelligence collection from cyber preparation for attack, as well as to distinguish cyber attack from equipment failure.

— AUSTIN LONG

Austin Long: Cyber is potentially a major component of any kinetic campaign. The ability to target adversary command and control (C2) has been a major component of U.S. doctrine since the 1970s. There is a good section on this in Fred Kaplan’s new book, Dark Territory: The Secret History of Cyber War (2016). Counter-C2 capabilities are now diffusing to other countries. Cyber is particularly escalatory because it is difficult to distinguish cyber intelligence collection from cyber preparation for attack, as well as to distinguish cyber attack from equipment failure. So, in a crisis, if your over-the-horizon radar for early warning goes kaput: is that just because it is old (looking at you, Russia), or is it because a clever adversary has cyber-attacked it?


Scott Malcomson: Currently there is no way to be sure, correct? And as far as the distinctions between intelligence and attack preparation go, there isn’t much difference between espionage code and “preparation of the operational environment” code.

Austin Long: The only difference between network exploitation for intelligence and network exploitation for attack is what the malware payload does. Once you establish network access you can do whatever.
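Long’s point can be made concrete with a deliberately abstract sketch in Python (all names are hypothetical and nothing here performs a real operation): the access layer is the same for espionage and for attack, and only the payload handed to it differs.

from typing import Callable

# Conceptual sketch only: network exploitation for intelligence and for
# attack share the same access layer; only the payload differs.
class EstablishedAccess:
    """A foothold already gained on a target network."""
    def __init__(self, target: str) -> None:
        self.target = target

    def run(self, payload: Callable[[str], str]) -> str:
        # The access layer is indifferent to what the payload does.
        return payload(self.target)

def collection_payload(target: str) -> str:
    return f"copied files from {target}"        # espionage

def attack_payload(target: str) -> str:
    return f"corrupted software on {target}"    # attack

foothold = EstablishedAccess("example-network")
print(foothold.run(collection_payload))   # same access . . .
print(foothold.run(attack_payload))       # . . . different effect

From the defender’s side, the foothold looks identical in both cases, whichever payload the attacker intends.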


James Acton: In addition, if you discover your network has been penetrated, you have to worry that the penetration might be used for offensive purposes in the future—even if you’re confident it’s only being used for espionage right now. There are so many ways that cyber is inherently dual-use.

Daryl Press is a professor of government at Dartmouth College.

Daryl Press: Another reason cyber is escalatory: it is difficult to predict and control the effects of an attack.

James Acton: Also, it’s not just the cyber weapons that are inherently dual-use, it’s the systems they might be used against, too. Much of the U.S. nuclear command-and-control infrastructure is also used for conventional command and control. So even if an adversary is attacking it to undermine conventional warfighting, we don’t actually know whether their intentions are nuclear or not.

CYBER CONFLICT: THE STATE OF THE FIELD

Carnegie Corporation Visiting Media Fellow Scott Malcomson examines the many meanings of cyber security.

Scott Malcomson: Is there an argument to be made that the sheer unknowability of cyber capabilities could have, perhaps paradoxically, a calming effect?

Daryl Press: Truthfully, I don’t think so. Cyber is so intertwined with all high-intensity military operations—we can no more stop using cyber at this point as part of our mil ops than we can stop using air power. It’s simply part of 21st-century combined arms. I see no calming effect.

Austin Long: I have heard former senior U.S. officials argue that cyber, being nonkinetic, is less escalatory. But if an adversary knows you can penetrate one network they may believe (perhaps wrongly) that you have penetrated vital systems, such as nuclear command and control.

James Acton: I think the escalatory implications of cyber are very context dependent. A low-level conflict against a “regional adversary” without nukes is a whole different ballgame from a major conflict against Russia or China.

Austin Long: John Harvey, who was U.S. principal deputy assistant secretary of defense for nuclear, chemical, and biological defense programs from 2009 to 2013, had a piece in February in Defense News arguing for a comprehensive effort, including lots of cyber, against DPRK [North Korea] nukes. Escalatory?

James Acton: It would depend in part on how the DPRK does command and control—and that’s a black hole, at least at the unclassified level. But, in general, understanding in what kind of conflict the benefits of cyber might outweigh the risks seems an important intellectual task.

Carl Robichaud: I’ve also heard Austin talk a bit about how cyber is integrated into planning exercises/war games. Because of the high level of secrecy of our capabilities, it’s really hard to understand how cyber is likely to play out in a future conflict. Austin, I’m interested in your thoughts on this, and what it means for crisis management.

Austin Long: This is a serious ongoing concern, at least in the United States. Others may manage cyber differently, but given the perishability of cyber network accesses and the serious investment needed to develop them—not to mention the culture of the parts of the intelligence community where cyber was developed—the U.S. highly compartmentalizes these programs. So even inside the U.S. government, not everyone—not to mention allies or adversaries—understands how these cyber weapons might work, or even how the U.S. thinks they might work. Deterrence failure based on misperception of capability is a big risk that was much smaller when one could count tanks or warheads to get a rough sense of the military balance.

AUTOMATION AND ESCALATION

It’s only natural to turn things over to algorithms that are faster and more precise than humans. Until they aren’t.

— CARL ROBICHAUD

Carl Robichaud: War is moving at a faster and faster pace, and it’s only natural to turn things over to algorithms that are faster and more precise than humans. Until they aren’t. How does the growing integration of artificial intelligence into weapons systems affect nuclear risk?


Ivanka Barzashka: With ballistic missile defense (BMD) you have very short reaction times. The rules of engagement are decided and hardcoded in advance. There is almost always a man in the loop, but you do have the option of automation. With tactical (shorter-range) systems, automation is an issue, and this is relevant to regional conflicts. Short reaction times can also create first-strike incentives.
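A minimal sketch of the logic Barzashka describes, with hypothetical thresholds (no real fire-control system is represented): rules of engagement are fixed in advance, a human confirms by default, and automation is the option when the reaction window is too short.

from typing import Optional

# Hedged sketch of pre-coded rules of engagement with an optional
# automatic mode. All numbers and names are illustrative assumptions.
HUMAN_DECISION_TIME_S = 30   # assumed time an operator needs to decide
REACTION_WINDOW_S = 90       # assumed time from detection to impact

def engage(track_valid: bool, automatic_mode: bool,
           operator_approves: Optional[bool]) -> bool:
    """Return True if the interceptor should fire."""
    if not track_valid:
        return False                   # ROE decided and hardcoded in advance
    if automatic_mode:
        return True                    # the man is taken out of the loop
    if REACTION_WINDOW_S < HUMAN_DECISION_TIME_S:
        return False                   # no time for a human; this sketch
                                       # fails safe rather than firing
    return operator_approves is True   # man in the loop by default

print(engage(True, False, True))   # operator approves -> True
print(engage(True, True, None))    # automatic mode -> True

The third branch is where the escalation question lives: once the reaction window shrinks below human decision time, the only way to keep the defense effective is to switch automatic_mode on in advance.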

Daryl Press: I see the argument that says that automation is in general escalatory, but I’m not sure if I see the link between “automation in BMD” and escalation. It seems that the decision to intercept incoming ballistic missiles is one of the few wartime decisions that actually doesn’t require much reflection. The time for reflection is in deciding whether or not one wants to deploy such systems.

What if it was a NATO system that shot down that Russian plane?

— IVANKA BARZASHKA

Ivanka Barzashka: Let’s take a specific example: NATO Patriot systems on Turkey’s border with Syria. Under what conditions might they be put in “automatic” mode? What if it was a NATO system that shot down that Russian plane?

Carl Robichaud: A distinction is often made between automation in defensive systems—which we have had for a long time, as in Aegis and other systems—versus automation in offensive systems. But when offense and defense are intertwined, is that distinction artificial?

Daryl Press: There is a good paper to be written asking the general question: in what circumstances does automation increase escalation risks? That paper would have to tease out the various potential pathways between automation and escalation, and then ask which of those seem to be exacerbated by automation—cyber ops, missile defenses, etc. My sense is that automating BMD response is not a bad idea, whereas automating other military responses might be escalatory if the automatic actions have the effect of expanding the war (vertically or horizontally). Using Ivanka’s example, I see less risk in having Turkey-based ballistic-missile defense systems automatically engage missiles incoming toward Ankara. There is greater risk automating the response against aircraft penetrating one’s airspace: an aircraft’s intrusion may be harmless, while an incoming ballistic missile’s is not.
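Press’s distinction can be reduced to a one-line policy (the class labels are hypothetical, purely to fix ideas): automation keyed to target class, because an inbound ballistic missile is unambiguous while an aircraft intrusion may be harmless.

# Sketch of the target-class distinction: automatic engagement is
# reserved for inbound ballistic missiles; aircraft are referred to a
# human operator. Class labels are illustrative assumptions.
AUTO_ENGAGE_CLASSES = {"ballistic_missile_inbound"}

def decision(target_class: str) -> str:
    return ("auto-engage" if target_class in AUTO_ENGAGE_CLASSES
            else "refer to a human operator")

for t in ("ballistic_missile_inbound", "aircraft_in_airspace"):
    print(t, "->", decision(t))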

ARMS CONTROL AND RISK REDUCTION

NEW TECHNOLOGIES AND THE NUCLEAR THREAT

Are we entering a new age of nuclear vulnerability? Many national security experts in the United States, Russia, China, and elsewhere warn that the nuclear status quo is less stable than most people realize. Carnegie Corporation's Carl Robichaud looks at the rapid pace of technological change, which is leading to new or evolving weapons systems that threaten to upend the strategic balance.

Carl Robichaud: So we have a series of complex, evolving, dual-use systems where target differentiation is difficult. Our arms-control systems are designed for another era: they mostly involve counting stuff that we can observe. Are there new approaches to risk reduction and arms control that can reduce risks given the technological landscape, and given rising tensions with China and Russia?

Daryl Press: Traditional arms control is aimed at reducing the number of weapons, which sounds noble, but it also reduces the number of targets. Given the leaps in accuracy and sensing technology, further arms cuts actually enhance first-strike incentives. I’ve argued, with Keir Lieber, that given recent technological changes, arms reductions and strategic stability are no longer compatible.
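The accuracy half of this argument rests on standard targeting arithmetic from the open literature (a simplified model, not the authors’ exact calculation). The single-shot kill probability against a hardened point target is commonly written as

P_k = 1 - 0.5^{(LR/CEP)^2}

where LR is the weapon’s lethal radius against that target and CEP is its median miss distance. Because the ratio enters squared, halving the CEP with LR fixed takes P_k from 0.5 (at LR = CEP) to 1 - 0.5^4 ≈ 0.94. Accuracy gains thus make each target cheap to destroy with high confidence, and cuts that shrink the target list compound the first-strike arithmetic.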

Ivanka Barzashka: I believe new approaches are possible. For example, we can apply the logic of nuclear safeguards. You can have comprehensive technical verification of dual-use (or multipurpose) technology that doesn’t limit capabilities but provides you with information, thus creating more predictability. We already know how to do verification for nuclear weapons. We need to figure out how to credibly verify BMD systems.

James Acton: Bilateral politics between the U.S. and Russia, and between the U.S. and China, make cooperative risk reduction unlikely right now. That’s very unfortunate, but I think I’m being realistic. I see our most promising approach as factoring escalation risks more fully into planning and procurement. When (if?) the politics improve, I hope a more cooperative approach will become more feasible.

Carl Robichaud: Following up on Daryl’s argument that fewer weapons/targets can actually increase the danger of a first strike, is there a way to move past numerical reductions and toward measures that actually reduce risks? Are there commitments that each side could make—e.g., not targeting certain things—that would lengthen the fuse? Are any of these commitments credible or verifiable?

The core question is . . . about distinguishing capabilities aimed at regional adversaries from those aimed at near-peers. Can we make that distinction in practice?

— DARYL PRESS

Daryl Press: To me, the problem isn’t technological. It’s that we, the United States, want to win in regional contexts. So we want/need tools to defeat adversary command and control and air defenses. We want precision strikes to “neutralize” leadership and an adversary’s control over WMD, in a regional context—at the least. So if we could design an arms deal that would deny us and others that ability, we would in fact ourselves reject it. To me the core question is one raised above about distinguishing capabilities aimed at regional adversaries from those aimed at near-peers. Can we make that distinction in practice?

James Acton: It’s a difficult area. Going back to the old idea—of using arms control to move force structure in more stabilizing directions—has potential value. Norm building—no attacks against nuclear command and control, for example—is worth considering.

Austin Long: How would we know a norm is working except in a nuclear crisis? If I were China or Russia, I would not believe a U.S. promise not to attack nuclear command and control.

James Acton: Very fair question. To some extent, you know the norm is working if you don’t detect intrusions in your command and control. Granted, if no such intrusions occur in peacetime, you still might worry that they could occur in wartime, but both sides would presumably be gathering intelligence to launch such attacks should the other side break the norm. So deterrence might hold. (The prohibition is against use, not preparation.) Of course, there are other difficulties: attribution and defining nuclear command and control. My point is not that this is definitely doable, but it’s worth exploring.

If you don’t detect an intrusion it means the attacker has done a good job.

— AUSTIN LONG

Austin Long: The two are not really distinguishable. And if you don’t detect an intrusion it means the attacker has done a good job. So norms don’t help limit the sort of concerns that make people itchy about command-and-control survivability in a crisis.

This is the conclusion of a two-part discussion, which has been edited for clarity and length. Read The Fog of (Cyber) War—Part 1.
