25 July 2022

WANT BETTER CYBER POLICY? TALK TO SOCIAL SCIENTISTS

Erica D. Lonergan and Maggie Smith

Last month, Microsoft published a report on the emerging lessons from the “cyber war” in Ukraine. It offers a rich storyline and a number of historical analogies and, in emphasizing the significance of the war’s cyber dimension, it falls firmly in line with commentary from many tech sector experts and government professionals. For example, Tom Burt, a Microsoft executive, described the conflict as a “full-scale cyberwar.” In an interview with the Washington Post, Toomas Hendrik Ilves, Estonia’s former president, warned that Russia might yet launch a significant cyberattack (even if, to date, one has not occurred). Further, the US government continues to issue warnings to the private sector, encouraging corporations to keep their “shields up” (a slogan coined by Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency). Abroad, the warnings are similarly dire, with the Danish defense minister, Morten Bødskov, noting that “the cyber threat is constant and evolving. Cyberattacks can do great damage to our critical infrastructure, with fatal consequences.”

Collectively, these assessments paint a picture of the war in Ukraine as one that is fundamentally defined by its cyber component and an important source of lessons for future conflicts. But against this backdrop, one group has largely taken a different view. Many political scientists—specifically those with expertise in security studies—writing about the role cyber operations can play in conventional warfighting are hesitant to extrapolate lessons from a war that has yet to conclude. For instance, shortly after Russian tanks crossed into Ukraine, Ciaran Martin reflected that “it turns out that the next war was not fought in cyberspace after all.” Weeks later, Nadiya Kostyuk and Erik Gartzke described Russia’s cyber activities in Ukraine as “dogs [that] have yet to bark loudly” and, drawing from their empirical work, added that the link between cyber and conventional military operations is, at most, indirect. Similarly, Erica Lonergan, Shawn Lonergan, Brandon Valeriano, and Benjamin Jensen commented that “cyber operations are a form of modern political warfare, rather than decisive battles,” suggesting a more limited role for cyber operations in the ongoing conflict. Lastly, Lennart Maschmeyer and Myriam Dunn Cavelty deftly argued that cyber operations “offer limited strategic value” and that enthusiastic cyber commentators are relying on misplaced assumptions in their analysis.

To be sure, the tech sector’s perspective is reinforced by reporting from highly respected sources about the Ukrainian conflict’s cyber dimension. Certainly, Microsoft’s intelligence products, and others like them, provide a critical public service by helping to shed light on the dynamics of cyber operations in an environment where the major tech players (like Microsoft) are likely to have exquisite visibility on often obscured cyber activities. However, they also reveal the limitations of technical intelligence reporting and the problems of drawing inferences from discrete cyber activities about core concepts in the security studies field—like deterrence, warfighting, or the offense-defense balance—absent an approach grounded in social science.

There are important policy implications if decision makers draw the wrong lessons from what the evidence has revealed thus far about the role of cyber operations. Below, using the recent Microsoft report as an illustrative example, we highlight how social science can provide critical perspective on three specific topics: drawing inferences about causal relationships, the nature of the offense-defense balance in cyberspace, and the perils of reasoning by analogy.

Causal Inference

The Microsoft report weighs in on an ongoing debate about the extent to which states can (and do) coordinate cyber and kinetic effects on the battlefield. Ukraine seems to be a fertile testing ground to evaluate competing claims about this issue. In its report, Microsoft draws a strong connection between cyber activities in the virtual battlespace and kinetic military operations on the ground. And there is some evidence to back up this claim. For instance, the report notes that Microsoft identified lateral movement of a Russian threat actor on the network of a nuclear power company on March 2 (lateral movement refers to the techniques used by a malicious cyber actor to move deeper into a network in search of sensitive data and other high-value assets after gaining initial access to the network). Then, “the next day, the Russian military attacked and occupied the country’s largest nuclear power plant.” This is provided as an example of how “the Russian military has coupled cyberattacks with conventional weapons aimed at the same targets,” a purported form of combined arms warfare involving cyber operations.

However, correlation is not causation. Evidence that cyberattacks and conventional attacks occurred around the same time and were aimed at the same general target does not prove that these moves were deliberately coordinated or coupled in some way. First, observing lateral movement in itself tells us little about the ultimate purpose of the cyber operation. Moreover, the report does not offer a compelling theoretical narrative to explain the mechanisms through which a cyber actor’s lateral movement on a network would be related to the subsequent physical occupation of that target. If anything, Microsoft’s examples of two actions taking place in tandem could instead reveal a lack of coordination between cyber and kinetic operations and actors; conducting a kinetic attack on a target while a cyber actor is maneuvering within its network would likely undermine the latter’s operation.

A related challenge is that much of the evidence about the hypothesized cyber-kinetic link lacks the kind of variation needed to rigorously evaluate the relationship. For example, in the graphic titled “Coordinated Russian cyber and military operations in Ukraine,” Microsoft’s report provides six examples of ostensible coordination in which a cyberattack was followed by a kinetic action. However, the report fails to include examples of cyberattacks that occurred in isolation and were not followed by any sort of kinetic activity or, conversely, kinetic actions that were not preceded by cyberattacks—lists that would run much longer than six examples. Without this kind of variation, inferences are limited and we cannot responsibly assess whether these activities reflect planned coordination or simply random noise.
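To make the missing-variation problem concrete, consider a minimal sketch of the contingency table an analyst would need to fill in before inferring any association between cyber and kinetic activity. Every count below other than the six pairings highlighted in the report is invented for illustration; the report does not supply these figures.

```python
# Hypothetical 2x2 contingency table for cyber and kinetic activity.
# Only the first cell (6) comes from the report; the rest are placeholders.
cyber_and_kinetic = 6      # cyberattacks followed by a kinetic strike (reported)
cyber_no_kinetic = 40      # hypothetical: cyberattacks with no kinetic follow-up
kinetic_no_cyber = 120     # hypothetical: kinetic strikes not preceded by a cyberattack
neither = 300              # hypothetical: candidate targets that saw neither

# An odds ratio near 1 would suggest the six pairings are coincidental;
# a ratio well above 1 would suggest the two activities genuinely co-occur.
odds_ratio = (cyber_and_kinetic * neither) / (cyber_no_kinetic * kinetic_no_cyber)
print(f"odds ratio = {odds_ratio:.2f}")
```

The point of the sketch is simply that, with three of the four cells unobserved, no value of the first cell on its own can distinguish deliberate coordination from coincidence.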

Similarly, in another section, the report focuses on the purported impact of Russian-linked information operations, including those targeting entities external to Ukraine. For instance, the report estimates that American consumption of Russian propaganda averaged “60 million to 80 million page views per month.” Certainly, those numbers suggest consumption of Russian information operations at significant scale. But substantiating evidence of a link between those operations and their effectiveness (whether they changed the perceptions of target audiences or, even more importantly, their behavior) remains elusive. High volumes of activity do not necessarily translate into meaningful effects, and the report offers no evidence or research design capable of assessing the actual impact of propaganda consumption.

Offense-Defense Balance

A second hotly debated topic in the cyber strategy and policy fields is the extent to which cyberspace confers an advantage on offense over defense. This extends a central concept in the security studies literature—the idea of an offense-defense balance—to cyberspace. The basic premise of the theory is that, when offense has an advantage (typically measured in terms of technology and geography), arms races and wars are more likely. As Robert Jervis succinctly put it, measuring offensive versus defensive advantage is a function of how states answer the following question: “If the state has one dollar to spend on increasing its security, should it be put into offensive or defensive forces?”

The conventional wisdom in the cyber field is that attackers have an extraordinary advantage over defenders. Echoing the views of many experts, Jason Healey has argued that offense has a “systemwide advantage” in cyberspace. But, when it comes to the Ukraine conflict, the Microsoft report seems to find evidence to the contrary, providing a much-needed reality check to some commentaries that depict cyberspace as a revolutionary and disproportionately offensive form of warfare. Specifically, the report reflects on the significant successes of Ukrainian cyber defense and resilience efforts and notes that “cyber defenses and operations have withstood attacks far more often than they have failed.”

This is consistent with a good deal of political science research, such as work by Rebecca Slayton or by Erik Gartzke and Jon Lindsay, finding that cyberspace may not in fact be offense dominant. That research shows that offensive cyber operations, particularly those with meaningful strategic effects, are harder and more resource intensive than the conventional wisdom suggests, typically produce only transient and often unreliable effects, and rely on perishable vulnerabilities whose exploitation is time sensitive and uncertain. As Austin Long aptly noted, “If an attack requires years of work and billions of dollars to overcome a defense hypothetically costing [only] millions of dollars, political scientists would characterize the environment as highly defense dominant.”

The evidence in the Microsoft report seems to bear this out. Microsoft finds that, for cyber espionage campaigns, Russian actors succeeded in their intrusion operations only 29 percent of the time. Further, only 25 percent of those successful intrusions resulted in data being exfiltrated. In other words, only a small fraction of espionage operations, roughly one in fourteen, actually achieved their objectives. That said, some caution is warranted about drawing larger lessons from this evidence. The report does not articulate how Microsoft derived these numbers or the scope of Russian cyber activity that Microsoft is able to observe, leaving open the question of what biases there might be in the data.
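The “one in fourteen” figure is simply the product of the two rates the report provides; the rates are Microsoft’s, the compounding is our back-of-the-envelope arithmetic.

```python
# Compounding the two reported rates into an overall success rate.
intrusion_success_rate = 0.29            # share of intrusion attempts that succeeded (reported)
exfiltration_rate_given_success = 0.25   # share of successful intrusions that exfiltrated data (reported)

overall = intrusion_success_rate * exfiltration_rate_given_success
print(f"overall success rate: {overall:.2%}, or roughly 1 in {round(1 / overall)}")
# 0.29 * 0.25 is about 0.07, i.e., roughly one operation in fourteen
```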

This also raises a broader question about whether the bifurcation of cyberspace activities into offensive and defensive categories is even appropriate, or whether it imposes a conventional conflict framework onto a domain marked by interconnectedness. Political scientists have challenged the applicability of the offense-defense framework to cyberspace in different ways. Brandon Valeriano, for example, has written that “a misguided focus on the balance between offensive and defensive operations clouds understandings of cyber strategy.” In a different vein, Michael Fischerkeller, Emily Goldman, and Richard Harknett have highlighted that defining campaigns in cyberspace as either offensive or defensive reflects a misunderstanding of cyberspace and cyber operations.

The reality is that we are not yet certain whether offense or defense is inherently advantaged in cyberspace, or whether any such advantage will persist as technology evolves. But the rigorous methods and theoretical foundations of the social sciences give researchers intellectual guideposts: a baseline from which to postulate, assess, and challenge claims and to work toward more accurate conclusions more quickly than either empirical observation ungrounded in such methodologies or, worse, limitless theorizing. Theories are meant to be expanded, disproven, evaluated, and challenged—that is what good scientific research does—and because nation-states are only one of the stakeholders in cyberspace, social scientists have an important role to play in understanding the relationships among states, nonstate actors, corporations, and individuals in the domain. Ultimately, greater precision in how the field discusses campaigns and cyberspace operations will enable a richer understanding of national security issues, one that accounts for the unique characteristics of cyberspace, its role in conflict, and how states use it as an element of national power.

The Perils of Reasoning by Analogy

Finally, a significant amount of analysis in the cyber field relies on reasoning by (imperfect) analogy—such as the yet-to-materialize cyber Pearl Harbor or cyber 9/11. Similarly, the Microsoft report is replete with historical comparisons, from the Battle of Britain in World War II to the assassination of Archduke Franz Ferdinand that triggered World War I to the attack on Fort Sumter that heralded the beginning of the American Civil War. But social scientific research has shown that there are limits to—and even dangers associated with—analysis by analogy. For example, Sean Lawson’s research has shown that cyber analysis is often characterized by “cyber-doom rhetoric” that draws on historical comparisons and distorts the true nature of cyber threats. While historical comparisons can sometimes be useful, analysts must be careful about what inferences can actually be drawn from them and what their limitations are.

In the cyber field, analogies are also not limited to history. The US military, for instance, often relies on analogies between military maneuver in physical space and maneuver in cyberspace to explain operations, tactics, and strategy. This shapes not only operational planning but also (whether intentionally or not) acquisitions, measures of effectiveness, measures of performance, and more. Casual analogies to traditional maneuver concepts perpetuate the military’s conceptualization of cyberspace as an operational domain where the warfighting functions used in physical domains also apply, when the unique aspects of cyberspace may be mismatched to these frameworks and may require a distinct theory of their own.

So What?

Debates about how and from what perspective to interpret the cyber aspects of the Ukraine conflict are not mere academic navel-gazing. US policymakers risk learning the wrong lessons or missing critical insights if they do not consider perspectives from social scientific analysis. And reports like Microsoft’s, which are written for a broader audience, are rife with generalizations that lump a host of activities and observations into an ill-defined “cyber” bucket. This can make it difficult to convey the nuance of cyberspace and state behavior in the domain. While cyberspace may be a technical environment, it shapes and is shaped by geopolitical and strategic considerations, which demands the application of social scientific approaches. Three policy implications stand out from this analysis.

First, the question of whether cyber operations can be effectively synchronized with, or incorporated into, conventional campaign plans has direct implications for the United States. One of US Cyber Command’s key roles is to provide combat support to the geographic combatant commands as part of their conventional campaign planning. Moreover, the Biden administration’s new National Defense Strategy, anchored in the concept of integrated deterrence, explicitly calls for capabilities to be synchronized across warfighting domains. Part of this involves, in the words of Secretary of Defense Lloyd Austin, “coordinated operations on land, in the air, on the sea, in space and in cyberspace.” Therefore, what the evidence from the battlefield in Ukraine demonstrates about coordination (or the lack thereof) between cyber and kinetic military operations, and how to interpret it, has direct implications for US strategy and warfighting concepts.

Second, debates about the advantages and risks of offensive or defensive activities in cyberspace have shaped US cyber strategy. In 2018, the Trump administration shifted US strategy in cyberspace to be more assertive and proactive, leading some senior officials to say that the United States was now “going on the offensive” in cyberspace. The strategic shift from a reactive to proactive posture led many observers to voice concerns about escalation risks based on the assumption that cyberspace is offense dominant. But if cyberspace is defense dominant, then a proactive posture is not likely to exacerbate instability. Therefore, as the Biden administration seeks to articulate its own strategic vision for cyberspace, how officials understand the characteristics and strategic implications of actions taken in and through cyberspace is likely to affect whether the United States adopts a more restraint-based approach (similar to that espoused by the Obama administration) or largely affirms the understanding of the previous administration.

Finally, policymakers may find analogies appealing because they provide simplicity and clarity—particularly for highly technical and esoteric fields like cyber. But bad analogies and oversimplification can lead to bad policies. If states are perpetually on the lookout for a “Munich moment,” for instance, they may opt for more aggressive policies that ultimately produce counterproductive outcomes. Moreover, analogies can be limiting. As General Paul Nakasone noted in December 2021, traditional, Cold War-style deterrence “is a model that does not comport to cyberspace”—yet it still dominates how policymakers approach cyber issues. Therefore, there is value in moving away from these imperfect analogies to grapple with cyberspace as it actually is.

Analysis of the Ukraine conflict will undoubtedly contribute to a greater understanding of the strategic and operational utility of cyber power. However, we should be cautious about drawing sweeping conclusions from an ongoing and dynamic war. Analysis from security professionals is important—but it should be informed not only by technical expertise but also by social science.