10 October 2022

How do we know when cyber defenses are working?

Josephine Wolff

When Russian forces invaded Ukraine earlier this year, many observers believed that the conflict would be marked by overwhelming use of the Kremlin’s cyberweapons. Possessing a technically sophisticated cadre of hackers and toolkits to attack digital infrastructure, the Kremlin, according to this line of thinking, would deploy these weapons in an effort to cripple the Ukrainian government and deliver a decisive advantage on the battlefield. The actual experience of cyberwar in Ukraine has been far more mixed: While Russia has used its cyber capabilities, these digital forays have been far less successful or aggressive than many observers had predicted at the outset of the war.

So why has Russia failed to win on the digital battlefield? In recent weeks, Ukrainian and U.S. government officials and the Western tech companies that have rushed to support Ukraine’s digital defenses have argued that Russia’s failure is due in no small part to the sophistication of Kyiv’s defenses. But evaluating that claim is immensely difficult and illustrates a fundamental problem for the current state of cybersecurity research and policy. As it stands, there is no playbook for measuring the effectiveness of cyber defense efforts or conveying to the public when they are working. And this makes it difficult to draw conclusions from the war in Ukraine to inform our future defensive posture. Assessing the effectiveness of cyber defenses is a crucially important part of developing cybersecurity policy and making decisions about where and how to invest in computer networks and infrastructure. But in the absence of good defensive metrics, calibrating these investments remains difficult.

Near misses

On April 12, the Computer Emergency Response Team (CERT) of Ukraine announced that it had successfully stopped a series of Russian cyberattacks on Ukraine’s power grid. It was exactly the kind of cyberattack on critical infrastructure that many people had predicted Russia would carry out against Ukraine, not least because Russian hackers had succeeded in taking down parts of Ukraine’s power grid in late 2015 and again in 2016. This time, however, the attack was thwarted. While American government officials and tech companies say they have observed Russian forces carrying out cyberattacks in conjunction with military attacks and to disrupt the functioning of the Ukrainian government and military, the worst fears of Russian hackers knocking out the power grid have not come true.

To explain why these fears haven’t materialized, Ukrainian government officials, United States officials, and tech companies are beginning to credit their combined defensive efforts. To hear them tell it, their investments in defense are making a difference in stopping Russian cyberattacks, and to make that case they are revealing to the public some of the “near misses”—or narrowly averted cyberattacks they have countered. These public statements about averted cyberattacks and defensive victories are useful as a starting point for thinking about how to communicate and measure the impacts and sophistication of not just cyberattacks but also cyber defense. But they also illustrate how far the cybersecurity community has to go in measuring effective cyber defenses.

Measuring and communicating defensive victories in cyberspace remains a significant challenge. Successful attacks are often visible and generate their own headlines, without government officials needing to issue formal statements, but successful defense is demonstrated primarily by the absence of any headline-making attacks. And even when that absence gets noticed and discussed by the media and the general public—as it certainly did in the case of Russia and Ukraine—there’s no way for outside observers to tell whether it’s due to a lack of attempted attacks or a robust defensive posture. These challenges are not unique to cybersecurity—it’s always tricky to measure the impact of security measures when success hinges on eliminating or reducing relatively rare, large-scale threats. But there are characteristics of cyber threats—and defenses—that make this type of analysis especially difficult and contentious.

When Ukrainian officials announced in April that they had stopped the cyberattack directed at their power grid, they went to great lengths to describe just how advanced the attack was and how much damage it could have done, had it been successful—potentially cutting off power for two million people. Understanding the stakes and potential consequences of an averted cyberattack is important, but that understanding is more powerful if it includes some sense of how close the attack actually came to succeeding. When it comes to breaches of physical security, we often have some intuition about how close the perpetrators got to their target, how dangerous the weapons they were planning to use were, and what stage of an attack they had reached by the time they were stopped. With cyberattacks, all of those questions are more slippery and open to interpretation. Had Russian hackers only gotten as far as writing the malware needed to breach the Ukrainian grid before the computer systems were patched? Had they actually installed any of that malware on Ukrainian critical infrastructure networks? And if it could be thwarted before causing any harm, was that malware really any good anyway?

These are not idle questions, nor are they easy ones. They lie at the heart of trying to understand whether we are—possibly for the first time ever—witnessing a period of prolonged, successful cyber defense against state-sponsored attacks, or a period of surprisingly half-hearted cyber offense from a state long regarded as one of the most sophisticated and aggressive actors in this domain, or some combination of the two. To understand how strong our current cyber defenses are, we also need to know at what stage of Russian cyber operations those defenses are intervening and how serious the attacks coming out of Russia are. According to a pair of recent reports from security researchers at ESET and Dragos, the malware used to target the Ukrainian energy sector in April was technically sophisticated but was intercepted before it was deployed. This suggests that Russia retains talented hackers with intimate knowledge of industrial control systems, but that the weak operational security of the Russian government is undermining its cyber operations. It might also indicate that Russia’s cyber capabilities are currently strongest when it comes to writing malware and relatively weaker when it comes to identifying the vulnerabilities and marshaling the resources needed to distribute that malware so it can infect targeted systems. In other words, there are reasons to believe that Russia’s cyber operations are uneven right now, rather than dormant, and that other countries are benefiting significantly from its inability to keep its own cyber capabilities and malware secret. That failure offers defenders an important head start in protecting their networks and systems.

Measuring success in cyber defense

The wide range and varying success of Russian activity in cyberspace illustrate the difficulty of measuring defensive success. In January, ahead of the invasion, dozens of Ukrainian government websites were defaced, and the following month, several more websites went down and wiper malware, designed to delete data, was detected on hundreds of Ukrainian computers. On the day of the Russian invasion, a Russian cyberattack disrupted the network of satellite broadband provider Viasat, which may have played a role in degrading Ukrainian military communications. As military operations continued, U.S. officials described Russian forces using cyberattacks in conjunction with kinetic ones. But these were relatively small-scale breaches compared to the previous attacks on the Ukrainian power grid, or the infamous NotPetya malware that Russia released via a compromised Ukrainian accounting software update in 2017, causing billions of dollars’ worth of damage and disruption worldwide. Moreover, as the conflict escalated in March and April, reports of successful cyberattacks seemed to fall off rather than increase in frequency or severity. An April report by Microsoft indicated that there were 22 successful, destructive cyberattacks in Ukraine during the first week of the war at the end of February, but fewer than 10 in each of the following weeks in March and April, even as the number of attempted cyber operations by Russia appeared to be increasing. While the United States Department of Homeland Security warned U.S. companies and critical infrastructure providers that Russia might use cyberattacks to retaliate for U.S. sanctions, in the weeks and months following those sanctions, no such attacks materialized in a way that had any major, public impact.

This (incomplete) catalogue of Russian cyber activity—or relative inactivity—offers Western officials the opportunity to spin a narrative of defensive success, but outside observers have a hard time telling whether the relative quiet reflects effective defense, a Russian decision against deploying its most sophisticated weapons, or a choice to hold those weapons in reserve. The relative “sophistication” of Russian attacks matters for the purposes of assessing the quality of Ukraine’s defenses. If Russia is deploying its most sophisticated weapons—typically defined as those exploiting one or more “zero-day” vulnerabilities, flaws in hardware or software unknown to their manufacturers—and Ukraine is managing to repel them, then that would be evidence of a strong defense indeed. If the attacks are less sophisticated and Ukraine is still managing to repel them, that too points to a quality defense but is perhaps less of an accomplishment.

The presence of zero-days provides a simple way of assessing the sophistication of a cyberattack. Zero-days indicate how much time and money went into developing the attack—finding bugs no one else has found before is much more work and much more expensive than reusing existing malware—and malware that exploits zero-days is almost by definition harder to defend against than other attacks. Using a zero-day means no one has patched the vulnerability being exploited because no one knew about it prior to the attack. To assess the sophistication of a given attack, analysts also look at the extent to which it spreads laterally through infected systems, its ability to operate covertly without triggering detection mechanisms, and how well tailored it is to execute a specific operation without damaging other, peripheral computer systems or functions. But doing all of those things is often, at least in part, a question of how good—and how new—the vulnerabilities being exploited are: Do they allow the malware to spread quickly across a network? Do they provide the attackers with the ability to sabotage or evade detection systems?
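To make those criteria concrete, here is a minimal scoring sketch in Python. Everything in it is hypothetical: the fields, the weights, and the example profiles are invented for illustration and do not reflect any analyst’s actual methodology, only one rough way the traits described above could be combined for comparison.

    from dataclasses import dataclass

    @dataclass
    class AttackProfile:
        """Observable traits of a piece of attack tooling (hypothetical fields)."""
        zero_days_used: int       # previously unknown vulnerabilities exploited
        spreads_laterally: bool   # can move between hosts on its own
        evades_detection: bool    # disables or sidesteps monitoring tools
        narrowly_targeted: bool   # tailored to one target rather than sprayed widely

    def sophistication_score(profile: AttackProfile) -> int:
        """Toy composite score: zero-days dominate, other traits add smaller
        increments. The weights are invented for illustration only."""
        score = 3 * profile.zero_days_used
        score += 2 if profile.spreads_laterally else 0
        score += 2 if profile.evades_detection else 0
        score += 1 if profile.narrowly_targeted else 0
        return score

    # Reused commodity malware versus a tailored implant exploiting two zero-days.
    commodity = AttackProfile(0, True, False, False)
    tailored = AttackProfile(2, True, True, True)
    print(sophistication_score(commodity), sophistication_score(tailored))  # 2 11

Even this toy version captures the intuition in the paragraph above: the presence of zero-days moves the needle far more than any other single trait.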

But this does not mean that unsophisticated attacks don’t matter—especially for defenders. Unsophisticated cyberattacks are often successful, and perpetrators can and do cause a lot of damage by reusing old vulnerabilities that their victims haven’t bothered to patch, and in some cases even old code that has been deployed before. That’s why so many governments are so focused right now on encouraging companies and agencies to install security updates and make sure that all of their software and systems are fully patched. Those patches can prevent a lot of breaches—but not the most sophisticated ones, the ones that exploit bugs that haven’t been patched or even detected by the targets.

Answering the question of how good our cyber defenses currently are against Russian cyberattacks will require a great deal more information about what these attempted attacks look like and how they are being mitigated. There is a range of proposed cybersecurity metrics for measuring the resilience and security of computer networks. These include how quickly a computer system can recover from various types of interference, how long it takes an organization to detect an intrusion, and the percentage of devices compromised or infected with malware in any given incident. Other proposals have focused on reporting information and metrics around cybersecurity “near misses,” which can be trickier to measure because there often is no downtime or other clear, measurable damage. However, many other metrics still apply in these cases: How many servers did the perpetrators’ malware spread to before it was detected? Which segments of the network were they able to penetrate? How much data were the perpetrators in a position to destroy, exfiltrate, or alter prior to their detection? Were they able to evade or bypass any existing security controls or monitoring systems?
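As a rough illustration of how some of these figures might be computed, the sketch below (in Python) aggregates a handful of per-incident measurements: dwell time before detection, time to recover, the fraction of hosts the intruder reached, and how many network segments were penetrated. The record structure, field names, and numbers are all hypothetical; real incident data would be messier and far harder to reconstruct.

    from dataclasses import dataclass
    from datetime import datetime
    from statistics import mean

    @dataclass
    class Incident:
        """One intrusion or near miss, reconstructed after the fact (hypothetical schema)."""
        first_compromise: datetime   # when the intruder first gained a foothold
        detected: datetime           # when defenders noticed the intrusion
        contained: datetime          # when the intruder was removed and operations restored
        hosts_reached: int           # servers/endpoints the intruder touched
        hosts_total: int             # size of the monitored fleet
        segments_penetrated: int     # distinct network segments entered
        controls_bypassed: int       # security controls evaded before detection

    def dwell_time_hours(inc: Incident) -> float:
        """How long the intruder operated undetected (a common 'time to detect' metric)."""
        return (inc.detected - inc.first_compromise).total_seconds() / 3600

    def recovery_time_hours(inc: Incident) -> float:
        """Time from detection to containment and recovery."""
        return (inc.contained - inc.detected).total_seconds() / 3600

    def compromise_rate(inc: Incident) -> float:
        """Fraction of the fleet the intruder reached before being stopped."""
        return inc.hosts_reached / inc.hosts_total

    def summarize(incidents: list) -> dict:
        """Aggregate per-incident numbers into the kinds of figures a defender might report."""
        return {
            "mean_dwell_time_hours": mean(dwell_time_hours(i) for i in incidents),
            "mean_recovery_hours": mean(recovery_time_hours(i) for i in incidents),
            "mean_compromise_rate": mean(compromise_rate(i) for i in incidents),
            "max_segments_penetrated": max(i.segments_penetrated for i in incidents),
        }

    # Invented example: a near miss detected after six hours on two of 500 hosts.
    example = Incident(
        first_compromise=datetime(2022, 4, 8, 3, 0),
        detected=datetime(2022, 4, 8, 9, 0),
        contained=datetime(2022, 4, 8, 15, 0),
        hosts_reached=2, hosts_total=500,
        segments_penetrated=1, controls_bypassed=0,
    )
    print(summarize([example]))

None of these numbers says anything on its own about how close an attack came to succeeding, but tracked consistently across incidents they would give defenders, and eventually the public, something concrete to compare.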

Communicating about the effectiveness of our cyber defenses means explaining just how far an intrusion was able to proceed before it was cut off and, by extension, how close it came to being successful. This information can be tricky to communicate without also revealing sensitive details about the defenders’ security posture and intelligence streams, so we shouldn’t be surprised that there’s a lot we still don’t know about how, exactly, Russian cyberattacks are being thwarted and how far they managed to spread before being detected and contained. But over time, if and when that defensive information becomes less relevant and sensitive, we should continue to push for it to be released in forms that describe explicitly what capabilities the attackers had within targeted systems before their presence was detected, how exactly that detection occurred, and what steps defenders took to remove the intruders. These details will offer a clearer window into not just how successful cyber defense efforts have been but also how robust and replicable that success is likely to be against future incidents, and what defensive gaps we might aim to fill moving forward.
