4 April 2017

Want to fix cybersecurity? Think about worst-case scenarios first


Illustration by Alicia Tatone
 
Scenario thinking sketches out future cybersecurity problems and helps policymakers begin addressing tomorrow’s digital dilemmas.

Cybersecurity depends on managing the consequences of a single powerful insight: The ongoing and ever-increasing demand for features, performance, and extensions of digital capabilities will expand to fill the space of what is technically possible, and then go beyond it. More than anything else, this is what drives innovation and the rapid rate of change that people and institutions have had to grapple with in the digital world.

It also means that the digital realm will evolve very much like other security realms have evolved in human affairs, but more quickly, with ever-changing vulnerabilities that will never fully be mastered. In other words, bad (illicit) actors coevolve with good, and the meanings and identities of “good” and “bad” are never settled. Threats don’t disappear; they change shape. Since the illicit players don’t need to follow rules or norms other than the ruthless pursuit of profit (for criminals) and strategic advantage (for states), they have a structural advantage and move faster and more boldly than the legal players.

Policymakers struggling with the consequences of digital insecurity need ways to get out ahead of this game rather than continuing to play catch-up. This essay explains one process for doing that — the development of scenarios that sketch a future landscape for the cybersecurity problem space. Scenario thinking is a systematic methodology that aims to specify the most important and most uncertain causal drivers in a system at the same time. It then combines these drivers together in models that explore unexpected pathways of change, using narratives to highlight what could be significantly different and how an observer would know in advance that those differences were beginning to emerge. Policymakers often use scenarios to rehearse how they might respond to “what if” types of questions, but in the cybersecurity world the need is even more urgent. The key question that scenario thinking can help address for policy is this: If X or Y happens, what will governments wish they had in place at that moment to maximize the upside and minimize the threat from the emerging digital environment?

I consider the rationale for scenario thinking in cybersecurity as well as the specific logic and consequences of a single scenario, in which stock market valuations for data-intensive companies rapidly decline as a result of bursting market bubbles. That is not a prediction, but rather a logical possibility with significant and surprising consequences for the cybersecurity environment. What will policymakers need to know and decide, and what tools will they need to have in hand at that moment? Good policy will need to consider not just one but a set of scenarios in order to design in advance interventions and incentives that will succeed — or at least not make the security situation worse — across the evolving landscape of possibilities. The digital world simply moves too quickly to wait, observe, analyze, and react.

Past and future cybersecurity

Take a moment to consider the word “cybersecurity” and the assumptions it embeds. The prefix “cyber” has its root in the term cybernetics, a science of control for complex systems, which in turn derives from a Greek term for governance or “steering.” The irony should be palpable, because the internet of today has escaped most conventional forms of steering and control. This matters, because it points to a core fallacy at the heart of much of today’s thinking about security in the digital world. Using the term cyber reinforces reflexive instincts to seek control of a system that won’t succumb to control.

Now consider the term “security.” Attacking and defending today’s (and tomorrow’s) computers and networks is part of what that word points to, but only part. As most things and most people are or will soon be connected to digital networks, to think of cybersecurity as a technical discipline that focuses on the protection of computer networks is just too limiting. Technology is a necessary but not nearly sufficient condition for what we will want to achieve. Security starts in a different place, by identifying the most important values that people want to preserve everywhere that human beings and digital machines interact. And so it seems certain to me that cybersecurity will soon undergo a conceptual reformulation much like what happened to national security at the end of the Cold War, when an agenda that had focused narrowly on nuclear deterrence grew to encompass a much broader set of issues, including environmental security, economic security, and human security. But this time, we’ll have to grapple with the diverse manifestations of that much broader agenda in the digital realm.

As this happens, there will be broader public recognition of the fact that the seemingly distinctive digital world of computing and connectivity is no longer separable from conventional economic, social, political, and military processes. Today, it’s still common to hear phrases like “digital advertising,” “electronic medical record,” “e-commerce,” “online dating,” and “cyberconflict.” In a few years (or less) the prefixes (digital, electronic, and online) will seem unnecessary and anachronistic. The point is that the internet is indeed everywhere — but more importantly, it is inextricable and inseparable from society. This means issues that today are often treated as technical and specialized — like the effects of automation and robotics on labor markets, or the safety of artificial intelligence systems whose alignment with human goals cannot be guaranteed — will become core cybersecurity issues and, by extension, core security issues for societies, pure and simple.

That’s why the emerging cybersecurity agenda goes far deeper than stolen credit card numbers, personnel files, or even intrusions into industrial control systems. It makes future cybersecurity one of a very small number of existential risks to human societies, more like climate change in that respect than almost any contemporary discussion acknowledges. And that happens without a “cyber Pearl Harbor” or other military cataclysm.

Take this argument seriously, and you quickly recognize that the cybersecurity research and policy communities will very soon confront a much more diverse set of problems and opportunities than they do today. That complicates the picture, but it doesn’t have to be bad news. Cybersecurity experts have for years tried to raise the alarm and drive people, firms, and governments to confront candidly and directly our dependence on digital systems that are structurally at risk. A broader problem set will make those risks more difficult to ignore. What’s needed then is a tangible way for decision makers to see that broader landscape in a more holistic and anticipatory light, so they act in advance rather than wait for crises to force reactions.

Getting ahead of the game

To shed some light on that emerging landscape, the Center for Long Term Cybersecurity at Berkeley developed an approach to modeling what cybersecurity could mean in the future (with an original target date of 2020). We used a variant of scenario thinking, a method originally developed in the energy sector during the 1970s for exploring, expansively and purposefully, how a set of critical uncertainties combines to define a possibility space and long-term strategic options.

The point of scenario thinking is to see more clearly not what aspects of the future landscape stay the same, but what could be different, particularly where change is discontinuous or disruptive. Scenario thinking works like any model — by simplifying reality, exaggerating its most important elements, and adding just enough complexity back to make the model useful for analysis and decision-making. The mind almost naturally wants to treat scenarios as predictions, but that’s a mistake: No model we know of can produce the single answer to the question, “What will cybersecurity be in the future and what should we do about it?”

What scenario models can do, however, is generate hypotheses that mark out logically consistent pathways of change and describe future possibilities.
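
To make the combinatorial step concrete, here is a minimal, purely illustrative sketch (not the CLTC process itself, and using made-up driver names): a handful of hypothetical critical uncertainties, each with two plausible resolutions, are crossed to enumerate candidate scenario skeletons that narrative work would then flesh out or discard.

```python
from itertools import product

# Hypothetical critical uncertainties, each with two plausible resolutions.
# The driver names are illustrative placeholders, not the actual CLTC driver set.
drivers = {
    "data_asset_valuations": ["keep rising", "collapse"],
    "regulatory_posture": ["laissez-faire", "aggressive intervention"],
    "attacker_business_model": ["theft and extortion", "quasi-legitimate markets"],
}

# Cross the driver resolutions to enumerate candidate scenario skeletons.
# Narrative work then keeps only the combinations that are internally consistent.
for i, combo in enumerate(product(*drivers.values()), start=1):
    skeleton = dict(zip(drivers.keys(), combo))
    print(f"Scenario skeleton {i}: {skeleton}")
```

The value is not in the enumeration itself but in forcing explicit attention to combinations that no one would otherwise bother to imagine.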

Our measure of success, then, is not accurate prediction. It is enabling decision makers to gain insight into how the cybersecurity landscape is changing, insight that in turn leads to better decisions about what needs to be done (and by whom) to position ourselves for the cybersecurity challenges and objectives that will emerge just over the horizon.

With a disciplined process using input from a broad variety of sources and experts, we developed a set of five scenarios that we call “Cybersecurity 2020.” Consider one of these scenarios, “Bubble 2.0.” This scenario models a future in which many of today’s data-intensive internet companies (and the neutral platforms and advertising revenue underpinning them) corrode and collapse as a result of perceived overvaluation in equity markets (analogous to the dot-com bust of 2000).

A scenario for 2020: Bubble 2.0

The basic causal path for Bubble 2.0 is simple. Many of today’s most successful internet firms are principally valued not on the basis of the services they provide, but on the presumed future value of the data they collect as they provide those services. That is simply a shared belief held by what we refer to as “the market.” Shared beliefs like these can evaporate very quickly when underlying adverse trend lines and/or an exogenous shock cause the market to “decide” they were not realistic. That is precisely what happened to dot-com firms in 2000. Beliefs about value (that in hindsight almost always look bizarre) began to crack and then spiraled downward in a self-reinforcing process; the results were devastating. What if a similar unravelling were to hit today’s data-intensive internet firms?

This is a well-understood financial panic dynamic — but the fact that it is understood may not make much difference in how it plays out. Imagine the media declaring a company like Yahoo or Twitter to be this decade’s equivalent of Bear Stearns, and a company like Facebook the next Lehman Brothers. Sequoia Capital (a major venture investment firm that saw and understood the dot-com crisis relatively early) might release a slide deck titled “Good Times RIP 2.0,” reminding industry insiders of the (in)famous 2008 slide deck it presented around Silicon Valley, which signaled life support for that generation of companies. The race for innovation and growth would then give way to survival mode as companies do everything they can to keep themselves solvent. Good Times RIP in 2008 advised firms to “get real or go home,” which meant (among other urgent crisis-management actions) focusing on must-have products, raising cash quickly, and treating every dollar as if it were your last. Imagine that kind of message being delivered to today’s data-intensive firms in the context of a stock price crash.

The critical scenario question then becomes, what “recoverable assets” does this generation of internet companies own that can be sold to raise cash for survival? The answer is, of course, their data sets.

It would be a major discontinuity for the cybersecurity agenda (and coincidentally a great irony) if many data sets that companies now spend vast resources to protect from thieves trying to gain access to their networks were suddenly made available in distressed-asset auctions or sales on wide-open markets. Unlike today’s cat-and-mouse security game between attackers and defenders, the new game would play out in markets. But these would be rather opaque and inefficient markets, with a “war for data” waged under some of the worst possible conditions: financial stress, ambiguous property rights, and uncertain regulation.

A new game

This creates a new cybersecurity agenda. A criminal organization or adversarial government that had been trying to access a data set — perhaps intellectual property from a technology firm, or an extensive social graph from a media company — could put away its spear-phishing tools and code exploits. Why break into systems and steal data when you can buy it from firms on the brink of bankruptcy at fire-sale prices? In colloquial language, nobody steals a car from a salvage yard.

This creates interesting new opportunities for criminal organizations. Some might choose to “go legitimate.” Some might consolidate control over valuable datasets in combinations they could not previously have stolen. Some might try to quickly and boldly attack datasets as they land in the possession of new owners, who are unlikely to be ready with adequate protections for their just-acquired assets. And some might act as cut-out intermediaries for adversarial governments that want access to national security or competitiveness-relevant data sets, but would not want to be identified as the purchasing party even if that purchase were formally legal.

How would governments respond? In the immediate crisis period a government might face high-stakes choices about whether to intervene in markets and lock up data sets from private companies that would otherwise go up for sale. It’s predictable that governments would be interested in ring-fencing defense, critical infrastructure, and government employee data. But other categories might be more surprising and confusing. Is it possible that data on farm locations and product lines could give rise to a food security question? Could data on high-performing university students be considered a source of leverage in the hands of foreign intelligence agencies to recruit effective spies? Lobbying around these issues would be fast, furious, and intense — as would, potentially, covert counter-lobbying by commercial interests, adversarial states, and possibly criminal networks.

One interesting strategic option for large data-intensive firms might be to actively seek government rescue, as auto companies and banks did in 2008 and 2009. Could a firm like Google argue that it was simply illiquid, not insolvent, and “too big to fail”? Would it echo the General Motors approach of 2009 and claim that millions of US jobs depend on its survival? Might a large firm go even further, and challenge the US government by threatening that in the absence of a bailout it would have no choice but to sell its assets to a foreign, possibly Chinese, competitor?

The US government would have to listen seriously to these arguments. The economic and national-security policy communities would push for governments to act as “data buyer of last resort.” Protecting jobs, maintaining “systemic risk entities” in the data economy, and keeping valuable data assets out of the hands of foreign companies and governments would all lean in favor of government intervention. The expressed intention (as with GM in 2009) would be for the government to buy up the data assets, hold them through the crisis period long enough for markets to stabilize, and then resell them to legitimate private firms on the other side. It’s likely that the GM success story would be cited as a precedent for this approach.

In the interim period of ownership, though, the federal government could find itself in a very awkward place regarding property rights — a much more complicated situation than was the case with GM’s industrial assets. Datasets that citizens felt OK about companies like Facebook having might suddenly be not OK when they are held in escrow by a government agency, at least in the US. And what of data about foreign citizens and companies held by American firms, particularly those subject to the new transatlantic data protection and privacy protocols? The US government would certainly go to great lengths to assure the world that it had only a financial presence in data markets, and even then only a temporary one, and that it would not do anything with the data that it temporarily owned other than warehouse it — but who would really have confidence in that assurance?

This could very well be a watershed moment for privacy debates. As personally identifiable information is sold to new owners, the people who were the source of that data would more often than not react with astonishment: “I didn’t agree to have my data sold at bankruptcy to a government or firm I’ve never heard of!” The truth is that in most cases they did agree to it, simply by accepting common terms of service agreements. The controversies would be even more difficult to manage when de-anonymization hits combinations of datasets that were thought to have been rendered “safe” through (imperfect) anonymization protocols.
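
To see why combining “safe” datasets undoes anonymization, consider a minimal, entirely hypothetical sketch of a linkage attack: two datasets, each scrubbed of names on its own, are joined on shared quasi-identifiers (here, ZIP code and birth year), and the supposedly anonymous records pick up names again. All of the data below is invented for illustration.

```python
import pandas as pd

# Hypothetical dataset A: bought in a distressed-asset sale, names removed.
health = pd.DataFrame({
    "zip": ["94704", "94110", "10027"],
    "birth_year": [1984, 1991, 1975],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# Hypothetical dataset B: a public or commercial record that still carries names.
voters = pd.DataFrame({
    "name": ["A. Jones", "B. Smith", "C. Lee"],
    "zip": ["94704", "94110", "10027"],
    "birth_year": [1984, 1991, 1975],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses:
# neither dataset was identifying on its own, but the combination is.
reidentified = health.merge(voters, on=["zip", "birth_year"])
print(reidentified[["name", "diagnosis"]])
```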

What’s to be done?

Repeating the point: scenarios like these are not predictions. Their purpose lies in marking out a new problem set that could very well land at the center of the cybersecurity agenda, and in asking now, in advance, what actions can be taken to mitigate the downside risks and seize plausible opportunities.

And this scenario points to one very important area for action that would not normally appear on the list of cybersecurity policy options. It shows that we need extensive markets for data that function well, even under intense stress. That’s a serious problem of market design that market forces alone won’t solve, or at least not in the relevant time frame. Today we have licit markets for very specific large data sets, mainly related to advertising systems; illicit markets that sell stolen credit card and other personal information outside the law; and a variety of markets that try to put a value on “intangible” assets that often include data (along with other assets) but are generally recognized as highly imperfect.

None of these markets would suffice in a Bubble 2.0 type scenario. A working market that had a chance in that world would need standard pricing models for data sets, models that include terms for characteristics like quality, freshness, uniqueness, the equivalent of title risk (does the seller actually own, free and clear, the rights to the data that she is selling?), and more. And since markets like these don’t come about “naturally,” governments need to consider bringing them into being with sufficient structure and regulation.
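
What such a pricing model might look like is an open research question; the sketch below is only a toy illustration, assuming the characteristics named above (quality, freshness, uniqueness, title risk) could each be scored and folded into a discounted valuation. Every name and number in it is invented.

```python
from dataclasses import dataclass


@dataclass
class DataSetListing:
    """A hypothetical listing in a regulated data market."""
    base_value: float   # appraised value assuming a clean, current, unique data set
    quality: float      # 0..1: completeness and accuracy of the records
    freshness: float    # 0..1: how recently the data was collected or updated
    uniqueness: float   # 0..1: how hard the data is to reproduce from other sources
    title_risk: float   # 0..1: probability the seller does NOT own clear rights


def indicative_price(listing: DataSetListing) -> float:
    """Toy pricing rule: discount the appraised value by each characteristic.

    A real market would need empirically grounded models; this one simply
    multiplies the base value by the quality, freshness, and uniqueness
    scores and by the probability that the title is actually clear.
    """
    return (listing.base_value
            * listing.quality
            * listing.freshness
            * listing.uniqueness
            * (1.0 - listing.title_risk))


# Example: a large but somewhat stale data set with real doubt about ownership.
listing = DataSetListing(base_value=50_000_000, quality=0.9,
                         freshness=0.6, uniqueness=0.8, title_risk=0.25)
print(f"Indicative price: ${indicative_price(listing):,.0f}")
```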

Today’s cybersecurity policy community needs to confront this issue — and other issues like it that our scenarios identify. The good news is that the market design problem this scenario poses is the kind of inspirational research question that academics love. It is also a problem that a forward-looking government would want to start working on before the market was actually needed. And it is a problem with lots of money at stake, including potentially large rewards for an organization that could put into place and operate such a market, or even pieces of it.

It’s an area where we can make progress in cybersecurity beyond the standard technical attack-and-defend dynamics — and it might turn out to be as important as, or even more important than, more familiar targets for action like new authentication mechanisms, secure coding systems, or breach notification laws. We should get started now.

Steven Weber is the director of the Center for Long Term Cybersecurity and a professor in the School of Information and Department of Political Science at the University of California, Berkeley.
