22 June 2022

Achieving True Cybersecurity Is Impossible

Ivan Arreguin-Toft

Cybersecurity, the way we like to think of it, is actually impossible to achieve. That’s not to say we shouldn’t try hard to achieve it. Nor is it the same thing as saying that our costly efforts to date have been wasted. Instead, if our aim is to make our interactions in cyberspace more secure, we need to recognize two things.

First, part of our troubles has to do with a culture that defines things like success, victory, and security as dichotomous rather than continuous variables. Think of a switch that’s either on or off. Second, speed is hurting us, and calls to replace humans with much faster and “objective” machines will continue to gain momentum, putting us at extreme risk without increasing either our security or prosperity. Let me explain.

[Cyber]security is Not a Switch

In my time in Norway a few years ago, I had the great fortune to be hosted by the Norwegian Institute for Defense. As I toiled to recover the history of Norway’s experience under occupation by the Third Reich, I was able most days to join my Norwegian colleagues for a communal lunch. My colleagues did me the great courtesy of carrying on most conversations in flawless English. As an American academic accustomed to research abroad, I anticipated that sooner or later I’d encounter a classic opening sentence of the form, “You know, the trouble with you Americans is…” And after a month or so my unfailingly polite and generous colleagues obliged. But what ended that sentence has stuck with me ever since, and it underlines a core value of study abroad at the same time: “You know, the trouble with you Americans is, you think every policy problem has a solution; whereas we Europeans understand that some problems you just have to learn to live with.”

The idea that part of our mission was research intended to support policies that solved problems was never something I’d thought of as varying by culture. But as I reflected more and more on the idea, I realized that insecurity—and by extension cyber-insecurity—would be something we Americans would have to learn to live with.

This “switch” problem is mainly due to the relentless infiltration of market capitalist logic into how we frame and solve problems. For example, corporations hire cybersecurity consultants to ensure that corporate profit-making operations are secure from hacking, theft, disruption, and so on. When corporations pay money to someone to solve a problem, they expect a “deliverable”: some empirical evidence that corporate operations are now “secure.” It should go without saying that this same infiltration of corporate logic (the largely North American idea that governance would be more “effective” if run on corporate profit-making lines) has seriously degraded effective governance as well.

Cybersecurity is not a switch. It isn’t something that’s either “on” or “off,” but something we can approach if we have a sound strategy. And it is progress toward that shared ideal that we should be counting as success.

Automating Computer Network Defense Can’t Save Us, and May Destroy Us

Even if we could agree to moderate our cultural insistence on measuring success or failure in terms of decisively “solving” policy problems, we’d be left with another set of problems, caused mainly by the assertion that humans are too slow and too emotional compared to computers, which are imagined to be both fast (which they absolutely are) and objective (which they absolutely are not). We need to challenge these ideas, because together they make up a kind of binary weapon that leads us into very dangerous territory while doing little to advance us toward our ideal of “cybersecurity in our time.”

So, a first critical question is: under what conditions is speed a necessary advantage? That’s where computers come in. Few Americans will be aware, for example, that the first-ever presidential directive on cybersecurity, NSDD-145 (1984), was issued by President Ronald Reagan in reaction to his viewing of John Badham’s WarGames (1983). After viewing the film, which imagines a nascent artificial intelligence called the WOPR seizing control of U.S. nuclear launch systems and threatening to start a global thermonuclear war, Reagan asked his national security team whether the events in the film could happen in real life. When his question was later answered in the affirmative, the Reagan administration issued the NSDD. Here’s a key bit of dialogue from Badham’s film, which comes just after a simulated nuclear attack in which 22 percent of Air Force officers refused to launch their missiles when commanded to do so:
