12 October 2020

WILL WE HAVE CYBERWAR OR CYBER PEACE?

RICHARD A. CLARKE

The Fates, it sometimes seems, prefer extreme outcomes. While humans usually reject predictions of futures dramatically changed from the present, information technology has produced a never-ending stream of upheavals in the economy, in warfare, and in our very way of life. Thus, cyberspace in 2030 could be a very different place from what it is today, for good or ill. How we deploy artificial intelligence and machine learning to attack and to defend networks will make the difference.

Today cyberspace is a hostile environment. Most corporations and governments have security operations centers (SOCs) that look like hospital emergency rooms doing triage, as they are hit by thousands of automated and human-directed attacks every day. To deal with this problem, Silicon Valley startups have created yet more software to prioritize security incidents and automate operational responses. Even with the added software, however, humans cannot always act with the speed and discernment necessary to respond to attacks and remediate vulnerabilities.
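
To make that automation concrete, here is a minimal sketch of the kind of triage logic such tools perform: score each alert by severity, asset value, and detector confidence, then contain the highest-risk incidents automatically and queue the rest for an analyst. The field names, weights, and threshold below are hypothetical illustrations, not any vendor's actual product logic.

```python
# Hypothetical sketch of automated SOC alert triage -- the fields, weights,
# and containment step are illustrative assumptions, not a real product.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    severity: int        # 1 (low) to 10 (critical), as reported by a sensor
    asset_value: int     # 1 to 10, importance of the targeted system
    confidence: float    # 0.0 to 1.0, detector's confidence it is a true positive

def risk_score(alert: Alert) -> float:
    """Combine severity, asset value, and confidence into one triage score."""
    return alert.severity * alert.asset_value * alert.confidence

def triage(alerts: list[Alert], auto_contain_threshold: float = 60.0) -> None:
    # Work the queue from highest to lowest risk, as a human analyst would.
    for alert in sorted(alerts, key=risk_score, reverse=True):
        score = risk_score(alert)
        if score >= auto_contain_threshold:
            # Placeholder for an automated response, e.g. isolating the host.
            print(f"AUTO-CONTAIN {alert.source_ip} (score {score:.0f})")
        else:
            print(f"queue for analyst: {alert.source_ip} (score {score:.0f})")

if __name__ == "__main__":
    triage([
        Alert("10.0.0.5", severity=9, asset_value=8, confidence=0.9),
        Alert("10.0.0.7", severity=3, asset_value=4, confidence=0.5),
    ])
```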

One reason humans cannot react quickly enough is that they are already competing against attackers that aren’t human, but rather machine-learning algorithms that have incorporated all of the tricks known to hackers and deploy those techniques at machine speed. Think of it as cyber AI that goes on the offensive. After observing network features from the outside, offensive bots make educated guesses about a network’s vulnerabilities, persistently try every attack technique until they penetrate the perimeter defenses, and then drop a payload. The payload, lines of self-executing code, defeats internal protections, finds the targeted information, and extracts it. Or, rather than merely stealing data, the algorithm may be designed to destroy data, encrypt it in a ransom scheme, or cause machines to malfunction or self-destruct.

Despite science fiction fears of Skynet and the Borg, AI has the potential to make cyberspace safer for humans. Think of it as a master cyber AI that goes on the defensive. Machine learning holds out the theoretical possibility of humans yielding control of network security management, indeed all network operations, to adaptive algorithms. Thus far, however, machine-learning techniques and narrow AI systems have only been incorporated into anomalous-activity detection, fraud prevention, and identity and access management tools. The master AI to “rule them all” hasn’t been a project any venture-capital firm or government grant-giver has been willing to fund.
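
For a sense of what that narrow, task-specific machine learning looks like in practice, anomalous-activity detection typically trains an unsupervised model on a baseline of ordinary behavior and flags whatever deviates from it. Below is a minimal sketch using scikit-learn’s IsolationForest; the session features (megabytes sent, login hour, failed logins) are assumed stand-ins for real network telemetry, not a production feature set.

```python
# Minimal sketch of ML-based anomalous-activity detection -- the features
# are assumed stand-ins for real network telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: [megabytes_sent, login_hour, failed_logins] for ordinary sessions.
normal_activity = np.array([
    [5, 9, 0], [7, 10, 1], [6, 11, 0], [4, 14, 0],
    [8, 15, 1], [5, 16, 0], [6, 13, 0], [7, 12, 1],
])

# Train only on baseline behavior; the model learns what "normal" looks like.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# A bulk transfer at 3 a.m. after many failed logins should stand out.
new_sessions = np.array([
    [6, 10, 0],      # looks routine
    [900, 3, 12],    # looks like data exfiltration
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(session, verdict)
```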

The biggest barrier has been human distrust. Executives often incorrectly intuit that having humans in the loop will raise the probability of successful defense, even though humans cannot keep up with an automated attack program. No enterprise has been willing to volunteer its operational network as a classroom for machine learning to educate itself on how to make the decisions necessary to protect the organization in real-time at machine speed.


Given the increased advantage that the offense now gets from AI, someday soon someone may feel compelled to let go of the reins and develop a master AI for defense. By 2030 such a network-defense and network-control master algorithm might greatly reduce cyber risks (and reduce the revenue realized in parts of today’s lucrative cybersecurity industry). Cyber peace might break out.

Alternatively, by 2030 we may have had our first cyberwar, a hyper-speed conflict involving widespread nation-state attacks on each other’s critical infrastructure, including telecommunications, pipelines, financial systems, and electric-power generation and transmission networks. Although this concept was first introduced to most people in movie thrillers like “Live Free or Die Hard” (2007), weaponized software exists and is in the hands of military cyber commands and intelligence units in more than a score of nations, including Russia, China, Israel, Iran and North Korea. And, of course, the U.S., which has proved willing to employ such weapons against targets including Iran and the terrorist group ISIS.

After Iran shot down a U.S. drone in 2019, President Trump called off a retaliatory bombing raid and launched a cyberattack instead. A U.S. official told The Wall Street Journal at the time that the latter didn’t involve loss of life. The belief that cyber conflict is antiseptic and creates few casualties may make leaders around the world more willing to go to cyberwar than to wage kinetic conflict. Unfortunately, the physical, financial and military damage done by a cyberattack could be so great that it would force the hand of leaders to respond with conventional weapons. Thus, cyberwar may be the entryway to broader conflict.

Some nations have already loaded their cyber weapons. Senior intelligence officials believe that foreign adversaries including Russia and China have secured hidden footholds in the U.S. electric grid and could use that access to cause blackouts in the future.

Moreover, new congressional authorities backed by presidential directives have given both the Pentagon’s Cyber Command and the CIA the authority to lace potential adversaries’ networks with a destructive program that can be activated in the event of war. While a strong case can be made for such preparation, having many nations exist in this perpetual state of high readiness creates crisis instability and an incentive to strike first.


If there were to be a full-scale cyberwar, we could expect that many parts of the U.S. would be without networked electric power for months. Software can functionally kill hardware by giving commands that cause it to self-destruct. Destroyed generators and transformers couldn’t be replaced quickly; they are made to order, and spares aren’t kept in storage. Swaths of the country would rely on a few small backup generators at hospitals. Stricken regions would descend into chaos as the thin veneer of civilization rapidly deteriorated.

Will either of these outcomes, a master algorithm that will effectively end cyberattacks or a cyberwar that leads to societal collapse, occur? The Fates, it sometimes seems, prefer extreme outcomes.

Richard A. Clarke is the co-author of “The Fifth Domain: Defending Our Country, Our Companies, and Ourselves in the Age of Cyber Threats,” and a former White House counterterrorism and cybersecurity chief.


Appeared in the October 9, 2020, print edition as ‘Cyberwar or Cyber Peace?’
