23 December 2020

A Hack Foretold

By FRED KAPLAN

The most stunning thing about Russia’s latest hack of 18,000 computer networks—among them those of at least six federal agencies, including the State Department, the Homeland Security Department, and the National Nuclear Security Administration—is not how sophisticated the attack was. It’s that these sorts of attacks are still happening—are still possible, in some cases easy—and that months can go by with nobody noticing them.

It’s not as if the hacking threat is new; it’s been going on—its scope and possible fixes have been known—for a very long time.

As far back as 1997, a presidential commission concluded that the “information networks” controlling much of our economy and society were vulnerable to “cyber attacks.” The ability of foreign powers to launch these hacks and do us harm, the report went on, “is real; it is growing at an alarming rate; and we have little defense against it.”

One year before that report, the Pentagon’s Defense Science Board released a report on “Information Warfare Defense,” warning that our “increasing dependency” on vulnerable networks amounted to a “recipe for a national security disaster.” Its authors recommended more than 50 actions to be taken over the next five years, at a cost of $3 billion. None of the actions was taken.

These reports only echoed a directive that President Ronald Reagan signed in 1984, called “National Policy on Telecommunications and Automated Information Systems Security,” which concluded that new computer networks—then just hitting the market—were “highly susceptible to interception, unauthorized electronic access, and related forms of technical exploitation” by foreign spies, “terrorist groups and criminal elements.” (The study was prompted by the movie WarGames, about a tech-whiz teenager who unwittingly hacks into the North American Air Defense Command’s early warning computers and nearly sets off a nuclear war. After Reagan saw the movie, he asked his top general, “Could something like this really happen?”)

The awareness that something like this could happen dates all the way back to the dawn of the internet, when it was a Defense Department research-sharing project called the ARPANET. In April 1967, just before the ARPANET’s rollout, an engineer named Willis Ware wrote a paper called “Security and Privacy in Computer Systems” (it was classified at the time but has long been made public) warning that once users could access data from multiple locations, people with certain skills could hack into a network—and after hacking into one part of the network, they could roam at will.

Stephen Lukasik, the ARPA official who supervised the ARPANET program, took Ware’s paper to his team and asked what they thought. The team was annoyed. They begged Lukasik not to saddle them with a security requirement. It would be like telling the Wright brothers that their first airplane at Kitty Hawk had to fly 50 miles while carrying 20 passengers. Let’s do this step by step, the team said. It had been hard enough to get the system to work at all; the Russians wouldn’t be able to match it for decades.

It did take decades—about three decades—for the Russians, then the Chinese and others, to develop their own systems along with the technology to hack America. Meanwhile, vast systems and networks would sprout up throughout the U.S. and much of the world, without any provisions for security. Some safeguards would be retrofitted later, but the vulnerability that Ware and the later studies described was built into the technology.

That’s the root of the problem we’re seeing today.

By the time of the 1997 presidential report, it was known that this problem wasn’t merely theoretical. That same year, 25 members of a “Red Team” inside the National Security Agency launched an exercise called Eligible Receiver, in which they hacked into the computer networks of the Defense Department, using only commercially available equipment and software. They hacked every single system, shutting down some, sending false orders to others, generally fomenting mayhem in the military’s entire command-control-communications system. (This was all done legally, with full supervision by the secretary of defense and Pentagon lawyers.)

A few months later, in an operation called Moonlight Maze, officials detected someone hacking into networks at several military facilities and stealing data. The hack was traced to the Russian Academy of Sciences. In 2001, the Chinese were first detected hacking into our military networks. In 2008, the Russians penetrated Central Command—which runs U.S. military operations in Iraq and Afghanistan—and, once inside, hacked into classified networks, the first time that had happened, at least as far as we know. Today the armies of more than 20 nations have cyberunits with at least some offensive capability.

It should be noted that the United States has been hacking into Russian, Chinese, and other networks for a very long time as well. A department of the NSA, called Tailored Access Operations (though the name has since changed), is dedicated to “cyberoffensive operations.”

The point is that the authorities have known about hacking for a long time. Whole bureaucracies have been established, and presidential directives promulgated, to enhance cybersecurity—and some of those measures have been effective. Still, the contest between cyberoffense and cyberdefense is a never-ending race in which the offense has the advantage, so the defense can never let up its guard.

While security is a lot better than it used to be, vast networks remain exposed in one way or another, and dedicated hackers who very much want to get inside those networks—and who have the resources of a nation-state behind them—eventually figure out a way.

The recent discovery that Russian intelligence has penetrated SolarWinds Orion, a network management system used by 300,000 customers throughout the federal government and in hundreds of corporations—not just in the U.S., but in seven other countries—is one such example. It is more alarming than most for a few reasons: First, its scope is much broader than that of most previous intrusions. Second, it went undetected for eight months and, even then, was discovered not by the government but by a private cybersecurity company, FireEye, which the Russians made the mistake of hacking as well. FireEye detected and analyzed the intrusion—a task that took 100 of the firm’s analysts—and released the findings to the world.

The third big distinction of this hack is that it was embedded in a routine SolarWinds software update. When clients installed the update, they unwittingly downloaded malware, which allowed the hacker to monitor—and, in some cases, alter—data across the entire network. SolarWinds’ update server, it turned out, had at one point been protected by a very flimsy password (“solarwinds123”). As has been the case all too often throughout the internet age, the watchword was efficiency, not security.
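To make the supply-chain mechanism concrete, here is a minimal sketch in Python, with entirely hypothetical names and contents (not SolarWinds’ actual code or process). It illustrates why a compromise inside the vendor’s own build pipeline is so hard to catch downstream: the tainted package is the one the vendor itself hashes and signs, so a routine integrity check on the client side passes and nothing looks wrong.

# Illustrative sketch only: hypothetical names and contents, not any vendor's real code.
import hashlib
import hmac

def verify_update(package_bytes: bytes, published_digest: str) -> bool:
    # Compare the downloaded package against the digest the vendor publishes.
    actual = hashlib.sha256(package_bytes).hexdigest()
    return hmac.compare_digest(actual, published_digest)

# If the implant is inserted before the vendor builds and signs the release,
# the vendor ends up publishing a digest of the already-tainted package.
tainted_package = b"...update with implant..."            # hypothetical contents
published = hashlib.sha256(tainted_package).hexdigest()   # what the vendor posts
print(verify_update(tainted_package, published))          # True: the check passes

The sketch is not how any particular product works; it simply shows that client-side verification can only confirm that a package is the one the vendor shipped, not that the vendor’s build process was clean.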

There was one point, a little more than 20 years ago, when much of this problem could have been avoided. Soon after the 1997 report warning of possible “cyber attacks,” Richard Clarke, President Bill Clinton’s counterterrorism coordinator, proposed setting mandatory cybersecurity requirements for industries in “critical infrastructure”—economic sectors vital to the functioning of a modern society, including telecommunications, electrical power, gas and oil, banking and finance, transportation, water supply, and emergency services. However, most of those industries were, and still are, in the hands of private companies, and their executives—who abhorred all forms of regulation—resisted the proposal.

In 1999, Clarke drafted an alternative idea, in which all civilian government agencies—perhaps joined later by critical infrastructure companies—would be hooked up to a parallel internet, known as FIDNET (for federal intrusion detection network), with sensors wired to monitors at some government agency. If the sensors detected an intrusion, the monitors would be alerted and could shut down the network or take other action. (The military was already putting intrusion-detection devices on its networks, which are monitored by the NSA and U.S. Cyber Command, both of which are part of the Defense Department.)
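For a concrete picture of the scheme, here is a minimal sketch of the sensor-to-monitor loop the FIDNET proposal described; the names, thresholds, and “isolate” action are hypothetical illustrations, not the actual 1999 design.

# Minimal sketch of the sensor-and-monitor idea: hypothetical names and thresholds.
from dataclasses import dataclass

@dataclass
class Alert:
    agency: str      # which agency's network raised the alert
    indicator: str   # what the sensor saw, e.g. "unexpected outbound traffic"

class CentralMonitor:
    # Stand-in for the government monitoring point the proposal envisioned.
    def __init__(self, shutdown_threshold: int = 3):
        self.alerts = []
        self.shutdown_threshold = shutdown_threshold

    def receive(self, alert: Alert) -> None:
        self.alerts.append(alert)
        repeats = [a for a in self.alerts if a.agency == alert.agency]
        if len(repeats) >= self.shutdown_threshold:
            self.isolate(alert.agency)

    def isolate(self, agency: str) -> None:
        # In the real proposal, this is where the affected network could be
        # cut off or other defensive action taken.
        print(f"Isolating {agency}: repeated intrusion indicators")

monitor = CentralMonitor()
for _ in range(3):
    monitor.receive(Alert("Commerce", "unexpected outbound traffic"))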

Someone leaked Clarke’s draft to the New York Times, prompting howls of protest from legislators and civil liberties groups denouncing the idea as “Orwellian.” The idea died. Twenty-one years and a zillion hacks later, it’s a fair bet that the idea would at least be seriously debated if it were proposed today. In any case, it’s probably too late. Even among federal agencies, the internet has grown too complex for a single monitor to track.

We don’t yet know the full scope of the damage done by the SolarWinds Orion hack. It is not yet known whether it was an attack, as the term is generally understood, or a “mere” act of espionage. (For one thing, we don’t know whether the Russians merely exfiltrated data or altered it as well.) In some ways, though, the distinction between espionage and an attack has blurred. Matt Devost, co-founder of OODA LLC, a cybersecurity consulting firm, calls hacks such as these “attacks against institutions of trust”—which, over time, can corrode our society as much as a physical attack might.

And so we come back to Clarke’s initial proposal to Clinton: mandatory cybersecurity requirements for critical infrastructure, or at least for those firms that sell their wares to, or have contracts with, the federal government. The companies would resist, as they did in the 1990s. But we all know more about the problem now than we did back then. Maybe this time, the resistance could be resisted.
