6 July 2017

The NSA Confronts a Problem of Its Own Making

BY AMY ZEGART
Recent cyberattacks show what happens when America’s secret-keepers can’t keep their secrets. 

It is hard to imagine more fitting names for code gone bad than WannaCry and EternalBlue. WannaCry is ransomware; EternalBlue is one of the exploits pilfered from the National Security Agency’s super-secret stockpile that have powered two separate global cyberattacks in recent weeks. Tuesday’s attack, which featured EternalBlue, was the second to use stolen NSA cyber tools, disrupting everything from radiation monitoring at Chernobyl to shipping operations in India. Fort Meade’s trove of coding weaknesses is designed to give the NSA an edge. Instead, it’s giving the NSA heartburn. And it’s not going away any time soon.

As with most intelligence headlines, the story is complicated, filled with good intentions and unintended consequences. Home to the nation’s codebreakers and cyber spies, the NSA is paid to intercept the communications of foreign adversaries. One way is by hunting for hidden vulnerabilities in the computer code powering Microsoft Windows and all sorts of other products and services that connect us to the digital world. It’s a rich hunting ground. The rule of thumb is that one vulnerability can be found in roughly every 2,500 lines of code. Given that an Android phone runs on some 12 million lines of code, we’re talking about a lot of vulnerabilities. Some are easy to find. Others are really hard. Companies are so worried about vulnerabilities that many—including Facebook and Microsoft—pay “bug bounties” to anyone who finds one and tells the company about it before alerting the world. Bug bounties can stretch into the hundreds of thousands of dollars.
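For a sense of scale, here is the back-of-the-envelope arithmetic that rule of thumb implies (a rough sketch only; the one-per-2,500 figure is a heuristic, and actual vulnerability density varies widely from codebase to codebase):

    # Back-of-the-envelope estimate using the figures cited above.
    # Both numbers are heuristics, not measurements.
    lines_of_code = 12_000_000       # rough size of an Android phone's software
    lines_per_vulnerability = 2_500  # the one-bug-per-2,500-lines rule of thumb

    estimated_vulnerabilities = lines_of_code // lines_per_vulnerability
    print(estimated_vulnerabilities)  # -> 4800 potential flaws in one device

Even if that heuristic overstates things tenfold, hundreds of exploitable flaws would remain in a single phone.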

The NSA, which employs more mathematicians than any organization on Earth, has been collecting these vulnerabilities. The agency often shares the weaknesses it finds with American manufacturers so they can be patched. But not always. As NSA Director Mike Rogers told a Stanford audience in 2014, “the default setting is if we become aware of a vulnerability, we share it,” but then added, “There are some instances where we are not going to do that.” Critics contend that’s tantamount to saying, “In most cases we administer our special snake-bite anti-venom that saves the patient. But not always.”

In this case, a shadowy group called the Shadow Brokers (really, you can’t make these names up) posted part of the NSA’s collection online, and now it’s O.K. Corral time in cyberspace. Tuesday’s attacks are just the beginning. Once bad code is “in the wild,” it never really goes away. Generally speaking, the best approach is patching. But most of us are terrible about clicking on those updates, which means there are always victims—lots of them—for cyber bad guys to shoot at. 

WannaCry and EternalBlue must be how folks inside the NSA are feeling these days. America’s secret-keepers are struggling to keep their secrets. For the National Security Agency, this new reality must hit especially hard. For years, the agency was so cloaked in secrecy that officials refused to acknowledge its existence. People inside the Beltway joked that NSA stood for “No Such Agency.” When I visited NSA headquarters shortly after the Snowden revelations, one public-affairs officer said the job used to entail watching the phones ring and not commenting to reporters.

Now, the NSA finds itself confronting two wicked problems—one technical, the other human. The technical problem boils down to this: Is it ever possible to design technologies to be secure against everyone who wants to breach them except the good guys? Many government officials say yes, or at least “no, but…” In this view, weakening security just a smidge to give law-enforcement and intelligence officials an edge is worth it. That’s the basic idea behind the NSA’s vulnerability collection: “If we find a vulnerability, and we alone can use it, we get the advantage.” Sounds good, except for the part about “we alone can use it,” which turns out to be, well, dead wrong.

That’s essentially what the FBI argued when it tried to force Apple to design a new way to breach its own products so that special agents could access the iPhone of Syed Rizwan Farook, the terrorist who, along with his wife, killed 14 people in San Bernardino. Law-enforcement and intelligence agencies always want an edge, and there is a public interest in letting them have it.

As former FBI Director James Comey put it, “There will come a day—and it comes every day in this business—where it will matter a great deal to innocent people that we in law enforcement can’t access certain types of data or information, even with legal authorization.” 

Many leading cryptographers (the geniuses who design secure communications systems) and some senior intelligence officials say that a technical backdoor for one is a backdoor for all. If there’s a weakness in the security of a device or system, anyone can eventually exploit it. It may be hard, it may take time, it may take a team of crack hackers, but the math doesn’t lie. It’s nice to imagine that the FBI and NSA are the only ones who can exploit coding vulnerabilities for the good of the nation. It’s also nice to imagine that I’m the only person my teenage kids listen to. Nice isn’t the same thing as true. Former NSA Director Mike Hayden publicly broke with many of his former colleagues last year. “I disagree with Jim Comey,” Hayden said. “I know encryption represents a particular challenge for the FBI. … But on balance, I actually think it creates greater security for the American nation than the alternative: a backdoor.”

Hayden and others argue that digital security is good for everyone. If people don’t trust their devices and systems, they just won’t use them. And for all the talk that security improvements will lock out U.S. intelligence agencies, that hasn’t happened in the 40 years of this raging debate. That’s right. 40 years. Back in 1976, during the first “crypto war,” one of my Stanford colleagues, Martin Hellman, nearly went to jail over this dispute. His crime: publishing his academic research that became the foundational technology used to protect electronic communications. Back then, some NSA officials feared that securing communications would make it harder for them to penetrate adversaries’ systems. They were right, of course—it did get harder. But instead of “going dark,” U.S. intelligence officials have been “going smart,” finding new ways to gather information about the capabilities and intentions of bad guys through electronic means.
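The research in question, published with Whitfield Diffie in 1976, introduced what is now called Diffie-Hellman key exchange: two parties agree on a shared secret over a public channel without ever transmitting the secret itself. A toy sketch of the idea in Python, using deliberately tiny demo numbers (real deployments use primes thousands of bits long):

    # Toy Diffie-Hellman key exchange. The values p, g, a, and b below are
    # tiny demo numbers chosen for readability, not security.
    p = 23  # public prime modulus
    g = 5   # public generator

    a = 6   # Alice's private secret
    b = 15  # Bob's private secret

    A = pow(g, a, p)  # Alice publishes g^a mod p -> 8
    B = pow(g, b, p)  # Bob publishes g^b mod p   -> 19

    # Each side combines the other's public value with its own secret.
    alice_key = pow(B, a, p)  # (g^b)^a mod p
    bob_key = pow(A, b, p)    # (g^a)^b mod p
    assert alice_key == bob_key == 2  # same shared secret, never sent over the wire

An eavesdropper sees only p, g, A, and B; recovering the shared secret requires solving the discrete logarithm problem, which is believed to be computationally intractable at realistic key sizes.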

The NSA’s second wicked problem is humans. All the best security clearance procedures in the world cannot eliminate the risk of an “insider threat.” The digital era has supersized the damage that one person can inflict. Pre-internet, traitors had to sneak into files, snap pictures with hidden mini-cameras, and smuggle documents out of secure buildings in their pant legs or a tissue box. Edward Snowden could download millions of pages onto a thumb drive with a few clicks and some clever social engineering, all from the comfort of his own desktop.

There are no easy solutions to either the technical or the human challenge the NSA now faces. Tuesday’s global cyberattack is a sneak preview of the movie known as our lives forever after.
