
20 April 2017

The Zero-Day Dilemma: Should Government Disclose Company Cyber Security Gaps?

LEVI MAXEY

Few topics lend themselves to more polemics than government collection and exploitation of zero-day vulnerabilities – security flaws in commercial software and hardware not yet disclosed to the vendors – to facilitate intelligence gathering.

The choice for intelligence agencies is, in short, either to collect and retain zero-day vulnerabilities to glean crucial intelligence, or to disclose the security flaws to companies so that they can design and distribute patches.

Weighing these choices involves a number of considerations, both technological and political. But the question remains: does the U.S. intelligence community's practice of gathering – but not disclosing – zero-day vulnerabilities contribute to weaker overall cybersecurity? If so, does this negative impact outweigh the benefit such capabilities present for intelligence collection?

Critics of government use of zero-day exploits suggest that by not disclosing vulnerabilities in systems used by U.S. citizens and companies, the government is tacitly accepting their digital insecurity and intentionally leaving them vulnerable to attack by foreign governments, criminals, terrorists, and hacktivists. While the vast majority of cyber attacks involve already-known vulnerabilities, the small percentage that leverage zero-day exploits can often be the most harmful. For example, in the 2014 breach of Sony Pictures, North Korean hackers seemingly exploited a zero-day vulnerability that gave them unfettered access to sensitive data – an incident that ended with the U.S. government imposing punitive sanctions on North Korea.

Proponents of lawful government hacking, however, argue such methods allow the intelligence community to conduct its mission of gathering information to mitigate national security risks ranging from terrorism, nuclear weapons proliferation, and transnational crime to foreign espionage and adversary military operations. For example, zero-day exploits were a crucial aspect of the Stuxnet worm found sabotaging Iran’s nuclear program in 2010.

The rationale for the use of hacking to collect foreign intelligence is obvious: it is both safer and more effective than relying on human intelligence assets risking their lives on the ground, and more targeted than bulk data collection programs that threaten the privacy of innocents at scale. Some even suggest that perfect cybersecurity is impossible and that resources should instead be put into penetrating adversary networks to preempt attacks and determine their origins.

Balancing the pros and cons of disclosure is itself a challenge, as metrics of potential impact are difficult to ascertain. Yet this is essentially what the U.S. government’s Vulnerabilities Equities Process (VEP) tries to accomplish. It was first conceptualized during the final days of the Bush administration, formally written in 2010 by the Obama administration, and then “reinvigorated” in 2014 following the disclosure of the Heartbleed vulnerability and the public relations catastrophe that ensued after the Snowden revelations.

While most of what is known about the VEP comes from documents obtained through FOIA requests, Michael Daniel, the former cybersecurity coordinator at the National Security Council during the Obama administration, argues that the policy’s “strong bias is going to be that we will disclose vulnerabilities to vendors.” In fact, numbers self-reported by the National Security Agency in 2015 suggest that it discloses some 90 percent of zero-day vulnerabilities, though apparently through an internal mechanism rather than the VEP.

But while the VEP has received harsh criticism from some in the intelligence community, “it would be a mistake to say that, because the VEP is not perfect, we should get rid of it,” says Ari Schwartz, previously a Special Assistant to the President and Senior Director for Cybersecurity in former President Barack Obama’s National Security Council. “We need to continually try to improve it. There are clearly a lot of things that could be done to make the VEP better.”

Efforts that could be undertaken, according to Schwartz, include further transparency into the criteria for determining whether to disclose or retain vulnerabilities, ensuring decisions are subject to periodic review, and transferring the Executive Secretary function from the NSA to a more publicly accountable agency, such as the Department of Homeland Security.

The exploitation of zero-day vulnerabilities by U.S. law enforcement has also come under increased scrutiny. Notable events include the FBI’s use of zero-day exploits to investigate suspects involved in child sexual exploitation on Tor hidden services, and, separately, to gain access to the iPhone of Syed Farook, a suspect in the 2015 San Bernardino terrorist attack. At the same time, it should be understood that law enforcement’s use of hacking as an investigative tool is the inevitable consequence of pervasive use of end-to-end encryption, device encryption, and anonymity programs such as the Tor browser.

Another crucial component of the discussion surrounding government vulnerability disclosure policy is the so-called “grey market,” where government agencies can purchase or license exploits for zero-day vulnerabilities from private brokers. For example, it has been reported that the FBI licensed a zero-day exploit from the firm Cellebrite to breach Farook’s iPhone.

Therefore, a disclosure policy in which the government purchases vulnerabilities from zero-day brokers and then discloses them to the affected companies essentially results in U.S. taxpayers subsidizing those companies. In effect, the government is paying for the security of commercial services that should be financed through market-driven forces. The problems with devising a fair government disclosure policy are compounded by the reality that companies are often slow to patch flaws – the very purpose of disclosure – even as security flaws in commercial products become more common.

Moreover, should a government agency choose to temporarily exploit a vulnerability and then disclose it once it is no longer needed, it risks leaving a digital trail that could allow forensic investigators to reconstruct past operations.

Recent incidents could be changing the intelligence community’s calculus over whether to advise companies that their systems are vulnerable versus keeping this information classified in order to collect intelligence. For example, a group calling itself the Shadow Brokers published purported NSA hacking tools. Similarly, WikiLeaks published what it claims are CIA hacking techniques, including a number of previously unknown vulnerabilities. And in 2015, the Italian surveillance firm Hacking Team was itself the target of a hack that stole data on at least three zero-day vulnerabilities.

“The debate in the past about vulnerability disclosure has focused on the potential for independent discovery,” says Marshall Erwin, the head of trust at Mozilla and former cybersecurity and counterterrorism analyst in the U.S. intelligence community.

“If the U.S. government knows about an unpatched vulnerability, that vulnerability could be independently discovered by a foreign adversary – an event known as a collision – and used by that adversary.”

A new study by the RAND Corporation finds that foreign governments and other intruders independently discovered only 5.7 percent of zero-day vulnerabilities over the span of a year. This assessment suggests the U.S. government doesn’t necessarily need to act swiftly to disclose vulnerabilities to companies, because the odds are slim that foreign hackers will stumble onto them.
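To see what that figure implies over longer retention periods, here is a minimal back-of-the-envelope sketch – not part of the RAND study itself – that assumes the 5.7 percent annual rediscovery rate stays constant and independent from year to year:

    # Simplifying assumption (not a claim from the RAND study): the 5.7%
    # annual rediscovery rate is constant and independent across years.
    annual_rate = 0.057

    # Probability that at least one adversary independently rediscovers
    # a retained vulnerability within a given number of years.
    for years in (1, 2, 5, 10):
        p_collision = 1 - (1 - annual_rate) ** years
        print(f"{years:2d} year(s): {p_collision:.1%} chance of a collision")

Under those assumptions, the chance of a collision grows to roughly 11 percent after two years and about 25 percent after five, so even a low annual rate compounds the longer a vulnerability is stockpiled.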

However, says Erwin, “What recent incidents show is that much of the risk of non-disclosure of zero-day vulnerabilities stems from the use and mismanagement of those vulnerabilities, rather than from independent discovery.” In other words, the problem with intelligence agencies retaining zero-day vulnerabilities is not necessarily that they don’t immediately disclose them to private companies for patching; it is that they fail to secure them against theft and leaks.

“It is much more difficult to keep secrets than it used to be,” Schwartz agrees. “Government officials should not assume that they will be the only ones that know about a particular vulnerability for years as they had in the past. Leaks are much more common now.”

The point, says Erwin, is that “when agencies have not met their responsibility to protect non-disclosed vulnerabilities, they then have a secondary responsibility to help mitigate any harm they may have caused.”
