
20 July 2017

When The NSA Spots A Crack In Commercial Software – Should It Tell?


One of the few tasks the U.S. Constitution declares the federal government must perform is to provide for the common defense. That is the government's foundational purpose: to protect Americans' lives, liberty, and pursuit of happiness from those who would disrupt them. So the question of whether and when the government should disclose digital vulnerabilities should be examined from the perspective of which option does a better job of defending Americans.

First is the emotional argument that the government is putting grandma at risk by not immediately disclosing vulnerabilities. In reality, she is already at risk from the metric ton of published vulnerabilities that remain unremediated for up to a decade across the network. And grandma, with potentially limited technical skills, would not be able to shore up her defenses even once a patch is released unless a grandchild came over to help. So if the main reason to disclose all vulnerabilities is to improve individual Americans' defenses, more disclosure won't help. Figuring out how to mandate or automate patching of devices by default would be the real key to individual defense, and in turn to a stronger collective defense.

Second is the "public safety" argument. For instance, we decided in the 1980s that everyone should wear seat belts in order to protect society: no matter how safely you drove, someone else could cause you harm if they weren't driving safely. We made laws about wearing seat belts. We taught everyone how to properly wear them. We gave drivers tests to ensure that they knew the rules. Then we ticketed drivers who didn't follow the rules. With this safety problem, we clearly defined who bore the responsibility (the driver) and what happened when the rules were broken. That, in turn, drove automotive innovation to warn and automate – from flashing dashboard lights to vehicle models that fasten the seat belt automatically when the ignition starts.

That becomes an interesting intellectual debate in the cyber realm – is it the individual, the device manufacturer, the internet service provider, the government, or someone else who is responsible for ensuring that devices are patched? The problem will only get worse as the Internet of Things becomes the Internet of Everything. If we cannot figure out how to patch the known vulnerabilities already in our defenses, telling everyone the moment we find a new hole will do little to aid the nation's defense.

The third argument is that if the U.S. stopped using zero-days against adversaries, the rest of the world would follow, ultimately making the world a safer place. However, we have tried to lead "moral change" before. In the late 1920s, Secretary of State Henry Stimson declared that "gentlemen do not read each other's mail." So we shut down the Black Chamber and stopped focusing on breaking encryption, because we had just won the war to end all wars. Germany, on the other hand, began mass-producing Enigma machines to protect its communications. The U.S. had failed to create an international norm.

Today, no international norms define cyberattacks as bad. Could this change? Perhaps someday. We decided as an international community that stopping the spread of infectious disease was important; hence the creation of the World Health Organization. We decided as world citizens that chemical and nuclear weapons were evil, and now we have international treaties and monitoring programs, which work because, unlike cyber tools, those capabilities belong exclusively to nation-states rather than individuals. We decided that cluster bombs and landmines were abhorrent because the damage inflicted outweighed the advantage of using them, and the world has largely come to agree.

But because no such norms exist for cyberattacks, there is no regulation and no way to monitor whether cyber weapons are being stockpiled. In fact, there is a robust marketplace in which zero-days are found and sold. Deciding today to disclose 100 percent of discovered cyber vulnerabilities would deliberately cripple the federal government's ability to actively defend the nation and its citizens – flying in the face of what the founding fathers charged it with doing.

We cannot choose the environment we live in. We cannot control how technological innovation will expand. We cannot control other nations, and we especially cannot control non-state actors. I do not claim to know the answer to this gnarly problem; there are plenty of smart minds inside and outside government thinking about it. However, I contend that we may be focusing on the wrong metrics when weighing our decisions about disclosure.

My oath as a U.S. Army officer is to defend the Constitution against all enemies, foreign and domestic. The Constitution tells the government to defend the American people from invasion. It does not stipulate that the threat must come only from forces on land, at sea, or in the air.

To disclose everything is irresponsible. To disclose nothing is irresponsible. There has to be a middle ground that best protects the country. First principles going back to the Constitution should guide the decision on whether to disclose via the Vulnerabilities Equities Process. By returning to those first principles, we can weed out the nonsensical arguments that are based on emotion instead of fact.
