
5 December 2018

Principles for a More Informed Exceptional Access Debate

By Ian Levy, Crispin Robinson 

This is part of a series of essays from the Crypto 2018 Workshop on Encryption and Surveillance.

In any discussion of cyber security, details matter.

Unfortunately, it’s the details that are missing from the discussion around lawful access to commodity end-to-end encrypted services and devices (often called the “going dark” problem). Without details, the problem is debated as a purely academic abstraction concerning security, liberty, and the role of government. 

There is a better way that doesn’t involve, on one side, various governments, and on the other side lawyers, philosophers, and vendors’ PR departments continuing to shout at each other. If we can get all parties to look at some actual detail, some practices and proposals—without asking anyone to compromise on things they fundamentally believe in—we might get somewhere.

As commodity technology increasingly drives the evolution of our daily lives, and as more of our personal data, our industry and our economy move onto the internet, we will repeatedly run into the challenge of explaining complex and subtle technical concepts to non-experts. That challenge is likely to cover everything from how the internet economy could affect personal privacy, through how the mass of data our smart devices will generate affects national security, to how agencies charged with public protection can do their job in a way that meets the public’s expectations. To do that, we need open and honest conversations between experts that can inform the public debate about what’s right, and we’ll need a framework in which to hold them. We hope the U.K.’s principles for access to encrypted services may help start that off. These are not intended as general principles for government access to data covering every case, and they do not address the ‘discovery’ problem of how governments establish which services and identities are being used by criminals and other valid targets. They’re specifically for mass-scale, commodity, end-to-end encrypted services, which today pose one of the toughest challenges for targeted lawful access to data and an apparent dichotomy around security.

Law enforcement and intelligence agencies have been “going spotty” for some time, in that changes in technology continually alter what is available to them. There’s no panacea that can solve all the problems law enforcement has with access to information. This article outlines how to enable the majority of the necessary lawful access without undermining the values we all hold dear. For the purposes of this article, we’ll use the term “exceptional access” to mean a targeted government authorisation to access, with the assistance of the service provider, the data belonging to a user when needed, for example as part of a criminal investigation or to stop terrorists. It’s exceptional because almost all users are never affected by it, and because it is rare on the scale of the total number of devices and the total number of communications the platforms enable.

The authors represent both the intelligence and security missions in GCHQ. One is the Technical Director of the National Cyber Security Centre, the other the Technical Director for Cryptanalysis for GCHQ. As mathematicians we really do understand how cryptography is used in these cases, and we believe the encryption debate can be framed in a more constructive way. GCHQ also houses the National Technical Assistance Centre, the part of the U.K. government that manages targeted access to communications and complex digital forensics—including encrypted devices—for U.K. law enforcement, so we understand many of the challenges facing them. In this piece, when we say “law enforcement,” it’s shorthand for those various agencies charged with protecting the public,[1] including both law enforcement and intelligence agencies that can be authorised to access data under U.K. law.

The U.K. government strongly supports commodity encryption. The Director of GCHQ has publicly stated that we have no intention of undermining the security of the commodity services that billions of people depend upon and, in August, the U.K. signed up to the Five Country statement on access to evidence and encryption, committing us to support strong encryption while seeking access to data. That statement urged signatories to pursue the best implementations within their jurisdictions. This is where details matter, so with colleagues from across government, we have created some core principles that will be used to set expectations of our engagements with industry and constrain any exceptional access solution. We believe these U.K. principles will enable solutions that provide for responsible law enforcement access with service provider assistance without undermining user privacy or security.

Any functioning democracy will ensure that its law enforcement and intelligence methods are overseen independently, and that the public can be assured that any intrusions into people’s lives are necessary and proportionate. In the U.K., under the Investigatory Powers Act 2016, that means a Secretary of State and an independent judge must both sign off the use of the most intrusive powers. We believe this provides world-class oversight of our law enforcement and intelligence agencies. But how should we decide which countries have “good” regimes from which service providers can accept lawful orders? It shouldn’t be up to the service providers receiving these international requests to make individual judgements about countries. Surely the nations whose laws govern the companies that provide these services and devices need to make those judgements? We should be able to define a minimum standard—in terms of privacy protection, oversight and other important characteristics—that governments must meet in order to have their requests accepted by these companies in other countries. Doing this may have the welcome side effect of improving those standards around the world, as access to data would be much more limited without it. This is hard, but things like the U.S. CLOUD Act could be a model for how those judgements could be made in the future.

That gives us our first principle:

1) Privacy and security protections are critical to public confidence. Therefore, we will only seek exceptional access to data where there’s a legitimate need, that access is the least intrusive way of proceeding and there is appropriate legal authorisation.

We expect providers to validate that such an authorisation is in place, but not to try to independently judge the details of the case.

Governments and law enforcement haven’t always been clear about what an acceptable solution would look like. Collectively, we’ve lumped the various service and device problems together as a single entity called “encryption.” That’s unhelpful, as the details of each device and each service will constrain and drive particular solutions. For example, a solution for one device’s data encryption capability won’t necessarily work on another. Again, details are important. Are keys stored in a separate custom hardware device, or just separated by software running on the application processor?

It’s not just the technical details that are important. It’s also important that we’re clear about what is actually useful to law enforcement, and that’s not always been consistently explained. Without an open, consistent approach, people talk about what they think law enforcement wants, rather than what would be useful.

We also need to be clear that law enforcement has had to cope with evolving technology since technology began. From the original mechanical 19th-century Strowger exchanges, to digital telephony, to mobile phones, to “over-the-top” data-borne applications, law enforcement techniques aren’t static. Law enforcement very rarely has access to everything that would be useful, and investigators are skilful in making sure they make the most of every bit (in both senses) of information.

That gives us our next two principles:

2) Investigative tradecraft has to evolve with technology.

Law enforcement has to understand better the technology in use and how to use the data it makes available for investigative purposes. Service providers should help law enforcement understand the evolution of their products and services, so that investigators can keep current without wasting resources reverse-engineering things. There’s a corollary to this, though: We don’t think vendors should knowingly remove access to data or capabilities that are useful to public protection agencies where there’s no real customer benefit.

3) Even when we have a legitimate need, we can’t expect 100 percent access 100 percent of the time.

There may be services or devices where there just isn’t a proportionate solution that allows access in a way that protects the privacy and security of untargeted users. But these should be the exception, not the norm.

Much of the public narrative on this topic treats security as a binary property: something is either secure or it’s not. This isn’t true; every real system is a set of design trade-offs. These systems must accommodate billions of people who could lose their phones or forget their passwords (or any number of other things). Billions of people trust service providers to do a huge number of things well, such as:

- writing decent code so security isn’t undermined by trivial vulnerabilities
- making sure there’s appropriate independent vetting of critical code before it’s added to the product
- protecting development networks, so they know what’s really in the product and that it hasn’t been covertly modified by some external malfeasant
- protecting critical security artifacts like code signing keys (see the sketch below)
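A minimal sketch of that last item, in Python using the PyNaCl library’s Ed25519 signing API. The device_install function and the key-handling story are our illustrative assumptions, not any vendor’s actual update mechanism; the point is that whoever holds the signing key can produce updates that devices will accept as genuine, so losing control of it undermines every other protection on the list.

```python
# pip install pynacl
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# The vendor generates a signing key pair once. The public (verify) key
# is baked into every shipped device; the private (signing) key lives in
# the vendor's release infrastructure and must be kept secret.
vendor_signing_key = SigningKey.generate()
baked_in_verify_key = vendor_signing_key.verify_key

def device_install(update: bytes, signature: bytes) -> bool:
    """What a device does before installing an update: verify the
    signature against the vendor key it shipped with."""
    try:
        baked_in_verify_key.verify(update, signature)
        return True   # signed by the vendor key, so the device trusts it
    except BadSignatureError:
        return False  # reject anything else

# A genuine release installs...
release = b"app v2.1: fixes a parsing bug"
assert device_install(release, vendor_signing_key.sign(release).signature)

# ...but so does *any* payload signed with a stolen copy of the key,
# which is why protecting this artifact is on the list above.
malicious = b"app v2.1: now with covert data exfiltration"
assert device_install(malicious, vendor_signing_key.sign(malicious).signature)
```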

We should be honest that the systems we use today aren’t perfectly secure. We should seek to ensure that an exceptional access solution does not fundamentally change the level of trust a user needs to have in a service provider or device manufacturer. Therefore, we can, a priori, exclude certain mechanisms, giving us our next couple of principles:

4) Targeted exceptional access capabilities should not give governments unfettered access to user data.

There is no single solution to enable all lawful access, but we definitely don’t want governments to have access to a global key that can unlock any user’s data. Government-controlled global key escrow systems would be a catastrophically dumb solution in these cases. Furthermore, solutions should be designed so that the service provider, in the form of a real human, is involved in enacting every authorised request, limiting the scale of use.

5) Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.

This means not asking the provider to do something fundamentally different to things they already do to run their business.

A service provider or device manufacturer can already adversely affect the security and privacy of its users today, without building in exceptional access. Any exceptional access solution shouldn’t make that more likely, or easier for an attacker to exploit. Of course, any complex system, including an exceptional access system, could have defects, and some of those could be security vulnerabilities. Security is better when subject to peer review; security through obscurity just doesn’t work. So any solution should be subject to some form of peer review and incremental implementation. The public has been convinced that a solution in this case is impossible, so we need to explain why we’re not proposing magic. That’s different to traditional intercept, for example.

The public will also want to know how these systems are used, as it has been convinced that governments want access to every single one of these encrypted things. We should not expect the global public simply to trust that any solution is being used properly by all. Some form of public audit, one that protects investigations, is probably required. That’s hard, as the answer differs between seized devices and access to real-time services, and even across different countries’ jurisdictions. However, with a bit of work, technology solutions can assure the public of the scale of use of an exceptional access solution without damaging intelligence and law enforcement outcomes.

Our final principle is therefore:

6) Transparency is essential.

Generally, governments protect the sensitive details of their investigative capabilities so that criminals can’t easily use that information to try to evade detection. However, end-to-end encrypted services and modern devices are subject to intense research by the security community in a way that traditional communication systems aren’t. So, the details of any exceptional access solution may well become public and subject to expert scrutiny, which it should not fail. Given the unique and ubiquitous nature of these services and devices, we would not expect criminals to simply move if it becomes known that an exceptional access solution exists.

The devices and apps that we all use are also complex systems, and so will also have software defects, some of which could be security vulnerabilities. Governments are called on by the tech industry and privacy groups to have “disclose by default” processes to ensure the vulnerabilities they find are reported to the vendor and fixed. This protects the vast population of users. Incidentally, the U.K.’s vulnerabilities equities process will be published soon.

Under U.K. law, government has the power to authorize Equipment Interference.[2] That includes everything from covertly entering a suspect’s house to copy data through to more technical things like “lawful hacking.” Lawful hacking of target devices initially sounds attractive as the panacea to governments’ lawful access requirements—just hack the target’s device and get what you want. But that requires governments to have vulnerabilities on the shelf to use to hack those devices, which is completely at odds with the demands for governments to disclose all vulnerabilities they find to protect the population. That seems daft. 

Moreover, vulnerabilities can be found and exploited by anyone—not just governments—and so this will very likely engender a shady marketplace of vulnerabilities and exploit chains that would be available to anyone with the cash. There are other problems with this approach, but asking governments to rely exclusively on lawful hacking of target devices is likely to have some nasty second order effects.

Principles in Practice

So, to some detail. For over 100 years, the basic concept of voice intercept hasn’t changed much: crocodile clips on telephone lines. Sure, it’s evolved from real crocodile clips in early systems through to virtual crocodile clips in today’s digital exchanges that copy the call data. But the basic concept has remained the same. Many of the early digital exchanges enacted lawful intercept through the use of conference calling functionality.

In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system, so it really decides who’s who and which devices are involved; it’s usually the provider that introduces the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions, and it certainly doesn’t give any government a power it shouldn’t have.

We’re not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we’re normally talking about suppressing a notification on the target’s device, and only on the target’s device and possibly the devices of those they communicate with. That’s a very different proposition to discuss, and you don’t even have to touch the encryption.
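To make the shape of this concrete, here is a deliberately simplified toy model in Python, using the PyNaCl library. The directory, send and ghost names are our illustrative inventions, and real messengers use far richer protocols (ratchets, group key agreement, device verification); the sketch shows only the one property the argument relies on, namely that the provider-run identity directory decides which device keys a sender encrypts to.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Each participant has a device key pair. The provider's identity
# directory maps a username to the public keys a sender should encrypt
# to; this directory is the part the provider controls.
bob = PrivateKey.generate()
directory = {"bob": [bob.public_key]}

def send(recipient: str, plaintext: bytes):
    """End-to-end encrypt: one ciphertext per recipient device key."""
    return [SealedBox(pk).encrypt(plaintext) for pk in directory[recipient]]

# Normal operation: only Bob's device key can decrypt.
(ct,) = send("bob", b"hello")
assert SealedBox(bob).decrypt(ct) == b"hello"

# The "extra end": under a lawful authorisation, the provider silently
# adds one more key to Bob's entry. The cryptography is untouched; the
# message is still end-to-end encrypted to every listed key.
ghost = PrivateKey.generate()
directory["bob"].append(ghost.public_key)

ct_bob, ct_ghost = send("bob", b"meet at noon")
assert SealedBox(bob).decrypt(ct_bob) == b"meet at noon"
assert SealedBox(ghost).decrypt(ct_ghost) == b"meet at noon"
```

Nothing in the sketch weakens the cipher; the question for debate is the identity system’s behaviour, including how, and whether, clients are notified when a key is added.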

The problem of gaining access to a seized encrypted device is very different, and may well end up being harder to do in a proportionate way; there’s not enough research to be sure either way. The apps and services we’re talking about are usually just software, which is mutable, and there’s no technical barrier to changing them. Indeed, they’re updated often to add features and fix defects and vulnerabilities. Some devices may have hardware limitations in what they can actually do, regardless of the software load. We collectively need to decide whether hardware changes are a reasonable thing to ask a vendor to make. Also, the vendor isn’t generally involved in encrypting an individual device, unlike calls or chats made online. Getting the data off the device itself may end up being hard, but perhaps there are other ways, for example, in some cases, by getting access to cloud backups. If those backups are encrypted, maybe we can do password guessing on big machines. Again, the details matter.
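As a rough illustration of the economics of that last idea, here is a sketch, again in Python. The PBKDF2 parameters, the four-digit passcode and the derive_key function are illustrative assumptions, not any vendor’s actual backup scheme; the point is only that the cost of “password guessing on big machines” is the number of candidate passwords multiplied by the key-derivation work factor, both of which the system designer chooses.

```python
import hashlib
import itertools
import string

# Hypothetical backup scheme: the encryption key is derived from the
# user's passcode with PBKDF2. The iteration count is the per-guess
# work factor (real schemes use far higher counts and stronger KDFs).
SALT = b"per-backup-salt"
ITERATIONS = 10_000

def derive_key(passcode: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), SALT, ITERATIONS)

# Pretend this was recovered from a seized backup; we want the passcode.
target_key = derive_key("2580")

# Exhaustive search over all 10,000 four-digit passcodes. Total cost is
# candidates x iterations: each extra character in the passcode, or each
# increase in the work factor, multiplies the attacker's bill.
for guess in map("".join, itertools.product(string.digits, repeat=4)):
    if derive_key(guess) == target_key:
        print("recovered passcode:", guess)
        break
```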

There are already academic proposals that may form the basis of some components of a subset of putative solutions. We need to be able to discuss these openly. We also need to be very careful not to take any component or proposal and claim that it proves that the problem is either totally solved or totally insoluble. That’s just bad science and solutions are going to be more complex than that. To be completely clear, what we’re outlining here is just to start discussion. More detailed work is needed, and it needs to be discussed properly, by experts critiquing each other’s work. It needs to happen without people being vilified for having a point of view or daring to work on this as a problem. The alternative will almost certainly be bad for everyone.
