Crypto Regulation, Part 3: Regulating end-to-end encryption

This is part 3 of a series on the crypto regulations proposed by Cameron, Obama and others. Part 1 explains what this is all about and discusses different types of cryptography. Part 2 discusses the different ways transport encryption could conceivably be regulated. I encourage you to read those two first in order to better understand this article.

We are in the middle of a thought experiment: what if we were tasked with coming up with a regulation that allows us (the European law enforcement agencies) to decrypt encrypted messages while others (like foreign intelligence agencies) remain unable to do so? In the previous installment, we looked at a number of different ways transport encryption could conceivably be regulated.

We also saw where each of these techniques has its problems, and how none of them achieved the goal of letting us decrypt information while others cannot. Let’s see if we have better luck with the regulation of end-to-end encryption (E2EE).

Regulating end-to-end encryption

If we look at the history of cryptography, it turns out that end-to-end encryption is a nightmare to regulate, as the United States found out during the first “Crypto Wars”. It is practically impossible to control what software people run on their computers, and just as hard to regulate the spread of the software itself. There are simply too many computers and too many ways to get access to software to control them all.

But let’s leave aside the practical issues of enforcement for now (that’s going to be a whole post of its own). There are still a number of additional problems we have to face. There are only two points in the system where we could gain access to the plain text of a message: the sender and the receiver. For example, even if a PGP-encrypted message is sent via Google Mail, we cannot gain access to the decrypted message by asking Google for it, because they don’t have it either. We are forced to raid either the sender or the receiver to access the unencrypted contents of the message.

This makes it hard to collect even small amounts of these messages, and impossible to perform large-scale collection.1) And this is why end-to-end encryption is so scary to law enforcement and intelligence agencies alike: There could be anything in them, from cake recipes to terror plots, and there’s no way to know until you have decrypted them.

Given that pretty much every home computer and smartphone is capable of running end-to-end encryption software, the potential for unreadable communication is staggering. It is only natural that law enforcement and intelligence services alike are now calling for legislation to outlaw or at least regulate this threat to their investigatory powers. So, in our thought experiment, how could this work?

Regulation techniques

The regulation techniques from the previous part of this series mostly still apply, but the circumstances around them have changed, so we’ll have to re-examine them and see if they could work under these new circumstances. Also, given that end-to-end encrypted messages are almost always transmitted over transport-encrypted channels, a solution for the transport encryption problem would be needed in order to meaningfully implement any of these regulations.

We will look at the following regulation techniques:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle attack on all / select connections
  • Key escrow
  • Key disclosure laws
  • The “Golden Key”

Let’s take another look at each of these proposals and their merits and disadvantages for this new problem.

Outlawing cryptography

This proposal is actually surprisingly practical at first glance: Almost no corporations use end-to-end encryption (meaning that there would be next to no lobbying against such a proposal), comparatively few private persons use it (meaning there would be almost no resistance from the public), and it completely fixes the problem of end-to-end encryption. So, case closed?

That depends. Completely outlawing this kind of cryptography would leave the communication open not only to our own law enforcement, but also to foreign intelligence agencies. Recent reports (pdf) to the European Parliament’s Science and Technology Options Assessment Board (STOA) suggest that one of the ways to improve European IT security would be to foster the adoption of E2EE:

In this way E2EE offers an improved level of confidentiality of information and thus privacy, protecting users from both censorship and repression and law enforcement and intelligence. […] Since the users of these products might be either criminals or well-meaning citizens, a political discussion is needed to balance the interests involved in this instance.

As the report says, a tradeoff between the interests of society as a whole and law enforcement needs to be found. Outlawing cryptography would hardly constitute a tradeoff, and there are many legitimate and important uses for E2EE. For example, journalists may use it to protect their sources, human rights activists may use it to communicate with contacts living under oppressive regimes, and so on. Legislation outlawing E2EE would make all of this illegal and would leave journalists unable to communicate with confidential sources without fear of revealing their identity.

In the end, a tradeoff will have to be found, but that tradeoff cannot be to completely outlaw the only thing that lets these people do their job with some measure of security, at least not if we want our society to still have adversarial journalism and human rights activists five years from now.

Mandating the use of weak or backdoored algorithms

This involves pretty much the same ideas and problems we have already discussed in the previous article. However, we also encounter a new problem: the software used for E2EE is run by many individuals on many different systems, in many different versions, some of which are no longer actively developed and many of which are open source and not maintained by EU citizens (as opposed to a comparatively small number of corporations managing large numbers of identical servers, which are easy to modify). Mandating the use of specific algorithms would therefore entail…

  • …forcing every developer of such a system to introduce these weak algorithms (and producing these updates yourself for those programs which are no longer actively maintained by anyone)
  • …forcing everyone to download the new versions and configure them to use the weak algorithms
  • …preventing people from switching back to more secure versions (although that is an issue of enforcement, which we will deal with later)

In practice, this is basically impossible to achieve. Project maintainers not living under the jurisdiction of the EU would refuse to add algorithms to their software that they know are bad, and many of the more privacy-conscious and tech-literate crowd would simply refuse to update their software (again, see enforcement). Assuming that using any non-backdoored algorithm would be illegal, this would be equivalent to outlawing E2EE altogether.

In a globalized world, many people communicate across state boundaries. Such a regulation would imply forcing foreigners to use bad cryptography in order to communicate with Europeans (or possibly get their European communication partners in trouble for receiving messages using strong cryptography). In a world of global, collaborative work, you sometimes may not even know which country your communication partner resides in. The administrative overhead for everyone would be incredible, and thus people would either ignore the law or stop communicating with Europeans.

Additionally, software used for E2EE is also used in other areas: for example, many Linux distributions use GPG (a program for end-to-end encryption of emails) to verify that software updates have not been tampered with. Using bad algorithms here would compromise the security of the whole operating system.

Again, it is a question of the tradeoff: Does having access to the communication of the general population justify putting all of this at risk? I don’t think so, but then again, I am not a Minister of the Interior.

Performing a Man-in-the-Middle attack on all / select connections

If a Man-in-the-Middle attack (MitM for short) on transport security is impractical, it becomes downright impossible for most forms of E2EE. To understand why, we need to look at the two major ways E2EE is done in practice: the public key model and the key agreement model:

  • In the public key model, each participant has a public and a private cryptographic key. The public key is known to everyone and is used to encrypt messages; the private key is known only to the recipient and is used to decrypt them. The public key is exchanged once and keeps being re-used for future communication. This model is used by GPG, among others.
  • In the key agreement model, two communication partners establish a connection and then exchange a few messages in order to establish a shared cryptographic key. This key is never sent over the connection2) and is known only to the two endpoints of the connection, who can now use that key to encrypt messages between each other. For each chat session, a new cryptographic key is established, with long-term identity keys being used to ensure that we are still talking to the same person as in the previous session. Variations of this model are used in OTR and TextSecure, among others.
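
To make the key agreement model more concrete, here is a minimal sketch of a textbook Diffie-Hellman exchange in Python. The parameters are toy values chosen for readability, not security; real protocols use standardized groups or elliptic curves, and all names here are illustrative:

    # Minimal textbook Diffie-Hellman key agreement (toy parameters,
    # far too small for real-world security).
    import hashlib
    import secrets

    p = 2**127 - 1   # a Mersenne prime, used here only for illustration
    g = 3            # generator for our toy group

    # Each party picks a random secret and derives a public value from it.
    alice_secret = secrets.randbelow(p - 2) + 2
    bob_secret = secrets.randbelow(p - 2) + 2
    alice_public = pow(g, alice_secret, p)   # sent over the network
    bob_public = pow(g, bob_secret, p)       # sent over the network

    # Each side combines its own secret with the other side's public
    # value. Only the public values ever cross the network.
    shared_alice = pow(bob_public, alice_secret, p)
    shared_bob = pow(alice_public, bob_secret, p)
    assert shared_alice == shared_bob

    # Hash the shared value down to a symmetric session key.
    session_key = hashlib.sha256(shared_alice.to_bytes(16, "big")).digest()

An eavesdropper sees only the two public values, from which the session key cannot feasibly be computed (this is footnote 2 in code form).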

So, how would performing a MitM attack work for each of these models? For the public key model, you would need to intercept the initial key exchange (i.e. the time where the actual keys are downloaded or otherwise exchanged) and replace the keys with your own. This is hard, for multiple reasons:

  • Many key exchanges have already happened and cannot be retroactively attacked
  • Replaced keys are easily detected if the user is paying attention, and the forged keys can subsequently be ignored
  • Keys can be exchanged in a variety of ways, not all of which involve the internet

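This is also why many clients display key fingerprints: comparing a short hash of the key over a second channel (in person, over the phone) immediately exposes a substituted key. A minimal sketch, using only the Python standard library (names are illustrative):

    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        """Short, human-comparable digest of a public key."""
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        # Group into blocks of four hex characters for easier reading.
        return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

    # If the fingerprint Alice reads out over the phone differs from the
    # one Bob's client computes, someone has swapped the key in transit.
    print(fingerprint(b"stand-in for Alice's public key bytes"))
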
So, one would not only have to attack the key exchanges but also prevent the user from disregarding the forged keys and using other methods for the key exchange instead. If we’re going to do that, we may as well just backdoor the software, which would be far easier.

Attacking a key agreement system wouldn’t work much better: We would have to intercept each key agreement message (which is usually itself encrypted using transport encryption) and replace the message with one of our own. These messages are also authenticated using the long-term identity keys of the communication partners, so we would either have to gain access to those keys (which is hard) or replace them with our own (which is, again, easily detected).
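
To sketch why this authentication step stops us: each key agreement message is signed with the sender's long-term identity key, so a substituted message fails verification. This example assumes the third-party Python package “cryptography”; real protocols like OTR differ in their details, but the idea is the same:

    # Authenticating a key agreement message with a long-term identity key.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    identity_key = Ed25519PrivateKey.generate()   # Alice's long-term key
    identity_public = identity_key.public_key()   # known to Bob beforehand

    dh_public_value = b"\x2a" * 16                # stand-in for a real DH value
    signature = identity_key.sign(dh_public_value)

    # Bob verifies the message before using it. A man in the middle who
    # substitutes his own DH value cannot forge this signature without
    # Alice's identity key.
    try:
        identity_public.verify(signature, dh_public_value)
    except InvalidSignature:
        print("Key agreement message was tampered with, aborting.")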

So, while this may theoretically be possible, it is far from viable and suffers from the same issues of enforcement all the other proposals do.

Key escrow

Key escrow sounds like the perfect solution: everyone has to deposit their keys somewhere where law enforcement can gain access to them. The exact implementation may vary, but that’s the general idea. So, what’s wrong with it?

First off, the same caveats as before apply: You are creating an interesting target for both intelligence agencies and criminals. In addition to that, this would only work for the public key model, where the same keys are used over and over again. In the key agreement model, new keys are generated all the time (and the old ones deleted), so a way would have to be found to enter these keys into an escrow system and retain them in case they are ever needed. This would quickly grow into a massive database of keys (many of which would be worthless as no crime was committed using them), which you would have to hang on to, just in case it ever becomes relevant.
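
To get a sense of the scale, here is a sketch of what an escrow scheme would have to do in the key agreement model. Everything here is hypothetical, invented purely for illustration:

    # Hypothetical escrow store: every ephemeral session key would have
    # to be deposited before use and retained indefinitely.
    import secrets

    escrow_db = {}

    def start_session(user_a, user_b):
        session_key = secrets.token_bytes(32)   # normally from a DH exchange
        session_id = f"{user_a}:{user_b}:{secrets.token_hex(8)}"
        escrow_db[session_id] = session_key     # the mandatory deposit
        return session_key

    # One new entry per conversation, for every user in the country,
    # nearly all of them never needed for any investigation.
    for _ in range(1000):
        start_session("alice", "bob")
    print(len(escrow_db), "keys retained after 1000 chat sessions")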

Key disclosure laws

The same theme continues here: key disclosure laws (if they are even allowed under European law) may be able to compel users to disclose their private keys, but users cannot disclose keys that no longer exist. Since the keys used in key agreement schemes are usually deleted after less than a day (often after only minutes), the user would be unable to comply with a key disclosure request from law enforcement, even if they wanted to. And since it is considered best practice not to keep logs of encrypted chats, the user would also be unable to provide law enforcement with a record of the conversation in question.
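
The reason there is nothing left to disclose is forward secrecy: clients destroy the session key as soon as the conversation is over. As a rough sketch of the idea (Python makes no guarantees about stray copies in memory, which is why real clients do this in lower-level languages):

    # Wiping an ephemeral session key after use (illustrative only).
    session_key = bytearray(b"\x13" * 32)   # stand-in for a derived session key

    # ... encrypt and decrypt messages during the session ...

    # Session over: overwrite the key material, then drop the reference.
    for i in range(len(session_key)):
        session_key[i] = 0
    del session_key
    # From this point on, there is no key left that a court order could
    # compel anyone to disclose.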

Changing this would require massive changes to the software used for encrypted communication, encountering the same problems we already discussed when talking about introducing backdoors into software. So, this proposal is pretty much useless as well.

The “Golden Key”

The term “Golden Key” refers to a recent editorial in the Washington Post, which stated:

A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

— “Compromise needed on smartphone encryption”, Washington Post, 2014

The article was an obvious attempt to propose the exact same idea (a backdoor) using a different, less politically charged word (“Golden key”), because any system that allows you to bypass a protection mechanism is, by its very nature, a backdoor. But let’s humor them and use the term, because a “golden key” sounds fancy, right? So, how would that work?

In essence, every piece of encryption software would have to be modified in a way that forces it to encrypt every message with one additional key, which is under the control of law enforcement. That way, the keys of the users can stay secret, and only one key needs to be kept safe to keep the communication secure against everyone else. Problem solved?
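
Sketched in code, the scheme would boil down to something like the following. This is hypothetical, using RSA-OAEP from the third-party Python package “cryptography” purely as a stand-in for whatever scheme would actually be mandated:

    # The "golden key" idea: wrap the symmetric message key once for the
    # recipient and once more for a law-enforcement master key.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    golden_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    message_key = os.urandom(32)   # symmetric key protecting the actual message

    # The sender's software is forced to produce BOTH wrappings.
    for_recipient = recipient_key.public_key().encrypt(message_key, oaep)
    for_law_enforcement = golden_key.public_key().encrypt(message_key, oaep)

    # Whoever holds golden_key can recover the message key for EVERY
    # message sent under this scheme, a single point of failure.
    assert golden_key.decrypt(for_law_enforcement, oaep) == message_key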

Not so much. We still have the problem of forcing software developers to include that backdoor in their systems. Then there’s the problem of who holds the keys. Is there only one key for the whole EU? Or, to put it another way: do you really believe that there is no industrial espionage going on between European countries?

Or do we have keys for each individual country? And how does the software decide which key to encrypt the data with in that case? What if I talk to someone in France? How is my software supposed to know that and encrypt with both the German and the French key? How secure is the storage of that key? Again, we have a single point of failure: if someone gains access to one of these master keys, they can unlock every encrypted message in that country.

There are a lot of questions that would have to be answered to implement this proposal, and I am pretty sure that there is no satisfying solution (If you think you have one, let me know in the comments).

Conclusion

We have looked at six proposals for regulating end-to-end encryption and have found all of them lacking in effectiveness and riddled with harmful side effects. All of these proposals reduce the security of everyone’s communication, not to mention the toxic side effects on basic human rights that are a given whenever we are considering such measures.

There must be a tradeoff between security and privacy, but that tradeoff should not be less security for less privacy, and every attempt at regulating encryption we have looked at is exactly that: it makes everyone less secure and, at the same time, harms their privacy.

One issue we haven’t even looked at yet is how to actually enforce any of these measures, which is another can of worms entirely. We’re going to do that next, in the upcoming fourth installment of this series.


As before, thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are my own.

Footnotes

1) Although, as stated before, it is still possible to collect metadata, which is the thing intelligence agencies are most interested in anyway.
2) This works due to mathematical properties of the messages we send, but for our purposes, it is enough to know that both parties will have the same key, while no one else can easily compute the same key from the values sent over the network.