Category Archives: Encryption

An update on the Crypto Regulation series

If you have been following this blog, you may have noticed that I have an unfinished series on the topic of crypto regulation (part 1, part 2, part 3). I’ve been meaning to finish it up, but life got in the way and I mostly forgot about it. Now, it has become relevant again, as Great Britain has just introduced new draft legislation that would force companies to assist the government in removing encryption.

I’d like to finish up the series, but right now, I just don’t have the time. But luckily, someone else did a much better job than I could ever do. A dream team of cryptography experts has released a joint article called “Keys under Doormats – mandating insecurity by requiring government access to all data and communications”. An excerpt:

Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels going dark, these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates.

We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today’s Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse forward secrecy design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

So, yeah. The cryptography equivalent of the Avengers has gone ahead and written this stuff better than I could have done. So, I will not be finishing my own series, and instead encourage you to read the article.

However, if it turns out that there is interest in me continuing this series, I may make some time to finish the final two articles of the series. So, if you want me to continue, drop me a line in the comments or on Twitter, if that’s your thing.

Crypto Regulation, Part 3: Regulating end-to-end encryption

This is part 3 of a series on the crypto regulations proposed by Cameron, Obama and others. Part 1 explains what this is all about and discusses different types of cryptography. Part 2 discusses the different ways transport encryption could conceivably be regulated. I encourage you to read those two first in order to better understand this article.

We are in the middle of a thought experiment: What if we were tasked with coming up with a regulation that allows us (the European law enforcement agencies) to decrypt encrypted messages while others (like foreign intelligence agencies) are still unable to do so. In the previous installment, we looked at how transport encryption could be regulated. We saw that there were a number of different ways this could be achieved:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle-Attack on all / select connections
  • Key escrow
  • Key disclosure laws

We have also seen where each of these techniques has its problems, and how none of them achieved the goal of letting us decrypt information while keeping everyone else out. Let’s see if we have better luck with the regulation of end-to-end encryption (E2EE).

Regulating end-to-end encryption

If we look at the history of cryptography, it turns out that end-to-end encryption is a nightmare to regulate, as the United States found out during the first “Crypto Wars“. It is practically impossible to control what software people run on their computers, and it is just as hard to control the spread of the software itself. There are simply too many computers and too many ways to get access to software to control them all.

But let’s leave aside the practical issues of enforcement (that’s going to be a whole post of its own) for now. There are still a number of additional problems we have to face: There are only two points in the system where we could gain access to the plain text of a message: The sender and the receiver. For example, even if a PGP-encrypted message is sent via Google Mail, we cannot gain access to the decrypted message by asking Google for it, as they don’t have it, either. We are forced to raid either the sender or the receiver to access the unencrypted contents of the message.

This makes it hard to collect even small amounts of these messages, and impossible to perform large-scale collection.1) And this is why end-to-end encryption is so scary to law enforcement and intelligence agencies alike: There could be anything in them, from cake recipes to terror plots, and there’s no way to know until you have decrypted them.

Given that pretty much every home computer and smartphone is capable of running end-to-end encryption software, the potential for unreadable communication is staggering. It is only natural that law enforcement and intelligence services alike are now calling for legislation to outlaw or at least regulate this threat to their investigatory powers. So, in our thought experiment, how could this work?

Regulation techniques

The regulation techniques from the previous part of this series mostly still apply, but the circumstances around them have changed, so we’ll have to re-examine them and see if they could work under these new circumstances. Also, given that end-to-end encrypted messages are almost always transmitted over transport-encrypted channels, a solution for the transport encryption problem would be needed in order to meaningfully implement any of these regulations.

We will look at the following regulation techniques:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle-Attack on all / select connections
  • Key escrow
  • Key disclosure laws
  • The “Golden Key”

Let’s take another look at each of these proposals and their merits and disadvantages for this new problem.

Outlawing cryptography

This proposal is actually surprisingly practical at first glance: Almost no corporations use end-to-end encryption (meaning that there would be next to no lobbying against such a proposal), comparatively few private persons use it (meaning there would be almost no resistance from the public), and it completely fixes the problem of end-to-end encryption. So, case closed?

That depends. Completely outlawing this kind of cryptography would not only leave the communication open for our own law enforcement, but also for foreign intelligence agencies. Recent reports (pdf) to the European Parliament’s Science and Technology Options Assessment Board (STOA) suggest that one of the ways to improve European IT security would be to foster the adoption of E2EE:

In this way E2EE offers an improved level of confidentiality of information and thus privacy, protecting users from both censorship and repression and law enforcement and intelligence. […] Since the users of these products might be either criminals or well-meaning citizens, a political discussion is needed to balance the interests involved in this instance.

As the report says, a tradeoff between the interests of society as a whole and law enforcement needs to be found. Outlawing cryptography would hardly constitute a tradeoff, and there are many legitimate and important uses for E2EE. For example, journalists use it to protect their sources, human rights activists use it to communicate with contacts living under oppressive regimes, and so on. Legislation outlawing E2EE would make all of this illegal and would result in journalists not being able to communicate with confidential sources without fear of revealing their identity.

In the end, a tradeoff will have to be found, but that tradeoff cannot be to completely outlaw the only thing that lets these people do their job with some measure of security, at least not if we want our society to still have adversarial journalism and human rights activists five years from now.

Mandating the use of weak or backdoored algorithms

This involves pretty much the same ideas and problems we have already discussed in the previous article. However, we also encounter another problem: As the software used for E2EE is used by many individuals on many different systems (as opposed to a comparatively small number of corporations managing large numbers of identical servers which are easy to modify) in many different versions, some of which are no longer being actively developed and many of which are open source and not maintained by EU citizens, mandating the use of specific algorithms would entail…

  • …forcing every developer of such a system to introduce these weak algorithms (and producing these updates yourself for those programs which are no longer actively maintained by anyone)
  • …forcing everyone to download the new versions and configure them to use the weak algorithms
  • …preventing people from switching back to more secure versions (although that is an issue of enforcement, which we will deal with later)

In practice, this is basically impossible to achieve. Project maintainers not living under the jurisdiction of the EU would refuse to add algorithms to their software that they know are bad, and many of the more privacy-conscious and tech-literate crowd would just refuse to update their software (again, see enforcement). Assuming that using any non-backdoored algorithms would be illegal, this would be equivalent to outlawing E2EE altogether.

In a globalized world, many people communicate across state boundaries. Such a regulation would imply forcing foreigners to use bad cryptography in order to communicate with Europeans (or possibly get their European communication partners in trouble for receiving messages using strong cryptography). In a world of global, collaborative work, you sometimes may not even know which country your communication partner resides in. The administrative overhead for everyone would be incredible, and thus people would either ignore the law or stop communicating with Europeans.

Additionally, software used for E2EE is also used in other areas: For example, many Linux distributions use GPG (a program for E2EE of eMails) to verify that software updates have not been tampered with. Using bad algorithms for this would compromise the security of the whole operating system.

Again, it is a question of the tradeoff: Does having access to the communication of the general population justify putting all of this at risk? I don’t think so, but then again, I am not a Minister of the Interior.

Performing a Man-in-the-Middle-Attack on all / select connections

If a Man-in-the-Middle-Attack (short: MitM) on transport security is impractical, it becomes downright impossible for most forms of E2EE. To understand why, we need to look at the two major ways E2EE is done in practice: the public key model and the key agreement model (a small code sketch of the public key model follows the list):

  • In the public key model, each participant has a public and a private cryptographic key. The public key is known to everyone and used to encrypt messages, the private key is only known to the recipient and is used to decrypt the message. The public key is exchanged once and keeps being re-used for future communication. This model is used by GPG, among others.
  • In the key agreement model, two communication partners establish a connection and then exchange a few messages in order to establish a shared cryptographic key. This key is never sent over the connection2) and is only known to the two endpoints of the connection, who can now use that key to encrypt messages between each other. For each chat session, a new cryptographic key is established, with long-term identity keys being used to ensure that we are still talking to the same person in the next session. Variations of this model are used in OTR and TextSecure, among others.
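
To make the difference more concrete, here is a minimal sketch of the public key model, assuming a recent version of the third-party pyca/cryptography package. It is a toy illustration of the principle, not how GPG actually works internally.

    # Toy illustration of the public key model (assumes: pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()   # published once, re-used for every future message

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"meet me at noon", oaep)   # anyone with the public key can encrypt
    plaintext = private_key.decrypt(ciphertext, oaep)           # only the holder of the private key can decrypt
    assert plaintext == b"meet me at noon"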

So, how would performing a MitM attack work for each of these models? For the public key model, you would need to intercept the initial key exchange (i.e. the time where the actual keys are downloaded or otherwise exchanged) and replace the keys with your own. This is hard, for multiple reasons:

  • Many key exchanges have already happened and cannot be retroactively attacked
  • Replaced keys are easily detected if the user is paying attention, and the forged keys can subsequently be ignored.
  • Keys can be exchanged in a variety of ways, not all of which involve the internet

So, one would not only have to attack the key exchanges but also prevent the user from disregarding the forged keys and using other methods for the key exchange instead. If we’re going to do that, we may as well just backdoor the software, which would be far easier.

Attacking a key agreement system wouldn’t work much better: We would have to intercept each key agreement message (which is usually itself encrypted using transport encryption) and replace the message with one of our own. These messages are also authenticated using the long-term identity keys of the communication partners, so we would either have to gain access to those keys (which is hard) or replace them with our own (which is, again, easily detected).

So, while this may theoretically be possible, it is far from viable and suffers from the same issues of enforcement all the other proposals do.

Key escrow

Key escrow sounds like the perfect solution: Everyone has to deposit their keys somewhere where law enforcement may gain access to them. The exact implementation may vary, but that’s the general idea. So, what’s wrong with this idea?

First off, the same caveats as before apply: You are creating an interesting target for both intelligence agencies and criminals. In addition to that, this would only work for the public key model, where the same keys are used over and over again. In the key agreement model, new keys are generated all the time (and the old ones deleted), so a way would have to be found to enter these keys into an escrow system and retain them in case they are ever needed. This would quickly grow into a massive database of keys (many of which would be worthless as no crime was committed using them), which you would have to hang on to, just in case it ever becomes relevant.

Key disclosure laws

The same theme continues here: Key disclosure laws (if they are even allowed under European law) may be able to compel users to disclose their private keys, but users can’t disclose keys that no longer exist. Since the keys used in key agreement schemes are usually deleted after less than a day (often after only minutes), the user would be unable to comply with a key disclosure request from law enforcement, even if he wanted to. And since it is considered best practice not to keep logs of encrypted chats, the user would also be unable to provide law enforcement with a record of the conversation in question.

Changing this would require massive changes to the software used for encrypted communication, encountering the same problems we already discussed when talking about introducing backdoors into software. So, this proposal is pretty much useless as well.

The “Golden Key”

The term “Golden Key” refers to a recent editorial in the Washington Post, which stated:

A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

— “Compromise needed on Smartphone encryption“, Washington Post, 2014

The article was an obvious attempt to propose the exact same idea (a backdoor) using a different, less politically charged word (“Golden key”), because any system that allows you to bypass a protection mechanism is, by its very nature, a backdoor. But let’s humor them and use the term, because a “golden key” sounds fancy, right? So, how would that work?

In essence, every piece of encryption software would have to be modified in a way that forced it to encrypt every message with one additional key, which is under the control of law enforcement. That way, the keys of the users can stay secret, and only one key needs to be kept safe to keep the communication secured against everyone else. Problem solved?
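
To make the mechanics tangible, here is a toy sketch of such a scheme, assuming the pyca/cryptography package; symmetric Fernet keys stand in for the public keys a real system would use, and none of this is taken from any actual proposal.

    # Toy "golden key" sketch (assumes: pip install cryptography).
    from cryptography.fernet import Fernet

    golden_key = Fernet.generate_key()       # held by the authorities
    recipient_key = Fernet.generate_key()    # held by the intended recipient

    # Encrypt the message once with a fresh session key...
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"the actual message")

    # ...then wrap that session key for BOTH the recipient and the escrow holder.
    wrapped_for_recipient = Fernet(recipient_key).encrypt(session_key)
    wrapped_for_escrow = Fernet(golden_key).encrypt(session_key)

    # Whoever holds the golden key can recover every session key ever wrapped with it.
    recovered = Fernet(golden_key).decrypt(wrapped_for_escrow)
    assert Fernet(recovered).decrypt(ciphertext) == b"the actual message"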

Not so much. We still have the problem of forcing software developers to include that backdoor into their systems. Then there’s the problem of who is holding the keys. Is there only one key for the whole EU? Or, to put it another way, do you really believe that there is no industrial espionage going on between European Countries?

Or do we have keys for each individual country? And how does the software decide with which key to encrypt the data in this case? What if I talk to someone in France? How is my software supposed to know that and encrypt both with the German and the French key? How secure is the storage of that key? Again, we have a single point of failure. If someone gains access to one of these master keys, he/she can unlock every encrypted message in that country.

There are a lot of questions that would have to be answered to implement this proposal, and I am pretty sure that there is no satisfying solution (If you think you have one, let me know in the comments).

Conclusion

We have looked at a number of proposals for regulating end-to-end encryption and have found all of them lacking in effectiveness and rife with harmful side effects. All of these proposals reduce the security of everyone’s communication, not to mention the toxic side effects on basic human rights that are a given whenever we are considering such measures.

There must be a tradeoff between security and privacy, but that tradeoff should not be less security for less privacy, and every attempt at regulating encryption we have looked at is exactly that: It makes everyone less secure and, at the same time, harms their privacy.

One issue we haven’t even looked at yet is how to actually enforce any of these measures, which is another can of worms entirely. We’re going to do that next, in the upcoming fourth installment of this series.


As before, thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are my own.


Footnotes
1 Although, as stated before, it is still possible to collect metadata, which is the thing intelligence agencies are most interested in anyway.
2 This works due to mathematical properties of the messages we send, but for our purposes, it is enough to know that both parties will have the same key, while no one else can easily compute the same key from the values sent over the network.

Crypto Regulation, Part 2: Regulating transport encryption

This is part 2 of a series on the crypto regulations proposed by Cameron, Obama and others. Part 1, explaining what it is all about and describing different types of cryptography, can be found here.

The declared goal of crypto regulation is to be able to read every message passing through a country, regardless of who sent or received it and what technology they used. Regular readers probably know my feelings about such ideas, but let’s just assume that we are a member of David Cameron’s staff and are tasked with coming up with a plan on how to achieve this.1)

We have to keep in mind the two types of encryption we have previously talked about, transport and end-to-end encryption. I will discuss the problems associated with gaining access to communication secured by the respective technologies, and possible alternatives to regulating cryptography. Afterwards, I will look at the technological solutions that could be used to implement the regulation of cryptography. This part will be about transport encryption, while the next part will deal with end-to-end encryption.

Regulating transport encryption

As a rule, transport encryption is easier to regulate, as the number of parties you have to involve is much lower. For instance, if you are interested in gaining access to the transport-encrypted communication of all Google Mail users, you only have to talk to Google, and not to each individual user.

For most of these companies, it probably wouldn’t even be necessary to regulate the cryptography itself, they could just be (and are) required to hand over information to law enforcement agencies. These laws could, if necessary, be expanded to include PRISM-like full access to the data stored on the servers (assuming this is not already common practice). Assuming that our goal really only is to gain access to the communication content and metadata, this should be enough to satisfy the needs of law enforcement.

Access to the actual information while it is encrypted and flowing through the internet is only required if we are interested in more than the data stored on the servers of the companies. An example would be the passwords used to log into a service, which are transmitted in an encrypted form over the internet. These passwords are usually not stored in plain text on the company servers. Instead, they store a so-called hash of the password, which is easy to compute from the password, while recovering the password from the hash is practically impossible.2) However, if we were able to decrypt the password while it is sent over the internet, we would gain access to the account and could perform actions ourselves (e.g. send messages). More importantly, we could also use that password to attempt to log into other accounts of the suspect, potentially gaining access to more accounts with non-cooperating (foreign) companies or private servers.
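
For readers who have not seen password hashing before, here is a minimal sketch using only Python’s standard library; real deployments would use a dedicated password hash like bcrypt or scrypt, but the principle is the same.

    # Minimal password hashing sketch (standard library only).
    import hashlib, hmac, os

    def hash_password(password, salt=None):
        salt = salt or os.urandom(16)                                      # random salt per password
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest                                                # this is what the server stores

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)                      # constant-time comparison

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("wrong guess", salt, digest)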

Regulation techniques

So, assuming we want that kind of access to the communication, we’re back to the topic of regulating transport encryption. The different ways this access could be ensured are, in rising order of practicality:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle-Attack on all / select connections
  • Key escrow
  • Key disclosure laws

Let’s take a look at each of these proposals, their merits and their disadvantages.

Outlawing cryptography

Outlawing cryptography has the advantage of simplicity. There is no overhead of backdooring implementations, implementing key escrow, or performing active attacks. However, that is just about the only advantage of this proposal.

Cryptography is fundamental to the way our society works, and the modern information age would not be possible without it. You are using cryptography every day: when you get your mail, when you log into a website, when you purchase stuff online, even on this very website, your connection is encrypted.

It gets even worse for companies. They rely on their information being encrypted when communicating with other companies or their customers, otherwise their trade secrets would be free for the taking. Banks would have to cease offering online banking. Amazon would probably go out of business. Internet crime would skyrocket as people hijacked unprotected accounts and stole private and corporate information.

So, given the resistance any such proposition would face, outlawing cryptography as a whole isn’t really an option. An alternative would be to just outlaw it for individuals, but not for corporations. That way, the banks could continue offering online banking, but individuals would no longer be allowed to encrypt their private information.

Such a law would technically be possible, but would raise a lot of problems in practice. Aside from being impossible to enforce, it would collide with existing software and workflows: some programs can only save their data in an encrypted form (e.g. banking applications), and some people use the same device privately and for their job, where their employer may require them to encrypt it. There are a lot of special cases that would cause problems in the actual implementation of this law, not to mention the possible damage caused by criminals gaining access to unencrypted private information. There would definitely be a lot of opposition to such a law, and the end result would be hard to predict.

Mandating the use of weak or backdoored algorithms

In this case, some party would come up with a list of ciphers which are considered secure enough against common “cyber criminals”, while offering no significant resistance to law enforcement or intelligence agencies. This could be achieved either through raw computational power (limiting the size of encryption keys to a level where all possibilities can be tried out in a reasonable timeframe, given the computational resources available to law enforcement / intelligence agencies), or through the introduction of a backdoor in the algorithm.

In cryptography, a backdoor could be anything from encrypting the data with a second key, owned by the government, to make sure that they can also listen in, to using weak random numbers for the generation of cryptographic keys, which would allow anyone knowing the exact weakness to recover the keys much more quickly. This has, apparently, already happened: It is suspected (and has pretty much been proven) that the NSA introduced backdoors into the Dual EC DRBG random number generator, and it is alleged that they paid off a big company (RSA) to then make this algorithm the standard random number generator in their commercial software.
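
To see why weak random numbers are so dangerous, consider this deliberately broken toy: if keys are derived from a generator with only 16 bits of effective state, anyone who knows about the weakness can simply try all seeds. This is not how Dual EC DRBG works internally, it merely illustrates the general principle.

    # Toy illustration: keys from a weak RNG can be brute-forced by trying every seed.
    import hashlib, random

    def weak_key(seed):
        rng = random.Random(seed)                    # only 2**16 possible seeds: far too few
        return hashlib.sha256(bytes(rng.getrandbits(8) for _ in range(32))).digest()

    secret = weak_key(random.randrange(2**16))       # the "victim" generates a key

    # An attacker who knows the weakness recovers it in well under a second:
    for seed in range(2**16):
        if weak_key(seed) == secret:
            print("recovered the key from seed", seed)
            break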

The problem with backdoors is that once they are discovered, anyone can use them. For example, if we mandated that everyone use Dual EC DRBG random numbers for their cryptographic functions, not only we, but also the NSA could decrypt the data much more easily. If we encrypt everything to a second key, then anyone in possession of that key could use it to decrypt the data, which would make the storage location of the key a very attractive target for foreign spies and malicious hackers. So, unless we want to make the whole system insecure to potentially anyone and not just us, backdooring the cryptography is a bad idea.

The other option we mentioned was limiting the size of cryptographic keys. For example, we could mandate that certain important keys may only use key sizes of up to 768 bits, which can be cracked within a reasonable timeframe using sufficient computing power. But, once again, we encounter the same problem: If we can crack the key, other organizations with comparable power (NSA, KGB, the Chinese Ministry of State Security, …) can do the same.

Also, because the computational power of computers is still increasing every year, it may be that in a few years, a dedicated individual / small group could also break encryption with that key length. This could prove disastrous if data that may still be valuable a decade later is encrypted with keys of that strength, e.g. trade secrets or long-term plans. Competitors would just have to get a hold of the encrypted data and wait for technology to reach a point where it becomes affordable to break the encryption.

So, mandating the use of weak or backdoored cryptography would make everyone less secure against intelligence agencies and quite possibly even against regular criminals or corporate espionage. In that light, this form of regulation probably involves too much risk for too little reward (cracking these keys still takes some time, so it cannot really be done at a large scale).

Performing a Man-in-the-Middle-Attack on all / select connections

A man-in-the-middle (MitM) attack occurs when one party (commonly called Alice) wants to talk to another party (Bob), but the communication is intercepted by someone else (Mallory), who then modifies the data in transit. Usually, this involves replacing transmitted encryption keys with others in order to be able to decrypt the data and re-encrypt it before sending it on to the destination (the Wikipedia article has a good explanation). This attack is usually prevented by authenticating the data. There are different techniques for that, but most of the actual communication between human beings (e.g. eMail transfer, logins into websites, …) is protected using SSL/TLS, which uses a model involving Certification Authorities (CAs).

In the CA model, there are a number of organizations which are trusted to verify the identity of people and organizations. You can apply to one of them for a digital certificate, which confirms that a certain encryption key belongs to a certain website or individual. The CA is then supposed to verify that you are, in fact, the owner of said website, and issue you a certificate file that states “We, the certification authority xyz, confirm that the cryptographic key abc belongs to the website blog.velcommuta.de”. Using that file and the encryption key, you can then offer web browsers a way to (more or less) securely access your website via SSL/TLS. The server will send its encryption key and the certificate, confirming that this key is authentic, to clients, who can then use that key to communicate with the server.3)

The problem is that every certification authority is trusted to issue certificates for every website, and no one can prevent them from issuing a false certificate (e.g. confirming that key def is a valid key for my website). A man-in-the-middle could then use such a certificate to hijack a connection, replace my cryptographic key with their own and listen in on the communication.
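
For the curious, this is roughly what the client side of that check looks like in code: a sketch using Python’s standard ssl module, with example.org standing in for any TLS-protected site.

    # Sketch: open a TLS connection and inspect the certificate the CA model vouches for.
    import socket, ssl

    context = ssl.create_default_context()              # uses the system's trusted CAs
    with socket.create_connection(("example.org", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="example.org") as tls:
            cert = tls.getpeercert()                     # parsed certificate, already validated
            print(cert["subject"], cert["issuer"])       # who the certificate is for, and which CA signed it

    # If a man in the middle presented a certificate that no trusted CA signed for the
    # requested hostname, wrap_socket() would raise ssl.SSLCertVerificationError instead.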

Now, in order to get into every (or at least every interesting) stream of communication, we would need two things:

  • A certification authority that is willing (or can be forced) to give us certificates for any site we want
  • The cooperation (again, voluntary or forced) of internet providers to perform the attack for us

Both of these things can be written into law and passed, and we would have a way to listen in on every connection protected by this protocol. However, there are a few problems with that idea.

One problem is that not all connections use the CA model, so we would need to find a way to attack other protocols as well. These protocols are mostly unimportant for large-scale communication like eMail, but become interesting if we want to gain access to specialized services or specific servers.

The second problem is that some applications do additional checks on the certificates. They can either make sure that the certificate comes from a specific certification authority, or they could even make sure that it is a specific certificate (a process called Certificate Pinning4)). Those programs would stop working if we started intercepting their traffic.
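
A pinning check can be sketched in a few lines: instead of (or in addition to) trusting the CA system, the client compares the served certificate against a fingerprint it already knows. The fingerprint below is a placeholder, not a real one.

    # Sketch of certificate pinning: compare the served certificate to a known fingerprint.
    import hashlib, ssl

    EXPECTED_FINGERPRINT = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

    pem_cert = ssl.get_server_certificate(("example.org", 443))
    der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
    fingerprint = hashlib.sha256(der_cert).hexdigest()

    if fingerprint != EXPECTED_FINGERPRINT:
        raise RuntimeError("Certificate changed, possible man-in-the-middle attack")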

The third problem is that it creates a third point at which connections can be attacked by criminals and foreign intelligence agencies. Usually, they would have to attack either the source or the destination of a connection in order to gain access to the communication. Attacking the source is usually hard, as that would be your laptop, and there are an awful lot of personal computers which you would have to attack in order to gain full access to all communication that way.

Attacking the destination is also hard, because those are usually servers run by professional companies who (hopefully) have good security measures in place to prevent those attacks. It is probably still possible to find a way in if you invest enough effort, but it is hard to do at scale.

However, if you introduce a few centralized points at which all communication flowing through the network of an internet operator is decrypted and re-encrypted, you also create one big, juicy target, because you can read all of those connections by compromising one server (or at least a much smaller number of servers than otherwise). And experience has shown that for juicy targets like that, intelligence agencies are willing to invest a lot of effort.

So, performing MitM-Attacks on all connections would not work for all types of connections, it would not work for all devices, and it would create attractive targets for hostile agencies to gain access to a large percentage of formerly secured traffic. That does not seem like a good trade to me, so let’s keep looking for alternatives.

Key escrow

Key escrow (sometimes called a “fair” cryptosystem by proponents and key surrender by opponents) is the practice of keeping the cryptographic keys needed to decrypt data in a repository where certain parties (in our case law enforcement agencies) may gain access to them under certain circumstances.

The main problem in this case is finding an arrangement where the keys are stored in a way that lets only authorized parties access them. Assuming we want to continue following a system with judicial oversight, that would probably mean that the escrow system could only be accessed with a warrant / court order. It is hard to enforce this using technology alone, and systems involving humans are prone to abuse and mistakes. However, with a system as security critical as a repository for cryptographic keys, any mistake could prove costly, both in a figurative and a literal sense.

Then there is the problem of setting the system up. Do you want a central European repository? A central repository for each country? Will every server operator be required to run escrow software on their own server? Each of these options has its own advantages and drawbacks.

  • A European repository would mean less administrative effort overall, but it would create a single point of failure, which, when compromised, would impact the internet security of the whole EU. As with the issue of man-in-the-middle attack devices, history has shown that foreign agencies can and will go to a lot of effort to compromise such repositories. A central European repository would also assume that European countries do not spy on each other, which is a naive assumption.
  • Country-wide repositories fix the last problem, but still suffer from the others. They are attractive targets for both foreign intelligence agencies and cybercriminals.
  • Individual repositories face the problem of compatibility (there are a LOT of different operating systems and -versions running on servers). They are less centralized, which is good (the effort to break into them increases)5), but they also imply that law enforcement would have to be able to electronically retrieve the key on demand. If someone knew that the police was onto him, he could thus disable the software or even destroy the key and server in order to prevent the police from retroactively decrypting potential evidence they had already captured.

Again, we have encountered administrative problems and important security aspects that make this option problematic at best. So, why don’t we take a look at how things are done right now in Great Britain and see whether it would make sense to expand this law to the rest of Europe.

Key disclosure laws
Key disclosure in practice (Image: “Security” by Randall Munroe, licensed CC BY-NC 2.5)

The British Regulation of Investigatory Powers Act 2000 (short: RIPA) includes a provision requiring suspects in a crime to hand over encryption keys or face jail time of up to two years (or up to five in cases of terrorism or suspected child pornography).6) The law has already been used to imprison at least three people for refusing to give up encryption keys.

However, all member states of the Council of Europe have ratified the European Convention on Human Rights. While it is not specifically mentioned in the Convention, the European Court of Human Rights holds that

…the right to remain silent under police questioning and the privilege against self-incrimination are generally recognized international standards which lie at the heart of the notion of a fair procedure under Article 6 [of the European Convention on Human Rights].

Requiring an individual to surrender keys would probably be in violation of the right to remain silent (although there are different opinions on that). Any such law would almost certainly be annulled by the Court of Justice of the European Union, as it did with the Data Retention Directive.

However, such a law could conceivably be used to compel companies or witnesses to disclose encryption keys they have access to. These laws exist in some European countries, and could be expanded to all of Europe. It would remain to be seen what the European Court of Justice would think of that, as such a law would definitely be challenged, but the potential of a law being annulled by the ECJ has not prevented the European Parliament from passing such laws in the past.

There exists another, more technical concern with this: More and more websites employ cryptographic techniques that ensure a property called (perfect) forward secrecy, short (P)FS. This ensures that even if an encrypted conversation is eavesdropped on and recorded, and even if the encryption keys are surrendered to law enforcement afterwards (or stolen by criminals), they will be unable to decrypt the conversation. The only way to eavesdrop on this kind of communication is to perform an active man-in-the-middle-attack while in possession of a valid key.

This means that even if law enforcement has a recording of evidence while it was being transmitted, and even if they could force someone to give them the relevant keys, they would still be unable to gain access to said evidence. This technology is slowly becoming the standard, and the percentage of connections protected by it will only grow, meaning that laws requiring the disclosure of keys after the communication has taken place will become less and less useful over the next years.

Conclusion

We have taken a look at five different proposals for regulating transport encryption, and have found that each is either extremely harmful to the security of the European internet or ineffective at providing access to encrypted communication. Each of the proposals also holds an enormous potential for abuse by governments and intelligence services.

This concludes part 2 of my series on crypto regulation. Part 3 discusses possible ways to regulate end-to-end encryption.


As before, thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are my own.


Footnotes
1 I’m playing “Devil’s system engineer” here and am obviously completely opposed to any of the measures I describe in this article, in case there was any doubt.
2 Again, this is a simplification. In the real world, there are important considerations, including the choice of the proper hash function and salting of the passwords, but that is out of the scope of this article.
3 As always, I am simplifying matters here, but the exact inner workings of TLS are not relevant to this article.
4 There is a Firefox extension that does that
5 …assuming the key escrow software does not have a security hole itself, which is an optimistic assumption in itself.
6 Distressingly, it does not even distinguish between willingly not giving up the key and being unable to give up a key. This means that if the police thinks something is encrypted, and it is not, you can be sent to jail for refusing to give up a key to decrypt imaginary encrypted data.

Crypto Regulation, Part 1: What is it all about?

Over the last weeks, we’ve had a slew of politicians asking for new legislation in response to the Paris attacks. The proposed new regulations range from a new Data Retention directive (here’s my opinion on that) to PNR (Passenger Name Records, data about the passengers of all flights within Europe) data exchange within the EU.

By far the most worrying suggestion initially came from UK Prime Minister David Cameron, but was taken up by Barack Obama, the Counterterrorism Coordinator (pdf) of the EU, and the German Innenminister (Minister of the Interior), de Maizière: A regulation of encryption technology. The reasons they give are very similar: We need to be able (in really important cases, with proper oversight, a signed warrant et cetera) to read the contents of any communication in order to “protect all of us from terrorists”.1)

The irony of justifying this with the Paris attack, a case where the terrorists were known to the relevant authorities and used unencrypted communication, is apparently lost on them.

In this series of posts, I will take a look at what crypto regulation means, how it could (or could not) work in practice, and why it appeals to pro-surveillance politicians either way.

An (extremely) brief primer on cryptography

Cryptography is used in order to hide information from unauthorized readers. In order to decrypt a message, you need three things: The encrypted message (obviously), knowledge about the algorithm that was used to encrypt it (which can often, but not always, be easily determined), and the cryptographic key that was used to do it. When we talk about crypto regulation, we usually assume that algorithm and message are known to whoever wants to read them, and the only missing thing is the key.
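
A tiny example makes the role of the key obvious. Assuming the pyca/cryptography package is installed, the following encrypts and decrypts a message; without the key, the ciphertext is useless even though the algorithm is publicly known.

    # Minimal symmetric encryption example (assumes: pip install cryptography).
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()                             # the one thing that must stay secret
    ciphertext = Fernet(key).encrypt(b"attack at dawn")

    print(Fernet(key).decrypt(ciphertext))                  # b'attack at dawn'

    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)   # wrong key
    except InvalidToken:
        print("Knowing the algorithm alone gets you nowhere without the key.")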

Cryptography is all around you, although you may not see it. In fact, you are using cryptography right now: This website is protected using SSL/TLS (that’s the https:// you see everywhere). You are also using it when you go to withdraw money from an ATM, when you send a mail, log into any website, and so on. All of those things use cryptography, although the strength (meaning how hard it is to break that cryptography) varies.

A (very) brief history of crypto regulation to date

Crypto regulation is hardly a new idea. For a long time, the export of encryption technology was regulated as a munition in the United States (the fight for the right to freely use and export cryptography was called the Crypto Wars and spawned some interesting tricks to get around the export restriction). This restriction was relaxed, but never completely removed (it is still illegal to export strong encryption technology into “rogue states” like Iran).

During the last 10 years or so, there haven’t really been serious attempts to limit the use and development of encryption technology2), leading to the rise of many great cryptographic tools like GnuPG, OTR, Tor and TextSecure.3) But now, there appears to be another push to regulate or even outlaw strong encryption.

What is “strong encryption”?

In cryptography, we distinguish between two4) different kinds of encryption. There is transport encryption and end-to-end encryption. Transport encryption means that your communication is encrypted on its way from you to the server, but decrypted on the server. For example, if you send a regular eMail, your connection to the server is encrypted (no one who is eavesdropping on your connection is able to read it), but the server can read your eMail without a problem. This type of encryption is used by almost every technology you use, be it eMail, chats (except for a select few), or telephony like Skype.

The major drawback of transport encryption is that you have to trust the person or organization operating the server to not look at your stuff while it is floating around on their server. History has shown that most companies simply cannot be trusted to keep your data safe, be it against malicious hackers (see Sony), the government (see PRISM), or their own advertising and analytics desires (see Google Mail, Facebook, …).

The alternative is end-to-end encryption. For this, you encrypt your message in a way that only allows the legitimate receiver to decrypt it. That way, your message cannot be read by anyone except the legitimate receiver.5) The advantage should be obvious: You can put that message on an untrusted server and the operators of said server cannot read it.

The drawback is the logistics: The recipients need to have their cryptographic keys to decrypt the message, which can be a hassle if you have a lot of devices. The key can also be stolen and used to decrypt your messages. For some usage scenarios like chats, there are solutions like the aforementioned OTR and TextSecure (which you should install if you own an Android phone), but there is no comparably convenient solution for eMails. End-to-end encryption also does not protect the metadata (who is talking to whom, when, for how long, et cetera) of your messages, only the contents.

When politicians are talking about “strong encryption”, they are probably referring to end-to-end encryption, because that data is much harder to obtain than transport-encrypted data, which can still be seized on the servers it resides on. To read your end-to-end encrypted data, they would have to seize both the encrypted data and your encryption keys (and compel you to give them the passwords you protected them with), which is a lot harder to do.

Conclusion

Now that we have a basic understanding of the different types of encryption used in the wild, we can talk about how to regulate them. This will be covered in part 2 of this series.


Thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are solely mine.


Footnotes
1 I dislike the term “Terrorist” because it can be (and has been) expanded to include pretty much anyone you disagree with. However, for readability, I will use it in the connotation most used by western media, i.e. meaning the Islamic State, Al Qaeda, et cetera.
2 Although it appears that these efforts were simply put into introducing backdoors into algorithms and implementations instead.
3 And not-so-great, but still necessary tools like OpenSSL.
4 We obviously distinguish between many more than that, but for this article, two will be enough.
5 This is, again, a gross simplification, but sufficient for this article.

Transport security in online banking, or: “Not my Department”

Lately, I have been trying to improve transport security (read: SSL settings and ciphers) for the online banking sites of the banks I am using. And, before you ask, yes, I enjoy fighting windmills.

Quick refresher on SSL/TLS before we continue: There are three things you can vary when choosing cipher suites (a short code sketch for inspecting them follows the list):

  • Key Exchange: When connecting via SSL, you have to agree on cryptographic keys to use for encrypting the data. This happens in the key exchange. Example: RSA, DHE, …
  • Encryption Cipher: The actual encryption happens using this cipher. Example: RC4, AES, …
  • Message Authentication: The authenticity of encrypted messages is ensured using the algorithm selected here. Example: SHA1, MD5, …
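
As a quick way to see these three choices on a live connection, Python’s standard ssl module can report the negotiated cipher suite; a small sketch, with example.org as a stand-in host.

    # Sketch: show which cipher suite a server negotiates with us.
    import socket, ssl

    context = ssl.create_default_context()
    with socket.create_connection(("example.org", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="example.org") as tls:
            name, protocol, bits = tls.cipher()   # e.g. ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)
            print(protocol, name, bits)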

After the NSA revelations, I started checking the transport security of the websites I was using (the SSL test from SSLLabs / Qualys is a great help for that). I noticed that my bank, which I will keep anonymous to protect the guilty, was using RC4 to provide transport encryption. RC4 is considered somewhere in between “weak” and “completely broken”, with people like Jacob Appelbaum claiming that the NSA is decrypting RC4 in real time.

Given that, RC4 seemed like a bad choice for a cipher. I wrote a message to the support team of my bank, and received a reply that they were working on replacing RC4 with something more sensible, which they did a few months later. But, for some reason, they still did not offer sensible key exchange algorithms, insisting on RSA.

There is nothing inherently wrong with RSA. It is very widely used and I know of no practical attacks on the implementation used by OpenSSL. But there is one problem when using RSA for key exchanges in SSL/TLS: The messages are not forward secret.

What is forward secrecy? Well, let’s say you do some online banking, and some jerk intercepts your traffic. He can’t read any of it (it’s encrypted), but he stores it for later, regardless. Then something like Heartbleed comes along, and the same jerk extracts the private SSL key from your bank.

If you were using RSA (or, generally, any algorithm without forward secrecy) for the key exchange he will now be able to retroactively decrypt all the traffic he has previously stored, seeing everything you did, including your passwords.

However, there is a way to get around that: By using key exchange algorithms like Diffie-Hellman, which create temporary encryption keys that are discarded after the connection is closed. These keys never go “over the wire”, meaning that the attacker cannot know them (if he has not compromised the server or your computer, in which case no amount of crypto will help you). This means that even if the attacker compromises the private key of the server, he will not be able to retroactively decrypt all your stuff.
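
For the mathematically curious, here is a toy Diffie-Hellman exchange with absurdly small numbers; real deployments use groups that are thousands of bits long (or elliptic curves), but the principle that both sides compute the same key without ever sending it is exactly the same.

    # Toy Diffie-Hellman key agreement (numbers far too small for real use).
    p, g = 23, 5                      # public parameters, known to everyone

    a = 6                             # Alice's secret
    b = 15                            # Bob's secret

    A = pow(g, a, p)                  # Alice sends A = g^a mod p over the wire
    B = pow(g, b, p)                  # Bob sends B = g^b mod p over the wire

    alice_key = pow(B, a, p)          # Alice computes (g^b)^a mod p
    bob_key = pow(A, b, p)            # Bob computes (g^a)^b mod p

    assert alice_key == bob_key       # the shared key, which was never transmitted
    # Throw away a and b after the session and the key cannot be reconstructed later;
    # that is what makes the exchange forward secret.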

So, why doesn’t everyone use this? Good question. Diffie-Hellman leads to a slightly higher load on the server and makes the connection process slightly slower, so very active sites may choose to use RSA to reduce the load on their servers. But I assume that in nine of ten cases, people use RSA because they either don’t know any better or just don’t care. There may also be the problem that some obscure guideline requires them to use only specific algorithms. And as guidelines update only rarely and generally don’t much care if their algorithms are weak, companies may be left with the uncomfortable choice between compliance to guidelines and providing strong security, with non-compliance sometimes carrying hefty fines.

So, my bank actually referred to the guidelines of a German institution, the “Deutsche Kreditwirtschaft”, which is an organisation comprising a number of large German banking associations. Among other things, it worked on the standards for online banking in Germany.

So, what do these security guidelines have to say about transport security? Good question. I did some research and came up blank, so I contacted the press relations department and asked them. It took them a month to get back to me, but I finally received an answer. The security guidelines consist of exactly one thing: “Use at least SSLv3“. For non-crypto people, that’s basically like saying “please don’t send your letters in glass envelopes, but we don’t care if you close them with glue, a seal, or a piece of string.”
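
For contrast, here is roughly what a meaningful minimum requirement could look like as an actual server-side setting today: a Python ssl configuration that refuses old protocol versions and non-forward-secret key exchanges. This is my own illustration, not anything the Deutsche Kreditwirtschaft publishes.

    # Sketch: what a stricter transport security requirement looks like in code.
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.minimum_version = ssl.TLSVersion.TLSv1_2       # refuse SSLv3, TLS 1.0 and TLS 1.1
    context.set_ciphers("ECDHE+AESGCM")                    # forward-secret key exchange only
    # context.load_cert_chain("server.crt", "server.key")  # certificate and private key would go here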

Worse, in response to my question whether they are planning to incorporate algorithms with forward secrecy into their guidelines, they stated that the key management is the responsibility of the banks. This either means that they have no idea what forward secrecy is (the response was worded a bit hand-wavy), or that they actually do know what it is, but have no intention of even recommending it to their member banks.

This leaves us with the uncomfortable situation where the banks point to the guidelines when asked about their lacklustre cipher suites, and those who make the guidelines point back at the banks, saying “Not my department!”. In programming, you would call that a circular dependency.

So, how can this stalemate be broken? Well, I will write another message to my bank, telling them that while the guidelines do not include a recommendation of forward secrecy, they also do not forbid using it, so why would you use a key made of rubber band and rocks if you could just use a proper, steel key?

Don Quixote charging the windmills by Dave Winer is licensed CC BY-SA 2.0

And, of course, the more people do this, the more likely it is that the banks will actually listen to one of us…

Introducing the SMTP GPG Proxy

I frequently encounter software that allows me to send mails, but has no GPG support out of the box (sometimes not even using plugins). This annoys me greatly, especially if it is software like FusionInvoice, which may transport sensitive information in its mail messages. Since FusionInvoice (and many other programs) support SMTP for sending their mail, and since I had a few spare hours, I decided to see if I could hack something together to add GPG support to those programs. And the result was…

…the SMTP GPG Proxy

The SMTP GPG Proxy, besides having an awful name (name proposals welcome), is a Python program. It provides an SMTP Server and will accept incoming mail messages, encrypt / sign them according to its settings and magic strings in the mail subject, and then forward them to the upstream SMTP server.

Since Python’s basic smtpd module does not support encrypted connections, I used the modified “secure-smtpd” module by bcoe. It extends the basic smtpd with support for SSL-encrypted connections while providing an almost identical interface. For the encryption itself, I used the standard “python-gnupg” wrapper, which isn’t ideal but gets the job done most of the time.

Setup

Setting up the SMTP GPG Proxy is quite easy. Just grab the latest version from the GitHub-Repository, install the dependencies, rename the config.py.example to config.py and fill in the settings (everything should be documented in there), and then launch the main program. Next, point your SMTP-speaking program at the IP and port you just configured (it is highly recommended to do this via localhost only, as incoming connections into the Proxy are, as of right now, not encrypted), and mail away.

Usage

To get the SMTP Proxy to encrypt a message, just send the mail and add the KeyIDs (including the “0x”) to the subject line, separated by whitespace. They will be automatically parsed and removed from the subject, so if you want to send a message with the subject “Invoice #23”, encrypted with 0x12345678 and 0x13374242, you would choose the subject “Invoice #23 0x12345678 0x13374242”. KeyIDs can be in short (8 characters) or long (16 characters) form, as well as full fingerprints (without whitespace and prefixed by “0x”).
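
As an illustration of how that subject parsing might look (a hedged sketch, not the project’s actual code), a regular expression can pull the KeyIDs out of the subject and strip them:

    # Sketch of KeyID extraction from a subject line (illustrative only).
    import re

    KEYID_PATTERN = re.compile(r"\b0x[0-9A-Fa-f]{8,40}\b")

    def split_subject(subject):
        key_ids = KEYID_PATTERN.findall(subject)
        cleaned = " ".join(KEYID_PATTERN.sub("", subject).split())
        return cleaned, key_ids

    print(split_subject("Invoice #23 0x12345678 0x13374242"))
    # ('Invoice #23', ['0x12345678', '0x13374242'])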

Depending on the settings, a missing public key will either lead to the message being rejected or sent unencrypted, or keyservers may be polled first and the message rejected or sent unencrypted only if no public key is found there. You can also configure the program to GPG-sign all messages, only encrypted messages, or no messages at all.
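
The encryption step itself can be sketched with the python-gnupg wrapper mentioned above; again an illustration under assumptions, not the proxy’s exact code, and the KeyIDs are the placeholder values from the example above.

    # Sketch: encrypting a message body with the python-gnupg wrapper.
    import gnupg

    gpg = gnupg.GPG()                                  # uses the default GnuPG home directory
    result = gpg.encrypt("mail body goes here", ["0x12345678", "0x13374242"])

    if result.ok:
        encrypted_body = str(result)                   # ASCII-armored ciphertext to put into the mail
    else:
        raise RuntimeError("Encryption failed: %s" % result.status)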

Development status

The program is currently in alpha, but it works very well for me. Still, as of right now there are some open issues with it, which I may or may not be working on. If you set up everything correctly, you should not encounter any problems. It is the border cases like incorrect SMTP passwords that are currently not dealt with very well.

Roadmap

If I find the time, I will keep developing the program, removing bugs, making it more stable, and adding more features like opportunistic encryption. However, I may not have the time to fully fix everything, and bugs that are annoying me will obviously be fixed faster than those I will never encounter in my usage.

However, as the program is open source and on GitHub, feel free to fork and submit pull requests. The code is, as of right now, shamefully undocumented, but as it has only about 200 lines, it should still be fairly easy to understand.

License

Like almost all my projects, I am releasing this program under the BSD 2-Clause License.

Howto: Usable disk encryption with Linux Mint 16

After I encountered a bunch of problems when setting up full disk encryption on Linux Mint 16, I thought I’d share the final solution I chose, and how to avoid some of the bugs in the Linux Mint Installer.

This howto is based on the Cinnamon variant of Linux Mint 16 x64, but it is probably applicable to other variants and architectures of Linux Mint 16.

Now, there are a few ways to partition your hard drive when using Linux. I wanted the following setup:

  • /boot, unencrypted, 256 MB (otherwise the system won’t boot)
  • /, encrypted, ~50 GB
  • swap-space, encrypted, ~10 GB
  • /home, encrypted, the remaining ~440 GB

This setup has the advantage that you can preserve the /home-partition when updating the system, saving you the trouble of making a backup and restoring it afterwards.

Now, the first option you have when installing Linux Mint 16 is to just select “encrypt this installation” during the install process. Sadly, this creates a setup without a dedicated /home partition, so it is out of the question.

Another Howto I found proposed to use the partition manager of the installer (select “something else” when asked how you want to install Linux Mint 16), and create the three encrypted partitions with the “Use as: Physical volume for encryption” option. This works, but it will make you enter your decryption passphrase three times in a row when booting up your PC, which is pretty darn annoying, so I went looking for a better option.

The way I finally solved my problem was like this: When installing Linux Mint 16, go to the partition manager and create the following setup:

  • /boot, unencrypted ext2, 256 MB
  • An encrypted partition with the size you want for the root of the file system (“/”): choose “Use as: Physical volume for encryption”, and a new virtual hard drive will appear. Click on the only partition on it, select “Use as: ext4 journaling file system”, and set the mount point to “/”.
  • A partition with the size you want your swap space to have, but choosing “Do not use this partition” in the “Use as”-Dropdown menu (more on this later)
  • An unencrypted partition with the size you want for your /home, choosing “ext4 journaling file system” with a mount point of “/home”

Now, there are a few things strange about this setup:

  • We do not activate the swap space because the installer won’t let you set up a system with an encrypted partition and unencrypted swap space; we will encrypt the swap later.
  • We do not encrypt the /home partition because that will happen later as well.

Now, after you’ve set up your partition table to your liking (substitute another file system for ext4 if you want to, but I’ll stick to ext4 for now), click “Install now”. The setup will warn you about your system not having any swap space, but you can ignore that (as long as you have enough RAM to install and boot Linux without swap; otherwise you’re out of luck with this howto).

Proceed with the installation until it asks you to create your user. Enter all your user information and do not check the “encrypt my files” option. Why? Because the installer repeatedly failed when I did, and the one time it worked it produced an unencrypted home directory regardless of my settings. Continue the install and reboot into your fresh Linux when the installation is finished. Make sure you are asked for the passphrase you set up for the root partition when booting!

Now, after booting into your fresh Linux, make sure “ecryptfs-utils” is installed (sudo apt-get install ecryptfs-utils), and install “gparted” (sudo apt-get install gparted). Then go to the “Users and Groups” settings and create a new administrator user. Set a password for it, then log out and back in as the new user. Start “gparted” and select the unused partition that was intended as swap space (make sure it is the correct one, as the encrypted “/” partition looks very similar; compare sizes to be sure). Right-click it and select “format to => swap”. Apply your changes, then right-click the newly created swap space and select “swapon”.

Next, open a terminal and enter sudo ecryptfs-migrate-home -u USER, where USER is the username of your primary user (not the one you are currently logged in as). Follow the on-screen instructions. After the migration has finished, run sudo ecryptfs-setup-swap and follow the instructions. Once you are done with this, log out (but do not reboot) and back into your primary account. Open a terminal, enter ecryptfs-unwrap-passphrase and note the passphrase down somewhere secure. This is your emergency passphrase for when you need to decrypt your home directory manually and/or have forgotten your user password. Afterwards, delete the user you created and reboot for good measure.

You should now have the following setup:

  • /boot, unencrypted
  • /, encrypted
  • /home, unencrypted, but containing an encrypted /home/USER.
  • swap-space, encrypted

You can verify this by running lsblk and checking if the relevant partitions show up as encrypted or not, and running df and looking for /home/USER/.Private in the leftmost column. If it is there, you should be safe.

This setup leaves you with encryption for all your relevant data, with minimal additional annoyance compared to an unencrypted Linux (you only need to enter one passphrase on boot, and your home directory will be decrypted on login). Now, a few caveats:

  • In this setup, only the home directory of the first user is encrypted. If you ever create additional users, you will have to repeat the ecryptfs-migrate-home and ecryptfs-unwrap-passphrase steps for each new user, or their home directory will not be encrypted. (Technically, this means this is not full disk encryption, but as long as you remember to do this for every new user, or just don’t create additional users, you should be just as safe.)
  • Your home directory can be decrypted with your user’s login password, so choose a strong one.
  • Encrypting the swap space in this fashion will break Linux’s hibernate function, so don’t use it afterwards (standby is fine, although it will leave everything decrypted in case your laptop is seized while it is in standby, so you may want to avoid that too).
  • I have not tried reinstalling Linux with an encrypted home directory yet, but according to this askUbuntu question, it should not be a problem.

I hope this helps you live a more encrypted life. Please let me know if you find any mistakes in this guide or if something I’ve written is unclear.

Update: There is a persistent bug in Ubuntu / Linux Mint which leads to the swap partition not being mounted on reboot. A workaround is described here. Thanks to Igo in the comments for testing the workaround and reporting back.

“Crypto-Hypocrisy”, or: on what is wrong with the security community

I’ve been annoyed by some things in the computer science community, and more specifically the computer security community, for a long time, and decided to finally write them down. This has become quite a wall of text. Depending on how you read it, this may be a rant or a plea.

A few days ago, when I was browsing the website of a security conference (SEC 2014 in this case, but this applies to a lot of conferences), I became curious. Shouldn’t a conference focusing on “Applied Cryptography”, among other things, automatically forward me to the HTTPS version of its website? I changed the http:// to an https://, and OH GOD WHAT THE HELL?

Shiny… Interesting: a self-signed certificate, expired for more than three months, and with a Common Name of “ensa ident” (as opposed to, you know, the domain name it was supposed to protect).

Well, someone  must’ve been sleeping, I figured, so I went to the contact page of the website, and OH MY GOD WHAT THE HELL?

Contact Information

A hotmail address. Not only that, but a hotmail address with no information about a PGP key for secure communication.

Now, some people may ask “so what?”. To which I reply: how can the security community as a whole condemn bad security practices and demand secure, end-to-end-encrypted communication for everyone, if the organizers of a conference that attendees pay between 250 and 550 € to attend can’t get their shit together enough to at least provide a valid, well-formed SSL certificate for their website? Hell, I’m not even asking for one that is signed by a proper CA. I can live with a self-signed certificate, but at least put in the effort to have the CN match your domain name and to create a new one once the old one has expired.
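For the record, checking this does not even require a browser. A few lines of Python are enough to look at the CN and expiry date of any server certificate (a quick sketch using the standard library’s ssl module and a reasonably recent version of the cryptography package; the hostname is just a placeholder):

    import ssl
    import datetime

    from cryptography import x509
    from cryptography.x509.oid import NameOID

    def check_certificate(host, port=443):
        """Fetch the server certificate and print its Common Name and expiry date."""
        pem = ssl.get_server_certificate((host, port))  # also works for self-signed certificates
        cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
        common_name = cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
        expired = cert.not_valid_after < datetime.datetime.utcnow()
        print("CN=%r, expires %s, expired: %s" % (common_name, cert.not_valid_after, expired))

    # check_certificate("conference.example.org")  # placeholder hostname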

As for the PGP key: I can understand if conferences do not provide a PGP key because their contact address is actually a mailing list that forwards the message to multiple people (although it is beyond me why, in 2014, there is no program that will decrypt all incoming messages and re-encrypt them with the keys of the recipients of the mailing list. Or, if there is such a program, why no one uses it). But this is a hotmail.fr address. This is bad on so many levels. A conference on “Information Security Education” can’t even afford to have its own eMail address?
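(As an aside regarding the parenthetical above: such a decrypt-and-re-encrypt gateway does not even look particularly hard to build. With the python-gnupg bindings, the core of it could be sketched roughly like this; the member list, fingerprints and paths are of course made up, and a real gateway would also have to deal with MIME structure and actual delivery.)

    import gnupg

    gpg = gnupg.GPG(gnupghome="/srv/list-gateway/gnupg")  # keyring with the list key and member keys

    # Hypothetical list members and their key fingerprints.
    MEMBERS = {
        "alice@example.org": "AAAA1111AAAA1111AAAA1111AAAA1111AAAA1111",
        "bob@example.org":   "BBBB2222BBBB2222BBBB2222BBBB2222BBBB2222",
    }

    def reencrypt_for_members(encrypted_mail, list_key_passphrase):
        """Decrypt a message encrypted to the list key and re-encrypt it for every member."""
        plaintext = gpg.decrypt(encrypted_mail, passphrase=list_key_passphrase)
        if not plaintext.ok:
            raise ValueError("could not decrypt incoming message: %s" % plaintext.status)
        out = {}
        for address, fingerprint in MEMBERS.items():
            out[address] = str(gpg.encrypt(str(plaintext), fingerprint))  # hand this to the mailer
        return out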

I regularly annoy companies by writing them eMails closing with “P.S.: Have you ever considered adding the capability to receive encrypted eMails to this address? [Link to a tutorial]”. Some ignore this, some make excuses. I only know of two companies that allow me to send them encrypted eMails. One of them is my bank, which will then reply unencrypted with a full quote, rendering my encryption worse than useless. The other is the people at Uberspace.de, who I am not a customer of, but who provide their key prominently on their website.

How can I keep a straight face demanding this of those companies if the people running our conferences are too lazy, or just plain don’t care enough about the ACTUAL TOPIC OF THEIR CONFERENCE, to take the 30 minutes to set something up? How can I keep a straight face if, until a few months ago, I could exchange encrypted eMails with more of my parents (2) than with the computer scientists I regularly mailed (1)?

The general reaction if I propose mail encryption to the average CS student is one of the following:

  1. I should totally do that, but it’s too much work
  2. I’m not writing anything secret, so why would I encrypt it?
  3. I don’t know anyone who is using mail encryption
  4. I’m not writing any mails anyway, I’m using Facebook to talk to other people.

To which I would reply, respectively:

  1. It’s 30 minutes of work, once, and then you can have it up and running until you reinstall your system. How is that “too much work”? Don’t you value your privacy enough to invest 30 minutes into protecting it?
  2. Because it is good practice to encrypt it. Because, even if you don’t write any secret letters, you would (hopefully) still not be happy to have other people read them.
  3. Then be the first and pester your computer scientist friends. Take them to a crypto party. It’s gonna be fun.
  4. …Goodbye. *shake head, go away, lose faith in humanity*

It is not about the contents needing hiding. It is not about keeping something from the NSA (although that’s an awesome side effect). It is about making encrypting communication a social norm, at least within the computer science community.

At 30c3, I received three business cards. Two of them were from people working for the Tor Project (Roger Dingledine and Jacob Appelbaum). Both of them had their PGP fingerprints printed on their business cards. This is what I want to see: get away from “here’s my eMail address”, towards “here’s a way you can send me an encrypted message and be sure you reach me and no one else”.

The third business card was from a nice woman of maybe 50 years with barely any background in computer science. She wanted to help an open source project, so she got a ticket to 30c3 and went to different assemblies and workshops (which, in itself, is pretty awesome, I might add). I recently sent her an eMail, signing it with my PGP key, as I always do for those mails. I received an encrypted response, stating that she had just started using GnuPG and Enigmail and asking if I would help her set up a laptop with Linux and full disk encryption.

If 50-something-year-old executive consultants can figure this out, why can’t the security community?

Thoughts on usability vs. security

Warning: This has become a wall of text, and is much less coherent than I wanted it to be. I hope I got my point across.

In the last few days, I had a lot of discussions about usability, security, and the conflict between them. One was in the “Issues” section of Mailpile, an upcoming FOSS webmail client with encryption support, among other things. One was with another participant of the “Night of Code” last Friday. Both of them were very civil, but at least in the second, it seems I did not get my point across properly. As I expect to discuss this again soon, I thought I’d write up my argument in a coherent fashion and put it here for everyone to see and discuss.

Usually, the topic comes up when I am discussing improving Mail Encryption to be more user-friendly, as the current version of Enigmail is just plain horrible in that regard (sorry, devs, but it’s the truth). There are several improvements on the way that I know about, but as I am helping design some of them, I sometimes get into an argument with others about how a “thing” should be designed. And that is good, by the way. The only way to make something better is to talk about what’s wrong and how to improve it.

So, usually, the discussion goes somewhat like this: “Hey, let’s improve the process of checking fingerprints to make keysigning parties less horrible” – “Great Idea!” – “oh, yes, and since checking fingerprints is so much easier then, let’s notify people when they are trying to send a mail to an untrusted public key” – “Hmmm. You know, that could be really annoying for people…” – “Yeah, and let’s add a big red warning message that you have to work really hard to turn off so people start signing keys” – “yeah, well, that would be extremely annoying and make people just randomly start signing keys, destroying the Web of Trust system” – “then let’s make it really hard to sign keys without checking fingerprints” – “Noooooooooooooo” – and so on.

The problem, I think, is that the other person is trying to design for security with an added bonus of a bit of usability for nerds, while I am trying to design for regular people, with changes where security may suffer a bit (if you do not know what you are doing). I am always pushing people to adopt mail encryption, and usually the answer is “hell no, that’s way too complicated”. I’m even getting that reply from other CS students, by the way, so that goes to show you just how bad it is (or at least, how bad it is perceived).

So, from my point of view, even if a person is using mail encryption in a way that is not completely secure against a sophisticated attack, that is still better than the person mailing completely unencrypted. Or, to put it another way: I’d rather have everyone mailing encrypted and 0.0001 % of them being hit by a targeted, sophisticated attack, than have everyone mailing unencrypted and having their mails read by everyone and their dog (and the NSA).

So, I’m arguing for the following: make mail encryption as “out of my face” as possible. Make it possible to automatically download recipient keys. Make it easy, but not required, to validate keys. Make it easy to read your mail on any device, even if it is encrypted (this is not an issue with Enigmail but with the ecosystem of PGP apps in general). Make the usage as easy to understand as possible, and write error messages without any technobabble. You can always add a “details” button to show the technical details of what went wrong, but my grandma does not need to know “Error – No valid armored OpenPGP block found”; she only needs to know that something went wrong and that she cannot be sure the mail is actually signed, and that in a language she can understand.
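To make the point about error messages a bit more concrete, here is a toy sketch of the mechanism I have in mind (the error identifiers and the wording are invented for illustration, and finding the right wording is exactly the hard part):

    # Map internal error identifiers to messages a non-technical user can act on,
    # and keep the original technical text behind an optional "Details" view.
    FRIENDLY_MESSAGES = {
        "NO_ARMORED_BLOCK": "This mail is not signed or encrypted, or the signature is damaged. "
                            "You cannot be sure who really sent it.",
        "MISSING_PUBLIC_KEY": "The key needed to check this mail could not be found. "
                              "You can try again later or ask the sender for their key.",
    }

    def user_facing_error(error_id, technical_details):
        message = FRIENDLY_MESSAGES.get(error_id, "Something went wrong while checking this mail.")
        return {"message": message, "details": technical_details}  # details go behind the button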

This is hard. I know it is, because I did not manage to come up with an error message that fits the criteria outlined above. But it being hard should not keep us from trying to achieve it. We are the only people able to change this, and the rest of the population is depending on us to get this solved so they can use it, too.

Create all of those features outlined above. Turn them on by default. Make the wizard pop up on first launch (yes, there is a wizard. It’s even supposed to pop up, but it doesn’t for me). Make it really easy to use PGP. Then make it possible to turn those options off again, for people who know what they are doing and want a clean keyring. But build a system your parents could use, and your non-technical friends could use. Because they are sending their data around unencrypted. And sometimes they are sending your data around. Unencrypted.

Think about it.

Results from the unofficial Enigmail “Night of Code”

Yesterday, we had a small “Night of Code” in Hamburg. Basically, five hackers met up in the rooms of the CCC Hamburg and tried to improve Enigmail, the Thunderbird extension for PGP-encrypting mails. It was a hell of a lot of fun, and we actually made quite a bit of progress on several improvements.

It all started with a discussion on the mailing list of the computer science department of the University of Hamburg. We had a lengthy discussion on what is wrong with Enigmail and PGP, and some of us decided to do something about it. Someone organized a room, called for a “Night of Code”, and a few people responded, me among them.

We started with a short introduction on the architecture of Enigmail and what the important files are. Afterwards, we discussed what needed improvements (the consensus being “basically everything about the UI”) and everyone chose one of the proposed improvements and started working.

I don’t want to spoil the surprise about what the others have been working on (although all of it will come in pretty handy once it is finished and hopefully merged into the main project), but I can say a bit about what I worked on.

So, one of the important things when using PGP is managing your Web of Trust. This includes signing the keys of other people (after you have validated that they are, in fact, the person they say they are). For that purpose, there are key-signing parties. And one of the major annoyances about those parties is the distribution of freshly signed keys.

On Linux, there is a neat command line tool called caff. It takes any number of Key-IDs, downloads the public keys, signs each ID separately and mails it (encrypted) to the provided eMail address. The problem is that caff is pretty annoying to set up, and only works on Linux.

The feature I am working on is something along those lines. I added a new checkbox to the dialog for signing keys…

The second checkbox is new, in case you were wondering. And the keys are totally legit, I checked. 😉

If you select the checkbox and sign a key (and no error occurs during the signing process), a new Message composition window will open:

The new Message

It will contain some sort of preset text and have the signed public key attached.

Now, this is all working great already, but there are still some things to do:

  • Save the last decision on whether to mail the key or not (currently, due to some weird behaviour of the Enigmail preferences function that I still need to figure out, it is not saved)
  • Automatically set the mail to be encrypted and signed, regardless of the settings.
  • Perhaps encrypt the public key before attaching it, to make sure the recipient needs his private key to get the new signatures?
  • Perhaps choose the sending account based on the private key that was used to sign the public key?

Now, the experience of working on Enigmail has been interesting and somewhat cool, but not without its problems.

  • To say the documentation of Enigmail is bad would be misleading, as it implies that there actually is documentation, which is not the case. Everything you want to use, you need to figure out yourself, possibly by running the addon with debug output active and seeing which functions are used in what order.
  • Thunderbird isn’t much better. Many of the important functions (adding an attachment!) had to be reverse-engineered from other addons or from the very helpful thunderbird-stdlib project on GitHub, as the documentation has some pretty big holes in significant places.

If you are an Enigmail dev and reading this: please provide at least some documentation on what is done where in the code, and what APIs can be used for new features. I know you probably understand the code, but the lack of documentation makes the entry barrier for new devs very high.

If you are a Thunderbird dev: See above. The current docs are not enough, and the function names are in parts weird enough to make it almost impossible to find out how to actually use them without checking the source files, which takes time and is extremely annoying.

All in all, I enjoyed my time hacking on Enigmail, but it could have been a lot more productive if there had been some form of documentation to work with. As for the new feature: I will try to get it to work properly and then submit a patch to the devs, but I do not know how long that will take, as my time is currently pretty limited because of other things I need to take care of (my bachelor’s thesis among them).

As for the others: I don’t know when their features will be finished, but we already have a bunch of ideas on what to do next, and if we find the time, we’ll create some more new features. Some of our ideas have the potential to vastly increase usability, so I am very curious as to the reactions of the devs. Let’s hope for the best.