
An update on the Crypto Regulation series

If you have been following this blog, you may have noticed that I have an unfinished series on the topic of crypto regulation (part 1, part 2, part 3). I’ve been meaning to finish it up, but life got in the way and I mostly forgot about it. Now it has become relevant again, as Great Britain has just introduced new draft legislation that would force companies to assist the government in removing encryption.

I’d like to finish up the series, but right now, I just don’t have the time. Luckily, someone else has done a much better job than I ever could: A dream team of cryptography experts has released a joint report called “Keys under Doormats: Mandating insecurity by requiring government access to all data and communications”. An excerpt:

Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels going dark, these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates.

We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today’s Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse forward secrecy design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

So, yeah. The cryptography equivalent of the Avengers has gone ahead and written this stuff better than I could have. So I will not be finishing my own series, and instead encourage you to read their article.

However, if it turns out that there is interest in me continuing this series, I may make some time to finish the final two articles. So, if you want me to continue, drop me a line in the comments or on Twitter, if that’s your thing.

Review: “The Circle” by Dave Eggers

Secrets are lies
Sharing is caring
Privacy is theft
— The Circle, Dave Eggers

The Circle is scary. Not scary in the sense of a thriller or horror novel. Scary in the sense in which Brave New World is scary: By showing the limitless capacity of humans to ignore the consequences of their actions, as long as they think it is for a higher goal (or even if it’s only for their own amusement).

The book follows the story of Mae Holland, freshly hired at The Circle, a social media / search / technology giant which has revolutionized the web with some sort of universal, single-identity system and assorted services (and is quite obviously a reference to Google). The story covers the development of both Mae as a person and the Circle as a company, which slides further and further into a modus operandi that would make everyone except the most radical post-privacy advocates flinch (the quote above encapsulates the views of the company quite well).

Over the course of the book, The Circle invents more and more technologies that are, on the surface, extremely useful and could change the world for the better. However, each invention further reduces the personal privacy of everyone and builds an ever-growing net of tracking and surveillance over the planet.

I don’t want to go into too much detail here (I hate spoilers), but over the course of the book, I found myself despising Mae more and more. For me, the character embodies everything that is wrong with social networks and with current trends on the internet in general. This book has it all: Thoughtless acts without regard for the privacy of others, generally zero reflection about the impact her actions may have, and the problem of Slacktivism:

Below the picture of Ana María was a blurry photo of a group of men in mismatched military garb, walking through dense jungle. Next to the photo was a frown button that said “We denounce the Central Guatemalan Security Forces.” Mae hesitated briefly, knowing the gravity of what she was about to do—to come out against these rapists and murderers—but she needed to make a stand. She pushed the button. […] Mae sat for a moment, feeling very alert, very aware of herself, knowing that [she had] possibly made a group of powerful enemies in Guatemala.

I really enjoyed the theme of the book (as in: I was terrified of the future it portrays; I’m a sucker for a good dystopia). However, the book suffers a little from the writing itself. I can’t put my finger on it, but something about the prose seemed off to me. The book also suffers from having a main character who, for me, was clearly an antagonist.

It serves as a warning not to blindly accept every new technology, and to critically ask how it could be misused and how your use of it may impact others, from small things like talking about other people on social networks (people who may wish to keep certain things private) to the idea of filming your own life.

A young man, seeming too young to be drinking at all, aimed his face at Mae’s camera. “Hey mom, I’m home studying.” A woman of about thirty, who may or may not have been with the too young man, said, walking out of view, “Hey honey, I’m at a book club with the ladies. Say hi to the kids!”

The Circle is not a happy book. Even though it has its problems, you should read it, because it gives some perspective on the direction our use of technology is taking. Read it, and think about it when you use your social networks or read about the newest products.

The Circle is the most terrifying dystopia of all: The one where many people would say “Dystopia? What dystopia? That sounds awesome, I’d love to live there”. And that, more than anything else, is why it terrifies me.

Crypto Regulation, Part 3: Regulating end-to-end encryption

This is part 3 of a series on the crypto regulations proposed by Cameron, Obama and others. Part 1 explains what this is all about and discusses different types of cryptography. Part 2 discusses the different ways transport encryption could conceivably be regulated. I encourage you to read those two first in order to better understand this article.

We are in the middle of a thought experiment: What if we were tasked with coming up with a regulation that allows us (the European law enforcement agencies) to decrypt encrypted messages while others (like foreign intelligence agencies) are still unable to do so. In the previous installment, we looked at how transport encryption could be regulated. We saw that there were a number of different ways this could be achieved:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle-Attack on all / select connections
  • Key escrow
  • Key disclosure laws

We have also seen where each of these techniques has its problems, and how none of them achieved the goal of letting us decrypt information while others cannot. Let’s see if we have better luck with the regulation of end-to-end encryption (E2EE).

Regulating end-to-end encryption

If we look at the history of cryptography, it turns out that end-to-end encryption is a nightmare to regulate, as the United States found out during the first “Crypto Wars“. It is practically impossible to control what software people run on their computers, and just as hard to regulate the spread of the software itself. There are simply too many computers and too many ways to get access to software to control them all.

But let’s leave aside the practical issues of enforcement (that’s going to be a whole post of its own) for now. There are still a number of additional problems we have to face: There are only two points in the system where we could gain access to the plain text of a message: The sender and the receiver. For example, even if a PGP-encrypted message is sent via Google Mail, we cannot gain access to the decrypted message by asking Google for it, as they don’t have it, either. We are forced to raid either the sender or the receiver to access the unencrypted contents of the message.

This makes it hard to collect even small amounts of these messages, and impossible to perform large-scale collection.1) And this is why end-to-end encryption is so scary to law enforcement and intelligence agencies alike: There could be anything in them, from cake recipes to terror plots, and there’s no way to know until you have decrypted them.

Given that pretty much every home computer and smartphone is capable of running end-to-end encryption software, the potential for unreadable communication is staggering. It is only natural that law enforcement and intelligence services alike are now calling for legislation to outlaw or at least regulate this threat to their investigatory powers. So, in our thought experiment, how could this work?

Regulation techniques

The regulation techniques from the previous part of this series mostly still apply, but the circumstances around them have changed, so we’ll have to re-examine them and see if they could work under these new circumstances. Also, given that end-to-end encrypted messages are almost always transmitted over transport-encrypted channels, a solution for the transport encryption problem would be needed in order to meaningfully implement any of these regulations.

We will look at the following regulation techniques:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle-Attack on all / select connections
  • Key escrow
  • Key disclosure laws
  • The “Golden Key”

Let’s take another look at each of these proposals and their merits and disadvantages for this new problem.

Outlawing cryptography

This proposal is actually surprisingly practical at first glance: Almost no corporations use end-to-end encryption (meaning that there would be next to no lobbying against such a proposal), comparatively few private persons use it (meaning there would be almost no resistance from the public), and it completely fixes the problem of end-to-end encryption. So, case closed?

That depends. Completely outlawing this kind of cryptography would not only leave the communication open to our own law enforcement, but also to foreign intelligence agencies. Recent reports (pdf) to the European Parliament’s Science and Technology Options Assessment Board (STOA) suggest that one of the ways to improve European IT security would be to foster the adoption of E2EE:

In this way E2EE offers an improved level of confidentiality of information and thus privacy, protecting users from both censorship and repression and law enforcement and intelligence. […] Since the users of these products might be either criminals or well-meaning citizens, a political discussion is needed to balance the interests involved in this instance.

As the report says, a tradeoff between the interests of society as a whole and those of law enforcement needs to be found. Outlawing cryptography would hardly constitute a tradeoff, and there are many legitimate and important uses for E2EE. For example, journalists may use it to protect their sources, human rights activists use it to communicate with contacts living under oppressive regimes, and so on. Legislation outlawing E2EE would make all of this illegal and would leave journalists unable to communicate with confidential sources without fear of revealing their identity.

In the end, a tradeoff will have to be found, but that tradeoff cannot be to completely outlaw the only thing that lets these people do their job with some measure of security, at least not if we want our society to still have adversarial journalism and human rights activists five years from now.

Mandating the use of weak or backdoored algorithms

This involves pretty much the same ideas and problems we have already discussed in the previous article. However, we also encounter a new problem: The software used for E2EE runs on many individual systems in many different versions (as opposed to a comparatively small number of corporations managing large numbers of identical servers which are easy to modify). Some of these versions are no longer actively developed, and many are open source and not maintained by EU citizens. Mandating the use of specific algorithms would therefore entail…

  • …forcing every developer of such a system to introduce these weak algorithms (and producing these updates yourself for those programs which are no longer actively maintained by anyone)
  • …forcing everyone to download the new versions and configure them to use the weak algorithms
  • …preventing people from switching back to more secure versions (although that is an issue of enforcement, which we will deal with later)

In practice, this is basically impossible to achieve. Project maintainers not living under the jurisdiction of the EU would refuse to add algorithms to their software that they know are bad, and many of the more privacy-conscious and tech-literate crowd would simply refuse to update their software (again, see enforcement). Assuming that using any non-backdoored algorithm would be illegal, this would be equivalent to outlawing E2EE altogether.

In a globalized world, many people communicate across state boundaries. Such a regulation would imply forcing foreigners to use bad cryptography in order to communicate with Europeans (or possibly getting their European communication partners in trouble for receiving messages secured with strong cryptography). In a world of global, collaborative work, you sometimes may not even know which country your communication partner resides in. The administrative overhead for everyone would be incredible, and people would either ignore the law or stop communicating with Europeans.

Additionally, software used for E2EE is also used in other areas: For example, many Linux distributions use GPG (a software for E2EE of eMails) to verify that software updates have not been tampered with. Using bad algorithms for this would compromise the security of the whole operating system.

Again, it is a question of the tradeoff: Does having access to the communication of the general population justify putting all of this at risk? I don’t think so, but then again, I am not a Minister of the Interior.

Performing a Man-in-the-Middle-Attack on all / select connections

If a Man-in-the-Middle-Attack (short: MitM) on transport security is impractical, it becomes downright impossible for most forms of E2EE. To understand why, we need to look at the two major ways E2EE is done in practice: The public key model and the key agreement model:

  • In the public key model, each participant has a public and a private cryptographic key. The public key is known to everyone and used to encrypt messages, the private key is only known to the recipient and is used to decrypt the message. The public key is exchanged once and keeps being re-used for future communication. This model is used by GPG, among others.
  • In the key agreement model, two communication partners establish a connection and then exchange a few messages in order to negotiate a shared cryptographic key (see the sketch below). This key is never sent over the connection2) and is only known to the two endpoints of the connection, who can now use that key to encrypt messages between each other. For each chat session, a new cryptographic key is established, with long-term identity keys being used to ensure that we are still talking to the same person in the next session. Variations of this model are used in OTR and TextSecure, among others.
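To make the key agreement model more concrete, here is a minimal sketch of an (unauthenticated) Diffie-Hellman exchange using Python’s cryptography library. Real protocols like OTR additionally authenticate these messages with long-term identity keys; this sketch only shows the core idea:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party generates a fresh, ephemeral key pair for this session.
    alice_private = X25519PrivateKey.generate()
    bob_private = X25519PrivateKey.generate()

    # Only the public halves travel over the network.
    alice_public = alice_private.public_key()
    bob_public = bob_private.public_key()

    # Both sides combine their own private key with the peer's public key
    # and arrive at the same shared secret; an eavesdropper who only saw
    # the public keys cannot compute it.
    alice_shared = alice_private.exchange(bob_public)
    bob_shared = bob_private.exchange(alice_public)
    assert alice_shared == bob_shared

    # Derive a symmetric session key from the raw shared secret.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"session"
    ).derive(alice_shared)

Because both key pairs are thrown away after the session, even a later compromise of the long-term identity keys does not reveal the session key; this is the forward secrecy property discussed in part 2.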

So, how would performing a MitM attack work for each of these models? For the public key model, you would need to intercept the initial key exchange (i.e. the time where the actual keys are downloaded or otherwise exchanged) and replace the keys with your own. This is hard, for multiple reasons:

  • Many key exchanges have already happened and cannot be retroactively attacked
  • Replaced keys are easily detected if the user is paying attention, and the forged keys can subsequently be ignored.
  • Keys can be exchanged in a variety of ways, not all of which involve the internet

So, one would not only have to attack the key exchanges but also prevent the user from disregarding the forged keys and using other methods for the key exchange instead. If we’re going to do that, we may as well just backdoor the software, which would be far easier.

Attacking a key agreement system wouldn’t work much better: We would have to intercept each key agreement message (which is usually itself encrypted using transport encryption) and replace the message with one of our own. These messages are also authenticated using the long-term identity keys of the communication partners, so we would either have to gain access to those keys (which is hard) or replace them with our own (which is, again, easily detected).

So, while this may theoretically be possible, it is far from viable and suffers from the same issues of enforcement all the other proposals do.

Key escrow

Key escrow sounds like the perfect solution: Everyone has to deposit their keys somewhere where law enforcement may gain access to them. The exact implementation may vary, but that’s the general idea. So, what’s wrong with it?

First off, the same caveats as before apply: You are creating an interesting target for both intelligence agencies and criminals. In addition, this would only work for the public key model, where the same keys are used over and over again. In the key agreement model, new keys are generated all the time (and the old ones deleted), so a way would have to be found to enter these keys into an escrow system and retain them in case they are ever needed. This would quickly grow into a massive database of keys (most of which would be worthless, as no crime was committed using them) which you would have to hang on to, just in case one of them ever becomes relevant.

Key disclosure laws

The same theme continues here: Key disclosure laws (if they are even allowed under European law) may be able to compel users to disclose their private keys, but users cannot disclose keys that no longer exist. Since the keys used in key agreement schemes are usually deleted after less than a day (often after only minutes), the user would be unable to comply with a key disclosure request from law enforcement, even if he wanted to. And since it is considered best practice not to keep logs of encrypted chats, the user would also be unable to provide law enforcement with a record of the conversation in question.

Changing this would require massive changes to the software used for encrypted communication, encountering the same problems we already discussed when talking about introducing backdoors into software. So, this proposal is pretty much useless as well.

The “Golden Key”

The term “Golden Key” refers to a recent comment in the Washington Post, which stated:

A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

— “Compromise needed on Smartphone encryption“, Washington Post, 2014

The article was an obvious attempt to propose the exact same idea (a backdoor) using a different, less politically charged word (“Golden key”), because any system that allows you to bypass a protection mechanism is, by its very nature, a backdoor. But let’s humor them and use the term, because a “golden key” sounds fancy, right? So, how would that work?

In essence, every piece of encryption software would have to be modified in a way that forced it to encrypt every message with one additional key, which is under the control of law enforcement. That way, the keys of the users can stay secret, and only one key needs to be kept safe to keep the communication secured against everyone else. Problem solved?
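To see why a “golden key” is just a backdoor by another name, consider a minimal sketch of how such a scheme would typically be built (Python’s cryptography library; all names are hypothetical). The message is encrypted once with a symmetric key, and that key is then wrapped both for the recipient and for the escrowed master key:

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    oaep = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    # Key pairs for the intended recipient and for the mandated "golden key".
    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    golden_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Encrypt the message once with a fresh symmetric key...
    message_key = Fernet.generate_key()
    ciphertext = Fernet(message_key).encrypt(b"meet me at noon")

    # ...then wrap that symmetric key for BOTH the recipient and the escrow key.
    wrapped_for_recipient = recipient_key.public_key().encrypt(message_key, oaep)
    wrapped_for_government = golden_key.public_key().encrypt(message_key, oaep)

    # Whoever holds the golden key's private half can unwrap the message key
    # and read the message, exactly like the intended recipient would.
    recovered_key = golden_key.decrypt(wrapped_for_government, oaep)
    assert Fernet(recovered_key).decrypt(ciphertext) == b"meet me at noon"

Structurally, this is indistinguishable from encrypting everything to a second, governmental key, which is precisely the backdoor the Washington Post claims not to want.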

Not so much. We still have the problem of forcing software developers to include that backdoor into their systems. Then there’s the problem of who is holding the keys. Is there only one key for the whole EU? Or, to put it another way, do you really believe that there is no industrial espionage going on between European Countries?

Or do we have keys for each individual country? And how does the software decide which key to encrypt the data with in this case? What if I talk to someone in France? How is my software supposed to know that and encrypt with both the German and the French key? How secure is the storage of that key? Again, we have a single point of failure: If someone gains access to one of these master keys, he/she can unlock every encrypted message in that country.

There are a lot of questions that would have to be answered to implement this proposal, and I am pretty sure that there is no satisfying solution (If you think you have one, let me know in the comments).

Conclusion

We have looked at six proposals for regulating end-to-end encryption and have found all of them lacking in terms of their effectiveness, with plenty of harmful side effects. All of these proposals reduce the security of everyone’s communication, not to mention the toxic side effects on basic human rights that are a given whenever we consider such measures.

There must be a tradeoff between security and privacy, but that tradeoff should not be less security for less privacy. Yet every attempt at regulating encryption we have looked at is exactly that: It makes everyone less secure and, at the same time, harms their privacy.

One issue we haven’t even looked at yet is how to actually enforce any of these measures, which is another can of worms entirely. We’re going to do that next, in the upcoming fourth installment of this series.


As before, thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are my own.


Footnotes
1 Although, as stated before, it is still possible to collect metadata, which is the thing intelligence agencies are most interested in anyway.
2 This works due to mathematical properties of the messages we send, but for our purposes, it is enough to know that both parties will have the same key, while no one else can easily compute the same key from the values sent over the network.

Crypto Regulation, Part 2: Regulating transport encryption

This is part 2 of a series on the crypto regulations proposed by Cameron, Obama and others. Part 1, explaining what it is all about and describing different types of cryptography, can be found here.

The declared goal of crypto regulation is to be able to read every message passing through a country, regardless of who sent or received it and what technology they used. Regular readers probably know my feelings about such ideas, but let’s just assume that we are members of David Cameron’s staff and are tasked with coming up with a plan on how to achieve this.1)

We have to keep in mind the two types of encryption we have previously talked about: transport and end-to-end encryption. I will discuss the problems associated with gaining access to communication secured by each technology, and possible alternatives to regulating cryptography. Afterwards, I will look at the technological solutions that could be used to implement the regulation of cryptography. This part will be about transport encryption, while the next part will deal with end-to-end encryption.

Regulating transport encryption

As a rule, transport encryption is easier to regulate, as the number of parties you have to involve is much lower. For instance, if you are interested in gaining access to the transport-encrypted communication of all Google Mail users, you only have to talk to Google, and not to each individual user.

For most of these companies, it probably wouldn’t even be necessary to regulate the cryptography itself: they could just be (and are) required to hand over information to law enforcement agencies. These laws could, if necessary, be expanded to include PRISM-like full access to the data stored on the servers (assuming this is not already common practice). Assuming that our goal really is only to gain access to the communication content and metadata, this should be enough to satisfy the needs of law enforcement.

Access to the actual information while it is encrypted and flowing through the internet is only required if we are interested in more than the data stored on the servers of the companies. An example would be the passwords used to log into a service, which are transmitted in an encrypted form over the internet. These passwords are usually not stored in plain text on the company servers. Instead, they store a so-called hash of the password which is easy to generate from the password but makes it almost impossible to restore the password from the information stored in the hash.2) However, if we were able to decrypt the password while it is sent over the internet, we would gain access to the account and could perform actions ourselves (e.g. send messages). More importantly, we could also use that password to attempt to log into other accounts of the suspect, potentially gaining access to more accounts with non-cooperating (foreign) companies or private servers.
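As an aside, the hashing idea from the previous paragraph can be sketched in a few lines of Python using only the standard library (the parameters here are illustrative; real systems tune them carefully, see footnote 2):

    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes) -> bytes:
        # Derive a hash from the password: easy to compute, but practically
        # impossible to invert back into the password.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    salt = os.urandom(16)
    stored = hash_password("correct horse battery staple", salt)

    # At login, the server repeats the derivation and compares the results;
    # the password itself is never stored on the server.
    attempt = hash_password("correct horse battery staple", salt)
    assert hmac.compare_digest(stored, attempt)

This is why seizing the servers does not reveal the passwords, while intercepting the login in transit would.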

Regulation techniques

So, assuming we want that kind of access to the communication, we’re back to the topic of regulating transport encryption. The different ways this access could be ensured are, in rising order of practicality:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle-Attack on all / select connections
  • Key escrow
  • Key disclosure laws

Let’s take a look at each of these proposals, their merits and their disadvantages.

Outlawing cryptography

Outlawing cryptography has the advantage of simplicity. There is no overhead of backdooring implementations, implementing key escrow, or performing active attacks. However, that is just about the only advantage of this proposal.

Cryptography is fundamental to the way our society works, and the modern information age would not be possible without it. You are using cryptography every day: when you get your mail, when you log into a website, when you purchase stuff online, even on this very website, your connection is encrypted.

It gets even worse for companies. They rely on their information being encrypted when communicating with other companies or their customers; otherwise, their trade secrets would be free for the taking. Banks would have to cease offering online banking. Amazon would probably go out of business. Internet crime would skyrocket as criminals hijacked unprotected accounts and stole private and corporate information.

So, given the resistance any such proposition would face, outlawing cryptography as a whole isn’t really an option. An alternative would be to just outlaw it for individuals, but not for corporations. That way, the banks could continue offering online banking, but individuals would no longer be allowed to encrypt their private information.

Such a law would technically be possible, but would raise a lot of problems in practice. Aside from being impossible to enforce, some existing programs can only save their data in an encrypted form (e.g. banking applications), and some people have devices they use both privately and for their job, which their employer may require them to encrypt. There are a lot of special cases that would cause problems in the actual implementation of this law, not to mention the possible damage from criminals gaining access to unencrypted private information. There would definitely be a lot of opposition to such a law, and the end result would be hard to predict.

Mandating the use of weak or backdoored algorithms

In this case, some party would come up with a list of ciphers which are considered secure enough against common “cyber criminals”, while offering no significant resistance to law enforcement or intelligence agencies. This could be achieved either through raw computational power (limiting the size of encryption keys to a level where all possibilities can be tried out in a reasonable timeframe, given the computational resources available to law enforcement / intelligence agencies), or through the introduction of a backdoor in the algorithm.

In cryptography, a backdoor could be anything from encrypting the data with a second key owned by the government, to make sure that they can also listen in, to using weak random numbers for the generation of cryptographic keys, which would allow anyone knowing the exact weakness to recover the keys much more quickly. This has apparently already happened: It is suspected (and has pretty much been proven) that the NSA introduced a backdoor into the Dual EC DRBG random number generator, and it is alleged that they paid off a big company (RSA) to then make this algorithm the standard random number generator in their commercial software.
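To illustrate why weak random numbers are fatal, here is a deliberately crude stand-in for that kind of weakness (the real Dual EC DRBG backdoor is far more subtle): if key generation is seeded from something an attacker can predict or enumerate, the attacker can simply regenerate the key.

    import random

    def weak_keygen(seed: int) -> bytes:
        # Key derived from a predictable seed instead of a real entropy
        # source (a sound implementation would use os.urandom instead).
        rng = random.Random(seed)
        return rng.getrandbits(128).to_bytes(16, "big")

    victim_key = weak_keygen(seed=1424563)  # e.g. derived from a timestamp

    # An attacker who knows the seed lies in a small range just tries
    # every possibility and recovers the key in seconds.
    recovered = next(
        weak_keygen(s) for s in range(1_400_000, 1_500_000)
        if weak_keygen(s) == victim_key
    )
    assert recovered == victim_key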

The problem with backdoors is that once they are discovered, anyone can use them. For example, if we mandated that everyone use Dual EC DRBG random numbers for their cryptographic functions, not only we, but also the NSA could decrypt the data much more easily. If we encrypt everything to a second key, then anyone in possession of that key could use it to decrypt the data, which would make the storage location of the key a very attractive target for foreign spies and malicious hackers. So, unless we want to make the whole system insecure to potentially anyone, and not just accessible to us, backdooring the cryptography is a bad idea.

The other option we mentioned was limiting the size of cryptographic keys. For example, we could mandate that certain important keys may only use key sizes of up to 768 bits, which can be cracked within a reasonable timeframe using sufficient computing power. But, once again, we encounter the same problem: If we can crack the key, other organizations with comparable power (the NSA, the KGB, the Chinese Ministry of State Security, …) can do the same.

Also, because the computational power of computers is still increasing every year, it may be that in a few years, a dedicated individual / small group could also break encryption with that key length. This could prove disastrous if data that may still be valuable a decade later is encrypted with keys of that strength, e.g. trade secrets or long-term plans. Competitors would just have to get a hold of the encrypted data and wait for technology to reach a point where it becomes affordable to break the encryption.

So, mandating the use of weak or backdoored cryptography would make everyone less secure against intelligence agencies and quite possibly even against regular criminals or corporate espionage. In that light, this form of regulation probably involves too much risk for too little reward (cracking these keys still takes some time, so it cannot really be done at a large scale).

Performing a Man-in-the-Middle-Attack on all / select connections

A man-in-the-middle (MitM)-Attack occurs when one party (commonly called Alice) wants to talk to another party (Bob), but the communication is intercepted by someone else (Mallory), who then modifies the data in transit. Usually, this involves replacing transmitted encryption keys with others in order to be able to decrypt the data and re-encrypt it before sending it on to the destination (the Wikipedia article has a good explanation). This attack is usually prevented by authenticating the data. There are different techniques for that, but most of the actual communication between human beings (e.g. eMail transfer, logins into websites, …) is protected using SSL/TLS, which uses a model involving Certification Authorities (CAs).

In the CA model, there are a number of organizations who are trusted to verify the identity of people and organizations. You can apply for a digital certificate, which confirms that a certain encryption key belongs to a certain website or individual. The CA is then supposed to verify that you are, in fact, the owner of said website, and issue you a certificate file that states “We, the certification authority xyz, confirm that the cryptographic key abc belongs to the website blog.velcommuta.de”. Using that file and the encryption key, you can then offer web browsers a way to (more or less) securely access your website via SSL/TLS. The server will send its encryption key and the certificate confirming that this key is authentic to clients, who can then use that key to communicate with the server.3)

The problem is that every certification authority is trusted to issue certificates for every website, and no one can prevent them from issuing a false certificate (e.g. confirming that key def is a valid key for my website). A man-in-the-middle could then use such a certificate to hijack a connection, replace my cryptographic key with their own and listen in on the communication.

Now, in order to get into every (or at least every interesting) stream of communication, we would need two things:

  • A certification authority that is willing (or can be forced) to give us certificates for any site we want
  • The cooperation (again, voluntary or forced) of internet providers to perform the attack for us

Both of these things can be written into law and passed, and we would have a way to listen in on every connection protected by this protocol. However, there are a few problems with that idea.

One problem is that not all connections use the CA model, so we would need to find a way to attack other protocols as well. These protocols are mostly unimportant for large-scale communication like eMail, but become interesting if we want to gain access to specialized services or specific servers.

The second problem is that some applications do additional checks on the certificates. They can either make sure that the certificate comes from a specific certification authority, or they can even make sure that it is one specific certificate (a process called Certificate Pinning4)). Those programs would stop working if we started intercepting their traffic.
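The pinning idea itself fits in a few lines of Python (a sketch; the fingerprint value is a placeholder you would record on first contact or ship with the application):

    import hashlib
    import ssl

    PINNED_SHA256 = "d5a51bc0f7..."  # known-good fingerprint (placeholder)

    def certificate_is_pinned(host: str, port: int = 443) -> bool:
        # Fetch the server's certificate and compare its SHA-256 fingerprint
        # against the pinned value. A man-in-the-middle presenting a forged
        # certificate, even one signed by a "trusted" CA, fails this check.
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest() == PINNED_SHA256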

The third problem is that it creates a third point at which connections can be attacked by criminals and foreign intelligence agencies. Usually, they would have to attack either the source or the destination of a connection in order to gain access to the communication. Attacking the source is usually hard, as that would be your laptop, and there are an awful lot of personal computers which you would have to attack in order to gain full access to all communication that way.

Attacking the destination is also hard, because those are usually servers run by professional companies who (hopefully) have good security measures in place to prevent those attacks. It is probably still possible to find a way in if you invest enough effort, but it is hard to do at scale.

However, if you introduce a few centralized points at which all communication flowing through the network of an internet operator is decrypted and re-encrypted, you also create one big, juicy target, because you can read all of those connections by compromising one server (or at least a much smaller number of servers than otherwise). And experience has shown that for juicy targets like that, intelligence agencies are willing to invest a lot of effort.

So, performing MitM-Attacks on all connections would not work for all types of connections, it would not work for all devices, and it would create attractive targets for hostile agencies to gain access to a large percentage of formerly secured traffic. That does not seem like a good trade to me, so let’s keep looking for alternatives.

Key escrow

Key escrow (sometimes called a “fair” cryptosystem by proponents and key surrender by opponents) is the practice of keeping the cryptographic keys needed to decrypt data in a repository where certain parties (in our case, law enforcement agencies) may gain access to them under certain circumstances.

The main problem in this case is finding an arrangement where the keys are stored in a way that lets only authorized parties access them. Assuming we want to continue following a system with judicial oversight, that would probably mean that the escrow system could only be accessed with a warrant / court order. It is hard to enforce this using technology alone, and systems involving humans are prone to abuse and mistakes. However, with a system as security critical as a repository for cryptographic keys, any mistake could prove costly, both in a figurative and a literal sense.

Then there is the problem of setting the system up. Do you want a central European repository? A central repository for each country? Will every server operator be required to run escrow software on their own server? Each of these options has its own advantages and drawbacks.

  • A European repository would mean less administrative effort overall, but it would create a single point of failure, which, when compromised, would impact the internet security of the whole EU. As with the issue of man-in-the-middle attack devices, history has shown that foreign agencies can and will go to a lot of effort to compromise such repositories. A central European repository would also assume that European countries do not spy on each other, which is a naive assumption.
  • Country-wide repositories fix the last problem, but still suffer from the others. They are attractive targets for both foreign intelligence agencies and cybercriminals.
  • Individual repositories face the problem of compatibility (there are a LOT of different operating systems and versions running on servers). They are less centralized, which is good (the effort to break into them increases)5), but they also imply that law enforcement would have to be able to electronically retrieve the key on demand. If a suspect knew that the police were onto him, he could thus disable the software or even destroy the key and server in order to prevent the police from retroactively decrypting potential evidence they had already captured.

Again, we have encountered administrative problems and important security concerns that make this option problematic at best. So, why don’t we take a look at how things are done right now in Great Britain and see whether it would make sense to expand this law to the rest of Europe.

Key disclosure laws
(Image: key disclosure in practice. “Security” by Randall Munroe, licensed CC BY-NC 2.5)

The British Regulation of Investigatory Powers Act of 2000 (short: RIPA) includes a provision requiring suspects in a crime to hand over encryption keys or face jail time of up to two years (or up to five in cases of terrorism or suspected child pornography).6) The law has already been used to imprison at least three people for refusing to give up encryption keys.

However, all members of the Council of Europe have ratified the European Convention on Human Rights. While it is not specifically mentioned, the European Court of Human Rights holds that

…the right to remain silent under police questioning and the privilege against self-incrimination are generally recognized international standards which lie at the heart of the notion of a fair procedure under Article 6 [of the European Convention on Human Rights].

Requiring an individual to surrender keys would probably be in violation of the right to remain silent (although there are different opinions on that). Any such law would almost certainly be annulled by the Court of Justice of the European Union, as it did with the Data Retention Directive.

However, such a law could conceivably be used to compel companies or witnesses to disclose encryption keys they have access to. These laws exist in some European countries, and could be expanded to all of Europe. It would remain to be seen what the European Court of Justice would think of that, as such a law would definitely be challenged, but the potential of a law being annulled by the ECJ has not prevented the European Parliament from passing laws in the past.

There is another, more technical concern: More and more websites employ cryptographic techniques that ensure a property called (perfect) forward secrecy, (P)FS for short. This ensures that even if an encrypted conversation is eavesdropped on and recorded, and even if the encryption keys are surrendered to law enforcement afterwards (or stolen by criminals), the recording still cannot be decrypted. The only way to eavesdrop on this kind of communication is to perform an active man-in-the-middle attack while in possession of a valid key.

This means that even if law enforcement has a recording of evidence while it was being transmitted, and even if they could force someone to give them the relevant keys, they would still be unable to gain access to said evidence. This technology is slowly becoming the standard, and the percentage of connections protected by it will only grow, meaning that laws requiring the disclosure of keys after the communication has taken place will become less and less useful over the next years.

Conclusion

We have taken a look at five different proposals for regulating transport security, and have found that each is either extremely harmful to the security of the European internet or ineffective at providing access to encrypted communication. Each of the proposals also holds an enormous potential for abuse from governments and intelligence services.

This concludes part 2 of my series on crypto regulation. Part 3 discusses possible ways to regulate end-to-end cryptography.


As before, thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are my own.


Footnotes
1 I’m playing “Devil’s system engineer” here and am obviously completely opposed to any of the measures I describe in this article, in case there was any doubt.
2 Again, this is a simplification. In the real world, there are important considerations, including the choice of the proper hash function and salting of the passwords, but that is out of the scope of this article.
3 As always, I am simplifying matters here, but the exact inner workings of TLS are not relevant to this article.
4 There is a Firefox extension that does that
5 …assuming the key escrow software does not have a security hole itself, which is an optimistic assumption.
6 Distressingly, it does not even distinguish between willingly not giving up the key and being unable to give up a key. This means that if the police thinks something is encrypted, and it is not, you can be sent to jail for refusing to give up a key to decrypt imaginary encrypted data.

Crypto Regulation, Part 1: What is it all about?

Over the last weeks, we’ve had a slew of politicians asking for new legislation in response to the Paris attacks. The proposed new regulations range from a new Data Retention directive (here’s my opinion on that) to PNR (Passenger Name Records, data about the passengers of all flights within Europe) data exchange within the EU.

By far the most worrying suggestion initially came from UK Prime Minister David Cameron, but was taken up by Barack Obama, the Counterterrorism Coordinator (pdf) of the EU, and the German Innenminister (Minister of the Interior), de Maizière: A regulation of encryption technology. The reasons they give are very similar: We need to be able (in really important cases, with proper oversight, a signed warrant, et cetera) to read the contents of any communication in order to “protect all of us from terrorists”.1)

The irony of justifying this with the Paris attack, a case where the terrorists were known to the relevant authorities and used unencrypted communication, is apparently lost on them.

In this series of posts, I will take a look at what crypto regulation means, how it could (or could not) work in practice, and why it appeals to pro-surveillance politicians either way.

An (extremely) brief primer on cryptography

Cryptography is used to hide information from unauthorized readers. In order to decrypt a message, you need three things: the encrypted message (obviously), knowledge of the algorithm that was used to encrypt it (which can often, but not always, be easily determined), and the cryptographic key that was used to do it. When we talk about crypto regulation, we usually assume that the algorithm and the message are known to whoever wants to read it, and the only missing piece is the key.
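As a toy illustration in Python (using the cryptography library’s Fernet recipe): even with the ciphertext in hand and the algorithm known, everything hinges on the key.

    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"attack at dawn")

    # With the key, decryption is trivial...
    assert Fernet(key).decrypt(ciphertext) == b"attack at dawn"

    # ...without it, knowing the algorithm does not help; all an attacker
    # can do is guess keys.
    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)
    except InvalidToken:
        print("wrong key, the message stays hidden")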

Cryptography is all around you, although you may not see it. In fact, you are using cryptography right now: This website is protected using SSL/TLS (that’s the https:// you see everywhere). You are also using it when you withdraw money from an ATM, when you send a mail, when you log into any website, and so on. All of those things use cryptography, although the strength (meaning how hard that cryptography is to break) varies.

A (very) brief history of crypto regulation to date

Crypto regulation is hardly a new idea. For a long time, the export of encryption technology was regulated as a munition in the United States (the fight for the right to freely use and export cryptography was called the Crypto Wars and spawned some interesting tricks to get around the export restriction). This restriction was relaxed, but never completely removed (it is still illegal to export strong encryption technology into “rogue states” like Iran).

During the last 10 years or so, there haven’t really been serious attempts to limit the use and development of encryption technology2), leading to the rise of many great cryptographic tools like GnuPG, OTR, Tor and TextSecure.3) But now, there appears to be another push to regulate or even outlaw strong encryption.

What is “strong encryption”?

In cryptography, we distinguish between two4) different kinds of encryption: transport encryption and end-to-end encryption. Transport encryption means that your communication is encrypted on its way from you to the server, but decrypted on the server. For example, if you send a regular eMail, your connection to the server is encrypted (no one eavesdropping on your connection is able to read it), but the server can read your eMail without a problem. This type of encryption is used by almost every technology you use, be it eMail, chats (except for a select few), or telephony like Skype.
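From a program’s point of view, transport encryption looks roughly like this minimal Python sketch: the connection itself is encrypted, but the server at the other end of course sees the request in plain text.

    import socket
    import ssl

    # Open a TLS-protected connection; everything on the wire is encrypted.
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection(("example.org", 443)),
                         server_hostname="example.org") as conn:
        conn.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\n\r\n")
        # An eavesdropper sees only ciphertext; the operator of example.org,
        # however, has just read the request in the clear.
        reply = conn.recv(4096)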

The major drawback of transport encryption is that you have to trust the person or organization operating the server to not look at your stuff while it is floating around on their server. History has shown that most companies simply cannot be trusted to keep your data safe, be it against malicious hackers (see Sony), the government (see PRISM), or their own advertising and analytics desires (see Google Mail, Facebook, …).

The alternative is end-to-end encryption. For this, you encrypt your message in a way that only allows the legitimate receiver to decrypt it. That way, your message cannot be read by anyone except the legitimate receiver.5) The advantage should be obvious: You can put that message on an untrusted server and the operators of said server cannot read it.

The drawback is the logistics: The recipients need to have their cryptographic keys available to decrypt the message, which can be a hassle if you have a lot of devices. The keys can also be stolen and used to decrypt your messages. For some usage scenarios like chats, there are solutions like the aforementioned OTR and TextSecure (which you should install if you own an Android phone), but there is no comparably convenient solution for eMails. End-to-end encryption also does not protect the metadata (who is talking to whom, when, for how long, et cetera) of your messages, only the contents.

When politicians are talking about “strong encryption”, they are probably referring to end-to-end encryption, because that data is much harder to obtain than transport-encrypted data, which can still be seized on the servers it resides on. To read your end-to-end encrypted data, they would have to seize both the encrypted data and your encryption keys (and compel you to give them the passwords you protected them with), which is a lot harder to do.

Conclusion

Now that we have a basic understanding of the different types of encryption used in the wild, we can talk about how to regulate them. This will be covered in part 2 of this series.


Thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are solely mine.


Footnotes
1 I dislike the term “terrorist” because it can be (and has been) expanded to include pretty much anyone you disagree with. However, for readability, I will use it in the connotation most used by western media, i.e. meaning the Islamic State, Al Qaeda, et cetera.
2 Although it appears that these efforts were simply put into introducing backdoors into algorithms and implementations instead.
3 And not-so-great, but still necessary tools like OpenSSL.
4 We obviously distinguish between many more than that, but for this article, two will be enough.
5 This is, again, a gross simplification, but sufficient for this article.

Using Anonymity for good, or Why Tor Matters

During the last years, there has been a disturbing trend of law enforcement agencies (both European and American) demonizing the Tor project and anonymity in general, and Tor hidden services specifically. Recently, during 31c3, Jacob Appelbaum (a Tor developer and generally awesome person) put out a call to the community to start conversations about anonymity in order to inform people about why anonymity is important and how it is useful not only to (perceived or actual) criminals, but also to regular people. This is my (public) contribution.

First, I will briefly explain how Tor in general and hidden services specifically work. If you are familiar with Tor and hidden services, feel free to skip ahead.

What is Tor?

“Tor” stands for “The Onion Router”. It is a program that can be used to browse the internet anonymously (the websites you visit cannot identify you unless you provide them with identifying information yourself, e.g. by logging in). It also hides which websites you are visiting from your internet company. This is achieved (slightly simplified) by sending your internet traffic through a number of servers all over the globe before delivering it to the website you are visiting.
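The “onion” part can be illustrated with a small Python sketch (using Fernet as a stand-in; the real protocol is considerably more involved): the client wraps its message in one layer of encryption per relay, and each relay can remove exactly one layer, so no single relay sees both who you are and what you are requesting.

    from cryptography.fernet import Fernet

    # One key per relay on the chosen path (entry, middle, exit).
    relay_keys = [Fernet.generate_key() for _ in range(3)]

    # The client wraps the message for the exit relay first, the entry last.
    packet = b"GET example.org"
    for key in reversed(relay_keys):
        packet = Fernet(key).encrypt(packet)

    # Each relay in turn peels off exactly one layer and forwards the rest.
    for key in relay_keys:
        packet = Fernet(key).decrypt(packet)
    assert packet == b"GET example.org"

The entry relay knows your address but sees only an encrypted blob; the exit relay sees the request but not who sent it.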

Tor also supports a system called “hidden services“. A hidden service is a website (or any other type of service, like a mail or chat server) that can only be reached over the Tor network. When used properly, the server never knows the identity of users connecting to it, and the users never know the location of the server they are talking to.

The usual caveats apply: Tor cannot protect your identity if you use it incorrectly. For example, you will obviously not be anonymous if you log into Facebook via Tor. Read the warnings on the download site.

Why use Tor?

There are many reasons why you may want to use Tor, and the overwhelming majority of them do not involve anything that you may find questionable. For example, Tor is used…

  • …by dissidents who want to get around state censorship (e.g. in China, Syria, …)
  • …by whistleblowers and journalists alike to protect themselves and their sources
  • …by privacy-conscious people who want to avoid the omnipresent tracking on many websites
  • The list goes on. The Tor project has a nice list of potential uses and users of their software.

But I was told criminals use Tor!

Yes, there are people who are using Tor to hide their identities when extorting money, or to buy and sell drugs. It is in the nature of an anonymity system that it is impossible to prevent malicious use while still allowing those with “legitimate” (however you would define that) interests to use it. In the end, it all comes down to a tradeoff between the good and the bad that Tor does. How many drug smuggling rings equal one Edward Snowden? How many Chinese dissidents equal one criminal using Tor to extort money?

In my personal opinion, Tor does more good than it does bad. You may think differently. Just keep in mind that Tor does save lives under oppressive regimes, and that it enables people like Edward Snowden to come forward with at least a small measure of safety. You will have to decide whether it is worth losing all of that to cut off a channel for the drug trade. In the end, there will always be ways to more-or-less-securely trade drugs, but there may not be any way for dissidents to safely use the internet.

And what about those hidden services?

Hidden services enjoy a particularly bad reputation as a place where only drug traders and pedophiles hang out, and it is true that there is a lot of awful stuff hosted on hidden services. But again, there are a lot of different ways these hidden services can be used. Here are two ways in which I personally use hidden services:

  • I have my own server for instant messaging using Jabber / XMPP, and I connect to it using a Tor hidden service. That way, my server does not know my current IP address (which is good, in case it ever gets taken over by criminals), and it also prevents anyone watching the network from identifying that I am using it at all. Additionally, it gives the other users of my server a way to use it and still be sure that I cannot track them. I would obviously never even try to track them, but I firmly believe in minimizing the amount of damage any one party can do, no matter how trustworthy.
  • I also have a separate hidden service I use to access my server via the SSH protocol (a protocol used to remotely administer my server), as doubt has lately been cast on the security of the SSH protocol. By using hidden services, I am adding another layer of security to the connection, which helps keep my server secure against the aforementioned criminals.

In both cases, I am not interested in hiding the location or identity of my server (as that is trivial to determine using the protocols themselves), but more interested in hiding myself from my server, and hiding the fact that I am talking to the server. This makes it slightly harder to identify me, and much harder to identify which channels I am using to communicate (another case of minimizing the information available to any single party). And, most importantly, it adds another layer of protection to the information I am sending.

Closing notes

I hope that this article helped you understand that there are many different ways people use anonymity tools like Tor, and that many of them are completely acceptable to every sane person. So, what I am asking of you is simple: Keep this in mind when you next hear politicians railing against anonymity. For every criminal, pedophile and “terrorist” using Tor, there is at least one dissident, activist, journalist, or server operator using the same software for good.

Life is not as easy as people make it sound. Why should the issue of anonymity be any different?

How the AirBnB-App is tracking your location

AirBnB can be used to find rooms in other cities while you travel. For that purpose, it also offers an official Android application. As the app requests some dangerous permissions (location, contacts, …), I enabled the “privacy guard” feature of CyanogenMod right away, which blocks access to location and contacts and asks the user to confirm each access to one of these resources. Due to these prompts, I noticed that AirBnB requests your location a lot, including while the app is not active (in the background, but not terminated).

This made me curious, so I set up mitmproxy to take a look at the network traffic of the app. Fortunately for me (and unfortunately, in general), while it uses HTTPS to phone home, it does not implement certificate pinning, so it was trivial to get a dump of the requests and responses it sends and receives. And, as it turns out, AirBnB is indeed very curious.
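For the curious: this kind of inspection takes little more than a small mitmproxy addon script. Here is a hedged sketch (the assumption that location data travels in fields containing “lat” is mine; adjust to what you actually see in the traffic):

    # airbnb_logger.py -- run with: mitmproxy -s airbnb_logger.py
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Log any request to an AirBnB host that appears to carry
        # location parameters.
        if "airbnb" in flow.request.pretty_host and (
            "lat" in flow.request.query or "lat" in (flow.request.text or "")
        ):
            print(flow.request.method, flow.request.pretty_url)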

When is your location disclosed?

The app always sends your current location when it is started. In fact, a whole host of information is sent to AirBnB, including your GPS location with a precision of seven decimal places, your current city in human-readable form, your system language and OS version, the type of your device (phone, tablet), and even a bunch of settings you can presumably change if you are logged into your account on the website. Judging from the presence of an “is_logged_in” field, I assume that this information will be linked to your account if you are logged into the app (I was not).

The app also sends your GPS position when you search for offers, and while it loads the “discover” tab (which displays featured places and locations you could travel to). It has to be stressed that your location is not actually needed for any of this; I assume it is just AirBnB being curious and wanting the data for their analytics. (They also embed a bunch of other trackers, including Google Analytics, New Relic, Flurry, and Facebook, but as far as I could find out, they do not disclose your location to these.) There are probably additional cases where your location is sent to AirBnB, but I stopped here, mostly because I was not interested in sending them even more data.

AirBnB also regularly requests your current location every five minutes, but as far as I can tell, it does not send these periodic fixes to the server.

What is your location used for?

That is the big question. As the data is not needed to answer your queries, I can only assume that they are using it for their analytics. So, let’s take a look at their privacy policy:

“When you use certain features of the Platform, in particular our mobile applications we may receive, store and process different types of information about your location, including general information (e.g., IP address, zip code) and more specific information (e.g., GPS-based functionality on mobile devices used to access the Platform or specific features of the platform).”

Okay, interesting. Is there a way to opt out of this?

“If you access the Platform through a mobile device and you do not want your device to provide us with location-tracking information, you can disable the GPS or other location-tracking functions on your device, provided your device allows you to do this. See your device manufacturer’s instructions for further details.”

Oh. Okay. And for what, precisely, are you using the data?

We use and process Information about you for the following general purposes:

  1. to enable you to access and use the Platform;
  2. to operate, protect, improve and optimize the Platform, Airbnb’s business, and our users’ experience, such as to perform analytics, conduct research, and for advertising and marketing;
  3. to help create and maintain a trusted and safer environment on the Platform, such as fraud detection and prevention, conducting investigations and risk assessments, verifying the address of your listings, verifying any identifications provided by you, and conducting checks against databases such as public government databases;
  4. to send you service, support and administrative messages, reminders, technical notices, updates, security alerts, and information requested by you;
  5. where we have your consent, to send you marketing and promotional messages and other information that may be of interest to you, including information sent on behalf of our business partners that we think you may find interesting. You will be able to unsubscribe or opt-out from receiving these communications in your settings (in the “Account” section) when you login to your Airbnb account;
  6. to administer rewards, surveys, sweepstakes, contests, or other promotional activities or events sponsored or managed by Airbnb or our business partners; and
  7. to comply with our legal obligations, resolve any disputes that we may have with any of our users, and enforce our agreements with third parties.

So, basically, they reserve the right to do whatever they want with your data. Great.

Why is this bad?

Your current location is not their business (quite literally). The app offers exactly one function that technically requires your current location, and that is “accommodations around me”. In every other situation, your location is not needed to serve your request, so it should not be disclosed to them. This is not some esoteric concept; it is basic privacy. And the best way to prevent the misuse of personal information is still not to collect it in the first place.
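Data minimization is not even hard. If a service genuinely needs a rough location, the app can coarsen the fix on the device before anything is transmitted. A toy illustration in Python (my code, not AirBnB’s):

    # Seven decimal places of a degree is roughly a centimetre;
    # two decimal places is roughly a kilometre -- plenty for "which city".
    def coarsen(lat: float, lon: float, decimals: int = 2):
        """Round a GPS fix so the server learns the city, not the street."""
        return (round(lat, decimals), round(lon, decimals))

    print(coarsen(52.5200066, 13.4049540))  # -> (52.52, 13.4)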

AirBnB’s reaction

I contacted AirBnB support via Twitter and, later, via email. The response I got wasn’t very helpful:

The current location is requested in order to provide you rapidly with listings around your area whenever you go to search for a place. You should receive that request when starting it.

This may explain the periodic requests every five minutes, but it does not explain why the information is sent to the server. AirBnB, if you are reading this, feel free to contact me or comment on this article.

Closing notes

AirBnB is probably not the only offender in this regard. It probably isn’t even the worst offender. I am just using it to illustrate a growing trend among companies: collect everything, whether you need it or not. They may never misuse this information. They may not even use it at all. The problem is that I do not know what they are doing with it. And this hunger for more and more data, combined with the secrecy around what it is actually used for, makes me uncomfortable.

An interesting experience: Writing to 400 candidates

During the last week, I worked on an interesting project. But instead of programming, this time it involved politics: I wrote a message to 403 German candidates for the European Parliament in the upcoming European elections, on the topic of digital rights.

It all started when I heard of WePromise.eu. In a nutshell: candidates can promise to follow a charter of basic digital rights, supporting laws that strengthen these rights and opposing those that seek to reduce them. The charter contains a lot of very obvious, sensible points, and some less obvious but equally sensible ones, like export controls for surveillance and censorship equipment. Voters, in return, can promise to vote in the elections, and to vote for a party whose candidates support these rights.

Now, my original plan was to write a physical letter to a few candidates from my area, but when I asked the people behind WePromise for some material, they also supplied me with a list of all German candidates, including their email addresses. And since tools like Mail Merge make sending a lot of personalized emails very easy, I decided to write to every single candidate on that list. I quickly removed all candidates who had already pledged their support and all candidates without a known mail address, leaving me with 403 candidates, ranging from people almost guaranteed a seat in the European Parliament to the person on the 88th spot of a tiny party that may or may not get one or two candidates in. I wrote up a message detailing the project and its aims, explaining why I support it, and asking them to support it as well (or, alternatively, to write me a quick mail explaining why they do not want to). I fed the message and the spreadsheet to Mail Merge, waited two minutes, and the mails were sent.
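If you prefer scripting to an add-on, the same thing is a dozen lines of Python. A rough sketch, assuming a CSV file with “name” and “email” columns and an SMTP server on localhost; the file name, addresses, and template text are of course made up:

    # sendmails.py -- bulk-personalized mail, the do-it-yourself way
    import csv
    import smtplib
    from email.mime.text import MIMEText

    TEMPLATE = "Dear {name},\n\nhave you heard of WePromise.eu? ..."

    with smtplib.SMTP("localhost") as server, open("candidates.csv") as f:
        for row in csv.DictReader(f):
            msg = MIMEText(TEMPLATE.format(name=row["name"]))
            msg["Subject"] = "A pledge for digital rights"
            msg["From"] = "me@example.org"
            msg["To"] = row["email"]
            server.send_message(msg)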

I received a bunch of autoreplies and some error messages about incorrect email addresses (which I tried to correct and update in my spreadsheet, sending the corrected addresses back to WePromise). Then I waited. That was one week ago.

As of today, I have received replies from over 25 candidates, ranging from the aforementioned 88th spot on the ÖDP’s list to current members of the European Parliament. The number of German candidates on the site has jumped from 22 to 37, with a few more candidates having promised their signature but not yet appeared on the website. I had some very interesting discussions about the advantages and disadvantages of online anonymity, during which I convinced at least one candidate to change his views (two more discussions are ongoing). I also received three replies from parties that are generally not considered very pro-internet, all of them stating that they would not sign the pledge but would “continue to fight for data protection”, and all of them from supporters of data retention laws. (I’ll leave you to figure out how the hell that is supposed to work, because I have no idea.)

So, some statistics from one week into the project:

  • 15 new signatures (6x Die Grünen, 4x ÖDP, 3x SPD, 2x FDP)
  • 5 signatures pledged that have not yet appeared on the website (2x Linke, 2x SPD, 1x ÖDP)
  • 6 refusals for different, mostly acceptable reasons (2x CSU, 2x AfD, 1x CDU, 1x SPD)

As you can see, the large majority of candidates never replied. That was to be expected. Still, it has been quite an interesting experience, interacting directly with people who may, in the near future, be called upon to represent my interests in the European Parliament.

I can only recommend contacting (some of) your candidates. Ask them for their opinion on a cause close to your heart, and maybe even have a (civil!) discussion on the matter if they see things differently. My experience has shown that, at least in the smaller parties, you can actually change someone’s opinion on some matters. And who knows, maybe the candidate will actually be elected to the European Parliament. And maybe, just maybe, your discussion will change their vote on a crucial issue. Wouldn’t that be worth the ten minutes it takes to write a mail?

“Crypto-Hypocrisy”, or: on what is wrong with the security community

I’ve been annoyed at some of the things in the computer science and, more specifically, computer security community for a long time, and decided to finally write them down. This has become quite a wall of text. Depending on how you read this, this may be a rant or a plea.

A few days ago, while browsing the website of a security conference (SEC 2014 in this case, but this applies to a lot of conferences), I became curious. Shouldn’t a conference focusing on “Applied Cryptography”, among other things, automatically forward me to the HTTPS version of its website? I changed the http:// to https://, and OH GOD WHAT THE HELL?

Interesting. A self-signed certificate, expired for more than three months, and with a Common Name of “ensa ident” (as opposed to, you know, the domain name it was supposed to protect).

Well, someone must’ve been sleeping, I figured, so I went to the contact page of the website, and OH MY GOD WHAT THE HELL?

Contact Information

A hotmail address. Not only that, but a hotmail address with no information about a PGP key for secure communication.

Now, some people may ask: “So what?” To which I reply: how can the security community as a whole condemn bad security practices and demand secure, end-to-end encrypted communication for everyone, if the organizers of a conference that attendees pay between 250 and 550 € to attend can’t get their shit together enough to at least provide a valid, well-formed SSL certificate for their website? Hell, I’m not even asking for one signed by a proper CA. I can live with a self-signed certificate, but at least put in the effort to have the CN match your domain name, and create a new certificate once the old one has expired.
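And to underline how little work I am asking for: a fresh self-signed certificate with a matching Common Name is a single openssl invocation (domain and file names are placeholders, obviously):

    # one self-signed certificate, valid for a year, CN matching the domain
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout conference.key -out conference.crt \
        -subj "/CN=conference.example.org"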

As for the PGP key: I can understand that conferences do not provide a PGP key when their contact address is actually a mailing list that forwards the message to multiple people (although it is beyond me why, in 2014, there is no program that decrypts all incoming messages and re-encrypts them with the keys of the recipients of the mailing list. Or, if there is such a program, why no one uses it). But this is a hotmail.fr address. This is bad on so many levels. A conference on “Information Security Education” can’t even afford its own email address?
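Such a decrypt-and-re-encrypt gateway is not rocket science, either. A rough sketch of the core idea using the python-gnupg wrapper; the key directory and the member list are placeholders, and a real gateway would of course also need to parse and reassemble the actual mails:

    # gateway.py -- decrypt mail sent to the list key, re-encrypt per member
    import gnupg

    gpg = gnupg.GPG(gnupghome="/var/lib/list-gpg")  # holds the list's key pair

    def reencrypt(ciphertext: str, member_fingerprints: list) -> list:
        """Return one freshly encrypted copy of the message per subscriber."""
        plaintext = gpg.decrypt(ciphertext)  # uses the list's private key
        if not plaintext.ok:
            raise ValueError("decryption failed: " + plaintext.status)
        return [str(gpg.encrypt(str(plaintext), fpr))
                for fpr in member_fingerprints]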

I regularly annoy companies by writing them emails closing with “P.S.: Have you ever considered adding the capability to receive encrypted emails to this address? [Link to a tutorial]”. Some ignore this, some make excuses. I only know of two companies that allow me to send them encrypted emails. One of them is my bank, who will then reply unencrypted with a full quote of my message, rendering the encryption worse than useless. The other is Uberspace.de, whose customer I am not, but who provide their key prominently on their website.

How can I keep a straight face demanding this of those companies if the people running our conferences are too lazy, or just plain don’t care enough about the ACTUAL TOPIC OF THEIR CONFERENCE, to take the 30 minutes to set something up? How can I keep a straight face if, until a few months ago, I could exchange encrypted emails with more of my parents (2) than of the computer scientists I regularly mailed with (1)?

The general reaction when I propose mail encryption to the average CS student is one of the following:

  1. I should totally do that, but it’s too much work
  2. I’m not writing anything secret, so why would I encrypt it?
  3. I don’t know anyone who is using mail encryption
  4. I’m not writing any mails anyway, I’m using Facebook to talk to other people.

To which I would reply, respectively:

  1. It’s 30 minutes of work, once, and then you can have it up and running until you reinstall your system. How is that “too much work”? Don’t you value your privacy enough to invest 30 minutes into protecting it?
  2. Because it is good practise to encrypt it. Because, even if you don’t write any secret letters, you would still not be happy to have other people read them (hopefully).
  3. Then be the first and pester your computer scientist friends. Take them to a crypto party. It’s gonna be fun.
  4. …Goodbye. *shake head, go away, loose faith in humanity*

It is not about the content needing to be hidden. It is not about keeping something from the NSA (although that’s an awesome side effect). It is about making encrypted communication a social norm, at least within the computer science community.

At 30c3, I received three business cards. Two of them were from people working for the Tor Project (Roger Dingledine and Jacob Appelbaum). Both had their PGP fingerprints printed on their business cards. This is what I want to see: get away from “here’s my email address”, towards “here’s a way you can send me an encrypted message and be sure you reach me and no one else”.

The third business card was from a nice woman of maybe 50 years, with barely any background in computer science. She wanted to help an open source project, so she got a ticket to 30c3 and went to various assemblies and workshops (which, in itself, is pretty awesome, I might add). I recently sent her an email, signed with my PGP key, as I always do. I received an encrypted response, stating that she had just started using GnuPG and Enigmail, and asking if I would help her set up a laptop with Linux and full disk encryption.

If 50-something-year-old executive consultants can figure this out, why can’t the security community?