If you have been following this blog, you may have noticed that I have an unfinished series on the topic of crypto regulation (part 1, part 2, part 3). I’ve been meaning to finish it up, but life got in the way and I mostly forgot about it. Now it has become relevant again, as Great Britain has just introduced new draft legislation that would force companies to assist the government in removing encryption.
Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels going dark, these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates.
We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today’s Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse forward secrecy design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.
So, yeah. The cryptography equivalent of the Avengers has gone ahead and written this stuff better than I could have. So I will not be finishing my own series, and instead encourage you to read the article.
However, if it turns out that there is interest in me continuing this series, I may make some time to finish the final two articles. So, if you want me to continue, drop me a line in the comments or on Twitter, if that’s your thing.
Secrets are lies
Sharing is caring
Privacy is theft
— The Circle, Dave Eggers
The Circle is scary. Not scary in the sense of a thriller or horror novel. Scary in the sense in which Brave New World is scary: by showing the limitless capacity of humans to ignore the consequences of their actions, as long as they think it is for a higher goal (or even if it’s only for their own amusement).
The book follows the story of Mae Holland, freshly hired at The Circle, a social media / search / technology giant which has revolutionized the web with some sort of universal, single-identity system and assorted services (and is quite obviously a reference to Google). The story covers the development of both Mae as a person and the Circle as a company, which slides ever deeper into a modus operandi that would make everyone except the most radical post-privacy advocates flinch (the quote above encapsulates the views of the company quite well).
Over the course of the book, The Circle invents more and more technologies that are, on the surface, extremely useful and could change the world for the better. However, each invention further reduces the personal privacy of everyone and builds an ever-growing net of tracking and surveillance over the planet.
I don’t want to go into too much detail here (I hate spoilers), but over the course of the book, I found myself despising Mae more and more. For me, the character embodies everything that is wrong with social networks and with general trends on the internet. This book has it all: thoughtless acts without regard for the privacy of others, zero reflection about the impact her actions may have, and the problem of slacktivism:
Below the picture of Ana María was a blurry photo of a group of men in mismatched military garb, walking through dense jungle. Next to the photo was a frown button that said “We denounce the Central Guatemalan Security Forces.” Mae hesitated briefly, knowing the gravity of what she was about to do—to come out against these rapists and murderers—but she needed to make a stand. She pushed the button. […] Mae sat for a moment, feeling very alert, very aware of herself, knowing that [she had] possibly made a group of powerful enemies in Guatemala.
I really enjoyed the theme of the book (as in: I was terrified of the future it portrayed. I’m a sucker for a good dystopia). However, the book suffers a little from the writing itself. I can’t put my finger on it, but something about the writing seemed off to me. The book also suffers from having a main character who, for me, was clearly an antagonist.
It serves as a warning not to blindly accept every new technology and to critically ask how it could be misused and how your use of it may impact others, from the small things like talking about others on social networks (others who may wish to keep certain things private) to the idea of filming your own life.
A young man, seeming too young to be drinking at all, aimed his face at Mae’s camera. “Hey mom, I’m home studying.” A woman of about thirty, who may or may not have been with the too young man, said, walking out of view, “Hey honey, I’m at a book club with the ladies. Say hi to the kids!”
The Circle is not a happy book. Even though it has its problems, you should read it, because it gives some perspective on the direction our use of technology is taking. Read it, and think about it when you use your social networks or read about the newest products.
The Circle is the most terrifying dystopia of all: The one where many people would say “Dystopia? What dystopia? That sounds awesome, I’d love to live there”. And that, more than anything else, is why it terrifies me.
Today is Valentine’s Day, but, much more importantly, it is also I Love Free Software Day. That’s the day we show appreciation to all the Free Software (Free as in Freedom, but also often Free as in beer) developers out there, who are working, mostly in their spare time, mostly unpaid, to make the tools we use every day work. This blog post is an experiment. I will try to list all the free software I use every day, and I will probably fail, just because it is all around us.
In the morning, when I start my laptop, it boots into Linux Mint, which uses a plethora of Free libraries and programs like GRUB, the Linux kernel, and so on. After logging in, Tor, Pidgin with the OTR plugin, and the OwnCloud client automatically start up. I launch Thunderbird (which encrypts emails using GnuPG) and Firefox (with a number of plugins) and go about my day.
When I want to make short notes, I use gedit. For slightly longer texts, I have LibreOffice. And if I have to take notes or write texts in university, I use LaTeX with Texmaker and a bunch of extra packages. When editing images, I use Gimp or Inkscape, and for audio, there’s Audacity. I watch videos using VLC and listen to music with Banshee. My passwords are securely stored using the KeePassX password manager, and the whole disk is encrypted with ecryptfs.
When I develop software, I use Git for versioning and, if possible, Python for my programming language (my code editor, Sublime Text, is one of the few pieces of non-free software I run on a regular basis). I also use a large number of command line utilities like grep, find, htop, wget, netcat, and so on.
Every single piece of software I just mentioned is Free Software (and I am sure I have forgotten some of the tools I use every day). Every single one of them is provided to me (and the rest of the world) free of charge. They are supported by communities of developers working on them in their spare time. In return, they expect nothing from us.
This is awesome.
These people do not get enough appreciation for their hard work, let alone donations. The developer of the single most important piece of technology for encrypted emails, used by millions, almost went broke. This should be an indicator that something is wrong. Free Software is the only thing I know of where people seem to think that it is okay to use it, expect it to work and be updated, and not give anything in return.
It’s true that many developers work on Free Software because they believe in it, and because they like doing it, and that they don’t expect to be able to make a living from it. But you cannot expect people to give their full attention to these projects if they need to worry about getting food on the table. These people are donating their free time to these projects, and they deserve our thanks and support, instead of being sneered at because they would like to continue doing what they are doing and still be able to eat.
So, I love Free Software, and I try to do my part as best I can. There are too many great projects to donate to them all, but I regularly donate to some of them. If I encounter bugs, I report them. If I write my own software, I put it online under a Free Software license so others can benefit from it (as much as you can benefit from my terrible code, at least). And I try to raise awareness.
Free software is awesome. Let’s help keep it awesome by keeping the developers motivated. Be it with a donation, a contribution of code, or even just a quick “thank you”. Give back to the community that gives you the things you use every day. To the developers of Free Software, wherever you may be: Thank you. Thank you for being awesome.
This is part 3 of a series on the crypto regulations proposed by Cameron, Obama and others. Part 1 explains what this is all about and discusses different types of cryptography. Part 2 discusses the different ways transport encryption could conceivably be regulated. I encourage you to read those two first in order to better understand this article.
We are in the middle of a thought experiment: What if we were tasked with coming up with a regulation that allows us (the European law enforcement agencies) to decrypt encrypted messages while others (like foreign intelligence agencies) are still unable to do so. In the previous installment, we looked at how transport encryption could be regulated. We saw that there were a number of different ways this could be achieved:
We have also seen where each of these techniques has its problems, and how none of them achieved the goal of letting us decrypt information while others cannot. Let’s see if we have better luck with the regulation of end-to-end encryption (E2EE).
Regulating end-to-end encryption
If we look at the history of cryptography, it turns out that end-to-end encryption is a nightmare to regulate, as the United States found out during the first “Crypto Wars”. It is practically impossible to control what software people run on their computers, as is regulating the spread of the software itself. There are simply too many computers and too many ways to get access to software to control them all.
But let’s leave aside the practical issues of enforcement (that’s going to be a whole post of its own) for now. There are still a number of additional problems we have to face: There are only two points in the system where we could gain access to the plain text of a message: The sender and the receiver. For example, even if a PGP-encrypted message is sent via Google Mail, we cannot gain access to the decrypted message by asking Google for it, as they don’t have it, either. We are forced to raid either the sender or the receiver to access the unencrypted contents of the message.
This makes it hard to collect even small amounts of these messages, and impossible to perform large-scale collection.1) And this is why end-to-end encryption is so scary to law enforcement and intelligence agencies alike: There could be anything in them, from cake recipes to terror plots, and there’s no way to know until you have decrypted them.
Given that pretty much every home computer and smartphone is capable of running end-to-end encryption software, the potential for unreadable communication is staggering. It is only natural that law enforcement and intelligence services alike are now calling for legislation to outlaw or at least regulate this threat to their investigatory powers. So, in our thought experiment, how could this work?
The regulation techniques from the previous part of this series mostly still apply, but the circumstances around them have changed, so we’ll have to re-examine them and see if they could work under these new circumstances. Also, given that end-to-end encrypted messages are almost always transmitted over transport-encrypted channels, a solution for the transport encryption problem would be needed in order to meaningfully implement any of these regulations.
We will look at the following regulation techniques:
Let’s take another look at each of these proposals and their merits and disadvantages for this new problem.
This proposal is actually surprisingly practical at first glance: Almost no corporations use end-to-end encryption (meaning that there would be next to no lobbying against such a proposal), comparatively few private persons use it (meaning there would be almost no resistance from the public), and it completely fixes the problem of end-to-end encryption. So, case closed?
That depends. Completely outlawing this kind of cryptography would not only leave the communication open for our own law enforcement, but also for foreign intelligence agencies. Recent reports (pdf) to the European Parliament’s Science and Technology Options Assessment Board (STOA) suggest that one of the ways to improve European IT security would be to foster the adoption of E2EE:
In this way E2EE offers an improved level of confidentiality of information and thus privacy, protecting users from both censorship and repression and law enforcement and intelligence. […] Since the users of these products might be either criminals or well-meaning citizens, a political discussion is needed to balance the interests involved in this instance.
As the report says, a tradeoff between the interests of society as a whole and law enforcement needs to be found. Outlawing cryptography would hardly constitute a tradeoff, and there are many legitimate and important uses for E2EE. For example, journalists may use it to protect their sources, human rights activists use it to communicate with contacts living under oppressive regimes, and so on. Legislation outlawing E2EE would make all of this illegal and would result in journalists not being able to communicate with confidential sources without fear of revealing their identity.
In the end, a tradeoff will have to be found, but that tradeoff cannot be to completely outlaw the only thing that lets these people do their job with some measure of security, at least not if we want our society to still have adversarial journalism and human rights activists five years from now.
Mandating the use of weak or backdoored algorithms
This involves pretty much the same ideas and problems we have already discussed in the previous article. However, we also encounter another problem: As the software used for E2EE is used by many individuals on many different systems (as opposed to a comparatively small number of corporations managing large numbers of identical servers which are easy to modify) in many different versions, some of which are no longer being actively developed and many of which are open source and not maintained by EU citizens, mandating the use of specific algorithms would entail…
…forcing every developer of such a system to introduce these weak algorithms (and producing these updates yourself for those programs which are no longer actively maintained by anyone)
…forcing everyone to download the new versions and configure them to use the weak algorithms
…preventing people from switching back to more secure versions (although that is an issue of enforcement, which we will deal with later)
In practice, this is basically impossible to achieve. Project maintainers not living under the jurisdiction of the EU would refuse to add algorithms to their software that they know are bad, and many of the more privacy-conscious and tech-literate crowd would just refuse to update their software (again, see enforcement). Assuming that using any non-backdoored algorithms would be illegal, this would be equivalent to outlawing E2EE altogether.
In a globalized world, many people communicate across state boundaries. Such a regulation would imply forcing foreigners to use bad cryptography in order to communicate with Europeans (or possibly get their European communication partners in trouble for receiving messages using strong cryptography). In a world of global, collaborative work, you sometimes may not even know which country your communication partner resides in. The administrative overhead for everyone would be incredible, and thus people would either ignore the law or stop communicating with Europeans.
Additionally, software used for E2EE is also used in other areas: for example, many Linux distributions use GnuPG (a program for E2EE of emails) to verify that software updates have not been tampered with. Using bad algorithms for this would compromise the security of the whole operating system.
Again, it is a question of the tradeoff: Does having access to the communication of the general population justify putting all of this at risk? I don’t think so, but then again, I am not a Minister of the Interior.
Performing a Man-in-the-Middle-Attack on all / select connections
If a Man-in-the-Middle attack (short: MitM) on transport security is impractical, it becomes downright impossible for most forms of E2EE. To understand why, we need to look at the two major ways E2EE is done in practice: the public key model and the key agreement model.
In the public key model, each participant has a public and a private cryptographic key. The public key is known to everyone and used to encrypt messages, the private key is only known to the recipient and is used to decrypt the message. The public key is exchanged once and keeps being re-used for future communication. This model is used by GPG, among others.
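To make the public key model concrete, here is a toy RSA example in Python with deliberately tiny textbook primes. This is purely an illustration of the asymmetry between the public and private key; real deployments use keys of 2048 bits or more plus padding schemes, and nothing like this should ever be used for actual encryption.

```python
# Toy RSA with textbook-sized primes, to illustrate the public key model:
# anyone who knows (n, e) can encrypt, only the holder of d can decrypt.
p, q = 61, 53            # secret primes
n = p * q                # public modulus (3233)
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (3-argument pow needs Python 3.8+)

message = 42                       # a message, encoded as a number < n
ciphertext = pow(message, e, n)    # encryption with the public key
decrypted = pow(ciphertext, d, n)  # decryption with the private key

assert decrypted == message
```

The point of the model is visible even in the toy: the encryption key (n, e) can be published once and reused forever, while recovering d requires knowing the factorization of n.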
In the key agreement model, two communication partners establish a connection and then exchange a few messages in order to establish a shared cryptographic key. This key is never sent over the connection2) and is only known to the two endpoints of the connection, who can now use that key to encrypt messages between each other. For each chat session, a new cryptographic key is established, with long-term identity keys being used to ensure that we are still talking to the same person in the next session. Variations of this model are used in OTR and TextSecure, among others.
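The key agreement model can be sketched with a bare-bones Diffie–Hellman exchange. This is a minimal illustration only: the modulus is a toy choice, and a real protocol would additionally authenticate both public values with long-term identity keys (which is exactly the part a MitM attacker has to defeat).

```python
# Toy Diffie-Hellman key agreement: both sides end up with the same key,
# which is never itself sent over the wire.
import secrets

p = 2**127 - 1   # a Mersenne prime; real systems use standardized large groups
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral secret

A = pow(g, a, p)   # Alice sends this over the network
B = pow(g, b, p)   # Bob sends this over the network

# Each side combines its own secret with the other's public value:
alice_key = pow(B, a, p)   # (g^b)^a mod p
bob_key = pow(A, b, p)     # (g^a)^b mod p

assert alice_key == bob_key   # identical shared key, never transmitted
```

An eavesdropper sees only A and B, and computing the shared key from those is believed to be infeasible for well-chosen groups; this is the property the footnote below alludes to.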
So, how would performing a MitM attack work for each of these models? For the public key model, you would need to intercept the initial key exchange (i.e. the time where the actual keys are downloaded or otherwise exchanged) and replace the keys with your own. This is hard, for multiple reasons:
Many key exchanges have already happened and cannot be retroactively attacked
Replaced keys are easily detected if the user is paying attention, and the forged keys can subsequently be ignored.
Keys can be exchanged in a variety of ways, not all of which involve the internet
So, one would not only have to attack the key exchanges but also prevent the user from disregarding the forged keys and using other methods for the key exchange instead. If we’re going to do that, we may as well just backdoor the software, which would be far easier.
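The detection point above is worth making concrete: users can compare key fingerprints over an independent channel (in person, over the phone), and a replaced key produces a different fingerprint. The sketch below uses SHA-256 over placeholder key bytes; real tools fingerprint the actual public key material in essentially the same way.

```python
# Sketch: detecting a replaced key by comparing fingerprints out of band.
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Show only a short, human-comparable prefix, grouped for readability
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

real_key = b"alice-public-key"     # placeholder for real key material
forged_key = b"mitm-public-key"    # key substituted by the attacker

# Alice reads her fingerprint to Bob over the phone; Bob compares it with
# the fingerprint of the key he actually received. Any substitution shows.
assert fingerprint(real_key) != fingerprint(forged_key)
```

This is why a MitM against the public key model only works against users who never verify fingerprints, and why the attacker must also suppress every alternative key exchange channel.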
Attacking a key agreement system wouldn’t work much better: We would have to intercept each key agreement message (which is usually itself encrypted using transport encryption) and replace the message with one of our own. These messages are also authenticated using the long-term identity keys of the communication partners, so we would either have to gain access to those keys (which is hard) or replace them with our own (which is, again, easily detected).
So, while this may theoretically be possible, it is far from viable and suffers from the same issues of enforcement all the other proposals do.
Key escrow sounds like the perfect solution: Everyone has to deposit their keys somewhere where law enforcement may gain access to it. The exact implementation may vary, but that’s the general idea. So, what’s wrong with this idea?
First off, the same caveats as before apply: You are creating an interesting target for both intelligence agencies and criminals. In addition to that, this would only work for the public key model, where the same keys are used over and over again. In the key agreement model, new keys are generated all the time (and the old ones deleted), so a way would have to be found to enter these keys into an escrow system and retain them in case they are ever needed. This would quickly grow into a massive database of keys (many of which would be worthless as no crime was committed using them), which you would have to hang on to, just in case it ever becomes relevant.
Key disclosure laws
The same theme continues here: key disclosure laws (if they are even allowed under European law) may be able to compel users to disclose their private keys, but users cannot disclose keys that no longer exist. Since the keys used in key agreement schemes are usually deleted after less than a day (often after only minutes), the user would be unable to comply with a key disclosure request from law enforcement, even if they wanted to. And since it is considered best practice not to keep logs of encrypted chats, the user would also be unable to provide law enforcement with a record of the conversation in question.
Changing this would require massive changes to the software used for encrypted communication, encountering the same problems we already discussed when talking about introducing backdoors into software. So, this proposal is pretty much useless as well.
The “Golden Key”
The term “Golden Key” refers to a recent comment in the Washington Post, which stated:
A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.
The article was an obvious attempt to propose the exact same idea (a backdoor) using a different, less politically charged word (“Golden key”), because any system that allows you to bypass a protection mechanism is, by its very nature, a backdoor. But let’s humor them and use the term, because a “golden key” sounds fancy, right? So, how would that work?
In essence, every piece of encryption software would have to be modified in a way that forced it to encrypt every message with one additional key, which is under the control of law enforcement. That way, the keys of the users can stay secret, and only one key needs to be kept safe to keep the communication secured against everyone else. Problem solved?
Not so much. We still have the problem of forcing software developers to include that backdoor into their systems. Then there’s the problem of who is holding the keys. Is there only one key for the whole EU? Or, to put it another way, do you really believe that there is no industrial espionage going on between European Countries?
Or do we have keys for each individual country? And how does the software decide which key to encrypt the data with in that case? What if I talk to someone in France? How is my software supposed to know that and encrypt with both the German and the French key? How secure is the storage of that key? Again, we have a single point of failure: if someone gains access to one of these master keys, they can unlock every encrypted message in that country.
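To make the single point of failure concrete, here is a deliberately insecure toy showing the structure of such a scheme: every per-message key is wrapped once for the recipient and once for the government. The XOR "wrapping" stands in for real public-key encryption and provides no security; the point is only the shape of the data flow.

```python
# Toy "golden key" structure (NOT real cryptography): each message key is
# stored twice, once wrapped for the recipient, once for law enforcement.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

recipient_key = secrets.token_bytes(16)
golden_key = secrets.token_bytes(16)     # held by the government, forever

message_key = secrets.token_bytes(16)    # fresh key for one message
wrapped_for_recipient = xor(message_key, recipient_key)
wrapped_for_government = xor(message_key, golden_key)

# Either party recovers the same message key -- and so would anyone who
# ever steals the golden key, for every message ever sent under it.
assert xor(wrapped_for_recipient, recipient_key) == message_key
assert xor(wrapped_for_government, golden_key) == message_key
```

The structural problem is visible immediately: the golden key accumulates value with every message sent, which is exactly what makes its storage location such an attractive target.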
There are a lot of questions that would have to be answered to implement this proposal, and I am pretty sure that there is no satisfying solution (If you think you have one, let me know in the comments).
We have looked at five proposals for regulating end-to-end encryption and have found all of them lacking in terms of their effectiveness and having plenty of harmful side effects. All of these proposals reduce the security of everyone’s communication, not to mention the toxic side effects on basic human rights that come with such measures.
There must be a tradeoff between security and privacy, but that tradeoff should not be less security for less privacy, and any attempt at regulating encryption we have looked at is exactly that: It makes everyone less secure and, at the same time, harms their privacy.
One issue we haven’t even looked at yet is how to actually enforce any of these measures, which is another can of worms entirely. We’re going to do that next, in the upcoming fourth installment of this series.
As before, thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are my own.
This works due to mathematical properties of the messages we send, but for our purposes, it is enough to know that both parties will have the same key, while no one else can easily compute the same key from the values sent over the network.
This is part 2 of a series on the crypto regulations proposed by Cameron, Obama and others. Part 1, explaining what it is all about and describing different types of cryptography, can be found here.
The declared goal of crypto regulation is to be able to read every message passing through a country, regardless of who sent or received it and what technology they used. Regular readers probably know my feelings about such ideas, but let’s just assume that we are a member of David Cameron’s staff and are tasked with coming up with a plan on how to achieve this.1)
We have to keep in mind the two types of encryption we have previously talked about, transport and end-to-end encryption. I will discuss the problems associated with gaining access to communication secured by the respective technologies, and possible alternatives to regulating cryptography. Afterwards, I will look at the technological solution that could be used to implement the regulation of cryptography. This part will be about transport encryption, while the next part will deal with end-to-end encryption.
Regulating transport encryption
As a rule, transport encryption is easier to regulate, as the number of parties you have to involve is much lower. For instance, if you are interested in gaining access to the transport-encrypted communication of all Google Mail users, you only have to talk to Google, and not to each individual user.
For most of these companies, it probably wouldn’t even be necessary to regulate the cryptography itself, they could just be (and are) required to hand over information to law enforcement agencies. These laws could, if necessary, be expanded to include PRISM-like full access to the data stored on the servers (assuming this is not already common practice). Assuming that our goal really only is to gain access to the communication content and metadata, this should be enough to satisfy the needs of law enforcement.
Access to the actual information while it is encrypted and flowing through the internet is only required if we are interested in more than the data stored on the servers of the companies. An example would be the passwords used to log into a service, which are transmitted in an encrypted form over the internet. These passwords are usually not stored in plain text on the company servers. Instead, they store a so-called hash of the password which is easy to generate from the password but makes it almost impossible to restore the password from the information stored in the hash.2) However, if we were able to decrypt the password while it is sent over the internet, we would gain access to the account and could perform actions ourselves (e.g. send messages). More importantly, we could also use that password to attempt to log into other accounts of the suspect, potentially gaining access to more accounts with non-cooperating (foreign) companies or private servers.
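The password-hashing point can be sketched briefly. The example below uses PBKDF2 from Python's standard library as one common construction; parameter choices are illustrative, not a recommendation.

```python
# Sketch: servers store a salted, slow hash of the password, not the
# password itself, so server data alone does not reveal the password.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 runs many hash iterations to make guessing expensive
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)

# Login check: re-hash the submitted password, compare in constant time
assert hmac.compare_digest(stored, hash_password("correct horse battery staple", salt))
assert not hmac.compare_digest(stored, hash_password("wrong guess", salt))
```

Because the hash is deliberately hard to invert, intercepting the password in transit (before hashing) yields strictly more than raiding the server does, which is the asymmetry the paragraph above describes.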
So, assuming we want that kind of access to the communication, we’re back to the topic of regulating transport encryption. The different ways this access could be ensured are, in rising order of practicality:
Let’s take a look at each of these proposals, their merits and their disadvantages.
Outlawing cryptography has the advantage of simplicity. There is no overhead of backdooring implementations, implementing key escrow, or performing active attacks. However, that is just about the only advantage of this proposal.
Cryptography is fundamental to the way our society works, and the modern information age would not be possible without it. You are using cryptography every day: when you get your mail, when you log into a website, when you purchase stuff online, even on this very website, your connection is encrypted.
It gets even worse for companies. They rely on their information being encrypted when communicating with other companies or their customers; otherwise their trade secrets would be free for the taking. Banks would have to cease offering online banking. Amazon would probably go out of business. Internet crime would skyrocket as people hijacked unprotected accounts and stole private and corporate information.
So, given the resistance any such proposition would face, outlawing cryptography as a whole isn’t really an option. An alternative would be to just outlaw it for individuals, but not for corporations. That way, the banks could continue offering online banking, but individuals would no longer be allowed to encrypt their private information.
Such a law would technically be possible, but would raise a lot of problems in practice. Aside from being impossible to enforce, some existing programs can only save their data in an encrypted form (e.g. banking applications). Some people have devices they use both privately and for their job, and their employer may require them to encrypt the device. There are a lot of special cases that would cause problems in the actual implementation of this law, not to mention the possible damage caused by criminals gaining access to unencrypted private information. There would definitely be a lot of opposition to such a law, and the end result would be hard to predict.
Mandating the use of weak or backdoored algorithms
In this case, some party would come up with a list of ciphers which are considered secure enough against common “cyber criminals”, while offering no significant resistance to law enforcement or intelligence agencies. This could be achieved either through raw computational power (limiting the size of encryption keys to a level where all possibilities can be tried in a reasonable timeframe, given the computational resources available to law enforcement / intelligence agencies) or through the introduction of a backdoor in the algorithm.
In cryptography, a backdoor could be anything from encrypting the data with a second key, owned by the government, to make sure that they can also listen in, to using weak random numbers for the generation of cryptographic keys, which would allow anyone knowing the exact weakness to recover the keys much more quickly. This has, apparently, already happened: it is suspected (and has pretty much been proven) that the NSA introduced backdoors into the Dual EC DRBG random number generator, and it is alleged that they paid off a big company (RSA) to then make this algorithm the standard random number generator in its commercial software.
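The weak-random-numbers idea can be made concrete with a hypothetical flawed generator. The sketch below is not how Dual EC DRBG works; it only illustrates the general principle that a key drawn from a generator with little hidden state can be recovered by enumerating that state.

```python
# Sketch: a "128-bit" key generated from a PRNG seeded with only 16 bits
# of entropy. Anyone who knows the flaw enumerates all seeds.
import random

def weak_keygen(seed: int) -> int:
    rng = random.Random(seed)      # fully deterministic given the seed
    return rng.getrandbits(128)    # looks like a 128-bit key, isn't

victim_key = weak_keygen(31337)    # victim's "random" key

# The attacker tries every possible seed: 65536 attempts, not 2**128.
recovered_seed = next(s for s in range(2**16) if weak_keygen(s) == victim_key)
assert weak_keygen(recovered_seed) == victim_key
```

The asymmetry is the whole point of such a backdoor: to outsiders the key space looks enormous, while anyone who knows the weakness searches a tiny space instead.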
The problem with backdoors is that once they are discovered, anyone can use them. For example, if we mandated that everyone use Dual EC DRBG for their cryptographic functions, not only we, but also the NSA could decrypt the data much more easily. If we encrypt everything to a second key, then anyone in possession of that key could use it to decrypt the data, which would make the storage location of the key a very attractive target for foreign spies and malicious hackers. So, unless we want to make the whole system insecure to potentially anyone, and not just to us, backdooring the cryptography is a bad idea.
The other option we mentioned was limiting the size of cryptographic keys. For example, we could mandate that certain important keys may only use key sizes of up to 768 bits, which can be cracked within a reasonable timeframe given sufficient computing power. But once again, we encounter the same problem: if we can crack the key, other organizations with comparable power (the NSA, the KGB, the Chinese Ministry of State Security, …) can do the same.
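To see why a capped key size is no protection against a determined adversary, here is a toy sketch. The 16-bit key and the XOR “cipher” are deliberately silly inventions of mine for illustration; nothing here resembles real cryptography. With a small enough keyspace and a known plaintext fragment, the attacker simply tries every key:

```python
import hashlib

def keystream(key: int, length: int) -> bytes:
    # Derive a toy keystream by hashing the key; NOT a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key.to_bytes(4, "big") + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: int, data: bytes) -> bytes:
    # XOR is its own inverse, so this function both encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

def brute_force(ciphertext: bytes, known_prefix: bytes, key_bits: int):
    # With a capped key size, trying every possible key always terminates quickly.
    for key in range(2 ** key_bits):
        if encrypt(key, ciphertext)[: len(known_prefix)] == known_prefix:
            return key
    return None

secret_key = 31337  # fits in 16 bits
ct = encrypt(secret_key, b"ATTACK AT DAWN")
print(brute_force(ct, b"ATTACK", 16))  # 31337, recovered in well under a second
```

Real key sizes are vastly larger, but the principle scales: whoever can afford enough hardware to search the mandated keyspace gets everyone's data.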
Also, because the computational power of computers is still increasing every year, it may be that in a few years, a dedicated individual / small group could also break encryption with that key length. This could prove disastrous if data that may still be valuable a decade later is encrypted with keys of that strength, e.g. trade secrets or long-term plans. Competitors would just have to get a hold of the encrypted data and wait for technology to reach a point where it becomes affordable to break the encryption.
So, mandating the use of weak or backdoored cryptography would make everyone less secure against intelligence agencies and quite possibly even against regular criminals or corporate espionage. In that light, this form of regulation probably involves too much risk for too little reward (cracking these keys still takes some time, so it cannot really be done at a large scale).
Performing a man-in-the-middle attack on all / select connections
A man-in-the-middle (MitM) attack occurs when one party (commonly called Alice) wants to talk to another party (Bob), but the communication is intercepted by someone else (Mallory), who then modifies the data in transit. Usually, this involves replacing transmitted encryption keys with others in order to be able to decrypt the data and re-encrypt it before sending it on to the destination (the Wikipedia article has a good explanation). This attack is usually prevented by authenticating the data. There are different techniques for that, but most actual communication between human beings (e.g. eMail transfer, logins to websites, …) is protected using SSL/TLS, which uses a model involving Certification Authorities (CAs).
In the CA model, a number of organizations are trusted to verify the identity of people and organizations. You can apply to one of them for a digital certificate, which confirms that a certain encryption key belongs to a certain website or individual. The CA is then supposed to verify that you are, in fact, the owner of said website, and issue you a certificate file that states “We, the certification authority xyz, confirm that the cryptographic key abc belongs to the website blog.velcommuta.de”. Using that file and the encryption key, you can then offer web browsers a way to (more or less) securely access your website via SSL/TLS. The server will send its encryption key and the certificate confirming that this key is authentic to clients, who can then use that key to communicate with the server.3)
The problem is that every certification authority is trusted to issue certificates for every website, and no one can prevent them from issuing a false certificate (e.g. confirming that key def is a valid key for my website). A man-in-the-middle could then use such a certificate to hijack a connection, replace my cryptographic key with their own and listen in on the communication.
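The key-replacement trick can be made concrete with a toy Diffie-Hellman exchange (the parameters here are made up for illustration and far too weak for real use): Mallory swaps in her own public keys and ends up sharing one secret with Alice and a different one with Bob, neither of whom notices anything without certificates to check against.

```python
import secrets

# Toy Diffie-Hellman parameters (far too small for real use).
P = 2**127 - 1  # a Mersenne prime
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Alice and Bob generate keypairs...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# ...but Mallory sits on the wire and substitutes her own public key for theirs.
m_priv, m_pub = dh_keypair()

# Alice believes m_pub is Bob's key; Bob believes m_pub is Alice's.
alice_secret = pow(m_pub, a_priv, P)  # actually shared with Mallory, not Bob
bob_secret = pow(m_pub, b_priv, P)

# Mallory derives both secrets and can decrypt and re-encrypt everything in transit.
mallory_with_alice = pow(a_pub, m_priv, P)
mallory_with_bob = pow(b_pub, m_priv, P)

assert alice_secret == mallory_with_alice
assert bob_secret == mallory_with_bob
assert alice_secret != bob_secret  # Alice and Bob never actually agreed on a key
```

This is exactly the gap that certificates are supposed to close, which is why a CA willing to issue false certificates is so dangerous.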
Now, in order to get into every (or at least every interesting) stream of communication, we would need two things:
A certification authority that is willing (or can be forced) to give us certificates for any site we want
The cooperation (again, voluntary or forced) of internet providers to perform the attack for us
Both of these things can be written into law and passed, and we would have a way to listen in on every connection protected by this protocol. However, there are a few problems with that idea.
One problem is that not all connections use the CA model, so we would need to find a way to attack other protocols as well. These protocols are mostly unimportant for large-scale communication like eMail, but become interesting if we want to gain access to specialized services or specific servers.
The second problem is that some applications do additional checks on the certificates. They can either make sure that the certificate comes from a specific certification authority, or they could even make sure that it is a specific certificate (a process called Certificate Pinning4)). Those programs would stop working if we started intercepting their traffic.
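What such a pinning check boils down to can be sketched in a few lines (the “certificates” below are just placeholder byte strings of my own, not real DER data): the app compares a hash of whatever certificate the connection presents against the one fingerprint it shipped with.

```python
import hashlib

# The app ships with the SHA-256 fingerprint of the one certificate it trusts.
PINNED_FINGERPRINT = hashlib.sha256(b"...the legitimate certificate...").hexdigest()

def connection_allowed(presented_cert: bytes) -> bool:
    # Even a certificate that a CA would call "valid" fails the pin check
    # if it is not the exact certificate the app expects.
    return hashlib.sha256(presented_cert).hexdigest() == PINNED_FINGERPRINT

print(connection_allowed(b"...the legitimate certificate..."))  # True
print(connection_allowed(b"...a MitM certificate..."))          # False
```

A state-issued false certificate passes every CA check, but not this one, so pinned applications simply stop connecting when intercepted.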
The third problem is that it creates a third point at which connections can be attacked by criminals and foreign intelligence agencies. Usually, they would have to attack either the source or the destination of a connection in order to gain access to the communication. Attacking the source is usually hard, as that would be your laptop, and there are an awful lot of personal computers which you would have to attack in order to gain full access to all communication that way.
Attacking the destination is also hard, because those are usually servers run by professional companies who (hopefully) have good security measures in place to prevent those attacks. It is probably still possible to find a way in if you invest enough effort, but it is hard to do at scale.
However, if you introduce a few centralized points at which all communication flowing through the network of an internet operator is decrypted and re-encrypted, you also create one big, juicy target, because you can read all of those connections by compromising one server (or at least a much smaller number of servers than otherwise). And experience has shown that for juicy targets like that, intelligence agencies are willing to invest a lot of effort.
So, performing MitM attacks on all connections would not work for all types of connections, it would not work for all devices, and it would create attractive targets for hostile agencies seeking access to a large percentage of formerly secured traffic. That does not seem like a good trade to me, so let’s keep looking for alternatives.
Key escrow
Key escrow (sometimes called a “fair” cryptosystem by proponents and key surrender by opponents) is the practice of keeping the cryptographic keys needed to decrypt data in a repository where certain parties (in our case, law enforcement agencies) may gain access to them under certain circumstances.
The main problem in this case is finding an arrangement where the keys are stored in a way that lets only authorized parties access them. Assuming we want to continue following a system with judicial oversight, that would probably mean that the escrow system could only be accessed with a warrant / court order. It is hard to enforce this using technology alone, and systems involving humans are prone to abuse and mistakes. However, with a system as security critical as a repository for cryptographic keys, any mistake could prove costly, both in a figurative and a literal sense.
Then there is the problem of setting the system up. Do you want a central European repository? A central repository for each country? Will every server operator be required to run escrow software on their own server? Each of these options has its own advantages and drawbacks.
A European repository would mean less administrative effort overall, but it would create a single point of failure, which, when compromised, would impact the internet security of the whole EU. As with the issue of man-in-the-middle attack devices, history has shown that foreign agencies can and will go to a lot of effort to compromise such repositories. A central European repository would also assume that European countries do not spy on each other, which is a naive assumption.
Country-wide repositories fix the last problem, but still suffer from the others. They are attractive targets for both foreign intelligence agencies and cybercriminals.
Individual repositories face the problem of compatibility (there are a LOT of different operating systems and versions running on servers). They are less centralized, which is good (the effort to break into them increases)5), but they also imply that law enforcement would have to be able to electronically retrieve the key on demand. If someone knew that the police were onto them, they could thus disable the software or even destroy the key and server in order to prevent the police from retroactively decrypting potential evidence they had already captured.
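For completeness: one technical mitigation that comes up in escrow discussions is splitting each escrowed key between several independent parties (say, a repository and a court), so that compromising any single location reveals nothing. Here is a minimal sketch of my own using XOR secret sharing, a simplified stand-in for the real threshold schemes such proposals envision:

```python
import secrets

def split_key(key: bytes):
    # XOR secret sharing: each share on its own is indistinguishable from noise.
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(16)
a, b = split_key(key)  # e.g. one share per independent institution

assert recombine(a, b) == key
assert a != key and b != key  # neither share alone reveals the key
```

Splitting raises the cost of an attack, but it does not remove the fundamental problem: the parties holding the shares can still be compromised, coerced, or simply cooperate.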
Again, we have encountered administrative problems and important security aspects that make this option problematic at best. So, why don’t we take a look at how things are done right now in Great Britain and see whether it would make sense to expand this law to the rest of Europe.
Requiring an individual to surrender keys would probably be in violation of the right to remain silent (although there are different opinions on that). Any such law would almost certainly be annulled by the Court of Justice of the European Union, as it did with the Data Retention Directive.
However, such a law could conceivably be used to compel companies or witnesses to disclose encryption keys they have access to. These laws exist in some European countries, and could be expanded to all of Europe. It would remain to be seen what the European Court of Justice would think of that, as such a law would definitely be challenged, but the potential of a law being annulled by the ECJ has not prevented the European parliament from passing them in the past.
There is another, more technical concern with this: more and more websites employ cryptographic techniques that ensure a property called (perfect) forward secrecy, or (P)FS for short. This ensures that even if an encrypted conversation is eavesdropped on and recorded, and even if the encryption keys are surrendered to law enforcement afterwards (or stolen by criminals), they will be unable to decrypt the conversation. The only way to eavesdrop on this kind of communication is to perform an active man-in-the-middle attack while in possession of a valid key.
This means that even if law enforcement has a recording of evidence while it was being transmitted, and even if they could force someone to give them the relevant keys, they would still be unable to gain access to said evidence. This technology is slowly becoming the standard, and the percentage of connections protected by it will only grow, meaning that laws requiring the disclosure of keys after the communication has taken place will become less and less useful over the coming years.
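The core idea of forward secrecy can be sketched with a toy Diffie-Hellman exchange (the parameters are far too small to be secure; this is an illustration, not an implementation): both sides use fresh, throwaway key material for every session, so there is no long-term key whose later surrender would unlock old recordings.

```python
import secrets

# Toy parameters; real systems use vetted groups or elliptic curves.
P = 2**127 - 1
G = 5

def session_key() -> int:
    # Both sides generate fresh ("ephemeral") private values for every session
    # and throw them away afterwards. Simulated here in one function.
    a = secrets.randbelow(P - 2) + 1
    b = secrets.randbelow(P - 2) + 1
    shared = pow(pow(G, a, P), b, P)
    return shared  # a and b go out of scope here and are never stored anywhere

# Two sessions between the same two parties yield unrelated keys:
k1 = session_key()
k2 = session_key()
assert k1 != k2
```

An eavesdropper who records both sessions and later obtains the server's long-term certificate key still has neither `k1` nor `k2`, because those values were never written down by anyone.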
We have taken a look at five different proposals for regulating transport security, and have found that each is either extremely harmful to the security of the European internet or ineffective at providing access to encrypted communication. Each of the proposals also holds an enormous potential for abuse from governments and intelligence services.
Again, this is a simplification. In the real world, there are important considerations, including the choice of the proper hash function and salting of the passwords, but that is out of the scope of this article.
Over the last few weeks, we’ve had a slew of politicians asking for new legislation in response to the Paris attacks. The proposed new regulations range from a new Data Retention directive (here’s my opinion on that) to an exchange of PNR (Passenger Name Records, data about the passengers of all flights within Europe) data within the EU.
By far the most worrying suggestion initially came from UK Prime Minister David Cameron, but was taken up by Barack Obama, the Counterterrorism Coordinator (pdf) of the EU, and the German Innenminister (Minister of the Interior), de Maizière: a regulation of encryption technology. The reasons they give are very similar: we need to be able (in really important cases, with proper oversight, a signed warrant, et cetera) to read the contents of any communication in order to “protect all of us from terrorists”.1)
The irony of justifying this with the Paris attack, a case where the terrorists were known to the relevant authorities and used unencrypted communication, is apparently lost on them.
In this series of posts, I will take a look at what crypto regulation means, how it could (or could not) work in practice, and why it appeals to pro-surveillance politicians either way.
An (extremely) brief primer on cryptography
Cryptography is used in order to hide information from unauthorized readers. In order to decrypt a message, you need three things: The encrypted message (obviously), knowledge about the algorithm that was used to encrypt it (which can often, but not always, be easily determined), and the cryptographic key that was used to do it. When we talk about crypto regulation, we usually assume that algorithm and message are known to whoever wants to read them, and the only missing thing is the key.
Cryptography is all around you, although you may not see it. In fact, you are using cryptography right now: This website is protected using SSL/TLS (that’s the https:// you see everywhere). You are also using it when you go to withdraw money from an ATM, when you send a mail, log into any website, and so on. All of those things use cryptography, although the strength (meaning how easy it is to break that cryptography) varies.
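To make the “message + algorithm + key” point concrete, here is a toy cipher of my own (purely illustrative; do not mistake it for real cryptography): the algorithm is completely public, yet without the right key the output stays noise.

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with a hash-derived keystream. Applying it twice with the same
    # key decrypts, since XOR is its own inverse.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

message = b"meet me at noon"
ciphertext = toy_encrypt(b"correct horse", message)

# The algorithm is public knowledge, but the key makes all the difference:
print(toy_encrypt(b"wrong guess", ciphertext))    # gibberish
print(toy_encrypt(b"correct horse", ciphertext))  # b'meet me at noon'
```

This is why crypto regulation debates revolve entirely around the keys: everything else is already in the attacker's hands.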
A (very) brief history of crypto regulation to date
Crypto regulation is hardly a new idea. For a long time, the export of encryption technology was regulated as a munition in the United States (the fight for the right to freely use and export cryptography was called the Crypto Wars and spawned some interesting tricks to get around the export restriction). This restriction was relaxed, but never completely removed (it is still illegal to export strong encryption technology into “rogue states” like Iran).
During the last 10 years or so, there haven’t really been serious attempts to limit the use and development of encryption technology2), leading to the rise of many great cryptographic tools like GnuPG, OTR, Tor and TextSecure.3) But now, there appears to be another push to regulate or even outlaw strong encryption.
What is “strong encryption”?
In cryptography, we distinguish between two4) different kinds of encryption: transport encryption and end-to-end encryption. Transport encryption means that your communication is encrypted on its way from you to the server, but decrypted on the server. For example, if you send a regular eMail, your connection to the server is encrypted (no one who is eavesdropping on your connection is able to read it), but the server can read your eMail without a problem. This type of encryption is used by almost every technology you use, be it eMail, chats (except for a select few), or telephony like Skype.
The major drawback of transport encryption is that you have to trust the person or organization operating the server to not look at your stuff while it is floating around on their server. History has shown that most companies simply cannot be trusted to keep your data safe, be it against malicious hackers (see Sony), the government (see PRISM), or their own advertising and analytics desires (see Google Mail, Facebook, …).
The alternative is end-to-end encryption. For this, you encrypt your message in a way that only allows the legitimate receiver to decrypt it. That way, your message cannot be read by anyone except the legitimate receiver.5) The advantage should be obvious: You can put that message on an untrusted server and the operators of said server cannot read it.
The drawback is the logistics: The recipients need to have their cryptographic keys to decrypt the message, which can be a hassle if you have a lot of devices. The key can also be stolen and used to decrypt your messages. For some usage scenarios like Chats, there are solutions like the aforementioned OTR and TextSecure (which you should install if you own an Android phone), but there is no such solution for eMails. End-to-End-Encryption also does not protect the metadata (who is talking to whom, when, for how long, et cetera) of your messages, only the contents.
When politicians are talking about “strong encryption”, they are probably referring to end-to-end encryption, because that data is much harder to obtain than transport-encrypted data, which can still be seized on the servers it resides on. To read your end-to-end encrypted data, they would have to seize both the encrypted data and your encryption keys (and compel you to give them the passwords you protected them with), which is a lot harder to do.
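The difference between the two models can be sketched in a few lines, using a toy XOR construction of my own as a stand-in for real encryption: under transport encryption the server decrypts and can read the message, while under end-to-end encryption it only ever relays an opaque blob.

```python
import hashlib

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher (XOR with a hash-derived stream); applying it
    # twice with the same key decrypts. Illustration only.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

client_server_key = b"link-1"            # transport key: client <-> server
server_recipient_key = b"link-2"         # transport key: server <-> recipient
end_to_end_key = b"only-alice-and-bob"   # known to sender and recipient only

message = b"my private message"

# Transport encryption: the server decrypts and re-encrypts, so it sees plaintext.
at_server = xor_crypt(client_server_key, xor_crypt(client_server_key, message))
print(at_server)  # b'my private message' -- fully readable by the server

# End-to-end: the server only relays a blob it has no key for.
blob = xor_crypt(end_to_end_key, message)
relayed = xor_crypt(server_recipient_key, xor_crypt(server_recipient_key, blob))
print(relayed == blob)  # True -- but the server never saw the plaintext
```

Note that the relay still sees *that* a blob went from sender to recipient, and when; end-to-end encryption protects contents, not metadata.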
Now that we have a basic understanding of the different types of encryption used in the wild, we can talk about how to regulate them. This will be covered in part 2 of this series.
Thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are solely mine.
I dislike the term “terrorist” because it can be (and has been) expanded to include pretty much anyone you disagree with. However, for readability, I will use it in the connotation most used by western media, e.g. meaning the Islamic State, Al-Qaeda, et cetera.
As a result of the attacks on Charlie Hebdo in Paris, many politicians are once again calling for mandatory data retention laws.1) Ignoring the (frankly sickening) eagerness to exploit this tragedy for their own political goals without even having the decency to wait until the victims are buried, and leaving aside questions of the effectiveness of data retention in solving crimes (doubtful), the potential for abuse (high), the costs associated with it (impressive), and the compliance with basic European principles and human rights like the presumption of innocence (problematic at best), I would like to focus on a few (perhaps non-obvious) consequences of a new mandatory data retention law.
I am focusing on the situation in Germany, simply because I live here. Some of the problems are specific to Germany and its sometimes impressively stupid laws (like the infamous “Störerhaftung”, where the owner of an internet connection is responsible for any crimes committed via his/her connection, regardless of who actually committed them), but most should apply to just about any country.
Problem 1: It creates targets
Where there is data, there are so-called “security” services who are interested in using it. Having a country’s own so-called “security” service use the data is bad enough, given the track record of criminal behaviour2) and sloppy data security of most of these so-called “security” services. But it also creates targets for foreign intelligence services.
The NSA appears to be very fond of attacking such centralized data repositories. They have already demonstrated that they are perfectly willing to attack European carriers if they carry interesting data. And given their obsession with metadata, we are doing them a big favor by aggregating all the metadata ourselves and storing it in centralized locations3). The NSA just has to take the metaphorical can-opener to the networks of the ISPs (which, again, they seem to be perfectly willing to do) and query them as much as they like. Which brings us to the second problem:
Problem 2: It’s not “just metadata”
Proponents of surveillance will often cite the argument that “it’s just metadata”. This statement is wrong on so many levels that we would need an escalator to reach them all, so I will only mention a few of them.
Metadata can be used to construct stories that are “made up of facts, but not necessarily true”. To quote Jacob Appelbaum:
The data trail you leave behind tells a story about you, but not necessarily one that is true. Even if it’s made up of facts. For years the US government harassed me because they thought Bradley Manning, now Chelsea Manning, had given me documents. But that is not true. — Jacob Appelbaum
And finally, if metadata really was as useless as they make it sound, why would they spend so much time and effort collecting it?
Many people have written whole articles about this argument, so I will leave it at this.4)
Problem 3: Data retention leads to problems for small network providers
“Freifunk” is a (mostly German) initiative / movement / however you want to classify it. It involves people setting up WiFi routers in their homes and providing free and open internet access to everyone around them. Anyone can participate, and the decentralized, local communities have done many great things from covering entire small towns with their network to providing free internet access to refugees (German article).
Now, in Germany we have a law called “Störerhaftung”, which greatly discourages people from sharing their internet access, because they are responsible for any (perceived) crimes committed using their connection, no matter who actually committed the (perceived) crime. This law has an exception for internet service providers (because, understandably, the big telcos are not interested in being responsible for the things their customers do). Freifunk uses this part of the regulation by tunneling all traffic from Freifunk routers to one (or more) central gateways using VPNs before the traffic is sent into the internet proper. That way, they are classified as a small ISP and are exempt from the Störerhaftung.
However, by classifying itself as a small ISP, Freifunk communities may5) be forced to implement data retention themselves. This would put a major strain on the communities, as the additional costs for data retention and storage hardware would have to be financed somehow. As these communities aren’t really well-supplied with money as it is, this would greatly impact their ability to actually provide internet access to many disadvantaged people, not to mention the ideological problem of logging the connections of their users (most Freifunk operators strongly believe in privacy).
The same problem applies to all small internet providers. Small, regional ISPs with a few hundred customers, or universities providing internet access for student dorms, may (or may not) be subject to data retention laws, and they would all incur costs that would either force them out of service or force them to raise prices for their customers. Which, again, brings us to the next problem:
Problem 4: The monetary costs
If you think that the big internet service providers will let their bottom line suffer because of data retention laws, you obviously have not seen how they operate. The added costs for data retention hardware will either be paid by the customer (meaning you), or by the state and, by extension, the taxpayer (meaning you). In essence, you are forced to pay for your own surveillance and the reduction of your civil liberties. Speaking of which:
Problem 5: The potential for abuse
I am cheating a little here, because I told you that I would not be talking about this, but this is just too important to ignore. The data that is collected can be abused by pretty much any party:
Anyone with access to the data can use it for blackmail (“It would be a shame if your wife knew that you are talking to Ms. XYZ at 3 in the morning…”)
Business competitors with access to the connection logs of your company could infer information about your business strategy (“They sure have been looking at the website of that one company a lot lately…”)
It can also be used to infer private information like religious beliefs (are you visiting church websites?), medical conditions (visiting cancer information sites?), political views, social circles, …
It could be used to identify sources of journalists, clients of lawyers, patients of doctors, basically any form of confidential relationship
…I can do this all day long…
This obviously is already a problem, because many of the three-letter-agencies are already connecting all of this data (and, probably, for exactly these purposes). But by collecting this data at even more places, the problem only gets worse, because more and more corporations, agencies and individual people6) gain access to them.
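To illustrate how little “just metadata” hides, here is a toy sketch: the retained records below are hypothetical inventions of mine (no contents anywhere, only who contacted whom and when), and two lines of Python are enough to surface social circles and suspicious-looking patterns.

```python
from collections import Counter
from datetime import datetime

# Hypothetical retained records: (caller, callee, timestamp) -- no contents at all.
log = [
    ("alice", "oncology-clinic", "2015-01-05 09:12"),
    ("alice", "oncology-clinic", "2015-01-12 09:05"),
    ("alice", "insurance-agent", "2015-01-12 11:40"),
    ("bob",   "alice",           "2015-01-13 03:00"),
    ("bob",   "alice",           "2015-01-14 03:10"),
]

# Who talks to whom, and how often -- social circles fall out of one line:
print(Counter((src, dst) for src, dst, _ in log).most_common(2))

# Repeated calls to a clinic, contact at 3 a.m.: the "story" writes itself,
# whether or not it is true.
late_night = [(s, d) for s, d, t in log
              if datetime.strptime(t, "%Y-%m-%d %H:%M").hour < 5]
print(late_night)  # [('bob', 'alice'), ('bob', 'alice')]
```

Five records already suggest a medical condition and a relationship worth gossiping about; a year of records per citizen does far more.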
To sum things up, data retention laws…
…support local and foreign intelligence services in their dragnet surveillance tactics
…lead to big collections of sensitive information that can be abused
…endanger small Internet Service Providers and projects like Freifunk
…increase either your phone bill or your taxes
…go against the basic principles of any democratic country, such as the presumption of innocence
…have been shown to be practically useless for actually fighting crime, as demonstrated by the exact attacks that are used to justify new data retention laws: France already has laws for a data retention of 12 months, which failed to prevent the attack on Charlie Hebdo.7)
A call to action
In closing, I ask of you: contact your representatives, both in your country’s parliament and in the European Parliament. Tell them that more surveillance is the wrong answer. Tell them that instead of dismantling our democracy with more surveillance, we should retaliate with more democracy and openness.
But most importantly, make sure to actually tell it to them directly. Tweeting your opposition to something is one thing. Actually taking the 10 minutes it takes to write a (polite!) mail to your representative shows them that you do care, and it forces them to reply to you (or have one of their staffers do so). 8)
Now imagine if hundreds of people were to do the same thing. Imagine the effect of hundreds of well-written, polite (!) eMails arriving in the inboxes of all representatives, complete with sources for all of your claims. Imagine them having to find replies to all of those eMails, trying to defuse your worries. Now imagine hundreds of people replying to those messages, calling out flawed assumptions or evasive answers, (politely) demanding actual arguments, demanding sources for their claims.
Write those eMails. Be persistent. Be annoying. Stay polite. Perhaps you can help prevent another disastrous piece of surveillance legislation. Perhaps not. Perhaps it will pass in spite of all the protests. But at least you will have tried.
Thanks go out to niemalsnever and FreeFall for proofreading. Any remaining mistakes are my own.
The German “Verfassungsschutz” (“constitution protection agency” would be a rough translation) was found to have known about a German right-wing terror cell for years and to have actively prevented the police from arresting suspected or confirmed members of it. They also actively destroyed files and evidence when suspicion was cast on them. A German summary of the case can be found here.
The attackers had actually been known to the relevant authorities for quite some time, but due to insufficient capabilities for targeted surveillance, they could not be properly surveilled. Another reason why “more dragnet surveillance” is exactly the wrong thing to ask for right now.
I’ve recently had a long eMail exchange with my representative in the German parliament, and while she expertly managed to talk past my actual questions (a skill every politician seems to have mastered), at least I forced her to take 10 minutes out of her day to formulate a reply.
Over the last few years, there has been a disturbing trend of law enforcement agencies (both European and American) demonizing the Tor project and anonymity in general, and Tor hidden services specifically. Recently, during 31c3, Jacob Appelbaum (a Tor developer and generally awesome person) put out a call to the community to start conversations about anonymity in order to inform people about why anonymity is important and how it is useful not only to (perceived or actual) criminals, but also to regular people. This is my (public) contribution.
First, I will briefly explain how Tor in general and hidden services specifically work. If you are familiar with Tor and hidden services, feel free to skip ahead.
What is Tor?
“Tor” stands for “The Onion Router”. It is a program that can be used to browse the internet anonymously (the websites you visit cannot identify you unless you provide them with identifying information yourself, e.g. by logging in). It also hides which websites you are visiting from your internet company. This is achieved (slightly simplified) by sending your internet traffic through a number of servers all over the globe before delivering it to the website you are visiting.
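The “onion” in the name can be sketched in a few lines: the client wraps its message in one layer of encryption per relay, and each relay peels off exactly one layer. The XOR construction below is my own toy stand-in for the real per-relay ciphers, and the three relay keys are illustrative placeholders.

```python
import hashlib

def xor_layer(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a hash-derived stream. Applying it
    # twice with the same key removes the layer again.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

relays = [b"entry-key", b"middle-key", b"exit-key"]
message = b"GET / HTTP/1.1"

# The client wraps the message once per relay, innermost layer first...
onion = message
for key in reversed(relays):
    onion = xor_layer(key, onion)

# ...and each relay peels exactly one layer. No single relay sees both
# who you are and what you are requesting.
for key in relays:
    onion = xor_layer(key, onion)
print(onion)  # b'GET / HTTP/1.1'
```

The entry relay knows your address but sees only an encrypted blob; the exit relay sees the request but not who sent it. That separation is the whole point.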
Tor also supports a system called “hidden services“. A hidden service is a website (or any other type of service, like a mail or chat server) that can only be reached over the Tor network. When used properly, the server never knows the identity of users connecting to it, and the users never know the location of the server they are talking to.
The usual caveats apply: Tor cannot protect your identity if you use it incorrectly. For example, you will obviously not be anonymous if you log into Facebook via Tor. Read the warnings on the download site.
Why use Tor?
There are many reasons why you may want to use Tor, and the overwhelming majority of them do not involve anything that you may find questionable. For example, Tor is used…
…by dissidents who want to get around state censorship (e.g. in China, Syria, …)
…by whistleblowers and journalists alike to protect themselves and their sources
…by privacy-conscious people who want to avoid the omnipresent tracking on many websites
Yes, there are people who are using Tor to hide their identities when extorting money, or to buy and sell drugs. It is in the nature of an anonymity system that it is impossible to prevent malicious use while still allowing those with “legitimate” (however you would define that) interests to use it. In the end, it all comes down to a tradeoff between the good and the bad that Tor does. How many drug smuggling rings equal one Edward Snowden? How many Chinese dissidents equal one criminal using Tor to extort money?
In my personal opinion, Tor does more good than it does bad. You may think differently. Just keep in mind that Tor does save lives under oppressive regimes, and that it enables people like Edward Snowden to come forward with at least a small measure of safety. You will have to decide if it is worth losing all of that to cut off a channel for the drug trade. In the end, there will always be ways to more-or-less-securely trade drugs, but there may not be any way for dissidents to safely use the internet.
And what about those hidden services?
Hidden services enjoy a particularly bad reputation as a place where only drug traders and pedophiles hang out, and it is true that there is a lot of awful stuff hosted on hidden services. But again, there are a lot of different ways these hidden services can be used. Here are two ways in which I personally use hidden services:
I have my own server for instant messaging using Jabber / XMPP, and I connect to it using a Tor hidden service. That way, my server does not know my current IP address (which is good, in case it ever gets taken over by criminals), and it also prevents anyone watching the network from identifying that I am using it at all. Additionally, it gives the other users of my server a way to use it and still be sure that I cannot track them. I would obviously never even try to track them, but I firmly believe in minimizing the amount of damage any one party can do, no matter how trustworthy.
In both cases, I am not interested in hiding the location or identity of my server (as that is trivial to determine using the protocols themselves), but rather in hiding myself from my server, and hiding the fact that I am talking to the server. This makes it slightly harder to identify me, and much harder to identify which channels I am using to communicate (another case of minimizing the information available to any single party). And, most importantly, it adds another layer of protection to the information I am sending.
I hope that this article helped you understand that there are many different ways people use anonymity tools like Tor, and many of them are completely acceptable to every sane person. So, what I am asking of you is simple: keep this in mind when you next hear politicians railing against anonymity. For every criminal, pedophile and “terrorist” using Tor, there is at least one dissident, activist, journalist, or server operator using the same software for good.
Life is not as easy as people make it sound. Why should the issue of anonymity be any different?
AirBnB can be used to find rooms in other cities while you travel. For that purpose, it also offers an official Android application. As the app requests some dangerous permissions (Location, Contacts, …), I enabled the “privacy guard” feature of CyanogenMod right away, which blocks access to location and contacts and asks the user to confirm each access to one of these resources. Due to these prompts, I noticed that AirBnB requests your location a lot, including while the app is not active (in the background, but not terminated).
This made me curious, so I set up mitmproxy to take a look at the network traffic of the app. Fortunately for me (and unfortunately, in general), while it uses HTTPS to phone home, it does not implement certificate pinning, so it was trivial to get a dump of the requests and responses it sends and receives. And, as it turns out, AirBnB is indeed very curious.
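For anyone who wants to reproduce this kind of inspection: instead of eyeballing every flow in mitmproxy, you can script the check. Below is a minimal, illustrative sketch done as a plain function over captured request data; the parameter names are my guesses at typical location fields, not necessarily AirBnB’s actual field names:

```python
# Flag a captured request if it appears to carry GPS coordinates.
# The key names below are illustrative guesses, not AirBnB's actual fields.
LOCATION_KEYS = ("latitude", "longitude", "lat", "lng")

def carries_location(url: str, body: str) -> bool:
    text = (url + " " + body).lower()
    # Match both query-string parameters ("lat=...") and JSON keys ('"lat"').
    return any(f"{key}=" in text or f'"{key}"' in text for key in LOCATION_KEYS)

print(carries_location(
    "https://api.example.com/v1/search?latitude=52.52&longitude=13.40", ""))  # True
print(carries_location(
    "https://api.example.com/v1/listings", '{"city": "Berlin"}'))             # False
```

A function like this could be dropped into a mitmproxy addon to log only the flows that leak a position, rather than dumping everything.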
When is your location disclosed?
The app always sends your current location when it is started. In fact, a whole host of information is sent to AirBnB, including your GPS location with a precision of seven decimal places, your current city in human-readable form, your system language and OS version, the type of your device (phone, tablet), and even a bunch of settings you can presumably set if you are logged into your account on the website. Judging from the presence of an “is_logged_in” field, I assume that this information will be linked to your account if you are logged into the app (I was not).
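To put that seven-decimal precision into perspective: one degree of latitude spans roughly 111 km on the ground, so the seventh decimal place resolves your position down to about a centimetre:

```python
# One degree of latitude corresponds to roughly 111,320 metres on the ground,
# so the seventh decimal place of a coordinate resolves about a centimetre.
METERS_PER_DEGREE_LAT = 111_320

resolution_m = METERS_PER_DEGREE_LAT * 1e-7
print(f"{resolution_m * 100:.1f} cm")  # about 1.1 cm
```

In other words, the app reports far more precision than “listings near you” could ever need.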
The app will also send your GPS location if you search for offers and while it loads the offers in the “discover” tab (where it will display some featured places and locations you could travel to). It has to be stressed that the location is not actually needed for any of this; it’s just AirBnB being curious and wanting the data for their analysis, I assume (they also use a bunch of other trackers, including Google Analytics, New Relic, Flurry, and Facebook, but as far as I could find out, they do not disclose the location to these). There are probably a lot of additional cases where your location is sent to AirBnB, but I stopped here, mostly because I was not interested in sending them even more data.
The app also requests your current location regularly, every five minutes, but as far as I can tell, it does not send these periodic updates to the server.
What is your location used for?
“When you use certain features of the Platform, in particular our mobile applications we may receive, store and process different types of information about your location, including general information (e.g., IP address, zip code) and more specific information (e.g., GPS-based functionality on mobile devices used to access the Platform or specific features of the platform).”
Okay, interesting. Is there a way to opt out of this?
“If you access the Platform through a mobile device and you do not want your device to provide us with location-tracking information, you can disable the GPS or other location-tracking functions on your device, provided your device allows you to do this. See your device manufacturer’s instructions for further details.”
Oh. Okay. And for what, precisely, are you using the data?
We use and process Information about you for the following general purposes:
to enable you to access and use the Platform;
to operate, protect, improve and optimize the Platform, Airbnb’s business, and our users’ experience, such as to perform analytics, conduct research, and for advertising and marketing;
to help create and maintain a trusted and safer environment on the Platform, such as fraud detection and prevention, conducting investigations and risk assessments, verifying the address of your listings, verifying any identifications provided by you, and conducting checks against databases such as public government databases;
to send you service, support and administrative messages, reminders, technical notices, updates, security alerts, and information requested by you;
where we have your consent, to send you marketing and promotional messages and other information that may be of interest to you, including information sent on behalf of our business partners that we think you may find interesting. You will be able to unsubscribe or opt-out from receiving these communications in your settings (in the “Account” section) when you login to your Airbnb account;
to administer rewards, surveys, sweepstakes, contests, or other promotional activities or events sponsored or managed by Airbnb or our business partners; and
to comply with our legal obligations, resolve any disputes that we may have with any of our users, and enforce our agreements with third parties.
So, basically, they reserve the right to do whatever they want with your data. Great.
Why is this bad?
Your current location is quite literally none of their business. They only offer one function that technically requires them to know your current location, and that is “accommodations around me”. In all other situations, your current location is not needed to serve your request, so it should not be disclosed to them. This is not some esoteric concept; this is basic privacy. Also, the best way to prevent the misuse of personal information is not to collect it in the first place.
I contacted AirBnB support via Twitter and, later, via email. The response I got wasn’t very helpful:
The current location is requested in order to provide you rapidly with listings around your area whenever you go to search for a place. You should receive that request when starting it.
This may explain the periodical requests every five minutes, but does not explain why the information is sent to the server. AirBnB, if you are reading this, feel free to contact me or comment on this article.
AirBnB is probably not the only offender in this regard. It probably isn’t even the worst offender. I’m just using it to illustrate a growing trend among companies to collect everything, whether they need it or not. They may not misuse this information. They may not even use it at all. The problem is that I do not know what they are doing. And the hunger for more and more data, combined with the secrecy around what it is actually used for, makes me uncomfortable.
Lately, I have been trying to improve transport security (read: SSL settings and ciphers) for the online banking sites of the banks I am using. And, before you ask, yes, I enjoy fighting windmills.
Quick refresher on SSL / TLS before we continue: There are three things you can vary when choosing cipher suites:
Key Exchange: When connecting via SSL, you have to agree on cryptographic keys to use for encrypting the data. This happens in the key exchange. Example: RSA, DHE, …
Encryption Cipher: The actual encryption happens using this cipher. Example: RC4, AES, …
Message Authentication: The authenticity of encrypted messages is ensured using the algorithm selected here. Example: SHA1, MD5, …
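In an OpenSSL-style TLS cipher suite name (the notation used by tools like the SSLLabs test), these building blocks are simply concatenated, together with the authentication algorithm. An illustrative decomposition:

```python
# OpenSSL-style suite names concatenate the building blocks described above.
# "ECDHE-RSA-AES128-GCM-SHA256" reads as: ECDHE key exchange, RSA
# authentication, AES-128 in GCM mode for encryption, SHA-256 for
# message authentication.
suite = "ECDHE-RSA-AES128-GCM-SHA256"
kx, auth, cipher, mode, mac = suite.split("-")
print(kx)            # ECDHE   (key exchange)
print(auth)          # RSA     (authentication)
print(cipher, mode)  # AES128 GCM (encryption)
print(mac)           # SHA256  (message authentication)
```

(Not every suite name has exactly five parts; this split is only meant to show where each component lives in the name.)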
After the NSA revelations, I started checking the transport security of the websites I was using (the SSL test from SSLLabs / Qualys is a great help for that). I noticed that my bank, which I will keep anonymous to protect the guilty, was using RC4 to provide transport encryption. RC4 is considered somewhere in between “weak” and “completely broken”, with people like Jacob Appelbaum claiming that the NSA is decrypting RC4 in real time.
Given that, RC4 seemed like a bad choice for a cipher. I wrote a message to the support team of my bank, and received a reply that they were working on replacing RC4 with something more sensible, which they did a few months later. But, for some reason, they still did not offer sensible key exchange algorithms, insisting on RSA.
There is nothing inherently wrong with RSA. It is very widely used and I know of no practical attacks on the implementation used by OpenSSL. But there is one problem when using RSA for key exchanges in SSL/TLS: The messages are not forward secret.
What is forward secrecy? Well, let’s say you do some online banking, and some jerk intercepts your traffic. He can’t read any of it (it’s encrypted), but he stores it for later, regardless. Then something like Heartbleed comes along, and the same jerk extracts the private SSL key from your bank.
If you were using RSA (or, generally, any algorithm without forward secrecy) for the key exchange he will now be able to retroactively decrypt all the traffic he has previously stored, seeing everything you did, including your passwords.
However, there is a way to get around that: By using key exchange algorithms like Diffie-Hellman, which create temporary encryption keys that are discarded after the connection is closed. These keys never go “over the wire”, meaning that the attacker cannot know them (if he has not compromised the server or your computer, in which case no amount of crypto will help you). This means that even if the attacker compromises the private key of the server, he will not be able to retroactively decrypt all your stuff.
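Conveniently, whether a connection uses an ephemeral key exchange can be read straight off the negotiated cipher suite name. A rough sketch, assuming OpenSSL-style suite names (the form reported by Python’s ssl module and most server configs):

```python
def has_forward_secrecy(suite: str) -> bool:
    """Rough check on an OpenSSL-style cipher suite name (illustrative).

    Ephemeral key exchanges announce themselves in the name: "ECDHE-..."
    or "DHE-...". A name that starts straight with the cipher, such as
    "AES128-GCM-SHA256", means a plain RSA key exchange without
    forward secrecy.
    """
    return suite.split("-", 1)[0] in ("ECDHE", "DHE")

print(has_forward_secrecy("ECDHE-RSA-AES128-GCM-SHA256"))  # True
print(has_forward_secrecy("AES128-GCM-SHA256"))            # False: plain RSA
```

So a quick look at the suite your browser negotiated with your bank already tells you which side of this trade-off they chose.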
So, why doesn’t everyone use this? Good question. Diffie-Hellman leads to a slightly higher load on the server and makes the connection process slightly slower, so very active sites may choose RSA to reduce the load on their servers. But I assume that in nine out of ten cases, people use RSA because they either don’t know any better or just don’t care. There may also be the problem that some obscure guideline requires them to use only specific algorithms. And as guidelines are updated only rarely and generally don’t much care if their algorithms are weak, companies may be left with the uncomfortable choice between compliance with guidelines and providing strong security, with non-compliance sometimes carrying hefty fines.
So, my bank actually referred to the guidelines of a German institution, the “Deutsche Kreditwirtschaft”, an organisation comprised of a number of large German banking institutes. Among other things, they worked on the standards for online banking in Germany.
So, what do these security guidelines have to say about transport security? Good question. I did some research and came up blank, so I contacted the press relations department and asked them. It took them a month to get back to me, but I finally received an answer. The security guidelines consist of exactly one thing: “Use at least SSLv3“. For non-crypto people, that’s basically like saying “please don’t send your letters in glass envelopes, but we don’t care if you close them with glue, a seal, or a piece of string.” And even that bar is too low: SSLv3 itself has been broken since the POODLE attack.
Worse, in response to my question if they are planning to incorporate algorithms with forward secrecy into their guidelines, they stated that the key management is the responsibility of the banks. This either means that they have no idea what forward secrecy is (the response was worded a bit hand-wavily), or that they actually do know what it is, but have no intention of even recommending it to their member banks.
This leaves us with the uncomfortable situation where the banks point to the guidelines when asked about their lacklustre cipher suites, and those who make the guidelines point back at the banks, saying “Not my department!“. In programming, you would call that a “circular dependency”.
So, how can this stalemate be broken? Well, I will write another message to my bank, telling them that while the guidelines do not include a recommendation of forward secrecy, they also do not forbid using it, so why would you use a key made of rubber band and rocks if you could just use a proper, steel key?
And, of course, the more people do this, the more likely it is that the banks will actually listen to one of us…