
An update on the Crypto Regulation series

If you have been following this blog, you may have noticed that I have an unfinished series on the topic of crypto regulation (part 1, part 2, part 3). I’ve been meaning to finish it up, but life got in the way and I mostly forgot about it. Now, it has become relevant again, as the United Kingdom has just introduced draft legislation that would force companies to assist the government in removing encryption.

I’d like to finish up the series, but right now, I just don’t have the time. But luckily, someone else did a much better job than I could ever do. A dream team of cryptography experts has released a joint article called “Keys under Doormats – mandating insecurity by requiring government access to all data and communications”. An excerpt:

Twenty years ago, law enforcement organizations lobbied to require data and communication services to engineer their products to guarantee law enforcement access to all data. After lengthy debate and vigorous predictions of enforcement channels going dark, these attempts to regulate the emerging Internet were abandoned. In the intervening years, innovation on the Internet flourished, and law enforcement agencies found new and more effective means of accessing vastly larger quantities of data. Today we are again hearing calls for regulation to mandate the provision of exceptional access mechanisms. In this report, a group of computer scientists and security experts, many of whom participated in a 1997 study of these same topics, has convened to explore the likely effects of imposing extraordinary access mandates.

We have found that the damage that could be caused by law enforcement exceptional access requirements would be even greater today than it would have been 20 years ago. In the wake of the growing economic and social cost of the fundamental insecurity of today’s Internet environment, any proposals that alter the security dynamics online should be approached with caution. Exceptional access would force Internet system developers to reverse forward secrecy design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today’s Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

So, yeah. The cryptography equivalent of the Avengers has gone ahead and written this stuff better than I could have. Instead of finishing my own series, I encourage you to read the article.

However, if it turns out that there is interest in me continuing this series, I may make some time to finish the final two articles. So, if you want me to continue, drop me a line in the comments or on Twitter, if that’s your thing.

Review: “The Circle” by Dave Eggers

Secrets are lies
Sharing is caring
Privacy is theft
— The Circle, Dave Eggers

The Circle is scary. Not scary in the sense of a thriller or horror novel. Scary in the sense in which Brave New World is scary: By showing the limitless capacity of humans to ignore the consequences of their actions, as long as they think it is for a higher goal (or even if it’s only for their own amusement).

The book follows the story of Mae Holland, freshly hired at The Circle, a social media / search / technology giant which has revolutionized the web with some sort of universal, single identity system and assorted services (and is quite obviously a reference to Google). The story covers the development of both Mae as a person and the Circle as a company, which slides further and further into a modus operandi that would make everyone except the most radical post-privacy advocates flinch (the quote above encapsulates the views of the company quite well).

Over the course of the book, The Circle invents more and more technologies that are, on the surface, extremely useful and could change the world for the better. However, each invention further reduces the personal privacy of everyone and builds an ever-growing net of tracking and surveillance over the planet.

I don’t want to go into too much detail here (I hate spoilers), but over the course of the book, I found myself despising Mae more and more. For me, the character embodies everything that is wrong with social networks and with broader trends on the internet. This book has it all: thoughtless acts without regard for the privacy of others, generally zero reflection about the impact her actions may have, and the problem of slacktivism:

Below the picture of Ana María was a blurry photo of a group of men in mismatched military garb, walking through dense jungle. Next to the photo was a frown button that said “We denounce the Central Guatemalan Security Forces.” Mae hesitated briefly, knowing the gravity of what she was about to do—to come out against these rapists and murderers—but she needed to make a stand. She pushed the button. […] Mae sat for a moment, feeling very alert, very aware of herself, knowing that [she had] possibly made a group of powerful enemies in Guatemala.

I really enjoyed the theme of the book (as in: I was terrified of the future it portrayed. I’m a sucker for a good dystopia). However, the book suffers a little from the writing itself. I can’t put a finger on it, but something about the writing seemed off to me. The book also suffers from having a main character who, for me, was clearly an antagonist.

It serves as a warning not to blindly accept every new technology and to critically ask how it could be misused and how your use of it may impact others, from the small things like talking about others on social networks (others who may wish to keep certain things private) to the idea of filming your own life.

A young man, seeming too young to be drinking at all, aimed his face at Mae’s camera. “Hey mom, I’m home studying.” A woman of about thirty, who may or may not have been with the too young man, said, walking out of view, “Hey honey, I’m at a book club with the ladies. Say hi to the kids!”

The Circle is not a happy book. Even though it has its problems, you should read it, because it gives some perspective on the direction our use of technology is taking. Read it, and think about it when you use your social networks or read about the newest products.

The Circle is the most terrifying dystopia of all: The one where many people would say “Dystopia? What dystopia? That sounds awesome, I’d love to live there”. And that, more than anything else, is why it terrifies me.

Crypto Regulation, Part 3: Regulating end-to-end encryption

This is part 3 of a series on the crypto regulations proposed by Cameron, Obama and others. Part 1 explains what this is all about and discusses different types of cryptography. Part 2 discusses the different ways transport encryption could conceivably be regulated. I encourage you to read those two first in order to better understand this article.

We are in the middle of a thought experiment: What if we were tasked with coming up with a regulation that allows us (the European law enforcement agencies) to decrypt encrypted messages while others (like foreign intelligence agencies) are still unable to do so. In the previous installment, we looked at how transport encryption could be regulated. We saw that there were a number of different ways this could be achieved:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle-Attack on all / select connections
  • Key escrow
  • Key disclosure laws

We have also seen where each of these techniques has its problems, and how none of them achieved the goal of letting us decrypt information while others cannot. Let’s see if we have better luck with the regulation of end-to-end encryption (E2EE).

Regulating end-to-end encryption

If we look at the history of cryptography, it turns out that end-to-end encryption is a nightmare to regulate, as the United States found out during the first “Crypto Wars“. It is practically impossible to control what software people run on their computers, and equally impossible to regulate the spread of the software itself. There are simply too many computers and too many ways to get access to software to control them all.

But let’s leave aside the practical issues of enforcement (that’s going to be a whole post of its own) for now. There are still a number of additional problems we have to face: There are only two points in the system where we could gain access to the plain text of a message: The sender and the receiver. For example, even if a PGP-encrypted message is sent via Google Mail, we cannot gain access to the decrypted message by asking Google for it, as they don’t have it, either. We are forced to raid either the sender or the receiver to access the unencrypted contents of the message.

This makes it hard to collect even small amounts of these messages, and impossible to perform large-scale collection.1) And this is why end-to-end encryption is so scary to law enforcement and intelligence agencies alike: There could be anything in them, from cake recipes to terror plots, and there’s no way to know until you have decrypted them.

Given that pretty much every home computer and smartphone is capable of running end-to-end encryption software, the potential for unreadable communication is staggering. It is only natural that law enforcement and intelligence services alike are now calling for legislation to outlaw or at least regulate this threat to their investigatory powers. So, in our thought experiment, how could this work?

Regulation techniques

The regulation techniques from the previous part of this series mostly still apply, but the circumstances around them have changed, so we’ll have to re-examine them and see if they could work under these new circumstances. Also, given that end-to-end encrypted messages are almost always transmitted over transport-encrypted channels, a solution for the transport encryption problem would be needed in order to meaningfully implement any of these regulations.

We will look at the following regulation techniques:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle-Attack on all / select connections
  • Key escrow
  • Key disclosure laws
  • The “Golden Key”

Let’s take another look at each of these proposals and their merits and disadvantages for this new problem.

Outlawing cryptography

This proposal is actually surprisingly practical at first glance: Almost no corporations use end-to-end encryption (meaning that there would be next to no lobbying against such a proposal), comparatively few private persons use it (meaning there would be almost no resistance from the public), and it completely fixes the problem of end-to-end encryption. So, case closed?

That depends. Completely outlawing this kind of cryptography would leave the communication open not only to our own law enforcement, but also to foreign intelligence agencies. Recent reports (pdf) to the European Parliament’s Science and Technology Options Assessment (STOA) board suggest that one of the ways to improve European IT security would be to foster the adoption of E2EE:

In this way E2EE offers an improved level of confidentiality of information and thus privacy, protecting users from both censorship and repression and law enforcement and intelligence. […] Since the users of these products might be either criminals or well-meaning citizens, a political discussion is needed to balance the interests involved in this instance.

As the report says, a tradeoff between the interests of society as a whole and law enforcement needs to be found. Outlawing cryptography would hardly constitute a tradeoff, and there are many legitimate and important uses for E2EE. For example, journalists may use it to protect their sources, human rights activists use it to communicate with contacts living under oppressive regimes, and so on. Legislation outlawing E2EE would make all of this illegal and would leave journalists unable to communicate with confidential sources without fear of revealing their identity.

In the end, a tradeoff will have to be found, but that tradeoff cannot be to completely outlaw the only thing that lets these people do their job with some measure of security, at least not if we want our society to still have adversarial journalism and human rights activists five years from now.

Mandating the use of weak or backdoored algorithms

This involves pretty much the same ideas and problems we have already discussed in the previous article. However, we also encounter another problem: E2EE software is used by many individuals on many different systems (as opposed to a comparatively small number of corporations managing large numbers of identical, easily modified servers), in many different versions, some of which are no longer actively developed, and many of which are open source and not maintained by EU citizens. Mandating the use of specific algorithms would therefore entail…

  • …forcing every developer of such a system to introduce these weak algorithms (and producing these updates yourself for those programs which are no longer actively maintained by anyone)
  • …forcing everyone to download the new versions and configure them to use the weak algorithms
  • …preventing people from switching back to more secure versions (although that is an issue of enforcement, which we will deal with later)

In practice, this is basically impossible to achieve. Project maintainers not living under the jurisdiction of the EU would refuse to add algorithms to their software that they know are bad, and many of the more privacy-conscious and tech-literate crowd would just refuse to update their software (again, see enforcement). Assuming that using any non-backdoored algorithms would be illegal, this would be equivalent to outlawing E2EE altogether.

In a globalized world, many people communicate across state boundaries. Such a regulation would imply forcing foreigners to use bad cryptography in order to communicate with Europeans (or possibly get their European communication partners in trouble for receiving messages using strong cryptography). In a world of global, collaborative work, you sometimes may not even know which country your communication partner resides in. The administrative overhead for everyone would be incredible, and thus people would either ignore the law or stop communicating with Europeans.

Additionally, software used for E2EE is also used in other areas: For example, many Linux distributions use GPG (a program for end-to-end encryption of eMails) to verify that software updates have not been tampered with. Using bad algorithms for this would compromise the security of the whole operating system.

Again, it is a question of the tradeoff: Does having access to the communication of the general population justify putting all of this at risk? I don’t think so, but then again, I am not a Minister of the Interior.

Performing a Man-in-the-Middle-Attack on all / select connections

If a Man-in-the-Middle-Attack (short: MitM) on transport security is impractical, it becomes downright impossible for most forms of E2EE. To understand why, we need to look at the two major ways E2EE is done in practice: The public key model and the key agreement model:

  • In the public key model, each participant has a public and a private cryptographic key. The public key is known to everyone and used to encrypt messages, the private key is only known to the recipient and is used to decrypt the message. The public key is exchanged once and keeps being re-used for future communication. This model is used by GPG, among others.
  • In the key agreement model, two communication partners establish a connection and then exchange a few messages in order to establish a shared cryptographic key. This key is never sent over the connection2) and is only known to the two endpoints of the connection, who can now use that key to encrypt messages between each other. For each chat session, a new cryptographic key is established, with long-term identity keys being used to ensure that we are still talking to the same person in the next session. Variations of this model are used in OTR and TextSecure, among others (a minimal sketch of this model follows below).
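
To make the key agreement model concrete, here is a minimal sketch in Python using the “cryptography” library (my choice for illustration; no particular messenger works exactly like this):

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each side generates an ephemeral key pair and transmits only the public half.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# After swapping public keys, both sides arrive at the same shared secret,
# which itself never travels over the network.
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared

A real protocol would feed this raw secret into a key derivation function and, crucially, authenticate the exchange with the long-term identity keys mentioned above; without that authentication step, the exchange is exactly what a man-in-the-middle would target.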

So, how would performing a MitM attack work for each of these models? For the public key model, you would need to intercept the initial key exchange (i.e. the moment when the actual keys are downloaded or otherwise exchanged) and replace the keys with your own. This is hard, for multiple reasons:

  • Many key exchanges have already happened and cannot be retroactively attacked
  • Replaced keys are easily detected if the user is paying attention, and the forged keys can subsequently be ignored.
  • Keys can be exchanged in a variety of ways, not all of which involve the internet

So, one would not only have to attack the key exchanges but also prevent the user from disregarding the forged keys and using other methods for the key exchange instead. If we’re going to do that, we may as well just backdoor the software, which would be far easier.

Attacking a key agreement system wouldn’t work much better: We would have to intercept each key agreement message (which is usually itself encrypted using transport encryption) and replace the message with one of our own. These messages are also authenticated using the long-term identity keys of the communication partners, so we would either have to gain access to those keys (which is hard) or replace them with our own (which is, again, easily detected).

So, while this may theoretically be possible, it is far from viable and suffers from the same issues of enforcement all the other proposals do.

Key escrow

Key escrow sounds like the perfect solution: Everyone has to deposit their keys somewhere law enforcement may gain access to them. The exact implementation may vary, but that’s the general idea. So, what’s wrong with this idea?

First off, the same caveats as before apply: You are creating an interesting target for both intelligence agencies and criminals. In addition to that, this would only work for the public key model, where the same keys are used over and over again. In the key agreement model, new keys are generated all the time (and the old ones deleted), so a way would have to be found to enter these keys into an escrow system and retain them in case they are ever needed. This would quickly grow into a massive database of keys (many of which would be worthless as no crime was committed using them), which you would have to hang on to, just in case it ever becomes relevant.

Key disclosure laws

The same theme continues here: Key disclosure laws (if they are even allowed under European law) may be able to compel users to disclose their private keys, but users can’t disclose keys that no longer exist. Since the keys used in key agreement schemes are usually deleted after less than a day (often after only minutes), the user would be unable to comply with a key disclosure request from law enforcement, even if he wanted to. And since it is considered best practice not to keep logs of encrypted chats, the user would also be unable to provide law enforcement with a record of the conversation in question.

Changing this would require massive changes to the software used for encrypted communication, encountering the same problems we already discussed when talking about introducing backdoors into software. So, this proposal is pretty much useless as well.

The “Golden Key”

The term “Golden Key” refers to a recent comment in the Washington Post, which stated:

A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

— “Compromise needed on Smartphone encryption“, Washington Post, 2014

The article was an obvious attempt to propose the exact same idea (a backdoor) using a different, less politically charged word (“Golden key”), because any system that allows you to bypass a protection mechanism is, by its very nature, a backdoor. But let’s humor them and use the term, because a “golden key” sounds fancy, right? So, how would that work?

In essence, every piece of encryption software would have to be modified in a way that forced it to encrypt every message with one additional key, which is under the control of law enforcement. That way, the keys of the users can stay secret, and only one key needs to be kept safe to keep the communication secured against everyone else. Problem solved?

Not so much. We still have the problem of forcing software developers to include that backdoor into their systems. Then there’s the problem of who is holding the keys. Is there only one key for the whole EU? Or, to put it another way, do you really believe that there is no industrial espionage going on between European Countries?

Or do we have keys for each individual country? And how does the software decide with which key to encrypt the data in this case? What if I talk to someone in France? How is my software supposed to know that and encrypt both with the German and the French key? How secure is the storage of that key? Again, we have a single point of failure. If someone gains access to one of these master keys, he/she can unlock every encrypted message in that country.

There are a lot of questions that would have to be answered to implement this proposal, and I am pretty sure that there is no satisfying solution (If you think you have one, let me know in the comments).
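
To make the mechanics concrete, here is a minimal Python sketch (using the “cryptography” library) of what “encrypt every message to one additional key” amounts to. Everything here is a hypothetical illustration, not any real or proposed system:

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Hypothetical keys; in reality the escrow key would be held by a government body.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # the "golden key"

# The message is encrypted once with a random session key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"the actual message")

# ...and the session key is then wrapped for BOTH the recipient and the escrow key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
for_recipient = recipient_key.public_key().encrypt(session_key, oaep)
for_escrow = escrow_key.public_key().encrypt(session_key, oaep)

# Whoever holds (or steals) escrow_key can recover every session key ever
# wrapped with it -- the single point of failure discussed above.
assert escrow_key.decrypt(for_escrow, oaep) == session_key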

Conclusion

We have looked at six proposals for regulating end-to-end encryption and have found all of them lacking in effectiveness and rife with harmful side effects. All of these proposals reduce the security of everyone’s communication, not to mention the toxic side effects on basic human rights that are a given whenever such measures are considered.

There must be a tradeoff between security and privacy, but that tradeoff should not be less security for less privacy, and any attempt at regulating encryption we have looked at is exactly that: It makes everyone less secure and, at the same time, harms their privacy.

One issue we haven’t even looked at yet is how to actually enforce any of these measures, which is another can of worms entirely. We’re going to do that next, in the upcoming fourth installment of this series.


As before, thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are my own.


Footnotes
1 Although, as stated before, it is still possible to collect metadata, which is the thing intelligence agencies are most interested in anyway.
2 This works due to mathematical properties of the messages we send, but for our purposes, it is enough to know that both parties will have the same key, while no one else can easily compute the same key from the values sent over the network.

Crypto Regulation, Part 2: Regulating transport encryption

This is part 2 of a series on the crypto regulations proposed by Cameron, Obama and others. Part 1, explaining what it is all about and describing different types of cryptography, can be found here.

The declared goal of crypto regulation is to be able to read every message passing through a country, regardless of who sent or received it and what technology they used. Regular readers probably know my feelings about such ideas, but let’s just assume that we are a member of David Cameron’s staff and are tasked with coming up with a plan on how to achieve this.1)

We have to keep in mind the two types of encryption we have previously talked about, transport and end-to-end encryption. I will discuss the problems associated with gaining access to communication secured by the respective technologies, and possible alternatives to regulating cryptography. Afterwards, I will look at the technological solution that could be used to implement the regulation of cryptography. This part will be about transport encryption, while the next part will deal with end-to-end encryption.

Regulating transport encryption

As a rule, transport encryption is easier to regulate, as the number of parties you have to involve is much lower. For instance, if you are interested in gaining access to the transport-encrypted communication of all Google Mail users, you only have to talk to Google, and not to each individual user.

For most of these companies, it probably wouldn’t even be necessary to regulate the cryptography itself; they could simply be (and are) required to hand over information to law enforcement agencies. These laws could, if necessary, be expanded to include PRISM-like full access to the data stored on the servers (assuming this is not already common practice). Assuming that our goal really only is to gain access to the communication content and metadata, this should be enough to satisfy the needs of law enforcement.

Access to the actual information while it is encrypted and flowing through the internet is only required if we are interested in more than the data stored on the servers of the companies. An example would be the passwords used to log into a service, which are transmitted in encrypted form over the internet. These passwords are usually not stored in plain text on the company servers. Instead, they store a so-called hash of the password, which is easy to compute from the password but makes it practically impossible to recover the password from it.2) However, if we were able to decrypt the password while it is sent over the internet, we would gain access to the account and could perform actions ourselves (e.g. send messages). More importantly, we could also use that password to attempt to log into other accounts of the suspect, potentially gaining access to more accounts with non-cooperating (foreign) companies or private servers.
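
To make the hashing scheme concrete, here is a minimal Python sketch of salted password hashing (the parameters are illustrative, not a recommendation; see also footnote 2):

import hashlib, os

# The server stores only (salt, hash); the password itself is never kept.
password = b"correct horse battery staple"
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# At login, the submitted password is hashed the same way and compared.
attempt = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 100_000)
assert attempt == stored  # reversing 'stored' back to the password is infeasible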

Regulation techniques

So, assuming we want that kind of access to the communication, we’re back to the topic of regulating transport encryption. The different ways this access could be ensured are, in rising order of practicality:

  • Outlawing cryptography
  • Mandating the use of weak or backdoored algorithms
  • Performing a Man-in-the-Middle-Attack on all / select connections
  • Key escrow
  • Key disclosure laws

Let’s take a look at each of these proposals, their merits and their disadvantages.

Outlawing cryptography

Outlawing cryptography has the advantage of simplicity. There is no overhead of backdooring implementations, implementing key escrow, or performing active attacks. However, that is just about the only advantage of this proposal.

Cryptography is fundamental to the way our society works, and the modern information age would not be possible without it. You are using cryptography every day: when you get your mail, when you log into a website, when you purchase stuff online, even on this very website, your connection is encrypted.

It gets even worse for companies. They rely on their information being encrypted when communicating with other companies or their customers; otherwise, their trade secrets would be free for the taking. Banks would have to cease offering online banking. Amazon would probably go out of business. Internet crime would skyrocket as criminals hijacked unprotected accounts and stole private and corporate information.

So, given the resistance any such proposition would face, outlawing cryptography as a whole isn’t really an option. An alternative would be to just outlaw it for individuals, but not for corporations. That way, the banks could continue offering online banking, but individuals would no longer be allowed to encrypt their private information.

Such a law would technically be possible, but would raise a lot of problems in practice. Aside from being impossible to enforce, some existing programs can only save their data in an encrypted form (e.g. banking applications). Some people have devices they use both privately and for their job, and their employer may require them to encrypt the device. There are a lot of special cases that would cause problems in the actual implementation of this law, not to mention the possible damage caused by criminals gaining access to unencrypted private information. There would definitely be a lot of opposition to such a law, and the end result would be hard to predict.

Mandating the use of weak or backdoored algorithms

In this case, some party would come up with a list of ciphers which are considered secure enough against common “cyber criminals”, while offering no significant resistance to law enforcement or intelligence agencies. This could be achieved either through raw computational power (limiting the size of encryption keys to a level where all possibilities can be tried out in a reasonable timeframe, given the computational resources available to law enforcement / intelligence agencies), or through the introduction of a backdoor in the algorithm.

In cryptography, a backdoor could be anything from encrypting the data with a second key, owned by the government, to make sure that they can also listen in, to using weak random numbers for the generation of cryptographic keys, which would allow anyone knowing the exact weakness to recover the keys much more quickly. This has apparently already happened: It is suspected (and has pretty much been proven) that the NSA introduced a backdoor into the Dual EC DRBG random number generator, and it is alleged that they paid off a big company (RSA) to then make this algorithm the standard random number generator in their commercial software.

The problem with backdoors is that once they are discovered, anyone can use them. For example, if we mandated that everyone use Dual EC DRBG random numbers for their cryptographic functions, not only we, but also the NSA could decrypt the data much more easily. If we encrypt everything to a second key, then anyone in possession of that key could use it to decrypt the data, which would make the storage location of the key a very attractive target for foreign spies and malicious hackers. So, unless we want to make the whole system insecure to potentially anyone and not just us, backdooring the cryptography is a bad idea.

The other option we mentioned was limiting the size of cryptographic keys. For example, we could mandate that certain important keys may only use key sizes of up to 768 bits, which can be cracked within a reasonable timeframe using sufficient computing power. But, once again, we encounter the same problem: If we can crack the key, other organizations with comparable power (NSA, KGB, the Chinese Ministry of State Security, …) can do the same.

Also, because the computational power of computers is still increasing every year, it may be that in a few years, a dedicated individual / small group could also break encryption with that key length. This could prove disastrous if data that may still be valuable a decade later is encrypted with keys of that strength, e.g. trade secrets or long-term plans. Competitors would just have to get a hold of the encrypted data and wait for technology to reach a point where it becomes affordable to break the encryption.
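
To see how directly key size translates into crackability, here is a toy Python sketch that recovers a deliberately tiny 24-bit key by trying every possibility. On an ordinary laptop this finishes in well under a minute, and every additional key bit doubles the effort, which is why real 128-bit keys are out of everyone’s reach while a deliberately capped key is out of no well-funded attacker’s reach:

import hashlib, itertools

secret = bytes([0x4f, 0xa2, 0x17])        # the 24-bit "key" we pretend not to know
target = hashlib.sha256(secret).digest()  # something that lets us test guesses

# Exhaustive search over all 2**24 possible three-byte keys.
for guess in itertools.product(range(256), repeat=3):
    if hashlib.sha256(bytes(guess)).digest() == target:
        print("recovered key:", bytes(guess).hex())
        break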

So, mandating the use of weak or backdoored cryptography would make everyone less secure against intelligence agencies and quite possibly even against regular criminals or corporate espionage. In that light, this form of regulation probably involves too much risk for too little reward (cracking these keys still takes some time, so it cannot really be done at a large scale).

Performing a Man-in-the-Middle-Attack on all / select connections

A man-in-the-middle (MitM) attack occurs when one party (commonly called Alice) wants to talk to another party (Bob), but the communication is intercepted by someone else (Mallory), who then modifies the data in transit. Usually, this involves replacing transmitted encryption keys with others in order to be able to decrypt the data and re-encrypt it before sending it on to the destination (the Wikipedia article has a good explanation). This attack is usually prevented by authenticating the data. There are different techniques for that, but most of the actual communication between human beings (e.g. eMail transfer, logins into websites, …) is protected using SSL/TLS, which uses a model involving Certification Authorities (CAs).

In the CA model, there are a bunch of organizations who are trusted to verify the identity of people and organizations. You can apply to one of them for a digital certificate, which confirms that a certain encryption key belongs to a certain website or individual. The CA is then supposed to verify that you are, in fact, the owner of said website, and issue you a certificate file that states “We, the certification authority xyz, confirm that the cryptographic key abc belongs to the website blog.velcommuta.de”. Using that file and the encryption key, you can then offer web browsers a way to (more or less) securely access your website via SSL/TLS. The server will send its encryption key and the certificate confirming that this key is authentic to clients, who can then use that key to communicate with the server.3)
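
You can watch this model in action with a few lines of Python (the hostname is only an example):

import socket, ssl

ctx = ssl.create_default_context()  # trusts the system's list of CAs
with socket.create_connection(("example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
        cert = tls.getpeercert()
        print("subject:", cert["subject"])  # who the certificate is for
        print("issuer: ", cert["issuer"])   # the CA vouching for the key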

The problem is that every certification authority is trusted to issue certificates for every website, and no one can prevent them from issuing a false certificate (e.g. confirming that key def is a valid key for my website). A man-in-the-middle could then use such a certificate to hijack a connection, replace my cryptographic key with their own and listen in on the communication.

Now, in order to get into every (or at least every interesting) stream of communication, we would need two things:

  • A certification authority that is willing (or can be forced) to give us certificates for any site we want
  • The cooperation (again, voluntary or forced) of internet providers to perform the attack for us

Both of these things can be written into law and passed, and we would have a way to listen in on every connection protected by this protocol. However, there are a few problems with that idea.

One problem is that not all connections use the CA model, so we would need to find a way to attack other protocols as well. These protocols are mostly unimportant for large-scale communication like eMail, but become interesting if we want to gain access to specialized services or specific servers.

The second problem is that some applications do additional checks on the certificates. They can either make sure that the certificate comes from a specific certification authority, or they could even make sure that it is a specific certificate (a process called Certificate Pinning4)). Those programs would stop working if we started intercepting their traffic.
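
In essence, pinning replaces “trust any CA” with “trust exactly this certificate”. Here is a minimal sketch in Python; the pinned fingerprint is a placeholder that a real client would ship with:

import hashlib, socket, ssl

PINNED = "0000...0000"  # placeholder: hex SHA-256 of the expected certificate

ctx = ssl.create_default_context()
with socket.create_connection(("example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
        der = tls.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED:
            # Even a CA-signed certificate is rejected if it is not the pinned one.
            raise ssl.SSLError("certificate does not match pin, possible MitM")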

The third problem is that it creates a third point at which connections can be attacked by criminals and foreign intelligence agencies. Usually, they would have to attack either the source or the destination of a connection in order to gain access to the communication. Attacking the source is usually hard, as that would be your laptop, and there are an awful lot of personal computers which you would have to attack in order to gain full access to all communication that way.

Attacking the destination is also hard, because those are usually servers run by professional companies who (hopefully) have good security measures in place to prevent those attacks. It is probably still possible to find a way in if you invest enough effort, but it is hard to do at scale.

However, if you introduce a few centralized points at which all communication flowing through the network of an internet operator is decrypted and re-encrypted, you also create one big, juicy target, because you can read all of those connections by compromising one server (or at least a much smaller number of servers than otherwise). And experience has shown that for juicy targets like that, intelligence agencies are willing to invest a lot of effort.

So, performing MitM-Attacks on all connections would not work for all types of connections, it would not work for all devices, and it would create attractive targets for hostile agencies to gain access to a large percentage of formerly secured traffic. That does not seem like a good trade to me, so let’s keep looking for alternatives.

Key escrow

Key escrow (sometimes called a “fair” cryptosystem by proponents and key surrender by opponents) is the practice of keeping the cryptographic keys needed to decrypt data in a repository where certain parties (in our case, law enforcement agencies) may gain access to them under certain circumstances.

The main problem in this case is finding an arrangement where the keys are stored in a way that lets only authorized parties access them. Assuming we want to continue following a system with judicial oversight, that would probably mean that the escrow system could only be accessed with a warrant / court order. It is hard to enforce this using technology alone, and systems involving humans are prone to abuse and mistakes. However, with a system as security critical as a repository for cryptographic keys, any mistake could prove costly, both in a figurative and a literal sense.

Then there is the problem of setting the system up. Do you want a central European repository? A central repository for each country? Will every server operator be required to run escrow software on their own server? Each of these options has its own advantages and drawbacks.

  • A European repository would mean less administrative effort overall, but it would create a single point of failure, which, when compromised, would impact the internet security of the whole EU. As with the issue of man-in-the-middle attack devices, history has shown that foreign agencies can and will go to a lot of effort to compromise such repositories. A central European repository would also assume that European countries do not spy on each other, which is a naive assumption.
  • Country-wide repositories fix the last problem, but still suffer from the others. They are attractive targets for both foreign intelligence agencies and cybercriminals.
  • Individual repositories face the problem of compatibility (there are a LOT of different operating systems and versions running on servers). They are less centralized, which is good (the effort to break into them increases)5), but they also imply that law enforcement would have to be able to electronically retrieve the key on demand. If someone knew that the police was onto him, he could thus disable the software or even destroy the key and server in order to prevent the police from retroactively decrypting potential evidence they had already captured.

Again, we have encountered administrative problems and important security aspects that make this option problematic at best. So, why don’t we take a look at how things are done right now in Great Britain and see whether it would make sense to expand this law to the rest of Europe.

Key disclosure laws
Key disclosure in practice (Image: “Security” by Randall Munroe, licensed CC BY-NC 2.5)

The British Regulation of Investigatory Powers Act 2000 (RIPA) includes a provision requiring suspects in a crime to hand over encryption keys or face jail time of up to two years (or up to five in cases of terrorism or suspected child pornography).6) The law has already been used to imprison at least three people for refusing to give up encryption keys.

However, all members of the Council of Europe have ratified the European Convention on Human Rights. While it is not specifically mentioned, the European Court of Human Rights holds that

…the right to remain silent under police questioning and the privilege against self-incrimination are generally recognized international standards which lie at the heart of the notion of a fair procedure under Article 6 [of the European Convention on Human Rights].

Requiring an individual to surrender keys would probably be in violation of the right to remain silent (although there are different opinions on that). Any such law would almost certainly be annulled by the Court of Justice of the European Union, as it did with the Data Retention Directive.

However, such a law could conceivably be used to compel companies or witnesses to disclose encryption keys they have access to. These laws exist in some European countries, and could be expanded to all of Europe. It would remain to be seen what the European Court of Justice would think of that, as such a law would definitely be challenged, but the potential of a law being annulled by the ECJ has not prevented the European parliament from passing them in the past.

There exists another, more technical concern with this: More and more websites employ cryptographic techniques that ensure a property called (perfect) forward secrecy, or (P)FS for short. This ensures that even if an encrypted conversation is eavesdropped on and recorded, and even if the encryption keys are surrendered to law enforcement afterwards (or stolen by criminals), they will be unable to decrypt the conversation. The only way to eavesdrop on this kind of communication is to perform an active man-in-the-middle-attack while in possession of a valid key.

This means that even if law enforcement has a recording of evidence while it was being transmitted, and even if they could force someone to give them the relevant keys, they would still be unable to gain access to said evidence. This technology is slowly becoming the standard, and the percentage of connections protected by it will only grow, meaning that laws requiring the disclosure of keys after the communication has taken place will become less and less useful over the next years.
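
Incidentally, whether a given server offers forward secrecy can be checked with a few lines of Python: the name of the negotiated cipher suite tells you whether ephemeral (ECDHE/DHE) key exchange was used. The hostname is only an example:

import socket, ssl

# Connect and print the negotiated cipher suite; an ECDHE or DHE suite
# means ephemeral keys were used, i.e. the connection has forward secrecy.
ctx = ssl.create_default_context()
with socket.create_connection(("example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
        name, version, bits = tls.cipher()
        print(name, version, bits)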

Conclusion

We have taken a look at five different proposals for regulating transport security, and have found that each is either extremely harmful to the security of the European internet or ineffective at providing access to encrypted communication. Each of the proposals also holds an enormous potential for abuse from governments and intelligence services.

This concludes part 2 of my series on crypto regulation. Part 3 discusses possible ways to regulate end-to-end cryptography.


As before, thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are my own.


Footnotes
1 I’m playing “Devil’s system engineer” here and am obviously completely opposed to any of the measures I describe in this article, in case there was any doubt.
2 Again, this is a simplification. In the real world, there are important considerations, including the choice of a proper hash function and the salting of passwords, but that is beyond the scope of this article.
3 As always, I am simplifying matters here, but the exact inner workings of TLS are not relevant to this article.
4 There is a Firefox extension that does that
5 …assuming the key escrow software does not have a security hole itself, which is an optimistic assumption.
6 Distressingly, it does not even distinguish between willingly not giving up the key and being unable to give up a key. This means that if the police thinks something is encrypted, and it is not, you can be sent to jail for refusing to give up a key to decrypt imaginary encrypted data.

Crypto Regulation, Part 1: What is it all about?

Over the last weeks, we’ve had a slew of politicians asking for new legislation in response to the Paris attacks. The proposed new regulations range from a new Data Retention directive (here’s my opinion on that) to PNR (Passenger Name Records, data about the passengers of all flights within Europe) data exchange within the EU.

By far the most worrying suggestion initially came from UK Prime Minister David Cameron, but was taken up by Barack Obama, the Counterterrorism Coordinator (pdf) of the EU, and the German Innenminister (Minister of the Interior), de Maizière: A regulation of encryption technology. The reasons they give are very similar: We need to be able (in really important cases, with proper oversight, a signed warrant et cetera) to read the contents of any communication in order to “protect all of us from terrorists”.1)

The irony of justifying this with the Paris attack, a case where the terrorists were known to the relevant authorities and used unencrypted communication, is apparently lost on them.

In this series of posts, I will take a look at what crypto regulation means, how it could (or could not) work in practice, and why it appeals to pro-surveillance politicians either way.

An (extremely) brief primer on cryptography

Cryptography is used in order to hide information from unauthorized readers. In order to decrypt a message, you need three things: The encrypted message (obviously), knowledge about the algorithm that was used to encrypt it (which can often, but not always, be easily determined), and the cryptographic key that was used to do it. When we talk about crypto regulation, we usually assume that algorithm and message are known to whoever wants to read them, and the only missing thing is the key.
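
As a minimal illustration, here is a sketch in Python using the “cryptography” library (message and key made up, obviously): with the key, decryption is trivial; without it, the ciphertext is just opaque bytes, even though the algorithm is publicly known.

from cryptography.fernet import Fernet

key = Fernet.generate_key()                      # the only secret in the system
ciphertext = Fernet(key).encrypt(b"meet me at the usual place")

print(Fernet(key).decrypt(ciphertext))           # b'meet me at the usual place'
# Decrypting with any other key raises InvalidToken; without the key, the
# message stays unreadable even though Fernet itself is fully documented.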

Cryptography is all around you, although you may not see it. In fact, you are using cryptography right now: This website is protected using SSL/TLS (that’s the https:// you see everywhere). You are also using it when you go to withdraw money from an ATM, when you send a mail, log into any website, and so on. All of those things use cryptography, although the strength (i.e. how hard that cryptography is to break) varies.

A (very) brief history of crypto regulation to date

Crypto regulation is hardly a new idea. For a long time, the export of encryption technology was regulated as a munition in the United States (the fight for the right to freely use and export cryptography was called the Crypto Wars and spawned some interesting tricks to get around the export restriction). This restriction was relaxed, but never completely removed (it is still illegal to export strong encryption technology into “rogue states” like Iran).

During the last 10 years or so, there haven’t really been serious attempts to limit the use and development of encryption technology2), leading to the rise of many great cryptographic tools like GnuPG, OTR, Tor and TextSecure.3) But now, there appears to be another push to regulate or even outlaw strong encryption.

What is “strong encryption”?

In cryptography, we distinguish between two4) different kinds of encryption. There is transport encryption and end-to-end encryption. Transport encryption means that your communication is encrypted on its way from you to the server, but decrypted on the server. For example, if you send a regular eMail, your connection to the server is encrypted (no one who is eavesdropping on your connection is able to read it), but the server can read your eMail without a problem. This type of encryption is used by almost every technology you use, be it eMail, chats (except for a select few), or telephony like Skype.

The major drawback of transport encryption is that you have to trust the person or organization operating the server to not look at your stuff while it is floating around on their server. History has shown that most companies simply cannot be trusted to keep your data safe, be it against malicious hackers (see Sony), the government (see PRISM), or their own advertising and analytics desires (see Google Mail, Facebook, …).

The alternative is end-to-end encryption. For this, you encrypt your message in a way that only allows the legitimate receiver to decrypt it. That way, your message cannot be read by anyone except the legitimate receiver.5) The advantage should be obvious: You can put that message on an untrusted server and the operators of said server cannot read it.

The drawback is the logistics: The recipients need to have their cryptographic keys to decrypt the message, which can be a hassle if you have a lot of devices. The key can also be stolen and used to decrypt your messages. For some usage scenarios like chats, there are solutions like the aforementioned OTR and TextSecure (which you should install if you own an Android phone), but there is no such solution for eMails. End-to-end encryption also does not protect the metadata (who is talking to whom, when, for how long, et cetera) of your messages, only the contents.

When politicians are talking about “strong encryption”, they are probably referring to end-to-end encryption, because that data is much harder to obtain than transport-encrypted data, which can still be seized on the servers it resides on. To read your end-to-end encrypted data, they would have to seize both the encrypted data and your encryption keys (and compel you to give them the passwords you protected them with), which is a lot harder to do.

Conclusion

Now that we have a basic understanding of the different types of encryption used in the wild, we can talk about how to regulate them. This will be covered in part 2 of this series.


Thanks go out to niemalsnever, FreeFall and DanielAW for proofreading and suggestions. Any remaining mistakes are solely mine.


Footnotes
1 I dislike the term “Terrorist” because it can be (and has been) expanded to include pretty much anyone you disagree with. However, for readability, I will use it in the connotation most used by western media, i.e. meaning the Islamic State, al-Qaeda, et cetera.
2 Although it appears that these efforts were simply put into introducing backdoors into algorithms and implementations instead.
3 And not-so-great, but still necessary tools like OpenSSL.
4 We obviously distinguish between many more than that, but for this article, two will be enough.
5 This is, again, a gross simplification, but sufficient for this article.

How the AirBnB-App is tracking your location

AirBnB can be used to find rooms in other cities while you travel. For that purpose, it also offers an official Android application. As the app requests some dangerous permissions (location, contacts, …), I enabled the “privacy guard” feature of CyanogenMod right away, which blocks access to location and contacts and asks the user to confirm each access to one of these resources. Due to these prompts, I noticed that AirBnB requests your location a lot, including while the app is not active (in the background, but not terminated).

This made me curious, so I set up mitmproxy to take a look at the network traffic of the app. Fortunately for me (and unfortunately, in general), while it uses HTTPS to phone home, it does not implement certificate pinning, so it was trivial to get a dump of the requests and responses it sends and receives. And, as it turns out, AirBnB is indeed very curious.
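
Incidentally, this kind of check can be scripted: current versions of mitmproxy can load a small Python addon such as the sketch below, which flags requests carrying location-like parameters. The field names here are illustrative guesses, not the exact parameters the AirBnB app uses.

# location_sniffer.py -- run with: mitmproxy -s location_sniffer.py
from mitmproxy import http

# Illustrative guesses at parameter names; adjust to what you actually see.
SUSPICIOUS = ("lat", "lng", "latitude", "longitude", "location")

def request(flow: http.HTTPFlow) -> None:
    # Check both the query string and the request body for location-like fields.
    body = flow.request.get_text(strict=False) or ""
    if any(k in flow.request.query or k in body for k in SUSPICIOUS):
        print("location-like data sent to", flow.request.pretty_host)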

When is your location disclosed?

The app always sends your current location when it is started. In fact, a whole host of information is sent to AirBnB, including your GPS location with a precision of seven decimal places, your current city in human-readable form, your system language and OS version, the type of your device (phone, tablet), and even a bunch of settings you can presumably change if you are logged into your account on the website. Judging from the presence of an “is_logged_in” field, I assume that this information will be linked to your account if you are logged into the app (I was not).

The app will also send your GPS location if you search for offers and while it loads the offers in the “discover”-tab (where it will display some featured places and locations you could travel to). It has to be stressed that the location is not actually needed for any of this, it’s just AirBnB being curious and wanting the data for their analysis, I assume (they also use a bunch of other trackers, including Google Analytics, Newrelic, Flurry, and Facebook, but as far as I could find out, they do not disclose the location to these). There are probably a lot of additional cases where your location is sent to AirBnB, but I stopped here, mostly because I was not interested in sending them even more data.

The app also requests your current location every five minutes, but, as far as I can tell, does not send these periodic updates to the server.

What is your location used for?

That is the big question. As the data is not needed to answer your queries, I can only assume that they are using it for their analysis software. So, let’s take a look at their privacy policy:

“When you use certain features of the Platform, in particular our mobile applications we may receive, store and process different types of information about your location, including general information (e.g., IP address, zip code) and more specific information (e.g., GPS-based functionality on mobile devices used to access the Platform or specific features of the platform).”

Okay, interesting. Is there a way to opt out of this?

“If you access the Platform through a mobile device and you do not want your device to provide us with location-tracking information, you can disable the GPS or other location-tracking functions on your device, provided your device allows you to do this. See your device manufacturer’s instructions for further details.”

Oh. Okay. And for what, precisely, are you using the data?

We use and process Information about you for the following general purposes:

  1. to enable you to access and use the Platform;
  2. to operate, protect, improve and optimize the Platform, Airbnb’s business, and our users’ experience, such as to perform analytics, conduct research, and for advertising and marketing;
  3. to help create and maintain a trusted and safer environment on the Platform, such as fraud detection and prevention, conducting investigations and risk assessments, verifying the address of your listings, verifying any identifications provided by you, and conducting checks against databases such as public government databases;
  4. to send you service, support and administrative messages, reminders, technical notices, updates, security alerts, and information requested by you;
  5. where we have your consent, to send you marketing and promotional messages and other information that may be of interest to you, including information sent on behalf of our business partners that we think you may find interesting. You will be able to unsubscribe or opt-out from receiving these communications in your settings (in the “Account” section) when you login to your Airbnb account;
  6. to administer rewards, surveys, sweepstakes, contests, or other promotional activities or events sponsored or managed by Airbnb or our business partners; and
  7. to comply with our legal obligations, resolve any disputes that we may have with any of our users, and enforce our agreements with third parties.

So, basically, they reserve the right to do whatever they want with your data. Great.

Why is this bad?

Your current location is not their business (quite literally). They only offer one function that technically requires them to know your current location, and that is “accommodations around me”. In all other situations, your current location is not needed to serve your request, so it should not be disclosed to them. This is not some esoteric concept, this is basic privacy. Also, the best way to prevent the misuse of personal information is not to collect the information in the first place.

AirBnB’s reaction

I contacted the AirBnB-Support via Twitter and, later, via eMail. The response I got wasn’t very helpful:

The current location is requested in order to provide you rapidly with listings around your area whenever you go to search for a place. You should receive that request when starting it.

This may explain the periodical requests every five minutes, but does not explain why the information is sent to the server. AirBnB, if you are reading this, feel free to contact me or comment on this article.

Closing notes

AirBnB is probably not the only offender in this regard. It probably isn’t even the worst offender. I’m just using it to illustrate a growing trend among companies to collect everything, no matter if they need it. They may not misuse this information. They may even not use it at all. The problem is that I do not know what they are doing. And the hunger for more and more data, combined with the secrecy around what it is actually used for, makes me uncomfortable.

Howto: Running Tor on a Synology DiskStation

Note: All of these steps may no longer be necessary. Check out this comment for a software package for your DiskStation, if you trust a version of Tor you have not compiled yourself.

(Repost from my tumblr)

After a brief conversation with the Tor support, I tried to get Tor running on my Synology DiskStation 211j, and succeeded. I suppose the setup process will be similar on all DiskStations and possibly other BusyBox NAS systems, but I have only tried my own.

I assume you already know your way around your NAS: you have SSH enabled and secured (important!) and ipkg installed. You should also have basic Linux skills (editing files, creating directories, sudo / su, …), but you don’t need to be an expert (hell, I am mostly a newbie myself when it comes to Linux).

Please also be aware of the legal implications that come with running Tor. I am not responsible for anything that happens to you, your NAS, your network, your internet connection, your computer, your data, your cat, or anything else. Also, please note that while the following steps worked for me, they might not work for you, and chances are that I will be unable to assist you in any way. Use Google or whatever search engine you are comfortable with to find solutions.

A note on the ipkg version of Tor:

I have asked the Tor developers, and the version in ipkg is not official. It is also outdated, so please don’t use it. Compile Tor yourself instead.

Step one: Getting the Tor Source code

There are, as of April 2012, no precompiled ARM binaries available, so you will have to compile Tor yourself.

Go to https://www.torproject.org/download/download.html.en and download the source tarball (this is important: do not download any precompiled Linux package).

Copy it to your NAS in some way (via a network share, for example). Downloading the source tarball directly on the NAS was not possible for me, as it is only available via HTTPS, and my wget had no HTTPS support compiled in.
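If your desktop machine has scp available, something like the following should work; the username, hostname, and target path are placeholders, so adjust them to your setup:

scp name_of_source_tarball.tar.gz your_user@your_nas:/volume1/homes/your_user/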

Copy it to a location of your choice (your home folder, for example), and unpack it using:

tar xzf name_of_source_tarball.tar.gz

(remember you can autocomplete the filename with tab)

Step two: Checking the config

This step is easy. Just run “./configure” from the unpacked directory (you may have to “cd” into it first).

You will most likely get errors. Don’t freak out, that’s normal.

If it complains that you don’t have gcc installed, just run “sudo ipkg install gcc” and you should be fine.

Usually, it will tell you that it has found a shared library but is unable to use it, and that you can specify a new path using the “--with-[libraryname]-dir=path/to/library” switch.

Most of the libraries will be located at /opt/lib

If you are indeed missing a library completely, you can most likely install it using ipkg.

For example, I was missing the OpenSSL libraries. By running “sudo ipkg list | grep openssl”, I was able to locate the “openssl-dev” package that contained them. If you really can’t find a library, use a search engine to figure out how to get it.
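In command form (the package name may differ on your platform):

sudo ipkg list | grep openssl   # find the package that provides the libraries
sudo ipkg install openssl-dev   # install it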

Once you get the “./configure” command to run without errors, using the switches explained above, you can run “make” (installing it first, if you don’t have it already, using “sudo ipkg install make”).
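For illustration, my final invocation looked roughly like the following; the exact switches depend on which libraries configure complains about on your system, and /opt/lib is simply where my libraries lived:

./configure --with-openssl-dir=/opt/lib --with-libevent-dir=/opt/lib
make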

This will take a while (about 10 minutes for me). It should run without errors. If you encounter problems here, I will most likely not be able to help you, so use your friendly search engine again.

Step three: Preparing the system

First off, if you have not properly locked down your SSH, now is the time to do it. Use key files to log in, change the standard port, disallow root login, and so on. I will not go into details here; there are enough tutorials for that online.
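As a rough sketch, the relevant sshd_config directives look something like this (the port number is just an example):

# /etc/ssh/sshd_config (excerpt)
Port 2222                   # move SSH off the standard port
PermitRootLogin no          # disallow direct root login
PasswordAuthentication no   # allow key-based logins only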

Make sure all your software is up to date (“sudo ipkg update”, followed by “sudo ipkg upgrade”), and that your router’s firewall is blocking every port by default. Be a bit paranoid.

If you are done with that, run “sudo mkdir /root/.tor”, followed by “sudo chown -R [your_username] /root/.tor”. This will enable Tor to use the directory, as per the default configuration.

Alternatively, you could just run “sudo [path_to_tor_source_dir]/src/or/tor” and then, after a second, cancel it using ctrl+c. Tor should create all required directories. Now run it again without the “sudo” to get a list of all relevant directories that were created (it will complain that it has no access to them). “chown -R” all of them to your user, as described above.
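Condensed, that alternative flow looks like this (placeholders as above):

sudo [path_to_tor_source_dir]/src/or/tor    # let Tor create its directories, then cancel with ctrl+c
[path_to_tor_source_dir]/src/or/tor         # without sudo: complains about the directories it cannot access
sudo chown -R [your_username] /root/.tor    # hand those directories over to your user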

Step four: Preparing a torrc file

Tor should have created a folder called “tor” somewhere (for me, it was /opt/etc/tor). cd to that folder and edit torrc.sample (or it may already be called torrc, without the .sample).

Read through it carefully and consider your choices, then make your changes. Also, check that you have write access by changing something and saving. If that works, keep going. Otherwise, exit your editor and restart it using “sudo”.

Keep in mind that the “#” character signifies a comment, so make sure the relevant lines are uncommented.

The most important decisions you have to make are listed below (a minimal sample configuration follows the list):

  • SocksPort: Set to 0 if you only want to run a relay / exit node
  • Log configuration: It is useful to set a logfile for “notice” level logs. For example: “Log notice file /path/to/the/file/filename.txt”
  • RunAsDaemon: If you want Tor to keep running in the background after you terminate your SSH connection, set this to 1. In that case, it is important to set a log file, or you will be unable to find out what is going on inside Tor if there are any problems.
  • ORPort: Set this to some port and make sure that it is forwarded in your router. Only set this if you want to run a relay, bridge, or exit node.
  • Nickname: Set anything here to identify your node. Again, only set this if you want to run a relay, bridge, or exit node.
  • RelayBandwidthRate: Set if you want to limit traffic through your relay, bridge or exit node.
  • RelayBandwidthBurst: Same here
  • AccountingMax, AccountingStart: Same here
  • ContactInfo: Set this if you want the Tor team to be able to contact you, should something be wrong. Search engines index this, so spammers will find your email address eventually if you set it.
  • DirPort: If you want to mirror directory information, set this and make sure your Router forwards the port.
  • DirPortFrontPage: Specify an HTML document that should be displayed if someone browses to your IP on your DirPort. Totally optional.
  • MyFamily: Set the fingerprints of other Tor relays you are running here.
  • ExitPolicy: This is critical! If you want to only relay traffic (from Tor into Tor, as opposed to from Tor into the internet), set this to “reject *:*”. Otherwise, you can reject specific ports, for example BitTorrent, Usenet, and so on. If you choose to be an exit node, you will run into problems at some point, because people use Tor to do illegal stuff, and your IP will show up eventually. Consider this carefully. Running a non-exit relay is safe and very much helps the Tor network.
  • BridgeRelay: If you want to serve as a bridge, set this to 1.
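For illustration, here is a minimal torrc for a non-exit relay. The nickname, contact address, log path, and bandwidth values are placeholders, so replace them with your own:

SocksPort 0                                    # no local SOCKS proxy, relay only
Log notice file /opt/var/log/tor-notices.txt   # placeholder log path
RunAsDaemon 1                                  # keep running after the SSH session ends
ORPort 9001                                    # must be forwarded in your router
Nickname MyDiskStationRelay                    # placeholder nickname
RelayBandwidthRate 100 KBytes                  # optional traffic limits
RelayBandwidthBurst 200 KBytes
ContactInfo tor-admin@example.com              # placeholder address
ExitPolicy reject *:*                          # relay only, no exit traffic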

If you need your fingerprint to configure other relays, check “/root/.tor”.
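On my setup, Tor writes it to a file called “fingerprint” inside that directory:

cat /root/.tor/fingerprint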

Don’t forget to remove the “.sample” from the torrc file, if it was still there (“mv torrc.sample torrc”).

Step five: Running Tor for the first time (for real)

Run Tor using “[path_to_Tor_sources]/src/or/tor”. If the torrc file is not in the default directory, you can specify the path using “-f /path/to/file/torrc”.
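For example, with the paths from my setup (adjust both to your own):

[path_to_Tor_sources]/src/or/tor -f /opt/etc/tor/torrc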

If you have set RunAsDaemon to 1, check your log file. It should say “Self-testing indicates your ORPort is reachable from the outside”, as well as “Self-testing indicates your DirPort is reachable from the outside” (if you have enabled the respective ports). If there are problems, check your port forwarding, paths, and permissions.
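For example, assuming the log file you configured earlier:

grep -i "reachable" /path/to/the/file/filename.txt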

Step six: Killing Tor if it is running as a daemon

If you have set RunAsDaemon to 1 and need to kill Tor for some reason, run “ps | grep tor”, note the PID of the tor process, and run “sudo kill -SIGINT [tor_pid]”. It will take about 30 seconds for Tor to shut down gracefully. If you need to shut it down fast, without regard for the currently connected clients, you can use the kill command without the -SIGINT. Please try to avoid this.
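The whole sequence, with [tor_pid] standing in for the PID you noted:

ps | grep tor                  # find the PID of the tor process
sudo kill -SIGINT [tor_pid]    # graceful shutdown, takes about 30 seconds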

That’s it. You are now (hopefully) running a Tor relay, or at least have access to Tor, using your NAS as a proxy.

Some notes:

  • Subscribe to the tor-announce mailing list if you want to be notified of new Tor releases. Install them ASAP, as old versions may be insecure.
  • IMPORTANT: Read this page in the Tor documentation about improving the security of your Tor relay. There are many things you can do to make it harder for people to break into your machine. This is especially important if you are running an exit node.
  • If you want to browse the internet using Tor, use the Tor Browser Bundle instead of setting Tor as a proxy in your browser. The TBB contains a hardened version of Firefox with additional tracking and exploit protections that your regular browser does not have. Do not expect to be anonymous if you use your regular browser with a Tor proxy.

So, that’s it, this time for real. If you have any notes concerning the process, do not hesitate to comment.