Category Archives: Security / Vulnerabilities

Strictly passive, of course

Transport security in online banking, or: “Not my Department”

Lately, I have been trying to improve transport security (read: SSL settings and ciphers) for the online banking sites of the banks I am using. And, before you ask, yes, I enjoy fighting windmills.

Quick refresher on SSL / TLS before we continue: there are three things you can vary when choosing cipher suites (a concrete example follows the list):

  • Key Exchange: When connecting via SSL, both sides have to agree on the cryptographic keys used to encrypt the data. This happens in the key exchange. Examples: RSA, DHE, …
  • Encryption Cipher: The actual encryption happens using this cipher. Examples: RC4, AES, …
  • Message Authentication: The authenticity of encrypted messages is ensured using the algorithm selected here. Examples: SHA1, MD5, …
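
All three choices are encoded in the name of a cipher suite. If you have OpenSSL installed, you can take a suite apart yourself; a quick sketch (the exact output columns vary a bit between OpenSSL versions):

$ openssl ciphers -v 'ECDHE-RSA-AES128-SHA'
ECDHE-RSA-AES128-SHA  TLSv1  Kx=ECDH  Au=RSA  Enc=AES(128)  Mac=SHA1

Kx is the key exchange, Enc is the encryption cipher, and Mac is the message authentication: exactly the three knobs from the list above (Au is the authentication algorithm tied to the certificate).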

After the NSA revelations, I started checking the transport security of the websites I was using (the SSL test from SSLLabs / Qualys is a great help for that). I noticed that my bank, which I will keep anonymous to protect the guilty, was using RC4 to provide transport encryption. RC4 is considered somewhere between “weak” and “completely broken”, with people like Jacob Appelbaum claiming that the NSA is decrypting RC4 in real time.

Given that, RC4 seemed like a bad choice for a cipher. I wrote a message to the support team of my bank, and received a reply that they were working on replacing RC4 with something more sensible, which they did a few months later. But, for some reason, they still did not offer sensible key exchange algorithms, insisting on RSA.

There is nothing inherently wrong with RSA. It is very widely used and I know of no practical attacks on the implementation used by OpenSSL. But there is one problem when using RSA for key exchanges in SSL/TLS: The messages are not forward secret.

What is forward secrecy? Well, let’s say you do some online banking, and some jerk intercepts your traffic. He can’t read any of it (it’s encrypted), but he stores it for later, regardless. Then something like Heartbleed comes along, and the same jerk extracts the private SSL key from your bank.

If you were using RSA (or, generally, any algorithm without forward secrecy) for the key exchange, he will now be able to retroactively decrypt all the traffic he has previously stored, seeing everything you did, including your passwords.

However, there is a way to get around that: by using key exchange algorithms like ephemeral Diffie-Hellman (DHE), which create temporary encryption keys that are discarded after the connection is closed. These keys never go “over the wire”, meaning that the attacker cannot know them (unless he has compromised the server or your computer, in which case no amount of crypto will help you). This means that even if the attacker compromises the private key of the server, he will not be able to retroactively decrypt all your stuff.
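
You can even check for yourself whether a server is willing to do a forward-secret key exchange: offer it only (EC)DHE cipher suites and see whether the handshake succeeds. A quick sketch with OpenSSL's s_client (mybank.example is a stand-in for the real hostname; the second line is example output from a server that supports it, while an RSA-only server simply fails the handshake):

$ openssl s_client -connect mybank.example:443 -cipher 'ECDHE:DHE' < /dev/null | grep 'Cipher is'
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256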

So, why doesn’t everyone use this? Good question. Diffie-Hellman puts a slightly higher load on the server and makes the connection setup slightly slower, so very active sites may choose RSA to reduce the load on their servers. But I assume that in nine out of ten cases, people use RSA because they either don’t know any better or just don’t care. There is also the problem that some obscure guideline may require them to use only specific algorithms. And as guidelines are updated only rarely and generally don’t much care whether their algorithms have grown weak, companies may be left with the uncomfortable choice between compliance with guidelines and providing strong security, with non-compliance sometimes carrying hefty fines.

So, my bank actually referred to the guidelines of a German institution, the “Deutsche Kreditwirtschaft”, an organisation made up of a number of large German banking associations. Among other things, it worked on the standards for online banking in Germany.

So, what do these security guidelines have to say about transport security? Good question. I did some research and came up blank, so I contacted the press relations department and asked them. It took them a month to get back to me, but I finally received an answer. The security guidelines consist of exactly one thing: “Use at least SSLv3”. For non-crypto people, that’s basically like saying “please don’t send your letters in glass envelopes, but we don’t care if you close them with glue, a seal, or a piece of string.”

Worse, in response to my question whether they are planning to incorporate algorithms with forward secrecy into their guidelines, they stated that key management is the responsibility of the banks. This either means that they have no idea what forward secrecy is (the response was worded a bit hand-wavily), or that they do know what it is, but have no intention of even recommending it to their member banks.

This leaves us with the uncomfortable situation where the banks point to the guidelines when asked about their lacklustre cipher suites, and those who make the guidelines point back at the banks, saying “Not my department!”. In programming, you would call that a “circular dependency”.

So, how can this stalemate be broken? Well, I will write another message to my bank, telling them that while the guidelines do not include a recommendation of forward secrecy, they also do not forbid using it; so why would you use a key made of rubber bands and rocks if you could just use a proper steel key?

Don Quixote charging the windmills by Dave Winer is licensed CC BY-SA 2.0

And, of course, the more people do this, the more likely it is that the banks will actually listen to one of us…

Adventures in base64, or: How not to design your confirmation links

I recently applied for a room at a student housing complex. After finishing up my online application, I got a confirmation link. I clicked it, checked that everything was fine, and closed it. I then reopened it because the URL had caught my eye. Slightly modified for privacy:

https://[...]/xyz.html?ID=MTIzNHxEb2V8Sm9obnwxOTkwLTAxLTAxCg==

Now, if you’ve been playing around with “dd if=/dev/urandom | base64” like I have, you will recognize the patterns of a base64-encoded string in this URL. So, I quickly copied the ID and decoded it (as I said, I changed the ID around a bit to avoid disclosing personal information).

$ echo MTIzNHxEb2V8Sm9obnwxOTkwLTAxLTAxCg== | base64 -d
1234|Doe|John|1990-01-01

Sooo. Well, that’s not very secure, is it? Give me a confirmation link that I know how to build, where the only part I have to guess is probably an auto-incrementing ID, and I can work with that. But then again, why would I want to fake registrations to a student housing complex? Let’s keep digging.

$ echo "1235|Doe|John|1990-01-01" | base64
MTIzNXxEb2V8Sm9obnwxOTkwLTAxLTAxCg==

Let’s enter that changed ID into the URL, and…

Your Application could not be confirmed: The application was not found in the database (Applicant_ID:1235, Surname:Doe, Name:John, DateOfBirth:1990-01-01)

Oh well. Okay, let’s at least check for cross-site scripting (XSS); maybe we can find something…

$ echo "<script type="text/javascript">alert('xss');</script>|Herp|De Derp|1337-42-42" | base64
PHNjcmlwdCB0eXBlPSd0ZXh0L2phdmFzY3JpcHQnPmFsZXJ0KCd4c3MnKTs8L3NjcmlwdD58SGVycHxEZSBEZXJwfDEzMzctNDItNDIK

Enter this into the URL, and…

Your Application could not be confirmed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'text/javascript'>alert('xss');' and Surname='Herp' and DateOfBirth='1337-' at line 1

Yikes. At this point, I backed the hell off, because we have some pretty strict laws about this stuff in Germany. SQL injections are not to be taken lightly, so I quickly whipped up an email to the organization and explained the problem to them. I received a lot of silence, and after a few days, I called them up and asked whether they had received my mail. They had, and were working on it.

Almost two weeks passed, and the hole was still there, so I called again and asked for the person in charge of the homepage. A brief talk revealed that they had passed the issue on to their vendor, who was, apparently, not very good at providing delivery dates. I asked if I could release this blog entry, and they asked for more time, which I agreed to.

Today, I received an email stating that the hole had been fixed. I double-checked, and it was indeed fixed. I also received permission to release this blog entry. Apparently, the system was in use not only at this specific institution, but also at some other institutions (which have been notified about the need to update, I hope).

All in all, except for the long periods of waiting, it was a pleasant experience. The people at the institution were friendly and understood the dangers of the issue. If everyone reacted as quickly and professionally as they did, the web would probably be a more secure place. The vendor’s reaction time could be improved, though: taking almost a month to deliver a fix for a critical security issue that could be fixed by changing approximately one line of code is not acceptable.
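
For what it’s worth, the guessable-link half of the problem has a standard fix: don’t let the client see anything it could forge. Either store a long random token per application and put only that in the URL, or bind the applicant data to a secret that only the server knows. A minimal sketch of the latter, using an HMAC (the secret and the field layout here are made up for illustration):

# Hypothetical: derive an unforgeable token by MACing the applicant data
# with a server-side secret.
$ echo -n "1234|Doe|John|1990-01-01" | openssl dgst -sha256 -hmac 'some-server-secret'
# This prints "(stdin)= " followed by 64 hex characters. Append that MAC
# to the link and recompute it on every confirmation request; without the
# secret, nobody can build a valid link for applicant 1235.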

Thoughts on usability vs. security

Warning: This has become a wall of text, and is much less coherent than I wanted it to be. I hope I got my point across.

In the last few days, I had a lot of discussions about usability, security, and the conflict between them. One was in the “Issues” section of Mailpile, an upcoming FOSS webmail client with encryption support, among other things. Another was with a fellow participant of the “Night of Code” last Friday. Both were very civil, but at least in the second, it seems I did not get my point across properly. As I expect to discuss this again soon, I thought I’d write up my argument in a coherent fashion and put it here for everyone to see and discuss.

Usually, the topic comes up when I am discussing how to make mail encryption more user-friendly, as the current version of Enigmail is just plain horrible in that regard (sorry, devs, but it’s the truth). There are several improvements on the way that I know about, and as I am helping design some of them, I sometimes get into an argument with others about how a “thing” should be designed. And that is good, by the way. The only way to make something better is to talk about what’s wrong and how to improve it.

So, usually, the discussion goes somewhat like this: “Hey, let’s improve the process of checking fingerprints to make keysigning parties less horrible” – “Great Idea!” – “oh, yes, and since checking fingerprints is so much easier then, let’s notify people when they are trying to send a mail to an untrusted public key” – “Hmmm. You know, that could be really annoying for people…” – “Yeah, and let’s add a big red warning message that you have to work really hard to turn off so people start signing keys” – “yeah, well, that would be extremely annoying and make people just randomly start signing keys, destroying the Web of Trust system” – “then let’s make it really hard to sign keys without checking fingerprints” – “Noooooooooooooo” – and so on.

The problem, I think, is that the other person is trying to design for security with an added bonus of a bit of usability for nerds, while I am trying to design for regular people, accepting changes where security may suffer a bit (if you do not know what you are doing). I am always pushing people to adopt mail encryption, and usually the answer is “hell no, that’s way too complicated”. I’m even getting that reply from other CS students, by the way, which goes to show just how bad it is (or at least, how bad it is perceived to be).

So, from my point of view, if a person is using mail encryption in a way that is not completely secure against a sophisticated attack, that is still better than the person mailing completely unencrypted. Or, to put it another way: I’d rather have everyone mailing encrypted and 0.0001 % of them being hit with a targeted, sophisticated attack, than have everyone mailing unencrypted and having their mails read by everyone and their dog (and the NSA).

So, I’m arguing for the following: Make mail encryption as “out of my face” as possible. Make it possible to automatically download recipient keys. Make it easy, but not required, to validate keys. Make it easy to read your mail on any device, even if it is encrypted (this is not an issue with Enigmail but with the ecosystem of PGP apps in general). Make the usage as easy to understand as possible, and write error messages without any technobabble. You can always add a “details” button to show the technical details of what went wrong, but my grandma does not need to know “Error – No valid armored OpenPGP block found”; she only needs to know that something went wrong and that she cannot be sure the mail is actually signed, in a language she can understand.
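
Some of this already exists in the underlying tools; it is just not switched on for anyone by default. GnuPG, for example, can fetch a missing recipient key on the fly. A quick sketch (alice@example.com and message.txt are placeholders, and this assumes a keyserver is configured):

# If no matching key is found locally, GnuPG asks the keyserver for one
# before encrypting.
$ gpg --auto-key-locate local,keyserver --recipient alice@example.com --encrypt message.txt

A mail client that did this automatically, and explained failures in plain language, would already remove a big chunk of the friction I am talking about.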

This is hard. I know it is, because I did not manage to come up with an error message that fits the criteria outlined above. But the fact that it is hard should not keep us from trying to achieve it. We are the only people able to change this, and the rest of the population is depending on us to get this solved so they can use it, too.

Create all of those features outlined above. Turn them on by default. Make the wizard pop up on first launch (yes, there is a wizard; it’s even supposed to pop up, but it doesn’t for me). Make it really easy to use PGP. Then make it possible to turn those options off again, for people who know what they are doing and want a clean keyring. But build a system your parents could use, and your non-technical friends could use. Because they are sending their data around unencrypted. And sometimes they are sending your data around. Unencrypted.

Think about it.