Category Archives: Coding

Yeah, I’m doing that, too.

Transport security in online banking, or: “Not my Department”

Lately, I have been trying to improve transport security (read: SSL settings and ciphers) for the online banking sites of the banks I am using. And, before you ask, yes, I enjoy fighting windmills.

Quick refresher on SSL / TLS before we continue: There are three things you can vary when choosing cipher suites:

  • Key Exchange: When connecting via SSL, both sides have to agree on cryptographic keys to use for encrypting the data. This happens in the key exchange. Examples: RSA, DHE, …
  • Encryption Cipher: The actual encryption happens using this cipher. Examples: RC4, AES, …
  • Message Authentication: The authenticity of encrypted messages is ensured using the algorithm selected here. Examples: SHA1, MD5, …
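These three choices are exactly what an OpenSSL-style cipher suite name spells out. A quick illustration (the suite name is just a common example, and this naive split works only for simple names of this shape):

```python
# "ECDHE-RSA-AES128-GCM-SHA256" is a typical OpenSSL cipher suite name.
suite = "ECDHE-RSA-AES128-GCM-SHA256"
kex, auth, enc, mode, mac = suite.split("-")
print(kex)        # key exchange: ephemeral elliptic-curve Diffie-Hellman
print(auth)       # authentication: RSA certificate
print(enc, mode)  # encryption cipher: AES-128 in GCM mode
print(mac)        # message authentication / PRF: SHA-256
```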

After the NSA revelations, I started checking the transport security of the websites I was using (the SSL test from SSL Labs / Qualys is a great help for that). I noticed that my bank, which I will keep anonymous to protect the guilty, was using RC4 to provide transport encryption. RC4 is considered somewhere between “weak” and “completely broken”, with people like Jacob Appelbaum claiming that the NSA is decrypting RC4 in real time.

Given that, RC4 seemed like a bad choice for a cipher. I wrote a message to the support team of my bank, and received a reply that they were working on replacing RC4 with something more sensible, which they did a few months later. But, for some reason, they still did not offer sensible key exchange algorithms, insisting on RSA.

There is nothing inherently wrong with RSA. It is very widely used and I know of no practical attacks on the implementation used by OpenSSL. But there is one problem when using RSA for key exchanges in SSL/TLS: The messages are not forward secret.

What is forward secrecy? Well, let’s say you do some online banking, and some jerk intercepts your traffic. He can’t read any of it (it’s encrypted), but he stores it for later, regardless. Then something like Heartbleed comes along, and the same jerk extracts the private SSL key from your bank.

If you were using RSA (or, generally, any algorithm without forward secrecy) for the key exchange, he will now be able to retroactively decrypt all the traffic he has previously stored, seeing everything you did, including your passwords.

However, there is a way around that: using key exchange algorithms like Diffie-Hellman, which create temporary encryption keys that are discarded once the connection is closed. These keys never go “over the wire”, meaning that the attacker cannot know them (unless he has compromised the server or your computer, in which case no amount of crypto will help you). This means that even if the attacker compromises the private key of the server, he will not be able to retroactively decrypt all your stuff.
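As a client, you can even refuse non-forward-secret key exchanges outright. A minimal sketch in Python (the cipher string is standard OpenSSL syntax; note that on OpenSSL 1.1.1+ the TLS 1.3 suites, which are always forward secret, may be listed in addition):

```python
import ssl

# Allow only ephemeral (EC)DHE key exchanges and forbid unauthenticated ones.
ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE:DHE:!aNULL")
names = [c["name"] for c in ctx.get_ciphers()]
print(names[:3])
```

A server configured with only RSA key exchange would simply fail the handshake against such a client.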

So, why doesn’t everyone use this? Good question. Diffie-Hellman leads to a slightly higher load on the server and makes the connection setup slightly slower, so very active sites may choose RSA to reduce the load on their servers. But I assume that in nine out of ten cases, people use RSA because they either don’t know any better or just don’t care. There may also be the problem that some obscure guideline requires them to use only specific algorithms. And as guidelines update only rarely and generally don’t much care whether their algorithms are weak, companies may be left with the uncomfortable choice between compliance with guidelines and providing strong security, with non-compliance sometimes carrying hefty fines.

So, my bank actually referred to the guidelines of a German institution, the “Deutsche Kreditwirtschaft”, an organisation comprised of a bunch of large German banking institutions. Among other things, they worked on the standards for online banking in Germany.

So, what do these security guidelines have to say about transport security? Good question. I did some research and came up blank, so I contacted the press relations department and asked them. It took them a month to get back to me, but I finally received an answer. The security guidelines consist of exactly one thing: “Use at least SSLv3”. For non-crypto people, that’s basically like saying “please don’t send your letters in glass envelopes, but we don’t care if you close them with glue, a seal, or a piece of string.”

Worse, in response to my question whether they are planning to incorporate algorithms with forward secrecy into their guidelines, they stated that key management is the responsibility of the banks. This either means that they have no idea what forward secrecy is (the response was worded a bit hand-wavily), or that they actually do know what it is, but have no intention of even recommending it to their member banks.

This leaves us with the uncomfortable situation where the banks point to the guidelines when asked about their lacklustre cipher suites, and those who make the guidelines point back at the banks, saying “Not my department!”. In programming, you would call that a “circular dependency”.

So, how can this stalemate be broken? Well, I will write another message to my bank, telling them that while the guidelines do not include a recommendation of forward secrecy, they also do not forbid using it. So why would you use a key made of rubber bands and rocks if you could just use a proper steel key?

Don Quixote charging the windmills by Dave Winer is licensed CC BY-SA 2.0

And, of course, the more people do this, the more likely it is that the banks will actually listen to one of us…

Dynamically generating data types in Python

For a project I was working on recently, I needed to define a bunch of data types in Python. As defining 10 different data types with about the same functionality by hand would have been a pain, I decided to try something else.

I already had a bunch of nested dictionaries defining the fields and types of all datatypes, as I was generating SQLite statements from them. The general format looked something like this:

datatypes = {
    "type1": {
        "field1": {
            "type": "INTEGER",
            "notNull": True,
            "primaryKey": False,
            "autoIncrement": False,
            "default": None,
            "foreignKey": {
                "table": "type2",
                "field": "field1",
                "onDel": "RESTRICT",
                "onUpd": "RESTRICT"
            }
        },
        "field2": {
            # ...
        },
        # ...
    },
    # ...
}

Each outer dictionary (“type1”) defines a data type, and each inner dictionary (“field1”) defines a key-value pair of this data type, including information like “can this be empty?”. So, I already had everything I needed to define my data types.

What are the data types supposed to be able to do? I needed them to have immutable values (meaning that I needed only “getters”, no “setters” outside the constructor). So, how can I create class definitions from this block of definitions?

Easy. I generate a long string containing all definitions I need and run it through exec, a function I usually tend to avoid like the plague (note that this assumes that the class definition is safe and cannot be changed by others, meaning that you don’t have to worry about the security ramifications of using exec).
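To illustrate the idea, here is a stripped-down sketch (the field definitions are invented for illustration, and the real generator handles far more than plain getters):

```python
# A tiny definition block in the same spirit as the one above.
datatypes = {"Point": {"x": {"type": "INTEGER"}, "y": {"type": "INTEGER"}}}

# Build one long string of class definitions...
code = ""
for name, fields in datatypes.items():
    code += f"class {name}:\n"
    code += f"    def __init__(self, {', '.join(fields)}):\n"
    for f in fields:
        code += f"        self._{f} = {f}\n"
    for f in fields:
        code += f"    def get_{f}(self):\n        return self._{f}\n"

# ...and run it through exec. The definitions are trusted, see above.
namespace = {}
exec(code, namespace)

p = namespace["Point"](1, 2)
print(p.get_x(), p.get_y())  # -> 1 2
```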

The code for the generation itself only clocks in at 55 lines, including comments. For easier reading, I put it in a gist.

As you can see, the code will generate a bunch of definitions for each data type, including getters and a checkRep function that can be used to verify the consistency of the data. So far, it does not check foreign key constraints, as they are quite annoying to check and are enforced in the database backend anyway.

So, why is this awesome?

  • You don’t have to manually write out all your datatypes. Instead, you define them once and then generate them automatically.
  • Even large changes to datatypes are quick and easy, by just changing the definition in one place.
  • You can generate other things like DB interfaces and definitions as well, all from the same source definition.

And what are the problems with this approach?

  • You are using exec, meaning that if someone untrusted gains access to your definitions, they can do evil things to your code.
  • Code coverage tools don’t play well with it.

For a full example of the code in use, you can check out my (abandoned) InvoiceManager-Project on GitHub. There, I also generate database definitions, database validation code, database interfaces, and unittests for all of these things, all from the same source definition.

Let me know what you think.

Introducing the SMTP GPG Proxy

I frequently encounter software that allows me to send mails, but has no GPG support out of the box (sometimes not even using plugins). This annoys me greatly, especially if it is software like FusionInvoice, which may transport sensitive information in its mail messages. Since FusionInvoice (and many other programs) support SMTP for sending their mail, and since I had a few spare hours, I decided to see if I could hack something together to add GPG support to those programs. And the result was…

…the SMTP GPG Proxy

The SMTP GPG Proxy, besides having an awful name (name proposals welcome), is a Python program. It provides an SMTP server that accepts incoming mail messages, encrypts / signs them according to its settings and magic strings in the mail subject, and then forwards them to the upstream SMTP server.

Since Python’s basic smtpd module does not support encrypted connections, I used the modified “secure-smtpd” module by bcoe. It extends the basic smtpd with support for SSL-encrypted connections while providing an almost identical interface. For the encryption itself, I used the standard “python-gnupg” wrapper, which isn’t ideal but gets the job done most of the time.

Setup

Setting up the SMTP GPG Proxy is quite easy. Just grab the latest version from the GitHub repository, install the dependencies, rename config.py.example to config.py and fill in the settings (everything should be documented in there), then launch the main program. Next, point your SMTP-speaking program at the IP and port you just configured (it is highly recommended to do this via localhost only, as incoming connections into the proxy are, as of right now, not encrypted), and mail away.

Usage

To get the SMTP proxy to encrypt a message, just send the mail and add the KeyIDs (including the “0x”) to the subject line, separated by whitespace. They will be automatically parsed and removed from the subject, so if you want to send a message with the subject “Invoice #23”, encrypted with 0x12345678 and 0x13374242, you would choose the subject “Invoice #23 0x12345678 0x13374242”. KeyIDs can be in short (8 characters) or long (16 characters) form, as well as full fingerprints (without whitespace and prefixed by “0x”).
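The parsing boils down to something like the following sketch (not the actual project code, just an illustration of the rule described above):

```python
import re

# Match 0x-prefixed key IDs: 8 or 16 hex digits, or a 40-digit fingerprint.
# The longest alternative comes first so fingerprints win over short IDs.
KEYID_RE = re.compile(r"0x(?:[0-9A-Fa-f]{40}|[0-9A-Fa-f]{16}|[0-9A-Fa-f]{8})\b")

def parse_subject(subject):
    key_ids = KEYID_RE.findall(subject)           # collect the key IDs...
    cleaned = KEYID_RE.sub("", subject)           # ...and strip them out
    return " ".join(cleaned.split()), key_ids     # normalise whitespace

print(parse_subject("Invoice #23 0x12345678 0x13374242"))
# -> ('Invoice #23', ['0x12345678', '0x13374242'])
```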

Depending on the settings, missing public keys will either lead to the message being rejected, sent unencrypted, or keyservers may be polled before rejecting or sending unencrypted if no public keys are found. You can also configure the program to GPG-sign all messages, or only encrypted messages, or no messages at all.

Development status

The program is currently in alpha, but it works very well for me. Still, as of right now there are some open issues with it, which I may or may not be working on. If you set up everything correctly, you should not encounter any problems; it is edge cases like incorrect SMTP passwords that are currently not handled very well.

Roadmap

If I find the time, I will keep developing the program, removing bugs, making it more stable, and adding more features like opportunistic encryption. However, I may not have the time to fully fix everything, and bugs that are annoying me will obviously be fixed faster than those I will never encounter in my usage.

However, as the program is open source and on GitHub, feel free to fork and submit pull requests. The code is, as of right now, shamefully undocumented, but as it has only about 200 lines, it should still be fairly easy to understand.

License

Like almost all my projects, I am releasing this program under the BSD 2-Clause License.

A case study in bad design: PHP Generator for MySQL

Welcome to the second installment of the “case study in bad design” series, where I talk about generally horrible design in code, security or user experience. Today’s subject is the PHP Generator for MySQL software by SQL Maestro (whose website will present you with a self-signed certificate for *.magicshoes.net if you try to access it via SSL, so you at least have to give them credit for creativity in that area).

PHP Generator for MySQL is a tool that allows non-programmers to create web frontends for their MySQL databases. It does a comparatively good job and provides some decent options, although the UI is somewhat cluttered and unintuitive, and the error reporting is in places nonexistent. I was required to use it (as opposed to writing something myself) during my last employment with an institute at my university.

The story begins in July 2012, when I noticed that the code generated by PHP Generator had multiple cross-site scripting vulnerabilities, allowing me to steal the login cookie (which, for good measure, contained the password in clear text, even though it was stored as a hash in the database). I cursed, wrote up some proof-of-concept code and reported the vulnerabilities to the devs.

A few weeks later, a new version of PHP Generator was released, fixing one of the two cross-site scripting holes I had reported. They never responded to my mail and never fixed the second hole. So, almost a year to the day later, I sent a follow-up mail, reminding them of the holes I had reported, reporting another one, and setting a deadline of two weeks, after which I would apply for a CVE and publish the vulnerability. That got their attention: they responded within a day and got a new build out a few days later, fixing the vulnerabilities (and refusing to credit me in the changelog for reporting these issues, but hey, whatever).

A few days ago, I took another pass at the code, found another vulnerability (HTML stored in the database was evaluated when displayed on the website), and complained that they were now using unsalted password hashes for authentication in the cookies (instead of session IDs completely unrelated to the password, which would be better practice). After past experiences, I set a deadline of a week for a reply. Once again, they replied within a day.

Apparently, evaluating HTML from the database was a feature, not a bug. A feature that was on by default and could be disabled on a “per-input” basis. Who thought that was a good idea? Every “feature” that opens up the possibility of a security hole as big as stored XSS should either be removed completely or be off by default, to be enabled manually and with a big message box warning about the security implications. To make matters worse, the state of this setting does not seem to be saved in the project file, leading to compatibility problems if the default value were changed (and I have no idea how they would make this state persist over restarts of the program if they save the setting nowhere…).

As for proper session management, they claim to be working on something. They may also add salted hashes, but have not fully committed to that, citing possible compatibility issues.
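For reference, generating a session identifier that has no relation whatsoever to the password is a one-liner in most languages; a Python sketch:

```python
import secrets

# A random, unguessable session ID. Store it server-side next to the user
# record and put only this value in the cookie, never password material.
session_id = secrets.token_urlsafe(32)
print(session_id)
```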

They closed their mail with a statement that blew my mind:

By the way, we have never received any security related complaints from other PHP Generator users, so probably there is no real threat.

I’m not going to comment further on this statement, as anyone with at least a rudimentary understanding of security should be able to see what is wrong with this.

PHP Generator for MySQL starts at $99 for a single, non-commercial license without upgrades. For that price, I would expect a little more interest in the security of their customers.

A case study in bad design. Today’s subject: the Deutsche Bahn

Welcome to the first installment of the new series “a case study in bad design”, which will probably be an ongoing series unless something very surprising happens (namely, people no longer writing horrible code).

Today’s subject is the homepage of the German railway corporation, the Deutsche Bahn. (And just setting this link revealed another faux pas, namely that their SSL certificate is only valid for www.bahn.de, not for bahn.de without the leading www.)

Everything began with my mother (which is, in a way, not surprising at all, but in this case I am not talking about my birth). She regularly takes the train to Bremen, and had set up a profile with her usual passenger settings (economy or first class, which kind of seat, that sort of thing) in her bahn.de account. She noticed her profile repeatedly disappearing, which, at some point, made her angry enough to write a mail to the people responsible for the website.

They responded with some seemingly senseless information about her browser not allowing cookies. But no sensible person would store those presets in a cookie when there is a perfectly good user account to store them in, so that was obviously bullshit, right?

Right?

(Checks title of blog post) ah, damn.

A quick check turned up that the travel profile was indeed stored in a cookie. Which would have been bad enough as it was, considering that this profile…

  1. …would be nice to have on more than one PC without setting it up separately
  2. …is user-specific and, subsequently, has no business being in a cookie instead of a database in the first place
  3. …is something that does not change all too often, which makes putting it into a cookie even more stupid than it already is

Well, we now know that this information is stored in a cookie. Then again, that does not explain its random disappearance. That is, until you check the cookie’s metadata.

Yes, the cookie is valid for a whopping 10 days! This means that every time you don’t visit bahn.de for 10 or more days, you lose all your preset profiles. Who exactly thought that this was a good idea? Because that person was wrong. As an added bonus, the cookie is not deleted when you log out, so if you, for some reason, create a profile on a public computer, you are leaking your travelling preferences (probably not a big deal, but completely unnecessary).

But, while we’re at it, let’s play around with that cookie. Maybe we can find some Cross-Site scripting (considering all the places I have already found it, it would not surprise me to find it here). Quickly add some quotation marks, just to see what happens, aaaaand…

Obviously.

A quick trip to the source code (a mere 3000 lines of horribly indented HTML and JavaScript) reveals a bunch of JavaScript imports. A glance at the *.min.js files, followed by a curse, followed by the awesomeness that is http://jsbeautifier.org/, revealed somewhat readable JavaScript code, containing gems like “b && (a = b)” (a shorthand for “if (b) a = b;”, as it seems) and wonderful “for”-statements like the following:
for (var f = c, c = d, e = void 0, e = void 0, g = [], f = f.substring(4); 0 < f.length && 0 != f.indexOf("]#");)
Apparently, separating statements in a conditional with commas makes them evaluate one after another, and the last statement’s result is used to check whether the condition of the conditional is fulfilled. I especially love the double assignment of e = void 0.

I will not torture you (or me) with the whole >12 000 lines of code, but rest assured that it does not get better. In the end, I gave up on finding the cause of the livelock that occurred after my modifications to the cookie, seeing as I am not likely to be paid for this crap and my pain resistance is not high enough for this single-letter-variable bullshit. I’ll notify them about it anyway, although I am not sure what (if anything) will come of it.

Review: The UNIX-HATERS handbook

Note: I have stopped releasing my book reviews on this blog, as I want to put it on a more technical track. But since this is a technical book, I’ll post this review anyway.

The UNIX-Haters Handbook by Simson Garfinkel

My rating: 4 of 5 stars

This book was, among other things, a good history lesson. I learned more about the history of UNIX and computers / computer science from this book than in the past three years of studying computer science. It also made me aware of some rather horrible design choices in both the old UNIXes and the modern Linux, to some extent.

The hating on UNIX going on in this book is written in a highly amusing way, and I found myself chuckling at finding the things that annoy me today in this book from 1994, almost 20 years ago. Apparently, no one was interested in fixing inconsistencies between programs, yet another proof of the theory that, by releasing a program, you turn a temporary design choice into a standard (although this still does not explain the discrepancies between git commit -S and git tag -s, if you know what I mean).

All in all, I would recommend this book to people interested in the history of UNIX and bad design choices.


Introducing IssueBot, a Jabber MUC notification bot for GitHub

As part of a development team working on improving Enigmail, I recently found myself in need of a bot that sends notifications to a Jabber multi-user chat (MUC) when the issues of a GitHub project receive an update (we already had a similar program for new commits in place, called commitbot). I searched for a while and did not find anything that fulfilled our requirements.

So, being a CS student and all that, I decided to write one.

Introducing IssueBot

IssueBot is a Python / Twisted bot that uses the GitHub API to fetch information about the issues of a GitHub project. Those are then sent to a Jabber MUC. It can monitor multiple repositories at the same time, authenticate itself using an OAuth token (if you generate one manually) to increase the rate limit on the GitHub API, and will generate notifications under the following conditions:

  • New issue
  • Issue closed
  • One of the following has changed about the issue:
    • Title
    • State (open -> closed or vice versa)
    • Assignee
    • New comments

Development is still actively going on, with new features being planned and bugs being fixed, so keep an eye on the GitHub repository.

Configuration is done using a .tac file (an example file is provided with the program). Just update the variables in it and you should be good to go. Instructions on how to use it can be found in the README.

As parts of the code are derived from the aforementioned commitbot, the code is licensed under the GNU GPLv3. Feel free to fork, improve and send pull requests on the project page on GitHub.

The making of IssueBot

As it turns out, it is actually really easy to query the GitHub API. You send a request to a specific URL (for example, https://api.github.com/repos/octocat/hello-world/issues) and get a response with some JSON and some headers telling you how many requests you have remaining for the current hour (the API limit for unauthenticated users is 60 requests per hour).

Since it is pretty easy to do this in Python, and it has some nice support for JSON built-in using the standard json module, it was pretty easy to query the API, parse the result into a Python dictionary, and parse that into the local database. Then, the changes could be determined and notification messages generated. The only thing missing was the interface to Jabber.
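The core of that loop boils down to something like this sketch (simplified, not IssueBot’s actual code, and using a canned response instead of a live request so as not to burn API quota):

```python
import json

# Build the API endpoint for a repository's issues (the real bot also sends
# the OAuth token and inspects the X-RateLimit-Remaining response header).
def issues_url(owner, repo):
    return f"https://api.github.com/repos/{owner}/{repo}/issues"

# Reduce the JSON the API returns to the fields a notification needs.
def summarize(raw_json):
    return [(i["number"], i["state"], i["title"]) for i in json.loads(raw_json)]

# A trimmed-down sample of what the API returns.
sample = '[{"number": 1, "state": "open", "title": "Found a bug"}]'
print(issues_url("octocat", "hello-world"))
print(summarize(sample))  # -> [(1, 'open', 'Found a bug')]
```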

For that, I decided to reuse some code from the commitbot project. This blew up the list of dependencies, but made it possible to work with Jabber MUCs somewhat painlessly. As an added bonus, one of the dependencies, Twisted, daemonizes the process automatically, saving me the trouble.

In the end, it took me about three hours to hack together the current version of the program. Most of the time was spent trying to figure out how to get Twisted to work with my main loop, which was actually non-trivial until I stumbled upon the LoopingCall helper provided by twisted.internet.task. As Twisted has some rather… interesting views on how it should be used, and my use case did not quite fit into that pattern, I found documentation on how to use it hard to find.

The program is now happily running on my server, spitting out the occasional notification into our chat, and works like a charm.

Adventures in base64, or: How not to design your confirmation links

I recently applied for a room at a student housing complex. After finishing up my online application, I got a confirmation link. I clicked it, checked that everything was fine, and closed it. I then reopened it because the URL had caught my eye. Slightly modified for privacy:

https://[...]/xyz.html?ID=MTIzNHxEb2V8Sm9obnwxOTkwLTAxLTAxCg==

Now, if you’ve been playing around with “dd if=/dev/urandom | base64” like I have, you will see the patterns that imply a base64 encoded string in this URL. So, I quickly copied the ID and decoded it (as I said, I changed the ID around a bit to avoid disclosing personal information).

$ echo MTIzNHxEb2V8Sm9obnwxOTkwLTAxLTAxCg== | base64 -d
1234|Doe|John|1990-01-01

Sooo. Well, that’s not very secure, is it? Give me a confirmation link which I know how to build and only have to guess one part of, which is probably automatically incrementing. I can work with that. But then again, why would I want to fake registrations to a student housing complex? Let’s keep digging.

$ echo "1235|Doe|John|1990-01-01" | base64
MTIzNXxEb2V8Sm9obnwxOTkwLTAxLTAxCg==

Let’s enter that changed ID into the URL, and…

Your Application could not be confirmed: The application was not found in the database (Applicant_ID:1235, Surname:Doe, Name:John, DateOfBirth:1990-01-01)

Oh well. Okay, let’s at least check for Cross-Site scripting, maybe we can find something…

$ echo "<script type='text/javascript'>alert('xss');</script>|Herp|De Derp|1337-42-42" | base64
PHNjcmlwdCB0eXBlPSd0ZXh0L2phdmFzY3JpcHQnPmFsZXJ0KCd4c3MnKTs8L3NjcmlwdD58SGVycHxEZSBEZXJwfDEzMzctNDItNDIK

Enter this into the URL, and…

Your Application could not be confirmed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'text/javascript'>alert('xss');' and Surname='Herp' and DateOfBirth='1337-' at line 1

Yikes. At this point, I backed the hell off, because we have some pretty strict laws about this stuff in Germany. SQL injections are not to be taken lightly, so I quickly whipped up an email to the organization and explained the problem to them. I received a lot of silence, and after a few days, I called them up and asked if they had received my mail. They had, and were working on it.

Almost two weeks passed, and the hole was still there, so I called again and asked for the person in charge of the homepage. A brief talk revealed that they had passed the issue on to their vendor, who was, apparently, not very good at providing delivery dates. I asked if I could release this blog entry, and they asked for more time, which I agreed to.

Today, I received an email stating that the hole had been fixed. I double-checked, and apparently, it was indeed fixed. I also received permission to release this blog entry. Apparently, the system was in use not only at this specific institution, but also at some other institutions (who have been notified about the need to update, I hope).

All in all, except for the long periods of waiting, it was a pleasant experience. The people at the institution were friendly and understood the dangers of the issue. If everyone reacted as quickly and professionally as they did, the web would probably be a more secure place. The vendor’s reaction time could be improved, though: taking almost a month to deliver a fix for a critical security issue that could be corrected by changing approximately one line of code is not acceptable.

Drinking games in the Vatican: Implementing the SSH RandomArt algorithm in JavaScript

I recently read an article about the algorithm used to generate the RandomArt images used for SSH keys (the “drunken bishop” algorithm). I looked around for a bit and did not find an implementation in JavaScript, so I quickly hacked one together.

The algorithm is explained really well in the article I linked above, so I’ll not go over it again. The algorithm is pretty easy to implement, the main parts of this algorithm are:

  • Representing the field the bishop moves on (easy enough with a 2D array of integers)
  • Converting Hex to binary (surprisingly, there is no builtin function for that, but there is a good answer on StackOverflow to that)
  • Isolating the bit-pairs (by iterating through two-character substrings of the words)
  • Checking if moves are valid (by checking if the target coordinates are within the limits of the array)
  • Showing the results (by writing them into a div with a monospaced font set)
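For comparison, here is a sketch of the same core steps in Python (my understanding of the algorithm, not the JS code from the gist; the field is 17×9 as in OpenSSH, and bit pairs are read least-significant-first within each byte):

```python
# "Drunken bishop": each bit pair moves the bishop one step diagonally
# (bit 0: left/right, bit 1: up/down); moves are clamped to the field,
# and every visited cell's counter is incremented.
def drunken_bishop(fingerprint_hex, width=17, height=9):
    field = [[0] * width for _ in range(height)]
    x, y = width // 2, height // 2          # the bishop starts in the centre
    for byte in bytes.fromhex(fingerprint_hex.replace(":", "")):
        for _ in range(4):                  # four bit pairs per byte
            dx = 1 if byte & 1 else -1
            dy = 1 if byte & 2 else -1
            x = min(max(x + dx, 0), width - 1)   # invalid moves are clamped
            y = min(max(y + dy, 0), height - 1)
            field[y][x] += 1
            byte >>= 2
    return field

# Example 16-byte fingerprint (the one from the original drunken bishop paper).
field = drunken_bishop("fc94b0c1e5b0987c5843997697ee9fb7")
print(len(field), len(field[0]))  # -> 9 17
```

Rendering then just maps each counter to a character and writes the rows into a monospaced div, as described above.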

So, all in all, it was mostly an exercise in JavaScript and nothing really fancy, but it was fun to implement it, debug it (too much Python, forgetting semicolons all over the place), get the whitespace to work properly (there are about one million kinds of white space, and apparently only one does what I want), and finally test it.

No effort was made to keep this code simple, efficient, or beautiful to look at. It was a proof of concept and some exercise for my rusty JavaScript skills, mixed with a general interest on how the RandomArt algorithm works.

You can take a look at the code in action on JSFiddle, or just look at the horrible code in the gist on GitHub and make fun of me in the comments.

Results from the unofficial Enigmail “Night of Code”

Yesterday, we had a small “Night of Code” in Hamburg. Basically, five hackers met up in the rooms of the CCC Hamburg and tried to improve Enigmail, the Thunderbird extension for PGP-encrypting mails. It was a hell of a lot of fun, and we actually made quite a bit of progress on several improvements.

It all started with a discussion on the mailing list of the computer science department of the University of Hamburg. We had a lengthy discussion on what is wrong with Enigmail and PGP, and some of us decided to do something about it. Someone organized a room, called for a “Night of Code”, and a few people responded, me among them.

We started with a short introduction on the architecture of Enigmail and what the important files are. Afterwards, we discussed what needed improvements (the consensus being “basically everything about the UI”) and everyone chose one of the proposed improvements and started working.

I don’t want to spoil the surprise about what the others have been working on (although all of it will come in pretty handy once it is finished and hopefully merged into the main project), but I can say a bit about what I worked on.

So, one of the important things when using PGP is to manage your Web of Trust. This includes the signing of the keys of other people (after you validated that they are, in fact, the person they are saying they are). For that purpose, there are Key-signing parties. And one of the major annoyances about those parties is the distribution of freshly signed keys.

On Linux, there is a neat command line tool called caff. It takes any number of KeyIDs, downloads the public keys, signs each ID separately and mails it (encrypted) to the provided email address. The problem is that caff is pretty annoying to set up, and only works on Linux.

The feature I am working on is something along those lines. I added a new checkbox for signing keys…

The second checkbox is new, in case you were wondering. And the keys are totally legit, I checked. 😉

If you select the checkbox and sign a key (and no error occurs during the signing process), a new Message composition window will open:

The new Message

It will contain some sort of preset text and have the signed public key attached.

Now, this is all working great already, but there are still some things to do:

  • Save the last decision on whether to mail the key or not (currently, due to some weird behaviour of the Enigmail preferences function that I still need to figure out, it is not saved)
  • Automatically set the mail to be encrypted and signed, regardless of the settings.
  • Perhaps encrypt the public key before attaching it, to make sure the recipient needs his private key to get the new signatures?
  • Perhaps choose the sending account based on the private key that was used to sign the public key?

Now, the experience to work on Enigmail has been interesting and somewhat cool, but not without its problems.

  • To say the documentation of Enigmail is bad would be misleading, as it implies that there actually is documentation, which is not the case. Everything you want to use, you need to figure out yourself, possibly by running the addon with debug output active and seeing which functions are used in what order.
  • Thunderbird isn’t much better. Many of the important functions (adding an attachment!) had to be reverse engineered from other addons or from the very helpful thunderbird-stdlib project on GitHub, as the documentation has some pretty big holes in significant places.

If you are an Enigmail dev and reading this: Please provide at least some documentation on what is done where in the code, and what APIs can be used for new features. I know you probably understand the code, but it makes the entry barrier for new devs very high.

If you are a Thunderbird dev: See above. The current docs are not enough, and the function names are in parts weird enough to make it almost impossible to find out how to actually use them without checking the source files, which takes time and is extremely annoying.

All in all, I enjoyed my time hacking on Enigmail. But it could have been a lot more productive if there were some form of documentation one could use. As for the new feature: I will try to get it to work properly and then submit a patch to the devs, but I do not know how long that will take, as my time is currently pretty limited because of other things I need to take care of (my bachelor’s thesis among them).

As for the others: I don’t know when their features will be finished, but we already have a bunch of ideas on what to do next, and if we find the time, we’ll create some more new features. Some of our ideas have the potential to vastly increase usability, so I am very curious as to the reactions of the devs. Let’s hope for the best.