Category Archives: Security

SSL certs crash without trust

Eddy Nigg found out, and blogged about, how he could buy SSL certificates for a domain he clearly neither owns nor controls. The cert is certified by Comodo, who apparently has outsourced (parts of) their cert business to a separate company that obviously does very little or perhaps no verification at all of the buyers.

As a result, buyers could get certificates from there for just about any domain/site name, and with Comodo being a trusted CA in at least Firefox, this makes it a lot easier for phishers and other cyber-style criminals to set up fraudulent sites that even get the padlock in Firefox and look almost perfectly legitimate!

The question now is what Mozilla should do, what Firefox users should expect their browser to do when HTTPS sites use Comodo-verified certs, and how Comodo and their resellers are going to deal with everything…

Read the scary thread on the mozilla dev-tech-crypto list.

Update: if you’re on the paranoid/safe side you can disable trusting their certificates by doing this:

Select Preferences -> Advanced -> View Certificates -> Authorities. Search for
AddTrust AB -> AddTrust External CA Root and click “Edit”. Remove all Flags.
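
If you would rather do it from the command line, the certutil tool from NSS should be able to do the same thing against your Firefox profile. A sketch, assuming the nickname matches what certutil -L lists for your profile (and where “yourprofile” is your actual profile directory):

certutil -M -n "Builtin Object Token:AddTrust External CA Root" -t ",," -d ~/.mozilla/firefox/yourprofile

The empty trust string ",," clears all trust flags for the cert.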

Snooping on government HTTPS

As was reported by some Swedish bloggers, and as I found out thanks to kryptoblog, it seems the members of the Swedish parliament all access the internet via an HTTP proxy. And not only that: they seem to access HTTPS sites using the same proxy, and while a lot of the netizens of the world do this, the members of the Swedish parliament have an IT department that is more big-brotherish than most. They decided they “needed” to snoop on the network traffic even for HTTPS connections – and how do you accomplish this, you may ask?

Simple! The proxy simply terminates the SSL connection, fetches the remote HTTPS document, and generates at run-time a “faked” SSL cert for the peer, signed by a CA that the client trusts, and then delivers that to the client. This does require that the client has a CA cert installed locally that makes it trust certificates signed by the “faked” CA, but I figure the parliament’s IT department “helps” its users get that installed.
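
If you suspect you’re sitting behind such a proxy, one way to find out is to check who actually issued the certificate you get served. Assuming you have openssl around, something like this prints the issuer of the cert a site hands you:

openssl s_client -connect www.example.com:443 < /dev/null 2> /dev/null | openssl x509 -noout -issuer

If the issuer turns out to be your IT department’s own CA rather than the public CA that signed the site’s real cert, your HTTPS is being terminated and re-signed along the way.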

Not only does this let every IT admin there snoop on user names and passwords and so on, it also allows for Man-In-The-Middle attacks big-time, as I assume the users will still be allowed to go to HTTPS sites using self-signed certificates – but they probably won’t even know it!

The motivation for this weird and intrusive idea seems to be that they want to scan the traffic for viruses and other malware.

If I were a member of the Swedish parliament I would be really upset, I would uninstall the custom CA, and I would seriously consider accessing the internet using an ssh tunnel or similar. But somehow I doubt that many of them care, and the rest of them won’t be capable of taking counter-measures against this.
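
Setting up such a tunnel is not hard either. Assuming you have a shell account on a machine you trust outside the proxy (the user and host names here are of course just placeholders), ssh can act as a local SOCKS proxy:

ssh -N -D 1080 you@trusted-host.example.com

Then point the browser’s SOCKS proxy setting at localhost port 1080 and the traffic travels encrypted all the way to the trusted machine instead of through the snooping proxy.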

In the middle there is a man

The other day an interesting bug report was posted against the Firefox browser, and it caused some interesting discussions and blog posts on the subject of Man-In-The-Middle attacks and how current browsers make it (too?) easy to accept self-signed certificates, so that users are easily misled. (Peter Burkholder wrote a great piece on SSL MITMing back in 2002 which goes into detail on how this can be done.)

The entire issue essentially boils down to this:

To be able to really know that you’re communicating with the true remote site (and not an impostor), you must have some kind of verification system.

In SSL land we have this system with CA certs for verifying certs, and it works pretty well in most cases (I think). However, many sites on the internet today use HTTPS without having the certificate signed by a party already known to the browser – most of them are so-called self-signed, which means nobody else guarantees that they are who they claim to be; they only vouch for themselves.
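
To show how little a self-signed certificate actually proves: anyone can make one for any host name in a few seconds with a stock openssl. A sketch (the host name is obviously made up):

openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365 -subj "/CN=www.yourbank.example"

Nothing in that process ties the certificate to the true owner of the name – only a signature from a party the browser already trusts can do that.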

All current modern browsers want to give users easy access to HTTP sites and to HTTPS sites with valid, properly-signed certs, but also to let users connect to and browse HTTPS sites with self-signed certs. And here comes the problem: how do we tell users that HTTPS with a self-signed cert is very insecure but still let them proceed? How do we tell them that they may proceed, but that if this is a well-known popular site they really should expect a true and valid certificate, since otherwise it is quite possibly a MITM they’re seeing?

People are so used to just accepting exceptions and clicking away nagging pop-ups that the warnings and alerts – explicit and implied – in the prompts you have to go through to accept a self-signed certificate don’t seem to have much effect. As can be seen in this bug report, accepting an impostor’s certificate for a large known site is far too easy.

In SSH land, however, we don’t have the CA cert system and top-down trust hierarchy that SSL/TLS imposes. But does this matter? I’d say no, as most if not all users still don’t reflect much on it when a server’s host key is reported as different from the one used before. Or when connecting to a host for the first time, they accept the host key without trying to verify it over a different channel. Thus they’re subject to pretty much the same MITM risk. The difference is perhaps that fewer “mere end users” use SSH this way.
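
The proper way is of course to verify the host key fingerprint over a separate channel before answering yes. On the server (path assuming a typical OpenSSH install) you can print the fingerprint to compare with what the client shows on first connect:

ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub

But hardly anyone bothers.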

Let me just put emphasis on this: SSL and SSH are secure. The insecurity here is not due to how the protocols work; these are flaws that appear when we mix in real-world users and UIs and so on.

I don’t have any sensible solutions to these problems myself. I’m crap at designing things for mere humans and UIs etc and I make no claims of understanding end users.

It seems there’s a nice tool called ettercap that’s supposedly a fine thing to use when you want to run your own MITM attacks on your LAN! And on the other side: an interesting take on improving the “accept this certificate” UI is offered by Firefox’s Perspectives plugin, which basically also checks N other sources’ views of the certificate to help you decide whether to trust it.

I want to round off my rant with a little quote:

“I have little, and decreasing, desire to continue to invest in strong security for a product that discards that security for the masses” [*] / Nelson B Bolyard – prominent NSS hacker

Curl Cyclomatic Complexity

I was at the OWASP Sweden meeting last night and spoke about open source and security. One of the other speakers present was Simon Josefsson, who in his talk showed a nice table listing functions in his project sorted by cyclomatic “complexity” – roughly, the number of independent paths through a function. Functions above a certain score are considered “high risk”, as they are hard to read and follow and thus may be subject to security problems.

Kind man that he is, Simon already publishes a page with a Curl Cyclomatic Complexity Report, nicely identifying a bunch of functions we should really consider poking at to decrease their complexity. The top-10 “bad” functions are:

Function             Score  Statements  Lines  File
ssh_statemach_act      254         880   1582  lib/ssh.c
Curl_http              204         395    886  lib/http.c
readwrite_headers      129         269    709  lib/transfer.c
Curl_cookie_add        118         247    502  lib/cookie.c
FormAdd                105         210    421  lib/formdata.c
dprintf_formatf         92         233    395  lib/mprintf.c
multi_runsingle         94         251    606  lib/multi.c
Curl_proxyCONNECT       74         212    443  lib/http.c
readwrite_data          73         127    319  lib/transfer.c
ftp_state_use_port      60         195    387  lib/ftp.c

I intend to use this as an indication of which functions within libcurl to work on. My plan is to primarily break each of these functions down into smaller ones to make them easier to read and follow. It would be cool to get every single function below 50, but I’m not sure that’s feasible or even really a good idea.
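
If you want to produce a report like this for your own project, a tool such as pmccabe does the counting – I assume something similar was used for the report above. It prints a set of scores per function that you can then sort on:

pmccabe lib/*.c | sort -rn | head -10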

A bad move. A really bad move.

So I wrote this little perl script to perform a lot of repeated binary Rockbox builds. It builds something like 35 builds, zips them up, and gives them proper names in a dedicated output directory. Perfect for things such as release builds.

Then I wrote a similar one to build manuals and offer them too. I then made the results available on the Rockbox 3.0RC (release candidate) page of mine.

Cool, methinks, and since I’ll be away for a week starting Wednesday, I thought I should make the scripts available in case someone else wants to play with them and possibly make a release while I’m gone.

I did

mv buildall.pl webdirectory/buildall.pl.txt

… thinking that I didn’t want it to try to execute as a perl script on the server, so I renamed it to a .txt extension. But did this work? No. Did it cause total havoc? Yes.

First, Apache apparently still treats these files as perl scripts (== CGI scripts) on my server, even though they got an additional extension. I really, really didn’t expect this.
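
The explanation is Apache’s multiple-extensions handling: a handler added with AddHandler matches its extension anywhere in the file name, not just at the end, so buildall.pl.txt still triggers the .pl handler. Assuming that is how the handler was configured, restricting it to files actually ending in .pl would have prevented this:

<FilesMatch "\.pl$">
SetHandler cgi-script
</FilesMatch>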

Then, my scripts run a command chain similar to “mkdir dir; cd dir; rm -rf *”. It works great when invoked in the correct directory. It works less well when the web server invokes it because someone clicked on the file I had just made available to the world.

Recursive deletion of all files the web server user was allowed to erase.

Did I immediately suspect foul play and evil doings by outsiders? Yes. Did it take quite a while to restore the damage from backups? Yes. Did it feel painful to realize that I myself was to blame for this entire incident and not at all any outside or evil perpetrator? Yes yes yes.

But honestly, in the end I felt good that it wasn’t a security hole somewhere that caused it, since I hate spending all that time tracking such things down and fixing them. And thanks to a very fine backup system, I had most of the site back up and running after roughly one hour of down-time.
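
The other lesson is in the command chain itself. With semicolons, every command runs no matter how the previous one fared, so if mkdir or cd fails for any reason the rm -rf * happily executes in whatever directory the process happens to be standing in – which is presumably what happened here. Chaining with && makes each step require the previous one to succeed:

mkdir dir && cd dir && rm -rf ./*

Using absolute paths would have limited the possible damage even further.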


Security and Open Source

OWASP Sweden is arranging an event on October 6th in Stockholm, Sweden, to talk about security in the open source process.

I will be there doing a talk about security in open source projects, and in particular how we work with security in the curl project. If you think of anything in particular you would like me to address or include, feel free to give me a clue already before the event!

Getting cacerts for your tools

As the primary curl author, I’m finding the comments here interesting. That blog entry, “Teaching wget About Root Certificates”, is about how you can get cacerts for wget by downloading them from curl’s web site, and people quickly point out how getting cacerts from an untrusted third party is of course an ideal situation for a MITM “attack”.

Of course you can’t trust any files off an HTTP site, or an HTTPS site without a “trusted” certificate, but thinking that the curl project would run one of those just to let random people load PEM files from our site seems a bit weird. Thus, we also provide the scripts we do all this with, so that you can run them yourself with whatever input data you need – preferably something you trust. The more paranoid you are, the harder that gets, of course.
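
For the record, the script ships with the curl source as mk-ca-bundle.pl: it downloads Mozilla’s certdata.txt and converts it into a PEM bundle, roughly like this:

./mk-ca-bundle.pl ca-bundle.crt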

On Fedora, curl does come with ca certs (at least I’m told recent Fedoras do), and even if it doesn’t, you can point curl at whatever cacert file you like. Since most default installs of curl use OpenSSL, just like wget does, you can simply tell curl to use the same cacert your wget install uses.
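
In practice that just means pointing both tools at the same PEM file. For example, with the bundle from a typical Fedora install:

curl --cacert /etc/pki/tls/certs/ca-bundle.crt https://example.com/
wget --ca-certificate=/etc/pki/tls/certs/ca-bundle.crt https://example.com/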

This last thing gets a little more complicated when one of the two is compiled with an SSL library that doesn’t easily support PEM (read: NSS), but in the case of curl in recent Fedora, they build it with NSS plus an additional patch that allows it to still read PEM files.

Will 2008 become 1984?

Next week in Sweden (June 18th), as reported in several places lately including slashdot, the Swedish parliament is supposed to vote on a pretty far-reaching law allowing FRA (a Swedish defence organization previously involved in radio surveillance etc) to wire-tap phone calls and computer traffic that cross the Swedish borders. The majority in the parliament is for the law, while it seems most ordinary people are against it. The hope is now that a few people will vote against their parties – that they will have the guts to stand up and “do the right thing” instead of following the party line.

I won’t go into how silly, stupid and bad such a law is, but I’ll instead just show this great video to all Swedes:

(video snipped from here)

stoppa FRA-lagen nu (“stop the FRA law now”)

This banner says (roughly translated by me): “On June 18th the government will take away your personal integrity. All internet traffic, all phone calls, all email and SMS traffic will be wire-tapped starting January 1st 2009. Big brother sees you! … and violates the Swedish constitution.”

public suffixes list

I noticed the new site publicsuffix.org that has been set up by the Mozilla organization in an attempt to list the public suffixes of all TLDs in the world – basically to know how to prevent sites from setting cookies that would span just about all sites under such a “public suffix”.
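
The problem it addresses looks like this: a page on, say, foo.co.uk could respond with

Set-Cookie: track=12345; Domain=.co.uk

and since .co.uk is not one site but a public suffix shared by huge numbers of unrelated sites, that cookie would then be sent along to every other .co.uk site the user visits. The list is there to let clients refuse to set cookies on such suffixes.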

While I can see what drives this effort, and since we have the same underlying problem in curl as well, I have sympathy for it. Still, I dread “having to” import and support this entire list in curl only to better match the browsers in the cookie department. Also, it feels like a cat-and-mouse game where the list may never be complete anyway. It is doomed to lack entries, or in the worst case to list “public suffixes” that aren’t public suffixes anymore and thus prevent sites under that suffix from properly using cookies…

There’s no word on the site on whether IE or Opera etc are going to join this effort.

Update: there are several people expressing doubts about the virtues of this idea. Like Patrik Fältström on DNSOP.