Fosdem 2011: my libcurl talk on video

Kai Engert was good enough to capture all the talks in the security devroom at Fosdem 2011, and while I’m seeding the full torrent I’ve made my own talk available as a direct download from here:

Fosdem 2011: security-room at 14:15 by Daniel Stenberg

The file is about 107MB, has 640×480 resolution and roughly 26 minutes of playing time. WebM format.

libcurl, seven SSL libs and one SSH lib

I did a talk today at Fosdem with this title. The room only had 48 seats and it was completely packed, with people standing everywhere they possibly could around those seated.

The English slides from my talk are below. It was also recorded on video, so I hope I’ll be able to post that once it becomes available online.

The big protocols

OWASP Sweden once again arranged an interesting meeting, this time with three talks.

The title of the meeting, held on January 21st here in Stockholm, called the protocols “the big ones” (but in Swedish), but I have no idea what kind of measurement they used, what the small ones are, or what other “big protocols” there might be! 😉

First we got to hear Håvard Eidnes tell us about BGP. That protocol seems to suffer from its share of security problems, both in the protocol itself and perhaps even more in the actual implementations. One of the bigger recent BGP-related incidents he spoke about was how internal routes were leaked to the outside from Pakistan in February 2008, which made them block the entire world’s access to Youtube. This talk also gave us some insights into the “wild west” of international routing and the lack of control and proper knowledge about who’s allowed to route what to where.

Then there was a session by Rickard Bellgrim about DNSSEC. Even though I’ve heard talks about this protocol in the past, I couldn’t help but feel once again that they have a lot of terminology in that world, enough to make even a basic description fairly hard to follow in parts. And they certainly have a lot of signing and keys and fingerprints and trusts going on… Of course DNSSEC is the answer to lots of existing problems with DNS, and DNSSEC certainly opens up a range of new fun. The idea of somehow replacing the need for CA certs by storing keys in DNS is interesting, but even though it is technically working and sound, I fear the browser vendors and the CAs of the SSL world won’t be very fast to turn the wheels in that direction. DNSSEC certainly makes name resolving a lot more complicated, and I wonder if c-ares should ever get into that game… And by the way, DNSSEC of course doesn’t take away the fact that specific implementations may still be vulnerable to security flaws.

The last talk of the evening was about SSL, or rather TLS, given by Fredrik Hesse. He gave us a pretty detailed insight into how the protocol works, and then a fairly detailed overview of the flaws discovered during the last year or so: primarily the MD5 and rogue CA certs, the null-prefix cert names and the TLS renegotiation bug. I felt good about already knowing just about everything he told us. I can also boast about having corrected the speaker afterward at the pub where we had our post-talk beers, as he was evidently very OpenSSL-focused when he spoke about what SSL libraries can and cannot do.

A great evening. And with good beers too. Thanks to the organizers!

curl fooled by null-prefix

We’ve just now released a security advisory on curl and libcurl regarding how a forger can trick libcurl into verifying a forged site as having a fine certificate, if you just had a CA create one for you with a carefully crafted embedded zero…

I think this flaw shines a bright light on the problems we deal with in keeping code safe and secure. When writing code, and as in this case using C, we might believe we’re mostly vulnerable to buffer overflows, pointer mess-ups, memory leaks or similar. Then we see this fascinatingly imaginative “attack” creep up…

The theory in short and somewhat simplified:

A server certificate is always presented by a server when a client connects to it using SSL. The certificate contains the server’s name. The client verifies that A) the cert is signed by the correct authority and B) the cert has the correct name inside.

The A) part works because servers buy their cert from a CA whose root certificate is shipped in all browsers, and thus we can be “cryptographically safe” when we see a match.

This last flaw was in the naming part (B). Apparently someone managed to trick a CA into handing out a cert to them with an embedded zero byte. As if haxx.se were to buy the cert, we’d get it with an embedded zero like:

“example.com\0.haxx.se”

Now, this works fine in certificates since they store the string and its length separately. In the C language we’re used to having strings that are terminated with a trailing zero… So, if we were to take over the “example.com” HTTPS server, we could put our legitimately purchased certificate on that server, and clients would use strcmp() or the equivalent to check the name in the certificate against the host name they try to connect to.

The embedded zero makes strcmp(host, certname) report a match, and the client is successfully fooled.
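To make this concrete, here’s a minimal sketch in C of why the naive comparison goes wrong, using the made-up names from above:

#include <stdio.h>
#include <string.h>

int main(void)
{
  /* the name as stored in the cert: 20 bytes, embedded zero included */
  const char certname[] = "example.com\0.haxx.se";
  size_t certlen = sizeof(certname) - 1;
  const char *host = "example.com";

  /* the naive C-string check stops at the embedded zero... */
  if(strcmp(host, certname) == 0)
    printf("match - the client is fooled!\n");

  /* ...while a correct check compares against the full stored length */
  if(certlen != strlen(host) || memcmp(host, certname, certlen) != 0)
    printf("no match when the real length is used\n");

  return 0;
}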

curl is no longer vulnerable to this trick since 7.19.6, and we have released a boatload of patches for older versions in case upgrading is not an option.

encrypted file transfer protocols compared

I like putting up some explanatory “this versus that” documents on stuff I know a little about. I’ve done things like curl vs wget, ftp vs http and http vs bittorrent in the past.

This time, I decided it was about time to do a technical comparison of the four major encrypted file transfer protocols SCP, SFTP, FTPS and HTTPS, and explain how they differ in as many aspects and viewpoints as possible. I quite often get questions about how some of these compare against the others and why you’d use one instead of another, and I hope this document will help people find such answers themselves.

Of course I make mistakes and sometimes express myself in muddy ways, so your feedback and help are important. You can help me make this comparison better!

http://daniel.haxx.se/docs/encrypted-transfer-protocols-compared.html

It’s still rough and all, but what questions and comparisons between them do you miss? What mistakes have I made? Which parts aren’t spelled out clearly enough?

Linux distros consolidate crypto libs

For a while already, the Fedora distribution has fought battles, done lots of work and pushed for a consolidation so that all packages that use crypto libs go entirely with Mozilla’s NSS.

Now it seems to be OpenSUSE’s turn. The discussion I link to here doesn’t reach any definite conclusion, but they seem to lean towards NSS as well, claiming it has the most features. I wonder what they base that statement on: is there a public doc anywhere that states exactly which library has what, making any contender better than any other for them?

In the Fedora case it seems they’ve focused on the NSS FIPS license as the deciding factor but the license issue is also often brought up in this discussion.

I’ve personally been pondering writing some kind of unified crypto layer that would expose a single API to an application and handle the different libs as backends, pretty much the same way we do it internally in libcurl at the moment. It hasn’t taken off (or even been started) since I’ve not had the time nor energy for it yet.
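Purely to illustrate the idea, such a layer could look something like the vtable-of-backends sketch below. Every name here is hypothetical; nothing like this exists yet:

#include <stddef.h>
#include <sys/types.h>   /* ssize_t */

/* hypothetical unified crypto API: each backend (OpenSSL, GnuTLS, NSS, ...)
   fills in one of these tables and the application codes only against it */
struct crypto_session;   /* opaque, backend-private state */

struct crypto_backend {
  const char *name;      /* "openssl", "gnutls", "nss", ... */
  int  (*global_init)(void);
  void (*global_cleanup)(void);
  struct crypto_session *(*connect)(int sockfd, const char *hostname);
  ssize_t (*send)(struct crypto_session *s, const void *buf, size_t len);
  ssize_t (*recv)(struct crypto_session *s, void *buf, size_t len);
  void (*close)(struct crypto_session *s);
};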

SSL certs crash without trust

Eddy Nigg found out and blogged about how he could buy SSL certificates for a domain he clearly doesn’t own or control. The cert is certified by Comodo, who apparently has outsourced (parts of) their cert business to a separate company that obviously does very little or perhaps no verification at all of the buyers.

As a result, buyers could buy certificates from there for just about any domain/site name, and with Comodo being a trusted CA in at least Firefox, this makes it a lot easier for phishers and other cyber-style criminals to set up fraudulent sites that even get the padlock in Firefox and look almost perfectly legitimate!

The question now is what Mozilla should do, what Firefox users should expect their browser to do when HTTPS sites use Comodo-verified certs, and how Comodo and their resellers are going to deal with everything…

Read the scary thread on the mozilla dev-tech-crypto list.

Update: if you’re on the paranoid/safe side you can disable trusting their certificates by doing this:

Select Preferences -> Advanced -> View Certificates -> Authorities. Search for AddTrust AB -> AddTrust External CA Root and click “Edit”. Remove all Flags.

What is this yassl really?

yassl is said to be Yet Another SSL library, and I’ve been told that it is, for example, the preferred library used by the MySQL camp. I got interested in it several years ago when I learned about it, since I thought it was fun to see an alternative implementation that still offers the OpenSSL API.

Since then, I’ve amused myself by trying to build and run curl with it every six months or so. I’ve made (lib)curl build fine with yassl (curl’s configure script even detects that it is an OpenSSL API emulated by yassl), but I’ve never seen it run the entire curl test suite through without failing at least one test!

I asked the MySQL guy about how yassl has worked for them, but he kind of shrugged and admitted that they hadn’t tried it much (and I don’t really know who he spoke for, the entire team or just he and his closest friends), but he said it worked for them.

Today I noticed yassl version 1.9.6, which I downloaded, built and tried against curl. This time curl completely fails to build with it…

Let me also point out that it’s not like I haven’t told the yassl team (person?) about these problems in the past. I have, and there have been adjustments meant to address problems I’ve seen. I just can’t make curl use it successfully… libcurl can still be built and run with OpenSSL, GnuTLS or NSS, so it’s not like we lack SSL library alternatives.

The same team/person seems to be behind another SSL lib called CyaSSL that’s aimed at smaller-footprint systems, and I’ve heard whispers about people trying to get libcurl to build against that, so it will surely be interesting to see where that leads!

In the middle there is a man

The other day an interesting bug report was posted against the Firefox browser, and it caused some interesting discussions and blog posts on the subject of Man-In-The-Middle attacks and how current browsers make it (too?) easy to accept self-signed certificates, so that users are easily misled. (Peter Burkholder wrote a great piece on SSL MITMing already back in 2002, which goes into detail on how this can be done.)

The entire issue essentially boils down to this:

To be able to really know that you’re communicating with the true remote site (and not an impostor), you must have some kind of verification system.

In SSL land we have this system with CA certs for verifying certs, and it works pretty well in most cases (I think). However, many sites on the internet today use HTTPS without having their certificate signed by a party that is already known to the browser. Most of them are so-called self-signed certs, which means there’s nobody else who guarantees that they are who they claim to be, just themselves.

All current modern browsers want to give users easy access to HTTP sites and to HTTPS sites with valid, properly-signed certs, but they also allow users to connect to and browse HTTPS sites with self-signed certs. And here comes the problem: how do you tell users that HTTPS with a self-signed cert is very insecure but still let them proceed? How do we tell them that they may proceed, but that if this is a known popular site they should really expect a true and valid certificate, as otherwise it is quite possibly a MITM they’re seeing?
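To make the trade-off concrete in libcurl terms: clicking through those warnings roughly corresponds to switching off both verification steps, as in this sketch (the function name is made up):

#include <curl/curl.h>

/* the rough libcurl equivalent of "accept this self-signed cert anyway":
   both verification steps are disabled, which opens the door to MITM */
int fetch_without_verification(const char *url)
{
  CURL *curl = curl_easy_init();
  CURLcode res = CURLE_FAILED_INIT;
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L); /* skip the CA check */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L); /* skip the name check */
    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return (int)res;
}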

People are so used to just accepting exceptions and clicking away nagging pop-ups that the warnings and alerts, both explicit and implied, in the prompts you have to go through to accept a self-signed certificate don’t seem to have much effect. As can be seen in this bug report, accepting an impostor’s certificate for a large known site is far too easy.

In SSH land, however, we don’t have the CA cert system and top-down trust hierarchy that SSL/TLS imposes. But does this matter? I’d say no, as most if not all users still don’t reflect much on it when a server’s host key is reported as different from what it was before, or when they connect to a host for the first time and accept the host key without trying to verify it using a different channel. Thus you’re subject to pretty much the same MITM risk. The difference is perhaps that fewer “mere end users” are using SSH this way.
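For what it’s worth, libcurl does let you do that different-channel verification for SSH-based transfers by pinning the expected host key fingerprint. A sketch, where both the URL and the fingerprint are made up:

#include <curl/curl.h>

/* verify the SFTP server's host key against an MD5 fingerprint obtained
   out of band; a mismatching key makes the transfer fail */
int fetch_over_sftp(void)
{
  CURL *curl = curl_easy_init();
  CURLcode res = CURLE_FAILED_INIT;
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.txt");
    curl_easy_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_MD5,
                     "aabbccddeeff00112233445566778899");
    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return (int)res;
}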

Let me just put emphasis on this: SSL and SSH are secure. The insecurity here is not due to how the protocols work, but rather comes from flaws that appear when we mix in real-world users, UIs and so on.

I don’t have any sensible solutions to these problems myself. I’m crap at designing things for mere humans and UIs etc and I make no claims of understanding end users.

It seems there’s a nice tool called ettercap that’s supposedly a fine thing to use when you want to run your own MITM attacks on your LAN! And on the other side: an interesting take on improving the “accept this certificate” UI is offered by Firefox’s Perspectives plugin, which basically also checks N other sources’ view of the certificate to help you decide whether to trust it.

I want to round off my rant with a little quote:

“I have little, and decreasing, desire to continue to invest in strong security for a product that discards that security for the masses” [*] / Nelson B Bolyard – prominent NSS hacker

Getting cacerts for your tools

As the primary curl author, I find the comments here interesting. That blog entry, “Teaching wget About Root Certificates”, is about how you can get cacerts for wget by downloading them from curl’s web site, and people quickly point out how getting cacerts from an untrusted third-party place is of course an ideal situation for a MITM “attack”.

Of course you can’t trust any files off an HTTP site, or an HTTPS site without a “trusted” certificate, but thinking that the curl project would run one of those just to let random people load PEM files from our site seems a bit weird. Thus, we also provide the scripts we do all this with, so that you can run them yourself with whatever input data you need, preferably something you trust. The more paranoid you are, the harder that gets, of course.

On Fedora, curl does come with CA certs (at least I’m told recent Fedoras do), and even if it doesn’t, you can actually point curl to use whatever cacert bundle you like. Since most default installs of curl use OpenSSL like wget does, you could tell curl to use the same cacert your wget install uses.
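On the command line that’s what curl’s --cacert option is for, and in libcurl terms it means setting CURLOPT_CAINFO. A sketch, where the bundle path is just an example and varies between systems:

#include <curl/curl.h>

/* verify the peer against an explicitly chosen CA bundle; the path here
   is an example only - point it at whatever bundle your system provides */
int fetch_with_bundle(const char *url)
{
  CURL *curl = curl_easy_init();
  CURLcode res = CURLE_FAILED_INIT;
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_CAINFO,
                     "/etc/pki/tls/certs/ca-bundle.crt");
    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return (int)res;
}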

This last thing gets a little more complicated when one of the two is compiled with an SSL library that doesn’t easily support PEM (read: NSS), but in the case of curl in recent Fedora, they build it with NSS plus an additional patch that allows it to still read PEM files.