Category Archives: Security

curl fooled by null-prefix

We’ve just now released a security advisory on curl and libcurl regarding how a forger can trick libcurl into verifying a forged site as having a fine certificate, if he just had a CA create one for him with a carefully crafted embedded zero…

I think this flaw really shines a light on the problems we deal with in keeping code safe and secure. When writing code, in this case C, we might believe we’re mostly vulnerable to buffer overflows, pointer mess-ups, memory leaks or similar. Then we see this fascinatingly imaginative “attack” creep up…

The theory in short and somewhat simplified:

A server certificate is always presented by a server when a client connects to it using SSL. The certificate contains the server’s name. The client verifies that A) the cert is signed by the correct authority and B) the cert has the correct name inside.

Part A) works because servers buy their cert from a CA whose root certificate (and thus public key) is shipped with all browsers, so we can be “cryptographically safe” when the signature checks out.

This latest flaw was in the naming part (B). Apparently someone managed to trick a CA into handing out a cert containing an embedded zero byte. If haxx.se were to buy the cert, we could get it with an embedded zero like:

“example.com\0.haxx.se”

Now, this works fine in certificates since they store the string and its length separately. In C we’re used to strings being terminated with a trailing zero… So if we then got clients to connect to our server while they intended to reach “example.com” (through a man-in-the-middle position, DNS spoofing or similar), we could present our legitimately purchased certificate, and the clients would use strcmp() or the equivalent to check the name in the certificate against the host name they are trying to connect to.

The embedded zero makes strcmp(host, certname) return MATCH and the client was successfully fooled.
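To make that concrete, here is a minimal C sketch (my own illustration, not the actual libcurl code) of why a zero-terminated comparison is fooled while a length-aware comparison is not:

#include <stdio.h>
#include <string.h>

int main(void)
{
  /* the name as stored in the certificate: kept with an explicit length,
     so the embedded zero is just another byte (20 bytes in total) */
  const char certname[] = "example.com\0.haxx.se";
  size_t certlen = sizeof(certname) - 1;   /* 20, embedded zero included */

  /* the host name the client thinks it is connecting to */
  const char *host = "example.com";

  /* a naive zero-terminated comparison stops at the embedded zero
     and happily reports a match */
  if(strcmp(host, certname) == 0)
    printf("strcmp says MATCH - the client is fooled\n");

  /* comparing against the real, stored length reveals the mismatch */
  if(certlen != strlen(host) || memcmp(host, certname, certlen) != 0)
    printf("length-aware check: no match (cert name is %zu bytes)\n", certlen);

  return 0;
}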

curl is no longer vulnerable to this trick since 7.19.6, and we have released a boatload of patches for older versions in case upgrading is not an option.

User data probably for sale

It’s time for a little “doomsday prophecy”.

Already seen happen

As was reported last year in Sweden, mobile operators here sell customer data (Swedish article) to companies who are willing to pay. Even though this might be illegal (Swedish article), all the major Swedish mobile phone operators do this. This second article mentions that the operators think this practice is allowed according to the contract every customer has signed, but that’s far from obvious in everybody else’s eyes and may in fact not be legal.

For the non-Swedes: one mobile phone user found himself surfing to a web site that would display his phone number embedded on the site! This was only possible due to the site buying this info from the operator.

While the focus on what data they sell has been on the phone number itself – and I find that a pretty serious privacy breach in its own right – there’s so much more that imaginative operators will very likely soon offer to companies that pay enough.

Legislation going the wrong way

There’s this EU “directive” from a few years back:

Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks and amending Directive 2002/58/EC

It basically says that Internet operators must store information about the connections users make on the net and keep it around for a certain period. Sweden hasn’t yet implemented this, but I hear other EU member states already have…

(The US also has some similar legislation being suggested.)

It certainly doesn’t help us who believe in maintaining a level of privacy!

What soon could happen

It’s hardly a secret that operators run network surveillance equipment on their customer networks and thus analyze and snoop on the network data sent and received by each and every customer. They do this for network management reasons and to comply with legislation like the directive I mentioned above. (Disclaimer: I’ve worked and developed code for a client that makes and sells products for exactly this purpose.)

Anyway, it is thus easy for the operators to, for example, spot common URLs their users visit. They can spot what services (bittorrent, video sites, Internet radio, banks, porn etc) a user frequents. Given a particular company’s interest, it would certainly be easy to check users’ logs for visits to specific competitors and sell that info.

If operators can sell the phone numbers of their individual users, what stops them from selling all this other info – given a proper stash of money from those who want to know? I’m convinced this will happen sooner or later, unless we get proper legislation that forbids the operators from doing it… In Sweden this selling of info is most likely to be done by the mobile network operators and not the regular Internet providers, simply because the mobile operators have this end-user contract to lean on that they claim gives them the right. That same style of contract and terminology is not (I believe) used for regular Internet subscriptions.

So here’s my suggestion for Think Geek to expand somewhat on their great shirt:

[image: i-read-your-everything]

(yeah, I have one of those boring ones with only the first line on it…)

Haxx for you

So our company is named Haxx and it has been named like this for more than a decade, but the name is considered by some people to be a mark of evil or something.

In my closest circle of friends we’ve kind of “always” liked using silly names and we’ve long had a fascination with double Xes. Once upon a time in the early 90s we teamed up under the name Frexx and we did some funky programs on the Amiga. Most notably a programming language called FPL and the text editor FrexxEd.

When we then, during the second half of the 90s, needed to start an actual company to more easily cater for our “spare time businesses”, we wanted a new name but still one in a similar spirit. Being big friends of and practitioners in writing “quick hacks” (“hack” in the sense of a quickly done program/script that perhaps isn’t always written very solidly or nicely but works for the moment) to solve our own problems both at work and at home, we found Haxx to be a perfect name for us – hack in plural, spelled with a double x.

Already when we took the name we knew about the bad habit in some places of lumping hackers together with crackers and the like, so we knew there was a risk that some would assume we were something else based on our name. But what the heck, we liked the name, we are and were hackers, and we do and did a lot of hacks. Haxx it was. Haxx it is.

These days we get some minor problems due to this. At some companies (let’s not name any specific ones, but you know the kind) they have blacklisted haxx.se web sites (presumably because of the name ‘haxx’ in the domain name), some people get mail from us or the mailing lists we host more easily filtered as spam, and we get our share of strange suggestions etc.

I guess the upside is that we get our chances to whine about people and systems that decide to filter content purely based on the presence of a single four-letter word, whether in a domain name or in web page or mail contents – which is actually hilariously stupid.

Haxx

curl 7.19.4

curl and libcurl 7.19.4 have just been released! This time I think the most notable fix is the one for the CVE-2009-0037 security vulnerability, which this release addresses. A little over 600 days passed since the previous vulnerability was announced.

Other than that major event, there are a bunch of interesting changes in this release:

  • Added CURLOPT_NOPROXY and the corresponding --noproxy
  • The OpenSSL-specific code disables the RFC 5077 session ticket extension (TICKET), which is enabled by default in OpenSSL 0.9.8j
  • Added CURLOPT_TFTP_BLKSIZE
  • Added CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC – with the corresponding curl options --socks5-gssapi-service and --socks5-gssapi-nec
  • Improved IPv6 support when built with c-ares >= 1.6.1
  • Added CURLPROXY_HTTP_1_0 and --proxy1.0
  • Added docs/libcurl/symbols-in-versions
  • Added CURLINFO_CONDITION_UNMET
  • Added support for Digest and NTLM authentication using GnuTLS
  • CURLOPT_FTP_CREATE_MISSING_DIRS can now be set to 2 to retry the CWD even when MKD fails
  • GnuTLS initialization moved to curl_global_init()
  • Added CURLOPT_REDIR_PROTOCOLS and CURLOPT_PROTOCOLS

We also did at least 15 documented bugfixes in this release and 25 people are credited for their help to make it happen.
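As a quick, hypothetical taste of a couple of the new options (the URL and proxy below are made up for the example), an application could now restrict which protocols libcurl uses and exempt certain hosts from the proxy:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl;

  curl_global_init(CURL_GLOBAL_ALL);
  curl = curl_easy_init();
  if(curl) {
    CURLcode res;

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

    /* send traffic through a proxy (made-up address)... */
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
    /* ...except for these hosts, using the new CURLOPT_NOPROXY */
    curl_easy_setopt(curl, CURLOPT_NOPROXY, "localhost,intranet.example.com");

    /* only allow HTTP and HTTPS for this transfer... */
    curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                     (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));
    /* ...and only HTTPS when following redirects */
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, (long)CURLPROTO_HTTPS);

    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}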

Linux distros consolidate crypto libs

For a while already, the Fedora distribution has fought battles, done lots of work and pushed for a consolidation of all packages that use crypto libs to completely go with Mozilla’s NSS.

Now it seems to be OpenSUSE’s turn. The discussion I link to here doesn’t reach any definite conclusions, but they seem to lean towards NSS as well, claiming it has the most features. I wonder what they base that statement on – is there a public doc anywhere that states exactly which library has what, making any contender better than the others for them?

In the Fedora case it seems they’ve focused on the NSS FIPS license as the deciding factor but the license issue is also often brought up in this discussion.

I’ve personally been pondering writing some kind of unified crypto layer that would expose a single API to applications and handle the different libs as backends, pretty much the same way we already do it internally in libcurl. It hasn’t taken off (or even been started) since I haven’t had the time or energy for it yet.
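To sketch what I mean (a purely hypothetical API, every name below invented for illustration and not taken from any existing library), the layer could boil down to a small table of function pointers that each backend fills in:

/* Hypothetical sketch of a unified crypto/TLS layer: one front-end API,
   with OpenSSL, GnuTLS, NSS etc. plugged in as backends behind a table
   of function pointers. All names here are made up. */
#include <stddef.h>
#include <sys/types.h>   /* ssize_t */

struct crypto_session;   /* opaque, backend-specific connection state */

struct crypto_backend {
  const char *name;      /* "openssl", "gnutls", "nss", ... */

  /* global init/cleanup for the backend */
  int  (*init)(void);
  void (*cleanup)(void);

  /* wrap an already-connected socket in TLS, and tear it down again */
  struct crypto_session *(*connect)(int sockfd, const char *hostname);
  void (*close)(struct crypto_session *session);

  /* move application data over the TLS connection */
  ssize_t (*send)(struct crypto_session *session, const void *buf, size_t len);
  ssize_t (*recv)(struct crypto_session *session, void *buf, size_t len);
};

/* the application picks a backend once and then uses only the front-end API */
const struct crypto_backend *crypto_select_backend(const char *name);

That is roughly how libcurl already treats its SSL backends internally, just never exposed as a public API.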

Fun with executable extensions in viewvc

A few years ago I wrote up a silly little perl script (let’s call it script.pl) that would fetch a page from a site that returns a “random URL off the internet”. I needed a range of URLs for a test program of mine and just making up a thousand or so URLs is tricky. Thus I wrote this script to grab a batch of URLs on each invocation, so I could run it again later and append to the log file. It wasn’t a fancy script, but it solved my task.

The script was part of a project I got funded to work on – improving libcurl back in 2005/2006 – so adding and committing the script to CVS felt only natural and served a good purpose: to allow others to repeat what I did.

Fast forward to late 2008. The script is now browsable via viewvc on a site that… eh, doesn’t have “.pl” disabled as a cgi extension in its config! The result of course is that each time someone tries to view the script using the web interface, the web server invokes the script locally!

All of a sudden I got a mail from someone who apparently is the admin or something of the site this old script was fetching from, and he mentioned that a machine on our network was hammering his site with many requests per second (38 requests/second apparently) and asked me to stop. It turns out a search engine crawler had indexed the viewvc output several times, and now some 8 processes or so were running this script.pl and they were all looping around getting a page, outputting the URL, getting another page…

While I think 38 requests per second is a bit low to even be considered a DoS, it certainly wasn’t intended nor friendly, and I was greatly surprised when I slowly realized how it all came to end up like this! Man, I suck! It reminds me of my other extension mess from just a few months ago…

Maybe I’ll learn how to do things right in the future when I grow up!

SSL certs crash without trust

Eddy Nigg found out and blogged about how he could buy SSL certificates for a domain he clearly doesn’t own or control. The cert is certified by Comodo, who apparently have outsourced (parts of) their cert business to a separate company that obviously does very little or perhaps no verification at all of the buyers.

As a result, buyers could buy certificates from there for just about any domain/site name, and Comodo being a trusted CA in at least Firefox would thus make it a lot easier for phishers and other cyber-style criminals to set up fraudulent sites that even get the padlock in Firefox and look almost perfectly legitimate!

The question now is what Mozilla should do, what Firefox users should expect their browser to do when HTTPS sites use Comodo-verified certs, and how Comodo and their resellers are going to deal with it all…

Read the scary thread on the mozilla dev-tech-crypto list.

Update: if you’re on the paranoid/safe side you can disable trusting their certificates by doing this:

Select Preferences -> Advanced -> View Certificates -> Authorities. Search for
AddTrust AB -> AddTrust External CA Root and click “Edit”. Remove all Flags.

Snooping on government HTTPS

As was reported by some Swedish bloggers, and I found out thanks to kryptoblog, it seems the members of the Swedish parliament all access the internet via an HTTP proxy. And not only that: they access HTTPS sites through the same proxy too. While a lot of the netizens of the world do this, the members of the Swedish parliament have an IT department that is more big-brotherish than most: they decided they “needed” to snoop on the network traffic even for HTTPS connections. And how do you accomplish this, you may ask?

Simple! The proxy simply terminates the SSL connection, fetches the remote HTTPS document, and at run-time generates a “faked” SSL cert for the peer that is signed by a CA the client trusts, then delivers that to the client. This does require that the client has a CA cert installed locally that makes it trust certificates signed by that “faked” CA, but I figure the parliament’s IT department “helps” its users set this up.

Not only does this let every IT admin there be able to snoop on user names and passwords etc, it also allows for Man-In-The-Middle attacks big-time as I assume the users will be allowed to go to HTTPS sites using self-signed certificates – but they probably won’t even know it!

The motivation for this weird and intrusive idea seems to be that they want to scan the traffic for viruses and other malware.

If I were a member of the Swedish parliament I would be really upset; I would uninstall the custom CA and seriously consider accessing the internet through an ssh tunnel or similar. But somehow I doubt that many of them care, and the rest of them won’t be capable of taking counter-measures against this.
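On a practical note, and purely as a rough sketch (it assumes libcurl 7.19.1 or later built against OpenSSL, and the URL is just an example), a suspicious user could dump the certificate chain the proxy actually presents and look at who issued it – an unexpected issuer is a strong hint of this kind of interception:

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    struct curl_certinfo *ci = NULL;

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   /* headers are enough */
    curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L); /* gather certificate info */

    if(curl_easy_perform(curl) == CURLE_OK &&
       curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
      int i;
      for(i = 0; i < ci->num_of_certs; i++) {
        struct curl_slist *field;
        for(field = ci->certinfo[i]; field; field = field->next) {
          /* print who each certificate is for and who signed it; an issuer
             you don't recognize suggests a re-signing proxy in the middle */
          if(!strncmp(field->data, "Subject:", 8) ||
             !strncmp(field->data, "Issuer:", 7))
            printf("cert %d: %s\n", i, field->data);
        }
      }
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}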

In the middle there is a man

The other day an interesting bug report was posted against the Firefox browser, and it caused some interesting discussions and blog posts on the subject of Man-In-The-Middle attacks and how current browsers etc make it (too?) easy to accept self-signed certificates, so that users are easily misled. (Peter Burkholder wrote a great piece on SSL MITMing already back in 2002 which goes into detail on how this can be done.)

The entire issue essentially boils down to this:

To be able to really know that you’re communicating with the true remote site (and not an impostor), you must have some kind of verification system.

In SSL land we have this system with CA certs for verifying certs and it works pretty well in most cases (I think). However, many sites on the internet today use HTTPS without having the certificate signed by a party the browser already knows – most of them are so-called self-signed, which means nobody else guarantees that they are who they claim to be, just themselves.

All current modern browsers want to give users easy access to HTTP sites and to HTTPS sites with valid, properly-signed certs, but also allow users to connect to and browse HTTPS sites with self-signed certs. And here comes the problem: how do you tell users that HTTPS with a self-signed cert is very insecure but still let them proceed? How do we tell them that they may proceed, but that if this is a known popular site they really should expect a true and valid certificate, because otherwise it is quite possibly a MITM they’re seeing?

People are so used to just accepting exceptions and clicking away nagging pop-ups that the warnings and alerts, explicit and implied, in the prompts you have to go through to accept a self-signed certificate don’t seem to have much effect. As can be seen in this bug report, accepting an impostor’s certificate for a large well-known site is far too easy.

In SSH land however, we don’t have the CA cert system and top-down trust hierarchy that SSL/TLS imposes. But does this matter? I’d say no, as most if not all users still don’t reflect much on it when a server’s host key is reported as different from the one they used before. And when you connect to a host for the first time, you accept the host key without trying to verify it over a different channel. Thus you’re subject to pretty much the same MITM risk. The difference is perhaps that fewer “mere end users” use SSH this way.

Let me just put emphasis on this: SSL and SSH are secure. The insecurity here is not due to how the protocols work, but rather comes from flaws that appear when we mix in real-world users and UIs and so on.

I don’t have any sensible solutions to these problems myself. I’m crap at designing things for mere humans and UIs etc and I make no claims of understanding end users.

It seems there’s a nice tool called ettercap that’s supposedly a fine thing to use when you want to run your own MITM attacks on your LAN! And on the other side: an interesting take on improving the “accept this certificate” UI is offered by Firefox’s Perspectives plugin, which basically also checks with N other sources’ views of the site to help you decide whether to trust a certificate.

I want to round off my rant with a little quote:

“I have little, and decreasing, desire to continue to invest in strong security for a product that discards that security for the masses” [*] / Nelson B Bolyard – prominent NSS hacker

Curl Cyclomatic Complexity

I was at the OWASP Sweden meeting last night and spoke about Open source and security. One of the other speakers present was Simon Josefsson, who in his talk showed a nice table listing functions in his project sorted by “complexity”. Functions above a certain score are then considered “high risk” as they are hard to read and follow and thus may be subject to security problems.

Kind man that he is, Simon has already put up a page with a Curl Cyclomatic Complexity Report, nicely identifying a bunch of functions whose complexity we should really consider reducing. The top-10 “bad” functions are:

Function            Score  Statements  Lines  Source file
ssh_statemach_act     254         880   1582  lib/ssh.c
Curl_http             204         395    886  lib/http.c
readwrite_headers     129         269    709  lib/transfer.c
Curl_cookie_add       118         247    502  lib/cookie.c
FormAdd               105         210    421  lib/formdata.c
dprintf_formatf        92         233    395  lib/mprintf.c
multi_runsingle        94         251    606  lib/multi.c
Curl_proxyCONNECT      74         212    443  lib/http.c
readwrite_data         73         127    319  lib/transfer.c
ftp_state_use_port     60         195    387  lib/ftp.c

I intend to use this as an indication of which functions within libcurl to work on. My plan is primarily to break each of these functions down into smaller ones to make them easier to read and follow. It would be cool to get every single function below 50, but I’m not sure that’s feasible or even really a good idea.
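For anyone unfamiliar with the metric: cyclomatic complexity is roughly one plus the number of decision points (ifs, loops, case labels and so on) in a function. A tiny made-up example, not taken from libcurl:

#include <stdio.h>

/* made-up example: this function has three decision points
   (the for loop and two ifs), so its cyclomatic complexity is 3 + 1 = 4;
   splitting big functions into helpers lowers the score of each piece */
static int count_positive(const int *values, int len)
{
  int i, count = 0;
  for(i = 0; i < len; i++) {   /* decision point 1 */
    if(values[i] == 0)         /* decision point 2 */
      break;
    if(values[i] > 0)          /* decision point 3 */
      count++;
  }
  return count;
}

int main(void)
{
  int data[] = { 3, -1, 4, 0, 5 };
  printf("%d\n", count_positive(data, 5)); /* prints 2, stops at the zero */
  return 0;
}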