Tag Archives: Security

Code re-use is fun

Back in 2003 I wrote up support for the HTTP NTLM authentication method for libcurl. Happy with my achievement, later that year I donated a GPL-licensed version of my code to the Wget project (which was also my first contact with the signed paperwork with the GNU/FSF to waive my copyright claims and hand them over). What was perhaps less amusing about this code was when both curl and Wget were discovered in 2005 to have the same security flaw, thanks to my mistakes in this code shared by both projects!

Just recently, the neon project has shown interest in taking on the version I adjusted somewhat for them, so possibly a third HTTP code base will soon be using this. Yes, I posted it on their mailing list back then, so it has been sitting in their archives maturing for some six years by now…

I also happened to stumble over the SSH Tunnel Creator tool, which I’ve never used myself; it apparently picked up my neon donation (entirely in line with what the license allows, of course) and used it in their tool to do NTLM!

It was actually not until recent years that I discovered libntlm, and while I don’t know how good it was back when I wrote my first NTLM code, I generally think using an existing library is the better idea…
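
For what it’s worth, applications using libcurl never have to deal with the NTLM details themselves. Here is a minimal sketch of asking libcurl to use NTLM for HTTP authentication (the intranet URL and the credentials are of course made up for illustration):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* made-up intranet URL, for illustration only */
      curl_easy_setopt(curl, CURLOPT_URL, "http://intranet.example.com/");
      /* ask libcurl to use NTLM for the HTTP authentication */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_NTLM);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "domain\\user:secret");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }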

curl 7.19.4

curl and libcurl 7.19.4 have just been released! The perhaps most notable fix this time is the CVE-2009-0037 security fix that this release addresses. A little over 600 days have passed since the previous vulnerability was announced.

Other than that major event, there are a bunch of interesting changes in this release:

  • Added CURLOPT_NOPROXY and the corresponding --noproxy
  • the OpenSSL-specific code disables the TLS session ticket extension (RFC 5077), which is enabled by default in OpenSSL 0.9.8j
  • Added CURLOPT_TFTP_BLKSIZE
  • Added CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC, with the corresponding curl options --socks5-gssapi-service and --socks5-gssapi-nec
  • Improved IPv6 support when built with c-ares >= 1.6.1
  • Added CURLPROXY_HTTP_1_0 and --proxy1.0
  • Added docs/libcurl/symbols-in-versions
  • Added CURLINFO_CONDITION_UNMET
  • Added support for Digest and NTLM authentication using GnuTLS
  • CURLOPT_FTP_CREATE_MISSING_DIRS can now be set to 2 to retry the CWD even when MKD fails
  • GnuTLS initialization was moved into curl_global_init()
  • Added CURLOPT_REDIR_PROTOCOLS and CURLOPT_PROTOCOLS (see the sketch after this list)
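
To give a feel for a couple of the new options, here is a minimal sketch (the URL and host names are just examples) that combines CURLOPT_NOPROXY with the new protocol restriction options, so that following redirects cannot take the transfer off to unexpected protocols:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* never go through a proxy for these hosts, even if one is configured */
      curl_easy_setopt(curl, CURLOPT_NOPROXY, "localhost,example.com");
      /* limit which protocols libcurl may use, also when following redirects */
      curl_easy_setopt(curl, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
      curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }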

We also made at least 15 documented bugfixes in this release, and 25 people are credited for their help in making it happen.

Fun with executable extensions in viewvc

A few years ago I wrote up a silly little perl script (let’s call it script.pl) that would fetch a page from a site that returns a “random URL off the internet”. I needed a range of URLs for a test program of mine, and just making up a thousand or so URLs is tricky. So I wrote this script to fetch a batch of URLs on each invocation; I would run it, then run it again later and append the output to the log file. It wasn’t a fancy script, but it solved my task.

The script was part of a project I was funded to work on, improving libcurl back in 2005/2006, so adding and committing the script to CVS felt only natural and served a good purpose: it allowed others to repeat what I did.

Fast forward to late 2008. The script is now browsable via viewvc on a site that… eh, doesn’t have “.pl” disabled as a CGI extension in its config! The result, of course, is that each time someone tries to view the script using the web interface, the web server invokes the script locally!

All of a sudden I got a mail from someone who apparently administers the site this old script was using, mentioning that a machine on our network was hammering his site with many requests per second (38 requests/second apparently) and asking me to stop. It turned out a search engine crawler had hit the viewvc output several times, and now some eight processes or so were all running script.pl, each looping around getting a page, outputting the URL, getting another page…

While I think 38 requests per second is a bit low to even be considered a DoS, it certainly wasn’t intended nor friendly, and I was greatly surprised when I slowly realized how it all had come to end up like this! Man, I suck! It reminds me of my other extension mess from just a few months ago…

Maybe I’ll learn how to do things right in the future when I grow up!

SSL certs crash without trust

Eddy Nigg found out and blogged about how he could buy SSL certificates for a domain he clearly neither owns nor controls. The cert is signed by Comodo, who apparently have outsourced (parts of) their cert business to a separate company that obviously does very little or perhaps no verification at all of the buyers.

As a result, buyers could get certificates from there for just about any domain/site name, and since Comodo is a trusted CA in at least Firefox, this makes it a lot easier for phishers and other criminals to set up fraudulent sites that even get the padlock in Firefox and look almost perfectly legitimate!

The question now is what Mozilla should do, what Firefox users should expect their browser to do when HTTPS sites use Comodo-verified certs, and how Comodo and their resellers are going to deal with everything…

Read the scary thread on the mozilla dev-tech-crypto list.

Update: if you’re on the paranoid/safe side you can disable trusting their certificates by doing this:

Select Preferences -> Advanced -> View Certificates -> Authorities. Search for AddTrust AB -> AddTrust External CA Root and click “Edit”. Remove all flags.

Filling our pipes

At around 13:43 GMT on Friday the 5th of December 2008, the network that hosts a lot of services like this site, the curl site, the rockbox site, the c-ares site, CVS repositories, mailing lists, my own email and a set of other open source-related stuff, became the target of a vicious and intense DDoS attack. The attack was in progress until about 17:00 GMT on Sunday the 7th. The target network is owned and run by CAG Contactor.

Tens of thousands of machines on the internet suddenly started trying to access a single host within the network. The IP they targeted has in fact never been publicly used as long as we’ve owned it (which is just a bit under two years) and it has never had any public services.

We have no clue whatsoever why someone would do this against us. We don’t have any particular services that anyone would gain anything by killing. We’re just very puzzled.

Our “ISP”, the guys we buy bandwidth and related services from, said the attack used up about 1 gigabit/sec worth of bandwidth, and with our “mere” 10 megabit/sec connection it was of course impossible to offer any services while this was going on.

It turns out our ISP made the biggest blunder and is the main cause of the length of this outage: we could immediately spot that the target was a single IP in our class C network. We asked them to block all traffic to this IP as far out as possible, to stop such packets from entering their network. And they did. For a short while there was silence and sense again.

For some reason that block “fell off” and our network got swamped again, and it then remained unusable for another 48 hours or so. We know this since our sysadmin investigated our firewall logs around midday Sunday, and they all revealed that same target IP as the destination. Since we only have a during-office-hours support deal with our network guys (as we’re just a consulting company with no services that really need 24-hour support), they simply didn’t care much about our problem but said they would deal with it Monday morning. So our sysadmin shut down our firewall to save our own network from logging overload and whatnot.

Given the explanations I’ve received over the phone (I have yet to see and analyze logs from this), it sounds like some sort of SYN flood, with connection attempts against many different TCP ports.

Four to five hours after the firewall was shut down, the machines outside of our firewall (but still on our network) suddenly became accessible again. The attack had stopped. We have not seen any traces of it since then. The firewall is still shut down though, as the first guy coming to the office Monday morning will switch it on again and then – hopefully – all services should be back to normal.

Snooping on government HTTPS

As was reported by some Swedish bloggers, and as I found out thanks to kryptoblog, it seems the members of the Swedish parliament all access the internet via an HTTP proxy. And not only that: they seem to access HTTPS sites using the same proxy, and while a lot of the netizens of the world do this, the members of the Swedish parliament have an IT department that is more big-brotherish than most. They decided they “needed” to snoop on the network traffic even for HTTPS connections – and how do you accomplish this, you may ask?

Simple! The proxy terminates the SSL connection, fetches the remote HTTPS document, generates a “faked” SSL cert for the peer on the fly, signed by a CA that the client trusts, and then delivers that to the client. This requires that the client has a CA cert installed locally that makes it trust certificates signed by the “faked” CA, but I figure the parliament’s IT department “helps” its users with that.
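
If you suspect you are behind such an intercepting proxy, one way to see which certificate chain you are actually being served is to ask libcurl for it and look at the issuer: if it is the IT department’s own CA rather than the site’s real CA, the connection is being intercepted. A rough sketch, assuming a reasonably recent libcurl built against OpenSSL (the URL is just an example):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      struct curl_certinfo *ci = NULL;
      curl_easy_setopt(curl, CURLOPT_URL, "https://www.example.com/");
      curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L); /* gather certificate details */
      curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   /* the handshake is all we need */
      if(curl_easy_perform(curl) == CURLE_OK &&
         curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
        int i;
        for(i = 0; i < ci->num_of_certs; i++) {
          struct curl_slist *s;
          /* each cert in the chain is a list of "name:value" strings,
             including the Issuer field we want to inspect */
          for(s = ci->certinfo[i]; s; s = s->next)
            printf("cert %d: %s\n", i, s->data);
        }
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }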

Not only does this let every IT admin there snoop on user names, passwords and so on, it also opens the door wide for Man-In-The-Middle attacks, as I assume the users will be allowed to go to HTTPS sites using self-signed certificates – but they probably won’t even know it!

The motivation for this weird and intrusive idea seems to be that they want to scan the traffic for viruses and other malware.

If I were a member of the Swedish parliament I would be really upset; I would uninstall the custom CA and seriously consider accessing the internet over an SSH tunnel or similar. But somehow I doubt that many of them care, and the rest won’t be capable of taking counter-measures against this.

In the middle there is a man

The other day an interesting bug report was posted against the Firefox browser, and it caused some interesting discussions and blog posts on the subject of Man-In-The-Middle attacks and how current browsers make it (too?) easy to accept self-signed certificates, so users are easily misled. (Peter Burkholder wrote a great piece on SSL MITMing already back in 2002, which goes into detail on how this can be done.)

The entire issue essentially boils down to this:

To be able to really know that you’re communicating with the true remote site (and not an impostor), you must have some kind of verification system.

In SSL land we have this system with CA certs for verifying certs, and it works pretty well in most cases (I think). However, many sites on the internet today use HTTPS without having the certificate signed by a party already known to the browser – most of them are so-called self-signed, which means there is nobody else that guarantees they are who they claim to be, just themselves.
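
For non-interactive clients the choice is at least straightforward: refuse anything that does not verify. Here is a minimal libcurl sketch (the URL and the CA bundle path are just examples, and the two verify options shown are in fact libcurl’s defaults):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      CURLcode res;
      curl_easy_setopt(curl, CURLOPT_URL, "https://www.example.com/");
      /* require that the certificate chains to a CA we trust */
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
      /* require that the certificate matches the host name we asked for */
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
      /* where the trusted CA certs live - the path varies between systems */
      curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/certs/ca-certificates.crt");
      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        /* this covers self-signed and otherwise unverifiable certs -
           do not retry with verification switched off! */
        fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
      curl_easy_cleanup(curl);
    }
    return 0;
  }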

All modern browsers want to give users easy access to HTTP sites and to HTTPS sites with valid, properly signed certs, but also to allow users to connect to and browse HTTPS sites with self-signed certs. And here comes the problem: how do you tell users that HTTPS with a self-signed cert is very insecure but still let them proceed? How do we tell them that they may proceed, but that if this is a well-known popular site they really should expect a proper, valid certificate, since otherwise it is quite possibly a MITM they are seeing?

People are so used to just accepting exceptions and clicking away nagging pop-ups that the warnings and alerts, explicit and implied, in the prompts you have to go through to accept a self-signed certificate don’t seem to have much effect. As can be seen in this bug report, accepting an impostor’s certificate for a large, well-known site is far too easy.

In SSH land, however, we don’t have the CA cert system and top-down trust hierarchy that SSL/TLS imposes. But does this matter? I’d say no, as most if not all users still don’t think much about it when a server’s host key is reported as different from the one used before, or when they connect to a host for the first time and accept the host key without trying to verify it over a different channel. Thus you’re subject to pretty much the same MITM risk. The difference is perhaps that fewer “mere end users” use SSH this way.
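
For programs, as opposed to interactive users, the host key check is at least easy to do properly. A sketch using libssh2, assuming a session that has already completed its handshake and a fingerprint recorded earlier over some trusted channel (the stored bytes here are of course made up):

  #include <string.h>
  #include <stdio.h>
  #include <libssh2.h>

  /* hypothetical fingerprint recorded out-of-band on the first, verified contact */
  static const unsigned char expected_sha1[20] = { 0xde, 0xad, 0xbe, 0xef /* ... */ };

  /* returns 0 if the server presented the host key we expect, -1 otherwise */
  int check_host_key(LIBSSH2_SESSION *session)
  {
    const char *hash = libssh2_hostkey_hash(session, LIBSSH2_HOSTKEY_HASH_SHA1);
    if(!hash || memcmp(hash, expected_sha1, sizeof(expected_sha1)) != 0) {
      fprintf(stderr, "host key mismatch - possible MITM, aborting\n");
      return -1;
    }
    return 0;
  }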

Let me just put emphasis on this: SSL and SSH are secure. The insecurity here is not due to how the protocols work; rather, the flaws appear when we mix in real-world users, UIs and so on.

I don’t have any sensible solutions to these problems myself. I’m crap at designing things for mere humans and UIs etc and I make no claims of understanding end users.

It seems there’s a nice tool called ettercap that’s supposedly a fine thing to use when you want to run your own MITM attacks on your LAN! And on the other side: an interesting take on improving the “accept this certificate” UI is offered by the Firefox Perspectives plugin, which basically also checks N other sources’ view of the site to help you decide whether to trust a certificate.

I want to round off my rant with a little quote:

“I have little, and decreasing, desire to continue to invest in strong security for a product that discards that security for the masses” [*] / Nelson B Bolyard – prominent NSS hacker

Curl Cyclomatic Complexity

I was at the OWASP Sweden meeting last night and spoke about open source and security. One of the other speakers was Simon Josefsson, who in his talk showed a nice table listing functions in his project sorted by “complexity”. Functions above a certain score are considered “high risk”, as they are hard to read and follow and thus may be subject to security problems.

Kind man that he is, Simon already publishes a Curl Cyclomatic Complexity Report page that nicely identifies a bunch of functions whose complexity we should really consider reducing. The top-10 “bad” functions are:

Function             Score  Statements  Lines  Source file
ssh_statemach_act      254         880   1582  lib/ssh.c
Curl_http              204         395    886  lib/http.c
readwrite_headers      129         269    709  lib/transfer.c
Curl_cookie_add        118         247    502  lib/cookie.c
FormAdd                105         210    421  lib/formdata.c
dprintf_formatf         92         233    395  lib/mprintf.c
multi_runsingle         94         251    606  lib/multi.c
Curl_proxyCONNECT       74         212    443  lib/http.c
readwrite_data          73         127    319  lib/transfer.c
ftp_state_use_port      60         195    387  lib/ftp.c

I intend to use this as an indication of which functions within libcurl to work on. My plan is primarily to break each of these functions down into smaller ones, to make them easier to read and follow. It would be cool to get every single function below 50, but I’m not sure that’s feasible or even really a good idea.
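
To illustrate the kind of rework I have in mind, here is a made-up sketch (not actual libcurl code): instead of one huge function holding a state machine with all the per-state logic inlined, the dispatcher stays tiny and each state gets its own small, reviewable function.

  typedef enum { ST_CONNECT, ST_AUTH, ST_DONE } state_t;

  /* each state handler is small enough to read and test on its own */
  static state_t handle_connect(void) { /* ... connect logic ... */ return ST_AUTH; }
  static state_t handle_auth(void)    { /* ... auth logic ... */    return ST_DONE; }

  static state_t run_one_state(state_t state)
  {
    /* the dispatcher itself adds almost nothing to the complexity score */
    switch(state) {
    case ST_CONNECT: return handle_connect();
    case ST_AUTH:    return handle_auth();
    default:         return ST_DONE;
    }
  }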

Security and Open Source

OWASP Sweden is arranging an event on October 6th in Stockholm, Sweden, to talk about security in the open source process.

I will be there doing a talk about security in open source projects, and in particular how we work with security in the curl project. If you think of anything in particular you would like me to address or include, feel free to give me a clue before the event!