Tag Archives: cURL and libcurl

50 hours offline

Several sites in the haxx.se domain and other stuff related to me and my fellows were completely offline for almost 50 hours between August 24th 19:00 UTC and August 26th 20:30 UTC.

The sites affected included the main web sites for the following projects: curl, c-ares, trio, libssh2 and Rockbox. It also affected mailing lists and CVS repositories etc for some of those.

The reason for the outage has been explained by the ISP (Black Internet) to be because of some kind of sabotage. Their explanation given so far (first in Swedish):

Strax efter kl 20 i måndags drabbades Black Internet och Black Internets kunder av ett mycket allvarligt sabotage. Sabotaget gjordes mot flera av våra core-switchar, våra knutpunkter. Detta resulterade i ett mer eller mindre totalt avbrott för oss och våra kunder. Vi har polisanmält händelsen och har ett bra samarbete med dem.

Translated to English (by me) it becomes:

Soon after 8 pm on Monday, Black Internet and its customers were struck by a very serious act of sabotage. The sabotage was directed at several of our core switches, our network nodes. This resulted in a more or less total outage for us and our customers. We have reported the incident to the police and are cooperating well with them.

Do note that you could keep track of the situation as it unfolded by following me on twitter.

It’s good to be back. Let’s hope it’ll take ages until we go away like that again!

Update: according to my sources, someone erased/cleared Black Internet’s core routers, and they then learned that they had no working backups, so they had to restore everything by hand.

fully respect your rights

This is [name removed] writing at Toshiba Corporation.

We are considering using your program curl (http://curl.haxx.se/) in our products. Before going any further, however, we would like to confirm the following so that we are sure to fully respect your rights.

I am so impressed. Thank you Toshiba for being this upfront and courteous when incorporating an open source product. The license is perfectly free and open for you to use curl for this purpose, but the sheer act of this “making sure” gets my 10 points for great business conduct.

curl fooled by null-prefix

We’ve just now released a security advisory on curl and libcurl regarding how a forger can trick libcurl into verifying a forged site as having a valid certificate, simply by having a CA issue one with a carefully crafted embedded zero…

I think this flaw shines a bright light on the problems we face in keeping code safe and secure. When writing code, and as in this case using C, we might believe we’re mostly vulnerable to buffer overflows, pointer mess-ups, memory leaks or similar. Then we see this fascinatingly imaginative “attack” creep up…

The theory in short and somewhat simplified:

A server certificate is always presented by a server when a client connects to it using SSL. The certificate contains the server’s name. The client verifies that A) the cert is signed by the correct authority and B) the cert has the correct name inside.

The A) part works because servers buy their certs from a CA whose root certificate is shipped in all browsers, and thus we can be “cryptographically safe” when we see a match.

This latest flaw was in the naming part (B). Apparently someone managed to trick a CA into handing out a cert with an embedded zero byte in the name. If we at haxx.se were to buy such a cert, we could get one with an embedded zero like:

“example.com\0.haxx.se”

Now, this works fine in certificates since they store the string and its length separately. In C, we’re used to strings being terminated with a trailing zero… so if we were to take over the “example.com” HTTPS server, we could present our legitimately purchased certificate there, and clients would use strcmp() or the equivalent to check the name in the certificate against the host name they try to connect to.

The embedded zero makes strcmp(host, certname) report a match, and the client is successfully fooled.
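
To make the failure concrete, here is a minimal stand-alone C sketch of that comparison going wrong, using the example names from above:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
    /* the host name the client intended to connect to */
    const char *host = "example.com";

    /* the name from the forged certificate: X.509 stores it together
       with an explicit length (20 bytes of data here), but a C string
       comparison stops at the first zero byte */
    const char certname[] = "example.com\0.haxx.se";

    if(strcmp(host, certname) == 0)
      printf("MATCH - the client is fooled!\n");
    else
      printf("no match\n");
    return 0;
  }

strcmp() never sees the “.haxx.se” suffix; as far as C string functions are concerned, the certificate name is simply “example.com”.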

curl is no longer vulnerable to this trick since 7.19.6, and we have released a boatload of patches for older versions in case upgrading is not an option.

curl 7.19.6 is here!

Yet again we strike back with an update to the popular download tool curl and the transfer library libcurl.

Notable changes this time include:

  • A security related fix, for the flaw named CVE-2009-2417.
  • CURLOPT_FTPPORT (and curl’s -P/--ftpport) now supports port ranges
  • Added CURLOPT_SSH_KNOWNHOSTS, CURLOPT_SSH_KEYFUNCTION and CURLOPT_SSH_KEYDATA so that both the library and the curl tool now understand and work with OpenSSH style known_hosts files (if built with libssh2 1.2 or later)
  • CURLOPT_QUOTE, CURLOPT_POSTQUOTE and CURLOPT_PREQUOTE can be told to ignore error responses when used with FTP. Handy if you want to run custom commands that may fail while still properly enjoying persistent connections.

Let me just mention that the known_hosts support will make SCP and SFTP transfers done with curl one step more secure. My work on this feature (both in libssh2 and in libcurl) was sponsored by a well-known company that shall remain unidentified at their request.
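
To give an idea of how the new known_hosts options fit together, here is a minimal sketch of an SFTP transfer using them. The URL, the known_hosts path and the accept/reject policy in the callback are placeholders of my own choosing, not recommendations:

  #include <curl/curl.h>

  /* Example policy: accept keys that match the known_hosts file, add
     previously unknown hosts to it, and reject everything else. */
  static int keycb(CURL *easy, const struct curl_khkey *knownkey,
                   const struct curl_khkey *foundkey,
                   enum curl_khmatch match, void *clientp)
  {
    (void)easy; (void)knownkey; (void)foundkey; (void)clientp;
    switch(match) {
    case CURLKHMATCH_OK:
      return CURLKHSTAT_FINE;             /* key matched known_hosts */
    case CURLKHMATCH_MISSING:
      return CURLKHSTAT_FINE_ADD_TO_FILE; /* new host: trust and store */
    default:
      return CURLKHSTAT_REJECT;           /* mismatch: abort the transfer */
    }
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.txt");
      curl_easy_setopt(curl, CURLOPT_SSH_KNOWNHOSTS,
                       "/home/user/.ssh/known_hosts");
      curl_easy_setopt(curl, CURLOPT_SSH_KEYFUNCTION, keycb);
      curl_easy_setopt(curl, CURLOPT_SSH_KEYDATA, NULL);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }

libcurl calls the callback once it has compared the server’s key against the known_hosts file, and the callback’s return value decides whether the transfer proceeds.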


libcurl in package management

A few days ago I noticed that the “urlgrabber” project has now switched to using pycurl (the python libcurl binding) in its bleeding edge development. This means that projects built on it, such as the well-known apps yum and anaconda, then use libcurl. The Suse installer YaST has already been using libcurl for ages, and a few months ago I learned that the OpenSolaris package management (pkg) is also switching to become pycurl based.

According to the lead man on the urlgrabber project, Seth Vidal, there are several reasons to switch away from Python’s native urllib for (mostly) HTTP transport, and he was friendly enough to mention a few to me. Clearly the two primary reasons are FIPS certification and urllib’s lacking HTTP proxy support. FIPS certification is something the Fedora project has been pushing hard for recently, and thus they’ve worked hard on making libcurl support NSS for SSL/TLS. Proper HTTP proxy support is supposedly hard to push into urllib itself due to its stagnant development.

In Debian-esque worlds, libcurl and curl are already used by the package system in the form of apt-transport-https and apt-file.

It seems that when you run an open source operating system tomorrow, chances are that libcurl is in the back-end of the package system.

curl 7.19.5

I’m happy to say that we’ve just shipped our 111th public release of curl and libcurl: 7.19.5

Notable changes this time include:

  • libcurl now closes all dead connections whenever you attempt to open a new connection
  • libssh2’s version number can now be figured out at run-time instead of using the build-time fixed number
  • CURLOPT_SEEKFUNCTION may now return CURL_SEEKFUNC_CANTSEEK (see the sketch below)
  • curl can now upload with resume even when reading from a pipe
  • a build-time configured curl_socklen_t is now used instead of socklen_t

… and there are at least 29 bugs fixed. All this during 75 days since the last release.
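
Out of these, the new CURL_SEEKFUNC_CANTSEEK return code deserves a small illustration. Below is a rough sketch of an upload reading from stdin (possibly a pipe), where the seek callback simply declares itself unable to seek; the URL is a made-up placeholder and error handling is omitted:

  #include <stdio.h>
  #include <curl/curl.h>

  /* A seek callback for a source that cannot rewind, such as a pipe.
     Returning CURL_SEEKFUNC_CANTSEEK (new in 7.19.5) tells libcurl that
     seeking is impossible, so it may try to work around it (for example
     by reading and discarding data) instead of failing the transfer. */
  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    (void)userp;
    (void)offset;
    (void)origin;
    return CURL_SEEKFUNC_CANTSEEK;
  }

  /* feed upload data from stdin */
  static size_t read_cb(char *buffer, size_t size, size_t nitems, void *userp)
  {
    return fread(buffer, size, nitems, (FILE *)userp);
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/upload.bin");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, stdin);
      curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
      curl_easy_setopt(curl, CURLOPT_SEEKDATA, NULL);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }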

Thanks everyone!

Dear Apple Inc

Dear Apple Inc,

As one of the primary authors of libcurl and curl, two components that have shipped in every Mac OS X release for years, I was only wondering if you would consider sponsoring me with a Mac, to make it easier for me to do (lib)curl development, tuning and bug-fixing on/for the Mac?

I really don’t have any particular income from Macs so I don’t see how I can personally motivate spending some 2000 USD on a Mac only for curl. And to be honest, I can’t think of any other reason to get a Mac either!

I did look around Apple’s web site to find an email address of someone to send my plea to, but I failed. So I’ll just put it here. I have exactly no hope of actually accomplishing anything with this other than putting some attention on how things are.

This post was triggered by recent libcurl bugs that seem to show up only on Mac!

Getting support to curl

The other day I read this blog post by Stormy Peters, talking about getting people to sponsor or support Open Source projects, in which she went on to describe the Gnome approach and a bunch of projects that accept donations and so on.

It made me (not too surprising) think about the situation for our little project cURL. We’re independent of any umbrella organization (GNU, ASF, etc) and we don’t have any vendor or company backing that pays for daily development or maintenance. We don’t have any legal entity or formal organization behind the project. We’re all just a bunch of people on some mailing lists.

We do have occasional companies and vendors who step up and pay individual developers to add features or provide various kinds of support, but they’re all basically single-shot occurrences and nothing that’s done on an ongoing basis.

Our products are used in all Linux distros, by hundreds of companies and so on. We’re a fairly active team, continuously working on bug fixes, tweaks and adding new features.

What can we do to make us more attractive for more support or active sponsoring by some vendor(s)?

Would joining an “umbrella” organization or forming a legal entity make it any more likely to happen?

Isn’t it so that if the project is already mature and good enough, there’s actually very, very little incentive for any company to take it under its wing? The market economy makes it a lot more profitable to simply use it as it is, and if – at worst – something really hits the fan, you can pay someone at that crisis point to fix up the immediate problem. And then continue like before.

And to be honest, I think we are proving to everyone that it works this way by continuing to deliver rock solid quality software. For no price. Completely open source. Year after year. Darnit, it’s just too fun to stop!


Adding known hosts support

… to libcurl and libssh2!

I’m about to start this little mini adventure, so if you’re one of the guys out there who’s been looking forward to being able to do even more (Open)SSH-like things with curl and libcurl when using SCP and SFTP, then consider this a little notification to start listening!

This will require improvements and changes in both projects, and funnily enough I’m already knee-deep in both, so that shouldn’t cause any problems. I do however greatly appreciate feedback and reviews of my pending implementation proposals! I want this done in a way that benefits many and that isn’t likely to break, at least in the foreseeable future.

Ok, enough of that. Stand by for posts to the mailing lists. I’ll start off with the libcurl one which will thus be a slightly higher level API for all this. I’ll update this blog post later on to feature direct links to my proposals. Please consider posting responses to the suggestions to the appropriate mailing list!

The libcurl proposal

The first mail to libssh2-devel

HTTP Status Report

Mark Nottingham held a very interesting one-hour talk on the status of HTTP and the work on HTTPbis at a recent QCon conference, and luckily for us HTTP geeks there’s a great video/presentation from it.

curl is mentioned at least twice in the slides. Unfortunately, the second mention contains a wrong fact: it says curl uses “Pragma: no-cache”, but that isn’t true anymore. It used to do that, but we stopped a while ago.

I’m a subscriber to the httpbis mailing list and a casual contributor, but nonetheless his summary and overview of the state of things was refreshing, as I haven’t been able to keep up with all the details and I haven’t been tracking that working group from its start either.