Category Archives: cURL and libcurl

curl and/or libcurl related

Fosdem 2011: my libcurl talk on video

Kai Engert was good enough to capture all the talks in the security devroom at Fosdem 2011, and while I’m seeding the full torrent I’ve made my own talk available as a direct download from here:

Fosdem 2011: security-room at 14:15 by Daniel Stenberg

The file is about 107MB, 640×480 resolution and roughly 26 minutes of playing time, in WebM format.

libcurl, seven SSL libs and one SSH lib

I did a talk today at Fosdem with this title. The room only had 48 seats and it was completely packed, with people standing wherever they could around the seated crowd.

The English slides from my talk are below. It was also recorded on video, so I hope I’ll be able to post that too once it becomes available online.

Going to FOSDEM 2011

We’re going to FOSDEM again. This year we’ll ship over the entire company (all three of us) and we’ll join up with a few fellow Rockbox hackers and spend a weekend in Brussels among thousands of fellow free software and open source hackers.

During this conference, 5-6 February, I submitted a libcurl-related talk to the embedded room that wasn’t accepted into the regular program, but I’ve agreed to still prepare it so I might get a slot in case someone gets sick or similar. A bit of a thankless deal, since I still have to prepare my slides but there’s a big risk I’ve done it in vain! I’ve also submitted a suggestion for a second talk in the opensc/security room (also related to stuff in the curl project), but as of now (with only 16 days left) that schedule is yet to be announced, so I don’t know if I’ll do a talk there or not.

So, I might do no talks. I might do two. I just don’t know. We’ll see.

If you’re a friend of mine and you’re going to FOSDEM this year, please let me know and we can meet up and have a chat or whatever. I love putting faces to all the names, nicks and email addresses of the many people I otherwise only see in text.

Update: My talk in the security room is titled “libcurl: Supporting seven SSL libraries and one SSH library” and will start at 14:15 on Saturday the 5th of February.

“Hacking me”

If you ever wondered how clever it was of me to once make an FTP tool use the default anonymous password curl_by_daniel@..., and why I later changed that to ftp@example.com instead, here’s a golden snippet to just absorb and enjoy:

Date: Thu, 23 Dec 2010 22:56:00
From: iHack3r <hidden>
To: info@[my company]
Subject: Hacking me

To the idiot named Daniel, Please stop brute force attacking my FTP client. I do not appreciate it, i have an anonymous account set up for the general public to access my files that i want them to access, QUIT trying to hack the admin because 1. DISABLED unless i am leaving to go somewhere without my computer 2: THE PASSWORD is random letters and numbers.

-iHack3r

The password was changed on Feb 13 2007 in curl version 7.16.2, but a surprisingly large number of older curls are still around out there…

Update: as the person responded again after having read this blog post and still didn’t get it, I felt the urge to speak up in even clearer terms:

I didn’t have anything to do with any “hacker attack” on any site. Not yours, and not anyone else’s. The fact that almost-my-email address appeared in your logs is because I wrote the FTP client. It is a general FTP client that is being used by a very, very large number of people all over the world. If I ever were to attack a site, why on earth would I send along my real name or email address?

Byte ranges for FTP

In the IETF ftpext2 working group there have been some talks around clients’ and servers’ ability to do and support “ranged” file transfers, that is transferring only a piece of any given file. FTP supports the REST command and has done so since the dawn of man (RFC765 – June 1980), and using that command, a client can set the starting point for a transfer but there is no way to set the end point. HTTP has supported the Range: header since the first HTTP 1.1 spec back in January 1997, and that supports both a start and an end point. The HTTP header does in fact support multiple ranges within the same header, but let’s not overdo it here!

Currently, to avoid getting an entire file a client would simply close the data connection when it has got all the data it wants. The unfortunate reality is that some servers don’t notice clients doing this, so in order for this to work reliably a client also has to send ABOR, and after this command has been sent there is no way for the client to reliably figure out the state of the control connection so it has to get closed as well (which is crap in case more files are to be transferred to or from the same host). It primarily becomes unreliable because when ABOR is sent, the client gets one or two responses back due to a race condition between the closing and the actual end of transfer etc, and it isn’t possible to tell exactly how to continue.
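As a small illustration of what a client can do today, here is a minimal libcurl sketch (the URL and range are just placeholders) that asks for the first kilobyte of a file. For HTTP this turns into a Range: request; for FTP, libcurl does roughly what is described above: it sets the start point with REST and has to cut the transfer short itself at the end point.

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* placeholder URL; "0-1023" asks for the first 1024 bytes */
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/big.iso");
    curl_easy_setopt(curl, CURLOPT_RANGE, "0-1023");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}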

A solution for the future is being worked on. I’ve joined the effort to write a spec that will suggest a new FTP command that sets the end point for a transfer, in the same vein as REST sets the start point. For the moment, we’ve named our suggested command RANG (short for range). “We” in this context means Tatsuhiro Tsujikawa, Anthony Bryan and myself, but we of course hope to get further valuable feedback from the great ftpext2 people.

There are already use cases that want range requests for FTP. The people behind metalinks, for example, want to download the same file from many servers, and then it makes sense to be able to download little pieces from different sources.

The people who found the libcurl bugs I linked to above use libcurl as part of the Fedora/Redhat installer Anaconda, and if I understand things right they use this feature to just get the beginning of some files to check them out, avoiding having to download a full file before they know they truly want it. Thus it saves lots of bandwidth.

In short, the use-cases for ranged FTP retrievals are quite likely pretty much the same ones as they are for HTTP!

The first RANG draft is now available.

Scalable application layer transfers

At FSCONS 2010 I had the pleasure to do a talk about how to make your client-side networking applications really scale when upping the number of simultaneous connections, including some details on how libcurl will support you all the way!

My talk was named “Scalable application layer transfers” and the slides from it are available online. See below. Hopefully the video recording of it will appear later and I’ll post a follow-up then. A little extra bonus material as background would be my poll vs select vs event-based article.
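To give a flavor of the kind of API that helps here, below is a minimal sketch of libcurl’s multi interface driving two transfers from a single thread using select(). It is only an illustration: the URLs are placeholders, and real code would check return codes and read per-transfer results with curl_multi_info_read().

#include <curl/curl.h>
#include <sys/select.h>

int main(void)
{
  const char *urls[] = { "http://example.com/a", "http://example.com/b" };
  CURLM *multi = curl_multi_init();
  CURL *easy[2];
  int i, running = 0;

  for(i = 0; i < 2; i++) {
    easy[i] = curl_easy_init();
    curl_easy_setopt(easy[i], CURLOPT_URL, urls[i]);
    curl_multi_add_handle(multi, easy[i]);
  }

  do {
    fd_set rd, wr, exc;
    struct timeval tv;
    int maxfd = -1;
    long timeout_ms = 1000;

    /* let libcurl do whatever it can do right now */
    curl_multi_perform(multi, &running);

    /* then wait for activity on any of the transfers' sockets */
    FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&exc);
    curl_multi_fdset(multi, &rd, &wr, &exc, &maxfd);
    curl_multi_timeout(multi, &timeout_ms);
    if(timeout_ms < 0)
      timeout_ms = 1000;
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;
    if(maxfd >= 0)
      select(maxfd + 1, &rd, &wr, &exc, &tv);
  } while(running);

  for(i = 0; i < 2; i++) {
    curl_multi_remove_handle(multi, easy[i]);
    curl_easy_cleanup(easy[i]);
  }
  curl_multi_cleanup(multi);
  return 0;
}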

As I mentioned in a previous post, the room was chock-full when I started preparing my equipment for the talk since the session before me was a keynote, but by the time I actually started presenting only the limited set of hardcore geeks remained.

In the FSCONS program there were several talks over the weekend about women in FOSS and so on, while I on the other hand certainly only contributed to enforcing the stereotypes by being white, male, middle-aged and very techy, and I delivered my two speeches to audiences in which I believe not a single woman was present. Whether I am part of the problem or the solution we can discuss in a separate post later on… 🙂

The award for me

I was asked what the Nordic Free Software Award that I received last year meant to me. This was my response that I now repost here for the public to see:

To me, the NFSA is a recognition from my own kind. A really big thumbs-up from within my own team. From fellow hackers who know.

In a world where we spend lots and lots of time alone in front of screens during long dark hours, where most of what you do is just silently pushed into source code repositories or consumed by eager downloaders distributed all over the world, getting that kind of positive feedback is invaluable.

I found it to be not only a very big ego boost, but it also really ignited my desire to do more, to reach further and to prove that receiving the award is the beginning and not the end of what I am set to do in our free software world. In my particular case it was a primary factor behind the Foss-sthlm network that I co-founded not long after I got the award. I’ve pushed foss-sthlm forward during this year with several meetings with a hundred or more attendees.

Getting weird looks from outsiders or a thank you from the occasional user is fun, but getting an award from people who actually know what you might have done and what it takes to do it, is priceless.

I’m perfectly aware that I am the super-nerd. I’m not the social guy. I’m not the person who unites crowds or inspires teams to create miracles. I’m a software developer and I design and create code. Lots of it. I debate technical details, protocols and choices on mailing lists. Lots of them. I share as much as possible of all that, of course, and I’m thrilled that what I do is considered this good and is appreciated to this extent.

Everyone doing volunteer work wants to get recognition for their efforts. I got it. Thank you!

During the social event at FSCONS 2010, when we announced this year’s winner and handed him his prizes, I was also given the prize I never received last year because I wasn’t around at the actual award ceremony then. And of course, these guys love puns so…

Award prizes

From the left: a box with rocks (for my work on Rockbox), a transformer toy (I don’t quite recall the reason for that) and curlers (for my work on curl). Click on the image to see it in full resolution; it was taken with my crappy mobile camera.

Living With Open Source

As a session during the Internetdagarna conference (organized by .SE), Björn Stenberg, Daniel Melin and I joined up to talk about open source with the title “Living With Open Source” (“Att Leva med Öppen Källkod” in the language of the brave: Swedish) on October 27. We did a 90 minute session split up between the three of us. The session was in Swedish and it was recorded, so I expect that it will be made available online soon for those who are curious but didn’t attend.

Björn Stenberg during “Att leva med öppen källkod”

Björn (in the picture above) started off by talking about how to work with Open Source as a user when using Open Source components: how to deal with changes, sending them upstream, the cost of keeping changes private, etc.

Daniel Melin continued and talked about open source licensing. It is quite clearly an area that people find tricky and mysterious, judging from the many questions that followed. I think large parts of the audience weren’t very advanced or well versed in open source details, so of course there is a lot to learn and to talk about. I think we all felt that we tried to cover quite a lot, which together with the questions was hard to fit within the given time.

I ended our trio of talks by talking about open source from a producer’s viewpoint, how we view things in a typical open source project, and I used a lot of details and factual points from the cURL project.

The audience consisted of perhaps 50 people. We had a rather nerdy subject and we had tough competition from five other parallel sessions, with some of them featuring Internet and other local celebrities.

Overall, I think we did well. The idea that held our three talks together worked fine, we kept to the schedule pretty well, the audience seemed to enjoy it and I had a great time. And we got a really nice lunch afterwards!

curl: ten years of more code and contributors

It feels like I’ve been doing curl forever, while in fact it is “only” in its early teens. I decided to dig up some numbers on how the development within the project has gone over the last decade: how things have changed during the 10 most recent years.

To spice up the numbers, I generated some graphs based on them, and then, to make them nice and presentable, I combined them all into a single graph using my super gimp powers.

Bugs, lines of code and contributors over time in curl

Click the image to get a full resolution version. But even the small one shows the data I wanted to illustrate: we gain contributors at roughly the same speed as we grow in lines of code. And at the same time we get roughly the same number of bug reports over the years, apparently independently of the amount of code and contributors! Note that I separate the bugs-fixed bars from the bug-report bars because bugs fixed is the number of bugfixes mentioned in release notes while bug reports is the count in the web-based bug tracker. As seen, we fix a lot more bugs than get submitted in the bug tracker.

I should add that the reason the green contributor line starts out a little slow and gets a speed bump after a while is that I changed my way of working at that point and got much better at tracking all contributors exactly. The general angle of the curve for the last 4-5 years is, however, what I think is the interesting part of it: it is basically the same angle as the source code increase.

The bug report counter is merely taken from our bug tracker at sourceforge, which is a very inexact count as a very large number of bugs are reported on the mailing lists only.

Data from the curl release table tells us that during these 10 years we’ve done 77 releases in which we fixed 1414 bugs. That’s 18.4 bug fixes per release, one release roughly every 47 days, and 141 bug fixes per year on average.

To see how this development has changed over time I decided to compare those numbers against those for the most recent 2.5 years. During this most recent 25% of the period we’ve done releases every 60 days on average but counted 155 bug fixes per year. That means the average number of bug fixes per release has gone up to 26: one bugfix every 2.3 days.

A more negative interpretation of this could be that we’re only capable of a certain number of bug fixes per unit of time, so no matter how much code we get, we fix bugs at roughly the same rate. The fact that we don’t get an increasing number of bug reports of course speaks against this theory.

Testing 2-digit year numbers in cookies

In the current work of the IETF http-state working group, we’re documenting how cookies work. The question came up of how browsers and clients treat years in ‘expires’ strings if the year is only specified with two digits. And more precisely, is 69 in the future or in the past?

I decided to figure that out. I set up a little CGI that can be used to check what your browser thinks:

http://daniel.haxx.se/cookie.cgi

It sends a single cookie header that looks like:

Set-Cookie: testme=yesyes; expires=Wed Sep  1 22:01:55 69;

The CGI script looks like this:

print "Content-Type: text/plain\n";
print "Set-Cookie: testme=yesyes; expires=Wed Sep  1 22:01:55 69;\n";
print "\nempty?\n";
print $ENV{'HTTP_COOKIE'};

You see that it prints the incoming Cookie: header, so if you reload that URL you should see “testme=yesyes” being output if the cookie is still there. If it is, your browser of choice treats the date above as a date in the future.

So, which browsers think 69 is in the future and which think it is in the past? Feel free to try out more browsers and tell me the results; this is the list we have so far:

Future:

Firefox v3 and v4 (year 2069)
curl (year 2038)
IE 7 (year 2069)
Opera (year 2036)
Konqueror 4.5
Android

Past:

Chrome (both v4 and v5)
Gnome Epiphany-Webkit

Thanks to my friends in #rockbox-community that helped me out!

(this info was originally posted to the httpstate mailing list)

Beyond just “69”

(this section was added after my first post)

After having done the above basic tests, I proceeded to write a slightly more involved test that sets 100 cookies in this format:

Set-Cookie: test$yy=set; expires=Wed Oct  1 22:01:55 $yy;

When the user reloads this page, the page prints all “test$yy” cookies that get sent to the server. The results with the various browsers are very interesting. These are the ranges different browsers consider to be in the future:

  • Firefox: 21 – 69 (Safari and Fennec and MicroB on n900) [*]
  • Chrome: 10 – 68
  • Konqueror: 00 – 99 (and IE3, Links, Netsurf, Voyager)
  • curl: 10 – 70
  • Opera: 41 – 69 (and Opera Mobile) [*]
  • IE8: 31 – 79 (and slimbrowser)
  • IE4: 61 – 79 (and IE5, IE6)
  • Midori: 10 – 69 (and IBrowse)
  • w3m: 10 – 37
  • AWeb: 10 – 77
  • Nokia 6300: [none]

[*] = Firefox has a default limit of 50 cookies per host, which explains this funny range. When I changed the config ‘network.cookie.maxPerHost’ to 200 instead (thanks to Dan Witte), I got the more sensible and expected range 10 – 69. Opera has a similar thing: it has a limit of 30 cookies by default, which explains the 41 – 69 range in this case. It would otherwise get 10 – 69 as well (thanks to Stanislaw Adrabinski). I guess that the IE8 range is similarly restricted due to it using a limit of 50 cookies per host and an epoch at 1980.

I couldn’t stop myself from trying to work out what this means. The ranges can roughly be summarized like this:

0-9: mostly in the past
10-20: almost always in the future, except for Firefox
21-30: even more likely to be in the future, except for IE8
31-37: everyone but Opera thinks this is the future
38-40: now w3m and Opera think this is the past
41-68: everyone but w3m thinks this is the future
69: Chrome and w3m say past
70: curl, IE8 and Konqueror say future
71-79: IE8 and Konqueror say future, everyone else says past
80-99: Konqueror says future, everyone else says past
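For what it is worth, here is a minimal sketch of the kind of cutoff rule this all points at, with the split between 69 and 70 that several of the parsers above seem to apply. It is only an illustration of the idea, not the exact code of any particular browser or of curl.

/* expand a two-digit year with a cutoff at 70:
   00-69 become 20xx and 70-99 become 19xx */
int expand_two_digit_year(int yy)
{
  if(yy >= 0 && yy <= 69)
    return 2000 + yy;
  if(yy >= 70 && yy <= 99)
    return 1900 + yy;
  return yy; /* already a full four-digit year */
}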

How to test a browser near you:

  1. go to http://daniel.haxx.se/cookie2.cgi
  2. reload once
  3. the numbers shown on the screen are the year numbers your browser considers to be in the future, as described above