Category Archives: cURL and libcurl

curl and/or libcurl related

Living With Open Source

As a session during the Internetdagarna conference (organized by .SE), Björn Stenberg, Daniel Melin and I joined up to talk about open source with the title “Living With Open Source” (“Att Leva med Öppen Källkod” in the language of the brave: Swedish) on October 27. We did a 90-minute session split up between the three of us. The session was in Swedish and it was recorded, so I expect that it will be made available online soon for those who are curious but didn’t attend.

Björn Stenberg during “Att leva med öppen källkod”

Björn (in the picture above) started off by talking about how to work with open source as a user of open source components: how to deal with changes, sending them upstream, the cost of keeping changes private, etc.

Daniel Melin continued and talked about open source licensing. It is quite clearly an area that people find tricky and mysterious, judging from the many questions that followed. I think large parts of the audience weren’t very advanced or well versed in open source details, so then of course there is a lot to learn and to talk about. I think we all felt that we tried to cover quite a lot, which together with the questions was hard to fit within the given time.

I ended our triplet by talking about open source from a producer’s viewpoint, how we view things in a typical open source project and I used a lot of details and factual points from the cURL project.

The audience consisted of perhaps 50 people. We had a rather nerdy subject and we had tough competition from five other parallel sessions, with some of them featuring Internet and other local celebrities.

Overall, I think we did well. The idea that held our three talks together worked out fine I think, we kept to the schedule pretty well, the audience seemed to enjoy it and I had a great time. And we got a really nice lunch afterwards!

curl: ten years of more code and contributors

It feels like I’ve been doing curl forever, while in fact it is “only” in its early teens. I decided to dig up some numbers on how the development has gone within the project over the last decade. How have things changed during the 10 most recent years?

To spice up the numbers, I generated some graphs based on them, and then, to make the graphs nice and presentable, I merged them all into a single image using my super GIMP powers.

Bugs, lines of code and contributors over time in curl

Click the image to get a full resolution version. But even the small one shows the data I wanted to illustrate: we gain contributors at roughly the same speed as we grow in lines of code. And at the same time we get roughly the same amount of bug reports over the years, apparently independently of the amount of code and contributors! Note that I separate the bugs-fixed bars from the bug-report bars because bugs fixed is the number of bug fixes mentioned in release notes, while bug reports is the count in the web-based bug tracker. As seen, we fix a lot more bugs than get submitted in the bug tracker.

I should add that the reason the green contributor line starts out a little slow and gets a speed bump after a while is that I changed my way of working at that point and got much better at tracking exactly all contributors. The general angle of the curve for the last 4-5 years is however what I think is the interesting part: it is basically the same angle as the source code increase.

The bug report counter is merely taken from our bug tracker at sourceforge, which is a very inexact count as a very large amount of bugs are reported on the mailing lists only.

Data from the curl release table tells that during these 10 years we’ve done 77 releases in which we fixed 1414 bugs. That’s 18.4 bug fixes per release, one release roughly every 47 days and 141 bug fixes per year, on average.

To see how this development has changed over time, I decided to compare those numbers against those for the most recent 2.5 years. During this most recent 25% of the period we’ve done releases every 60 days on average, but counted 155 bug fixes per year. That means the average number of bug fixes per release has gone up to 26, with one bug fix every 2.3 days.

A more negative interpretation of this could be that we’re only capable of a certain amount of bug fixes per unit of time, so no matter how much code we get we fix bugs at roughly the same rate. The fact that we don’t get an increasing amount of bug reports of course speaks against this theory.

Testing 2-digit year numbers in cookies

In the current work of the IETF http-state working group, we’re documenting how cookies work. The question came up how browsers and clients treat years in ‘expires’ strings if the year is only specified with two digits. And more precisely, is 69 in the future or in the past?

I decided to figure that out. I set up a little CGI that can be used to check what your browser thinks:

http://daniel.haxx.se/cookie.cgi

It sends a single cookie header that looks like:

Set-Cookie: testme=yesyes; expires=Wed Sep  1 22:01:55 69;

The CGI script looks like this:

print "Content-Type: text/plain\n";
print "Set-Cookie: testme=yesyes; expires=Wed Sep  1 22:01:55 69;\n";
print "\nempty?\n";
print $ENV{'HTTP_COOKIE'};

You can see that it prints the Cookie: header, so if you reload that URL you should see “testme=yesyes” in the output if the cookie is still there. And if it is, your browser of choice treats the date above as a date in the future.

So, which browsers think 69 is in the future and which think it is in the past? Feel free to try out more browsers and tell me the results. This is the list we have so far:

Future:

Firefox v3 and v4 (year 2069)
curl (year 2038)
IE 7 (year 2069)
Opera (year 2036)
Konqueror 4.5
Android

Past:

Chrome (both v4 and v5)
Gnome Epiphany-Webkit

Thanks to my friends in #rockbox-community that helped me out!

(this info was originally posted to the httpstate mailing list)

Beyond just “69”

(this section was added after my first post)

After having done the above basic tests, I proceeded and wrote a slightly more involved test that sets 100 cookies in this format:

Set-Cookie: test$yy=set; expires=Wed Oct  1 22:01:55 $yy;
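
I did this with another small CGI. Written out in C just to show the mechanism (this is a sketch, not necessarily what my actual script looks like, and the exact cookie names and values are my own choices), the idea is roughly:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  /* send 100 cookies, one per two-digit year */
  printf("Content-Type: text/plain\n");
  for(int yy = 0; yy < 100; yy++)
    printf("Set-Cookie: test%02d=set; expires=Wed Oct  1 22:01:55 %02d;\n",
           yy, yy);

  /* the blank line ends the headers, then echo whatever cookies came back */
  const char *cookies = getenv("HTTP_COOKIE");
  printf("\n%s\n", cookies ? cookies : "no cookies received");
  return 0;
}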

When the user reloads this page, the page prints all “test$yy” cookies that get sent back to the server. The results with the various browsers are very interesting. These are the ranges the different browsers consider to be in the future:

  • Firefox: 21 – 69 (Safari and Fennec and MicroB on n900) [*]
  • Chrome: 10 – 68
  • Konqueror: 00 – 99 (and IE3, Links, Netsurf, Voyager)
  • curl: 10 – 70
  • Opera: 41 – 69 (and Opera Mobile) [*]
  • IE8: 31 – 79 (and slimbrowser)
  • IE4: 61 – 79 (and IE5, IE6)
  • Midori: 10 – 69 (and IBrowse)
  • w3m: 10 – 37
  • AWeb: 10 – 77
  • Nokia 6300: [none]

[*] = Firefox has a default limit of 50 cookies per host, which explains this funny range. When I changed the config ‘network.cookie.maxPerHost’ to 200 instead (thanks to Dan Witte), I got the more sensible and expected range 10 – 69. Opera has a similar thing: it has a limit of 30 cookies by default, which explains the 41 – 69 range in this case. It would otherwise get 10 – 69 as well (thanks to Stanislaw Adrabinski). I guess that the IE8 range is similarly restricted due to it using a limit of 50 cookies per host and an epoch at 1980.

I couldn’t help trying to parse what this means. The ranges can roughly be summarized like this:

0-9: mostly in the past
10-20: almost always in the future, except for Firefox
21-30: even more likely to be in the future, except for IE8
31-37: everyone but Opera thinks this is the future
38-40: now w3m and Opera think this is the past
41-68: everyone but w3m thinks this is the future
69: Chrome and w3m say past
70: curl, IE8 and Konqueror say future
71-79: IE8 and Konqueror say future, everyone else says past
80-99: Konqueror says future, everyone else says past

How to test a browser near you:

  1. go to http://daniel.haxx.se/cookie2.cgi
  2. reload once
  3. the numbers shown on the screen are the year numbers the browser considers
    to be in the future, as described above

What do you see in a future curl?

During the first few years of the cURL project I used to sum up something about the past year and what I expected of the year to come at every anniversary. I stopped that at some point as I felt I didn’t have much new to contribute; it mostly got repetitive over the years. I’ve never been the guy with the eyes fixed at the horizon and a grand plan of what to do in the future or work towards on a long-term basis. I’m more the guy with my feet firmly on the ground, solving today’s problems right now as well as possible so that it stays stable and fine tomorrow.

So here follows some thoughts and reasoning around stuff that may and may not be the future of curl and libcurl. I’m as always putting my trust in you my friends to help me out with the details…

Protocols

curl’s wide protocol support has surprised even me. While I think the number of protocols that can be supported without bending over backwards far too much is shrinking rapidly, I also think we’ve learned that there are always more protocols that can be supported and that people will want to use with curl and libcurl. I expect more to come.

Specific protocols that are on the watch list include:

  • Gopher – I just had to mention it because curl doesn’t currently support it. It was there once but got yanked out when we found out it didn’t work and hadn’t been working for a very long time without anyone noticing! The support is about to come back though, thanks to a patch being worked on right now.
  • SPDY is Google’s experimental HTTP replacement protocol which, if it turns out successful, seems like a very important piece for curl to grok.
  • Websockets will come to stay in your browser and it will happen soon. To remain a really powerful web scraping tool in a future with more and more sites switching to Websockets for parts of their functionality, I believe at least some level of support might be required.
  • SCTP. Long-shot. This transport layer protocol was standardized in 2007 but really has not taken off in any significant way, even though it features a lot of benefits compared to TCP in many aspects. The now dead HTTP over SCTP internet-draft shows that HTTP can indeed be moved over to use it, and it would solve a bunch of the same problems that SPDY does (and MPTCP). SCTP also has its own share of problems that hamper its adoption, primarily the lack of support in middle-boxes like NATs and routers.

Do you think there’s any other particular protocol we should support in the future?

Stability

I’m not sure if curl is particularly unstable. I don’t think it suffers from any unusual amount of bugs or anything, but I figure a decent thing to consider for the future is whether we do things within the project in a good way. Do we use the proper procedures so that we reach stability and produce a good product? Should we fix, change or replace certain ways or practices so that our output gets fewer flaws?

I suspect these questions are hard to answer for someone as involved and deep inside the project as myself. Possibly it is also hard for someone coming from the outside to convince us old-timers about anything like that…

Interface

The current libcurl API was basically introduced when libcurl was first made a reality back in the year 2000. While the multi interface and the subsequent multi_socket API were added later on, they are both heavily influenced and affected by the previous easy interface.
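
To illustrate that layering: the multi interface doesn’t replace the easy interface, it drives the very same easy handles, just without blocking. A minimal sketch (placeholder URL, no error checking):

#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);

  /* an easy handle describes one transfer, exactly as in the easy interface,
     where a single curl_easy_perform(easy) would run it blocking */
  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");

  /* the multi interface drives that same handle without blocking */
  CURLM *multi = curl_multi_init();
  curl_multi_add_handle(multi, easy);

  int running;
  do {
    curl_multi_perform(multi, &running);
    /* a real application waits for socket activity here instead of
       spinning like this */
  } while(running);

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}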

Maybe there’s a much better API we should have, one that would make it easier to make a better library and easier to write better applications using this library? It would require some serious brainstorming to come up with something, or perhaps there’s someone out there who already has thought something out?

Contributors

We’re but a few people who push commits to the master branch. How should we proceed to attract more? Should we work differently, perhaps have sub-maintainers for parts of the code etc? Can we work differently or make anything better to encourage more people to send bug reports and/or patches?

We do in fact have a very low bar to entry already, so I’m not aware of many additional things we can carve out to streamline this further.

Sponsors

As friends of mine know, I always feel a little bit of envy for projects and developers who manage to get corporate sponsors or otherwise enough paid support contracts to let them work on their pet projects full-time or at least part-time. I certainly have never reached that level for more than just a few months at a time. I’m not sure this is very important, as lack of funding doesn’t stop us, it just slows us down, and really, in an open source project like ours there aren’t many hard deadlines. If it doesn’t get done now, it’ll get done later if enough people want it to happen.

Getting funding isn’t always easy either, since there may very well appear a company that wants to pay for a feature to get added but we don’t agree that it is a feature suitable for the project…

Organization

At times I think of the long-term future of our little project. Like what’s gonna happen the day I want to give up my chieftainship or if a company would like to step up and sponsor the project, but won’t be able to because there’s no actual legal entity for the project to sponsor or pay.

Would it make sense to have a curl organization? We could (I presume) easily join an umbrella organization like the ASF, or perhaps, even more suitably, the Software Freedom Conservancy. I will certainly not advocate setting up our own non-profit or anything like that.

I don’t think the FSF or GNU will ever be a serious alternative, since a pretty solid design goal of mine has been to avoid the *GPL licenses for curl, to keep it attractive to commercial interests to a higher degree than (L)GPL licensed projects are. (This is not a license debate, so please don’t lecture me about the existence of GPL’ed commercial stuff.) I will not change license.

Feedback

One thing I’m sure of though, is that we will continue to listen to our users and the general curl and libcurl community about what to work on and what isn’t good and what we should do next. Please tell me your opinions and views on these matters!

curl performance

Benchmarks, speed, comparisons, performance. I get a lot of questions on how curl and libcurl compare against other tools or libraries, and I rarely have any specific answers as I personally basically never use or test any other tools or libraries!

This text will instead elaborate on how we work on libcurl and why I believe libcurl will remain the fastest alternative.


libcurl is low-level

libcurl is written in C and uses the native function calls of the operating system to perform its network operations. It offers a lot of features, but when it comes to plain sending and receiving of data the code paths are very short and the loops can’t be shortened or sped up by any significant amount. Based on these facts, I am confident that for simple single-stream transfers you really cannot write a file transfer library that runs faster. (But yes, I believe other similarly low-level libraries can reach the same speeds.)

When adding more complicated test cases, like doing SSL or perhaps many connections that need to be kept persistent between transfers, then of course libraries can start to differ. libcurl uses SSL libraries natively so if they are fast, so will libcurl’s SSL handling be and vice versa. Of course we also strive to provide features such as connection pooling, SSL session id reuse, DNS caching and more to make the normal and frequent use cases as fast as we possibly can. What takes time when using libcurl should be the underlying network operations, not the tricks libcurl adds to them.
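
To give a concrete example of the connection reuse: if an application does a series of requests with the same easy handle, libcurl keeps the connection alive and the DNS entry cached in between, so subsequent requests to the same host skip the connect phase entirely. A minimal sketch with placeholder URLs:

#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();

  /* first transfer: resolves the name, connects, transfers */
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/one.html");
  curl_easy_perform(curl);

  /* second transfer to the same host: the connection and the cached DNS
     entry from the first transfer get reused (with HTTPS, the SSL session
     would be reused too) */
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/two.html");
  curl_easy_perform(curl);

  curl_easy_cleanup(curl);  /* this is what finally closes the connection */
  curl_global_cleanup();
  return 0;
}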

event-based is the way to grow

If you plan on making an app that uses more than just a few connections, libcurl can of course still do the heavy lifting for you. You should consider taking precautions already when you do your design and make sure that you can use an event-based concept and avoid relying on select or poll for the socket handling. Using libcurl’s multi_socket API, you can go up and beyond tens of thousands of connections and still reach maximum performance. And this works with basically all of the protocols libcurl supports; it is not limited to a small subset. (There are unfortunate exceptions, like for example “file://” URLs, but there are completely technical reasons for this.)
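
What that looks like in practice: with the multi_socket API, libcurl tells the application which sockets to monitor, and the application tells libcurl when something happened on them. Below is a compressed sketch of that pattern, assuming Linux epoll. Error handling is omitted, the timeout handling is simplified (a real application would run a proper timer off the CURLMOPT_TIMERFUNCTION callback instead of feeding the value straight to epoll_wait) and the URL is just a placeholder:

#include <sys/epoll.h>
#include <curl/curl.h>

static int epfd;               /* the epoll instance */
static long timeout_ms = 1;    /* wait time for epoll_wait, updated by timer_cb */

/* called by libcurl to tell us which socket to watch, and for what */
static int socket_cb(CURL *easy, curl_socket_t s, int what,
                     void *userp, void *socketp)
{
  struct epoll_event ev = {0};
  ev.data.fd = s;
  if(what == CURL_POLL_REMOVE) {
    epoll_ctl(epfd, EPOLL_CTL_DEL, s, NULL);
    return 0;
  }
  if(what & CURL_POLL_IN)
    ev.events |= EPOLLIN;
  if(what & CURL_POLL_OUT)
    ev.events |= EPOLLOUT;
  if(epoll_ctl(epfd, EPOLL_CTL_ADD, s, &ev))   /* a new socket... */
    epoll_ctl(epfd, EPOLL_CTL_MOD, s, &ev);    /* ...or update an existing one */
  return 0;
}

/* called by libcurl to tell us how long to wait at most before acting */
static int timer_cb(CURLM *multi, long t_ms, void *userp)
{
  timeout_ms = t_ms;
  return 0;
}

int main(void)
{
  int running = 1;
  curl_global_init(CURL_GLOBAL_DEFAULT);
  epfd = epoll_create1(0);

  CURLM *multi = curl_multi_init();
  curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
  curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);

  /* add one transfer here; a real app would add thousands the same way */
  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  curl_multi_add_handle(multi, easy);

  while(running) {
    struct epoll_event events[64];
    int n = epoll_wait(epfd, events, 64, (int)timeout_ms);
    if(n <= 0)   /* timeout: let libcurl check its own timers */
      curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);
    for(int i = 0; i < n; i++) {
      int flags = (events[i].events & EPOLLIN ? CURL_CSELECT_IN : 0) |
                  (events[i].events & EPOLLOUT ? CURL_CSELECT_OUT : 0);
      /* tell libcurl to act on this specific socket only */
      curl_multi_socket_action(multi, events[i].data.fd, flags, &running);
    }
  }

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}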

Very few file transfer libraries have this direct support for event-based operations. I’ve read reports of apps that have gone up to and beyond 70,000 connections on the same host using libcurl like this. The fact that TCP only has a 16-bit field in the protocol header for the source port of course forces users who want to try this stunt to use more than one interface as the source address.

And before you ask: you cannot grow a client to that amount using any other technique than event-based using many connections in the same thread, as basically no other approach scales as well.

When handling very many connections, the mere “juggling” of the connections takes time and can be done in good or bad ways. It would be interesting to one day measure exactly how good libcurl is at this.

Binding Benchmarks and Comparisons

We are aware of something like 40 different bindings for libcurl that make it possible to use it from just about any language you like. Most, if not every, language also tends to have its own native transfer or at least HTTP library. For many languages, the native version is the one that is most preferred, most used in books and articles and most promoted on the internet. Rarely can the native versions compete with the libcurl-based ones in actual transfer performance, because of what I mentioned above: libcurl does little extra beyond the actual, raw transfer. Of course there still needs to be additional glue and logic to make libcurl work well with each language’s own unique environment, but it has still often been proven that this doesn’t make the speed gain get lost or become invisible. I’ll illustrate below with some sample environments.

Ruby comparisons

Paul Dix is a Ruby guy and he’s done a lot of work with HTTP libraries and Ruby, and he’s also done some benchmarks on libcurl-based Ruby libraries. They show that the tools built on top of libcurl run significantly faster than the native versions.

Perl comparisons

“Ivan” wrote up a benchmarking script that performs a number of transfers using three different mechanisms available to Perl hackers, one of them being the official libcurl Perl binding (WWW::Curl) and another being the Perl standard one called LWP. The results leave no room for doubt: the libcurl-based version is significantly faster than the “native” alternatives.

PHP comparisons

The PHP binding for libcurl, PHP/CURL, is a popular one. In PHP the situation is possibly a bit different, as they don’t have a native library that is nearly as feature-complete as the libcurl binding, but they do have a native function for doing things like getting HTTP data. This function has been compared against PHP/CURL many times, for example in Ricky’s comparisons and Alix Axel’s comparisons. They all show that the libcurl-based alternative is faster. Exactly how much faster of course depends on a lot of factors, but I’m not going into such specific details here and now.

We miss more benchmarks!

I wish I knew about more benchmarks and comparisons of speed. If you know of others, or if you get inspired enough to write up and publish any after reading my rant here, please let me know! Not only is it fun and ego-boosting to see our project win, but I also want to learn from them and see where we’re lacking. And if anyone beats us in a test, it’d be great to see what we could do to improve.

I’ll talk at FSCONS 2010

Recently I was informed that I got two talks accepted to the FSCONS 2010 conference, to be held in the beginning of November 2010.

My talks will be about the Future and current state of internet transport protocols (TCP, HTTP, SPDY, WebSockets, SCTP and more) and on High performance multi-protocol applications with libcurl, the latter of which will educate the audience on how to use libcurl when writing high performance clients with potentially a very large number of simultaneous transfers. A somewhat clueful reader will of course spot that these two talks have a lot in common, and yeah, they do reveal a lot of what I do, what I like and what I poke on these days. I hope I’ll be able to shed some light on things not everyone is already perfectly aware of.

The talks will be held in English, and if the past FSCONS conferences are anything to go by, my talks will be filmed and made available online afterwards for the world to see, in case you have a funeral or something else to attend that prevents you from being there in person.

If you have thoughts, questions or anything on these topics that you would like to get answered in my talk, feel free to bring them up and I’ll see what I can do.

(If those fine guys and gals at FSCONS ever settled for a logo, or had one I could link to, I would’ve shown one of them right here.)

C-ares, now and ahead!

The project c-ares started many years ago (March 2004) when I decided to fork the existing ares project to get the changes done that I deemed necessary – and the original project owner didn’t want them.

I did my original work on c-ares back then primarily to get a good asynchronous name resolver for libcurl, so that we could get around the limitation of having to do name resolves completely synchronously, as the libc interfaces mandate. Of course, c-ares was and is more than just name resolving and, not too surprisingly, other projects have since popped up that now use c-ares.
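
For those who haven’t seen it, this is roughly what the asynchronous model looks like from an application’s point of view. A minimal sketch using the c-ares API with a plain select() loop (error handling omitted; the host name is just an example):

#include <stdio.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <sys/select.h>
#include <ares.h>

/* called by c-ares from ares_process() when the lookup completes */
static void resolved(void *arg, int status, int timeouts, struct hostent *host)
{
  if(status == ARES_SUCCESS && host && host->h_addr_list[0]) {
    char ip[64];
    inet_ntop(host->h_addrtype, host->h_addr_list[0], ip, sizeof(ip));
    printf("%s => %s\n", host->h_name, ip);
  }
  else
    printf("lookup failed: %s\n", ares_strerror(status));
}

int main(void)
{
  ares_channel channel;
  ares_library_init(ARES_LIB_INIT_ALL);
  ares_init(&channel);

  /* start the query; this returns at once and the callback runs later */
  ares_gethostbyname(channel, "curl.haxx.se", AF_INET, resolved, NULL);

  /* an application would do other work here and only occasionally let
     c-ares look at its sockets; this loop just blocks until we're done */
  for(;;) {
    fd_set readers, writers;
    struct timeval tv, *tvp;
    FD_ZERO(&readers);
    FD_ZERO(&writers);
    int nfds = ares_fds(channel, &readers, &writers);
    if(nfds == 0)
      break;                     /* no pending queries left */
    tvp = ares_timeout(channel, NULL, &tv);
    select(nfds, &readers, &writers, NULL, tvp);
    ares_process(channel, &readers, &writers);
  }

  ares_destroy(channel);
  ares_library_cleanup();
  return 0;
}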

I’m maintaining a bunch of open source projects, and c-ares was never one that I felt a lot of love for, it was mostly a project that I needed to get done and when things worked the way I wanted them I found myself having ended up as maintainer for yet another project. I’ve repeatedly mentioned on the c-ares mailing list that I don’t really have time to maintain it and that I’d rather step down and let someone else “take over”.

After having said this for over 4 years, I’ve come to accept that even though c-ares has many users out there, and even seems to be appreciated by companies and open source projects, there just isn’t any particularly big desire to help out in our project. I find it very hard to just “give up” a functional project, so I linger and do my best to give it the efforts and love it needs. I very much need and want help to maintain and develop c-ares. I’m not doing a very good job with it right now.

Threaded name resolving competes

I once thought we would be able to make c-ares capable of becoming a true drop-in replacement for the native system name resolver functions, but over the years with c-ares I’ve learned that the dusty corners of name resolving in unix and Linux have so many features and so much fancy stuff that c-ares is still a long way from that. It has also made me turn around somewhat, and I’ve reconsidered: perhaps using a threaded native resolver is the better way for libcurl to do asynchronous name resolves. That way we don’t need any half-baked implementations of the resolver. Of course it comes at the price of a new thread for each name resolve, which turns really nasty if you grow the number of connections just a tad bit, but still, most libcurl-using applications today hardly use more than just a few (say less than a hundred) simultaneous transfers.

Future!

I don’t think the future holds any radical changes or drastically new stuff for c-ares. I think we should keep polishing off bugs and adding the small functions and features that we’re missing. I believe we’re not yet parsing all the records we could into a convenient format.

As usual, a project is not about how much we can add but about how much we can avoid adding and how much we can remain true to our core objectives. I hope the growing popularity will make more people join the project, and then not only throw a single patch at us but also hang around a while and help us out some more.

Hopefully we will one day be able to use c-ares instead of a typical libc-based name resolver and yet resolve the same names.

Join us and help us give c-ares a better future!


curl vs libcurl

In my mini-series of articles A vs B, the time has come for curl vs libcurl.

For me, the differences are very clear and obvious, but I get a fair stream of questions from users and random people, so I thought it was about time to make an effort to once and for all make a page with the facts stated. A fixed home for curl vs libcurl knowledge.

So I did. And now I’ve mentioned it to you. Enjoy! If you have additional content you think belongs there, or if you think anything is unclear or wrong, don’t hesitate to let me know!


Daniel’s currency exchange is no more

For quite a number of years I maintained a little web service to provide currency exchange rates in a handy format and in a way that was friendly for machines and other machine exchangers. My personal favorite feature was the “easy conversion” helper that would provide an “easy to calculate in your head” formula for converting back and forth between two currencies based on their current rates. Like “multiply by 5 and divide by 2”, etc.

This service goes all the way back to 1997, when I started to work on getting exchange rates downloaded as a service to the IRC bot I ran in #amiga on efnet (even before the split when ircnet was created). Back then I was primarily working on the IRC bot named Dancer. In 1997 I started the work on a tool to fetch rates. The tool would become curl, and the web site to access the rates was initially hosted by the company Frontec, for which I worked back then.

The URL changed a few more times but it has been available at http://daniel.haxx.se/currency for the last few years until a few weeks ago. Well, technically the URL still works but the service does not.

So a few weeks ago the primary site I’ve scraped for this info changed their format and I decided to not play cat and mouse anymore. I was already bending the rules by not reading their terms of service as I feared I wouldn’t be allowed to use their data like this. Also, I really don’t have any use for this service myself so I decided to do myself a service and stop wasting spare time on one of these projects that don’t give me enough personal satisfaction. I’m sure that if there is a demand for such a service I now closed down, there will be someone else out there ready to fire it up and serve users.

So long, and thanks for all the currency exchange fun.

My talk Optimera Sthlm

30 minutes is a tricky period to fill with content when you do a talk, and yesterday I did my best at confusing/informing the audience at the OPTIMERA STHLM conference on transport layer performance: where time is spent or lost today in TCP, what to think about to get things to behave faster, that RTT is not getting better even though bandwidth is growing really fast these days, and a little about some future technologies like WebSockets, SPDY, SCTP and MPTCP.

Note: this talk is entirely in Swedish.

My slides for this are also viewable on slideshare.net.