Tag Archives: cURL and libcurl

Open source personal

I participate in a range of different open source projects. Of course I spend more time on some of them and only very little time on most, but I’m currently listed as a member of 18 projects on sourceforge and 16 on ohloh, and I can easily name a bunch more that aren’t listed on either of those sites.

I’m just the kind of guy who tends to actually get the code and write up a patch for problems, and in many cases I’ll even write a fresh application and publish it openly for the world (not that my typical programs get any particularly large audience, but still). I’m not saying everyone has to be like this, I’m just describing me here.

It seems this is a troublesome concept for people to grasp.

I get a large amount of private mail where people talk about “your project” (as in a single one that I’m supposed to know which they’re referring to), and just about every open source-related interview or questionnaire I’ve filled in tends to assume My One Single Project. In the first case I can often guess which project they mean from the phrasing of the mail, and in the second I tend to answer for the project I’m most involved in.

So I get this feed of private emails about projects I participate in, but I don’t like private emails about open source projects when people request and expect free support and help. If they want free support, I expect people to post their questions publicly, so that others can reply and so that both the question and the answer are readable online: right there when they’re asked, but also much later, when someone searching for help on the same subject can find the answers in mailing list archives and the like.

These days I have a blanket reply that I bounce back when I get private support mails, and I’ll admit that most people respect it once the situation has been explained. Every now and then, of course, I instead get an angry refusal to sympathize and get to learn that I’m an arrogant bastard. This is also related to the fact that:

We (Haxx) run and offer commercial support around curl and libcurl, and for that purpose we have a dedicated support email address. Mail there if you’re willing to pay for support. That’s quite clearly spelled out everywhere the address is displayed, yet people seem to find it a good place to mail random questions and bug reports. Just today I got a very upset reply after I mentioned the “paid support” part of the deal there: the sender expected us (me?) to instantly fix bugs for free, simply because I’d been told about them by email…

All in all, I’m not really complaining, since I generally get along fine with everyone around all this.

Just, everyone: try to keep things apart – the projects, the people and the companies. They’re sometimes intertwined, but sometimes not.

Some stats on curl development

Counting from curl 6.0 up to curl 7.19.3, we’ve done 78 releases over the 9.4 years it took.

In this time, we’ve mentioned 1259 bugfixes and 389 notable changes.

That makes one bugfix every 2.7 days, and one release every 43 days with an average of 16 bugfixes in each. The longest interval ever between two curl releases was 139 days, back in 2000, when we worked to get the first version 7 release (known as 7.1) out.

To compare with our more recent pace, the same math limited to the 20 latest releases only (the 3.3 years since and including 7.15.0) shows that we’re still at 2.7 days per bugfix (even though the code base has grown steadily for years), but we’re now at 61 days between releases and 21 bugfixes per release…

All this info and more will be visible on a web page on the curl site soonish, I’m still working on polishing it up.

What other useful or useless but interesting numbers could be extracted from this?

curl 7.19.3

I just now sent away the announcement of curl and libcurl 7.19.3. With some 30 bugfixes and only two actual changes I hope this will again be a solid release that’ll be appreciated and used all over.

The changes are:

  • CURLAUTH_DIGEST_IE bit added for CURLOPT_HTTPAUTH and CURLOPT_PROXYAUTH – older Internet Explorer versions have an “interesting” take on Digest authentication, and servers that speak that dialect don’t like libcurl’s regular way
  • VC9 Makefiles were added to the release package, for the VS2008 users of the world

Download here.

Linux distros consolidate crypto libs

For a while already, the Fedora distribution has fought battles, done lots of work and pushed for consolidating all packages that use crypto libs onto Mozilla’s NSS.

Now it seems to be OpenSUSE’s turn. The discussion I link to here doesn’t reach any definite conclusion, but they seem to lean towards NSS as well, claiming it has the most features. I wonder what they base that statement on – is there a public document anywhere that states exactly which library has what, and what makes any contender better than the others for them?

In the Fedora case it seems they’ve focused on the NSS FIPS license as the deciding factor, but the license issue is also often brought up in this discussion.

I’ve personally been pondering writing some kind of unified crypto layer that would expose a single API to the application and handle the different libs as backends, pretty much the same way we do it internally in libcurl at the moment. It hasn’t taken off (or even been started), since I’ve not had the time nor the energy for it yet.

FLOSS Weekly #51 on curl

Late Wednesday evening (middle European time) on January 7th 2009 I did a live recording of the podcast show FLOSS Weekly with Leo Laporte and Randal Schwartz. The recording is now available for download as episode #51.

We chatted a bit about curl and libcurl and I think I did a decent job of keeping to the subject and not making a total fool of myself. Enjoy!

(The talk was done over Skype, and yes, my laptop was running Windows at the time…!)

IETF http-state group created

Over at the IETF a new group named http-state was just created (with an associated mailing list), with this specific goal:

Ultimately, the purpose of this group is to create an updated HTTP State Management Mechanism RFC (aka cookies) that will supersede the Netscape spec, RFCs 2109, 2964, 2965 then add in real-world usage (e.g. HTTPOnly), and possibly add in additional features and possibly merge in draft-broyer-http-cookie-auth-00.txt and draft-pettersen-cookie-v2-03.txt.

I’ve joined the list and hope to follow and participate in this, as I believe the current state of HTTP cookies is a rather sorry mess and the Netscape spec is still the closest description of how cookies work in the wild. Of course I’ll do it with my libcurl experience in my luggage.

While it would perhaps be cool to join the group in a more formal way, there’s no way for me to attend the IETF meeting in San Francisco in March.

A new year with new fun

I had a great and relaxing winter/Christmas holiday and hence my silence here.

I’m now back up to speed: a podcast interview done yesterday (I’ll post another entry when it becomes available), some funded development on libcurl and libssh2 in the background, and my days spent at my client’s place working on a 10G traffic analyzer product.

It was rather calm during “the break”, but I’ve now noticed that at least the curl project has seen significantly increased activity again. We’re in feature freeze now for the January release, but there seem to be at least four patches pending that add new stuff for the release planned after this one (around March, if things go well).

More libcurl adoption

Some recent news showing libcurl possibly widening its user-base:

Eugene V. Lyubimkin posted a suggestion that libcurl should be used by the upcoming APT release for all ftp and http accesses!

Mr Johansen at Sun told us libcurl is being considered (via the pycurl binding) for the new OpenSolaris package manager.

perl’s widely used module for HTTP/FTP etc, called LWP, has gotten a libcurl-powered sibling called LWP-Curl, which, if I understand things correctly, does transfers with the traditional LWP-style API but is powered by libcurl underneath.

Someone (not me) registered libcurl.org. The site actually contains rather accurate info, but if I disable adblock it shows lots of ads, so I guess that’s why the page exists… (googling for “libcurl” now shows this site among the first 5-6 hits, which surprises me…)

More?