Tag Archives: birthday

twenty-five years of curl

Time flies when you are having fun. Today is curl's 25th birthday.

The curl project started out very humbly as a small renamed URL transfer tool that almost nobody knew about for the first few years. It scratched a personal itch of mine.

Me back then

I made that first curl release and I’ve packaged every single release since. The day I did that first curl release I was 27 years old, working as a software engineer for Frontec Tekniksystem, where I mostly did contract development on embedded systems for larger Swedish product development companies. For a few years in the late 90s I would, for example, do quite a few projects at and for the telecom giant Ericsson.

I have enjoyed programming and development ever since I got my first computer in the mid 80s. By the 1990s I had established a daily schedule where I stayed up late when my better half went to bed at night, and spent another hour or two on my spare-time development. This is basically how I managed to find time to devote to my projects for the first few decades. Less sleep. Less of other things.

Gradually and always improving

The concept behind curl development has always been to gradually and iteratively improve all aspects of it. Keep behavior, but enhance the code, add test cases, improve the documentation. Over and over, year after year. It never stops, as the timeline below helps to show.

Similarly, there was no specific moment when curl suddenly became popular and the number of users skyrocketed. Instead, the number of users and the popularity of the tool and library have grown gradually and continuously. In 1998 there were few users. By 2010 there were hundreds of millions.

We really have no idea exactly how many users or installations of libcurl there are now. It is easy to estimate that it runs in way more than ten billion installations, purely based on the fact that there are 7 billion smartphones and 1 billion tablets in the world, and we know that each of them runs at least one, but likely many more, curl installs.

Before curl

My internet transfer journey started in late 1996 when I downloaded httpget 0.1 to automatically download currency rates daily to make the currency exchange converter in my IRC bot work correctly. httpget had some flaws so I sent back fixes, and Rafael, the author, quickly decided that I might as well take over maintenance of the thing. So I did.

I added support for GOPHER, changed the name of the project, added FTP support and then in early 1998 I started adding FTP upload support as well…


The original curl logo.

On March 20, 1998, curl 4.0 was released and it was already 2,200 lines of code on its birthday, because it was built on the projects previously named httpget and urlget. It then supported three protocols: HTTP, GOPHER and FTP, and featured 24 glorious command line options.

The first release of curl was not that special an event, since I had been shipping httpget and urlget releases for over a year already. While this was a new name, it was also “just another release”, as I had done many times already.

We added HTTPS and TELNET support already during the first curl year, which also introduced the first ever curl man page. curl started out GPL licensed but I switched to MPL within that first calendar year, 1998.

The first SSL support was powered by SSLeay, the project that in late 1998 would transition into becoming OpenSSL.

In August 1998, we added curl on the open source directory site freshmeat.net.

The first curl web page was published at http://www.fts.frontec.se/~dast. (The oldest version archived by the Wayback Machine is from December 1998.)

In November 1998 I added a note to the website about the mind-blowing success as the latest release had been downloaded 300 times! Success and popularity were far from instant.

Screenshot from the curl website of November 1998

During this first year, we shipped 20 curl releases. We have never repeated that feat.


We created the first configure script, added support for cookies and appeared as a package in Debian Linux.

The curl website moved to http://curl.haxx.nu.

We added support for DICT, LDAP and FILE through the year. Now supporting 8 protocols.

In the last days of 1999 we imported the curl code to the cool new service called SourceForge. All further commit counts in curl start with this import: December 29, 1999.


Privately, I switched jobs in early 2000 but continued doing embedded contract development during my days.

The rules for the TLD .se changed and we moved the curl website to curl.haxx.se.

I got married.

In August 2000, we shipped curl 7.1 and things changed. This release introduced the library we decided to call libcurl, because we couldn’t come up with a better name. At this point the project was at 17,200 lines of code.

The libcurl API was inspired by how fopen() works and returns just an opaque handle, and how ioctl() can be used to set options.

Creating a library out of curl was an idea I had almost from the beginning, as I had already realized by then the power a good library can bring to applications.

The first CVE for curl was reported.

Users found the library useful, which increased curl uptake. One of the first early adopters of libcurl was the PHP language, which decided to use libcurl as its default HTTP/URL transfer engine.

We created the first test suite.


We changed the license and offered curl under the new curl license (effectively MIT) as well as MPL. The idea to slightly modify the curl license was a crazy one, and the reason for it has since been forgotten.

We added support for HTTP/1.1 and IPv6.

In June, the THANKS file counted 67 named contributors. This is a team effort. We surpassed 1,100 total commits in March and in July curl was 20,000 lines of code.

Apple started bundling curl with Mac OS X when curl 7.7.2 shipped in Mac OS X 10.1.


The test suite contained 79 test cases.

We dropped the MPL option. We would never again play the license change game.

We added support for gzip compression over HTTP and learned how to use SOCKS proxies.


The curl “autobuild” system was introduced: volunteers run scripts on their machines that download, build and run the curl tests frequently and email back the results to our central server for reporting and analyses. Long before modern CI systems made these things so much easier.

We added support for Digest, NTLM and Negotiate authentication for HTTP.

In August we offered 40 individual man pages.

Support for FTPS was added, protocol number 9.

My first child, Agnes, was born.

I forked the ares project and started the c-ares project to provide and maintain a library for doing asynchronous name resolves – for curl and others. This project has since then also become fairly popular and widely used.


At the beginning of 2003, curl was 32,700 lines of code.

We made curl support “large files”, which back then meant supporting files larger than 2 or 4 gigabytes.

We implemented support for IDN, International Domain Names.


GnuTLS became the second supported TLS library. Users could now select which TLS library they wanted their build to use.

Thanks to a grant from the Swedish “Internetfonden”, I took a leave of absence from work and could implement the first version of the multi_socket() API to allow applications to do more parallel transfers faster.

git was created and it quickly adopted curl for its HTTP(S) transfers.

TFTP became the 10th protocol curl supports.


We decided to drop support for “third party FTP transfers” which made us bump the SONAME because of the modified ABI. The most recent such bump. It triggered some arguments. We learned how tough bumping the SONAME can be to users.

The wolfSSL precursor called CyaSSL became the third SSL library curl supported.

We added support for HTTP/1.1 Pipelining, and in the latter half of the year I accepted contract development work for Adobe and added support for SCP and SFTP.

As part of the SCP and SFTP work, I took a rather big step into the libssh2 project and would later become its maintainer. This project is also pretty widely used.

I had a second child, my son Rex.


Now at 51,500 lines of code, we added support for a fourth SSL library: NSS.

We added support for LDAPS and the first port to OS/400 was merged.

For curl 7.16.1 we added support for --libcurl, possibly my single favorite curl command line option. It generates libcurl-using source code that repeats the command line transfer.
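As a quick sketch of what that looks like (the file names here are invented for the example), a transfer run with --libcurl writes out a C program that performs the same transfer through the libcurl API:

```shell
# Fetch a local file over the file:// protocol and ask curl to also
# emit equivalent libcurl-using C source code (paths are arbitrary).
printf 'hello\n' > /tmp/demo.txt
curl -s -o /tmp/out.txt --libcurl /tmp/transfer.c file:///tmp/demo.txt

# The generated program sets up a handle with curl_easy_init(),
# configures it with curl_easy_setopt() calls and runs the transfer
# with curl_easy_perform():
grep 'curl_easy' /tmp/transfer.c
```

The generated code makes a handy starting point when turning a working command line into an application.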

In April, curl had 348 test cases.


By now the command line tool had grown to feature 126 command line options. A 5x growth during curl’s first ten years.

In March we surpassed 10,000 commits.

I joined the httpbis working group mailing list and started slowly to actively participate within the IETF and the work on the HTTP protocol.

Solaris started shipping curl and libcurl. The Adobe Flash player on Linux used libcurl.

In September the total count of curl contributors reached 654.


On FLOSS Weekly 51, I talked about curl on a podcast for the first time.

We introduced support for building curl with CMake, a decision that is still being discussed, with questions about whether it actually helps us. To make the loop complete, CMake itself uses libcurl.

In July the IETF 75 meeting was held in Stockholm, my home town, and this was the first time I got to physically meet several of my personal protocol heroes that created and kept working on the HTTP protocol: Mark, Roy, Larry, Julian etc.

In August, I quit my job to work for my own company Haxx, but still doing contracted development. Mostly doing embedded Linux by then.

Thanks to yet another contract, I introduced support for IMAP(S), SMTP(S) and POP3(S) to curl, bumping the number of supported protocols to 19.

I was awarded the Nordic Free Software Award 2009. For my work on curl, c-ares and libssh2.


We added support for RTSP, and RTMP(S).

PolarSSL became the 6th supported SSL library.

We switched version control systems from CVS to git and at the same time we switched hosting from SourceForge to GitHub. From this point on we track authorship of commits correctly, something that was much harder to do with CVS.

Added support for the AxTLS library. The 7th.


Over 80,000 lines of code.

The cookie RFC 6265 shipped. I was there and made some minor contributions to it.

We introduced the checksrc script that verifies that source code adheres to the curl code style. Started out simple, has improved and been made stricter over time.

I got a thank you from Googlers which eventually landed me some Google swag.

We surpassed 100 individual committers.


149 command line options.

Added support for Schannel and Secure Transport for TLS.

When I made an attempt at a vanity count of the number of curl users, I ended up estimating there were 550 million. This was one of my earlier realizations that man, curl is everywhere!

During the entire year of 2012, there were 67 commit authors.


Added support for GSKit, a TLS library mostly used on OS/400. The 10th supported TLS library.

In April the number of contributors had surpassed 1,000 and we reached over 800 test cases.

We refactored the internals to make sure everything is done non-blocking and what we call “use multi internally” so that the easy interface is just a wrapper for a multi transfer.

The initial attempts at HTTP/2 support were merged (powered by the great nghttp2 library) as well as support for doing connects using the Happy Eyeballs approach.

We created our first two CI jobs.


I started working for Mozilla in the Firefox networking team, remotely from my house in Sweden. For the first time in my career, I would actually work primarily with networking and HTTP, with a significant overlap with what curl is and does. Up until this point, the two sides of my life had been strangely separated. Mozilla allowed me to spend some work hours on curl.

At 161 command line options and 20 reported CVEs.

59 man pages exploded into 270 man pages in July when every libcurl option got its own separate page.

We added support for the LibreSSL OpenSSL fork and removed support for QsoSSL. Still at 10 supported TLS libraries.

In September, there were 105,000 lines of code.

Added support for SMB(S). 24 protocols.


Added support for BoringSSL and mbedTLS.

We introduced support for doing proper multiplexed transfers using HTTP/2. A pretty drastic paradigm change in the architecture when suddenly multiple transfers would share a single connection. Lots of refactors and it took a while until HTTP/2 support got stable.

It was followed by our first support for HTTP/2 server push.

We switched over to the GitHub working model completely, using its issue tracker and doing pull-requests.

The first HTTP/2 RFC was published in May. I like to think I contributed a little bit to the working group effort behind it.

My HTTP/2 work this year was in part sponsored by Netflix and it was a dance to make that happen while still employed by and working for Mozilla.

20,000 commits.

I started writing everything curl.

We also added support for libpsl, using the Public Suffix List for better cookie handling.


curl switched to using HTTP/2 by default for HTTPS transfers.

In May, curl featured 185 command line options.

We got a new logo, the present one. Designed by Adrian Burcea at Soft Dreams.

Added support for HTTPS proxies and TLS 1.3.

curl was audited by Cure53.

A Swedish tech site named me 2nd best developer in Sweden. Because of my work on curl.

At 115,500 lines of code at the end of the year.


curl got support for building with multiple TLS libraries and choosing which one to use at start-up.

Fastly reached out and graciously and generously started hosting the curl website as well as my personal website. This helped put an end to previous instabilities, when blog posts got too popular for my site to hold up, and it made the curl site snappier for more people around the globe. They have remained faithful sponsors of the project ever since.

In the spring of 2017, we had our first ever physical developers conference, curl up, as twenty-something curl fans and developers went to Nuremberg, Germany to spend a weekend doing nothing but curl stuff.

In June I was denied traveling to the US. This would subsequently take me on a prolonged and painful adventure trying to get a US visa.

The first SSLKEYLOGFILE support landed, we introduced the new MIME API and support for brotli compression.

The curl project was adopted into the OSS-Fuzz project, which immediately started to point out mistakes in our code. They have kept fuzzing curl nonstop since then.

In October, I was awarded the Polhem Prize, Sweden’s oldest and probably most prestigious engineering award. This prize was established and has been awarded since 1876. A genuine gold medal, handed over to me by none other than His Majesty the King of Sweden. The medal even has my name engraved.


Added support for DNS over HTTPS and the new URL API was introduced to allow applications to parse URLs the exact same way curl does it.

I joined the Changelog podcast and talked about curl turning 20.

Microsoft started shipping curl bundled with Windows. But the PowerShell curl alias remains.

We introduced support for a second SSH library, so now SCP and SFTP could be powered by libssh in addition to the already supported libssh2 library.

We added support for MesaLink but dropped support for AxTLS. At 12 TLS libraries.

129,000 lines of code. Reached 10,000 stars on GitHub.

To accept a donation, we were asked to create an account with Open Collective, and so we did. It has since been a good channel for the project to receive donations and sponsorships.

In November 2018 it was decided that the HTTP-over-QUIC protocol should officially become HTTP/3.

At 27 CI jobs at the end of the year. Running over 1200 test cases.


I started working for wolfSSL, doing curl full-time. It just took 21 years to make curl my job.

We added support for Alt-Svc and we removed support for the always so problematic HTTP/1.1 Pipelining.

We introduced our first curl bug bounty program and we have effectively had a bug bounty running since, in association with HackerOne. We have paid almost 50,000 USD in reward money for 45 vulnerabilities (up to Feb 2023).

Added support for AmiSSL and BearSSL: at 14 libraries.

We merged initial support for HTTP/3, powered by the quiche library, and a little later also with a second library: ngtcp2. Because why not do many backends?

We started offering curl in an “official” docker image.


The curl tool got parallel transfer powers, the ability to output data in JSON format with -w and the scary --help output was cleaned up and arranged better into subcategories.
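The JSON write-out can be sketched like this (the file path is invented for the example): -w '%{json}' makes curl print the transfer statistics as a single JSON object.

```shell
# Transfer a local file and dump all transfer statistics as JSON.
printf 'hello\n' > /tmp/demo.txt
curl -s -o /dev/null -w '%{json}\n' file:///tmp/demo.txt
# Prints one JSON object with fields such as "url_effective" and
# "size_download", ready to be piped into jq or similar tools.
```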

In March, for curl 7.69.0, I started doing release video presentations, live-streamed.

The curl website moved to curl.se and everything curl moved over to the curl.dev domain.

MQTT became the 25th supported protocol.

The first support for HSTS was added, as well as support for zstd compression.

wolfSSH became the third supported SSH library.

We removed support for PolarSSL.

Initial support for hyper as an alternative backend for HTTP/1 and HTTP/2.

In November, in the middle of Covid, I finally got a US visa.

The 90th CI job was created just before the end of the year.


Dropped support for MesaLink but added support for rustls. At 13 TLS libraries.

Ingenuity landed on Mars, and curl helped it happen.

Received a very unpleasant death threat over email from someone deeply confused, blaming me for all sorts of bad things that happened to him.

Reached 20,000 stars on GitHub.

Added support for GOPHERS. 26 protocols.

187 individuals authored commits that were merged during the year.


Merged initial support for WebSocket (WS:// and WSS:// URLs) and a new API for handling it. At 28 protocols.

We added the --json command line option and libcurl got a new header API, which also gave the command line tool a new “header picking” ability with -w. We also added --rate and --url-query.

The HTTP/3 RFC was published in June.

msh3 became the third supported HTTP/3 library.

Trail of Bits did a curl security audit, sponsored by OpenSSF.

The 212th curl release was done in December. Issue 10,000 was created on GitHub.


At the start of the year: 155,100 lines of code. 486 man pages. 1560 test cases. 2,771 contributors. 1,105 commit authors. 132 CVEs. 122 CI jobs. 29,733 commits. 48,580 USD rewarded in bug-bounties. 249 command line options. 28 protocols. 13 TLS libraries. 3 SSH libraries. 3 HTTP/3 libraries.

Introduced support for HTTP/3 with fall-back to older versions, making it less error-prone to use.

On March 13 we surpassed 30,000 commits.

On March 20, we released curl 8.0.0. Exactly 25 years since the first curl release.

Staying relevant

Over the last 25 years we have all stopped using and forgotten lots of software, tools and services. Things come and go. Everything has its time, and many projects simply do not keep up and get replaced by something else at some point.

I like to think that curl is still a highly relevant software project with lots of users and use cases. I want to think that this is partly because we maintain it intensely, with both care and love. We make it do what users want it to do. Keep up, keep current, run the latest versions, support the latest security measures, be the project you would like to use and participate in. Lead by example.

My life is forever curl tinted

Taking curl this far and being able to work full time on my hobby project is a dream come true. curl is a huge part of my life.

Me, on vacation in Portugal in 2019.

This said, curl is a team effort and it would never have taken off or become anything real without all our awesome contributors. People will call me “the curl guy” and some will say that it is “my” project, but everyone who has ever been close to the project knows that we are many more in the team than just me.

25 years

The day I found httpget I was 26 years old. I was 27 by the time I shipped curl. I turned 52 last November.

I’ve worked on curl longer than I’ve worked for any company. None of my kids are this old. 25 years ago I did not live in my house yet. 25 years ago Google didn’t exist and neither did Firefox.

Many current curl users were not even born when I started working on it.

Beyond twenty-five

I feel obligated to add this section because people will ask.

I don’t know what the future holds. I was never good at predictions or forecasts and frankly I always try to avoid reading tea leaves. I hope to stay active in the project and to continue working with client-side internet transfers for as long as it is fun and people want to use the results of my work.

Will I be around in the project in another 25 years? Will curl still be relevant then? I don’t know. Let’s find out!

curl: 22 years in 22 pictures and 2222 words

curl turns twenty-two years old today. Let’s celebrate this by looking at its development, growth and change over time from a range of different viewpoints with the help of graphs and visualizations.

This is the more-curl-graphs-than-you-need post of the year. Here are 22 pictures showing off curl in more detail than anyone needs.

I founded the project back in the day and I remain the lead developer – but I’m far from alone in this. Let me take you on a journey and give you a glimpse into the curl factory. All the graphs below are provided in hi-res versions if you just click on them.

Below, you will learn that we’re constantly going further, adding more and aiming higher. There’s no end in sight and curl is never done. That’s why you know that leaning on curl for Internet transfers means going with a reliable solution.

Number of lines of code

Counting only code in the tool and the library (and public headers), it has still grown 80 times since the initial release, but then again it also can do so much more.

At times people ask how a “simple HTTP tool” can be over 160,000 lines of code. That’s basically three wrong assumptions put next to each other:

  1. curl is not simple. It features many protocols, fairly advanced APIs and super powers, and it offers numerous build combinations and runs on just about all imaginable operating systems
  2. curl supports 24 transfer protocols and counting, not just HTTP(S)
  3. curl is much more than “just” the tool. The underlying libcurl is an Internet transfer jet engine.

How much more is curl going to grow and can it really continue growing like this even for the next 22 years? I don’t know. I wouldn’t have expected it ten years ago and guessing the future is terribly hard. I think it will at least continue growing, but maybe the growth will slow down at some point?

Number of contributors

Lots of people help out in the project. Everyone who reports bugs, brings code patches, improves the web site or corrects typos is a contributor. We want to thank everyone and give all helpers the credit they deserve. They’re all contributors. Here’s how fast our list of contributors is growing. We’re at over 2,130 names now.

When I wrote a blog post five years ago, we had 1,200 names in the list and the graph shows a small increase in growth over time…

Daniel’s share of total commits

I started the project. I’m still very much involved and I spend a ridiculous amount of time and effort driving this. We’re now at over 770 commit authors and this graph shows how my share of the commits has developed over time. I’ve done about 57% of all commits in the source code repository right now.

The graph is the accumulated amount. Some individual years I actually did far less than 50% of the commits, as the following graph shows.

Daniel’s share of commits per year

In the early days I was the only one who committed code. Over time a few others were “promoted” to the maintainer role and in 2010 we switched to git and the tracking of authors since then is much more accurate.

In 2014 I joined Mozilla and we can see an uptick in my personal participation level again, after it had been sub-50% for several years straight.

There’s always this argument to be had about whether it is a good or a bad sign for the project that my individual share is this big. Is this just because I don’t let other people in, or because curl is so hard to work on and only I know my way around the secret passages? I think the ever-growing number of commit authors at least shows that it isn’t the latter.

What happens the day I grow bored or get run over by a bus? I don’t think there’s anything to worry about. Everything is free, open, provided and well documented.

Number of command line options

The command line tool is really like a very elaborate Swiss army knife for Internet transfers and it provides many individual knobs and levers to control the powers. curl has a lot of command line options and they’ve grown in number like this.

Is curl growing too hard to use? Should we redo the “UI”? Having this huge set of features like curl does, providing them all with a coherent and understandable interface is indeed a challenge…

Number of lines in docs/

Documentation is crucial. It’s the foundation on which users can learn about the tool, the library and the entire project. Having plenty and good documentation is a project ambition. Unfortunately, we can’t easily measure the quality.

All the documentation in curl sits in the docs/ directory or sub directories in there. This shows how the amount of docs for curl and libcurl has grown through the years, in number of lines of text. The majority of the docs is in the form of man pages.

Number of supported protocols

This refers to protocols as in primary transfer protocols, as in what you basically specify as a scheme in URLs (i.e. it doesn’t count “helper protocols” like TCP, IP, DNS, TLS etc). Did I tell you curl is much more than an HTTP client?

More protocols coming? Maybe. There are always discussions and ideas… But we want protocols to have a URL syntax and be transfer oriented to map with the curl mindset correctly.

Number of HTTP versions

The support for different HTTP versions has also grown over the years. In the curl project we’re determined to support every HTTP version that is used, even if HTTP/0.9 support was recently disabled by default and you need to use an option to ask for it.

Number of TLS backends

The initial curl release didn’t even support HTTPS, but since 2005 we’ve supported customizable TLS backends and we’ve been adding support for many more since then. As we removed support for two libraries recently, we’re now counting thirteen different supported TLS libraries.

Number of HTTP/3 backends

Okay, this graph is mostly in jest but we recently added support for HTTP/3 and we instantly made that into a multi backend offering as well.

An added challenge that this graph doesn’t really show is how the choice of HTTP/3 backend is going to affect the choice of TLS backend and vice versa.

Number of SSH backends

For a long time we only supported a single SSH solution, but that was then and now we have three…

Number of disclosed vulnerabilities

We take security seriously and over time people have given us more attention and have spent more time digging deeper. These days we offer good monetary compensation for anyone who can find security flaws.

Number of known vulnerabilities

An attempt to visualize how many known vulnerabilities previous curl versions contain. Note that most of these problems are fairly minor and some apply only to very specific use cases or surroundings. As a reference, this graph also includes the number of lines of code in the corresponding versions.

More recent releases have fewer problems, partly because we have better testing in general but also of course because they’ve been around for a shorter time and thus people have had less time to find problems in them.

Number of function calls in the API

libcurl is an Internet transfer library and the number of provided function calls in the API has grown over time as we’ve learned what users want and need.

Anything built with libcurl 7.16.0 or later can always be upgraded to a later libcurl with no functionality change; the API and ABI are compatible. We put great effort into making sure this remains true.

The largest API additions over the last few years are marked in the graph: when we added the curl_mime_* and the curl_url_* families. We now offer 82 function calls. We’ve added 27 calls over the last 14 years while maintaining the same soname (ABI version).

Number of CI jobs per commit and PR

We’ve had automatic testing in the curl project since the year 2000. But for many years that testing was done by volunteers who ran tests in cronjobs in their local machines a few times per day and sent the logs back to the curl web site that displayed their status.

The automatic tests are still running and they still provide value, but I think we all agree that getting the feedback up front in pull-requests is a more direct way that also better prevents bad code from ever landing.

The first CI builds were added in 2013 but it took a few more years until we really adopted the CI lifestyle. Today we have 72, spread over 5 different CI services (Travis CI, AppVeyor, Cirrus CI, Azure Pipelines and GitHub Actions). These builds run for every commit and all submitted pull requests on GitHub. (We actually have a few more that aren’t easily counted, since they aren’t mentioned in files in the git repo but are controlled directly from GitHub settings.)

Number of test cases

A single test case can test a simple little thing or it can be a really big elaborate setup that tests a large number of functions and combinations. Counting test cases does not in itself say much, but taken together and looking at the change over time, we can at least see that we continue to put effort into expanding and improving our tests. It should also be considered together with the previous graph showing the CI builds, as most CI jobs also run all the tests (that they can).

Number of commits per month

A commit can be tiny and it can be big. Counting commits might not say a lot more than that it is a sign of some sort of activity and change in the project. I find it almost strange how little the number of commits per month has changed over time!

Number of authors per month

This shows number of unique authors per month (in red) together with the number of first-time authors (in blue) and how the amounts have changed over time. In the last few years we see that we are rarely below fifteen authors per month and we almost always have more than five first-time commit authors per month.

I think I’m especially happy with the retained high rate of newcomers as it is at least some indication that entering the project isn’t overly hard or complicated and that we manage to absorb these contributions. Of course, what we can’t see in here is the amount of users or efforts people have put in that never result in a merged commit. How often do we miss out on changes because of project inabilities to receive or accept them?

72 operating systems

Operating systems on which you can build and run curl right now, or on which we know people have run curl before. Most mortals cannot even list this many OSes off the top of their heads. If you know of any additional OS that curl has run on, please let me know!

20 CPU architectures

CPU architectures on which we know people have run curl. It basically runs on any CPU that is 32 bit or larger. If you know of any additional CPU architecture that curl has run on, please let me know!

32 third party dependencies

Did I mention you can build curl in millions of combinations? That’s partly because of the multitude of different third party dependencies you can tell it to use. curl supports no less than 32 different third party dependencies right now. The picture below is an attempt at some sort of block diagram, where all the green boxes are third party libraries curl can potentially be built to use. Many of them can be used simultaneously, but a bunch are also mutually exclusive, so no single build can actually use all 32.

60 libcurl bindings

If you’re looking for more explanations how libcurl ends up being used in so many places, here are 60 more. Languages and environments that sport a “binding” that lets users of these languages use libcurl for Internet transfers.

Missing pictures

“number of downloads” could’ve been fun, but we don’t collect the data and most users don’t download curl from our site anyway so it wouldn’t really say a lot.

“number of users” is impossible to tell and while I’ve come up with estimates every now and then, making that as a graph would be doing too much out of my blind guesses.

“number of graphs in anniversary blog posts” was a contender, but in the end I decided against it, partly since I have too little data.

(Scripts for most graphs)


Every anniversary is an opportunity to reflect on what’s next.

In the curl project we don’t have any grand scheme or roadmap for the coming years. We work much more short-term. We stick to the scope: Internet transfers specified as URLs. The products should be rock solid and secure. They should be highly performant. We should offer the features, knobs and levers our users need to keep doing internet transfers now and in the future.

curl is never done. The development pace doesn’t slow down and the list of things to work on doesn’t shrink.

curl is 18 years old tomorrow

Another notch on the wall as we’ve reached the esteemed age of 18 years in the cURL project. 9 releases were shipped since our last birthday and we managed to fix no less than a total of 457 bugs in that time.


On this single day in history…

20,000 persons will be visiting the web site, transferring over 4GB of data.

1.3 bug fixes will get pushed to the git repository (out of the 3 commits made)

300 git clones are made of the curl source tree, by 100 unique users.

4000 curl source archives will be downloaded from the curl web site

8 mails get posted on the curl mailing lists (at least one of them will be posted by me).

I will spend roughly 2 hours on curl related work. Mostly answering mail, bug reports and debugging, but also maintaining infrastructure, poke on the web site and if lucky, actually spending a few minutes writing new code.

Every human in the connected world will use at least one service, tool or application that runs curl.

Happy birthday to us all!

Stockholm from above

At my little party for my 40th birthday, I got a present from a few awesome friends: a flight over Stockholm by helicopter. On August 19th 2011 it was made into reality and I spent roughly 20 minutes in the air. I took a (shaky) movie of the tour that you can see below. Enjoy.

Thanks Grönros, Ericsson and Feltzing!

I had the seat to the left of the pilot and a spectacular view of everything both ahead and to the left. The ride was “shaky” and you could really feel the wind affect the little thing. The weather was sunny and 20-21 degrees Celsius, a perfect day for this.

To really make it a day, I also opened up and had a sip from my Smokehead Extra Black that I received at the same time as the helicopter ride. It was similarly super!

I took the video with my simple Fujifilm FinePix F100fd camera, and I edited it with OpenShot, which I had never used before. I found it to be a nice experience and I’m likely to use that tool again. I also learned that if you upload a 1.2GB video to YouTube that is longer than 15 minutes, it will let you spend a long time uploading it, it will convert it, it will give you a link to it, and then when you view that link… it says the video was too long so you can’t see it!