Category Archives: cURL and libcurl

curl and/or libcurl related

The curl roadmap webinar 2025

On March 6 2025 at 18:00 UTC, I am doing a curl roadmap webinar, talking through a set of features and things I want to see happen in curl during 2025.

Figure out my local time.

This is an opportunity for you to both learn about the plans but also to provide feedback on said ideas. The roadmap is not carved in stone, nor is it a promise that these things truly will happen. Things and plans might change through the year. We might change our minds. The world might change.

The event will be done in parallel over twitch and Zoom. To join the Zoom version you need to sign up for it in advance. To join the live-streamed twitch version, you just need to show up.

Sign up here for Zoom

You will be able to ask questions and provide comments using either channel.

A second curl distro meeting 2025

We are doing a rerun of last year’s successful curl + distro online meeting. A two-hour discussion, meeting, workshop for curl developers and curl distro maintainers. Maybe this is the start of a new tradition?

2025 event details

Last year we had what I think was a very productive meeting that led to several good outcomes. In fact, just seeing some faces and hearing voices of the people involved is good and helps enhance communication and cooperation going forward.

The objective for the meeting is to make curl better in distros. To make distros do better curl. To improve curl in all and every way we think we can, together.

Everyone who feels this is a subject they care about is welcome to join. The meeting is planned to take place in the early evening European time, early morning US west coast time, in the hope that this covers a large enough number of curl-interested people.

The plan is to do this on April 10, and all the details, planning and discussion items are kept on the dedicated wiki page for the event.

Feel free to add your own proposed discussion items, and if you feel inclined, add yourself as an intended participant. Feel free to help make this invite reach the proper people.

See you in April.

curl website traffic Feb 2025

Data without logs sure leaves us open to speculation.

I took a quick look at what the curl.se website traffic situation looks like right now. Just as a curiosity.

Disclaimer: we don’t log website visitors at all and we don’t run any web analytics on the site, so we basically don’t know a lot about who does what on the site. This is done both for privacy reasons and for practical ones: managing logs for this setup is work I would rather avoid doing and paying for.

What we do have is a website that is hosted (fronted) by Fastly on their CDN network, and as part of that setup we have an admin interface that offers accumulated traffic data. We get some numbers, but without details and specifics.

Bandwidth

Over the last month, the site served 62.95 TB, which averages out to over 2 TB/day. On the most active day in the period it sent away 3.41 TB.

Requests

At 12.43 billion requests, that makes the average request transfer size 5,568 bytes.
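As a quick sanity check, that average falls out of the two figures above, assuming the quoted "TB" is binary (TiB); a small sketch:

```python
# Average request size from total bandwidth and request count.
# Assumes the quoted "TB" is binary (TiB); figures from the Fastly stats above.
TIB = 1024 ** 4
total_bytes = 62.95 * TIB
requests = 12.43e9
avg = total_bytes / requests
print(f"{avg:.0f} bytes per request on average")  # roughly 5568
```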

Downloads

Since we don’t have logs, we can’t count curl downloads perfectly. But we do have stats on request frequency for objects of different sizes on the site, and in the 1MB-10MB category we basically only have curl tarballs.

1.12 million such objects were downloaded over the last month. 37,000 downloads per day, or about one curl tarball downloaded every other second around the clock.

Of course most curl users never download it from curl.se. The source archives are also offered from github.com and users typically download curl from their distro or get it installed with their operating system etc.

But…?

The average curl tarball size from the last 22 releases is 4,182,317 bytes. 3.99 MB.

1.12 million x 3.99 MB is only 4,362 gigabytes. Not even ten percent of the total traffic.

Even if we count the average size of only the zip archives from recent releases, 6,603,978 bytes, it only makes 6,888 gigabytes in total. Far away from the almost 63 terabytes total amount.
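The arithmetic behind those two totals can be redone quickly; this sketch just repeats the multiplications with the numbers from above:

```python
# Redoing the download-volume estimates from the post.
GIB = 1024 ** 3
downloads = 1.12e6             # tarball-sized objects served last month
tarball_avg = 4_182_317        # bytes, average over the last 22 releases
zip_avg = 6_603_978            # bytes, average zip archive size
tarball_total = downloads * tarball_avg / GIB   # ~4,362 GB
zip_total = downloads * zip_avg / GIB           # ~6,888 GB
print(f"tarballs: {tarball_total:,.0f} GB, zips: {zip_total:,.0f} GB")
```

Either way, the result is a small fraction of the almost 63 TB total.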

This, combined with low average transfer size per request, seems to indicate that other things are transferred off the site at fairly extreme volumes.

Origin offload

99.77% of all requests were served by the CDN without reaching the origin site. I suppose this is one of the benefits of having a mostly static site without cookies and dynamic choices. It allows a really high degree of cache hits, with content served directly from the CDN servers, leaving our origin server with only a light load.

Regions

Fastly is a CDN with access points distributed over the globe, and the curl website is anycasted, so the theory is that users access servers near them, in the same region. If we assume this works, we can see where most of the traffic to the curl website comes from.

The top-3:

  1. North America – 48% of the bandwidth
  2. Europe – 24%
  3. Asia – 22%

TLS

Now I’m not the expert on how exactly the TLS protocol negotiation works with Fastly, so I’m guessing a little here.

It is striking that 99% of the traffic uses TLS 1.2. It seems to imply that a vast amount of it is not browser-based, as all popular browsers these days mostly negotiate TLS 1.3.

HTTP

Seemingly agreeing with my TLS analysis, the HTTP version distribution also seems to point to a vast amount of the traffic not coming from humans in front of browsers. Browsers prefer HTTP/3 these days, and fall back to HTTP/2 if that causes problems.

98.8% of the curl.se traffic uses HTTP/1, 1.1% use HTTP/2 and only the remaining tiny fraction of less than 0.1% uses HTTP/3.

Downloads by curl?

I have no idea how large a share of the downloads are actually done using curl. A fair share is my guess. The TLS + HTTP data imply a huge amount of bot traffic, but modern curl versions would at least negotiate HTTP/2 unless the users guiding them specifically opted not to.

What is all the traffic then?

In the past, we have seen rather extreme traffic volumes from China downloading the CA cert store we provide, but these days the traffic load seems to be fairly evenly distributed over the world. And over time.

According to the stats, objects in the 100KB-1MB range were downloaded 207.31 million times. That is bigger than our images on the site and smaller than the curl downloads: exactly the range for the CA cert PEM. The most recent one is 247 KB. It fits the reasoning.

A 247 KB file downloaded 207 million times equals 46 TB. Maybe that’s the explanation?
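That back-of-the-envelope figure checks out, again assuming binary terabytes:

```python
# The CA bundle theory: a 247 KB file downloaded 207.31 million times.
TIB = 1024 ** 4
total = 247_000 * 207.31e6        # bytes
print(f"{total / TIB:.1f} TB")    # roughly the 46 TB mentioned above
```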

Sponsored

The curl website hosting is graciously sponsored by Fastly.

Changing every line three times

Is there some magic that makes three, or even pi, the number of times you need to write a line of code for it to be good?

So what am I talking about? Let’s rewind and start with talking about writing code.

Let’s say you start out by writing a program that is exactly one hundred lines long, and you release your creation to the world. Every line in this program was written just once.

Then someone reports a bug, so you change some source code lines. Say you change ten lines, which is the equivalent of adding ten lines and removing ten lines. The total number of lines remains 100, but you have now written 110. The average line has then been changed 1.1 times.
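The accounting, as a trivial computation:

```python
lines_now = 100           # lines in the released program
lines_written = 100 + 10  # original lines plus ten rewritten ones
rate = lines_written / lines_now
print(rate)               # every line written 1.1 times, on average
```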

Over time, you come to change more lines and if the project survives you probably add new code too. A living software project that is maintained is bound to have had many more lines added than what is currently present in the current working source code branch.

Exactly how many more lines were added than what is currently present?

That is the question I asked myself regarding curl development. If we play with the thought that curl is a decently mature project, as it has been developed for over twenty-five years, maybe the number of times every line has been changed would tell us something?

By counting the number of added lines and comparing with how many lines of code are still present, we know how often lines are changed, on average. Sure, some lines, such as those in file headers, are probably rarely changed while others are changed all the time, but let’s smear out the data and just count the average.

curl is also an interesting test subject here because it has grown significantly over time. It started out as 180 lines of code in 1996 (then called httpget) and is almost 180,000 lines of code today in early 2025. An almost linear growth in number of lines of code over time, while at the same time having a fierce rate of bugfixes and corrections done.

I narrowed this research to all the product code only, so it does not include test cases, documentation, examples etc. I figured that would be the most interesting bits.

Number of lines of code

First, a look at the raw numbers: how many lines of product code are present at different moments in the project’s history.

Added LOC per LOC still present

Then, counting the number of added lines of code (LOC) and comparing with how many are still present. As you can see here, the change rate stays around three for a surprisingly long time.

Already by 2004 we had modified every line on average three times. The rate of changes then goes up and down a little but remains roughly three for years, until around 2015 when it started to gradually increase, reaching 3.5 in early 2025, while at the same time the number of lines of code in the project kept growing.

Today, February 18 2025, actually marks the day when the number was calculated to be above 3.5 for the first time ever.

What does it mean?

It means that every line in the product source code tree has by now been edited on average 3.5 times. It might be that we have written bad code and need to fix many bugs, or that we go back to refactor and improve existing lines frequently. Probably both.

Of course, some lines are edited and changed far more often than others; the 3.5 is just an average. We still have some source lines in the code that were brought in before the year 2000 and have not been changed since.

OpenSSL does a QUIC API

But will it satisfy the world?

I have blogged several times in the past about how OpenSSL decided to not ship the API for QUIC a long time ago, even though the entire HTTP world was hoping for it – or even expecting it. OpenSSL rejected the proposal to merge the proposed API and thereby implicitly decided to obstruct wide QUIC and HTTP/3 adoption outside browsers.

The OpenSSL team instead proclaimed that their ambition and goal was to implement their own QUIC stack and offer that to users. The OpenSSL team took a long time to implement it, but has shipped their own stack implementation and API since OpenSSL 3.2 – first released in November 2023.

Lagging behind

In the curl project we have been on top of this game all the way. We made curl capable of using OpenSSL-QUIC as a backend for QUIC (using nghttp3 for the HTTP/3 parts) as soon as that arrived. We immediately reported obvious flaws and omissions in their API and we have worked with the OpenSSL team over time as they have slowly but gradually addressed most (but not all) of our concerns.

Lots of other QUIC stack implementations have spent years in beta state, working out and polishing their implementations and APIs. OpenSSL went fairly quickly into shipping something they say can be used in production.

OpenSSL-QUIC considered experimental

We still consider the curl backend using the OpenSSL-QUIC implementation as experimental and discourage users from using such builds in production. Now primarily because it is a performance and resource hog compared to the competition.

curl using the OpenSSL-QUIC implementation has been measured to be up to 4 times (!) slower than curl using ngtcp2, while using up to 25 times (!) the amount of memory.

The API is different

The API OpenSSL finally merged on February 10, 2025 is however not the exact same API that was proposed many years ago; the API that BoringSSL, quictls and the others have been offering for many years now. It is different, and because of this the QUIC implementations that want to use it need to adapt specifically for it and cannot just interchangeably switch between the many OpenSSL forks. It uses a pull concept, versus the push model the others provide.

Additionally, the pull request mentions: At the moment our QUIC stack does not support early data. That is a significant missing feature compared to what the OpenSSL forks (and other libraries) support.

When I asked the OpenSSL team about how they came to ship such a different API to what everyone else offers and what QUIC implementers have been asking for, the given explanation was:

– The API is layered on what we use internally to plug in our QUIC client and server implementations in a clean manner […] We did get some feedback from other QUIC stacks – but the proof will be in actual usage which we expect will occur most likely after the 3.5 release is out.

Rumor has it that they spoke to four QUIC stack authors and that all of them had planned to support it.

I don’t know which four stacks this was, and I’m curious to see this happen. Most QUIC stacks are not written to handle different TLS libraries with different APIs, so they would either have to add that support now or switch to a different API altogether. Either way, not a trivial undertaking.

Also, “planning to support it” is easy to say. It could also just mean at some point in a distant future.

ngtcp2 is the world’s generic QUIC stack

There are multiple QUIC implementations out there, but the only one that has had the idea of being TLS library independent from its start is ngtcp2.

Since curl supports ngtcp2 for QUIC, it can then also work with any of the TLS libraries ngtcp2 supports: BoringSSL, AWS-LC, quictls, wolfSSL, GnuTLS. This fits the curl mindset very well.

ngtcp2 is also the only QUIC stack curl supports right now that is not considered experimental.

Could in theory ease HTTP/3 adoption

Assuming the API works, and assuming ngtcp2 can make good use of it, this unexpected change of attitude could be the move that suddenly and for real makes HTTP/3 adoption in and with curl take off. OpenSSL has a firm grip on the TLS library landscape (in particular in the Open Source realm) and a huge share of curl users use it. Starting with the pending OpenSSL version 3.5 in the spring of 2025, building curl with OpenSSL could then possibly enable HTTP/3.

So will it? I asked a lead developer of the ngtcp2 library if they are going to work on adding an adaptation for the new OpenSSL 3.5 API.

– I have no plan to do that. OpenSSL QUIC API lacks 0rtt support and its pulling crypto data is not ideal for me, other TLS stacks do not do that. Basically, the current state is inferior than what we have proposed 6 years ago. I will revisit this after OpenSSL adds 0rtt support and revise its pull model.

(Clearly they were not one of the four stack authors OpenSSL talked to.)

While this does of course not prevent someone else from doing the work, even though the 0RTT limitation is something that primarily has to be addressed in the OpenSSL API itself, it might imply that it will not happen immediately.

In my opinion, it could be useful and educational if the OpenSSL project themselves wrote that adaptation for ngtcp2, to “dogfood” their own API.

curl

An attempt to illustrate the QUIC stack situation in current curl is shown below.

Currently curl supports four backends, out of which one (msh3) is scheduled for removal later this year and one (the kernel module version) is still only proposed for future inclusion – waiting for the kernel module to actually get adopted into the Linux kernel.

This leaves three actively developed backends, out of which one (ngtcp2) is the one we recommend and push for. The quiche and the OpenSSL-QUIC ones are still experimental.

The new OpenSSL API I discuss in this blog post is the one that would populate the lower right box, providing a TLS API for the QUIC library running on top of it. Hence me putting the red “happening?” text for this puzzle piece.

The red column second from the right is the OpenSSL-QUIC solution, using OpenSSL’s QUIC implementation.

Early days

It has not even been a week since this new API was merged into OpenSSL’s git repository, so it is far too early to give any predictions. Presumably it won’t even be used much for real by others until it gets shipped in a public release, planned to happen in April 2025.

Update

On February 26, there was another OpenSSL update: the QUIC API now offers 0-RTT support and they say this will be part of what ships in 3.5.

curl 8.12.1

This is a quick follow-up patch release due to the number of ugly regressions in the 8.12.0 release.

Release presentation

Numbers

the 265th release
0 changes
8 days (total: 9,827)

65 bugfixes (total: 11,428)
67 commits (total: 34,180)
0 new public libcurl function (total: 96)
0 new curl_easy_setopt() option (total: 306)

0 new curl command line option (total: 267)
25 contributors, 14 new (total: 3,332)
34 authors, 18 new (total: 1,341)
0 security fixes (total: 164)

Bugfixes

libcurl

  • asyn-thread: fix build with CURL_DISABLE_SOCKETPAIR
  • asyn-thread: fix the returned bitmask from Curl_resolver_getsock
  • asyn-thread: survive a c-ares/HTTPSRR channel set to NULL
  • content_encoding: #error on too old zlib
  • imap/pop3/smtp: TLS upgrade fixes
  • include necessary headers for inet_ntop/inet_pton
  • drop support for libssh older than 0.9.0
  • netrc: return code cleanup, fix missing file error
  • openssl-quic: ignore ciphers for h3
  • openssl: fix out of scope variables in goto
  • vtls: fix multissl-init
  • vtls: eliminate ‘data->state.ssl_scache’
  • wakeup_write: make sure the eventfd write sends eight bytes

tool

  • tool_ssls: switch to tool-specific get_line function

scripts

  • build: add tool_hugehelp.c into IBMi build
  • configure/cmake: check for realpath
  • configure/cmake: set asyn-rr a feature only if httpsrr is enabled
  • runtests: fix the disabling of the memory tracking
  • runtests: quote commands to support paths with spaces

docs

  • CURLOPT_SSH_KNOWNHOSTS.md: strongly recommend using this
  • CURLSHOPT_SHARE.md: adjust for the new SSL session cache
  • SPONSORS.md: clarify that we don’t promise goods or services

disabling cert checks: we have not learned much

And by that I mean the global “we” as in the world of developers.

In the beginning there was SSL

When I first learned about SSL and how to use it in the mid to late 1990s, it took me a while to realize and understand the critical importance of having the client verify the server’s certificate in the handshake. Once I had understood, we made sure that curl would default to doing the check correctly and refuse to connect if the certificate check fails.

Since curl and libcurl 7.10 (released in October 2002) we verify server certificates by default. Today, more than twenty-two years later, there should realistically be virtually no users left using a curl version that does not verify certificates by default.

What’s verifying

The standard way to verify a TLS server certificate is by A) checking that it is signed by a trusted certificate authority (CA) and B) that the cert was created for the thing you interact with; that the domain name is listed in the certificate.

Optionally, you can opt to “pin” a certificate which then verifies that the certificate is the one that corresponds to a specific hash. This is generally considered more fragile but avoids the use of a “CA store” (a set of certificates for the certificate authorities “you” trust) needed to verify the digital signature of the server certificate.
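The same two checks exist in most TLS stacks. As an illustration (using Python's standard ssl module here rather than libcurl, but the concepts map directly), a default context enforces both, and disabling them takes a deliberate act:

```python
import ssl

# The default context performs both checks described above:
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # A) signed by a trusted CA
assert ctx.check_hostname is True            # B) name matches the certificate

# Disabling them (the insecure path, shown only for contrast):
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```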

Skipping means insecure

Skipping the certificate verification makes the connection insecure, because if you do not verify, there is nothing preventing a middle-man from sitting between you and the real server, or from simply faking being the real server.

Challenges

If you try to use the production site’s certificate in your development environment, you might connect to the server using a different name and then the verification fails.

If you have an active middle man intercepting and wanting to snoop on the TLS traffic, it needs to provide a different certificate and unless that can get signed by a CA you trust, the verification fails.

If you have an outdated or maybe no CA store at all, then the verification fails.

If the server does not update its certificate correctly, it might expire and then the verification fails. Similarly, in order to do a correct verification your client needs a clock that is at least roughly in sync with reality or the verification might fail.

Verification also takes more time compared to just skipping the entire step. Sometimes, and to some, that is weirdly enticing.

And yet all curl and libcurl documentation for this strongly discourages users from disabling the check.

A libcurl timeline

curl added support for SSL in April 1998 (before the protocol was renamed TLS). curl has made certificate checks by default since 2002, both in the tool and the library. At the time, I felt I was a little slow to react, but at least we finally made sure that curl users would do this check by default.

Ten years later, in October 2012, there was a paper published called The most dangerous code in the world, in which the authors insisted that the widespread problem of applications not verifying TLS certificates with libcurl was because This interface is almost perversely bad. The problem was apparently libcurl’s API.

The same “fact” would be repeated later, for example in this 2014 presentation saying that this is our fault because the API (for PHP) looks like it takes a boolean when in reality it did not.

The libcurl API for this

I do not claim that we have the best API in libcurl, but I can say that extremely few libraries can boast an API and ABI stability that comes even close to ours. We have not broken the ABI since 2006. We don’t mind carrying a world on our shoulders that has learned to depend on this and on us. So we don’t change the API, even though it could have been done a little better.

CURLOPT_SSL_VERIFYPEER is a boolean option to ask for server certificate verification against the CA store. It is set TRUE by default, so an application needs to set it to FALSE (0) to disable the check. This option works together with the next one.

CURLOPT_SSL_VERIFYHOST is a separate option to verify that the name embedded in the certificate matches the name in the URL (basically). This option was never a boolean but accepts a number: 0 disables the check and 2 asks for the maximum check level, with 2 being the default.

Both options are thus by default set to verify, and an application can lessen the checks by changing one or both of them.
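A minimal sketch of what this looks like through a binding; this assumes the pycurl binding, which exposes the two options under the same names and with the same defaults:

```python
# Sketch using the pycurl binding (assumed installed); not a complete
# transfer, just showing how the two verification options are set.
import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/")
# Both checks are on by default; setting them explicitly is a no-op:
c.setopt(pycurl.SSL_VERIFYPEER, 1)   # verify against the CA store
c.setopt(pycurl.SSL_VERIFYHOST, 2)   # verify the name in the certificate
# The discouraged equivalent of `curl --insecure` would instead be:
#   c.setopt(pycurl.SSL_VERIFYPEER, 0)
#   c.setopt(pycurl.SSL_VERIFYHOST, 0)
c.close()
```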

Adaptations

After that most dangerous paper was published in 2012, basically saying we were worthless, without the authors ever telling us or submitting an issue or pull-request, we changed how CURLOPT_SSL_VERIFYHOST worked in the 7.28.1 release, shipped in December 2012.

Starting then, we made setting the option to 1 an error (and it would just leave the original value untouched). Before that update, setting VERIFYHOST to 1 was a debug-like mode that made libcurl output warnings on mismatches but still let the connection through. A silly mode to offer.

In 2019 we tweaked the VERIFYHOST handling a little further and made the value 1 and 2 do the same thing: verify the name.

I have no idea what the authors of that 2012 paper would think about this API tweak, but at least the options are now two proper booleans.

I did not think the authors were right when they originally published that paper, and yet we improved the API a little. I dare to claim that the problem of disabled certificate checks is not caused by a bad libcurl API.

curl

The curl tool is of course a libcurl-using application, and it offers the --insecure (-k) option which, when used, switches off both of the libcurl options mentioned above. Its use beyond testing and triaging is also strongly discouraged.

Other layers on top

libcurl is itself used by a lot of frameworks and languages that expose the options to their respective users. Often they then even use the same option names. We have over 60 documented language bindings for libcurl.

For example, the PHP/CURL binding is extremely popular and well used and it has the options provided and exposed using the exact same names, values and behavior.

Disabling the checks

More than twenty-two years of having this enabled by default. More than twelve years since the most dangerous paper. After countless articles on the topic. Everyone I talk to knows that we all must verify certificates.

In almost all cases, you can fix the failed verification the proper way instead of disabling the check. It is just usually a little more work.

State of checks using libcurl today

I searched GitHub on February 10 2025 for “CURLOPT_SSL_VERIFYPEER, FALSE” and it quickly showed me some 140,000 matching repositories. Sure, not all of these matches are bad uses: the check can be disabled conditionally, other bindings use different option names that this search does not catch, and some of the matches might use pinning, which this simple search term also misses.

Searching for “CURLOPT_SSL_VERIFYPEER, 0” shows 153,000 additional matches.

A quick walk-through shows that there is a lot of genuine, mostly sloppy, certificate-check-disabling curl code among these matches.

We could fool ourselves into thinking that the state of certificate check disabling is better in modern software in wide use, made by big teams.

A quick CVE search immediately found several security vulnerabilities for exactly this problem published only last year:

  • CVE-2024-32928 – The libcurl CURLOPT_SSL_VERIFYPEER option was disabled on a subset of requests made by Nest production devices.
  • CVE-2024-56521 – An issue was discovered in TCPDF before 6.8.0. If libcurl is used, CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER are set unsafely.
  • CVE-2024-5261 – In affected versions of Collabora Online, in LibreOfficeKit, curl’s TLS certificate verification was disabled (CURLOPT_SSL_VERIFYPEER of false).

If I was into bug-bounties, I would know where to start.

What do we do?

Clearly, this is work that never gets complete or done. It might arguably even get worse as the volume of software grows. We need to keep telling people to fix this. To stop encouraging others to do wrong. Lead by good example. Provide good documentation and snippets for people to copy from.

I took a very tiny step and reported a bug against documentation that seemed to encourage the disabling. If we all submit a bug or two when we see these problems, things might gradually improve.

When/if you submit bug reports as well, please remember to stay polite, friendly and to the point. Explain why disabling the check is bad. Why keeping the check is good.

Rinse and repeat. Until the end of time.

curl 8.12.0

Release presentation

Numbers

the 264th release
8 changes
56 days (total: 9,819)

244 bugfixes (total: 11,417)
367 commits (total: 34,180)
2 new public libcurl function (total: 96)
0 new curl_easy_setopt() option (total: 306)

1 new curl command line option (total: 267)
65 contributors, 34 new (total: 3,332)
34 authors, 18 new (total: 1,341)
3 security fixes (total: 164)

Security

CVE-2025-0167: netrc and default credential leak. When asked to use a .netrc file for credentials and to follow HTTP redirects, curl could leak the password used for the first host to the followed-to host under certain circumstances. This flaw only manifests itself if the netrc file has a default entry that omits both login and password. A rare circumstance.

CVE-2025-0665: eventfd double close. libcurl would wrongly close the same file descriptor twice when taking down a connection channel after having completed a threaded name resolve.

CVE-2025-0725: gzip integer overflow. When libcurl is asked to perform automatic gzip decompression of content-encoded HTTP responses with the CURLOPT_ACCEPT_ENCODING option, using zlib 1.2.0.3 or older, an attacker-controlled integer overflow would make libcurl perform a buffer overflow. There should be virtually no users left using such an old and vulnerable zlib version.

Changes

  • curl: add byte range support to –variable reading from file
  • curl: make –etag-save acknowledge –create-dirs
  • curl: add ‘time_queue’ variable to -w
  • getinfo: provide info which auth was used for HTTP and proxy:
  • openssl: add support to use keys and certificates from PKCS#11 provider
  • QUIC: 0RTT for gnutls via CURLSSLOPT_EARLYDATA
  • vtls: feature ssls-export for SSL session im-/export
  • hyper: dropped support

Bugfixes

Some of the bugfixes to highlight.

libcurl

  • acknowledge CURLOPT_DNS_SERVERS set to NULL
  • fix CURLOPT_CURLU override logic
  • initial HTTPS RR resolve support
  • ban use of sscanf()
  • conncache: count shutdowns against host and max limits
  • support use of custom libzstd memory functions
  • cap cookie expire times to 400 days
  • parse only the exact cookie expire date
  • include the shutdown connections in the set curl_multi_fdset returns
  • easy_lock: use Sleep(1) for thread yield on old Windows
  • ECH: update APIs to those agreed with OpenSSL maintainers
  • fix ‘time_appconnect’ for early data with GnuTLS
  • HTTP/2 and HTTP/3: strip TE request header
  • mbedtls: fix handling of blocked sends
  • mime: explicitly rewind subparts at attachment time.
  • fix mprintf integer handling in float precision
  • terminate snprintf output on windows
  • fix curl_multi_waitfds reporting of fd_count
  • fix return code for an already-removed easy handle from multi handle
  • add an ssl_scache to the multi handle
  • auto-enable OPENSSL_COEXIST for wolfSSL + OpenSSL builds
  • use SSL_poll to determine writeability of OpenSSL QUIC streams
  • free certificate on error with Secure Transport
  • fix redirect handling to a new fragment or query (only)
  • return “IDN” feature set for winidn and appleidn

scripts

  • numerous cmake improvements
  • scripts/mdlinkcheck: markdown link checker

curl tool

  • return error if etag options are used with multiple URLs
  • accept digits in –form type= strings
  • make –etag-compare accept a non-existing file

docs

  • add INFRASTRUCTURE.md describing project infra

Next

The next release is probably going to be curl 8.13.0 and if things go well, it ships on April 2, 2025.

European Open Source Achievement Award

I have been awarded the European Open Source Achievement Award! Proud, happy and humble I decided to accept it, as well as the associated nomination for president of the new European Open Source Academy for the coming two years.

This information was not made public until the very same day of the award ceremony, on January 30th 2025, so I was not able to talk about it before. Then FOSDEM kept me occupied the days immediately following.

Official letter

Dear Mr. Daniel Stenberg,

On behalf of the OSAwards.eu initiative, it is our great honour to invite you to receive the European Open Source Achievement Award, on the occasion of the inaugural ceremony of the European Open Source Awards to be held in Brussels on Thursday, 30 January 2025 at 18:30.

In recognition of your leadership quality, we would also like to extend this invitation for you to join the European Open Source Academy in the quality of Academy President, for a two year tenure. As Academy President you will play a critical role in guiding the establishment and reputability of the European Open Source Academy and its annual European Open Source Awards. You can find more information about the provisional structure of the European Open Source Academy and expected involvement from founding members in the attached project brief.

This inaugural award, corresponding with an invitation to the European Open Source Academy, recognises your exceptional contribution as a European open source leader whose impact has transformed the European and global technological landscape, and whose engagement has highly contributed to a thriving open source community in Europe. Your founding and continuous contribution to cURL has had a tremendous impact on the global and European technological landscape thanks to its innovative nature. We also want to recognise your continuous commitment to open source maintenance and knowledge sharing, positioning you as a leading and respected figure in the European open source community.

The inaugural ceremony will be a formal event followed by a gala cocktail reception. You are welcome to bring a guest with you, please let us know their name for us to add them to the guest list. You can find more information about the programme in the attached concept note. Logistical information for the ceremony will be shared upon confirmation of your attendance.

Thank you for considering our invitation to receive the European Open Source Achievement award and to join the European Open Source Academy. We hope that you will accept this testimony of recognition and we remain available should you have any questions.

On the award

I have worked on and with Open Source for some thirty years. I believe in the model, I like the community, I enjoy the challenges. Some of my work in Open Source has been successful beyond my wildest dreams.

Getting recognition for my work in the wider world outside the inner circle is huge. The many thousands of hours of staring at screens, debugging code, tearing my hair and silently yelling at my past self for not writing better comments do actually sometimes produce something useful.

There are many awesome people in the European Open Source universe. I can only imagine the struggle the award committee had to select a single awardee.

Thank you!

My wife Anja joined me in Brussels and we attended the award gala dinner there on January 30th, 2025, when I was handed the actual physical award. That was the Thursday just before the FOSDEM weekend. I sported an extra large smile on my face during the entire FOSDEM conference that followed.

The actual physical award trophy is shown off in a little video below.

On the academy

I believe Open Source has been an immense success story over the last three decades and it has become what is essentially the foundation of all current digital infrastructure. I am convinced that Europeans are already well positioned in this ecosystem, but we should not lean back and think that anything is done or over. We need to stay on our toes and rather strengthen and reinforce Open Source, our participation in it and our understanding of it – and we need to fund it. This is where a lot of important software is made and controlled. We need to make sure that the EU leadership understands this.

Those are examples of what I hope this European Open Source Academy can help out with.

My role as president of the academy is not going to be a time-sink. I cannot allow myself that. I have curl work to do and I remain a full-time lead developer of curl. I will be president on the side, for a limited number of hours per month.

Next

It never gets old or boring to get awards, so even if I have been given a whole range of truly fabulous awards by now, every new one still makes me humble and super excited.

Getting recognition, awards and thank yous are superb ways to boost energy and motivation – I highly recommend it. I am totally set on continuing my work on curl and other Open Source for many more years to come.

I want to lead by example. I aspire to be the Open Source person I myself looked for and tried to mimic when I was younger.

Showing off

Some photos

Next to me you can also see the three additional awardees: Amandine Le Pape (The Business & Impact Award), Lydia Pintscher (The Advocacy & Awareness Award), and David Cuartielles (The Skills & Education Award). Awesome people.

The person handing me the award, seen on the photos, is Omar Mohsine, open source coordinator at the United Nations Office for Digital and Emerging Technologies.

Update

The seven stars on the OSAwards logo symbolize the core principles and values of open source software. They represent collaboration, transparency, community, innovation, freedom, diversity, and inclusivity.

A 1337 curl author

For quite some time now, I have celebrated and welcomed every new commit author in the curl project in public. These days, that means I send out a toot on Mastodon saying welcome so and so as curl commit author number XYZ, with a link to their initial curl work (example 1, example 2).

This messaging is not done automatically. GitHub helps out by specifically mentioning in a PR when it is done by a first-timer to the repository, and I have a convenient local script that tells me how many authors we have so far, and then I type up the message myself and send it. (Sometimes I miss one, which I regret.)

This process takes me about seven point five seconds of manual labor per case. Writing an automated script to do this correctly, triggered only for the right people, would take longer than the equivalent of many years' worth of new authors.

For the last few months, people have increasingly noticed and replied to these mentions, pointing out that we were approaching commit author number 1337. Lots of people have said things in the style of “I should learn to program soon so that I can become number 1337”.

The number 1337 is of course just a number. I find it amusing and charming that it seems to have this almost magic aura and attraction for so many people in our community.

Today, commit author 1337 was finally announced, only three years after we announced author 1000. There is no permanent record of this fact other than this blog post. Further, there is a risk that we have a duplicate or two somewhere in there, so a recount at a later time might end up differently.

Commit author 1337 became Michael Schuster who wrote this pull request, which fixed a minor build issue in the mbedTLS backend code. Thanks!

337 new authors over the last three years equals roughly two new commit authors per week on average. Pretty good. We have room for many more!

A relevant statistic in this context is also that 65% of all commit authors only ever authored a single commit.
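Statistics like these are straightforward to compute from the output of `git shortlog -s -e`, which lists one commit count and author per line. The sketch below is a minimal, hypothetical illustration of the idea: the author names and numbers are made up, and the real script in the curl project may of course work entirely differently.

```python
from collections import Counter

# Hypothetical "git shortlog -s -e" output. These names and
# counts are made up for illustration only.
shortlog_output = """\
   200\tAlice Example <alice@example.com>
     1\tBob Example <bob@example.com>
     1\tCarol Example <carol@example.com>
     5\tDave Example <dave@example.com>
"""

def author_stats(shortlog: str) -> tuple[int, float]:
    """Return (total commit authors, share who authored exactly one commit)."""
    counts = Counter()
    for line in shortlog.splitlines():
        # Each line is "<count>\t<author>".
        count, author = line.strip().split("\t", 1)
        counts[author] = int(count)
    total = len(counts)
    single = sum(1 for c in counts.values() if c == 1)
    return total, single / total

total, single_share = author_stats(shortlog_output)
print(total)         # 4 authors in this made-up sample
print(single_share)  # 0.5, i.e. half of them authored a single commit
```

On a real checkout, one would feed the function the actual `git shortlog -s -e HEAD` output instead of the sample string.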

Now, let’s go for author two thousand next…