Category Archives: Open Source

Open Source, Free Software, and similar

Fun multipart/form-data inconsistencies

I still remember the RFC number off the top of my head for the first multipart formdata spec that I implemented support for in curl: RFC 1867. That support was added to curl in version 5.0, December 1998.

Multipart formdata is the name of the syntax HTTP clients use to send data in an HTTP POST when they want to send binary content and multiple fields. Perhaps the most common use case is uploading a file or an image from a browser to a website. This is also what you fire off with curl's -F command line option.
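
To illustrate, here is a simplified sketch of roughly what such a POST can look like on the wire when a single file is uploaded. Real boundary strings are long and randomized, and a Content-Length header would normally be present too:

```
POST /upload HTTP/1.1
Host: example.com
Content-Type: multipart/form-data; boundary=abc123

--abc123
Content-Disposition: form-data; name="file"; filename="photo.jpg"
Content-Type: image/jpeg

...binary image data here...
--abc123--
```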

RFC 1867 was published in November 1995 and it has subsequently been updated several times. The most recent incarnation for this spec is now known as RFC 7578, published in July 2015. Twenty years of history, experiences and minor adjustments. How do they affect us?

I admit to having dozed off a little at the wheel: I hadn't really paid attention to the little tweaks that had slowly been happening in the multipart formdata world until Ryan Sleevi woke me up.

Percent-encoding

While I wasn’t looking, the popular browsers have now all switched over to a different encoding style for field and file names within the format. They now all use percent-encoding, where we originally all used to do backslash-escaping! I haven’t actually bothered to check exactly when they switched, primarily because I don’t think it matters terribly much. They do this because this is now the defined syntax in WHATWG’s HTML spec. Yes, another case of separate specs diverging and saying different things for what is essentially the same format.
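
As an illustration (my own sketch, not an example taken from either spec): if a form field is named `field"name`, the two styles serialize the Content-Disposition header differently. The WHATWG spec says to percent-encode the double quote (as well as CR and LF) in names and file names, where the old style backslash-escapes the quote:

```
Content-Disposition: form-data; name="field\"name"    (old backslash-escaping)
Content-Disposition: form-data; name="field%22name"   (WHATWG percent-encoding)
```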

curl 7.80.0 is probably the last curl version to do the escaping the old way and we are likely to switch to the new way by default starting in the next release. The work on this is done by Patrick Monnerat in his PR.

For us in the curl project it is crucial to maintain behavior, but it is also important to be standard compliant and to interoperate with the big world out there. When all the major browsers have switched to percent-encoding I suspect it is only a matter of time until servers and server-side scripts start to assume and require percent-encoding. This is a tricky balancing act. There will be users who expect us to keep up with the browsers, but also some that expect us to maintain the way we’ve done it for almost twenty-three years…

libcurl users, at least, will be offered a way to switch back to the old escaping mechanism, so that applications that know they talk to older-style server decoders can keep working.
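
As a sketch of what that opt-out could look like for libcurl applications, here is the shape of it using the names proposed in the PR at the time of writing. The exact option and bit names may still change before it lands:

```c
/* assumes a CURL *curl handle created with curl_easy_init();
   option and bit names as proposed in the pending PR, subject to change */
curl_easy_setopt(curl, CURLOPT_MIME_OPTIONS, CURLMIMEOPT_FORMESCAPE);
```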

Landing

This blog post is made public before the PR mentioned above has been merged in order for people to express opinions and comments before it is done. The plan is to have it merged and ship in curl 7.81.0, to be released in January 2022.

16000 curl commits

Almost 14 months after I celebrated 15,000 commits in curl's source code repository, I have now passed 16,000 commits.

My commit number 16,000 was a minor man page fix.

The official gitstats page shows that I’ve committed changes on almost 4,600 separate days since the year 2000.

16,000 commits is 13,413 commits more than the person with the second-highest number of commits: Yang Tse (2,587 commits). He has however not committed anything in curl since he vanished from the project in 2013.

I have also done 4,700 commits in the curl-www repository, but that’s another story.

My first 25 years of HTTP

I like finding even or otherwise particularly aligned numbers and dates to celebrate. Here's another one: today marks the day when httpget 0.1 was released, back in 1996.

httpget 0.1 was a tiny command line tool written by Rafael Sagula. It was less than 300 lines of C code. (Today, the product code is 173,000 lines!)

I found httpget just days after it was released, when I was searching for a tool to download currency rates from an HTTP site. This was the time before Google existed, so I assume I used AltaVista or something. I can't actually remember.

The httpget source code was my initial step into the HTTP world. In many ways, reading that code opened my eyes.

Of course there were some issues with the code that I can't remember any longer, but I very soon sent Rafael my first patches to improve it. A few weeks later I took over maintenance of the little tool. The rest is documented elsewhere. The oldest httpget source code I still have around is httpget 1.3, released sometime between April and August 1997.

I have since been working with HTTP pretty much daily. For twenty-five years.

If someone tries to tell you that HTTP is an easy or simple protocol, don’t believe them.

The curl v8 plan

For a long time I have wanted to avoid us ever reaching curl version 7.100.0. I strongly suspect that going three digits in the minor number would cause misunderstandings and possibly even glitches in people's version-comparison scripts etc. If nothing else, it is just a very high number to use in a version string and I believe we would be better off starting over. Resetting the clock, so to speak.

Given that, a curl version 8.0.0 is inevitably going to happen, and since we do releases every 8 weeks and bump the version number in just about every release, there is a limited amount of time left to keep the minor number from reaching 100. We just shipped curl 7.80.0, so we have fewer than 20 release cycles left in the worst case; a few years.
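
For the record, the arithmetic behind that estimate: 20 more minor-number bumps at 8 weeks each makes roughly 160 weeks, or just over three years.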

A while ago it struck me that we have a rather big anniversary coming up, also within a few years, and that is curl’s 25th birthday.

Let's combine the two!

On March 20, 2023, curl turns 25 years old. On that same day, the plan is to release curl 8.0.0, the first major number bump in (by then) 23 years.

The major number bump will happen independently of which features we add or don't add, and independently of whether there are any new bells and whistles to celebrate or "just" a set of bug-fixes landed. As we always do, the release is time based and not feature based, but the unique property of this particular release is that it will also reset the minor version number and increase the major version number!

In the regular curl release cycles we always do releases every eight weeks on Wednesdays and if we ever adjust the cycle, we do it with full weeks and stick to releasing on Wednesdays.

March 20, 2023, however, is a Monday. curl 8.0.0 will therefore also be different in that it will not be released on a Wednesday. We will also have to adjust the cycle periods before and after this release, since they cannot then be eight-week cycles. The idea is to go back to Wednesdays and the regular cycles again after 8.0.0 has happened.

Credits

Image by Tanja Richter from Pixabay

curl 7.80.0 post quantum

Welcome to another curl release, 7 weeks since the previous one.

Release presentation

Numbers

the 204th release
6 changes
49 days (total: 8,636)
117 bug-fixes (total: 7,397)
198 commits (total: 27,866)
1 new public libcurl function (total: 86)
4 new curl_easy_setopt() options (total: 294)
1 new curl command line option (total: 243)
78 contributors, 44 new (total: 2,533)
50 authors, 28 new (total: 976)
0 security fixes (total: 111)
0 USD paid in Bug Bounties (total: 16,900 USD)

Changes

CURLOPT_MAXLIFETIME_CONN

This new option provides yet another knob for applications to control and limit connection reuse. Using this option, the application sets an upper limit that specifies that connections may not be older than this when being reused. It can for example be used to make sure connections don’t “get stuck” on one single specific load-balancer for extended periods of time.
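
A minimal sketch of how an application could use it; the 30 second limit is just an example value:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* never reuse a connection that is older than 30 seconds */
    curl_easy_setopt(curl, CURLOPT_MAXLIFETIME_CONN, 30L);
    curl_easy_perform(curl); /* first transfer creates the connection */
    curl_easy_perform(curl); /* reuses it only if it is young enough */
    curl_easy_cleanup(curl);
  }
  return 0;
}
```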

CURLOPT_PREREQFUNCTION

This new callback gets called immediately before the request is started (= “Pre req”). Gives the application a heads up and some details about what is just about to start.
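
Here is a rough sketch of how an application could hook it up. The callback receives the connection's IP addresses and ports, and can return CURL_PREREQFUNC_ABORT to stop the request before it starts:

```c
#include <stdio.h>
#include <curl/curl.h>

/* called just before the request is issued on the connection */
static int prereq_cb(void *clientp,
                     char *conn_primary_ip, char *conn_local_ip,
                     int conn_primary_port, int conn_local_port)
{
  (void)clientp;
  printf("about to request %s:%d (from %s:%d)\n",
         conn_primary_ip, conn_primary_port,
         conn_local_ip, conn_local_port);
  return CURL_PREREQFUNC_OK; /* or CURL_PREREQFUNC_ABORT to stop it */
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PREREQFUNCTION, prereq_cb);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```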

libssh2: SHA256 fingerprint support

This is a bump up from the previously supported MD5 version. It makes sure that curl connects to the correct host, with a cryptographically strong fingerprint check.
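
A sketch of what using it can look like. The value is the base64 encoding of the host key's SHA256 hash; the string below is a made-up placeholder, you would use the actual fingerprint of your own server's key:

```c
/* assumes an existing CURL *curl easy handle for an SFTP/SCP URL;
   the base64 string here is a placeholder, not a real fingerprint */
curl_easy_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_SHA256,
                 "NDVkMTRkMDVjY2EwMDMzZTAyNDk4NDM0Nzhh...");
```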

curl_url_strerror()

When a URL API function returns an error it does so using a CURLUcode type. Now there’s a function to convert this error code into an error message.
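
A minimal sketch of how that can look in application code:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *h = curl_url();
  /* deliberately broken URL to trigger a parser error */
  CURLUcode uc = curl_url_set(h, CURLUPART_URL, "https://[broken", 0);
  if(uc)
    fprintf(stderr, "URL problem: %s\n", curl_url_strerror(uc));
  curl_url_cleanup(h);
  return 0;
}
```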

support UNC paths in file: URLs on Windows

The URL parser now understands UNC paths when parsing URLs on Windows.

allow setting of groups/curves with wolfSSL

The wolfSSL backend now allows setting of specific curves for TLS 1.3 connections, which allows users to use post quantum algorithms if wolfSSL is built to support them!
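
If I read the change right, this goes through the existing CURLOPT_SSL_EC_CURVES option. A sketch of the shape of it; note that which group/curve names actually work depends entirely on how your wolfSSL library was built, so the list string below is only a placeholder:

```c
/* assumes an existing CURL *curl easy handle and a wolfSSL-backed libcurl;
   the colon-separated curve list is a placeholder, valid names depend
   on the wolfSSL build (post-quantum groups need liboqs support) */
curl_easy_setopt(curl, CURLOPT_SSL_EC_CURVES, "P-256:P-384");
```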

Bug-fixes

This is another release with more than one hundred individual bug-fixes, and here are a selected few I think might be worth highlighting.

more hyper work

I've done numerous small improvements in this cycle to take the hyper backend closer to becoming a solid member of the curl backend family.

print curl --help descriptions aligned right

When listing available options with --help or -h, the list now shows the descriptions right-aligned, which makes the output easier to read in my opinion. Illustration:

Showing off right-aligned --help descriptions

store remote IP address for QUIC connections too

HTTP/3 uses QUIC and with this bug fixed, the %{remote_ip} variable for --write-out works there as well – as you’d expect. This fixes the underlying CURLINFO_PRIMARY_IP option.
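
A sketch of the libcurl equivalent: asking for HTTP/3 and then reading back the remote IP after the transfer (this assumes an HTTP/3-enabled curl build):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    char *ip = NULL;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* ask for HTTP/3 over QUIC */
    curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_3);
    if(!curl_easy_perform(curl) &&
       !curl_easy_getinfo(curl, CURLINFO_PRIMARY_IP, &ip) && ip)
      printf("connected to %s\n", ip);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```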

reject HTTP response codes < 100

The HTTP response parser no longer accepts response code numbers below 100 as legitimate. The HTTP protocol has never specified any such codes, so this should not cause any problems. Lots of other HTTP clients already enforce this requirement too.

do not wait for writable socket if there’s no remote HTTP/2 window

If curl runs out of remote HTTP/2 window for a stream while uploading, i.e. the other end says it can't receive any data right now, curl would still wait for the socket to become writable, which would cause really bad busy-loops.

get libssh2 version at runtime

curl now asks libssh2 for its version string at runtime instead of showing the version that was used when the curl binary was built, as libssh2 might very well have been upgraded dynamically after the build!

require all man pages to use the same section headers in the same order

We tightened the bolts even more to make the libcurl documentation consistent. Now all libcurl man pages have to feature the same set of section headers in the same order, or tests fail. This includes a required example section. We also added an extra check to detect a common backslash-formatting mistake that we had previously made several times in man page examples.

NTLM: use DES_set_key_unchecked with OpenSSL

Turns out that the code implemented to use OpenSSL for NTLM authentication was using a function call that returns an error if a "bad" key is used. NTLM v1 being a very weak algorithm in general makes it very easy to end up calling the function with such a weak key, and then the NTLM authentication failed…

openssl: if verifypeer is not requested, skip the CA loading

If peer verification is disabled for a transfer when curl is built to use OpenSSL or one of its forks, curl will now completely skip the loading of the CA cert bundle. It was basically only used for being able to show in the verbose output if there was a match or not – that was then ignored anyway – and by skipping the load step the TLS handshake will use less memory and CPU resources.
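
In libcurl terms, this is the (already existing) knob that now also makes OpenSSL-backed builds skip the CA load:

```c
/* assumes an existing CURL *curl easy handle.
   WARNING: this disables certificate verification and makes the
   connection vulnerable to man-in-the-middle attacks */
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
```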

urlapi: URL decode percent-encoded host names

The URL parser did not accept percent-encoded host names. Now it does. Note however that libcurl will not by default percent-encode the host name when extracting a URL, for the purpose of keeping IDN names working. It's a little complicated.
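
A small sketch showing the parser change: a URL with a percent-encoded letter in the host name is now accepted where it previously was rejected:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *h = curl_url();
  char *host = NULL;
  /* "ex%61mple.com" is "example.com" with a percent-encoded 'a';
     parsing this used to return an error */
  if(!curl_url_set(h, CURLUPART_URL, "http://ex%61mple.com/", 0) &&
     !curl_url_get(h, CURLUPART_HOST, &host, 0)) {
    printf("host: %s\n", host);
    curl_free(host);
  }
  curl_url_cleanup(h);
  return 0;
}
```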

ngtcp2: use QUIC TLS RFC9001

We switched over to using the "real" QUIC identifier when setting up connections instead of the identifier made for a previous draft version of the protocol. curl still asks for h3-29 for HTTP/3 since that RFC has still not shipped, but it now includes h3 as well, since it turns out some servers assume plain h3 when the final QUIC v1 version is used for transport.

a failed etag save now only fails that transfer

Specifying a file name to save etags in will from now on only fail the transfers using that file. If you specify more transfers that use another file or don't use etags at all, those transfers can still get done.

Added test case for checksrc!

The custom tool we use for checking code style compliance, checksrc, has become quite advanced and now checks for a lot of source code details. When we once again improved it in this release cycle, we also added a test case for the tool itself, to make sure it maintains its functionality even as we keep improving it going forward!

Next

The next release is planned for January 5, 2022. We have several changes queued up as pull requests already so I’d say it is likely that it then will become version 7.81.0.

Support?

I offer commercial support and curl related contracting for you and your company!

a github wishlist

The curl project's source code has been hosted on GitHub since March 2010. I wrote a blog post in 2013 about what I missed from the service back then, and while it has improved significantly since, there are still features I miss and flaws I think can be fixed.

For this reason, I’ve created and now maintain a dedicated git repository with feedback “items” that I think could enhance GitHub when used by a project like curl:

bagder.github.io/github-feedback

Useful feedback

The purpose of this repository is to allow each entry to be on-point and good feedback to GitHub. I do not expect GitHub to implement any of them, but the better we can present the case for every issue, the more likely I think it is that we can gain supporters for them.

What makes curl “special”

I don’t think curl is a unique project in the way we run and host it. But there are a few characteristics that maybe make it stand out a little from many other projects hosted on GitHub:

  • curl is written in C, which means it cannot be part of the "dependency" tracking and related security checks etc that GitHub provides
  • we are git users but we are not GitHub exclusive: we allow participation in and contributions to the project without a GitHub presence
  • we are an old-style project where discussions, planning and arguments about project details are held on mailing lists (outside of GitHub)
  • we have strict ideas about how git commits should be done and what the messages should look like etc, so we cannot accept merges done with the buttons on the GitHub site

You can help

Feel free to help me polish the proposals, or even submit new ones, but I think they need to be focused and I will only accept issues there that I agree with.

Submit issues, pull-requests or bring up a discussion.

The QUIC API OpenSSL will not provide

In a world that is now gradually adopting HTTP/3 (which, as you know, is implemented over QUIC), the missing QUIC API in the popular TLS libraries remains a key problem.

There have been a number of QUIC library implementations around for a few years now, and they are slowly maturing. The QUIC protocol became RFC 9000 and friends, but the most popular TLS libraries still don't provide the necessary APIs for QUIC libraries to be able to use them.

Example that makes people want HTTP/3

Example tweet of what makes people keen on experimenting and deploying HTTP/3.

OpenSSL PR8797

For a long time, many people and projects (including yours truly) in the QUIC community were eagerly following the OpenSSL Pull Request 8797, which introduced the necessary QUIC APIs into OpenSSL. This change brought the same API to OpenSSL that BoringSSL already provides and as such the API has already been used and tested out by several independent implementations.

Implementations have a problem shipping to the world based on BoringSSL, since that is a TLS library without versions and proper releases, so it is not a good choice for the big wide world. OpenSSL is already the most widely used TLS library out there and lots of applications are already made to use it.

Delays made quictls happen

The OpenSSL PR8797 was delayed back in February 2020, when the OpenSSL management committee (OMC) decreed that they would not deal with that PR until after their pending 3.0.0 release had shipped.

“It is our expectation that once the 3.0 release is done, QUIC will become a significant focus of our effort.”

OpenSSL then proceeded and their 3.0.0 release was delayed significantly compared to their initial time schedule.

In March 2021, Microsoft and Akamai announced quictls, an OpenSSL fork with the express idea to ship OpenSSL + the QUIC API. They didn’t want to wait for OpenSSL to do it.

Several QUIC libraries can now use quictls. The fork has been kept up to date and now offers the equivalent of OpenSSL 3.0.0 plus the QUIC API, all while we have been waiting for OpenSSL to adopt that same API.

OpenSSL makes a turn instead

Then came the next blow to everyone's expectations. An autumn surprise. On October 13, the OpenSSL OMC announced:

The focus for the next releases is QUIC, with the objective of providing a fully functional QUIC implementation over a series of releases (2-3).

OpenSSL has decided to implement a complete QUIC stack on their own, and with the given time line it sounds like it will take them a few years (?) to ship. And instead of providing the API lots of implementers have been waiting for so long, they explicitly say that it is a non-goal at the start:

The MVP will not contain a library API for an HTTP/3 implementation (it is a non-goal of the initial release).

I didn't write my own QUIC implementation, but I've followed the work of several of the implementations fairly closely, and it is a fairly complicated journey they have set out on for themselves – for very unclear reasons. There already exist several high quality QUIC libraries, so why does OpenSSL think they need to make yet another one? They seemed overloaded with work already, as the long delays of the 3.0.0 release suggested, so how are they going to be able to add a completely new stack implementation on top of this? The future will tell.

PR8797 closed

On October 20, 2021, the pull request that was created in April 2019 was finally closed for real as a "won't fix".

Screenshot of the actual closing of the PR

Where are we now?

The lack of a QUIC API in OpenSSL has held us back and with this move from OpenSSL, it will continue to hold us back for an uncertain amount of time going forward.

QUIC stacks will have to stick to using or switching to other libraries.

I’m disappointed.

James Snell, one of the key contributors on the QUIC and HTTP/3 work in nodejs tweeted:

Credits

Image by Marzena P. from Pixabay

The most used software components in the world

I’ve previously said that curl is one of the most widely used software components in the world with its estimated over ten billion installations, and I’m getting questions about it every now and then.

— Is curl the most widely used software component in the world? If not, which one is?

We can't know for sure which products are on the top list of the most widely deployed software components. There's no method for us to count or estimate these numbers with a decent degree of certainty. We can only guess and make rough estimates – and it also depends on exactly what we count. Quite probably it also depends on who's doing the counting.

First, let's acknowledge that SQLite already hosts a page about the most widely deployed software modules, where they speculate on this topic (and which doesn't even mention curl). Also, does this count the number of devices running the code or the number of installs? If we count devices, do virtual machines count? Is it the number of currently used installations or the total number of installations done over the years?

Choices

The SQLite page suggests four contenders for the top-5 list and I think it is pretty good:

  • zlib (the original implementation)
  • libpng
  • libjpeg
  • sqlite

I will go out on a limb and say that the two image libraries in the list, while of course very widely used, are not typically used on devices without screens, and in the IoT world of today such devices are fairly common: light bulbs, power switches, networking gear etc. I think that implies they are slightly less used than the others in the list. Secondly, the original libjpeg seems to not actually be around anymore; a few successor implementations are used instead, i.e. not one single implementation.

All top components are Open Source (sqlite’s situation is special but they still call it open source), and I don’t think it is a coincidence.

Are there other contenders not mentioned here? I figure maybe some of the operating systems for the tiniest devices that ship in the billions could be there. But I’m not sure there’s any such obvious market dominant player. There are other compression libraries too, but I doubt they reach the levels of zlib at this moment.

Someone brings up the Linux kernel, which certainly is very well used, but all Android devices, servers, Windows 10 machines etc probably don't make the unit count go over 7 billion, and I believe that virtually all of these Linux kernel installs also run curl, zlib and sqlite…

Similarly to how SQLite forgot to mention curl, I might of course also have a blind spot for some other really well-used code block.

The finalists

We end up with three finalists:

  • zlib
  • sqlite
  • libcurl

I think it is impossible for us to rank these three in an order with any good certainty. If we look at that sqlite list of where it is used, we quickly recognize that zlib and libcurl are deployed in pretty much all of them as well. The three modules have a huge overlap and will all be installed in billions of devices, while of course there are also plenty that only install one or two of them.

I just can’t figure out the numbers that would rank these modules in the top-list.

The SQLite page says: our best guess is that SQLite is the second most widely deployed software library, after libz. They might of course be right. Or wrong. They also don't specify or explain how they arrive at that guess.

libc

Whenever I’ve mentioned widely used components in the past, someone has brought up “libc” as a contender. But since there are many different libc implementations and they are typically done for specific platforms/operating systems, I don’t think any single of the libc implementations actually reach the top-5 list.

zlib in curl/sqlite

Many people say zlib, partly because curl uses it, but then I have to add that zlib is an optional dependency for curl and I know many users, including large volume ones, that ship products with libcurl that don't use zlib at all. One very obvious and public example is the curl.exe shipped in Windows 10 – that's maybe one billion installs of curl that don't bundle zlib.

If I understand things correctly, the situation is similar in sqlite: it doesn’t always ship with a zlib dependency.

The poll

I asked my twitter followers which one of these three components they guess is the most widely used one. Very unscientifically done, and of course skewed towards libcurl (since I asked and I have a curl bias).

The over 2,000 respondents voted for libcurl by a fairly wide margin.

The datestamp shown in the image is when the poll went up and it was online for 24 hours.

What did I miss?

Did I miss a contender?

Have I overlooked some stats that make one of these win?

Updates: Since this was originally posted, I have had OpenSSL, expat and the Linux kernel proposed to me as additional finalists and possibly most-used components.

Credits

Image by PIRO4D from Pixabay

Hackad: curl use on TV

There's this new TV show on Swedish Television (SVT) called Hackad ("hacked" in English), which is about a team of white hat hackers showing the audience exactly how vulnerable lots of things, people and companies are, and how they can be hacked using various means. In the show, the hackers demonstrate how they hack into people's accounts, their homes and their devices.

Generally this is done in a rather non-techy way as they mostly describe what they do in generic terms and not very specifically or with technical details. But in some short sequences the camera glances over a screen where source code or command lines are shown.

A little Mr. Robot-like, but in reality.

Similar to the fictional Mr. Robot, a readily available tool to accomplish what you want is of course… curl. In episode 4, we can easily spot curl command lines in several different shots.

Jesper Larsson is one of the hackers in the show, and he responded to this blog post about them using curl on Twitter:

Screenshots from episode 4

Lots of curl command lines
Markdown document with embedded curl command lines
Another snap of the same document showing more curl
David’s laptop when outside the house, showing a number of curl command lines, slightly blurry.

curl installations per capita

I've joked with friends and said that we should have a competition to see who among us has the largest number of curl installations in their home. This is of course somewhat based on my claim that there are more than ten billion curl installations in the world. That's more installations than humans. How many curl installations does an average person have?

Amusingly, someone also asked me this question at a curl presentation I did recently.

I decided I would count my own installations to see what number I could possibly come up with, ignoring the discussion of whether I actually could be considered "average" in this regard or not. This counting includes a few assumptions and estimates, but this isn't a game we can play with complete knowledge. Still, no crazy estimates, just reasonable ones!

I decided to count my entire household's tally, just to avoid having to decide exactly which devices to include or not. I'm counting everything that is "used regularly" in my house (things that haven't been used within the last 12 months don't count). We're four persons in my household: me, my wife and my two teenage kids.

Okay. Let the game begin. This is the Stenberg household count of October, 2021.

Computer Operating Systems

4: I have two kids who have one computer each at home. One Windows 10 and one macOS. They also have one ChromeOS laptop each for school.

3: My wife has no less than three laptops with Windows 10 for work and for home.

3: I have three computers I use regularly. One Windows 10 laptop and two Debian Linuxes (laptop + desktop).

1: We have a Windows 10 NUC connected to the living room TV.

Subtotal: 11 full-fledged computers.

Computer applications

Tricky. On the Linux machines, the curl installation is often shared by all users, so the fact that I use multiple tools (like git) that use curl doesn't increase the installation count. Presumably the same goes for most macOS and ChromeOS apps.

On Windows however, applications that use libcurl ship their own private build (as Windows itself doesn't provide libcurl, only the curl tool) so they would count as additional installations. But I'm not sure how much curl is used in the applications my family uses on Windows. I don't think my son, for example, plays any of those games that I know use curl.

I do however have (I counted!) 8 different VMs installed in my two primary development machines, running Windows, Linux (various distros for curl testing) and FreeBSD and they all have curl installed in them. I think they should count.

Subtotal: 8 (at least)

Phone and Tablet Operating Systems

2: Android phones. curl is part of AOSP and seems to be shipped bundled by most vendor Androids as well.

1: Android tablet

2: iPhones. curl has been part of iOS since the beginning.

1: iOS tablet

Subtotal: 6

Phone and tablet apps

6 * 5: YouTube, Instagram, Spotify, Netflix and Google Photos are installed on all of the mobile devices. Lots of other apps and games also use libcurl of course. I've decided to count low.

Subtotal: 30 – 40. Yeah, the mobile apps really boost the amount.

TV, router, NAS, printer

1: an LG TV. This is tricky since I believe the TV operating system itself uses curl and I know individual apps do, and I strongly suspect they run their own builds, so more or less every additional app on the TV runs its own curl installation…

1: An ASUS wifi router I’m “fairly sure” includes curl

1: A Synology NAS I’m also fairly sure has curl

1: My printer/scanner is an HP model. I know from “sources” that pretty much every HP printer made has curl in them. I’m assuming mine does too.

Subtotal: 4 – 9

Potentials

I have half a dozen wifi-enabled powerplugs in my house but to my disappointment I’ve not found any evidence that they use curl.

I have a Peugeot e2008 (electric) car, but there are no signs of curl installed in it and my casual Google searches also failed me. This could be one of the rarer car brands/models that don’t embed curl? Oh the irony.

My Peugeot e2008

I have a Fitbit Versa 3 watch, but I don’t think it runs curl. Again, my googling doesn’t show any signs of that, and I’ve found no traces of my Ember coffee cup using curl.

My fridge, washing machine, dish washer, stove and oven are all “dumb”, not network connected and not running curl. Gee, my whole kitchen is basically curl naked.

We don’t have game consoles in the household so we’re missing out on those possible curl installations. I also don’t have any bluray players or dedicated set-top/streaming boxes. We don’t have any smart speakers, smart lightbulbs or fancy networked audio-players. We have a single TV, a single car and have stayed away from lots of other “smart home” and IoT devices that could be running lots of curl.

Subtotal: lots of future potential!

Score

11 + 8 + 6 + (30 to 40) + (4 to 9) = 59 to 74 CIPH (curl installations per household). If we go with the middle estimate, it means 66.

16.5 CIPC (curl installations per capita)

If the over 16 curl installations per person in just this household are any indication, my existing "ten billion installations" estimate may be rather on the low side… If we say 10 is a fair average count and there are 5 billion Internet-connected users, then we're at 50 billion installations.

What’s your score?