
curl 7.82.0 Impartial Content

Welcome to the 206th curl release, 59 days since we shipped curl 7.81.0. The extra three days are because I was away on the day the release would normally have been done. (I call it Impartial Content as a little play on the HTTP 206 response code message.)

Download curl 7.82.0 from curl.se as always.

Release presentation

Numbers

the 206th release
2 changes
59 days (total: 8,751)
173 bug-fixes (total: 7,691)
266 commits (total: 28,321)
0 new public libcurl function (total: 86)
0 new curl_easy_setopt() option (total: 295)
1 new curl command line option (total: 245)
67 contributors, 39 new (total: 2,597)
43 authors, 24 new (total: 1,014)
0 security fixes (total: 111)
0 USD paid in Bug Bounties (total: 16,900 USD)

Changes

There are only two changes this time around:

The JSON option

With the new --json command line option, curl suddenly makes it much more convenient to send JSON from command lines and shell scripts.

MesaLink: removed support

curl supports a crazy number of different TLS libraries, but that number has now decreased by one (to 13) as we officially drop support for MesaLink. The library is no longer developed, so we don’t want to encourage future users to go down that road.

Bug-fixes

Here are some of my favorite bug-fixes from this release cycle.

bearssl

We landed three notable fixes for the bearssl backend that should make its users happy: for cert expiration, incomplete CA certs and session resumption.

strlen call removals

After I posted a library-call count to the mailing list that showed quite a large number of calls to strlen(), a cleanup race started that subsequently reduced the number of calls by over 60% in some use cases! They were primarily replaced by compile-time constants, as sketched below.
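
To illustrate the general technique (a minimal sketch, not curl’s actual code; the LITLEN macro name is made up for this example): string literals get passed along with a length the compiler computes, so no run-time strlen() call is needed.

/* sketch: pass string literals with a compile-time computed length */
#include <stdio.h>
#include <string.h>

#define LITLEN(x) x, (sizeof(x) - 1) /* literal plus its length, minus the zero terminator */

static void send_header(const char *name, size_t namelen)
{
  printf("header: %.*s (%zu bytes)\n", (int)namelen, name, namelen);
}

int main(void)
{
  send_header(LITLEN("Content-Type: application/json")); /* no strlen() at run-time */
  send_header("X-Custom: yes", strlen("X-Custom: yes")); /* the old style */
  return 0;
}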

configure requires --with-nss-deprecated

To build curl with the NSS backend using configure, you now need to confirm this choice by also passing on --with-nss-deprecated to make it clear to users what the future looks like for our NSS support.

erase some more sensitive command line arguments

After a lengthy discussion we now hide even more command line arguments from appearing in ps output (on systems that support it). Since the hiding is done by curl itself, there is still a short moment during which they are visible, and since we cannot hide everything there remains a risk that some argument leaks information unwillingly. That is the nature of command line arguments. Use the config file concept or stdin etc. to work around that.

NPN is deprecated

The TLS extension NPN is now marked “deprecated” and is scheduled for removal in six months unless someone yells very loudly and explains why not. This extension was once used to negotiate SPDY and early HTTP/2 but has no purpose these days. The browsers removed support for it several years ago.

allow CURLOPT_HTTPHEADER change “:scheme”

The only pseudo header for HTTP/2 and HTTP/3 that couldn’t be modified by a user can now be changed at will.

remove support for TPF, Netware, vxWorks

Support was dropped for these platforms, for which the code hasn’t been modified in the last decade or so and which are therefore highly unlikely to still work. After this, I had it confirmed that you can still build curl for vxWorks using the “regular” build!

remove support for CURL_DOES_CONVERSIONS

After support for TPF was dropped, we took the next step and removed support for the charset conversion functions necessary to run curl on non-ASCII platforms, such as EBCDIC-using ones. As TPF was the only/last such platform we supported, this cleanup improved lots of code paths.

allow user callbacks to call curl_multi_assign

A regression in 7.81.0 made curl_multi_assign() return error if used from a callback.
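
As a reminder of where this matters, here is a minimal sketch (not curl project code, and the per-socket state is made up) of a socket callback that calls curl_multi_assign(), which is the pattern that failed in 7.81.0:

/* minimal sketch: associate per-socket state from inside a socket callback */
#include <curl/curl.h>
#include <stdlib.h>

static int socket_cb(CURL *easy, curl_socket_t s, int what,
                     void *clientp, void *socketp)
{
  CURLM *multi = clientp;
  (void)easy;
  if(!socketp && what != CURL_POLL_REMOVE) {
    int *state = malloc(sizeof(int)); /* hypothetical per-socket state */
    /* in 7.81.0 this call wrongly returned an error when made from here */
    curl_multi_assign(multi, s, state);
  }
  else if(what == CURL_POLL_REMOVE)
    free(socketp);
  return 0;
}

int main(void)
{
  CURLM *multi = curl_multi_init();
  curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
  curl_multi_setopt(multi, CURLMOPT_SOCKETDATA, multi);
  /* ... add easy handles and drive transfers with curl_multi_socket_action() ... */
  curl_multi_cleanup(multi);
  return 0;
}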

http3: quiche and ngtcp2 fixes

We landed several fixes in both HTTP/3 backends, improving the situation for everyone who plays HTTP/3 with curl.

reduce memory use when FTP is disabled

After several cleanups the total memory footprint for builds with FTP and/or proxy support disabled has been reduced.

check for ~/.config/curlrc too

curl now also checks for its default “config file” in the path mentioned above.

DNS options that need c-ares now fail without it

The command line tool offers a set of options to control DNS specific details. Since those options only work if libcurl was built to use c-ares, and not at all if it was built to use another resolver backend, curl now correctly returns an error when one of those options is used and libcurl cannot execute it.

keep trailing dot in host name

If there is a trailing dot after the host name in the URL, that dot is now kept in the name when used everywhere internally – except for the SNI field in TLS.

wolfssl: when SSL_read() returns zero, check the error

Even if obviously very rare, curl could wrongly return an “end of transfer” prematurely before this fix.

Next

The next release is scheduled to ship on April 27, 2022.

curl on “software at scale”

I was a guest at the software at scale podcast a while ago and the recording went up recently. We talked about a lot of things curl, including:

  • The complexity behind HTTP. What goes on behind the scenes when I make a web request?
  • The organizational work behind internet-wide RFCs, like HTTP/3.
  • Rust in curl. The developer experience, and the overall experience of integrating Hyper.
  • WebSockets support in curl
  • Fostering an open-source community.
  • People around the world think Daniel has hacked their system, because of the curl license often included in malicious tools.
  • Does curl have a next big thing?

Listen to it here.

Also: links to all my guest appearances on podcasts and shows.

The great curl roadmap 2022 webinar

At the beginning of every year I jot down a couple of “larger things” I would like to work on during the year. Some of the ideas are more in the maybe category, meant to be tested on the audience and users to see what others think and want; others are features I believe are ripe and ready for addition.

I talk about what I personally plan and consider working on, as I cannot control or decide what volunteers and random contributors will do this year, but of course with the hope that feedback from users and customers will guide me.

If you have use cases, applications or devices with needs or interests in particular topics going forward: let me know! At this webinar or outside of it.

The curl roadmap 2022 webinar will happen on February 17th at 09:00 PST (17:00 UTC, 18:00 CET). Register to attend on this link:

Register here

The webinar will be recorded and made available after the event as well.

Credits

Image by Narcis Ciocan from Pixabay

My work on tool vs library

I’m the lead developer in the curl project. We make the command line tool curl and the library libcurl, for doing Internet transfers. The command line tool uses the library for all the internet transfer heavy lifting.

The command line tool is somewhat of a shell binding to access libcurl.

We make these things. We recently surpassed 1,000 authors. I lead the project and I have done the most commits per month in curl for the last 79 months, and in fact in 222 of the 267 months we have stats for.

The tool came first

curl was born in 1998 as a command line tool only. Two years in, we (well, mostly me actually) remodeled some internals and shipped the first libcurl to the world in August 2000. The idea was (and still is) to provide the same Internet transfer powers the tool has to any application or device out there that needs it.

Code sizes

The library side of things is currently about 85% of the total product code: a little over 120,000 lines, including comments.

Pie chart showing code distribution between tool and library

Users at scale

Many users think of the curl project as equivalent to the curl tool and the command line tool certainly has a lot of users. It is available for and on all the popular platforms. It is impossible to count curl command users, but millions should not be an exaggeration.

While it seems likely that more users are using the tool than are writing applications that use libcurl, each product, service or device that uses libcurl can itself scale up the volumes. A single libcurl user can use libcurl in an application used by billions. A few hundred or thousand libcurl users populate the world with things transferring data with our library. (A noticeable share of current Internet traffic is likely driven by libcurl.)

The net result is therefore that libcurl runs in several thousand times more installations than the tool.

If we visualize the number of curl users as a yellow ball and the libcurl installations as a green ball, putting them next to each other would look something like this:

Planet curl is barely visible next to planet libcurl. Install and user numbers are estimated.

(Image math: 3 million curl users, 10 billion libcurl installations. Yellow sphere radius is 90 vs the green’s 1340)

Complexity

Making a command line tool is much easier than doing a library. The command line tool has just one entry point and a limited interface: a command line, plus associated files and pipes to read from. In our case, the tool lets libcurl deal with a lot of platform specifics, which makes the command line code generic and mostly identical on all platforms it runs on.

A library has many more entry points, each of which needs to be written to handle whatever users might pass in. libcurl has code to work with millions of build combinations and (right now) up to 35 different external libraries (13 different TLS libraries, 3 different SSH libraries, 2 different QUIC/HTTP/3 libraries, 3 different compression libraries etc.).

libcurl is used in many more challenging environments: niche operating systems, setups scaled up to thousands of concurrent transfers, systems that never exit, and builds with a creative set of features disabled at build-time.

Compared to the curl tool, libcurl is way more advanced. A bug in the command line tool is often much easier to fix than one in the library. Such bugs are also rarer, since the tool is much simpler and a smaller amount of code.

Where it matters

Because of what I have outlined above, I focus my curl work on library related changes, both when it comes to bug-fixes and when adding features. It scales better to the world, and as one of the designers and architects of most solutions used in libcurl, in many cases I am a suitable engineer to work on the more complicated problems.

It is smarter for the entire project for me to leave the slightly less complicated problems and more easily understood features to be fixed and added by others. It scales better. After all, I have a finite number of hours per week to spend on curl. I want to make the best use of my time as I can.

Therefore, I tend to leave tool-related things for others to work on.

Where it pays

I work on curl for a living. The companies that pay for support have higher priority than almost any other bug reports, and they tend to be libcurl related. Keeping my paying customers happy is crucial to me. And in a funny way also to others, since that work usually ends up benefiting all curl users.

Where it’s fun

As I work on curl full-time during my workdays and also during a good chunk of my spare time, I need to “lighten up” my work at times and get some variation. Sometimes I go about and find new little things to work on in the project that maybe aren’t top priority by any means, but that could use some polish and are different enough from what I spend the rest of my week on. To give me variation. To keep the fun.

Also, sometimes scratching the surface on a new somewhat forgotten place brings up more important stuff.

Theory meets practice

Years ago I even, for a while, considered handing over maintenance of the curl tool to someone else and saying more distinctly that I would only work on the library, as they are separate entities and could possibly benefit from being worked on independently of each other.

My idea of focusing my work on the more complicated issues – working on design and architecture and helping newcomers find their way into the code – doesn’t always work out.

I never gave up maintenance of the tool, and a lot of things that someone else could implement or fix in the project aren’t, so I eventually get to work on those too anyway. For the good of the project. Also, it makes my work day more varied and fun if I take occasional strolls around the project every now and then and put some new paint on areas I find could use it.

Summary

Most of my work efforts go into libcurl matters. But I work on the tool too.

curl dash-dash-json

The curl “cockpit” is yet again extended with a new command line option: --json. The 245th command line option.

curl is a generic transfer tool for sending and receiving data across networks and it is completely agnostic as to what it transfers or even why.

To allow users to craft all sorts of transfers, or requests if you will, it offers a wide range of command line options. This flexibility has made it possible for a large number of users to keep using curl even as network ecosystems and their (HTTP) use have changed over time.

Craft content to send

curl offers a few convenience options for creating content to send in HTTP “uploads”, such as -F for building multi-part formposts and --data-urlencode for URL-encoding POST data.

When curl was born, JSON didn’t exist. Over the decades since, JSON has grown to become a very commonly used format for structured data, often used in “REST API” calls and more. curl command lines sending and receiving JSON are very common now.

When asking users today what features they would like to see added in curl, and when asking them what good features they see in curl alternatives, a huge chunk of them mention JSON support in one way or another.

This is partly because dealing with JSON with curl sometimes makes the involved command lines rather long and quirky.

JSON in curl

A discussion has been ignited in the curl community about what, if anything, we should do in curl to make it a smoother and better tool when working with JSON. The offered opinions range from nothing (“curl is content agnostic”) to a full-fledged JSON generator, parser and pretty-printer (or some combination in between). I don’t think we are ready to put our collective foot down on where we should go with this.

Introducing --json

While the discussion is ongoing, we still bring a first JSON oriented feature to the tool: the --json option. This new option basically works as an alias, or shortcut, for sending JSON to an endpoint. You use it like this:

curl --json '{"tool": "curl"}' https://example.com/

Send JSON from a file:

curl --json @json.txt https://example.com/

Or send JSON passed to curl on stdin:

echo '{"a":"b"}' | curl --json @- https://example.com/

This change does not affect the libcurl library at all; it is only done on the tool side of things.

Details

The --json option is the equivalent of setting

--data [arg]
--header "Content-Type: application/json"
--header "Accept: application/json"

If you want another Content-Type or Accept header, then you can override them as usual with -H. If you use multiple --json options on the same command line, the contents from them will be concatenated before being sent off.

Ships!

This new json option was merged in this commit and will be available and present in curl 7.82.0, to be released in early March 2022. Of course you can build it from git or a daily snapshot already now to test it out!

Future JSON stuff

I want to at least experiment a little with some level of JSON creation for curl, to help users who want to create basic JSON on the command line to send off. Lots of users run into problems with this, since JSON uses double quotes a lot, and the combination of quoting and unquoting and the shell often makes users confused or even frustrated.

Exactly how or to what level is still to be determined. Your opinions and feedback are very valuable and helpful.

jo+curl+jq

This trinity of commands helps you send correctly formatted data and show the returned JSON easily. Like in a command line similar to:

jo name=daniel tool=curl | curl --json @- https://httpbin.org/post | jq

jo, jq

curl with rust

I did an online presentation with this name for the Rust Linz meetup, on January 27 2022. This is the recording:

The individual slides are also available.

Content quickly

In this presentation I talk about how libcurl’s most important aspect is the stable ABI and API. If we just maintain those, we can change the internals however we like.

libcurl has a system with build-time selected components, called backends. They are usually powered by third party libraries. With the recently added HTTP backend there are now seven “flavors” of backends.

A backend can be provided by a library written in rust, as long as that rust component provides a C interface that libcurl can use. When it does, it being rust or not is completely transparent and a non-issue for libcurl.

curl currently supports components written in rust for three different backends:

  • hyper for HTTP
  • rustls for TLS
  • quiche for QUIC and HTTP/3

None of these backends are yet “feature complete”, but we are moving slowly towards that. Your help is appreciated!

1,000 commit authors

In the curl project we switched to using git for source code control in March 2010. Since then we have exact tracking of every commit author to the project. In the times before git, we used CVS which doesn’t properly separate committers from authors.

The number of commit authors in curl has gradually been increasing over time.

1998-03-20: 1 author recorded in the premiere curl release. It actually took until May 30th 2001 for the second recorded committer (partly of course because we used CVS). 1167 days to bump up the number one notch.

2014-05-06: 250 authors took 5891 days to reach.

2017-08-18: 500 authors required an additional 1200 days

2019-12-20: the 750 author mark was reached after only 854 more days

2022-01-30: reaching 1000 authors took an additional 772 days.

Jan-Piet Mens became the thousandth author by providing pull-request #8354.

These 1,000 authors have done in total 28,163 commits. This equals 28.2 commits on average while the median commit author only did a single one – more than 60% of the authors are one-time committers after all! Only fourteen persons in the project have authored more than one hundred commits.

It took us 8,717 days to reach 1,000 authors, making it one new committer in the project per 8.7 days on average, over the project’s life time so far. The latest 250 authors have “arrived” at a rate of one new author per 3.1 days.

In 2021 we had more commit authors than ever before and also more first-timers than ever. It will certainly be a challenge to reach that same level or maybe even break the record again in 2022.

It took almost 24 years to reach 1,000 authors. How long until we double this?

Visualized on video

To celebrate this occasion, I did a fresh gource video of curl development so far; for the duration of the first 1,000 committers. I spiced it up by adding 249 annotations of source related facts laid out in time.

Credits: the music is from bensound.com.

GitHub can’t count

If you look at the curl/curl repository on GitHub, it appears to be short of a few hundred committers. It says 792 right now.

This is because GitHub can’t count commit authors. The theory is that they somehow ignore committers that do not have GitHub accounts. My author count is instead based on plain git access and standard git commands run against a plain git clone:

$ git shortlog -s | wc -l
1000

Don’t mix URL parsers

I have had my share of adventures with URL parsers and their differences in the past. The current state of my research on the topic of (failed) URL interoperability remains available in this GitHub document.

Use one and only one

There is still no common or standard URL syntax format in sight. A string that you think looks like a URL passed to one URL parser might be considered fine, but passed to a second parser it might be rejected or get interpreted differently. I believe the state of URLs in the wild has never before been this poor.

The problem

If you parse a URL with parser A and make conclusions about the URL based on that, and then pass the exact same URL to parser B and it draws different conclusions and properties from that, it opens up not only for strange behaviors but in some cases for downright security vulnerabilities.

This is easily done when you, for example, use two different libraries, frameworks or components that need to work on that URL, but the repercussions are not always easy to see at once.

A well-known presentation on this topic from 2017 is Orange Tsai’s A New Era Of SSRF – Exploiting Url Parsers.

URL Parsing Confusion

The report EXPLOITING URL PARSERS: THE GOOD, BAD, AND INCONSISTENT (by Noam Moshe, Sharon Brizinov, Raul Onitza-Klugman and Kirill Efimov) was published today, and I have had the privilege of reading it and working with the authors a little prior to its release.

As you see in the report, it shows that problems very similar to those Mr Tsai reported and exploited back in 2017 are still present today, although perhaps in slightly different ways.

As the report shows, the problem is not only that there are different URL standards and that every implementation provides a parser that is somewhere in between both specs, but on top of that, several implementations often do not even follow the existing conflicting specifications!

The report authors also found and reported a bug in curl’s URL parser (involving percent encoded octets in host names) which I’ve subsequently fixed so if you use the latest curl that one isn’t present anymore.

curl’s URL API

In the curl project we attempt to help applications and authors reduce the number of URL parsers needed in any given situation – to a large part as a reaction to the Tsai presentation from 2017 – with the URL API we introduced for libcurl in 2018.

Thanks to this URL parser API, if you are already using libcurl for transfers, it is easy to also parse and treat URLs elsewhere exactly the same way libcurl does. By sticking to the same parser, there is a significantly smaller risk that repeated parsing brings surprises.
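
As a taste of the API (this uses libcurl’s documented URL functions; the URL itself is just an example), parsing a URL and extracting its host name can look like this:

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURLU *h = curl_url(); /* create a URL handle */
  char *host = NULL;
  /* parse a full URL into the handle */
  CURLUcode rc = curl_url_set(h, CURLUPART_URL,
                              "https://example.com:8080/path?q=1", 0);
  if(!rc && !curl_url_get(h, CURLUPART_HOST, &host, 0)) {
    printf("host: %s\n", host); /* prints "example.com" */
    curl_free(host);
  }
  curl_url_cleanup(h);
  return 0;
}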

Other work-arounds

If your application uses different languages or frameworks, another work-around to lower the risk that URL parsing differences will hurt you, is to use a single parser to extract the URL components you need in one place and then work on the individual components from that point on. Instead of passing around the full URL to get parsed multiple times, you can pass around the already separated URL parts.

Future

I am not aware of any present ongoing work on consolidating the URL specifications. I am not even aware of anyone particularly interested in working on it. It is an infected area, and I will get my share of blow-back again now by writing my own view of the state.

The WHATWG would probably say they would like to be the steward of this, and they are generally keen on working with URLs from a browser standpoint. That limits them to a small number of protocol schemes, and from my experience, getting them interested in changing something for the sake of aligning with RFC 3986 parsers is hard. This is however the team that, more than any other, has moved furthest away from the standard we once had established. There are also strong anti-IETF sentiments oozing there. The WHATWG spec is a “living specification”, which means it continues to change and drift away over time.

The IETF published RFC 3986 back in 2005, saw RFC 3987 pretty much fail and then more or less gave up on URLs. I know there are people and working groups there who would like to see URLs brought back onto the agenda (I’ve talked to a few of them over the years), and many IETFers think that the IETF is the only group that can do it properly, but due to the unavoidable politics and the almost certain collision course against (and cooperation problems with) the WHATWG, it is considered a very hot potato that barely anyone wants to hold. There are also strong anti-WHATWG feelings in some areas of the IETF. There is just too small a chance of a successful outcome from something that most likely will take a lot of effort, will, thick skin and backing from several very big companies.

We are stuck here. I foresee yet another report to be written a few years down the line that shows more and new URL problems.

My URL isn’t your URL.

curl 7.81.0 – more percent

It has been eight weeks since 7.80.0.

Release presentation

Numbers

the 205th release
1 change
56 days (total: 8,636)
121 bug-fixes (total: 7,518)
189 commits (total: 28,055)
0 new public libcurl function (total: 86)
1 new curl_easy_setopt() option (total: 295)
1 new curl command line option (total: 244)
53 contributors, 25 new (total: 2,558)
32 authors, 14 new (total: 990)
0 security fixes (total: 111)
0 USD paid in Bug Bounties (total: 16,900 USD)

Security

Today we celebrate our fourth consecutive release without any new vulnerability to fix and reveal.

Change

This release comes with just one change to note, but one that brings both a new libcurl setopt (CURLOPT_MIME_OPTIONS) and a new command line option (--form-escape). Starting now, libcurl defaults to percent encoding certain fields when doing multi-part HTTP formposts.

Bug-fixes

As usual, here’s a set of selected favorite bug-fixes of mine from this cycle:

require “see also” for every documented option in curl.1

When the curl command man page is generated at build time, the script now makes sure that there is a “see also” for each option. This will help users find related info. More mandatory information for each option makes us write better documentation that ultimately helps users.

lazy-alloc the table in Curl_hash_add()

The internal hash functions moved the allocation of the actual hash table from the init() function to when the first add() is called to put something in the table. This delay simplified code (the init function became infallible) and even avoids a few allocs in many cases.
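
The idea in a minimal sketch (illustrative code, not curl’s actual implementation):

#include <stdlib.h>

struct entry { struct entry *next; };

struct hash {
  struct entry **table; /* stays NULL until the first add */
  size_t slots;
};

static void hash_init(struct hash *h, size_t slots)
{
  h->table = NULL; /* no allocation here, so init cannot fail */
  h->slots = slots;
}

static int hash_add(struct hash *h, struct entry *e, size_t index)
{
  if(!h->table) { /* first add: allocate the table now */
    h->table = calloc(h->slots, sizeof(struct entry *));
    if(!h->table)
      return 1; /* only add() can report out of memory */
  }
  e->next = h->table[index % h->slots];
  h->table[index % h->slots] = e;
  return 0;
}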

enable haproxy support for hyper backend

Plus a range of code and test cases were adjusted to make curl built with hyper run better. There are now fewer than 30 test cases still disabled for hyper. We are closing in!

mbedTLS: add support for CURLOPT_CAINFO_BLOB

Users of this backend can now also use this feature that allows applications to provide a CA cert store in-memory instead of using an external file.
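
Using it could look like the sketch below, where ca_pem is assumed to hold PEM-formatted CA data the application obtained elsewhere:

#include <curl/curl.h>

/* sketch: hand libcurl an in-memory CA store instead of a file path */
static CURLcode set_ca(CURL *curl, const char *ca_pem, size_t ca_pem_len)
{
  struct curl_blob blob;
  blob.data = (void *)ca_pem;
  blob.len = ca_pem_len;
  blob.flags = CURL_BLOB_COPY; /* ask libcurl to keep its own copy */
  return curl_easy_setopt(curl, CURLOPT_CAINFO_BLOB, &blob);
}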

multi: handle errors returned from socket/timer callbacks

It was found out that the two multi interface callbacks didn’t at all treat returned errors the way they were documented to. They now do, and the documentation was also expanded to clarify.

nss: set_cipher: don’t clobber the cipher list

Applications that use libcurl built with NSS found out that if they selected a cipher, they would also effectively prevent connections from being reused, due to this bug.

openldap: implement STARTTLS

curl can now switch LDAP transfers into LDAPS using the STARTTLS command much like how it already works for the email protocols. This ability is so far limited to LDAP powered by OpenLDAP.
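
If it indeed follows the email-protocol pattern the post refers to, requesting the TLS upgrade would be done with CURLOPT_USE_SSL; a sketch (the LDAP URL is made up):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL,
                     "ldap://ldap.example.com/dc=example,dc=com");
    /* request an upgrade to TLS via STARTTLS, like for the email protocols */
    curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}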

openssl: define HAVE_OPENSSL_VERSION for OpenSSL 1.1.0+

This little mistake made libcurl use the wrong method to extract and show the OpenSSL version at run-time. Most notably, it would make libcurl say the wrong version for OpenSSL 3.0.1, which would instead show up as the non-existent version 3.0.0a.

sha256/md5: return errors when init fails

A few internal functions would simply ignore errors from these hashing functions instead of properly passing them back to the caller, making them generate the wrong hash instead of correctly returning an error.

curl: updated search for a file in the homedir

The curl tool now searches for personal config files in a slightly improved manner, to among other things make it find the same .known_hosts file on Windows as the Microsoft provided ssh client does.

url: check ssl_config when re-use proxy connection

A bug in the logic that checks whether connections in the connection pool are suitable for reuse caused flaws when doing subsequent HTTPS transfers to servers over the same HTTPS proxy.

ngtcp2: verify server certificate

When doing HTTP/3 transfers, libcurl is now doing proper server certificate verification for the QUIC connection – when the ngtcp2 backend is used. The quiche backend is still not doing this, but really should.

urlapi: accept port number zero

Years ago I wrote a blog post about using port zero in URLs to do transfers. Then it turned out port zero did not work like that with curl anymore so work was done and now order is restored again and port number zero is once again fine to use for curl.

urlapi: provide more detailed return codes

There is a whole range of new error codes introduced that help better identify and pinpoint what the problem is when a URL, or a part of a URL, cannot be parsed or will not be accepted. Instead of the generic “failed to parse URL”, curl can now better tell the user what part of the URL was found to be bad.
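
For example (a small sketch using the URL API; the out-of-range port is deliberate), an application can now print a more precise message:

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURLU *h = curl_url();
  /* port 99999 is out of range, so parsing should fail with a specific code */
  CURLUcode rc = curl_url_set(h, CURLUPART_URL,
                              "https://example.com:99999/", 0);
  if(rc)
    fprintf(stderr, "URL error: %s\n", curl_url_strerror(rc));
  curl_url_cleanup(h);
  return 0;
}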

socks5h: use appropriate ATYP for numerical IP address

curl supports using SOCKS5 proxies and asking the proxy to resolve the host name, which is what we call socks5h. When using this protocol with a numerical IP address in the URL, curl would use the SOCKS protocol slightly wrong and pass on the wrong “ATYP” parameter, which a strict proxy might reject. Fixed now.
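
For reference, the address types come from RFC 1928, and the fix boils down to picking the matching ATYP for literal IP addresses. A sketch (the looks_like_* helpers are hypothetical):

/* hypothetical helpers that detect numerical IP address literals */
int looks_like_ipv4(const char *host);
int looks_like_ipv6(const char *host);

static unsigned char socks5_atyp(const char *host)
{
  if(looks_like_ipv4(host))
    return 0x01; /* ATYP: IPv4 address */
  if(looks_like_ipv6(host))
    return 0x04; /* ATYP: IPv6 address */
  return 0x03;   /* ATYP: domain name, resolved by the proxy */
}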

Coming up?

The curl factory never stops. There are many pull-requests already filed and in the pipeline of possibly getting merged. There will also, without any doubt, be more coming up that none of us have yet thought about or considered. Existing pending topics might include:

  • the ManageSieve protocol
  • --no-clobber
  • CURLMOPT_STREAM_WINDOW_SIZE
  • Remove MesaLink support
  • HAproxy protocol v2
  • WebSockets
  • Export/import SSL session-IDs
  • HTTP/3 fixes
  • more hyper improvements
  • CURLFOLLOW_NO_CUSTOMMETHOD

Next release

March 2, 2022 is the scheduled date for what will most probably become curl 7.82.0.

The curl year 2021

Every year is a curl year.

I’m saving my bigger summary for curl’s 24th birthday in March, but when reaching the end of a calendar year it feels natural and even fun to look back and highlight some of the things we accomplished and what happened during this particular lap around the sun. I decided to pick five areas to highlight.

This has been another great curl year and it has been my pleasure to serve this project working full time with it, and I intend to keep doing it next year as well.

Activities, contribution and usage have all grown. I don’t think there has ever before been a more curl year than 2021.

Contributions

In 2021, the curl project beat all previous records in terms of contribution. More than 180 individuals authored commits to the source code repository, of whom more than 130 were first-time committers. Both numbers are larger than ever before.

The number of authors per month was also higher than ever before and we end the year with a monthly average of 25 authors.

The number of committers who authored ten or more commits within a single year lands on 15 this year. A new record, up from the previous 13 in 2014 and 2017.

We end this year with the amazing number of more than 2,550 persons listed as contributors. We are also very close to reaching 1,000 committers. We are just a dozen authors away. Learn how to help us!

I personally have done about 60% of all commits to curl in 2021 and I was awarded a GitHub star earlier this year. I was a guest on eight podcast episodes this year, talking curl at least partly in all of them.

New backends

This year we introduced support for two new backends in curl: hyper and rustls. I suppose it is a sign of the times that both of them are written in Rust and could be a stepping stone into a future with more curl components written in memory safe languages.

We actually got an increase in the number of CVEs reported in 2021, 13 separate ones, after previously having had a decreasing trend for a few years. To remind us that security is still crucial!

Technically we merged the first hyper code already in late 2020, but we have worked on it through 2021 and it now works almost on par with the native code.

Neither of these two new backends is widely used or exercised in curl yet, but we are moving in that direction. Slowly but surely.

Also backend related: during 2021 we removed the default TLS library choice when building curl and instead push that decision onto the person building curl. The build refuses to complete unless a choice is made.

The backends in curl provide build-time “pluggable” functionality.

Everything curl

In September 2015 I started to write Everything curl. The book to cover all there is to know and learn about curl. The project, the command line tool and the library.

When I started out, I wrote a lot of titles and sub-titles that I figured should be covered and detailed. For those where I didn’t yet have any text written, I just wrote TBD. Over time I thought of more titles, so I added more TBDs all over – and I created myself a script that would list which files had the most TBDs outstanding. I added more and more text and explanations over time, but the more content I added, the more things I thought of that were still missing.

It took until December 15, 2021 to erase the final TBD occurrence! Six years and three months.

Presently, everything curl consists of more than 81,000 words in 12,000 lines of text. Done using more than 1,000 commits.

There are, and probably always will be, details missing and text that can be improved and clarified, but all the sections I once decided should be there are now at least present and covered! I trust that users will tell us what we miss, and as we continue to grow and develop curl there will of course pop up new things to add to the book.

Death threat

In February 2021 I received a death threat by email. It is curl related: I was targeted entirely because my name is in the curl copyright statement and license, and that is (likely) how the person found and contacted me. Months later, the person who sent me the threat apologized for his behavior.

It was something of a brutal awakening for me. It reminded me, with far more clarity than I needed, that everything isn’t always just fun and games when people find my email address in their systems.

I filed a police report. I had a long talk with my wife. It shook my world there for a moment and it hinted at the abyss of darkness that lurks out there. I cannot say that it particularly changed my life or how I go about curl development since then, but I think maybe it took away some of the rosy innocence from the weird emails I get.

Mars

Not only did we finally get confirmation this year that curl is used in space – we learned that curl was used in the Mars 2020 Helicopter Mission! Quite possibly one of the coolest feats an open source project can pride itself with.

GitHub worked with NASA and gave all contributors to participating projects who have a GitHub account a little badge on their profile. Shown here on the right. I think this fact alone might have helped attract more contributors this year. Getting your code into curl gets your contributions to places few other projects go.

There’s no info anywhere as to what function and purpose curl had exactly in the project and we probably will never know, but I think we can live with that. Now we are aiming for more planets.