Funding Dan to improve curl tests

A few weeks ago I mentioned how we fund Stefan’s work on improving HTTP(/3) in curl. Now, in a similar spirit, we are funding Dan Fandrich to work on further improving the test infrastructure. Dan has worked fiercely on introducing parallel tests over the past year or so, and this work builds on that and continues down the same road.

This funding is paid for by sponsors and donors, via Open Collective and GitHub sponsors. Thank you all!

Test Analysis System

curl contains a regression test suite of over 1900 individual test cases that are run automatically on every commit submission and on every pull request in almost 130 different environments, meaning that every change can result in more than 140,000 tests being run. A spurious test failure rate of a mere 0.001% is likely to cause a perfectly good PR to end up showing a red failure. A new contributor who doesn’t understand this problem can spend hours poring over his or her patch and the related code in curl, searching for a problem that isn’t there.

Analyzing 140,000 tests for each change to the curl source code to find failure trends (such as flaky tests) demands an automated solution. Dan has created a system (working name Test Clutch) that has been successfully ingesting curl CI test results for much of the past year and has been used by him to find flaky tests as well as permanently failing tests (often submitted under the mistaken impression that the failed test was merely flaky). It collects individual test results from all the CI systems used in the curl project into a database where they can be analyzed.

This system has the potential to be useful to a broader base of curl developers to help see test trends, test platform coverage and to better determine which tests are flaky and could use improvement. It has been written in such a fashion that test results for other projects besides curl can also be added and analyzed separately.

Work Projects

Make Test Clutch available

The current test ingestion and analysis system will be productionized and the analysis summary table will be integrated into the curl web site for easy access for developers.

Assist in PR work

This task will involve writing code to trigger the test analysis system to retrieve detailed PR test results when available. It must make a reasonable determination of when all the expected tests have completed (since not all tests run for every PR) and then comment on the PR with a summary of the test results and the believability of any test failures.

When

These are projects that will benefit the curl project when implemented, but they are not time sensitive and Dan is not going to work full time on them. There are no exact end dates set for them.

The result of Dan’s work will become visible in PRs and website updates as we go forward.

Five year full time curl anniversary

Five years ago now, on February 2nd 2019, I started working for wolfSSL doing curl full time. I have now worked longer for wolfSSL than I previously did for Mozilla.

I have said it before and I will say it again: working full time on curl is my definition of living the dream.

Joining wolfSSL was not just me changing employer; it changed everything for me. First, I am not just a regular employee: I am the lead curl developer, and the curl support we offer for commercial customers is unparalleled. No other business or individuals can offer the same level of support, knowledge, experience, insights and ability to merge fixes and changes back into curl mainline.

At wolfSSL we offer commercial services around and support for curl and libcurl: contract development of new features, debugging, fixing problems and just about every other aspect of helping users get better use of (lib)curl in their products and services.

I think this change has been good for curl and the curl project as well. The last five years have seen more and faster development than any previous five year period. I have been able to work intensely and a lot on curl, fixing bugs and adding features for customers, but even more on the general improvement of things for everyone, which the money from support customers makes possible.

curl 8.6.0

Numbers

the 254th release
7 changes
56 days (total: 9,448)

154 bug-fixes (total: 9,888)
257 commits (total: 31,684)
0 new public libcurl function (total: 93)
1 new curl_easy_setopt() option (total: 304)

0 new curl command line option (total: 258)
65 contributors, 40 new (total: 3,078)
36 authors, 18 new (total: 1,237)
1 security fix (total: 151)

Release presentation

Security

CVE-2024-0853: OCSP verification bypass with TLS session reuse. curl inadvertently kept the SSL session ID for connections in its cache even when the verify status (OCSP stapling) test failed. A subsequent transfer to the same hostname could then succeed if the session ID cache was still fresh, which then skipped the verify status check.
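
The verify status test mentioned above is the check an application opts into with CURLOPT_SSL_VERIFYSTATUS (--cert-status for the tool). Below is a minimal sketch of such an application; the URL is just a placeholder and the option requires a TLS backend with OCSP stapling support:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        CURLcode res;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* ask libcurl to verify the stapled OCSP response; this is the
           "verify status" check that the flaw could skip when a cached
           SSL session ID was reused */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYSTATUS, 1L);
        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
          fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
      }
      return 0;
    }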

Changes

  • Markdown documentation. Most of the libcurl and command line documentation is now written using (basic) markdown instead of previous formats. Easier to read, easier to write.
  • CURLE_TOO_LARGE. A new libcurl error code for when “something” is growing too big to be allowed. Like a URL, an HTTP request or similar. Previously it would return out of memory for those situations, which caused confusion for users.
  • CURLINFO_QUEUE_TIME_T. Applications can now ask libcurl how long a transfer was “queued” internally before it actually started (a rough usage sketch follows after this list).
  • CURLOPT_SERVER_RESPONSE_TIMEOUT_MS. A new millisecond version of the already existing option to allow applications higher resolution control.
  • Use GetAddrInfoExW on >= Windows 8. On current Windows versions libcurl will now do asynchronous name resolving by default without using threads, which should be less resource heavy.
  • libpsl detection failure in configure causes error. If configure cannot find libpsl, it will require the user to either explicitly say that it should not be used, or to fix the problem. This is to make people who build curl more aware of the PSL state of the build.
  • runtests supports -gl. When you invoke individual test cases on macOS, you can now ask to run them with lldb using -gl, just as you have been able to run them with gdb using -g for decades. Helps debugging difficult cases.
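
Here is a rough sketch of how an application might use two of these: set the new millisecond response timeout, check for the new CURLE_TOO_LARGE error code, and read back the queue time after a transfer. The URL and timeout value are just placeholders:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        CURLcode res;
        curl_off_t queued_us = 0;

        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* millisecond version of the server response timeout */
        curl_easy_setopt(curl, CURLOPT_SERVER_RESPONSE_TIMEOUT_MS, 2500L);

        res = curl_easy_perform(curl);
        if(res == CURLE_TOO_LARGE)
          fprintf(stderr, "something grew too big to be allowed\n");
        else if(res == CURLE_OK) {
          /* how long the transfer was queued internally, in microseconds */
          curl_easy_getinfo(curl, CURLINFO_QUEUE_TIME_T, &queued_us);
          printf("queued for %ld microseconds\n", (long)queued_us);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }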

Bugfixes

Here are some of my favorite bugfixes from this cycle:

configure: add libngtcp2_crypto_boringssl detection. Previously it would only detect and build out of the box with the quictls version of ngtcp2 builds.

configure: when enabling QUIC, check that TLS supports QUIC. More efforts trying to detect wrong and invalid build combinations earlier, to avoid users ending up with broken builds.

all libcurl man page examples are verified in CI. Every man page example now compiles cleanly. This step made us detect and fix numerous tiny mistakes of the most annoying kind: when you copy code from docs and it does not work.

curl shows ipfs and ipns as supported “protocols”. In the regular --version output. Even if they are converted to https:// internally.

curl bsearches command line options. The command line parsing is now orders of magnitude faster. Of course it will not really be noticeable outside of the most extreme cases.
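
This is not curl’s actual code, but a minimal illustration of the technique: keep the option names in a sorted array and let bsearch() find a match in logarithmic time instead of scanning the list linearly:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* a tiny, hypothetical option table - must be sorted alphabetically
       for bsearch() to work */
    struct opt {
      const char *name;
      int id;
    };

    static const struct opt options[] = {
      {"data", 1},
      {"header", 2},
      {"output", 3},
      {"verbose", 4},
    };

    static int cmp(const void *key, const void *entry)
    {
      return strcmp((const char *)key, ((const struct opt *)entry)->name);
    }

    int main(void)
    {
      const struct opt *hit = bsearch("output", options,
                                      sizeof(options)/sizeof(options[0]),
                                      sizeof(options[0]), cmp);
      printf("found: %s (id %d)\n",
             hit ? hit->name : "none", hit ? hit->id : 0);
      return 0;
    }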

curl stopped supporting @filename style for --cookie. This syntax was never documented and was not used in any test case. It risked causing unwanted surprises.

curl --remove-on-error only removes “real” files. Mostly as a precaution for when users are unclever enough to run curl with elevated privileges and would save to a device or named pipe etc.

curl no longer sets the file comment on Amiga. It would truncate the URL weirdly and also risk leaking credentials if such were used in the URL.

lib: reduced use of the download buffer all over. The download buffer has over time been abused for all kinds of buffer purposes. This cycle we changed a lot of such buffer uses to use their own buffers instead. With a little luck, this will make it possible for us to use a single download buffer for all transfers in a multi handle, thereby drastically reducing the amount of memory used when doing parallel transfers. With no behavior difference or performance degradation. Details on this will follow later.

lib: use memdup0 instead of malloc + memcpy. This was a common code pattern, and with this we reduce the number of mallocs and memcpys at the same time – which we think is good since they are known “problem functions” that are easy to mess up.
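
A sketch of what such a helper typically looks like (not necessarily curl’s exact internal version): it duplicates a byte range in a single allocation and always null-terminates the copy, so call sites cannot forget the termination:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* duplicate 'length' bytes and null-terminate the copy - one helper
       instead of a malloc + memcpy (+ manual termination) at every call
       site */
    static char *memdup0(const char *src, size_t length)
    {
      char *p = malloc(length + 1);
      if(!p)
        return NULL;
      memcpy(p, src, length);
      p[length] = 0;
      return p;
    }

    int main(void)
    {
      char *copy = memdup0("hello world", 5);
      if(copy) {
        printf("%s\n", copy); /* prints "hello" */
        free(copy);
      }
      return 0;
    }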

lib: various conversions from malloc to dynbuf. In similar spirit as the above, we continued to switch more functions away from using malloc and family to instead use the internal dynbuf API for managing dynamic buffers in a way that is less likely to cause memory related issues.

resolving: with modern c-ares, use its default timeout. It means tighter timeouts by default, but also that this combo now respects the timeout that can be specified in resolv.conf.

headers API: make sure the trailing newline is never stored. A header with no content on the right side of the colon would erroneously get its trailing newline stored as content.

mprintf: overhaul, performance and bugfixes. Now the curl printf functions work even more similarly to their glibc counterparts, especially when provided illegal %-combinations and when using the <num>$ operator. Performance measurements on this new code also say it now executes around 30% faster on commonly used format strings.
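
The <num>$ operator picks arguments by position rather than by the order they appear. A tiny illustration using the public curl_mprintf() from <curl/mprintf.h>, assuming the operator behaves like its POSIX printf counterpart:

    #include <curl/curl.h>
    #include <curl/mprintf.h>

    int main(void)
    {
      /* argument 2 ("hello") is printed first, then argument 1 ("world") */
      curl_mprintf("%2$s %1$s\n", "world", "hello");
      return 0;
    }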

ftp: handle the PORT parsing without allocation. Minor cleanup.

http3: initial support for OpenSSL 3.2 QUIC stack. The fourth QUIC backend in curl is here.

http: check for “Host:” case insensitively. If you asked to disable this header with a different casing than what was compared, it would still send an empty header in the request.

http: only act on 101 responses when they are HTTP/1.1. If an HTTP response claims another protocol version together with a 101 response code, it is now considered an illegal combination.

openldap: fix an LDAP crash. LDAP without TLS would crash on basic use.

openldap: fix STARTTLS. It was recently broken in a refactor.

Next

There are no revolutionary changes in the pipe, but there is a series of things we will most likely land in the next cycle, making the next version number likely to become 8.7.0.

Coming: a curl distros meeting

The curl project arranges a two hour video conference meeting on March 21, 2024, with the aim of getting together people from curl and people packaging curl for distributions – Linux distros and others. If you distribute curl to your users, consider yourself invited!

The main purpose is to improve curl and how curl is shipped to end users. The idea is to make it a multi-party conversation to see how we can improve collaboration, information exchange, testing, experiences etc so that we in the curl project can help the distros ship curl better, faster and more conveniently. To the benefit of everyone who uses curl and libcurl packaged by distributions.

Of course, getting people working on related matters together and talking could also help us get to know each other a little better, to get some faces and voices attached to email addresses and by that, making cooperation smoother in the future.

The meeting is open to anyone interested in joining and participating.

The event starts at 16:00 UTC and might last up to two hours.

All details and specifics for this event are on this wiki page:

https://github.com/curl/curl/wiki/curl-distro-discussion-2024

Welcome!

Logitech G915 TKL

On December 21 2023 I started using this new keyboard as my daily driver. I have used my previous one almost daily since August 2014. Back in 2015 I counted doing almost 7 million key-presses/year. I don’t think I’ve typed any less since then. I rather think there are signs that I have typed a lot more since, especially since I started working full-time on curl in 2019. My estimate is thus that my trusted old keyboard has served over 70 million key-presses. And it has done it very well.

I did not want to comment on the keyboard too early, but I also did not want to wait too long so that I would forget about the little details a transition like this can bring up. I have averaged around 30,000 key strokes per day since I started using the new keyboard (I used keyfreq to figure this out). I also toyed around on keybr.com (where I reach 84 words/minute typing speed – with a fair amount of typos), to get a feel for how efficient I can possibly become on this thing. Seems okay!

Keyboards belong in this annoying category of products that are really hard to figure out and get a proper feel for ahead of time, even if you are able to try them out before purchase. They pretty much require that we use them for a while before we can tell for sure that we like them and want to stay with them many hours every day for the next several years.

Based on a recommendation from my brother Björn, I went with Logitech G915 TKL – tactile keys flavored. “Carbon” they call this color. It apparently also is available in white. Without a recommendation, I would not have considered this brand.

RGB Lights

This thing is branded a “gaming keyboard” and of course the default keyboard RGB lighting effects consist of moving and changing colors across all the keys. There are 10 color style presets to select from, but the keyboard goes back to the default after idle timeouts, making the default mode really important. The factory default mode really screams hey look at me!, and is mostly disturbing to use.

I have read comments from Linux users online who even returned their G915 keyboards due to this crazy scheme not being possible to change properly on device, and the unfortunate truth is that I had to install their control software on a Windows box nearby to set a new default coloring mode: a solid single color on all keys – including that big G logo-symbol in the top left. Resetting back to this mode is fine. I appreciate some light on the keys, especially when using the keyboard at night.

It could perhaps also be mentioned that their control software on Windows, G Hub, while looking fancy, was not easy to figure out how to do this with.

I like that the lights on the keyboard go off completely after idling long enough, so that my workplace goes almost completely dark when left alone.

TKL

Ever since I got my previous keyboard I have been interested in a Ten Key Less (TKL) model. I never use any of those keys on the numerical part and I figured it could be good ergonomics to reduce the travel distance for my right hand when moving from keys to the mouse and back. A move that I must do hundreds if not thousands of times per day.

Wireless

This keyboard is wireless, but that is not a feature I care about. I use it quite stationary at my desk within cable distance and I would rather use a cable than have to recharge the device.

Layout

I’m using the traditional Swedish QWERTY layout, which some people will legitimately mention is not optimal for development compared to US keyboard layouts. This is because of the key combos you need for parentheses, braces, forward-slash etc. But the Swedish language has these commonly used extra letters (Å, Ä and Ö), we have been using this keyboard layout for a long time in this country, I am used to it, and wherever you go in Sweden and find a keyboard, almost every one of them will use this layout. It is the path of least resistance.

I have never been a fan of the ergonomic keyboard styles and layouts. Probably because I never really allowed myself enough time to get used to them.

Media keys

With my previous keyboard I had to press Fn plus an F-key to do media control, while this new one has a set of dedicated media keys above the F-keys as you can see on the image above. I can also control the volume with the scroll wheel thing on the top right. Pretty convenient actually.

I have grown to appreciate having media control keys on the keyboard, and I find that having dedicated keys like this is a step up as it saves me from doing combo-presses frequently. A little less finger gymnastics.

Switches

Logitech does not seem to have a specific brand name for their switches. They offer the same keyboard with three different styles: linear, clicky or tactile. I went with tactile and I’m satisfied. The feel is not substantially different from my previous Cherry MX Reds.

Sound

I had Cherry MX Red switches in my previous keyboard, and this new one uses Logitech switches without any distinct name (that I could find). I did not have any particular problem with the noise level before, but I hoped that this new one would perhaps be a little quieter. For the sake of family members when I type away on my keyboard while they are asleep, and maybe to be less audible when I do video recordings etc. My home office and this keyboard setup are not far away from where they sleep.

I think maybe this keyboard is a little more silent, but not by much. I asked family members if they had noticed any difference in sound levels since my keyboard switch but none of them had noticed any change.

Size

Length: 368 mm
Width: 150 mm
Height: 22 mm
Weight: 810 g

The actual size of the keyboard and keys is almost identical to my previous keyboard, I mean when excluding the removed numerical part. The weight of the G915 makes it feel solid and it sits firmly fixed on the table, even during my most intense typing sessions. I suppose the weight is also good for wireless use as it won’t feel as “flimsy” as regular all-plastic wireless ones tend to feel. But I have not used it much in wireless mode.

Feeling

It feels solid and like it can survive being used daily for a long time.

This is what my previous keyboard looked like.

curl docs format evolution

I trust you have figured out already that I have the highest ambitions for the curl documentation. I want everything documented in a clear, easy-to-read and easy-to-find manner. This takes a lot of work and is not something that happens without effort.

Part of providing best-in-class documentation has in my mind always been to provide and ship man pages. And to make sure that those latest up-to-date man pages are always offered to users as good-looking webpages on the curl site.

Over twenty years ago I decided to render the web versions of the man pages by creating my own tool for the purpose: roffit. We still use it on the site today. It is a simple tool that converts nroff formatted man pages to HTML and adds cross-referencing links to other man pages. There is no concept of links in nroff, but I decided on a combination of text and markup to use as a signal and it has worked out fine since. In part of course because I have subsequently made sure to format the curl man pages in this roffit-friendly way.

curl.1

We started writing the curl man pages in the nroff format some time in the beginning of 1999. Mostly because back then we did not know of or have any tool that would convert nicely into nroff, and later on because we wanted to make sure we got things formatted the way we wanted. This approach works and can be used, but nroff is not a terribly human-friendly format and over time we decided that documenting several hundred command line options in a single nroff file was not ideal.

In November 2016 we revamped the system and started to generate the curl.1 man page at build time from multiple source files. Every command line option would then be documented in its own stand-alone file. Easier to read, easier to manage. Yet we controlled the output nroff format so that the web version would still look excellent. The individual files got .d extensions (for document) but still mostly used nroff, and were more or less all appended to create the result.

Over time, we introduced more convenience formatting aids to the .d files so that we could write them using less and less nroff, and as a result they also grew more readable in their source format.

The .d file format never had its own name. It is just how we documented 250+ command line options for the curl command line tool.

The libcurl docs however remained authored in nroff. In 2014, the number of libcurl man pages exploded when we started doing a separate document for every libcurl option. We went from 59 man pages in the project to 270 between two releases. Since then we have added many more. Today in early 2024, we have a total of 496 individual man pages in the project. 493 of them concern libcurl.

Markdown

When curl started, markdown did not exist – that format was first created in 2004. The early documentation we created in the project that was not nroff, we made using plain text files. It was not until 2014 that we slowly started to convert some of our text documents into markdown.

Markdown has several upsides compared to plain old text. In particular, it is easier to render good-looking web versions of the documents, which we do on the curl web site, and it offers cross-document linking etc. Over the years we have converted almost all our text documents into markdown. Markdown is also a user-friendly format as it is easy to read and easy to edit for contributors, new and old.

curldown

The move away from using nroff for the libcurl man pages came with the introduction of the “curldown” document format in early 2024. It is meant to make it easy to generate the same kind of nroff files we previously hand-edited, but using “markdown”. I use quotes around it because it is not full-fledged markdown support, but it looks similar enough to trick casual users, and we decided to use “.md” file extensions to make editors, GitHub and more treat and show the files as such.

The file format is inspired by the one we created for the command line tool: a header holding meta-data followed by simplified markdown. Easy to read, easy to edit – for everyone. No more strange nroff syntax to copy and paste. The fact that GitHub renders the source files clean and good-looking is also attractive.

Switching to a markdown lookalike format had some other side-effects: suddenly a lot of CI jobs we have running caught these files and had a go at them: spellchecks, prose checks, checks for capitalized words after periods and more. This improved the language significantly.

Also, generating the nroff files rather than editing them has more benefits than just avoiding the baroque syntax: it also allows us to change styles and escape certain sequences throughout all documentation with just a few fixes in the scripts producing the output.

After this 47,000 lines added and removed patch was merged, we have a completely new source format system for the libcurl docs, that generates the same style of nroff output we had previously. With improved language.

But then why not…

With the huge switch to “curldown” for libcurl options, it felt wrong to keep the old curl .d files for the curl.1 man page, seeing as they were almost there already. I did the work and converted them over to the same markdown lookalike style I did for libcurl: they are now also .md files and now they too get spell-checked and scrutinized much more.

I took this move one step further: previously we generated the single curl.1 man page using a fixed header file and a footer file that we prepended and appended to the nroff generated from the (converted) .d documentation.

Now, we instead have a mainpage.idx file that lists which (markdown) file names to include and convert to nroff. This allows us to split the two big headers and footers into several smaller markdown files, each with its own header. Like “DESCRIPTION”, “URL”, and “GLOBBING”. Separating them makes them easier to manage for contributors and the index file makes it easy to change the order of sections etc.

We end up with a curl.1 man page that looks quite similar to before, but with improved language and a format that is easier to edit going forward.

When I type this, we have 836 files with .md extensions in the curl git repository. At the time of the most recent release, we had 61.

Future

I’m sure it does not stop here. I already have some ideas on how to improve this setup further.

I’m considering doing something to completely avoid the nroff step for the website HTML versions of the man pages and instead go straight from markdown.

The curl tool has the man page built in (shown with the -M option), and I would like to fix the build to do this without using nroff.

I have some more CI jobs to add to verify and check the language in the markdown files. Mostly to make sure the language is consistent.

curl is a CNA

The curl project has been accepted as a CVE Numbering Authority (CNA) for vulnerabilities in all products directly made or managed by the project. If I’m counting correctly, we are the 351st CNA.

The official announcement from Mitre states: curl is now a CVE Numbering Authority (CNA) for all products made and managed by the curl project. This includes curl, libcurl, and trurl.

In plain English, this means that we will reserve and manage our own CVEs in the future directly against the CVE database with no middle man, and also that we have a scope for CVEs that is our territory: curl and libcurl. No one else can now register CVEs for our products – without involving us. (There’s an appeals process so someone can still actually file CVEs for issues even if we say no, but at least there’s a process where both sides will argue their points.)

We do not particularly want to be a CNA but we hope that this move will make it harder to file more stupid curl CVEs in the future.

emails I received, the collection

I have since at least 2009 posted occasional emails I received on this blog. Often they are emails from people who found my email address somewhere, thinking I am involved in the product or service where they found me. In cars, games, traces after breaches, apps and more.

They range from plain weird to fun to scary and threatening.

Somehow they help me remember that the world we live in is super complicated.

The ones I have shown here on the blog are only a small subset of those that I have received. And of all the ones I have received, I have not kept all of them, or at least not stored them in ways that make it easy for me to find them now.

Also: for products such as Cisco Anyconnect and Mobdro I have received dozens of emails, so I just selected a few examples to include.

Out of the ones that I could find, I have dug up seventy-four different messages, converted them to markdown and added them to a brand new Daniel email collection.

I plan to add future email correspondence that meets the criteria as well.

Enjoy!

The top image for this blog post is a joke back from when I received the subject: urgent warning email which talked about the “hacking ring”.

PSL in curl

PSL in this context stands for Public Suffix List. It is a list of known public suffixes in domain names. A public suffix is a name under which Internet users can (or historically could) directly register names – other than the top-level domains. One of the most commonly known and used such suffixes is “co.uk”, but the complete list is rather long and exhaustive.

Public suffixes are important when working with HTTP cookies, because the only way an HTTP client can accurately deny setting super cookies is to know about every existing PSL domain and refuse to let servers set cookies for those domains.

The use of PSL is recommended in RFC 6265: “If feasible, user agents SHOULD use an up-to-date public suffix list” (and yes, the mere fact that it speaks of a list and not the list of course highlights how problematic this is).

A super cookie is a cookie that is set for a “too wide” domain, making it apply across multiple sites in ways cookies are not supposed to function. If we allow a super cookie set for “co.uk”, that cookie could get sent to a huge amount of different domains.
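
To illustrate, here is roughly how a client can use libpsl – the library curl relies on for this, as described below – to decide whether a cookie domain is acceptable. The host and domain names are just examples and the check uses the suffix list compiled into the library:

    #include <stdio.h>
    #include <libpsl.h>

    int main(void)
    {
      /* the list compiled into libpsl; a real client might prefer a more
         up-to-date list loaded from a file */
      const psl_ctx_t *psl = psl_builtin();
      if(!psl) {
        fprintf(stderr, "no built-in PSL data available\n");
        return 1;
      }

      /* a server at www.example.co.uk tries to set a cookie for "co.uk" -
         a public suffix, so the cookie domain must not be accepted */
      int ok = psl_is_cookie_domain_acceptable(psl, "www.example.co.uk",
                                               "co.uk");
      printf("cookie for co.uk acceptable: %s\n", ok ? "yes" : "no");

      /* the same cookie set for "example.co.uk" is fine */
      ok = psl_is_cookie_domain_acceptable(psl, "www.example.co.uk",
                                           "example.co.uk");
      printf("cookie for example.co.uk acceptable: %s\n", ok ? "yes" : "no");
      return 0;
    }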

If you ask me, this is one of the ugliest parts of cookie functionality. And there are a few such parts to select from.

The exact list of names present in the PSL varies over time. New names are added, old names are removed. This of course makes aging clients over time increasingly likely to unwittingly accept cookies for domain names that have since been added to the list. Or block cookies for domains that are no longer in the list!

curl, cookies and PSL

curl has supported cookies since 1998, but it has only supported PSL since 2015 (7.46.0). PSL support in curl is powered by the libpsl library.

Support for this external library is entirely optional in the build and lots of users deliberately decide to build curl without it. Leaving it out reduces footprint, avoids an extra dependency etc. Lots of users who build curl won’t use cookies or do not care about the risk for super cookies.

When you invoke curl -V, to get to see the versions and features of your currently installed curl tool, look for the PSL keyword among the features to learn whether your curl build supports Public Suffix List handling.

cookies are cross-client

Since cookies are typically sent to clients / browsers in a completely client-agnostic way, setting cookies with “illegal” domains makes them get discarded and not used. This makes sites tend not to try super cookies in normal scenarios, as those cookies will just be rejected by most clients. This fact sometimes saves the non-PSL compliant clients.

Of course, servers can also just send a flood of cookies to the client trying different domain attributes – as clients will just silently discard the illegal ones, keep the legal ones and then use them later according to their own best abilities.

Force a decision

Based on the realization that people have been building and shipping curl without PSL support, perhaps without fully understanding what they have opted out of, I have just merged a change to curl’s configure script. This change will ship in curl 8.6.0.

The configure script checks for libpsl by default (and has done so for a long time) and will use the library if detected, but starting now it will exit with an error if the detection fails, in order to highlight this fact in a much more noticeable way. Users who want to keep building without it can just add --without-libpsl to hush up the message. Previously, a failed detection only worked as a hint to configure that it should build curl without PSL support – completely fine, but maybe not always done with the user being fully aware of what that means.

This change still does not necessarily mean that the person getting the configure error understands PSL in depth, but it forces them to make a decision: Willingly build without it, or fix the build to use it. Your choice!

Not a security issue (in curl)

This (now past) lenient take, to allow users to build curl easily without PSL support, was reported as a curl security problem.

I disagree with that take, as I believe it is the responsibility of everyone building curl to make sure that the sum of the components is what they think it should be – the curl build procedure is not designed for nor attempting to make sure that the final product is hardened security wise (whatever that could be). PSL support is optionally built-in – by design.

This said: users who mistakenly use curl without PSL support might of course end up in what could be seen as a potential security problem. I do however not think that it is a security problem in the curl project itself.

Funding Stefan’s curl work

The curl fund sponsors curl development in Q1 2024. The curl fund consists entirely and only of money donated to the project by companies and individuals.

Thank you sponsors!

These two funded projects are first out in 2024.

Support for OpenSSL’s QUIC API

Designated lead developer: Stefan Eissing

In OpenSSL’s recent 3.2 release, they introduced support for client-side QUIC. OpenSSL has taken a different API approach than what just about all other TLS libraries have done (including all the OpenSSL forks): they implement a full QUIC stack.

The result is that curl’s QUIC (and by extension HTTP/3) support that already works with three different backends does not work with OpenSSL, but only with the OpenSSL family of forks (BoringSSL, libressl, AWS-LC and quictls), wolfSSL and GnuTLS. Also, OpenSSL just shipped this, while the other TLS libraries have shipped their QUIC APIs for years already.

This design choice of OpenSSL significantly hampers HTTP/3 adoption with curl (and others) since OpenSSL remains the by far most popular TLS library to use with curl.

To improve this situation going forward, Stefan Eissing will work on adding support for OpenSSL’s QUIC in curl, using nghttp3 on top. Experience from other QUIC libraries says that they usually don’t have all the methods we need on their first attempt, and since this is OpenSSL’s first attempt and nobody from their team has asked us for feedback, we expect a high risk of at least minor problems. Finding those issues earlier is better than later, in case we need to work with the OpenSSL team to improve things.

Ideally, no major problems are identified and we have HTTP/3 support with OpenSSL in official curl builds later in the year.

Refactoring the HTTP Handler

Designated lead developer: Stefan Eissing

The second half of Stefan’s assignment is to work on generally improving the state of HTTP and handling the different HTTP versions in the libcurl source code.

We want to make curl’s HTTP handler focus on generic HTTP processing of requests and responses only and separate the serialization on the wire into version specific parts. This puts HTTP 1.x, 2 and 3 implementations on par with each other. Other 1.x implementations can then be configured in (e.g. the Rust based “hyper” module) without needing to replace the core HTTP handling of curl.

Completion

Stefan has already started. We expect and hope that these two projects will be mostly done and landed by the end of Q1 2024. Stefan has an established presence in the project and has already done lots of good development work. We are in good hands with Stefan behind the wheel for this.

I personally will of course assist, help out, review, participate in design decisions and generally cheer on, while continuing to do my regular “chores” and curl work: security, releases, maintenance, customer and user support for example.
