Category Archives: cURL and libcurl

curl and/or libcurl related

Oops, I spilled the beans

Saturday June 18: I had some curl time in the afternoon and I was just about to go edit the four security advisories I had pending for the next release, to brush up the language and check that they read fine, when it dawned on me.

These particular security advisories were still in draft versions but maybe 90% done. There were details, like dates and links to current in-progress patches, left to update. I also like to reread them a few times, especially in a webpage rendered format, to make sure they are clear and accurate in describing the problem, the solution and all other details, before I consider them ready for publication.

I checked out my local git branch where I expected the advisories to reside. I always work on pending security details in a local branch named security-next-release or something like that. The branch and its commits remain private and undisclosed until everything is ready for publication.

(I primarily use git command lines in terminal windows.)

The latest commits in my git log output did not show the advisories so I did a rebase but git promptly told me there was nothing to rebase! Hm, did I use another branch this time?

It took me a few seconds to realize my mistake. I saw four commits in the git master branch containing my draft advisories and then it hit me: I had accidentally pushed them to origin master and they were publicly accessible!

The secrets I was meant to guard until the release, I had already mostly revealed to the world – for everyone who was looking.

How

In retrospect I can’t remember exactly how the mistake happened, but I clearly committed the CVE documents in the wrong branch when I last worked on them, a little over a week earlier. The commit date says June 9.

On June 14, I got a bug report about a problem with curl’s .well-known/security.txt file (RFC 9116) where it was mentioned that our file didn’t have an Expires: keyword in spite of it being required in the spec. So I fixed that oversight and pushed the update to the website.

When doing that push, I did not properly verify exactly what other changes would be pushed in the same operation, so when I pressed enter, my security advisories – accidentally committed in the wrong branch five days earlier and still present there – were also pushed to the remote origin. Swooosh.

Impact

The advisories are created in markdown format, and anyone who would update their curl-www repository after June 14 would then get them into their local repository. Admittedly, there probably are not terribly many people who do that regularly. Anyone could also browse them through the web interface on github. Also probably not something a lot of people do.

These pending advisories would however not appear on the curl website since the build files were not updated to generate the HTML versions. If you could guess the right URL, you could still get the markdown version to show on the site.

Nobody reported this mistake in the four days they were visible before I realized my own mistake (and nobody has reported it since either). I then tried googling the CVE numbers but no search seemed to find and link to the commits. The CVE numbers were registered already so you would mostly get MITRE and other vulnerability database listings that were still entirely without details.

Decision

After some quick deliberations with my curl security team friends, we decided that expediting the release was the most sensible thing to do. It reduces the risk that someone takes advantage of this, and if someone does, it limits the time window before the problems and their fixes become known. For the sake of curl users’ security.

Previously, the planned release date was set to July 1st – thirteen days away. It had already been adjusted somewhat to not land on its originally intended release Wednesday, to cater for my personal summer plans.

To do a proper release with several security advisories I want at least a few days margin for the distros mailing list to prepare before we go public with everything. There was also the Swedish national midsummer holiday coming up next weekend and I did not feel like ruining my family’s plans and setup for that, so I picked the first weekday after midsummer: June 27th.

While that is just four days earlier than what we had previously planned, I figure those four days might be important and if we imagine that someone finds a way to exploit one of these problems before then, then at least we shorten the attack time window by four days.

curl 7.84.0 was released on June 27th. The four security advisories I had mostly leaked already were published in association with that: CVE-2022-32205, CVE-2022-32206, CVE-2022-32207 and CVE-2022-32208.

Lessons

  1. When working with my security advisories, I must pay more attention and be more careful with which branch I commit to.
  2. When pushing commits to the website and I know I have pending security sensitive details locally that have not been revealed yet, I should make it a habit to double-check that what I am about to push is only and nothing but what I expect to be there.

At the same time, I have worked using this process for many years now and this is the first time I have made this mistake. I do not think we need to be alarmist about it.

Credits

The Swedish midsummer pole image by Patrik Linden from Pixabay. Facepalm photo by Alex E. Proimos.

curl 7.84.0 inside every box

Welcome to take the next step with us in this never-ending stroll.

Release presentation

Numbers

the 209th release
8 changes
47 days (total: 8,865)

123 bug-fixes (total: 7,980)
214 commits (total: 28,787)
0 new public libcurl function (total: 88)
2 new curl_easy_setopt() option (total: 297)

1 new curl command line option (total: 248)
51 contributors, 20 new (total: 2,652)
35 authors, 13 new (total: 1,043)
4 security fixes (total: 125)
Bug Bounties total: 34,660 USD

Security

This is another release in which scrutinizing eyes have been poking around and found questionable code paths that could lead to insecurities. We announce four new security advisories this time – all found and reported by Harry Sintonen. This bumps Mr Sintonen’s curl CVE counter up to 17; the number of security problems in curl found and reported by him alone.

CVE-2022-32205: Set-Cookie denial of service

A malicious server can serve an excessive number of Set-Cookie: headers in an HTTP response, and curl stores all of them. A sufficiently large number of (big) cookies can make subsequent HTTP requests to this server, or to other servers the cookies match, grow larger than the threshold curl uses internally to avoid sending crazy large requests (1048576 bytes), at which point curl instead returns an error.

CVE-2022-32206: HTTP compression denial of service

curl supports “chained” HTTP compression algorithms, meaning that a server response can be compressed multiple times and potentially with different algorithms. The number of acceptable “links” in this “decompression chain” was unbounded, allowing a malicious server to insert a virtually unlimited number of compression steps.
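The fix for a flaw like this has an obvious shape: count the links in the decompression chain and refuse to go past a fixed limit. A rough sketch of that idea, where the limit of five is my illustrative placeholder and not necessarily the value curl actually chose:

```c
#include <string.h>

/* Illustrative cap on a Content-Encoding chain: count the comma-separated
   encodings and reject the response if there are too many links.
   The limit is a placeholder, not necessarily curl's actual value. */
#define MAX_ENCODING_LINKS 5

int chain_ok(const char *content_encoding)
{
  int links = 1;
  const char *p;
  if(!content_encoding || !*content_encoding)
    return 1;                       /* no encoding at all is fine */
  for(p = content_encoding; *p; p++)
    if(*p == ',')
      links++;                      /* each comma adds one more link */
  return links <= MAX_ENCODING_LINKS;
}
```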

CVE-2022-32207: Unpreserved file permissions

When curl saves cookies, alt-svc and hsts data to local files, it makes the operation atomic by finalizing the operation with a rename from a temporary name to the final target file name.

In that rename operation, it might accidentally widen the permissions for the target file, leaving the updated file accessible to more users than intended.
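The general pattern that avoids this class of problem is to create the temporary file with tight permissions up front, so the final rename has nothing to widen. A sketch of that pattern (my illustration, not curl's actual fix):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Save data to path atomically: create the temp file with 0600 from the
   start, write, then rename over the target. rename() keeps the temp
   file's mode, so the permissions can never widen in the process. */
int atomic_save(const char *path, const char *data)
{
  char tmp[512];
  int fd;
  snprintf(tmp, sizeof(tmp), "%s.tmp", path);
  fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
  if(fd < 0)
    return -1;
  if(write(fd, data, strlen(data)) < 0) {
    close(fd);
    return -1;
  }
  close(fd);
  return rename(tmp, path);
}

/* helper: report the permission bits of a file */
int file_mode(const char *path)
{
  struct stat st;
  if(stat(path, &st) != 0)
    return -1;
  return (int)(st.st_mode & 0777);
}
```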

CVE-2022-32208: FTP-KRB bad message verification

When curl does FTP transfers secured by krb5, it handles message verification failures wrongly. This flaw makes it possible for a Man-In-The-Middle attack to go unnoticed and even allows it to inject data to the client.

Changes

We have no less than eight different changes logged this time. Two are command line changes and the rest are library side.

--rate

This new command line option rate limits the number of transfers per time period.

deprecate --random-file and --egd-file

These are two options that have not been used by anyone for an extended period of time, and starting now they have no functionality left. Using them has no effect.

curl_global_init() is threadsafe

Finally! Although this should be conditioned: the function is now thread-safe on most platforms, but not all.

curl_version_info: adds CURL_VERSION_THREADSAFE

The point here is that you can check if global init is thread-safe in your particular libcurl build.

CURLINFO_CAPATH/CAINFO: get default CA paths

As the default values for these values are typically figured out and set at build time, applications might appreciate being able to figure out what they are set to by default.

CURLOPT_SSH_HOSTKEYFUNCTION

For libssh2 enabled builds, you can now set a callback for hostkey verification.

deprecate RANDOM_FILE and EGDSOCKET

The libcurl version of the change mentioned above for the command line. The CURLOPT_RANDOM_FILE and CURLOPT_EGDSOCKET options no longer do anything. They most probably have not been used by any application for a long time.

unix sockets to socks proxy

You can now tell (lib)curl to connect to a SOCKS proxy using unix domain sockets instead of traditional TCP.

Bugfixes

We merged way over a hundred bugfixes in this release. Below are descriptions of some of the fixes I think are particularly interesting to highlight and know about.

improved cmake support for libpsl and libidn2

more powers to the cmake build

address cookie secure domain overlay

Addressed issues when identically named cookies marked secure are loaded over HTTPS and then again over HTTP and vice versa. Cookies are complicated.

make repository REUSE compliant

Being REUSE compliant means we now have even better order and control of the copyrights and licenses used in the project.

headers API no longer EXPERIMENTAL

The header API is now officially a full member of the family.

reject overly many HTTP/2 push-promise headers

curl would accept an unlimited number of headers in an HTTP/2 push promise request, which would eventually lead to out of memory – starting now it will instead reject and cancel such ridiculous streams earlier.

restore HTTP header folding behavior

curl broke the previous HTTP header folding behavior in the 7.83.1 release, and it has now been restored. As a bonus, the headers API supports folded headers as well. Folded headers are the rare (and deprecated) continuation headers that start with whitespace.

skip fake-close when libssh does the right thing

Previously, libssh would, a little over-ambitiously, close our socket for us but that has been fixed and curl is adjusted accordingly.

check %USERPROFILE% for .netrc on Windows

A few other tools apparently look for and use .netrc if found in the %USERPROFILE% directory, so by making curl also check there, we get better cross tool .netrc behavior.

support quoted strings in .netrc

curl now supports quoted strings in .netrc files so that you can provide spaces and more in an easier way.

many changes in ngtcp2

There were lots of big and small changes in the HTTP/3 backend powered by ngtcp2.

provide a fixed fake host name in NTLM

curl no longer tries to provide the actual local host name when doing NTLM authentication to reduce information leakage. Instead, curl now uses the same fixed fake host name that Firefox uses when speaking NTLM: WORKSTATION.

return error from “lethal” poll/select errors

A persistent error in select() or poll() could previously be ignored by libcurl and not result in an error code returned to the user, making it loop more than necessary.

strcase optimizations

The case insensitive string comparisons were optimized.

maintain path-as-is after redirects

After a redirect or when doing multi-stage authentication, the --path-as-is state would previously be dropped; it is now maintained.

support CURLU_URLENCODE for curl_url_get

This is useful when, for example, you ask the API to accept spaces in URLs and you later want to extract a valid URL with such an embedded space URL encoded.

Coming next

7.85.0 is scheduled to ship on August 31, 2022.

curl is REUSE compliant

The REUSE project is an effort to make Open Source projects provide copyright and license information (for all files) in a machine readable way.

When a project is fully REUSE compliant, you can easily figure out the copyright and license situation for every single file it holds.

The easiest way to accomplish this is to make sure that all files have the correct header with the appropriate copyright info and SPDX-License-Identifier specified, but it also has ways to provide that meta data in adjacent files – for files where prepending that info isn’t sensible.

What we needed to do

We were already in a fairly good place before this push. We have a script that verifies the presence of copyright headers in files (including checking the end year against the latest git commit), with a list of files that were deliberately skipped.

The biggest things we needed to do were

  1. Add the SPDX identifier all over
  2. Make sure that the skipped files also have copyright and licensing info provided
  3. Add a CI job that verifies that we remain compliant

I also ended up adjusting our own copyright scan script to use the REUSE metadata files instead of its own ignore filters, which made it even easier for us to make sure we are and remain compliant – that every single file in the curl git repository has a known and documented license and copyright situation.

As a bonus, the cleanup work helped us detect an example file that stood out which we got relicensed and we removed two older files that had their own unique licenses (without any good reason).

There are 3518 files in the curl git repository this exact moment.

Compliant!

Starting mid-June 2022, curl is 100% REUSE compliant. curl 7.84.0 will be the first release done in this status.

Motivation

I think it is a good idea to have perfect control over the copyright and license situation for every single file, and to make sure that the situation is documented enough and to a level that allows anyone and everyone to check it out and learn how things lie. No surprises.

Companies have obviously figured out this info before, to a degree they have been satisfied with, since curl has been widely used commercially for a long time. But I believe that providing the information in an even easier and more descriptive way makes things even better. For existing and future users.

I also think that the low threshold for us to reach this compliance was a factor. We were almost there already. We just needed to polish up some small details, and I think it was worth it.

This cleanup also makes sure we have perfect control and knowledge of the license situation, now and going forward. I think this can be expected from a project aiming for gold standard.

The curl SPDX license identifier

Keen readers will notice that curl has its own license identifier. It is called the curl license. Not MIT, X or a BSD variation. curl.

The reason for this is good old stupidity. In January 2001 we adopted the MIT license for use in the project because we believed it better matched what we wanted compared to the previous license situation. We started out with a dual license setup together with the MPL license we used previously, but the MPL part was removed completely in October 2002.

For reasons that have since been forgotten, we thought it was a good idea to edit the license text. To trim it a little. Since August 2002, the license text that started out as an MIT/X license is no longer a perfect copy. It is a derivative. Very similar and almost identical. But it’s not the same.

When the SPDX project created their set of identifiers for well-used licenses out in the FOSS world they decided that the curl license is different enough from the MIT/X license to treat it separately and give it its own identifier. I know of no other project than curl that uses this particular edited version of the MIT license.

In hindsight, I believe the editing of the license text back in 2002 was dumb. I regret it, but I will not change it again. I think we can live with this situation pretty well.

Credits

Most of the heavy lifting necessary to make curl compliant was done by Max Mehl.

curl user survey 2022 analysis

Once again I’ve collected the numbers, generated graphs, scratched my head and tried to understand what users mean and how to best use this treasure trove of user feedback.

The curl user survey 2022 ran for two full weeks in the end of May. Here is the document with all the numbers, graphs and analysis from this year’s data.

You will learn what protocols curl users use (HTTPS and HTTP), which TLS backend is the most popular (OpenSSL) and which the top platform is (Linux). And a lot more.

Spoiler: the results are not terribly different than last year and the year before that!

The analysis is a 36-page PDF, available here:

curl-user-survey-2022-analysis

If you have specific feedback on the analysis itself, then I’m all ears. I’m no statistics scholar or anything, but I believe all the numbers, graphs and data I present in there are accurate, barring my own mistakes of course.

Making libcurl init more thread-safe

Twenty-one years ago, in May 2001, we introduced a global initialization function called curl_global_init() in libcurl 7.8.

The main reason we needed this separate function to get called before anything else was used in libcurl, was that several of libcurl’s dependencies at the time (including OpenSSL and GnuTLS) had themselves thread-unsafe initialization procedures.

This rather lame characteristic found in several third party dependencies made the libcurl function inherit that property: not thread-safe. A nasty “feature” in a library that otherwise prides itself on being thread-safe and in many ways working as it should. A function that is specifically marked as thread unsafe was not good. Is not good.

Still, we were victims of circumstances and if these were the dependencies we were going to use, this is what we needed to do.

Occasionally, this limitation has poked people in the eye and really hurt them since it makes some use cases really difficult to realize.

Dependencies improved

Over the following decades, the dependencies libcurl uses have almost all shaped up and removed the thread-unsafe property of their initialization procedures.

We also slowly cleaned away other code that happened to also fall into the init function out of laziness and convenience because it was there and could be used (or perhaps abused).

Eventually, we were basically masters of our own fate again. The closet was all cleared out and the scrubby leftovers we had sloppily left in there had been cleaned up and converted to proper thread-safe code.

The task of finally making curl_global_init() thread-safe was brought up and attempted a little half-assed a few times but was never pulled through all the way.

The challenges always included that we want to avoid relying on a threading library and that we keep supporting building libcurl with C89 compilers, etc.

Finally, the spring cleaning of 2022

Thanks to work spearheaded by Thomas Guillem, who came bursting in with a clear use case in mind where he felt he really needed this to work, voilà: the next libcurl release (7.84.0) features a thread-safe init.

If configure finds support for _Atomic (a C11 feature) or it runs on a new enough Windows version (this should cover a vast amount of platforms), libcurl can now do its own spinlock implementation that makes the init function thread-safe and independent of threading libraries.
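The approach can be illustrated with a minimal one-time init guarded by a C11 atomic spinlock, independent of any threading library. This is a sketch of the idea, not the actual libcurl code:

```c
#include <stdatomic.h>

/* Sketch of a dependency-free thread-safe one-time init in the spirit of
   what libcurl 7.84.0 does when C11 _Atomic is available. The spinlock
   needs no threading library, only compiler atomics. */
static atomic_flag init_lock = ATOMIC_FLAG_INIT;
static atomic_int initialized = 0;
static int init_calls = 0;          /* for demonstration only */

int global_init(void)
{
  if(atomic_load(&initialized))
    return 0;                       /* fast path: init already done */
  while(atomic_flag_test_and_set(&init_lock))
    ;                               /* spin until we own the lock */
  if(!atomic_load(&initialized)) {
    init_calls++;                   /* the expensive one-time setup goes here */
    atomic_store(&initialized, 1);
  }
  atomic_flag_clear(&init_lock);
  return 0;
}
```

Even if two threads race into the slow path, only the one that wins the lock performs the setup; the other sees the flag set and skips it.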

New HTTP core specs

Before this, the latest refreshed specification of HTTP/1.1 was done in the RFC 7230 series, published in June 2014. After that, HTTP/2 was done in the spring of 2015 and recently the HTTP/3 spec has been a work in progress.

To better reflect this new world of multiple HTTP versions and an HTTP protocol ecosystem that has some parts that are common for all versions and some other parts that are specific for each particular version, the team behind this refresh has been working on this updated series.

My favorite documents in this “cluster” are:

HTTP Semantics

RFC 9110 basically describes how HTTP works independently of and across versions.

HTTP/1.1

RFC 9112 replaces 7230.

HTTP/2

RFC 9113 replaces 7540.

HTTP/3

RFC 9114 is finally the version three of the protocol in a published specification.

Credits

Top image by Gerhard G. from Pixabay. The HTTP stack image is done by me, Daniel.

.netrc pains

The .netrc file is used to hold user names and passwords for specific host names and allows tools to login to those systems automatically without having to prompt the user for the credentials while avoiding having to use them in command lines. The .netrc file is typically set without group or world read permissions (0600) to reduce the risk of leaking those secrets.

History

Allegedly, the .netrc file format was invented and first used for Berknet in 1978 and it has been used continuously since by various tools and libraries. Incidentally, this was the same year Intel introduced the 8086 and DNS didn’t exist yet.

.netrc has been supported by curl (since the summer of 1998), wget, fetchmail, and a busload of other tools and networking libraries for decades. In many cases it is the only cross-tool way to provide credentials to remote systems.

The .netrc file use is perhaps most widely known from the “standard” ftp command line client. I remember learning to use this file when I wanted to do automatic transfers without any user interaction using the ftp command line tool on unix systems in the early 1990s.

Example

A .netrc file where we tell the tool to use the user name daniel and password 123456 for the host user.example.com is as simple as this:

machine user.example.com
login daniel
password 123456

Those different instructions can also be written on the same single line, they don’t need to be separated by newlines like above.
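For example, the exact same entry as above can be written as:

```
machine user.example.com login daniel password 123456
```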

Specification

There is not, and has never been, any standard or specification for the file format. If you google .netrc now, the best you get is a few different takes on man pages describing the format at a high level. In general this covers our needs and for most simple use cases it is good enough, but as always the devil is in the details.

The lack of detailed descriptions of, for example, how long lines or fields to accept, or how to handle special characters or white space, has left the implementers of the different code bases to decide for themselves how to handle those things.

The horse left the barn

Since numerous different implementations have been done and have been running in systems for several decades already, it might be too late to do a spec now.

This is also why you will find man pages out there with conflicting information about the support for space in passwords for example. Some of them explicitly say that the file format does not support space in passwords.

Passwords

Most fields in the .netrc work fine even when not supporting special characters or white space, but in this age we have hopefully learned that we need long and complicated passwords and thus having “special characters” in there is now probably more common than back in the 1970s.

Writing a .netrc file with for example a double-quote or a white space in the password unfortunately breaks tools and is not portable.

I have found at least three different ways existing tools do the parsing, and they are all incompatible with each other.

curl parser (before 7.84.0)

curl did not support spaces in passwords, period. The parser split all fields at the following space or newline and accepted whatever was in between. curl thus supported any characters you want, except spaces and newlines. It also did not “unquote” anything, so if you wanted to provide a password like ""llo (with two leading double-quotes), you would use those five bytes verbatim in the file.

wget parser

This parser allows a space in the password if you provide it quoted within double-quotes and use a backslash in front of the space. To specify the same ""llo password mentioned above, you would have to write it as "\"\"llo".

fetchmail parser

Also supports spaces in passwords. Here the double-quote is a quote character itself so in order to provide a verbatim double-quote, it needs to be doubled. To specify the same ""llo password mentioned above, you would have to write it as """"llo – that is with four double-quotes.

What is the best way?

Changing any of these parsers in an effort to unify risks breaking existing use cases and scripts out in the wild, with outraged users as a result. But a change could also create a few happy users who could then better share the same .netrc file between tools.

In my personal view, the wget parser approach seems to be the most user friendly one that works perhaps most closely to what I as a user would expect. So that’s how I went ahead and made curl work.
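To illustrate what that choice means in practice, here is a minimal sketch of wget-style token unquoting, where a backslash escapes the next character inside double-quotes. This is my simplified take on the scheme, not curl's actual parser:

```c
#include <string.h>

/* Unquote one .netrc token, wget-style: inside double-quotes a backslash
   escapes the next byte, so spaces and quotes can appear in a password.
   Returns a pointer to a static buffer holding the unquoted token.
   A sketch for illustration, not curl's actual parser. */
const char *unquote_token(const char *in)
{
  static char out[256];
  size_t o = 0;
  if(*in == '"') {
    in++;                           /* skip the opening quote */
    while(*in && *in != '"' && o + 1 < sizeof(out)) {
      if(*in == '\\' && in[1])
        in++;                       /* backslash escapes the next byte */
      out[o++] = *in++;
    }
  }
  else {
    /* unquoted token: copy up to the next whitespace */
    while(*in && *in != ' ' && *in != '\t' && *in != '\n' &&
          o + 1 < sizeof(out))
      out[o++] = *in++;
  }
  out[o] = '\0';
  return out;
}
```

With this scheme, the file content "\"\"llo" parses into the five-byte password ""llo, and "secret stuff" parses into a password with an embedded space.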

What to do

Users will of course be stuck with ancient versions for a long time and this incompatibility situation will remain for the foreseeable future. I can think of a few work-arounds users can do to cope:

  • Avoid space, tabs, newline and various quotes in passwords
  • Use separate .netrc files for separate tools
  • Provide passwords using other means than .netrc – with curl you can for example explore using --config instead

Future curl supports quoting

We are changing the curl parser somewhat in the name of compatibility with other tools (read wget) and curl will allow quoted strings in the way wget does it, starting in curl 7.84.0. While this change risks breaking a few command lines out there (for users who have leading double-quotes in their existing passwords), I think the change is worth doing in the name of compatibility and the new ability to use spaces in passwords.

A little polish after twenty-four years of not supporting spaces in user names or passwords.

Hopefully this will not hurt too many users.

Credits

Image by Anja-#pray for ukraine# #helping hands# stop the war from Pixabay

curl offers repeated transfers at slower pace

curl --rate is your new friend.

This option is for when you use curl to do many requests in a single command line, but you want curl to not do them as quickly as possible. You want curl to do them no more often than at a certain interval. This is a way to slow down the request frequency curl would otherwise possibly use. Tell curl to do the transfers no faster than…

This is a completely different and separate option from the transfer speed rate limit option --limit-rate that has existed for a long time.

A primary reason for using this option is when the server end has a certain capped acceptance rate or other cases where you know it makes no sense to do the requests faster than at a certain interval.

With this new option, you specify the maximum transfer frequency you allow curl to use – in number of transfer starts per time unit (sometimes called request rate) with the new --rate option.

Set the fastest allowed rate with --rate "N/U", where N is an integer and U is a time unit. Supported units are ‘s’ (second), ‘m’ (minute), ‘h’ (hour) and ‘d’ (day, as in a 24 hour unit). If no “/U” is provided, the default unit is transfers per hour.

For example, to make curl not do its requests faster than twice per minute, use --rate 2/m, but if you would rather have 25 per hour, use --rate 25/h.

If curl is provided several URLs and a single transfer completes faster than the allowed rate, curl will wait before it kicks off the next transfer in order to maintain the requested rate and not go faster. If curl is told to allow 10 requests per minute, it will not start the next request until 6 seconds have elapsed since the previous transfer started.
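The arithmetic behind this is simple: an "N/U" spec translates into a minimum interval between transfer starts. A sketch of that computation (my illustration, not curl's actual option parser):

```c
#include <stdlib.h>

/* Translate an "N/U" rate spec into the minimum number of milliseconds
   between transfer starts. Units: s, m, h, d; default is per hour.
   An illustrative sketch, not curl's actual parser. */
long interval_ms(const char *spec)
{
  char *unit;
  long n = strtol(spec, &unit, 10);
  long per_ms = 3600000L;           /* default unit: per hour */
  if(n <= 0)
    return -1;
  if(*unit == '/') {
    switch(unit[1]) {
    case 's': per_ms = 1000L; break;
    case 'm': per_ms = 60000L; break;
    case 'h': per_ms = 3600000L; break;
    case 'd': per_ms = 86400000L; break;
    default: return -1;
    }
  }
  return per_ms / n;                /* minimum gap between transfer starts */
}
```

So 10/m yields a 6000 millisecond gap, matching the six second wait described above.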

This option has no effect when --parallel is used. Primarily because you then ask for the transfers to happen simultaneously and we have not figured out how this option should affect such transfers!

This functionality uses a millisecond resolution timer internally. If the allowed frequency is set to more than 1000 transfer starts per second, it will instead run unrestricted.

When retrying transfers, enabled with --retry, the separate retry delay logic is used and not this setting.

Rate-limiting response headers

There is ongoing work to standardize HTTP response headers for the purpose of rate-limiting. (See RateLimit Header Fields for HTTP.) Using these headers, a server can tell the client what the maximum allowed transfer rate is and it can adapt.

This new command line option however, does nothing with any such new headers, but I think it would make sense to make a future version able to check for the rate-limit headers and, if opted-in, adapt to those instead of the frequency set by the user.

A challenge with these ratelimit headers vs the --rate command line option is of course that the response headers for this will return the rules for a given site/API, and curl might have been told to talk to many different sites which might all have different (or no) rates in their headers. Also, the command line option works for all protocols curl supports, not just HTTP(S).

Ship it

This feature is due in the pending curl 7.84.0 release.

Credits

Image by kewl from Pixabay

case insensitive string comparisons in C

Back in 2008, I had a revelation when it dawned on me that the POSIX function called strcasecmp() compares strings case insensitively, but locale dependent. Because of this, “file” and “FILE” are not actually a case insensitive match in the Turkish locale, while they match in most other locales. curl would sometimes fail in mysterious ways due to this. Mysterious to the users – now we know why.

Of course this behavior was no secret. The knowledge about this problem was widespread already then. It was just me who hadn’t realized this yet.

A custom replacement

To work around that problem for curl, we immediately implemented our own custom comparison replacement function that doesn’t care about locales. Internet protocols work the same way no matter which locale the user happens to prefer.

We did not go the POSIX route. The POSIX function for case insensitive string comparisons that ignores the locale is called strcasecmp_l() but that uses a special locale argument and also doesn’t exist on non-POSIX platforms.

curl has used its custom set of functions since 7.19.1, released in early November 2008.
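The replacement boils down to folding only ASCII A-Z and ignoring the locale entirely. A sketch in the same spirit as curl's internal functions, not the actual curl code:

```c
/* Locale-independent case-insensitive string comparison: only ASCII A-Z
   is folded to lower case, so "I" matches "i" no matter what the Turkish
   locale thinks. Returns 1 on match, 0 otherwise. A sketch in the spirit
   of curl's internal functions, not the actual curl code. */
static int raw_tolower(int c)
{
  return (c >= 'A' && c <= 'Z') ? c + ('a' - 'A') : c;
}

int casecompare(const char *a, const char *b)
{
  while(*a && *b) {
    if(raw_tolower((unsigned char)*a) != raw_tolower((unsigned char)*b))
      return 0;
    a++;
    b++;
  }
  return *a == *b;                  /* both strings must end together */
}
```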

OpenSSL 3.0.3

Fast forward to May 2022. OpenSSL released their version 3.0.3. In the change-log for this release we learned that they now offer public functions for case insensitive string comparisons. Whatdoyouknow! They too have learned about the Turkish locale. Apparently to the degree that they feel they need to offer those functions in their already super-huge API set. Oh well, that is certainly their choice.

I can relate since we too have such functions in libcurl, but I have always regretted that we added them to the API since comparing strings is not libcurl’s core business. We did wrong then and we still live with the consequences several decades later.

OpenSSL however took the POSIX route and based their implementation on strcasecmp_l() and use a global variable for the locale and an elaborate system to initialize that global and even a way to make things work if string comparisons are needed before that global variable is initialized etc.

This new system was complicated to the degree that it broke the library on several platforms, which curl users running Windows 7 figured out almost instantly. curl with OpenSSL 3.0.3 simply does not work on Windows 7 – at all.

Reasons for not exposing a string compare API

Libraries should only provide functions that are within their core objective. Not fluffy might-be-useful things. Reasons for this include:

  • It adds complexity for users: yet another function in the ever expanding set of function calls in the API.
  • It increases the documentation size even more and makes the important things harder to find somewhere in there.
  • It adds “attack surface” and areas where you can make errors and introduce security problems.
  • It means more work: additional functions whose API and ABI must be kept stable for all eternity, with developer time and effort spent on making sure they remain so.

Do a custom one for OpenSSL?

I think there is a software law that goes something like this:

eventually, all C libraries implement their own case insensitive string comparison functions

When I proposed they should implement their own custom function in discussions in one of the issues about this OpenSSL problem, the suggestion was shot down fairly quickly because of how hard it is to implement such a function that is as fast as the glibc version.

In my ears, that sounds like they prefer to stick with an overworked and complicated error-prone system, because an underlying function is faster, rather than going with simplicity and functionality at the price of slightly slower performance. In fairness, they say that case insensitive string comparisons are “6-7%” of the time spent in some (to me unknown) performance test they referred to. I have no way or intention to argue with that.

I think maybe they couldn’t really handle that idea from an outsider and they might just need a little more time to figure out the right way forward on their own. Then go with simple.

I am of course not in the best position to say how they should act on this. I’m just a spectator here. I may be completely wrong.

Update (May 23)

In a separate PR (4 days after this blog post went live), OpenSSL suddenly implemented their own custom function and it was deemed that it would not hurt performance noticeably. Merged on May 23. Almost like they followed my recommendation!

OpenSSL’s current tolower() implementation used in the comparison function is similar to curl’s old one so I suspect curl’s current function is a tad bit faster.

Custom vs glibc performance

glibc truly has really fast string comparison implementations, with optimized assembly versions for the common architectures. Versions written in plain C tend to be slower.

However, the API and way to use those functions to make them locale independent is horrific because of the way it forces the caller to provide a locale argument (which could be the “C” locale – the equivalent of no locale).

The curl custom function

That talk about the slowness of custom string functions got us discussing the topic in the curl IRC channel, where we bounced around ideas about what the curl function does and does not already do, what it could do, and how it compares against the glibc assembly version.

Also: the string comparisons in curl are certainly not as performance critical as they seem to be in OpenSSL, and while they are used a lot in curl, they are not used in the most performance critical transfer-data code paths.

Optimizations

Frank Gevaerts took the lead and after some rounds and discussions backed up with tests, he ended up with an updated function that is 1.6 to 1.7 times faster than before his work. We dropped non-ASCII support in curl a while ago, which also made this task more straightforward.

The two improvements:

  1. Use a lookup table for our own toupper() implementation instead of the previous simple condition + math.
  2. Better end of loop handling: return immediately on mismatch, and a minor touch-up of the final check when the loop goes all the way to the end.

Measurements

The glibc assembler versions are still faster than curl’s custom functions and the exact speed improvements the above mentioned changes provide will of course depend both on platform and the test set.

Ships in 7.84.0

The faster libcurl functions will ship in curl 7.84.0. I doubt anyone will notice the difference!

curl annual user survey 2022

For the eighth consecutive year, we are running the curl user survey. We usually kick it off during this time of the year.

Tell us how you use curl!

This is the best and frankly the only way the curl project has to get real feedback from people as to which features are used and which are not, as well as other details in the project that can help us navigate our future and decide what to do next. And what not to do next.

curl runs no ads, has no trackers, users don’t report anything back and the project has no website logs. We are in many aspects completely blind as to what users do with curl and what they think of it. Unless we ask. This is us asking.

How is curl working for you?

[Go to survey]

Please ask your curl-using friends to also stop by and tell us their views!

[The survey analysis]

Credits

Image by Andreas Breitling from Pixabay