
xCurl

It is often said that Imitation is the Sincerest Form of Flattery.

Also, remember libcrurl? That was the name of the thing Google once planned to do: reimplement the libcurl API on top of their Chrome networking library. Flattery or not, it never went anywhere.

The other day I received an email asking me about details regarding something called xCurl. Not having a clue what that was, a quick search soon had me enlightened.

xCurl is, using their own words, a Microsoft Game Development Kit (GDK) compliant implementation of the libCurl API.

A Frankencurl

The article I link to above describes how xCurl differs from libcurl:

xCurl differs from libCurl in that xCurl is implemented on top of WinHttp and automatically follows all the extra Microsoft Game Development Kit (GDK) requirements and best practices. While libCurl itself doesn’t meet the security requirements for the Microsoft Game Development Kit (GDK) console platform, use xCurl to maintain your same libCurl HTTP implementation across all platforms while changing only one header include and library linkage.

I don’t know anything about WinHttp, but since that is an HTTP API I can only presume that making libcurl use that instead of plain old sockets has to mean a pretty large surgery and code change. I also have no idea what the mentioned “security requirements” might be. I’m not very familiar with Windows internals nor with their game development setups.

The article then goes on to describe in some detail exactly which libcurl options work, which don’t, and what libcurl build options were used when xCurl was built. No DoH, no proxy support, no cookies etc.

The provided functionality is certainly a very stripped down and limited version of the libcurl API. A fun detail is that they quite bluntly just link to the libcurl API documentation to describe how xCurl works. It is easy and convenient of course, and it certainly “forces” xCurl to stick to the libcurl behavior.

With large invasive changes of this kind we can certainly hope that the team making it has invested time and spent serious effort on additional testing, both before release and ongoing.

Source code?

I have not been able to figure out how to download xCurl in any form, and since I can’t find the source code I cannot really get a grip on exactly how much and how invasively Microsoft has patched this. They have not been in touch or communicated about this work of theirs to anyone in the curl project.

Therefore, I also cannot say which libcurl version this is based on – as there is no telling of that on the page describing xCurl.

The email that triggered me to crawl down this rabbit hole included a copyright file that seems to originate from an xCurl package, and it includes the curl license. The curl license file shows the copyright year range at the top, and this file said:

Copyright (c) 1996 - 2020, Daniel Stenberg, daniel@haxx.se, and many contributors, see the THANKS file.

It might indicate that they use a libcurl from a few years back. Only might, because it is quite common among users of libcurl to “forget” (sometimes I’m sure on purpose) to update this copyright range even when they otherwise upgrade the source code. This makes the year range rather weak evidence of the actual age of the libcurl code this is based on.

Updates

curl (including libcurl) ships a new version at least once every eight weeks. We merge bugfixes at a rate of around three bugfixes per day. Keeping a heavily modified libcurl version in sync with the latest curl releases is hard work.

Of course, since they deliberately limit the scope of the functionality of their clone, lots of upstream changes in curl will not affect xCurl users.

License

curl is licensed under… the curl license! It is an MIT license that I was unclever enough to slightly modify many years ago. The changes are enough for organizations such as SPDX to consider it a separate one: curl. I normally still say that curl is MIT licensed because the changes are minuscule and do not change the spirit of the license.

The curl license of course allows Microsoft or anyone else to do this kind of stunt, and they do not even have to provide the source code for their changes or the final product, nor ask or tell anyone:

Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

I once picked this license for curl exactly because it allows this. Sure it might sometimes then make people do things in secret that they never contribute back, and we miss out on possible improvements because of that, but I think the more important property is that no company feels scared or restricted as to when and where they can use this code. A license designed for maximum adoption.

I have always had the belief that it is our relentless update scheme and never-ending flood of bugfixes that is what will keep users wanting to use the real thing and avoid maintaining long-running external patches. There will of course always be exceptions to that.

Follow-up

Forensics done by users who installed this indicate that this xCurl is based on libcurl 7.69.x. We removed a define from the headers in 7.70.0 (CURL_VERSION_ESNI) that this package still has. It also has the CURLOPT_MAIL_RCPT_ALLLOWFAILS define, added in 7.69.0.

curl 7.69.1 was released on March 11, 2020. It has 40 known vulnerabilities, and we have logged 3,566 bugfixes since then. Of course, not all of those affect xCurl.

URL parser performance

URLs are a dear subject of mine on this blog, as readers might have noticed.

“URL” is this mythical concept of a string that identifies a resource online, and yet there is no established standard for its syntax. There are instead multiple ones, one of which is on purpose “moving” so it never actually makes up its mind but instead keeps changing.

This then leads to there being basically no two URL parsers that treat URLs the same, to the extent that mixing parsers is considered a security risk.

The standards

The browsers have established their WHATWG URL Specification as a “living document”, saying how browsers should parse URLs, gradually taking steps away from the earlier established RFC 3986 and RFC 3987 attempts.

The WHATWG standard keeps changing and the world that tries to stick to RFC 3986 still sometimes needs to absorb and adapt to WHATWG influences in order to interoperate with the browser-centric part of the web. It leaves URL parsing and “URL syntax” everywhere in a sorry state.

curl

In the curl project we decided in 2018 to help mitigate the mixed URL parser problem by adding a URL parser API, so that applications that use libcurl can use the same parser for all their URL parsing needs and thus avoid the dangerous mixing part.

The libcurl API for this purpose is designed to let users parse URLs, to extract individual components, to set/change individual components and finally to extract a normalized URL if wanted. Including some URL encoding/decoding and IDN support.
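As a rough illustration (my own sketch, not lifted from curl’s documentation), this is how an application can use the API; the URL and the component choices are just examples:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *h = curl_url();
  if(curl_url_set(h, CURLUPART_URL,
                  "https://example.com/path?search=1", 0) == CURLUE_OK) {
    char *host = NULL;
    char *url = NULL;
    /* extract an individual component */
    if(curl_url_get(h, CURLUPART_HOST, &host, 0) == CURLUE_OK) {
      printf("host: %s\n", host);
      curl_free(host);
    }
    /* change a component, then get the full normalized URL back */
    curl_url_set(h, CURLUPART_SCHEME, "http", 0);
    if(curl_url_get(h, CURLUPART_URL, &url, 0) == CURLUE_OK) {
      printf("url: %s\n", url);
      curl_free(url);
    }
  }
  curl_url_cleanup(h);
  return 0;
}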

trurl

Thanks to the availability and functionality of the public libcurl URL API, we could build and ship the separate trurl tool earlier this year.

Ada

Some time ago I was made aware of an effort to (primarily) write a new URL parser for Node.js – although the parser is stand-alone and can be used by anyone else who wants to: the Ada URL Parser. The two primary developers behind this effort, Yagiz Nizipli and Daniel Lemire, figured out that Node.js does a large amount of URL parsing, so speeding up this parser alone would apparently have a general performance impact.

Ada is a C++ project designed to parse WHATWG URLs, and the first time I was in contact with Yagiz he of course mentioned how much faster their parser is compared to curl’s.

You can also see them reproduce and talk about these numbers in this Node.js conference presentation.

Benchmarks

Everyone who ever tried to write code faster than some other code has found themselves in a position where they need to compare. To benchmark one code set against the other. Benchmarking is an art that is close to statistics and marketing: very hard to do without letting your own biases or presumptions affect the outcome.

Speed vs the rest

After I first spoke with Yagiz, I did go back to the libcurl code to see what obvious mistakes I had made and what low hanging fruit there was to pick in order to speed things up a little. I found a few flaws that maybe made a minor difference, but in my view there are several other properties of the API that are actually more important than sheer speed:

  • non-breaking API and ABI
  • readable and maintainable code
  • sensible and consistent API
  • error codes that help users understand what the problem is

Of course, there is also the thing that if you first figure out how to parse a URL the fastest way, maybe you can work out a smoother API that works better with that parsing approach. That’s not how I went about it when creating the libcurl API.

If we can maintain those properties mentioned above, I still want the parser to run as fast as possible. There is no point in being slower than necessary.

URLs vs URLs

Ada parses WHATWG URLs and libcurl parses RFC 3986 URLs. They parse URLs differently and provide different feature sets. They are not interchangeable.

In Ada’s benchmarks they have ignored the parser differences. Throw the parsers against each other, and according to all their public data since early 2023 their parser is 7-8 times faster.

700% faster really?

So how on earth can you make such a simple thing as URL parsing 700% faster? It never sat right with me when they claimed those numbers but since I had not compared them myself I trusted them. After all, they should be fairly easy to compare and they seemed clueful enough.

Until recently, when I decided to reproduce their claims and see how much their numbers depend on their specific choices of URLs to parse. It taught me something.

Reproduce the numbers

In my tests, their parser is fast. It is clearly faster than the libcurl parser, and I too of course ignored the differences in parse results since they would not be comparable anyway.

In my tests on my development machine, Ada is 1.25 – 1.8 times faster than libcurl. There is no doubt Ada is faster, just far away from the enormous difference they claim. How come? Two things help inflate the difference:

  1. You use the input data that most favorably shows a difference
  2. You run the benchmark on hardware for which your parser has magic hardware acceleration

I run a decently modern 13th gen Intel Core i7-13700K CPU in my development machine. It’s really fast, especially on single-thread stuff like this. On my machine, the Ada parser can parse more URLs per second than even the Ada people themselves claim, which just tells us they used slower machines to test on. Nothing wrong with that.

The Ada parser has code that uses platform specific instructions in some environments, and the benchmark they decided to use when boasting about their parser was done on such a platform. An Apple M1 CPU to be specific. In most aspects except performance per watt, not a speed monster of a CPU.

In itself this is not wrong, but maybe a little misleading as this is far from clearly communicated.

I have a script, urlgen, that generates URLs in as many combinations as possible so that the parser’s every corner and angle are suitably exercised and verified. Many of those combinations are therefore illegal in subtle ways. This is the set of URLs I have mostly thrown at the curl parser, which then also might explain why this test data is the set that makes Ada least favorable (at 1.26x the libcurl speed). Again: their parser is faster, no doubt. I have not found a test case that does not show it running faster than libcurl’s parser.

A small part of the explanation of how they are faster is of course that they do not provide the result, the individual components, in their own separately allocated strings.

Here’s a separate detailed document on how I compared.

More mistakes

They also repeatedly insist curl does not handle International Domain Names (IDN) correctly, which I simply cannot understand and have not gotten any explanation for. curl has handled IDN since 2004. I’m guessing a mistake, an old bug or that they used a curl build without IDN support.

Size

I would think a primary argument against using Ada versus libcurl’s parser is its size and amount of code. Not that I believe there are many situations where users actually select between these two.

Ada header and source files are 22,774 lines of C++.

libcurl URL API header and source files are 2,103 lines of C.

Comparing the code sizes like this is a little unfair since Ada has its own IDN management code included, which libcurl does not, and that part comes with several huge tables and more.

Improving libcurl?

I am sure there is more that can be done to speed up the libcurl URL parser, but there is also the case of diminishing returns. I think it is pretty fast already. On Ada’s test case using 100K URLs from Wikipedia, libcurl parses them at an average of 178 nanoseconds per URL on my machine. More than 5.6 million real world URLs parsed per second per core.

This, while also storing each URL component in a separate allocation after each parse, and also returning an error code that helps identify the problem if the URL fails to parse. With an established and well-documented API that has been working since 2018.
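For the curious, here is roughly the kind of loop such a number can come from (a sketch; the sample URLs and iteration count are placeholders, and the handle create/cleanup overhead is included here, which a stricter benchmark would separate out):

#include <stdio.h>
#include <time.h>
#include <curl/curl.h>

int main(void)
{
  static const char *urls[] = {
    "https://example.com/index.html",
    "https://example.org/a/b/c?q=1#frag",
    "http://user@example.net:8080/path"
  };
  const size_t nurls = sizeof(urls) / sizeof(urls[0]);
  const size_t rounds = 1000000;
  struct timespec start, end;
  size_t i;
  double ns;

  clock_gettime(CLOCK_MONOTONIC, &start);
  for(i = 0; i < rounds; i++) {
    CURLU *h = curl_url();
    curl_url_set(h, CURLUPART_URL, urls[i % nurls], 0);
    curl_url_cleanup(h);
  }
  clock_gettime(CLOCK_MONOTONIC, &end);

  /* average wall-clock nanoseconds per parsed URL */
  ns = (end.tv_sec - start.tv_sec) * 1e9 +
       (end.tv_nsec - start.tv_nsec);
  printf("%.1f ns per URL\n", ns / (double)rounds);
  return 0;
}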

The hardware specific magic Ada uses can possibly be used by libcurl too. Maybe someone can try that out one day.

I think we have other areas in libcurl where work and effort are better spent right now.

curl on 100 operating systems

In a recent pull-request for curl, I clarified to the contributor that their change would only be accepted and merged into curl’s git code repository if they made sure that the change was done in a way so that it did not break (testing) for and on legacy platforms.

In that thread, I could almost feel how the contributor squirmed as this requirement made their work harder. Not by much, but harder no less.

I insisted that since curl at that point already supported (and still does) 32 bit time_t types, changes in this area should maintain that functionality. Even if 32 bit time_t is of limited use already and will be even more limited as we rush toward the year 2038. Quite a large number of legacy platforms are still stuck on the 32 bit version.

Why do I care so much about old legacy crap?

Nobody asked me exactly that using those words. I am paraphrasing what I suspect some contributors think at times when I ask them to do additional changes to pull requests. To make their changes complete.

It is not so much about the legacy systems. It is much more about sticking to our promises and not breaking things if we don’t have to.

Partly stability and promises

In the curl project we work relentlessly to maintain ABI and API stability and compatibility. You can upgrade your libcurl-using application from the mid 2000s to the latest libcurl – without recompiling the application – and it still works the same. You can run your unmodified scripts you wrote in the early 2000s with the latest curl release today – and it is almost guaranteed that they work exactly the same way as they did back then.

This is more than a party trick and a snappy line to use in the sales brochures.

This is the very core of curl and libcurl and a foundational principle of what we ship: you can trust us. You can lean on us. Your application’s Internet transfer needs are in safe hands and you can be sure that even if we occasionally ship bugs, we provide updates that you can switch over to without the normal kinds of upgrade pains software so often comes with. In a never-ending fashion.

Also of course. Why break something that is already working fine?

Partly user numbers don’t matter

Users do matter, but what I mean in this subtitle is that the number of users on a particular platform is rarely a reason or motivator for working on supporting it and making things work there. That is not how things tend to work.

What matters is who is doing the work and if the work is getting done. If we have contributors around that keep making sure curl works on a certain platform, then curl will keep running on that platform even if they are said to have very few users. Those users don’t maintain the curl code. Maintainers do.

A platform does not truly die in curl land until necessary code for it is no longer maintained – and in many cases the unmaintained code can remain functional for years. It might also take a long time until we actually find out that curl no longer works on a particular platform.

On the opposite side, it can be hard to maintain a platform even if it has a large amount of users, if there are not enough maintainers around who are willing and knowledgeable enough to work on issues specific to that platform.

Partly this is how curl can be everywhere

Precisely because we keep this strong focus on building, working and running everywhere, sometimes even with rather funny and weird configurations, curl and libcurl have ended up in so many different operating systems, run on so many CPU architectures and are installed in so many things. We make sure it builds and runs. And keeps doing so.

And really. Countless users and companies insist on sticking to ancient, niche or legacy platforms and there is nothing we can do about that. If we don’t have to break functionality for them, having them stick to relying on curl for transfers is oftentimes much better security-wise than almost all other (often homegrown) alternatives.

We still deprecate things

In spite of the fancy words I just used above, we do remove support for things every now and then in curl. Mostly in terms of dropping support for specific 3rd party libraries as they dwindle away and fall off like leaves in the fall, but also in other areas.

The key is to deprecate things slowly, with care and with open communication. This ensures that everyone (who wants to know) is aware that it is happening and can prepare, or object if the proposal seems unreasonable.

If no user can detect a changed behavior, then it is not changed.

curl is made for its users. If users want it to keep doing something, then it shall do so.

The world changes

Internet protocols and versions come and go over time.

If you bring up your curl command lines from 2002, most of them probably fail to work. Not because of curl, but because the host names and the URLs used back then no longer work.

A huge reason why a curl command line written in 2002 will not work today exactly as it was written back then is the transition from HTTP to HTTPS that has happened since then. If the site actually used TLS (or SSL) back in 2002 (which certainly was not the norm), it used a TLS protocol version that nowadays is deemed insecure and modern TLS libraries (and curl) will refuse to connect to it if it has not been updated.

That is also the reason that if you actually have a saved curl executable from 2002 somewhere and manage to run that today, it will fail to connect to modern HTTPS sites. Because of changes in the transport protocol layers, not because of changes in curl.

Credits

Top image by Sepp from Pixabay

Discussion

Hacker news

curl coasters

Durable, sturdy, washable and good-looking curl cheat sheet coasters. What more can you possibly need?

Buy them here

Tim Westermann is the creator who makes, sells and distributes these beauties. My only involvement has been to help Tim with the textual content on them.

There is no money going to me or the curl project as part of this setup. But you might become a better curl user while saving your desk at the same time.

I have a set myself. I love them. Note that they are printed on both sides – with different content.

They do not only look like PCBs, they are high-quality printed circuit boards (PCBs).

Dimensions: 10 cm x 10 cm [3.94″ x 3.94″]
Thickness: 1.6mm [0.04″]
Weight: 30g

mastering libcurl

On November 16 2023, I will do a multi-hour tutorial video on how to use libcurl. How to master it. As a follow-up to my previous video class called mastering the curl command line (at 13,000+ views right now).

This event will run as a combined live-stream and webinar. You can opt to join via Zoom or Twitch. Zoom users can ask questions using voice, Twitch viewers can do the same using the chat. To attend via Zoom you need to register; for Twitch you can just show up.

It starts at 18:00 CET (09:00 PST, 17:00 UTC)

This session is part one of the planned two-part series. The amount of content is simply too much for me to deliver in a single sitting with intact quality. The second part will happen the following Monday, on November 20. Same time.

Both sessions will be recorded and will be made available for browsing after the fact.

At the time of me putting this blog post up, the presentation and therefore the agenda are not complete. I have also not yet figured out exactly how to do the split between the two episodes. Below you will find the planned topics that will be covered over the two episodes. Details will of course change in the final presentation.

The idea is that very little about developing with libcurl should be left out. This is the most thorough, most advanced, most in-depth libcurl video you ever saw. And it will be chock-full with source code examples. All examples that are shown in the video are also provided stand-alone for easy browsing, copy and paste and later reading in a dedicated mastering libcurl GitHub repository.
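To give a flavor of the level the examples start at, here is roughly the kind of minimal easy-interface program the early parts build from (the URL is just a placeholder):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLcode res;
  CURL *curl;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    res = curl_easy_perform(curl);  /* blocks until the transfer is done */
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}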

Part one

The slides for part one.

Part two

The slides for part two.

Mastering libcurl

The project

  • Reminders about the curl project

getting it

  • installing
  • building
  • debugging
  • free support
  • paid support

API and ABI

  • compatibility
  • versions
  • the API is for C
  • header files
  • compiling libcurl programs
  • documentation

Architecture

  • C89
  • backends
  • everything is non-blocking

API fundamentals

  • content ignorant / URLs / callbacks
  • basic by default
  • global init
  • easy handles
  • easy options
  • curl_easy_setopt
  • curl_easy_perform
  • curl_easy_cleanup
  • write callback
  • multi handle
  • multi perform
  • curl_multi_info_read
  • curl_multi_cleanup
  • a multi example
  • caches
  • curl_multi_socket_action
  • curl_easy_getinfo

Puppeteering

  • verbose
    debug function
    tracing
  • curl_version / curl_version_info
  • persistent connections
  • multiplexing
  • Downloads
  • Storing downloads
  • Compression
  • Multiple downloads
  • Maximum file size
  • Resuming and ranges
  • Uploads
  • Multiple uploads
  • Transfer controls
  • Stop slow transfers
  • Rate limiting
  • Connections
  • Name resolve tricks
  • Connection timeout
  • Network interface
  • Local port number
  • Keep alive
  • Timeouts
  • Authentication
  • .netrc
  • return codes
  • --libcurl
  • post transfer meta-data
  • caches
  • some words on threads
  • error handling with libcurl

Share API

  • sharing data between easy handles

TLS

  • enable TLS
  • ciphers
  • verifying server certificates
    custom checks
  • client certificates
  • on TLS backends
  • SSLKEYLOGFILE

Proxies

  • Proxy type
  • HTTP proxy
  • SOCKS proxy (tor)
  • Authentication
  • HTTPS proxy
  • Proxy environment variables
  • Proxy headers

HTTP

  • Ranges
  • HTTP versions
  • Conditionals
  • HTTP POST
    data with callback
  • Multipart formpost
  • Redirects
  • Modify the HTTP request
  • HTTP PUT
  • Cookies
  • Alternative Services
  • HSTS
  • HTTP/2
  • HTTP/3

HTTP header API

  • get specific header field after transfer
  • iterate over many headers

URL API

  • Parse a URL
  • extract components
  • update components
  • URL encoding/decoding
  • IDN encoding/decoding
  • redirects

WebSocket

  • just a quickie, I did a separate websocket video recently
    https://youtu.be/NLIhd0wYO24

3,000 contributors

Thank you everyone who has helped out in making curl into what it is today.

We make an effort to note the names of and say thanks to every single individual who ever reported bugs, fixed problems, ran tests, wrote code, polished the website, spell-fixed documentation, assisted debug sessions, helped interpret protocol standards, reported security problems or co-authored code etc.

In 2005 I decided to go back through the project history and make sure all names that had been involved up to that point in time would also be mentioned in the THANKS file. This is the reason for the visible bump in the graph.

Since then, we add the names of all the helpers. We say thanks and give credits in commit messages and we have scripts to help us collect them and mention them as contributors. We probably miss occasional ones but I hope and believe that most of all the awesome people that ever helped us are recorded accordingly and given credit.

We are nothing without our dear contributors.

Today, this list of people we are thankful for, reached 3,000 entries when Alex Klyubin’s pull request was merged. (The list of names on the website is synced every once in a while so it actually typically shows slightly fewer people than we have logged in git.)

The team behind curl. 3,000 persons over almost 27 years.

We reached 2,000 contributors less than four years ago, in October 2019, so we have added about 250 new names per year to this list the last four years.

We reached 1,000 contributors in March 2013, meaning from then it took about 6.5 years to get the next 1,000. Roughly 150 new names per year.

The first 1,000 took over 16 years to reach.

You can see it quite clearly in the graph above as well: the rate at which we get new contributors to help out in the project is increasing.

curl has existed for 9338 days. That equals one new contributor every 3.1 days for over 25 years, on average.

Non-code or code alike

It is oftentimes said that Open Source projects in general have a hard time properly recognizing and appreciating non-code contributions. In the curl project, we try hard to not differentiate between help and help.

If a person helps out to take the curl project further, even if just by a little, we say thank you very much and add their name to the list. We need and are grateful for code and non-code contributions alike. One of the most important parts of the curl project is the documentation, and that is clearly not code.

About the contributors

We cannot say much about the contributors because in an effort to lower the bars and reduce friction, we also do not ask them for details. We only know the name they provide the help under, which could be a pseudonym and in some cases clearly is a nickname. We do not know where in the world they originate from or which company they work for, we can’t tell their gender, skin color, religion or other human “properties”. And we don’t care much about those specifics. Our job is to bring curl forward.

This said: there is a risk that we have added the same contributors twice or more, if they have helped out using several different names. That is just not something we can detect or avoid. Unless the contributor themself informs us.

There is also a risk that some of the persons that contributed to curl are not nice people or that they work for reprehensible organizations. We focus on the quality of their submissions and if they hide who they are, we will never know if they actually are animal-hurting nazis hiding behind pseudonyms.

(Lack of) Diversity

As far as I can tell, we have a lousy contributor diversity. I am pretty sure the majority of all help come from old white middle-class western men. Like myself.

I cannot fully know this for sure because I only actually know a small fraction of all the contributors, but out of the ones I have met this is true and I believe I have met or at least communicated with the ones who have done the vast majority of all the changes.

I would much rather see us have many more contributors from other parts of the world, female, and with non-christian backgrounds, but I cannot control who comes to us. I can only do my best to take care of all and appreciate every contribution without discrimination.

Commit authors

This day when we reach 3,000 contributors, we also count 1,201 commit authors. Persons with their names as authors of at least one commit in the curl source repository. 40% of the contributors are committers. Almost 65% of the committers only ever committed once.

3,000 visualized

The top image of this blog post is a photo from FOSDEM a few years back when I did a presentation in front of a packed room with some 1,400 attendees. The largest room at FOSDEM is said to fit 1,415 persons. So not even two such giant rooms would be enough to hold all the curl contributors if it had been possible to get them all in one place…

You too?

You too can be a curl contributor. We are friendly. It is not hard. There is lots to do. Your contributions can end up getting used by literally billions of humans.

How I made a heap overflow in curl

In association with the release of curl 8.4.0, we publish a security advisory and all the details for CVE-2023-38545. This problem is the worst security problem found in curl in a long time. We set it to severity HIGH.

While the advisory contains all the necessary details, I figured I would use a few additional words and expand the explanations for anyone who cares to understand how this flaw works and how it happened.

Background

curl has supported SOCKS5 since August 2002.

SOCKS5 is a proxy protocol. It is a rather simple protocol for setting up network communication via a dedicated “middle man”. The protocol is for example typically used when setting up communication to get done over Tor but also for accessing Internet from within organizations and companies.

SOCKS5 has two different host name resolver modes. Either the client resolves the host name locally and passes on the destination as a resolved address, or the client passes on the entire host name to the proxy and lets the proxy itself resolve the host remotely.

In early 2020 I assigned myself an old long-standing curl issue: to convert the function that connects to a SOCKS5 proxy from a blocking call into a non-blocking state machine. The blocking behavior is for example very noticeable when an application performs a large number of parallel transfers that all go over SOCKS5.

On February 14 2020 I landed the main commit for this change in master. It shipped in 7.69.0 as the first release featuring this enhancement. And by extension also the first release vulnerable to CVE-2023-38545.

A less wise decision

The state machine is called repeatedly when there is more network data to work on until it is done: when the connection is established.

At the top of the function I made this:

bool socks5_resolve_local =
  (proxytype == CURLPROXY_SOCKS5) ? TRUE : FALSE;

This boolean variable holds information about whether curl should resolve the host or just pass on the name to the proxy. This assignment is done at the top and thus for every invocation while the state machine is running.

The state machine starts in the INIT state, in which the main bug for today’s story time lies. The flaw is inherited from the function from before it was turned into a state-machine.

if(!socks5_resolve_local && hostname_len > 255) {
  socks5_resolve_local = TRUE;
}

SOCKS5 allows the host name field to be up to 255 bytes long, meaning a SOCKS5 proxy cannot resolve a longer host name. On finding a too long host name, the curl code makes the bad decision to instead switch over to local resolve mode. It sets the local variable for that purpose to TRUE. (This condition is a leftover from code added ages ago. I think it was downright wrong to switch mode like this: since the user asked for remote resolve, curl should stick to that or fail. It is not even likely to work to just switch, even in “good” situations.)

The state machine then switches state and continues.

The issue triggers

If the state machine cannot continue because it has no more data to work with, like if the SOCKS5 server is not fast enough, it returns. It gets called again when there is data available to continue working on. Moments later.

But now, look at the local variable socks5_resolve_local at the top of the function again. It again gets set to a value depending on proxy mode – not remembering the changed value because of the too long host name. Now it again holds a value that says the proxy should resolve the name remotely. But the name is too long…

curl builds a protocol frame in a memory buffer, and it copies the destination to that buffer. Since the code wrongly thinks it should pass on the host name, even though the host name is too long to fit, the memory copy can overflow the allocated target buffer. Of course depending on the length of the host name and the size of the target buffer.

Target buffer

The allocated memory area in which curl builds the protocol frame to send to the proxy is the same as the regular download buffer. It is simply reused for this purpose before the transfer starts. The download buffer is 16kB by default but can also be set to use a different size at the request of the application. The curl tool sets the buffer size to 100kB. The minimum accepted size is 1024 bytes.

If the buffer size is set smaller than 65541 bytes this overflow is possible. The smaller the size, the larger the possible overflow.
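For reference, this is the knob an application turns to shrink that buffer (a sketch; the 4kB value is only an example of a size in the vulnerable range):

CURL *curl = curl_easy_init();
if(curl) {
  /* use a 4kB download buffer instead of the 16kB default; in a
     vulnerable libcurl, any size below 65541 bytes made the overflow
     possible and a smaller buffer meant a larger possible overflow */
  curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 4096L);
}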

Host name length

A host name in a URL has no real size limit, but libcurl’s URL parser refuses to accept names longer than 65535 bytes. DNS only accepts host names up to 253 bytes. So, a legitimate name that is longer than 253 bytes is unusual. A real name that is longer than 1024 is virtually unheard of.

Thus it pretty much requires a malicious actor to feed a super-long host name into this equation to trigger this flaw. To use it in an attack. The name needs to be longer than the target buffer to make the memory copy overwrite heap memory.

Host name contents

The host name field of a URL can only contain a subset of octets. A range of byte values are plain invalid and would cause the URL parser to reject it. If libcurl is built to use an IDN library, that one might also reject invalid host names. This bug can therefore only trigger if the right set of bytes are used in the host name.

Attack

An attacker that controls an HTTPS server that a libcurl-using client accesses over a SOCKS5 proxy (using the proxy-resolver-mode) can make it return a crafted redirect to the application via an HTTP 30x response.

Such a 30x redirect would then contain a Location: header in the style of:

Location: https://aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/

… where the host name is longer than 16kB and up to 64kB

If the libcurl-using client has automatic redirect-following enabled, and the SOCKS5 proxy is “slow enough” to trigger the local variable bug, it will copy the crafted host name into the too small allocated buffer and into the adjacent heap memory.

A heap buffer overflow has then occurred.

The fix

curl should not switch mode from remote resolve to local resolve due to a too long host name. It should rather return an error, and starting in curl 8.4.0, it does.
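In spirit, the corrected check does something like this instead (a simplified sketch, not the literal upstream patch; the exact error plumbing in the real code differs):

if(!socks5_resolve_local && hostname_len > 255) {
  /* the proxy cannot resolve a name this long: fail the handshake
     instead of silently switching to local name resolving */
  return CURLPX_LONG_HOSTNAME;
}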

We now also have a dedicated test case for this scenario.

Credits

This issue was reported, analyzed and patched by Jay Satiro.

This is the largest curl bug-bounty paid to date: 4,660 USD (plus 1,165 USD to the curl project, as per IBB policy)

Classic related Dilbert strip. The original URL seems to no longer be available.

Rewrite it?

Yes, this family of flaws would have been impossible if curl had been written in a memory-safe language instead of C, but porting curl to another language is not on the agenda. I am sure the news about this vulnerability will trigger a new flood of questions about and calls for that and I can sigh, roll my eyes and try to answer this again.

The only approach in that direction I consider viable and sensible is to:

  1. allow, use and support more dependencies written in memory-safe languages and
  2. potentially and gradually replace parts of curl piecemeal, like with the introduction of hyper.

Such development is however currently happening at a near glacial speed and shows with painful clarity the challenges involved. curl will remain written in C for the foreseeable future.

Everyone not happy about this is of course welcome to roll up their sleeves and get working.

Including the latest two CVEs reported for curl 8.4.0, the accumulated total says that 41% of the security vulnerabilities ever found in curl would likely not have happened had we used a memory-safe language. But also: the Rust language was not even a possibility for practical use for this purpose during the time in which we introduced maybe the first 80% of the C related problems.

It burns in my soul

Reading the code now, it is impossible not to see the bug. Yes, it truly aches having to accept the fact that I made this mistake without noticing and that the flaw then remained undiscovered in the code for 1315 days. I apologize. I am but a human.

It could have been detected with a better set of tests. We repeatedly run several static code analyzers on the code and none of them have spotted any problems in this function.

In hindsight, shipping a heap overflow in code installed in over twenty billion instances is not an experience I would recommend.

Behind the scenes

To learn how this flaw was reported and how we worked on the issue before it was made public, go check the HackerOne report.

On Scott Adams

I use his “I’m going to write myself a minivan”-strip above because it’s a classic. Adams himself has turned out to be a questionable person with questionable opinions and I do not condone or agree with what he says.

curl 8.4.0

We cut the release cycle short and decided to ship this release now rather than later because of the heap overflow issue we found.

Release presentation

Numbers

the 252nd release
3 changes
28 days (total: 9,336)

136 bug-fixes (total: 9,551)
216 commits (total: 31,158)
1 new public libcurl function (total: 93)
0 new curl_easy_setopt() option (total: 303)

1 new curl command line option (total: 258)
46 contributors, 20 new (total: 2,996)
21 authors, 7 new (total: 1,200)
2 security fixes (total: 148)

Security

SOCKS5 heap buffer overflow (HIGH)

(CVE-2023-38545) This flaw makes curl overflow a heap based buffer in the SOCKS5 proxy handshake.

See also my separate detailed explainer about CVE-2023-38545.

cookie injection with none file (LOW)

(CVE-2023-38546) This flaw allows an attacker to insert cookies at will into a running program using libcurl, if a specific series of conditions is met and the cookies are put in a file called “none” in the application’s current directory.

Changes

IPFS protocols via HTTP gateway

The curl tool now supports IPFS URLs via a gateway. I emphasize that it is the tool, because this support is not in libcurl. The URL needs to be a correct IPFS URL, but curl only works with it if you provide an IPFS gateway; it has no actual native IPFS implementation. You want to read the new IPFS section on the curl website for details.

curl_multi_get_handles()

This is a new and very simple function added to the libcurl API: it returns all the easy handles that were previously added to the multi handle.
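A quick sketch of how it can be used, assuming multi is a populated multi handle:

CURL **list = curl_multi_get_handles(multi);
if(list) {
  int i;
  for(i = 0; list[i]; i++) {
    /* the array is NULL-terminated; poke at each easy handle here */
    curl_easy_setopt(list[i], CURLOPT_VERBOSE, 1L);
  }
  curl_free(list);  /* the returned array is allocated and must be freed */
}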

dropped support for legacy mingw.org toolchain

The legacy mingw version is deprecated and by dropping support for this we can simplify code a little.

Bugfixes

Some of the things we fixed in this release are…

made cmake more aligned with configure

Numerous smaller and larger fixes went in this cycle to make sure the cmake and configure configs are more aligned and create more similar default builds.

expire the timeout when trying next IP

Iterating over IP addresses when connecting could accidentally introduce delays, making the process take longer than necessary.

remove unnecessary cookie struct fields

curl now keeps much less data in memory per cookie

update curl man page references

All curl man pages got their references updated and they are now verified and checked in tests to remain accurate and well formatted.

use per-request counter to check too large http headers

The check that prevents too large accumulated HTTP response headers actually used the wrong counter so it kicked in too early.

aws-sigv4: fix sorting with empty parts

Getting this authentication method to work in all cases turns out to be a real adventure and in this release we fix yet some minor issues.

let the max file size option stop too big transfers

Up until now, the maximum file size option only stopped transfers before they even began, and only if libcurl knew up front that the file size was too big. Starting now, it will also stop ongoing transfers if they reach the maximum limit. This should help users avoid unwanted surprises.
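In libcurl terms, this is the option involved (a sketch; the 10 MB limit is an arbitrary example):

/* give up if the resource turns out to be larger than 10 MB, whether
   libcurl knows the size up front or only finds out mid-transfer */
curl_easy_setopt(curl, CURLOPT_MAXFILESIZE_LARGE,
                 (curl_off_t)10 * 1024 * 1024);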

lib: use wrapper for curl_mime_data fseek callback

Rewinding files when doing multipart form-based transfers on 32 bit ARM using the legacy libcurl curl_formadd API did not work because of data size incompatibilities. It took some work to find and understand, as it still worked fine on 32 bit x86 for example!

libssh: cap SFTP packet size sent

The libssh library mostly passes on the data with the same size libcurl passes to it, it turns out. That is not compatible with the SFTP protocol so in order to make libcurl work better, it now caps how much data it can send in a single libssh send call. It probably makes SFTP uploads much slower.

misc: better random boundary separators

The mime boundaries used for multipart formposts now use more random bits than before. Up from 64 to 130 bits. It now produces strings using alphanumerical characters instead of just hex.

quic: set ciphers/curves like for TLS

The same style of support for setting TLS 1.3 ciphers and curves as for regular TLS was added to the QUIC code.

http2: retry on GOAWAY

Improved handling of GOAWAY when wanting to reuse a connection and then move on to use another.

fall back to http/https proxy env-variable if ws/wss not set

When using one of the WebSocket schemes, curl will now fall back and try the http_proxy and https_proxy environment variables if ws_proxy or wss_proxy is not set.

accept –expand on file names too

The variable --expand functionality did not work for command line options that accept file names, such as --output. It does now.

Next

We have synced the coming release cycles on this release. The next one is thus planned to happen in exactly eight weeks time. On December 6, 2023.