Me, curl and Dagens Nyheter

In the afternoon of October 1st 2019, I had the pleasure of welcoming Linus Larsson and Jonas Lindkvist into my home in Huddinge, south of Stockholm, Sweden. My home is also my office as I work full-time from home. These two fine gentlemen work for Sweden’s largest morning newspaper, Dagens Nyheter, which boasts 850,000 daily readers.

Jonas took what felt like a hundred photos of me, most of them while I sat in my office chair at my regular desk where my primary development computers and environment are, as you can see in the two photos on this blog post. I will admit that I minimized most of my regular windows from the screens so that I wouldn’t accidentally reveal something personal or sensitive, but on the plus side, if you pay close attention you can see my Simon Stålenhag desktop backgrounds better!

Linus and I then sat down and talked. We talked about my background, how curl was created and how it has “taken off” to an extent I of course could never even dream about. Today, I estimate that curl runs in perhaps ten billion installations. A truly mind-boggling – and humbling – number.

The interview/chat lasted about an hour. I figured we had touched on most relevant areas and Linus seemed content with the material and input he’d gotten from me. As the topic and article weren’t really time-sensitive or tied to anything in particular, Linus explained that he didn’t know exactly when it would get published, and that didn’t bother me. I figured it would be cool whenever!

On the morning of October 14 I collected the paper from my mailbox (because yes, I still do have a paper version newspaper arriving at my home every morning) and boom, I spotted an interesting little note in the lower right hand corner.

You can see the (Swedish-speaking) front-page blurb on the photo on the right.

Världens största programmerare du aldrig hört talas om – “The world’s biggest programmer you’ve never heard of” (links to the dn.se site for the Swedish article, possibly behind a paywall)

By a neat coincidence, this was the same morning I delivered a keynote at Castor Software Days at KTH in Stockholm titled “curl, a hobby project that conquered the world” (slides) – which by the way was received very well, and I got a lot of positive comments and interesting conversations afterwards. And lots of people of course noticed the timing with the DN article!

Daniel in front of an audience at KTH, Stockholm.

The DN article reaches “ordinary” people in ways I’m not used to, so of course it made more of my non-techie friends suddenly realize a little more of what I do. I think it captures my “journey” and my approach to life and curl fairly well.

I’ll probably extend this blog post with links/photos of the actual DN articles at a later point once I feel I don’t risk undermining DN’s business by doing so.

(photos by Jonas Lindkvist, Dagens Nyheter, used in the online article about me)

curl 7.66.0 – the parallel HTTP/3 future is here

I personally have not done this many commits to curl in a single month (August 2019) for over three years. This increased activity is of course primarily due to the merge of and work with the HTTP/3 code. And yet, that is still only in its infancy…

Download curl here.

Numbers

the 185th release
6 changes
54 days (total: 7,845)
81 bug fixes (total: 5,347)
214 commits (total: 24,719)
1 new public libcurl function (total: 81)
1 new curl_easy_setopt() option (total: 269)
4 new curl command line options (total: 225)
46 contributors, 23 new (total: 2,014)
29 authors, 14 new (total: 718)
2 security fixes (total: 92)
450 USD paid in Bug Bounties

Two security advisories

TFTP small blocksize heap buffer overflow

(CVE-2019-5482) If you told curl to do TFTP transfers using a smaller-than-default “blocksize” (the default being 512), curl could overflow a heap buffer used for the protocol exchange. Rewarded 250 USD from the curl bug bounty.

FTP-KRB double-free

(CVE-2019-5481) If you used FTP-kerberos with curl and the server maliciously or mistakenly responded with an overly large encrypted block, curl could end up doing a double-free in that exit path. This would happen in applications where allocating a full 32-bit max value (up to 4GB) is a problem. Rewarded 200 USD from the curl bug bounty.

Changes

The new features in 7.66.0 are…

HTTP/3

This experimental feature is disabled by default but can be enabled and works (by some definition of “works”). Daniel went through “HTTP/3 in curl” in a video from a few weeks ago.

Parallel transfers

You can now do parallel transfers with the curl tool’s new -Z / --parallel option. This is a huge feature that may alter a lot of use cases going forward!

Retry-after

There’s a standard HTTP response header, Retry-After:, that some servers return when they can’t or won’t respond right now, indicating after how many seconds, or at what point in the future, the request might be fulfilled. libcurl can now return that number easily and curl’s --retry option makes use of it (if present).
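For libcurl users, the number is exposed via the new CURLINFO_RETRY_AFTER info (per the 7.66.0 documentation). A minimal sketch; the URL is illustrative and error checking is omitted:

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_off_t wait = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);

    /* seconds the server asked us to wait; 0 if no Retry-After: arrived */
    curl_easy_getinfo(curl, CURLINFO_RETRY_AFTER, &wait);
    printf("retry after: %ld seconds\n", (long)wait);

    curl_easy_cleanup(curl);
  }
  return 0;
}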

curl_multi_poll

curl_multi_poll is a new function that is very similar to curl_multi_wait, but with one major benefit: it solves the problem of what applications should do on the occasions when libcurl has no file descriptor at all to wait for. That has been a long-standing and perhaps far too little-known issue.
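A minimal transfer-loop sketch, assuming multi is a CURLM * with one or more easy handles already added:

int still_running = 1;
while(still_running) {
  int numfds = 0;
  curl_multi_perform(multi, &still_running);
  /* wait up to 1000 ms for socket activity; crucially, this sleeps even
     when libcurl momentarily has no file descriptor to wait for, where
     curl_multi_wait could return immediately and cause a busy-loop */
  curl_multi_poll(multi, NULL, 0, 1000, &numfds);
}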

SASL authzid

When using SASL authentication, curl and libcurl can now provide the authzid field as well!
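On the command line this is the new --sasl-authzid option; for libcurl it is the new CURLOPT_SASL_AUTHZID string option. A hedged sketch where the names are purely illustrative:

/* authenticate with these credentials (the authcid)... */
curl_easy_setopt(curl, CURLOPT_USERPWD, "alice:secret");
/* ...but ask to act on behalf of this identity (the authzid) */
curl_easy_setopt(curl, CURLOPT_SASL_AUTHZID, "bob");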

Bug-fixes

Some interesting bug-fixes included in this release…

.netrc and .curlrc on Windows

Starting now, curl and libcurl will check for and use the dot-prefixed versions of these files even on Windows, and only fall back to checking for and using the underscore-prefixed versions for compatibility if the dotted one doesn’t exist. This unifies curl’s behavior across platforms.

asyn-thread: create a socketpair to wait on

With this perhaps innocuous-sounding change, libcurl on Linux and other Unix systems will now provide a file descriptor for the application to wait on while name resolving is done in a background thread. This lets applications know better when to call libcurl again and avoids having to blindly wait and retry. A performance gain.

Credentials in URL when using HTTP proxy

We found and fixed a regression that made curl not use credentials properly from the URL when doing multi-stage authentication (like HTTP Digest) with a proxy.

Move code into vssh for SSH backends

A mostly janitorial fix that abstracts away more of the SSH-using code so that it doesn’t need to know which particular SSH backend is being used, while at the same time making it easier to write and provide new SSH backends in the future. I’m personally working (a little slowly) on one, to be talked about at a later point.

Disable HTTP/0.9 by default

If you want libcurl to accept and deliver HTTP/0.9 responses to your application, you need to tell it to do so. Starting in this version, curl considers such responses invalid HTTP by default.
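For applications that still need to accept such ancient responses, a one-line sketch using the existing CURLOPT_HTTP09_ALLOWED option (the curl tool’s equivalent is the --http0.9 flag):

/* opt back in to accepting HTTP/0.9 responses */
curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L);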

alt-svc improvements

We introduced alt-svc support a while ago, but as it is marked experimental and nobody felt a strong need to use it, it clearly hasn’t been used or tested much in real life. When we worked on using alt-svc to bootstrap into HTTP/3, we found and fixed a whole range of little issues with the alt-svc support and it is now in much better shape. However, it is still marked experimental.

IPv6 addresses in URLs

It was reported that the URL parser would accept malformed IPv6 addresses that subsequently, and counter-intuitively, would get resolved as host names internally! An example URL would be “https://[ab.de]/” – all the letters and symbols within the brackets are individually allowed in an IPv6 numerical address, but the whole thing isn’t valid IPv6 syntax; instead, “ab.de” is a legitimate and valid host name.

Going forward!

We recently ran a poll among users about what they feel are the more important things to work on, and the rough roadmap has been updated accordingly. Those are the things I want to work on next, but of course I won’t guarantee anything and I will greatly appreciate all help and assistance that I can get. And sure, we can and will work on other things too!

FIPS ready with curl

Download the wolfSSL “FIPS Ready” package (in my case I got wolfssl-4.1.0-gplv3-fips-ready.zip)

Unzip the source code somewhere suitable

$ cd $HOME/src
$ unzip wolfssl-4.1.0-gplv3-fips-ready.zip
$ cd wolfssl-4.1.0-gplv3-fips-ready

Build the fips-ready wolfSSL and install it somewhere suitable

$ ./configure --prefix=$HOME/wolfssl-fips --enable-harden --enable-all
$ make -sj
$ make install

Download curl, the normal curl package. (in my case I got curl 7.65.3)

Unzip the source code somewhere suitable

$ cd $HOME/src
$ unzip curl-7.65.3.zip
$ cd curl-7.65.3

Build curl with the just recently built and installed fips ready wolfSSL version.

$ LD_LIBRARY_PATH=$HOME/wolfssl-fips/lib ./configure --with-wolfssl=$HOME/wolfssl-fips --without-ssl
$ make -sj

Now, verify that your new build matches your expectations by:

$ ./src/curl -V

It should show that it uses wolfSSL and that all the protocols and features you want are enabled and present. If not, iterate until it does!
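For reference, the first line of that output could look something like this (purely illustrative; your platform and exact versions will differ):

curl 7.65.3 (x86_64-pc-linux-gnu) libcurl/7.65.3 wolfSSL/4.1.0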

As the wolfSSL project describes it: “FIPS Ready means that you have included the FIPS code into your build and that you are operating according to the FIPS enforced best practices of default entry point, and Power On Self Test (POST).”

more tiny curl

Without much fanfare or fireworks we put together and shipped a fresh new version of tiny-curl. We call it version 0.10 and it is based on the 7.65.3 curl tree.

tiny-curl is a patch set to build curl as tiny as possible while still being able to perform HTTPS GET requests and maintaining the libcurl API. Additionally, tiny-curl is ported to FreeRTOS.

Changes in 0.10

  • The largest and primary change is that this version is based on curl 7.65.3, which brings more features and in particular more bug fixes compared to tiny-curl 0.9.
  • Parts of the patches used for tiny-curl 0.9 were subsequently upstreamed and merged into curl proper, making the tiny-curl 0.10 patch set much smaller.

Download

As before, tiny-curl is an effort that is on a separate track from the main curl. Download tiny-curl from wolfssl.com!

First HTTP/3 with curl

In the afternoon of August 5 2019, I successfully made curl request a document over HTTP/3, retrieve it and then exit cleanly again.

(It got a 404 response code, two HTTP headers and 10 bytes of content so the actual response was certainly less thrilling to me than the fact that it actually delivered that response over HTTP version 3 over QUIC.)

The components necessary for this to work, if you want to play along at home, are reasonably up-to-date git clones of curl itself and of the HTTP/3 library called quiche (and of course quiche’s dependencies too, like boringssl); then apply pull-request 4193, build everything accordingly, and run a command line like:

curl --http3-direct https://quic.tech:8443

The host name used here (“quic.tech”) is a server run by friends at Cloudflare and it is there for testing and interop purposes and at the time of this test it ran QUIC draft-22 and HTTP/3.

The command line option --http3-direct tells curl to attempt HTTP/3 immediately, which includes using QUIC instead of TCP to the given host name and port number – by default you should of course expect an HTTPS:// URL to use TCP + TLS.

The official way to bootstrap into HTTP/3 from HTTP/1 or HTTP/2 is for the server to announce its ability to speak HTTP/3 by returning an Alt-Svc: header saying so. curl supports this method as well; it just needs to be explicitly enabled at build-time, since that also is still an experimental feature.

To use alt-svc instead, you do it like this:

curl --alt-svc altcache https://quic.tech:8443

The alt-svc method won’t “take” on the first shot though, since curl first needs to connect over HTTP/2 (or HTTP/1) to get the Alt-Svc: header and store that information in the “altcache” file. If you then invoke it again using the same alt-svc cache, curl will know to use HTTP/3!
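In other words, the same command twice: the first run learns about the HTTP/3 support and stores it in the cache, and the second run can use it:

curl --alt-svc altcache https://quic.tech:8443
curl --alt-svc altcache https://quic.tech:8443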

Early days

Be aware that I just made this tiny GET request work. The code is not cleaned up, there are gaps in functionality, we’re missing error checks, we don’t have tests and chances are the internals will change quite a lot going forward as we polish this.

You’re of course still more than welcome to join in, play with it, report bugs or submit pull requests! If you help out, we can make curl’s HTTP/3 support better, and get there sooner than we otherwise would.

QUIC and TLS backends

curl currently supports two different QUIC/HTTP3 backends, ngtcp2 and quiche. Only the latter currently works this well, though. I hope we can get the ngtcp2 one up to speed soon too.

quiche uses and requires boringssl to be used while ngtcp2 is TLS library independent and will allow us to support QUIC and HTTP/3 with more TLS libraries going forward. Unfortunately it also makes it more complicated to use…

The official OpenSSL doesn’t offer APIs for QUIC. QUIC uses TLS 1.3, but in a way it was never used before over TCP, so basically all TLS libraries have had to add APIs and make some adjustments to work for QUIC. The ngtcp2 team offers a patched version of OpenSSL that provides such an API so that OpenSSL can be used.

Draft what?

Neither the QUIC nor the HTTP/3 protocols are entirely done and ready yet. We’re using the protocols as they are defined in the 22nd version of the protocol documents. They will probably change a little more before they get carved in stone and become the final RFC that they are on their way to.

The libcurl API so far

The command line options mentioned above of course have their corresponding options for libcurl using apps as well.

Set the right bit with CURLOPT_H3 to connect directly with QUIC, and control how to do alt-svc using libcurl with CURLOPT_ALTSVC and CURLOPT_ALTSVC_CTRL.

All of these are still marked EXPERIMENTAL, so they might change somewhat before they become stabilized.

Update

Starting on August 8, the command line option is just --http3, and you ask libcurl to use HTTP/3 directly with CURLOPT_HTTP_VERSION.
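A short libcurl sketch using the post-update names (same test server as above; these bits are still experimental, so details may shift):

CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL, "https://quic.tech:8443");
  /* attempt HTTP/3 (over QUIC) directly */
  curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_3);
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}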

The slowest curl vendors of all time

In the curl project we make an effort to ship security fixes as soon as possible after we’ve learned about a problem. We also “prenotify” vendors of open source OSes ahead of the release – that is, inform them about a problem before it becomes known to the public – to alert them about what is about to happen and to make it possible for them to be ready and prepared when we publish the security advisory of the particular problems we’ve found.

These distributors ship curl to their customers and users. They build curl from the sources they host and they apply (our and their own) security patches to the code over time to fix vulnerabilities. Usually they start out with the clean and unmodified version we released and then over time the curl version they maintain and ship gets old (by my standards) and the number of patches they apply grow, sometimes to several hundred.

The distros@openwall mailing list allows no more than 14 days of embargo, so they can never be told further in advance than that.

We always ship at least one official patch for each security advisory. That patch is usually made for the previous version of curl and it will of course sometimes take a little work to backport to much older curl versions.

Red Hat

The other day I was reading LWN when I saw their regular notices about security updates from various vendors and couldn’t help checking out a mentioned curl security fix from Red Hat for Red Hat Enterprise Linux 7. It was dated July 29, 2019 and fixed CVE-2018-14618, which we announced on September 5th 2018. 327 days ago.

Not quite reaching Apple’s level, Red Hat positions themselves as number three in this toplist with this release.

An interesting detail here is that the curl version Red Hat fixed here was 7.29.0, which is the exact same version our winner also patched…

(Update after first publication: after talks with people who know things I’ve gotten some further details. Red Hat did ship a fix for this problem already in 2018. This 2019 one was a subsequent update for complicated reasons, which may or may not make this entry disqualified for my top-list.)

Apple

At times when I’ve thought it has been necessary, I’ve separately informed the product security team at Apple about a pending release with fixes that might affect their users, and almost every time I’ve done that they’ve responded to me and asked that I give them (much) longer time between alert and release in the future. (Requests I’ve ignored so far because it doesn’t match how we work nor how the open vendors want us to behave). Back in 2010, I noticed how one of the security fixes took 391 days for Apple to fix. I haven’t checked, but I hope they’re better at this these days.

With the 391 days, Apple takes place number two.

Oracle

Oracle Linux published the curl errata named ELSA-2019-1880 on July 30 2019 and it apparently fixes nine different curl vulnerabilities. All nine were the result of the Cure53 security audit and we announced them on November 2 2016.

These problems had at that time been public knowledge for exactly 1000 days! The race is over and Oracle got this win by a pretty amazing margin.

In this case, they still ship curl 7.29.0 (released on February 6, 2013) when the latest curl version we ship is version 7.65.3. When I write this, we know about 47 security problems in curl 7.29.0. 14 of those problems were fixed after those nine problems that were reportedly fixed on July 30. It might mean, but doesn’t have to, that their shipped version still is vulnerable to some of those…

Top-3

Summing up, here’s the top-3 list of all times:

  1. Oracle: 1000 days
  2. Apple: 391 days
  3. Red Hat: 327 days

Ending notes

I’m bundling and considering all problems as equals here, which probably isn’t entirely fair. Different vulnerabilities will have different degrees of severity and thus will be more or less important to fix in a short period of time.

Still, these were security releases done by these companies so someone there at least considered them to be security related, worth fixing and worth releasing.

This list is entirely unscientific; I might have missed some offenders. There might also be vendors that haven’t patched these or even older problems at all, and those are even harder to spot. If you know of a case suitable for this top-list, let me know!

2000 contributors

Today when I ran the script that counts the total number of contributors that have helped out in the curl project (called contrithanks.sh) the number showing up in my terminal was

2000

At 7804 days since curl’s birthday, that means roughly one new contributor every 4 days. For over 21 years. Kind of impressive when you think about it.

A “contributor” here means everyone who has reported bugs, helped out with fixing bugs, written documentation or authored commits (and whose name we recorded at the time it happened – something we really make an effort not to miss). Out of the 2000 current contributors, 708 are recorded in git as authors.

Plotted out on a graph, with the numbers from the RELEASE-NOTES over time we can see an almost linear growth. (The graph starts at 2005 because that’s when we started to log the number in that file.)

Number of contributors over time.

We crossed the 1000 mark on April 12 2013, 1400 on May 30 2016 and 1800 on October 30 2018.

Going from 1000 to 2000 took a little over six years; roughly one new contributor every other day.

Two years ago, when curl 7.55.0 shipped, we were at exactly 1571 contributors, so lately we’ve received help from over two hundred new persons per year. (Barring the miscalculations that occur when we occasionally batch-correct names or go through records to collect previously missed names, etc.)

Thank you!

The curl project would not be what it is without all the help we get from all these awesome people. I love you!

docs/THANKS

That’s the file in the git repo that contains the names of all the contributors. If you check it right now you will see that it isn’t exactly 2000 names yet, and that is because we tend to update it in batches around release time. By the time the next release is coming, we will gather all the new contributors that aren’t already mentioned in that file and add them, and by then I’m sure we will be able to boast more than 2000 contributors. I hope you are one of the names in that list!

curl goez parallel

The first curl release ever saw the light of day on March 20, 1998 and already then, curl could transfer any amount of URLs given on the command line. It would iterate over the entire list and transfer them one by one.

Not even 22 years later, we introduce the ability for the curl command line tool to do parallel transfers! Instead of doing all the provided URLs one by one and only start the next one once the previous has been completed, curl can now be told to do all of them, or at least many of them, at the same time!

This has the potential to drastically decrease the amount of time it takes to complete an operation that involves multiple URLs.

--parallel / -Z

Doing transfers concurrently instead of serially of course changes behavior, and thus this is not something that will be done by default. You as the user need to explicitly ask for it, and you do that with the new --parallel option, which also has a short-hand single-letter version: -Z (that’s the upper-case letter Z).
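For example, combining -Z with curl’s URL globbing to fetch a range of (purely illustrative) URLs concurrently, saving each under its remote name:

curl -Z -O "https://example.com/file[1-100].txt"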

Limited parallelism

To avoid totally overloading the servers when many URLs are provided, or simply running out of sockets it can keep open at the same time, curl limits the parallelism. By default it will only run up to 50 transfers concurrently, so any transfers beyond that will wait to get started until one of the active ones has completed. The new --parallel-max command line option can be used to change the concurrency limit.
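For example, to lower the cap to 10 concurrent transfers (same illustrative URL range as above):

curl --parallel --parallel-max 10 -O "https://example.com/file[1-100].txt"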

Progress meter

It is different in this mode. The new progress meter that shows up for parallel transfers is a single combined output covering all the transfers.

Transfer results

When doing many simultaneous transfers, how do you figure out how they all did individually, like from your script? That’s still to be figured out and implemented.

No same file splitting

This functionality makes curl do separate URLs in parallel. It will still not download a single URL using multiple parallel transfers the way some other tools do. That might be something to implement and offer in a future fine-tuning of this feature.

libcurl already does this fine

This is a new command line feature built on the fact that libcurl can already do this just fine. Thanks to libcurl being a powerful transfer library that curl uses, enabling this feature was “only” a matter of using libcurl in a different way than before. This parallel change is entirely in the command line tool code.

Ship

This change has landed in curl’s git repository already (since b8894085000) and is scheduled to ship in curl 7.66.0 on September 11, 2019.

I hope and expect us to keep improving parallel transfers further and we welcome all the help we can get!

curl 7.65.2 fixes even more

Six weeks after our previous bug-fix release, we ship a second release in a row with nothing but bug-fixes. We call it 7.65.2. We decided to go through this full release cycle with a focus on fixing bugs (and not merge any new features) since even after 7.65.1 shipped as a bug-fix only release we still seemed to get reports indicating problems we wanted fixed once and for all.

Download curl from curl.haxx.se as always!

Also, I personally had a vacation already planned to happen during this period (and I took it), so it worked out pretty well to make this cycle a slightly calmer one.

Of the numbers below, we can especially celebrate that we’ve now received code commits by more than 700 persons!

Numbers

the 183rd release
0 changes
42 days (total: 7,789)
76 bug fixes (total: 5,259)
113 commits (total: 24,500)
0 new public libcurl functions (total: 80)
0 new curl_easy_setopt() options (total: 267)
0 new curl command line options (total: 221)
46 contributors, 25 new (total: 1,990)
30 authors, 19 new (total: 706)
1 security fix (total: 90)
200 USD paid in Bug Bounties

Security

Since the previous release we’ve shipped a security fix. It was special in the way that it wasn’t actually a bug in the curl source code, but in the build procedure for how we made curl builds for Windows. For this report, we paid out a 200 USD bug bounty!

Bug-fixes of interest

As usual I’ve carved out a list with some of the bugs since the previous release that I find interesting and that could warrant a little extra highlighting. Check the full changelog on the curl site.

bindlocal: detect and avoid IP version mismatches in bind

It turned out you could ask curl to connect to an IPv4 site and, if you then asked it to bind to a local interface, it could actually bind to an IPv6 address (or vice versa) and then cause havoc and fail. Now we make sure to stick to the same IP version for both!

configure: more --disable switches to toggle off individual features

As part of the recent tiny-curl effort, more parts of curl can be disabled in the build and now all of them can be controlled by options to the configure script. We also now have a test that verifies that all the disabled-defines are indeed possible to set with configure!

(A future version could certainly get a better UI/way to configure which parts to enable/disable!)

http2: call done_sending on end of upload

It turned out a very small upload over HTTP/2 could sometimes end up not getting the “upload done” flag set, and it would then just linger around or eventually cause a time-out…

libcurl: Restrict redirect schemes to HTTP(S) and FTP(S)

As a stronger safety precaution, we’ve now made the default set of protocols that curl accepts redirects to much smaller than before. The set is still settable by applications using the CURLOPT_REDIR_PROTOCOLS option.
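A short sketch of how an application can set that list explicitly (the option and the CURLPROTO_* bits are long-standing libcurl names; the particular choice of protocols is illustrative):

/* follow redirects, but only to HTTP or HTTPS targets */
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                 CURLPROTO_HTTP | CURLPROTO_HTTPS);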

multi: enable multiplexing by default (again)

Embarrassingly enough this default was accidentally switched off in 7.65.0 but now we’re back to enabling multiplexing by default for multi interface uses.

multi: fix the transfer hashes in the socket hash entries

The handling of multiple transfers on the same socket was flawed, and previous attempts to fix it were incorrect or merely partial. Now we have an improved system, storing a separate connection hash table for each internal socket object.

openssl: fix pubkey/signature algorithm detection in certinfo

The CURLINFO_CERTINFO option broke with OpenSSL 1.1.0+, but we have now finally caught up with the necessary API changes and it should work again just as well, regardless of which OpenSSL version you build curl to use!

runtests: keep logfiles around by default

Previously, when you ran curl’s test suite, it automatically deleted the log files on success and you had to use runtests.pl -k to prevent it from doing so. Starting now, it erases the log files on start instead of on exit, so they will always be kept when the run ends, no matter how the tests went. Just a convenience thing.

runtests: report single test time + total duration

The output from runtests.pl, as it runs each test one by one, will now include timing information for each individual test: how long the test took, and how much time has been spent on the tests so far. This will help us detect if specific tests suddenly take a very long time, and helps us see how they perform in the remote CI build farms, etc.

Next?

I truly think we’ve now caught up with the worst problems and can now allow features to get merged again. We have some fun ones in the pipe that I can’t wait to put in the hands of users out there…

openssl engine code injection in curl

This flaw is known as CVE-2019-5443.

If you downloaded and installed a curl executable for Windows from the curl project before June 21st 2019, go get an updated one. Now.

On Windows, using OpenSSL

The official curl builds for Windows – the ones the curl project offers – are cross-compiled on Linux. They’re made to use OpenSSL by default as the TLS backend, by far the most popular TLS backend among curl users.

The curl project has provided official curl builds for Windows on and off through history, but most recently this has been going on since August 2018.

OpenSSL engines

These builds use OpenSSL. OpenSSL has a feature called “engines”. Described by the project itself like this:

“a component to support alternative cryptography implementations, most commonly for interfacing with external crypto devices (eg. accelerator cards). This component is called ENGINE”

More simply put, an “engine” is a plugin for OpenSSL that can be loaded and run dynamically. A particular engine is activated either as a built-in or by loading a config file that specifies what to do.
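To give a feel for the mechanism, a hedged sketch of what such an OpenSSL config file can look like, following OpenSSL’s documented config format (the engine name and library path are purely illustrative):

openssl_conf = openssl_init

[openssl_init]
engines = engine_section

[engine_section]
myengine = myengine_section

[myengine_section]
engine_id = myengine
# the shared library that OpenSSL loads and runs as the engine
dynamic_path = /path/to/myengine.so
init = 1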

curl and OpenSSL engines

When using curl built with OpenSSL, you can specify an “engine” to use, which in turn allows users to use their dedicated hardware when doing TLS related communications with curl.

By default, the curl tool allows OpenSSL to load a config file and figure out what engines to load at run-time but it also provides a build option to make it possible to build curl/libcurl without the ability to load that config file at run time – which some users want, primarily for security reasons.

The mistakes

The primary mistake in the curl build for Windows that we offered, was that the disabling of the config file loading had a typo which actually made it not disable it (because the commit message had it wrong). The feature was therefore still present and would load the config file if present when curl was invoked, contrary to the intention.

The second mistake comes a little more from the OpenSSL side: if you build OpenSSL cross-compiled like we do, the default path where it looks for the above-mentioned config file ends up under the c:\usr\local tree. It is in fact complicated, and without a patch impossible, to change this path in the build.

What the mistakes enable

A non-privileged user or program (the attacker) with access to the host could put a config file in the directory where curl would look for one (first creating the directory, as it probably didn’t already exist), along with the suitable associated engine code.

Then, when a privileged user subsequently executes curl, it runs with more power and runs that code – the engine – that the attacker had put there. An engine is a piece of compiled code; it can do virtually anything on the machine.

The fix

Already three days ago, on June 21st, a fixed version of the curl executable for Windows was uploaded to the curl web site (“curl 7.65.1_2”). All older versions that had been provided in the past were removed to reduce the risk of someone still using an old lingering download link.

The fix now makes the curl build switch off the loading of the config file, as was already intended. But also, the OpenSSL build that is used for the build is now modified to only load the config file from a privileged path that isn’t world writable (C:/Windows/System32/OpenSSL/).

Widespread mistake

This problem is very widespread among projects on Windows that use OpenSSL. The curl project coordinated this publication with the postgres project and has worked with OpenSSL to make them improve their default paths. We have also found a few other OpenSSL-using projects that have already fixed their builds for this flaw (like stunnel), but I think we have reason to suspect that there are more vulnerable projects out there still not fixed.

If you know of a project that uses OpenSSL and ships binaries for Windows, give them a closer look and make sure they’re not vulnerable to this.

The cat is already out of the bag

When we got this problem reported, we soon realized it had already been publicly discussed and published for other projects even before we learned about it. Because of this, we took it to publication as quickly as possible to minimize user impact as much as we could.

Only on Windows and only with OpenSSL

This flaw only exists on curl for Windows and only if curl was built to use OpenSSL with this bad path and behavior.

Microsoft ships curl as part of Windows 10, but it does not use OpenSSL and is not vulnerable.

Credits

This flaw was reported to us by Rich Mirch.

The build was fixed by Viktor Szakats.

The image on the blog post comes from pixabay.