All posts by Daniel Stenberg

curl 8.12.1

This is a quick follow-up patch release due to the number of ugly regressions in the 8.12.0 release.

Release presentation

Live-streamed on twitch as always at 10:00 CET on the release day.

Numbers

the 265th release
0 changes
8 days (total: 9,827)

65 bugfixes (total: 11,428)
67 commits (total: 34,180)
0 new public libcurl function (total: 96)
0 new curl_easy_setopt() option (total: 306)

0 new curl command line option (total: 267)
25 contributors, 14 new (total: 3,332)
34 authors, 18 new (total: 1,341)
0 security fixes (total: 164)

Bugfixes

libcurl

  • asyn-thread: fix build with CURL_DISABLE_SOCKETPAIR
  • asyn-thread: fix the returned bitmask from Curl_resolver_getsock
  • asyn-thread: survive a c-ares/HTTPSRR channel set to NULL
  • content_encoding: #error on too old zlib
  • imap/pop3/smtp: TLS upgrade fixes
  • include necessary headers for inet_ntop/inet_pton
  • drop support for libssh older than 0.9.0
  • netrc: return code cleanup, fix missing file error
  • openssl-quic: ignore ciphers for h3
  • openssl: fix out of scope variables in goto
  • vtls: fix multissl-init
  • vtls: eliminate ‘data->state.ssl_scache’
  • wakeup_write: make sure the eventfd write sends eight bytes

tool

  • tool_ssls: switch to tool-specific get_line function

scripts

  • build: add tool_hugehelp.c into IBMi build
  • configure/cmake: check for realpath
  • configure/cmake: set asyn-rr a feature only if httpsrr is enabled
  • runtests: fix the disabling of the memory tracking
  • runtests: quote commands to support paths with spaces

docs

  • CURLOPT_SSH_KNOWNHOSTS.md: strongly recommend using this
  • CURLSHOPT_SHARE.md: adjust for the new SSL session cache
  • SPONSORS.md: clarify that we don’t promise goods or services

disabling cert checks: we have not learned much

And by that I mean the global “we” as in the world of developers.

In the beginning there was SSL

When I first learned about SSL and how to use it in the mid to late 1990s, it took me a while to realize and understand the critical importance of having the client verify the server’s certificate in the handshake. Once I had understood, we made sure that curl would default to doing the check correctly and refuse to connect if the certificate check fails.

Since curl and libcurl 7.10 (released in October 2002) we verify server certificates by default. Today, more than twenty-two years later, there should realistically be virtually no users left using a curl version that does not verify certificates by default.

What’s verifying

The standard way to verify a TLS server certificate is by A) checking that it is signed by a trusted certificate authority (CA) and B) checking that the cert was created for the thing you interact with; that the domain name is listed in the certificate.

Optionally, you can opt to “pin” a certificate which then verifies that the certificate is the one that corresponds to a specific hash. This is generally considered more fragile but avoids the use of a “CA store” (a set of certificates for the certificate authorities “you” trust) needed to verify the digital signature of the server certificate.
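In libcurl terms, pinning looks something like this (a minimal C sketch; the option name is real but the URL and the base64 hash are placeholders, and pinning support varies between TLS backends):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    /* pin the server's public key: the transfer fails unless the
       SHA-256 hash of the certificate's public key matches this value.
       The hash below is a placeholder, not a real key. */
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY,
                     "sha256//YhKJKSzoTt2b5FP18fvpHo7fJYqQCjAa3HWY3tvRMwE=");

    CURLcode res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }
  return 0;
}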

Skipping means insecure

Skipping the certificate verification makes the connection insecure, because if you do not verify, nothing prevents a middle-man from sitting between you and the real server. Or from simply impersonating the real server.

Challenges

If you try to use the production site’s certificate in your development environment, you might connect to the server using a different name and then the verification fails.

If you have an active middle man intercepting and wanting to snoop on the TLS traffic, it needs to provide a different certificate, and unless that one is signed by a CA you trust, the verification fails.

If you have an outdated or maybe no CA store at all, then the verification fails.

If the server does not update its certificate correctly, it might expire and then the verification fails. Similarly, in order to do a correct verification your client needs a clock that is at least roughly in sync with reality or the verification might fail.

Verification also takes more time than simply skipping the entire step. Sometimes, and to some, that is weirdly enticing.

And yet all curl and libcurl documentation for this strongly discourages users from disabling the check.

A libcurl timeline

curl added support for SSL in April 1998 (years before they renamed it TLS). curl makes certificate checks by default since 2002, both the tool and the library. At the time, I felt I was a little slow to react but at least we finally made sure that curl users would do this check by default.

Ten years later, in October 2012, there was a paper published called The most dangerous code in the world, in which the authors insisted that the widespread problem of applications not verifying TLS certificates with libcurl was because “this interface is almost perversely bad”. The problem was apparently libcurl’s API.

The same “fact” would be repeated later, for example in this 2014 presentation claiming that this is our fault because the API (as exposed in PHP) looks like it takes a boolean when in reality it does not.

The libcurl API for this

I do not claim that we have the best API in libcurl, but I can say that extremely few libraries can boast an API and ABI stability that comes even close to ours. We have not broken the ABI since 2006. We don’t mind carrying a world on our shoulders that has learned to depend on this and on us. So we don’t change the API, even though it could have been done a little better.

CURLOPT_SSL_VERIFYPEER is a boolean option to ask for server certificate verification against the CA store. It is set TRUE by default, so an application needs to set it to FALSE (0) to disable the check. This option works together with the next one.

CURLOPT_SSL_VERIFYHOST is a separate option to verify that the name embedded in the certificate matches the name in the URL (basically). This option was never a boolean but accepts a number: 0 disables the check and 2 is the maximum check level, with 2 being the default.

Both options are thus by default set to verify, and an application can lessen the checks by changing one or both of them.
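For illustration, this is what the verifying defaults look like spelled out in a libcurl application (a minimal sketch; setting these two options explicitly is redundant since these values are the defaults):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    /* both checks are on by default; set here only for illustration */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L); /* check against the CA store */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L); /* check the name in the cert */

    /* the strongly discouraged way to lessen the checks is to set
       CURLOPT_SSL_VERIFYPEER to 0L and CURLOPT_SSL_VERIFYHOST to 0L */

    CURLcode res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }
  return 0;
}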

Adaptations

After that most dangerous article was posted in 2012, basically saying we were worthless, without the authors ever telling us or submitting an issue or pull request to us, we changed how CURLOPT_SSL_VERIFYHOST worked in the 7.28.1 release, shipped in December 2012.

Starting then, we made setting the option to 1 an error (and it would just leave the original value untouched). Before that update, setting VERIFYHOST to 1 was a debug-like mode that made libcurl output warnings on mismatches but still let the connection through. A silly mode to offer.

In 2019 we tweaked the VERIFYHOST handling a little further and made the values 1 and 2 do the same thing: verify the name.

I have no idea what the authors of that 2012 paper would think about this API tweak, but at least the options are now two proper booleans.

I did not think the authors were right when they originally published that paper, and yet we improved the API a little. I dare claim that the problem with disabled certificate checks is not caused by a bad libcurl API.

curl

The curl tool is of course a libcurl-using application and it offers the --insecure (-k) option, which when used switches off both of the libcurl options mentioned above. Its use is also strongly discouraged beyond testing and triaging.

Other layers on top

libcurl is itself used by a lot of frameworks and languages that expose the options to their respective users. Often they then even use the same option names. We have over 60 documented language bindings for libcurl.

For example, the PHP/CURL binding is extremely popular and well used and it has the options provided and exposed using the exact same names, values and behavior.

Disabling the checks

More than twenty-two years of having this enabled by default. More than twelve years since the most dangerous paper. After countless articles on the topic. Everyone I talk to knows that we all must verify certificates.

In almost all cases, you can fix the failed verification the proper way instead of disabling the check. It is just usually a little more work.

State of checks using libcurl today

I searched GitHub on February 10, 2025 for “CURLOPT_SSL_VERIFYPEER, FALSE” and it quickly showed me some 140,000 matching repositories. Sure, not all of these matches are bad uses: the option can be set conditionally, bindings that use different option names are not caught by this search, and some matches might use pinning, which this simple search term also misses.

Searching for “CURLOPT_SSL_VERIFYPEER, 0” shows 153,000 additional matches.

A quick walk-through shows that there is a lot of genuine, mostly sloppy, curl-using code among these matches that disables the certificate checks.

We could fool ourselves into thinking that the state of certificate check disabling is better in modern, widely used software made by big teams.

A quick CVE search immediately found several security vulnerabilities for exactly this problem published only last year:

  • CVE-2024-32928 – The libcurl CURLOPT_SSL_VERIFYPEER option was disabled on a subset of requests made by Nest production devices.
  • CVE-2024-56521 – An issue was discovered in TCPDF before 6.8.0. If libcurl is used, CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER are set unsafely.
  • CVE-2024-5261 – In affected versions of Collabora Online, in LibreOfficeKit, curl’s TLS certificate verification was disabled (CURLOPT_SSL_VERIFYPEER of false).

If I was into bug-bounties, I would know where to start.

What do we do?

Clearly, this is work that never gets complete or done; it might arguably even get worse as the volume of software grows. We need to keep telling people to fix this. To stop encouraging others to do wrong. Lead by good example. Provide good documentation and snippets for people to copy from.

I took a very tiny step and reported a bug against documentation that seemed to encourage disabling the check. If we all submit a bug or two when we see these problems, things might gradually improve.

When/if you submit bug reports as well, please remember to stay polite, friendly and to the point. Explain why disabling the check is bad. Why keeping the check is good.

Rinse and repeat. Until the end of time.

curl 8.12.0

Release presentation

Numbers

the 264th release
8 changes
56 days (total: 9,819)

244 bugfixes (total: 11,417)
367 commits (total: 34,180)
2 new public libcurl function (total: 96)
0 new curl_easy_setopt() option (total: 306)

1 new curl command line option (total: 267)
65 contributors, 34 new (total: 3,332)
34 authors, 18 new (total: 1,341)
3 security fixes (total: 164)

Security

CVE-2025-0167: netrc and default credential leak. When asked to use a .netrc file for credentials and to follow HTTP redirects, curl could leak the password used for the first host to the followed-to host under certain circumstances. This flaw only manifests itself if the netrc file has a default entry that omits both login and password. A rare circumstance.

CVE-2025-0665: eventfd double close. libcurl would wrongly close the same file descriptor twice when taking down a connection channel after having completed a threaded name resolve.
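The double close bug class in general looks something like this (a hypothetical C illustration of the pattern, not curl’s actual code):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
  int fd = open("/dev/null", O_RDONLY);

  close(fd);  /* first close: the descriptor number becomes free */

  /* in a threaded program, another thread can open a file or socket
     here and get handed the very same descriptor number... */

  close(fd);  /* second close: may then shut an unrelated, live descriptor */
  return 0;
}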

CVE-2025-0725: gzip integer overflow. When libcurl is asked to perform automatic gzip decompression of content-encoded HTTP responses with the CURLOPT_ACCEPT_ENCODING option, using zlib 1.2.0.3 or older, an attacker-controlled integer overflow would make libcurl perform a buffer overflow. There should be virtually no users left using such an old and vulnerable zlib version.
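For reference, this is how an application typically asks libcurl for that automatic decompression (a minimal sketch; the empty string tells libcurl to offer all the encodings it supports):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    /* "" means: offer all content encodings this libcurl build supports
       (for example gzip, deflate, br, zstd) and decompress responses
       automatically */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}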

Changes

  • curl: add byte range support to --variable reading from file
  • curl: make --etag-save acknowledge --create-dirs
  • curl: add ‘time_queue’ variable to -w
  • getinfo: provide info which auth was used for HTTP and proxy
  • openssl: add support to use keys and certificates from PKCS#11 provider
  • QUIC: 0RTT for gnutls via CURLSSLOPT_EARLYDATA
  • vtls: feature ssls-export for SSL session im-/export
  • hyper: dropped support

Bugfixes

Some of the bugfixes to highlight.

libcurl

  • acknowledge CURLOPT_DNS_SERVERS set to NULL
  • fix CURLOPT_CURLU override logic
  • initial HTTPS RR resolve support
  • ban use of sscanf()
  • conncache: count shutdowns against host and max limits
  • support use of custom libzstd memory functions
  • cap cookie expire times to 400 days
  • parse only the exact cookie expire date
  • include the shutdown connections in the set curl_multi_fdset returns
  • easy_lock: use Sleep(1) for thread yield on old Windows
  • ECH: update APIs to those agreed with OpenSSL maintainers
  • fix ‘time_appconnect’ for early data with GnuTLS
  • HTTP/2 and HTTP/3: strip TE request header
  • mbedtls: fix handling of blocked sends
  • mime: explicitly rewind subparts at attachment time.
  • fix mprintf integer handling in float precision
  • terminate snprintf output on windows
  • fix curl_multi_waitfds reporting of fd_count
  • fix return code for an already-removed easy handle from multi handle
  • add an ssl_scache to the multi handle
  • auto-enable OPENSSL_COEXIST for wolfSSL + OpenSSL builds
  • use SSL_poll to determine writeability of OpenSSL QUIC streams
  • free certificate on error with Secure Transport
  • fix redirect handling to a new fragment or query (only)
  • return “IDN” feature set for winidn and appleidn

scripts

  • numerous cmake improvements
  • scripts/mdlinkcheck: markdown link checker

curl tool

  • return error if etag options are used with multiple URLs
  • accept digits in --form type= strings
  • make --etag-compare accept a non-existing file

docs

  • add INFRASTRUCTURE.md describing project infra

Next

The next release is probably going to be curl 8.13.0 and if things go well, it ships on April 2, 2025.

European Open Source Achievement Award

I have been awarded the European Open Source Achievement Award! Proud, happy and humble I decided to accept it, as well as the associated nomination for president of the new European Open Source Academy for the coming two years.

This information was not made public until the very same day of the award ceremony, on January 30th 2025, so I was not able to talk about it before. Then FOSDEM kept me occupied the days immediately following.

Official letter

Dear Mr. Daniel Stenberg,

On behalf of the OSAwards.eu initiative, it is our great honour to invite you to receive the European Open Source Achievement Award, on the occasion of the inaugural ceremony of the European Open Source Awards to be held in Brussels on Thursday, 30 January 2025 at 18:30.

In recognition of your leadership quality, we would also like to extend this invitation for you to join the European Open Source Academy in the quality of Academy President, for a two year tenure. As Academy President you will play a critical role in guiding the establishment and reputability of the European Open Source Academy and its annual European Open Source Awards. You can find more information about the provisional structure of the European Open Source Academy and expected involvement from founding members in the attached project brief.

This inaugural award, corresponding with an invitation to the European Open Source Academy, recognises your exceptional contribution as a European open source leader whose impact has transformed the European and global technological landscape, and whose engagement has highly contributed to a thriving open source community in Europe. Your founding and continuous contribution to cURL has had a tremendous impact on the global and European technological landscape thanks to its innovative nature. We also want to recognise your continuous commitment to open source maintenance and knowledge sharing, positioning you as a leading and respected figure in the European open source community.

The inaugural ceremony will be a formal event followed by a gala cocktail reception. You are welcome to bring a guest with you, please let us know their name for us to add them to the guest list. You can find more information about the programme in the attached concept note. Logistical information for the ceremony will be shared upon confirmation of your attendance.

Thank you for considering our invitation to receive the European Open Source Achievement award and to join the European Open Source Academy. We hope that you will accept this testimony of recognition and we remain available should you have any questions.

On the award

I have worked on and with Open Source for some thirty years. I believe in the model, I like the community, I enjoy the challenges. Some of my work in Open Source has been successful beyond my wildest dreams.

Getting recognition for my work in the wider world outside the inner circle is huge. The many thousands of hours of staring at screens, debugging code, tearing my hair and silently yelling at my past self for not writing better comments actually sometimes produce something useful.

There are many awesome people in the European Open Source universe. I can only imagine the struggle the award committee had to select a single awardee.

Thank you!

My wife Anja joined me in Brussels and we participated at the award gala dinner there on January 30th, 2025 when I was handed the actual physical award. The Thursday just before the FOSDEM weekend. I sported an extra large smile on my face during that entire following FOSDEM conference.

The actual physical award trophy is shown off in a little video below.

On the academy

I believe Open Source has been an immense success story over the last three decades and it has become what is essentially the foundation of all current digital infrastructure. I am convinced that Europeans are already well positioned in this ecosystem but we should not lean back and think that anything is done or over. We need to keep on our toes and rather strengthen and enforce Open Source, our participation in and our understanding of it – and we need to fund it. This is where a lot of important software is made and controlled. We need to make sure that the EU leadership understands this.

Those are examples of what I hope this European Open Source Academy can help out with.

My role as president of the academy is not going to be a time-sink. I cannot allow myself that. I have curl work to do and I remain a full-time lead developer of curl. I will be president on the side, for a limited number of hours per month.

Next

It never gets old or boring to get awards, so even if I have been given a whole range of truly fabulous awards by now, every new one still makes me humble and super excited.

Getting recognition, awards and thank-yous is a superb way to boost energy and motivation; I highly recommend it. I am totally set on continuing my work on curl and other Open Source for many more years to come.

I want to lead by example. I aspire to be the Open Source person I myself looked for and tried to mimic when I was younger.

Showing off

Some photos

Next to me you can also see the three additional awardees: Amandine Le Pape (The Business & Impact Award), Lydia Pintscher (The Advocacy & Awareness Award), and David Cuartielles (The Skills & Education Award). Awesome people.

The person handing me the award, seen on the photos, is Omar Mohsine, open source coordinator at the United Nations Office for Digital and Emerging Technologies.

Update

The seven stars on the OSAwards logo symbolize the core principles and values of open source software. They represent collaboration, transparency, community, innovation, freedom, diversity, and inclusivity.

A 1337 curl author

For quite some time now, I have celebrated and welcomed every new commit author in the curl project in public. Recently, that means I send out a toot on Mastodon saying Welcome so and so as curl commit author number XYZ, with a link to their initial curl work (example 1, example 2).

This messaging is not done automatically. GitHub helps out by specifically mentioning in a PR when it is done by a first-timer to the repository, and I have a convenient local script that tells me how many authors we have so far, and then I type up the message myself and send it. (Sometimes I miss one, which I regret.)

This process takes me about seven point five seconds per case of manual labor. Writing an automated script to do this correctly, triggered for the right persons, would take the equivalent of many years of new authors.

For the last few months, people have increasingly noticed and replied to these mentions, aware that we were approaching commit author number 1337. Lots of people have said things in the style of “I should learn to program soon so that I can become number 1337”.

The number 1337 is of course just a number. I find it amusing and charming that it seems to have this almost magic aura and attraction for so many people in our community.

Today, commit author 1337 was finally announced, only three years since we announced author 1000. There is no permanent record of this fact other than this blog post. Further, there is a risk that we have a duplicate or two somewhere in there, so that a recount at a later time might end up differently.

Commit author 1337 became Michael Schuster who wrote this pull request, which fixed a minor build issue in the mbedTLS backend code. Thanks!

337 new authors over the last three years equals roughly two new commit authors per week on average. Pretty good. We have room for many more!

A relevant statistic in this context is also that 65% of all commit authors only ever authored a single commit.

Now, let’s go for author two thousand next…

CVSS is dead to us

CVSS is short for Common Vulnerability Scoring System and is according to Wikipedia a technical standard for assessing the severity of vulnerabilities in computing systems.

Typically you use an online CVSS calculator, click a few checkboxes and radio buttons and then you magically get a number from 0 to 10. There are also different versions of CVSS.

Every CVE filed to MITRE is supposed to have a CVSS score set. Registered CVEs that lack this information get “amended” by an ADP (Authorized Data Publisher) that considers this its job. In the past, NVD did this. Nowadays CISA does it. More on this below.

Problems

Let’s say you write a tool and library that make internet transfers. They are used literally everywhere, in countless environments and with an almost impossible number of different build combinations, target operating systems and CPU architectures. Let’s call it curl.

When you find a theoretical security problem in this product (theoretical because most problems are never actually spotted being exploited), how severe is it? The CVSS calculation has a limited set of input factors that tend to result in a fairly high number for a network product. What if we can guess that the problem only affects a few users, or only an unusual platform? Not included.

The CVSS scoring is really designed for when you know exactly when and how the product is used and how an exploit of the flaw affects it. Then it might at least work. For a generic code base shipped in a tarball and running in more than twenty billion installations, it works far less well.

If you look around you can easily find numerous other (and longer) writings about the problems and challenges with CVSS. We are not alone thinking this.

CVSS is used

At the same time, it seems the popularity of security scanners has increased significantly over the last few years. The kind of products that scan your systems checking for vulnerable products and show you big alerts and warnings when they find one.

The kind of programs that look for a product, figure out a version number and then shout if they find a registered CVE for that product and version with a CVSS score above a certain threshold.

The kind of product that indirectly tricks users into deleting operating system components to silence these alerts. We even hear of people who have contractual agreements saying they must address these alerts within N business days or face consequences.

Just days ago I was contacted by users on macOS who were concerned about a curl CVE that their scanner had found in the libcurl version shipped by Apple. Was their tool right or wrong? Do you think anyone involved in that process can actually tell? Do you think Apple cares?

curl skips CVSS

In the curl project we have given up trying to use CVSS to get a severity score and associated severity level.

In the curl security team we instead work hard to put all our knowledge together and give a rough indication about the severity by dividing it into one out of four levels: low, medium, high, critical.

We believe that because we are not tied to any (flawed and limited) calculator and because we are intimately familiar with the code base and how it is used, we can assess and set a better security severity this way. It serves our users better.

Part of our reason to still use these four levels is that our bug-bounty’s reward levels are based on them.

As a comparison, the Linux kernel does not even provide that coarse-grained indication, based on similar reasoning to why we don’t provide the numeric scores.

This is not treated well

The curl project is a CNA, which means that we reserve and publish our own CVE Ids to the CVE database. There is no middle man interfering and in fact no one else can file curl CVE entries anymore without our knowledge and without us having a say about it. That’s good.

However, the CVE system itself is built on the idea that every flaw has a CVSS score. When someone like us creates CVE entries without scores, that leaves what apparently is considered a gaping sore in the system that someone needs to “fix”.

Who would “fix” this?

Authorized Data Publishers

A while ago this new role called ADP was added to the CVE ecosystem. This job was previously done a little on the side, but roughly the same way, by NVD, who would get all the CVEs, edit them and then publish them all themselves to the world with their additions. And the world really liked that and used the NVD database.

However, NVD more or less drowned in this overwhelming work and the role has instead been taken over by CISA, which as an “ADP” is allowed to enrich CVE entries in the database that it thinks need “improvement”.

The main thing they seem to detect and help “fix” is the lack of CVSS in published CVE entries. Like every single curl CVE because we don’t participate in the CVSS dance.

No clues but it must get a score

Exactly in the same way this system was broken before when NVD did it, this new system is broken when CISA does it.

I don’t have the numbers for exactly how many CVE entries they do this “enrichment” for (there were over 40,000 CVEs last year, but a certain share of them had CVSS filed in by their CNAs). I think it is safe to assume that the volume is high, and since the CVEs are filed for products in all sorts of categories, it is certainly impossible for CISA to have experts in the many products and technologies each CVE describes and affects.

So: given limited time and having no real clue what the issues are about, the individuals in this team click some buttons in a CVSS calculator, get a score and a severity, and then (presumably) quickly move on to the next issue. And the next. And the next. In a never-ending stream of incoming security issues.

How on earth does anyone expect them to get this right? I mean sure, in some or perhaps even many cases they might get close because of luck, skill or something, but the system is certainly built in a way that just screams: this will end up crazy wrong every so often.

A recent example

At the end of 2024 I was informed by friends that several infosec related websites had posted about a new curl-related critical security problem. Since we have not announced any critical security problems since 2013, that of course piqued my interest so I had a look.

It turned out that CISA had decided that CVE-2024-11053 deserved a CVSS 9.1 score: CRITICAL, and now scanners and news outlets had figured that out. Or would very soon.

The curl security team had set the severity to LOW because of the low risk and special set of circumstances that are a precondition for the problem. Go read it yourself – the fine thing with CVEs for Open Source products is that the source, the fix and everything is there to read and inspect as much as we like.

The team of actual experts who know this code and perfectly understand the security problem says LOW. The team at CISA overrides that and insists that we are all wrong and that this problem risks breaking the Internet. Because we apparently need a CVSS at all costs.

A git repository

One positive change that the switch from NVD to CISA brought is that they now host their additional data in a GitHub repository. Once I was made aware of this insane 9.1 score, I took time out of my Sunday afternoon with my family and made a pull-request there urging them to at least lower the score to 5.3. That was a score I could get the calculator to tell me.

I wanted this issue sorted and stomped down as quickly as possible, to reduce the risk that security scanners everywhere would soon start alerting on this and we would get overloaded with queries from concerned and worried users.

It’s not like CISA gets overloaded by worried users when they do this. Their incompetence here puts a load on no one else but the curl project. But sure, they got their CVSS added.

After my pull request it took less than ninety minutes for them to update the curl records. Without explanation, with no reference to my PR, they now apparently consider the issue to be CVSS 3.4.

I’m of course glad it is no longer marked critical. I think you all understand exactly how arbitrary and random this scoring approach is.

A problem with the initial bad score getting published is of course that a certain number of websites and systems are really slow, or otherwise bad, at updating that information after they initially learned about the critical score. Websites speaking of this “critical” curl bug will linger out there for a long time now. Thanks CISA!

Can we avoid this?

In the curl security team we have discussed setting “fixed” (fake) scores on our CVE entries just to prevent CISA or anyone else from ruining them, but we have decided not to, since that would be close to lying about them and we actually work fiercely to make sure we have everything correct and meticulously described.

So no, since we do not do the CVSS dance, we unfortunately will continue having CISA do this to us.

Stop mandatory CVSS?

I am of course advocating strongly within the CNA ecosystem that we should be able to stop CISA from doing this, but I am just a small cog in a very large machine. A large machine that seems to love CVSS. I do not expect to have much success in this area anytime soon.

And no, I don’t think switching to CVSS 4.0 or other updates to this system will ultimately help us. The problem is grounded in the fact that a single one-dimensional score is just too limited. Every user or distributor of the project should set scores for their own use cases, maybe even different ones for different cases. Then it could perhaps work.

But I’m not in this game for any quick wins. I’m on the barricades for better (Open Source) security information, and to stop security misinformation. Ideally for the wider ecosystem, because I think we are far from alone in this situation.

The love of CVSS is strong and there is a lot of money involved based on and relying on this.

Minor update

After posting this, I got confirmation that the Go Security team does what we do and has the same problems. Filippo Valsorda told me on Bluesky. Just to show that this is a common pattern.

Update two

Some fourteen hours after I posted this blog post and it spread around the world, my enrichment PR to CISA I mentioned above got this added comment:

While it is good to be recognized, it does not feel like it will actually address the underlying problem here.

Update three

What feels like two hundred people have pointed out that the CVSS field is not mandatory in the CVE records. That is a clarification that does not add much. The reality is that users seem to want the scores so badly that CISA will add CVSS regardless, mandatory or not.

Presentation: curl from start to end

On Tuesday January 21st 2025, at 16:00 CET (15:00 UTC) I will do a presentation titled as per above. I have not done this one before.

The talk will be a detailed explainer and step-by-step going through exactly what happens when a curl command line is typed into a shell and the return key is pressed. As the image below illustrates.

The talk will be live-streamed on twitch, and will be recorded and hosted on YouTube after the fact. For free of course, no signup. Just show up.

Those who join the live-stream chat can of course also ask questions live in the following Q&A.

The recording

Secure Transport support in curl is on its way out

In May 2024 we finally decided that maybe the time had come for curl to drop support for older TLS libraries: libraries that, because they do not support the modern TLS version (1.3), are perhaps no longer suitable for many users to build upon for the future. We gave the world twelve months to adapt or to object. More than half of that time has passed.

This means that after May 2025, we intend to drop support for Secure Transport and BearSSL unless something changes drastically that makes us reconsider.

This blog post is an attempt to make everyone a little more aware and make sure that those who need to, prepare appropriately.

Secure Transport

Secure Transport is quite a horrible name, but it is still the name of a TLS library written by Apple, shipped as a component of macOS and all the different flavors of iOS. It has been supported by curl since 2012, but as of a few years back it is considered deprecated and “legacy” by Apple themselves. Secure Transport only supports TLS up to and including 1.2.

Once upon a time Apple shipped curl built against Secure Transport with macOS, but they switched over to LibreSSL several years ago.

I hear two primary reasons mentioned why people still like using libcurl/Secure Transport on iOS:

  1. It saves them from having to use and bundle a separate third party library that also adds to the footprint.
  2. It gives them easy and convenient use of the iOS certificate store instead of having to manage a separate one.

Network Framework

Continuing on their weird naming trajectory, the thing that Apple recommends the world to use instead of Secure Transport is called Network Framework.

Due to completely new paradigms and a (to me at least) funny way of designing their API, it is far from straightforward to write a backend for curl that uses the Network Framework for TLS. We have not even seen anyone try. Apple themselves certainly seem fine with simply not using their own TLS library for their curl builds.

I am not sure it is even sensible to try to use the Network Framework in curl.

Options

Without Secure Transport and no prospect of seeing Network Framework support, users of libcurl on macOS and iOS flavors need to decide on what to do next.

I can imagine that there are a few different alternatives to select from.

  1. Stick to an old libcurl. At first an easy and convenient choice, but it will soon turn out to be a narrow path with potential security implications going forward.
  2. Maintain a custom patch. The TLS backends are fairly independent so this is probably not an impossible task, but still quite a lot of work that also takes a certain amount of skill.
  3. Switch off from libcurl. Assuming you find an alternative that offers similar features, stability, portability, speed and that supports the native cert storage fine. Could mean quite some work.
  4. Use libcurl with another TLS library. This is then by itself two sub-categories. A) The easiest route is to accept that you need to maintain a separate CA store and then you can do this immediately and you can use a TLS library that supports the latest standards and that is well supported. B) Use a TLS library that supports use of the native iOS cert store. I believe maybe right now wolfSSL is the only one that does this out of the box, but there is also the option to pay someone or write the code to add such features to another curl TLS backend.
  5. Some other approach

Post removal

After this removal of support for two libraries, curl still supports ten different TLS libraries. There should be an adequate choice for everyone, and there is nothing stopping us from adding support for future newcomers on the scene.

Protests are listened to

Part of the deprecation process in curl is that we listen to what possible objections people might have in the time leading up to the actual future date when the code is cut out. Given a proper motivation a deprecation decision can be canceled or at least postponed.

curl with partial files

Back in September 2023, we extended the curl command line tool with a new fairly advanced and flexible variable system. Using this, users can use files, environment variables and more in a powerful way when building curl command lines in ways not previously possible – with almost all existing command line options.

curl command lines were already quite capable before this, but these new variables certainly took it up several additional notches.

Come February 2025

In the pending curl 8.12.0 release, we extend this variable support a little further. Starting now, you can assign a variable to hold the contents of a partial file. Get a byte range from a given file into a variable and use that variable in the command line, instead of using the entire file.

You can get the first few bytes and use them as a username, you can get a hundred bytes from the middle of a file and POST that, or do countless other things.

Byte range

You ask curl to read a byte range from a file instead of the whole thing by appending [N-M] to the variable name when you assign a variable, where N and M are the first and last byte offsets into the file, 0 being the first byte. If you omit the second number, it means until the end of the file.

For example, get the first 32 bytes from a file named secret and set as password for daniel:

curl --variable "pwd[0-31]@secret" \
--expand-user daniel:{{pwd}} \
https://example.com/

Skip the first thousand bytes from a file named localfile and send the rest of it in a POST:

curl --variable "upload[1000-]@localfile" \
--expand-post '{{upload}}' \
https://example.com/

With functions

You can of course also combine the byte offsets with the standard expand functions. For example, get the first hundred bytes from the file called random and send them base64 encoded in a POST:

curl --variable "binary[0-99]@random" \
--expand-post '{{binary:b64}}' \
https://example.com/

I hope you will like it.

Update

After this post was first published, we discussed the exact syntax for this feature and decided to tweak it a little, to make it less likely that old curl versions could be tricked when fed the new command line options.

This version is now showing the updated syntax.

dropping hyper

The ride is coming to an end. The experiment is done. We tried, but we admit defeat.

Four years ago we started adding support for an alternative HTTP backend in curl. It would use a library written in rust, called hyper. The idea was to introduce an alternative implementation of HTTP internals that you could make curl/libcurl use instead of the native implementation.

This new backend that used a library written in rust would enable users to run a product where a larger piece of the total code than otherwise would be written in a memory-safe language: rust. Memory-safety being all the rage these days.

The initial work was generously sponsored by ISRG, the organization behind such excellent efforts as Let’s Encrypt, which believes strongly in this concept. I cooperated intensely with Sean McArthur, the lead developer of hyper. We made it work.

We have shipped hyper support in curl labeled EXPERIMENTAL for several years now, hoping to attract attention and trigger the experimental spirit in users out there. Since so many people seem to want more memory-safety, surely the users would come?

95% of the work is the easy part

I mean that we took it perhaps 95% of the way: almost the entire test suite ran identically regardless of which backend we built curl to use. The final few percent would however turn out to be friction enough to eventually make us admit defeat, give up and instead yank it all out again.

There simply were no users asking for it and there were almost no developers interested or knowledgeable enough to work on it. libcurl is written in C, hyper is written in rust and there is a C binding glue layer in between. It takes someone who is interested and good at both languages to dig in, understand the architectures, the challenges and the protocols to drive this all the way through.

But with no user demand, why do it?

It seems quite clear that rust users use hyper but few of them want to work on making it work for a C project like curl, and among existing curl users there is virtually no interest in hyper. The overlap in the Venn diagram of the two universes is not big enough.

With no expectation of seeing this work completed in the short to medium length term, the cost of keeping the hyper code is simply deemed too high. We gain code agility and reduce complexity by trimming this off.

We improved

While the experiment itself is deemed a failure, I think we learned from it and improved curl in the process. We had to rethink and reassess several implementation details when we aligned HTTP behavior with hyper. libcurl parses and handles HTTP stricter now. Better.

I also believe that hyper benefited from this journey and gained experience and input from us that led to improvements on their end and in their HTTP library, which by extension have benefited the hyper users.

When we started this, even rust itself was not ready; over this time rust has improved and it is today a better language, better prepared to offer something like this. For us or for other projects.

Backends

With this amputation we are back to no separate HTTP/1 backends. We only provide the single native implementation.

I am not against revisiting the topic and the idea of providing alternative backends for HTTP/1 in the future, but I think we should proceed a little differently next time. We also have a better internal architecture now to build on than what we had in 2020 when this attempt started.

Less rust (for now?)

Before this step, we supported three different backends backed up by libraries written in rust. Now we are down to two: rustls (for TLS) and quiche (for QUIC and HTTP/3). Both of them are still marked experimental.

These two backends use better internal APIs in curl and are hooked into libcurl in a cleaner way, which makes them easier to support and less of a burden to maintain over time.

Of course nothing prevents us from adding support for more and other rust libraries in the future. libcurl is a protocol engine using a plethora of different backends for many different protocols and services hooked into its core. Virtually all of those backends could be provided by a rust library.

Thanks

A big thank you goes to Sean and all the others who helped us take it as far as we did. You are great. None of this should put any shade on you.

When

The hyper backend code has been removed in git as of December 21. There will be no traces left of it in the curl 8.12.0 release coming in February 2025.