Category Archives: cURL and libcurl

curl and/or libcurl related

My cookie spec problem

Before RFC 6265 was published in 2011, the world of cookies was a combination of anarchy and guesswork, because the only “spec” that existed was not actually a spec but just a brief text lacking a lot of details.

RFC 6265 brought order to a world of chaos. It was good. It made things a lot better. With this spec, it was suddenly much easier to write compliant and interoperable cookie implementations.

I think these are plain facts. I have written cookie code since 1998 and thus know this from my own personal experience. Since I believe in open protocols and doing things right, I participated in the making of that first cookie spec. As a non-browser implementer, I think I bring a slightly different perspective and angle than many of the other people involved.

Consensus

The cookie spec was published by the IETF and it was created and written in a typical IETF process. Lots of the statements and paragraphs were discussed and even debated. Since there were many people and many opinions involved, not everything I thought should be included ended up stated the way I wanted, but rather in the way the consensus seemed to favor. That is just natural and the way of the process. Everyone involved accepts this.

I have admitted defeat, twice

The primary detail in the spec, or should I say one of the important style decisions, is one that I disagree with. A detail that I have tried to bring up again when the cookie spec was up for a revision and a new draft was made (still known as 6265bis since it has not become an official RFC yet). A detail that I have failed to get others to agree with me about to a sufficient degree to have it altered. I have failed twice. The update will ship with this feature as well.

Cookie basics

Cookies are part of (all versions of) HTTP but are documented and specified in a separate spec. It is a classic client-server setup where Set-Cookie: response headers are sent from a server to a client, the client stores the received cookies, and the client sends them back to servers according to a set of rules and matching criteria using the Cookie: request header.
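A made-up example of the exchange, with an invented cookie name and value: the server sets the cookie in a response, and the client sends it back in a subsequent request.

```
HTTP/1.1 200 OK
Set-Cookie: session=abc123; Path=/; Secure; HttpOnly

GET /account HTTP/1.1
Host: example.com
Cookie: session=abc123
```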

Set-Cookie

This is the key header involved here. So how does it work? What is the syntax for this header that we all need to learn, so that we can figure out how servers and clients should be implemented to do cookies interoperably?

As with most client-server protocols, one side generates this header, the other side consumes it. They need to agree on how this header works.

My problem is two

The problem is that this header’s syntax is defined twice in the spec. Differently.

Section 4.1 describes the header from a server perspective while section 5.2 does it from a client perspective.

If you, like me, have implemented HTTP for almost thirty years, you are used to reading protocol specifications, and in particular HTTP related specifications. HTTP has numerous headers described and documented. No other HTTP document describes the syntax for a header field differently in separate places. Why would they? They are just single headers.

This double-syntax approach comes as a surprise to many readers. I have many times discussed cookie syntax with people who have read the 6265 document but only stumbled over one of the two places, and then walked away with only a partial understanding of the syntax. I don’t blame them.

The spec insists that servers should send a rather conservative Set-Cookie header, but knowing what the world looks like, it simultaneously recommends that clients be much more liberal about the same header, because servers might not be as conservative as the spec tells them to be. Two different syntaxes.

The spec tries to be prescriptive for servers: thou shalt do it like this. But we all know that cookies were wilder than this at the time 6265 was published, and because we know servers will not stick to these “new” rules, a client cannot trust that servers are that nice but instead needs to accept a broader set of data. So clients are told to accept much more. A different syntax.
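To make the difference concrete, consider a made-up header like the one below. The section 4.1 server grammar does not allow whitespace or commas inside a cookie value, so a compliant server must not send this. The section 5.2 client algorithm simply takes everything up to the next semicolon as the value, so a compliant client accepts it just fine:

```
Set-Cookie: prefs=a b,c; Max-Age=3600
```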

Servers do what works

As the spec tells clients to accept a certain syntax, and widely deployed clients and cookie engines gladly accept this syntax, there is no incentive for servers to change. The “do this if you are a good server” instruction serves as an ideal, but there is no particularly good way to push anyone in that direction, because it works perfectly well to use the extended syntax that the spec says clients need to accept.

A spec explaining what is used

What do I want? I want the Set-Cookie header to be described in a single place in the spec, with a single unified syntax: the syntax that is used and that is interoperable on the web today.

It would probably even make the spec shorter, it would remove confusion and it would certainly remove the risk that people think just one of the places is the canonical syntax.

Will I bring this up again when the cookie spec is due for refresh again soon? Yes I will. Because I think it would make it a much better spec.

Do I accept defeat and accept that I am on the losing side of an argument when nobody else seems to agree with me? Yes to that as well. Just because I think like this in no way means that it is objectively right or that this is the way to go in a future cookie spec.

Adding curl release candidates

Heading towards curl release number 266, we have decided to spice up our release cycle with release candidates, in an attempt to help us catch regressions earlier.

It has become painfully obvious to us in the curl development team that over the past few years we have done several dot-zero releases in which we shipped quite terrible regressions. Several times those regressions have been so bad or annoying that we felt obligated to do quick follow-up releases a week later to reduce friction and pain among users.

Every such patch release has caused pain in our souls and has served as proof that we to some degree failed in our mission.

We have thousands of tests. We run several hundred CI jobs to verify them for every change. We simply have too many knobs, features, build configs, users and combinations of them all to be able to catch all possible mistakes ourselves.

Release candidates

Decades ago we sometimes did release candidates, but we stopped. We have instead shipped daily snapshots, which is basically what a release would look like, packaged every day and made available. In theory this should remove the need and use of release candidates as people can always just get the latest snapshots, try those out and report problems back to us.

We are also acutely aware of the fact that only releases get tested properly.

Are release candidates really going to make a difference? I don’t know, but I figure it is worth a shot. Maybe it is a matter of messaging: gathering the troops around these specific snapshots and calling out the need for testing might make it happen, at least to some extent.

Let’s attempt this for a while and then come back in a few years to evaluate whether it seems to have helped or otherwise improved the regression rate.

Release cycle

We have a standard release cycle in the curl project that is exactly eight weeks. When things run smoothly, we ship a new release on a Wednesday every 56 days.

The release cycle is divided into three periods, or phases, that control what kind of commits we maintainers are permitted to merge. Rules to help us ship solid software.

Immediately after a release, we have a ten-day cool down period during which we absorb reactions and reports from the release. We only merge bugfixes, and we are prepared to do a patch release if we need to.

Ten days after the release, we open the feature window, in which we allow new features and changes into the project. The larger things: innovations, features etc. These are typically the most risky things that may cause regressions. This is a three-week period, and changes that do not get merged within this window get another chance in the next cycle.

The longest phase is the feature freeze, which kicks in twenty-five days before the pending release. During this period we only merge bugfixes. It is intended to calm things down again and smooth out all the frictions and rough corners we can find, to make the pending release as good as possible.

Adding three release candidates

The first release candidate (rc1) is planned to ship on the same day we enter feature freeze. From that day on, there will be no more new features before the release so all the new stuff can be checked out and tested. It does not really make any sense to do a release candidate before that date.

We will highlight this release candidate and ask everyone who can (and wants to) to test it out and report every possible issue they find with it. This should be the first good opportunity to catch regressions caused by the new features.

Nine days later we ship rc2. This is done no matter what bug reports we got on rc1 or what bugs are still pending. This candidate will have additional bugfixes merged.

The final and third release candidate (rc3) is then released exactly one week before the pending release. A final chance to find nits and perfect the pending release.
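Put together, a cycle that runs on schedule looks like this (the day numbers simply derived from the phases described above):

```
day 0:     release
day 0-10:  cool down - bugfixes only
day 10-31: feature window - features and changes allowed
day 31:    feature freeze begins, rc1 ships
day 40:    rc2
day 49:    rc3
day 56:    next release
```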

I hope I don’t have to say this, but you should not use the release candidates in production; they may contain more bugs than a regular curl release normally does.

Technically

The release candidates will be created exactly like a real release, except that there will not be any tags set in the git repository and they will not be archived. The release candidates are automatically removed after a few weeks.

They will be named curl-X.Y.Z-rcN, where X.Y.Z is the version of the pending release and N is the release candidate number. Running “curl -V” on such a build will show “X.Y.Z-rcN” as the version. The libcurl includes will say it is version X.Y.Z, so that applications can test preprocessor conditionals etc. exactly as they will work in the real X.Y.Z release.
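A minimal sketch of such a conditional, assuming hypothetically that the pending release is 8.13.0:

```c
#include <curl/curlver.h>

/* hypothetical example: 0x080d00 encodes version 8.13.0 */
#if LIBCURL_VERSION_NUM >= 0x080d00
/* rely on something that ships in 8.13.0 */
#endif
```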

You can help!

You can most certainly help us here by getting one of the release candidates when they ship, trying it out in your use cases, your applications, your pipelines or whatever, and letting us know how it runs.

I will do something on the website to highlight the release candidates once there is one to show, to help willing contributors find them.

The curl roadmap webinar 2025

On March 6 2025 at 18:00 UTC, I am doing a curl roadmap webinar, talking through a set of features and things I want to see happen in curl during 2025.

Figure out my local time.

This is an opportunity for you to both learn about the plans but also to provide feedback on said ideas. The roadmap is not carved in stone, nor is it a promise that these things truly will happen. Things and plans might change through the year. We might change our minds. The world might change.

The event will be done in parallel over Twitch and Zoom. To join the Zoom version you need to sign up for it in advance. To join the live-streamed Twitch version, you just need to show up.

Sign up here for Zoom

You will be able to ask questions and provide comments using either channel.

Recording

A second curl distro meeting 2025

We are doing a rerun of last year’s successful curl + distro online meeting. A two-hour discussion, meeting, workshop for curl developers and curl distro maintainers. Maybe this is the start of a new tradition?

2025 event details

Last year I think we had a very productive meeting that led to several good outcomes. In fact, just seeing some faces and hearing voices from involved people is good and helps to enhance communication and cooperation going forward.

The objective for the meeting is to make curl better in distros. To make distros do curl better. To improve curl in each and every way we think we can, together.

Everyone who feels this is a subject they care about is welcome to join. The meeting is planned to take place in the early evening European time, early morning US west coast time, with the hope that this covers a large enough share of curl interested people.

The plan is to do this on April 10, and all the details, planning and discussion items are kept on the dedicated wiki page for the event.

Feel free to add your own proposed discussion items, and if you feel inclined, add yourself as an intended participant. Feel free to help make this invite reach the proper people.

See you in April.

curl website traffic Feb 2025

Data without logs sure leaves us open to speculation.

I took a quick look at what the curl.se website traffic situation looks like right now. Just as a curiosity.

Disclaimer: we don’t log website visitors at all and we don’t run any web analytics on the site, so we basically don’t know a lot about who does what on the site. This is done both for privacy reasons and for practical reasons. Managing logs for this setup is work I would rather avoid doing and paying for.

What we do have is a website that is hosted (fronted) by Fastly on their CDN network, and as part of that setup we have an admin interface that offers accumulated traffic data. We get some numbers, but without details and specifics.

Bandwidth

Over the last month, the site served 62.95 TB. That makes it average over 2 TB/day. On the most active day in the period it sent away 3.41 TB.

Requests

At 12.43 billion requests, that makes the average request transfer size 5568 bytes.

Downloads

Since we don’t have logs, we can’t count curl downloads perfectly. But we do have stats for request frequency for objects of different sizes from the site, and in the 1MB-10MB category we basically only have curl tarballs.

1.12 million such objects were downloaded over the last month. 37,000 downloads per day, or about one curl tarball downloaded every other second around the clock.

Of course most curl users never download it from curl.se. The source archives are also offered from github.com and users typically download curl from their distro or get it installed with their operating system etc.

But…?

The average curl tarball size from the last 22 releases is 4,182,317 bytes. 3.99 MB.

1.12 million x 3.99 MB is only 4,362 gigabytes. Not even ten percent of the total traffic.

Even if we count the average size of only the zip archives from recent releases, 6,603,978 bytes, it only makes 6,888 gigabytes in total. Far away from the almost 63 terabytes total amount.

This, combined with low average transfer size per request, seems to indicate that other things are transferred off the site at fairly extreme volumes.

Origin offload

99.77% of all requests were served by the CDN without reaching the origin site. I suppose this is one of the benefits of us having mostly a static site without cookies and dynamic choices. It allows us to get a really high degree of cache hits and content served directly from the CDN servers, leaving our origin server only a light load.

Regions

Fastly is a CDN with access points distributed over the globe, and the curl website is anycasted, so the theory is that users access servers near them, in the same region. If we assume this works, we can see where most of the traffic to the curl website comes from.

The top-3:

  1. North America – 48% of the bandwidth
  2. Europe – 24%
  3. Asia – 22%

TLS

Now I’m not the expert on how exactly the TLS protocol negotiation works with Fastly, so I’m guessing a little here.

It is striking that 99% of the traffic uses TLS 1.2. It seems to imply that a vast amount of it is not browser-based, as all popular browsers these days mostly negotiate TLS 1.3.

HTTP

Seemingly agreeing with my TLS analysis, the HTTP version distribution also seems to point to a vast amount of the traffic not being humans in front of browsers. Browsers prefer HTTP/3 these days, and if that causes problems they use HTTP/2.

98.8% of the curl.se traffic uses HTTP/1, 1.1% use HTTP/2 and only the remaining tiny fraction of less than 0.1% uses HTTP/3.

Downloads by curl?

I have no idea how large a share of the downloads are actually done using curl. A fair share is my guess. The TLS + HTTP data imply a huge amount of bot traffic, but modern curl versions would at least select HTTP/2, unless the users guiding it specifically opted not to.

What is all the traffic then?

In the past, we have seen rather extreme traffic volumes from China downloading the CA cert store we provide, but these days the traffic load seems to be fairly evenly distributed over the world. And over time.

According to the stats, objects in the 100KB-1MB range were downloaded 207.31 million times. That is bigger than our images on the site and smaller than the curl downloads. Exactly the range for the CA cert PEM. The most recent one is at 247KB. Fits the reasoning.

A 247 KB file downloaded 207 million times equals 46 TB. Maybe that’s the explanation?

Sponsored

The curl website hosting is graciously sponsored by Fastly.

Changing every line three times

Is there some magic making three times, or even pi, the number of times you need to write code for it to be good?

So what am I talking about? Let’s rewind and start with talking about writing code.

Let’s say you start out by writing a program that is exactly one hundred lines long, and you release your creation to the world. Every line in this program was written just once.

Then someone reports a bug, so you change source code lines. Say you change ten lines, which is the equivalent of adding ten lines and removing ten lines. The total number of lines remains 100, but you have written 110. The average line has then been changed 1.1 times.

Over time, you come to change more lines and if the project survives you probably add new code too. A living software project that is maintained is bound to have had many more lines added than what is currently present in the current working source code branch.

Exactly how many more lines were added than what is currently present?

That is the question I asked myself regarding curl development. If we play with the thought that curl is a decently mature project, as it has been developed for over twenty-five years, maybe the number of times every line has been changed would tell us something?

By counting the number of added lines and comparing with how many lines of code are still present, we know how often lines are changed – on average. Sure, some lines in the file headers and similar are probably rarely changed and some others are changed all the time, but let’s smear out the data and just count the average.
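Put as a formula (my own simplification): change rate = total lines of code ever added / lines of code currently present. In the 100-line example above, that is 110 / 100 = 1.1.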

curl is also an interesting test subject here because it has grown significantly over time. It started out as 180 lines of code in 1996 (then called httpget) and is almost 180,000 lines of code today in early 2025. An almost linear growth in number of lines of code over time, while at the same time having a fierce rate of bugfixes and corrections done.

I narrowed this research down to the product code only, so it does not include test cases, documentation, examples etc. I figured that would be the most interesting bit.

Number of lines of code

First, a look at the raw numbers of how many lines of product code are present at different moments in time during the project’s history.

Added LOC per LOC still present

Then, counting the number of added lines of code (LOC) and comparing with how many lines of code are still present. As you can see here, the change rate is around three for a surprisingly long time.

Already by 2004 we had modified every line on average three times. The rate of changes then goes up and down a little but remains roughly three for years, until sometime around 2015 when the change rate starts to gradually increase, reaching 3.5 in early 2025 – while at the same time the number of lines of code in the project kept growing.

Today, February 18 2025, actually marks the first day ever that this number was calculated to be above 3.5.

What does it mean?

It means that every line in the product source code tree has by now been edited on average 3.5 times. It might be that we have written bad code and need to fix many bugs, or that we go back to refactor and improve existing lines frequently. Probably both.

Of course, some lines are edited and changed far more often than others; the 3.5 is just an average. We have some source lines left in the code that were written before the year 2000 and have not been changed since.

OpenSSL does a QUIC API

But will it satisfy the world?

I have blogged several times in the past about how OpenSSL decided to not ship the API for QUIC a long time ago, even though the entire HTTP world was hoping for it – or even expecting it. OpenSSL rejected the proposal to merge the proposed API and thereby implicitly decided to obstruct wide QUIC and HTTP/3 adoption outside browsers.

The OpenSSL team instead proclaimed that their ambition and goal was to implement their own QUIC stack and offer that to users. The OpenSSL team took a long time to implement it, but has shipped their own stack implementation and API since OpenSSL 3.2 – first released in November 2023.

Lagging behind

In the curl project we have been on top of this game all the way. We made curl capable of using OpenSSL-QUIC as a backend for QUIC (using nghttp3 for the HTTP/3 parts) as soon as that arrived. We immediately reported obvious flaws and omissions in their API and we have worked with the OpenSSL team over time as they have slowly but gradually addressed most (but not all) of our concerns.

Lots of other QUIC stack implementations have spent years in beta state, working out and polishing their implementations and APIs. OpenSSL went fairly quickly into shipping something they say can be used in production.

OpenSSL-QUIC considered experimental

We still consider the curl backend using the OpenSSL-QUIC implementation as experimental and discourage users from using such builds in production. Now primarily because it is a performance and resource hog compared to the competition.

The OpenSSL-QUIC implementation used by curl has been measured to be up to 4 times (!) slower than ngtcp2, while using up to 25 times (!) the amount of memory.

The API is different

The API OpenSSL finally merged on February 10, 2025 is however not the exact same API that was proposed many years ago; the API that BoringSSL, quictls and the others have been offering for many years by now. It is different, and because of this the QUIC implementations that want to use it need to adapt specifically for it and cannot just interchangeably switch between the many OpenSSL forks. It uses a pull concept versus the push model the others provide.

Additionally, the pull request mentions: At the moment our QUIC stack does not support early data. A significant missing feature compared to what the OpenSSL forks (and other libraries) support.

When I asked the OpenSSL team about how they came to ship such a different API to what everyone else offers and what QUIC implementers have been asking for, the given explanation was:

– The API is layered on what we use internally to plug in our QUIC client and server implementations in a clean manner […] We did get some feedback from other QUIC stacks – but the proof will be in actual usage which we expect will occur most likely after the 3.5 release is out.

Rumor has it that they say they spoke to four QUIC stack authors, who all planned to support it.

I don’t know which four stacks this was, and I’m curious to see this happen. Most QUIC stacks are not written to handle different TLS libraries with different APIs, so they would either have to add that support now or switch to a different API altogether. Either way, not a trivial undertaking.

Also, “planning to support it” is easy to say. It could just mean at some point in a distant future.

ngtcp2 is the world’s generic QUIC stack

There are multiple QUIC implementations out there, but the only one that has had the idea of being TLS library independent already from its start, is ngtcp2.

Since curl supports ngtcp2 for QUIC, it can then also work with any of the TLS libraries ngtcp2 supports: BoringSSL, AWS-LC, quictls, wolfSSL, GnuTLS. This fits the curl mindset very well.

ngtcp2 is also the only QUIC stack curl supports right now that is not considered experimental.

Could in theory ease HTTP/3 adoption

Assuming the API works, and assuming ngtcp2 can make use of the API in a fine way, this unexpected change of attitude could be the move that suddenly and for real makes HTTP/3 adoption in and with curl take off. OpenSSL has a firm grip on the TLS library landscape (in particular in the Open Source realm) and a huge share of curl users use it. Starting with the pending OpenSSL version 3.5 in the spring of 2025, building curl with OpenSSL could then possibly enable HTTP/3.
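As an illustration, with example.com as a stand-in URL: a curl build with HTTP/3 enabled lists HTTP3 on the Features line of curl -V, and --http3 asks for the protocol to be used:

```
$ curl -V | grep -i http3
Features: ... HTTP3 ...
$ curl --http3 https://example.com/
```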

So will it? I asked a lead developer of the ngtcp2 library if they are going to work on adding an adaptation for the new OpenSSL 3.5 API.

– I have no plan to do that. OpenSSL QUIC API lacks 0rtt support and its pulling crypto data is not ideal for me, other TLS stacks do not do that. Basically, the current state is inferior than what we have proposed 6 years ago. I will revisit this after OpenSSL adds 0rtt support and revise its pull model.

(Clearly they were not one of the four stack authors OpenSSL talked to.)

While this does of course not prevent someone else from doing the work, even though the 0RTT limitation is something that primarily has to be addressed in the OpenSSL API, it might imply that it will not happen immediately.

In my opinion, I think it could be useful and educational if the OpenSSL project themselves wrote that adaptation for ngtcp2 to “dogfood” their own API.

curl

An attempt to illustrate the QUIC stack situation in current curl is shown below.

Currently curl supports four backends, out of which one (msh3) is scheduled for removal later this year, while a fifth (the kernel module version) is still only proposed for future inclusion – waiting for the kernel module to actually get adopted into the Linux kernel.

This leaves three actively developed backends, out of which one (ngtcp2) is the one we recommend and push for. The quiche and the OpenSSL-QUIC ones are still experimental.

The new OpenSSL API I discuss in this blog post is the one that would populate the lower right box, providing a TLS API for the QUIC library running on top of it. Hence me putting the red “happening?” text for this puzzle piece.

The red column second from the right is the OpenSSL-QUIC solution, using OpenSSL’s QUIC implementation.

Early days

It has not even been a week since this new API was merged into OpenSSL’s git repository, so it is far too early to give any predictions. Presumably it won’t even be used much for real by others until it gets shipped in a public release, planned to happen in April 2025.

Update

On February 26, there was another OpenSSL update: the QUIC API now offers 0-RTT support and they say this will be part of what ships in 3.5.

curl 8.12.1

This is a quick follow-up patch release due to the number of ugly regressions in the 8.12.0 release.

Release presentation

Numbers

the 265th release
0 changes
8 days (total: 9,827)

65 bugfixes (total: 11,428)
67 commits (total: 34,180)
0 new public libcurl function (total: 96)
0 new curl_easy_setopt() option (total: 306)

0 new curl command line option (total: 267)
25 contributors, 14 new (total: 3,332)
34 authors, 18 new (total: 1,341)
0 security fixes (total: 164)

Bugfixes

libcurl

  • asyn-thread: fix build with CURL_DISABLE_SOCKETPAIR
  • asyn-thread: fix the returned bitmask from Curl_resolver_getsock
  • asyn-thread: survive a c-ares/HTTPSRR channel set to NULL
  • content_encoding: #error on too old zlib
  • imap/pop3/smtp: TLS upgrade fixes
  • include necessary headers for inet_ntop/inet_pton
  • drop support for libssh older than 0.9.0
  • netrc: return code cleanup, fix missing file error
  • openssl-quic: ignore ciphers for h3
  • openssl: fix out of scope variables in goto
  • vtls: fix multissl-init
  • vtls: eliminate ‘data->state.ssl_scache’
  • wakeup_write: make sure the eventfd write sends eight bytes

tool

  • tool_ssls: switch to tool-specific get_line function

scripts

  • build: add tool_hugehelp.c into IBMi build
  • configure/cmake: check for realpath
  • configure/cmake: set asyn-rr a feature only if httpsrr is enabled
  • runtests: fix the disabling of the memory tracking
  • runtests: quote commands to support paths with spaces

docs

  • CURLOPT_SSH_KNOWNHOSTS.md: strongly recommend using this
  • CURLSHOPT_SHARE.md: adjust for the new SSL session cache
  • SPONSORS.md: clarify that we don’t promise goods or services

disabling cert checks: we have not learned much

And by that I mean the global “we” as in the world of developers.

In the beginning there was SSL

When I first learned about SSL and how to use it in the mid to late 1990s, it took me a while to realize and understand the critical importance of having the client verifying the server’s certificate in the handshake. Once I had understood, we made sure that curl would default to doing the check correctly and refuse connecting if the certificate check fails.

Since curl and libcurl 7.10 (released in October 2002) we verify server certificates by default. Today, more than twenty-two years later, there should realistically be virtually no users left using a curl version that does not verify certificates by default.

What’s verifying

The standard way to verify a TLS server certificate is by A) checking that it is signed by a trusted certificate authority (CA) and B) that the cert was created for the thing you interact with; that the domain name is listed in the certificate.

Optionally, you can opt to “pin” a certificate which then verifies that the certificate is the one that corresponds to a specific hash. This is generally considered more fragile but avoids the use of a “CA store” (a set of certificates for the certificate authorities “you” trust) needed to verify the digital signature of the server certificate.
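In libcurl terms, a minimal sketch of pinning with CURLOPT_PINNEDPUBLICKEY could look like this; the sha256// value is a made-up placeholder for the base64 encoded hash of the server’s public key:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* verify that the server's public key hashes to this value;
       the hash below is a made-up placeholder */
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY,
                     "sha256//base64hashofpublickeygoeshere=");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```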

Skipping means insecure

Skipping the certificate verification makes the connection insecure, because if you do not verify, there is nothing that prevents a middle man from sitting between you and the real server. Or from just faking being the real server.

Challenges

If you try to use the production site’s certificate in your development environment, you might connect to the server using a different name and then the verification fails.

If you have an active middle man intercepting and wanting to snoop on the TLS traffic, it needs to provide a different certificate and unless that can get signed by a CA you trust, the verification fails.

If you have an outdated or maybe no CA store at all, then the verification fails.

If the server does not update its certificate correctly, it might expire and then the verification fails. Similarly, in order to do a correct verification your client needs a clock that is at least roughly in sync with reality or the verification might fail.

Verification also takes more time compared to how fast it is to just skip the entire step. Sometimes, and to some, that is weirdly enticing.

And yet all curl and libcurl documentation for this strongly discourages users from disabling the check.

A libcurl timeline

curl added support for SSL in April 1998 (years before they renamed it TLS). curl makes certificate checks by default since 2002, both the tool and the library. At the time, I felt I was a little slow to react but at least we finally made sure that curl users would do this check by default.

Ten years later, in October 2012, there was a paper published called The most dangerous code in the world, in which the authors insisted that the widespread problem of applications not verifying TLS certificates with libcurl existed because “this interface is almost perversely bad”. The problem was apparently libcurl’s API.

The same “fact” would be repeated later, for example in this 2014 presentation, which said that this is our fault because the API (for PHP) looks like it takes a boolean when in reality it does not.

The libcurl API for this

I do not claim that we have the best API in libcurl, but I can say that extremely few libraries can boast an API and ABI stability that comes even close to ours. We have not broken the ABI since 2006. We don’t mind carrying a world on our shoulders that has learned to depend on this and on us. So we don’t change the API, even though it could have been done a little better.

CURLOPT_SSL_VERIFYPEER is a boolean option to ask for server certificate verification against the CA store. It is set TRUE by default, so an application needs to set it to FALSE (0) to disable the check. This option works together with the next one.

CURLOPT_SSL_VERIFYHOST is a separate option to verify that the name embedded in the certificate matches the name in the URL (basically). This option was never a boolean but accepts a number: 0 disables the check and 2 is the maximum check level, with 2 being the default.

Both options are thus by default set to verify, and an application can lessen the checks by changing one or both of them.
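In code, the verifying defaults look like this. Setting the options explicitly is redundant since these are the default values, shown here only for illustration (example.com is a stand-in URL):

```c
#include <curl/curl.h>

static void fetch(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* these are already the defaults, set explicitly for clarity */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
}
```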

Adaptations

After that most dangerous paper was published in 2012, basically saying we were worthless – without the authors ever telling us or submitting an issue or pull request – we changed how CURLOPT_SSL_VERIFYHOST worked in the 7.28.1 release, shipped in December 2012.

Starting then, we made setting the option to 1 an error (and it would just leave the original value untouched). Before that update, setting VERIFYHOST to 1 was a debug-like mode that made libcurl output warnings on mismatches but still let the connection through. A silly mode to offer.

In 2019 we tweaked the VERIFYHOST handling a little further and made the value 1 and 2 do the same thing: verify the name.

I have no idea what the authors of that 2012 paper would think about this API tweak, but at least the options are now two proper booleans.

I did not think the authors were right when they originally published that paper, and yet we improved the API a little. I dare to claim that the problem with disabled certificate checks is not because of a bad libcurl API.

curl

The curl tool is of course itself a libcurl-using application, and it offers the --insecure (-k) option, which switches off both of the libcurl options mentioned above. Actually using it beyond testing and triaging is also strongly discouraged.
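A sketch of what fixing a failing verification properly can look like, instead of bypassing it; the CA bundle path here is a made-up placeholder:

```
# discouraged: skips certificate verification entirely
curl -k https://example.com/

# better: point curl at the CA store to verify against
curl --cacert /path/to/ca-bundle.crt https://example.com/
```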

Other layers on top

libcurl is itself used by a lot of frameworks and languages that expose the options to their respective users. Often they then even use the same option names. We have over 60 documented language bindings for libcurl.

For example, the PHP/CURL binding is extremely popular and well used and it has the options provided and exposed using the exact same names, values and behavior.

Disabling the checks

More than twenty-two years of having this enabled by default. More than twelve years since the most dangerous paper. After countless articles on the topic. Everyone I talk to knows that we all must verify certificates.

In almost all cases, you can fix the failed verification the proper way instead of disabling the check. It is just usually a little more work.

State of checks using libcurl today

I searched GitHub on February 10 2025 for “CURLOPT_SSL_VERIFYPEER, FALSE” and it quickly showed me some 140,000 matching repositories. Sure, not all these matches are bad uses, since the option can be set conditionally etc. It can also be done using other bindings with different option names that this search does not catch. Or they might use pinning, which is also not caught by this simple search term.

Searching for “CURLOPT_SSL_VERIFYPEER, 0” shows 153,000 additional matches.

A quick walk-through shows that there is a lot of genuine, mostly sloppy, certificate-check-disabling curl-using code among these matches.

We could fool ourselves into thinking that the state of certificate check disabling is better in modern software in wide use, made by big teams.

A quick CVE search immediately found several security vulnerabilities for exactly this problem published only last year:

  • CVE-2024-32928 – The libcurl CURLOPT_SSL_VERIFYPEER option was disabled on a subset of requests made by Nest production devices.
  • CVE-2024-56521 – An issue was discovered in TCPDF before 6.8.0. If libcurl is used, CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER are set unsafely.
  • CVE-2024-5261 – In affected versions of Collabora Online, in LibreOfficeKit, curl’s TLS certificate verification was disabled (CURLOPT_SSL_VERIFYPEER of false).

If I was into bug-bounties, I would know where to start.

What do we do?

Clearly, this is work that never gets complete or done; it might arguably actually get worse as the volume of software grows. We need to keep telling people to fix this. To stop encouraging others to do wrong. To lead by good example. To provide good documentation and snippets for people to copy from.

I took a very tiny step and reported a bug against a piece of documentation that seemed to encourage disabling the check. If we all submit a bug or two when we see these problems, things might gradually improve.

When/if you submit bug reports as well, please remember to stay polite, friendly and to the point. Explain why disabling the check is bad. Why keeping the check is good.

Rinse and repeat. Until the end of time.

curl 8.12.0

Release presentation

Numbers

the 264th release
8 changes
56 days (total: 9,819)

244 bugfixes (total: 11,417)
367 commits (total: 34,180)
2 new public libcurl function (total: 96)
0 new curl_easy_setopt() option (total: 306)

1 new curl command line option (total: 267)
65 contributors, 34 new (total: 3,332)
34 authors, 18 new (total: 1,341)
3 security fixes (total: 164)

Security

CVE-2025-0167: netrc and default credential leak. When asked to use a .netrc file for credentials and to follow HTTP redirects, curl could leak the password used for the first host to the followed-to host under certain circumstances. This flaw only manifests itself if the netrc file has a default entry that omits both login and password. A rare circumstance.

CVE-2025-0665: eventfd double close. libcurl would wrongly close the same file descriptor twice when taking down a connection channel after having completed a threaded name resolve.

CVE-2025-0725: gzip integer overflow. When libcurl is asked to perform automatic gzip decompression of content-encoded HTTP responses with the CURLOPT_ACCEPT_ENCODING option, using zlib 1.2.0.3 or older, an attacker-controlled integer overflow would make libcurl perform a buffer overflow. There should be virtually no users left using such an old and vulnerable zlib version.

Changes

  • curl: add byte range support to --variable reading from file
  • curl: make --etag-save acknowledge --create-dirs
  • curl: add ‘time_queue’ variable to -w
  • getinfo: provide info which auth was used for HTTP and proxy
  • openssl: add support to use keys and certificates from PKCS#11 provider
  • QUIC: 0RTT for gnutls via CURLSSLOPT_EARLYDATA
  • vtls: feature ssls-export for SSL session im-/export
  • hyper: dropped support

Bugfixes

Some of the bugfixes to highlight.

libcurl

  • acknowledge CURLOPT_DNS_SERVERS set to NULL
  • fix CURLOPT_CURLU override logic
  • initial HTTPS RR resolve support
  • ban use of sscanf()
  • conncache: count shutdowns against host and max limits
  • support use of custom libzstd memory functions
  • cap cookie expire times to 400 days
  • parse only the exact cookie expire date
  • include the shutdown connections in the set curl_multi_fdset returns
  • easy_lock: use Sleep(1) for thread yield on old Windows
  • ECH: update APIs to those agreed with OpenSSL maintainers
  • fix ‘time_appconnect’ for early data with GnuTLS
  • HTTP/2 and HTTP/3: strip TE request header
  • mbedtls: fix handling of blocked sends
  • mime: explicitly rewind subparts at attachment time.
  • fix mprintf integer handling in float precision
  • terminate snprintf output on windows
  • fix curl_multi_waitfds reporting of fd_count
  • fix return code for an already-removed easy handle from multi handle
  • add an ssl_scache to the multi handle
  • auto-enable OPENSSL_COEXIST for wolfSSL + OpenSSL builds
  • use SSL_poll to determine writeability of OpenSSL QUIC streams
  • free certificate on error with Secure Transport
  • fix redirect handling to a new fragment or query (only)
  • return “IDN” feature set for winidn and appleidn

scripts

  • numerous cmake improvements
  • scripts/mdlinkcheck: markdown link checker

curl tool

  • return error if etag options are used with multiple URLs
  • accept digits in --form type= strings
  • make --etag-compare accept a non-existing file

docs

  • add INFRASTRUCTURE.md describing project infra

Next

The next release is probably going to be curl 8.13.0 and if things go well, it ships on April 2, 2025.