the 257th release
8 changes
56 days (total: 9,560)
220 bug-fixes (total: 10,271)
348 commits (total: 32,280)
1 new public libcurl function (total: 94)
1 new curl_easy_setopt() option (total: 305)
1 new curl command line option (total: 259)
84 contributors, 41 new (total: 3,173)
49 authors, 20 new (total: 1,272)
0 security fixes (total: 155)
Download the new curl release from curl.se as always.
Release presentation
Security
It feels good to be able to say that this time around we do not have a single security vulnerability to announce and we in fact do not have any in the queue either.
Some of the bugfixes from this cycle that might be worth noticing:
dist and build
reproducible tarballs. I will do a separate post with details later, but it is now easy for anyone who wants to generate an identical copy and verify what we ship.
docs/RELEASE-TOOLS.md into the tarball. This documents the tools and versions used to generate the files included in the tarball that are not present in git.
drop MSVC project files for recent versions. If you need project files for those versions, cmake can generate them for you.
configure: fix the HAVE_IOCTLSOCKET_FIONBIO test for gcc 14, which is pickier by default and would therefore always fail the check.
add -q as first option when invoking curl for tests, to reduce the risk that a ~/.curlrc file ruins things.
fix make install with configure --disable-docs
tool
make --help adapt to the terminal width. Makes it easier on the eye when the terminal is wider.
limit rate unpause for -T . uploads. Avoids busy-looping
curl output warning for leading unicode quote character. Because it seems like a fairly common mistake when people copy and paste command lines from random sources
don’t truncate the etag save file by default. A regression less.
TLS
bearssl: use common code for cipher suite lookup
mbedtls: call mbedtls_ssl_setup() after RNG callback is set. Otherwise, more recent versions of mbedTLS will just return error.
mbedtls: support TLS 1.3. If you use a new enough version.
openssl: do not set SSL_MODE_RELEASE_BUFFERS. This uses slightly more memory, but makes fewer memory allocation calls.
wolfssl: plug memory leak in wolfssl_connect_step2()
bindings
openldap: create LDAP URLs correctly for IPv6 addresses. Doing LDAP with numerical IPv6 addresses in the URL simply did not work previously.
quiche: expire all active transfers on connection close
quiche: trust its timeout handling
libcurl
fix curl_global_cleanup crash in Windows. A regression coming from the introduction of the async name resolver function.
brotli and others, pass through 0-length writes
ignore duplicate chunked encoding. Apparently some sites do this and browsers let them so we need to let it slide…
CURLINFO_REQUEST_SIZE: fixed
ftp: add tracing support. Gives us better tooling to track down FTP problems.
http2: emit RST when client write fails. Previously it would just silently leave the stream there…
http: reject HTTP major version switch mid connection. This should of course never happen, but if it does, curl will error out correctly.
multi: introduce SETUP state for better timeouts. This adds a proper separation between when an existing transfer is retried and when the state machine is restarted because it is made into a new transfer.
multi: timeout handles even without connection. They would previously often be exempted from checks and would linger for too long until stopped.
fix handling of paused upload on completed download
do not URL decode proxy credentials
allow setting port number zero. Remember this old post?
In the 2015 time frame I had come to the conclusion that the curl logo could use modernization and I was toying with ideas of how it could be changed. The original had served us well, but it definitely had a 1990s era feel to it.
On June 11th 2015, I posted this image in the curl IRC channel as a proof of concept for a new curl logo idea I had, based on the fact that curl works with URLs and all the URLs curl supports have the colon slash slash separator. Obviously I am not a designer, so it was rough. This was back in the day when we still used this logo:
Frank Gevarts had a go at it. He took it further and tried to make something out of the idea. He showed us his tweaked take.
When we met up at the following FOSDEM at the end of January 2016, we sat down together and discussed the logo idea a bit to see if we could make it work somehow. Left from that exercise is this version below. As you can see, basically the same one. It was hard to make it work.
Later that spring, I was contacted by Soft Dreams, a design agency, who offered to help us design a new logo at no cost to us. I showed them some of these rough outlines of the colon slash slash idea and we did some back-and-forth to see if we could make something work with it, but we could not figure out a way to get the colon slash slash sequence into the word curl in a way that would look good. It just kept on looking different kinds of weird. Eventually we gave that up and ended up putting it after the word, making it look like curl is a URL scheme. That ended up much easier and was ultimately the better and right choice for us. The new curl logo was made public in May 2016. Made by Adrian Burcea.
Just months later in 2016, Mozilla announced that they were working on a revamp of their logo. They made several different design candidates and there was a voting process during which they would eventually pick a winner. One of the options used colon slash slash embedded in the name, and during the process a number of people highlighted the fact that the curl project had just recently changed its logo to use the colon slash slash.
In the Mozilla all-hands meeting in Hawaii in December 2016, I was approached by the Mozilla logo design team who asked me if I (we?) would have any issues with them moving forward with the logo version using the colon slash slash.
I had no objections. I think that was the coolest of the new logo options they had and I also thought that it sort of validated our idea of using the symbols in our logo. I was perhaps a bit jealous of how Mozilla is a better word for actually integrating the symbol into the name… the way we tried so hard to do for curl, but had to give up on.
You know Tor, but do you know SOCKS5? It is an old and simple protocol for setting up a connection and when using it, the client can decide to either pass on the full hostname it wants to connect to, or it can pass on the exact IP address.
(SOCKS5 is by the way a minor improvement of the SOCKS4 protocol, which did not support IPv6.)
When you use curl, you decide if you want curl or the proxy to resolve the target hostname. If you connect to a site on the public Internet it might not even matter who is resolving it as either party would in theory get the same set of IP addresses.
The .onion TLD
There is a concept of “hidden” sites within the Tor network. They are not accessible on the public Internet. They have names in the .onion top-level domain. For example, the search engine DuckDuckGo is available at https://duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion/.
.onion names are used to provide access to end to end encrypted, secure, anonymized services; that is, the identity and location of the server is obscured from the client. The location of the client is obscured from the server.
To access a .onion host, you must let Tor resolve it because a normal DNS server aware of the public Internet knows nothing about it.
This is why we recommend you ask the SOCKS5 proxy to resolve the hostname when accessing Tor with curl.
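As a minimal illustration (a sketch, assuming a Tor client listening on its default SOCKS port, localhost:9050), the proxy scheme decides who resolves the name:

# curl resolves the hostname itself and only passes the resulting IP address to the proxy:
curl --proxy socks5://localhost:9050 https://example.com/

# the proxy resolves the hostname, which is what you want for .onion sites:
curl --proxy socks5h://localhost:9050 https://example.com/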
The proxy connection
The SOCKS5 protocol is clear text, so you must make sure you do not access the proxy over a network, as that would leak the hostname to eavesdroppers. That is why you see the examples above use localhost for the proxy.
You can also step it up and connect to the SOCKS5 proxy over a unix domain socket with recent curl versions.
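Something along these lines might work (a sketch; the socks5h://localhost/&lt;path&gt; proxy syntax and the socket path are assumptions here, and the path depends on how your Tor SocksPort is configured):

# assumed: Tor exposes its SOCKS interface as a unix socket at /run/tor/socks
curl --proxy socks5h://localhost/run/tor/socks https://example.com/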
Sites using the .onion TLD are not on the public Internet and it is pointless to ask your regular DNS server to resolve them. Even worse: if you in fact ask your normal resolver, you practically advertise your intention of connecting to a .onion site and you give the full name of that site to the outsider. A potentially significant privacy leak.
To combat the leakage problem, RFC 7686, The “.onion” Special-Use Domain Name, was published in October 2015, with the involvement and consent of people involved in the Tor project.
It only took a few months after 7686 was published until there was an accurate issue filed against curl for leaking .onion names. Back then, in the spring of 2016, no one took it upon themselves to fix this and it was instead simply added to the queue of known bugs.
This RFC details (among other things) how libraries should refuse to resolve .onion host names using the regular means in order to avoid the privacy leak.
After having stewed in the known bugs list for years, the issue was picked up again in 2023, a pull-request was authored, and when curl 8.1.0 shipped on May 17, 2023, curl refused to resolve .onion hostnames.
Tor still works remember?
Since users are expected to connect using SOCKS5 and hand over the hostname to the proxy, the above-mentioned refusal to resolve .onion addresses did not break the normal Tor use cases with curl.
It turns out there is a group of people who run transparent proxies that automatically “catch” all local traffic and redirect it over Tor. They have a local DNS server that can resolve .onion hostnames and they intercept outgoing traffic to tunnel it through Tor instead.
With this setup, curl no longer works, because it will not send .onion addresses to the local resolver: RFC 7686 tells us we should not.
curl of course does not know when it runs in a presumably safe and deliberate transparent proxy network and when it does not; when a leak is not a leak and when it actually is one.
torsocks
A separate way to access Tor is to use the torsocks tool. Torsocks allows you to use most applications in a safe way with Tor. It ensures that DNS requests are handled safely and explicitly rejects any traffic other than TCP from the application you’re using.
You run it like
torsocks curl https://example.com
Because of curl’s new .onion filtering, the above command line works fine for “normal” hostnames but no longer for .onion hostnames.
Arguably, this is less of a problem because when you use curl you typically don’t need to use torsocks since curl has full SOCKS support natively.
Option to disable the filter?
In the heated discussion thread we are told repeatedly how silly we are who block .onion name resolves – exactly in the way the RFC says, the RFC that had the backing and support from the Tor project itself. There are repeated cries for us to add ways to disable the filter.
I am of course sympathetic with the users whose use cases now broke.
A few different ways to address this have been proposed, but the problem is difficult: how would curl or a user know when it is fine to leak a name and when it is not? Adding a command line option to say it is okay to leak would just mean that some scripts would use that option, that users would run it in the wrong conditions, and that your evil malicious neighbors who “help out” would simply add that option when they convince their victims to run an innocent looking curl command line.
The fact that several of the louder voices show abusive tendencies in the discussion of course makes these waters even more challenging to maneuver.
Future
I do not yet know how or where this lands. The filter has now been in effect in curl for a year. Nothing is forever, we keep improving. We listen to feedback and we are of course eager to make sure curl remains an awesome tool and library, also for content over Tor.
Welcome to the 11th annual curl user survey. This is a once a year poll that we ask as many curl and libcurl users as possible to respond to.
>> Take the survey <<
This is in many ways the only real way we get to know what curl users think about all sorts of curl matters. Our website does not log, it has no ads, it uses no cookies and it does no tracking. We do not count downloads, we do not know which man pages are read the most. We mostly ship our code into the void without knowing a whole lot about what people do with it and think about it.
Asking our users directly is in effect our only and best way to get proper answers. So we do this every year, and we ask a lot of questions in the same fashion as last year so that we can better detect trends and changes in the community.
Your help is not only appreciated, it is crucial. Tell us your honest opinion. And if you have friends you know use curl or libcurl, please ask them to submit a set of answers as well. You help us greatly by donating several minutes of your busy life.
>> Take the survey <<
The survey will be up for 14 days, from May 14th until the end of May 28th 2024. It would be awesome to try to beat last year’s submission numbers, when 606 people responded.
On Friday May 3, 2024 I had several of my curl friends over for dinner in my house. An unusually warm and sunny spring day with a temperature reaching twenty degrees centigrade.
The curl up 2024 weekend started excellently and the following morning we all squeezed ourselves into a conference room in downtown Stockholm. I had rented a room in a hotel in the city center for two days.
curl up is never a big meeting/conference but we have in the past sometimes been around twenty-five attendees. This year’s count of fifteen was the smallest so far, but this small set of people includes a number of long-term, well-known curl contributors. It is not a big list of attendees that creates a good curl up.
Swag
We started by making sure every attendee got their needs of curl t-shirts, curl mugs, curl stickers and curl coasters satisfied. The t-shirts of the year are “forest green” with the curl logo in white on the front and the curl symbol slightly larger on the back.
I have spare t-shirts that I intend to distribute to people I meet over the coming year. Before you ask: no, there is no way to buy these.
Recordings
I had tested my external microphone setup at home but it just refused to work when at the venue. We struggled for a while until we had to surrender and fall back to using the built-in microphone in the webcam that we used for recording the video. This is why the sound is low in all recordings we did. A little disappointing. Sorry for this.
I live-streamed the entire event over Twitch. We had in total over 460 unique viewers over the days and at times we had at least 30 sustained concurrent viewers. That meant the online audience was at times twice the size of the one in the room. In spite of the sound issue.
I also noticed that my trusty old laptop was maybe a little weak for this purpose as it struggled to stream and save the recordings at high frame rates.
I brought: two laptops, webcam on stand, mike, mike-stand, power for laptops, cable kit, repair kit, 12 curl mugs, eight packs with different curl stickers, carton coasters, PCB coasters, t-shirts, name tags + pens, two USB-C to HDMI adapters.
Day one
The state of curl 2024
Where are we, what did we do last year or so? Who did the work? How often? How much?
Evolutions
Apparently this is not a real word, but Stefan Eissing pushes for language development in this presentation where he talks about changes and improvements he worked on in curl over the last few years.
Fuzzing curl
James Fuller talks about his work on generating “fun” curl command lines in order to find those that might not be handled correctly.
Implementing parallel testing
Dan Fandrich talks about the journey from serial to fully parallel tests in curl.
curl containers
James is back and talks about where the curl containers are right now.
Security
I talk about the security situation in curl as of right now and the last year.
End of day 1
We topped off this packed day with a twenty minute walk through a sunny Stockholm down to the water where we could sit outside and have a few drinks before we moved over to the restaurant where we ended the evening with a joint dinner. A great first day!
Day two
The nice weather was gone. The temperature dropped ten degrees and the rain poured down most of this day.
HTTP/1/2/3 Performance
Stefan Eissing warms up the day. About his work on HTTP refactors and related performance improvements.
trurl
This is the newcomer in the curl family and I talked a little about what it is and why it exists.
Apple Specialties
Christian has improved curl on Apple devices, which he talks about.
rust in curl
You can build curl to use third party components written in rust. This is where we are now and what might happen next. Or not.
Test clutch
Dan talks about his work on improving curl tests and their reliability.
Future
We don’t know much about the future but there are some plans and there are at least some ideas…
End of curl up 2024
The rest of day two was mostly spent hanging out and talking about life, the universe and various things curl. People started leaving and by five o’clock we shut the door for the last time this time around. We had survived curl up 2024.
After all these talks, discussions, dinners, beers, coffees, challenging questions, brainstorms over 48 hours, I was exhausted and drained of energy. Apart from the recording problem, I think almost everything else in the event organization went as smoothly as we could have wished for. The venue, the food, the coffee etc worked perfectly for us.
Planning ahead for 2025
We will most certainly run another curl up event in 2025 in roughly the same frame of the year as we did now. The idea is then to visit another capital city in Europe. Stay tuned for coming announcements of date and location for that.
Why would you use curl in a container? We actually don’t ask, we just provide the image, but I can think of a few reasons…
it is an easy way to use a modern curl version in a system that otherwise ships an ancient version. So many people are stuck on legacy distros with ancient curl versions.
it is an easy way to make use of a consistent, fixed version in many places, independently of what particular curl versions those systems otherwise offer (see the sketch after this list)
CI jobs
other elaborate explanations
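For example, pinning one specific modern curl version everywhere could look like this (a sketch; the tag is just an example, and the official image runs the curl tool as its entrypoint):

# print the curl version shipped in that image
docker run --rm curlimages/curl:8.7.1 --version

# run an actual transfer with it
docker run --rm curlimages/curl:8.7.1 -sS https://example.com/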
Six billion as of now
The official curl docker repository now (as of 06:43 UTC April 24, 2024) reports that the curl container has been pulled more than six billion times. Currently, people seem to be pulling the curl image from docker.com at a rate of 2-3 million pulls per day (about 25 per second).
It shall be noted that a pull does not necessarily imply a download. A pull is a check, and the client may already have the latest version downloaded. It is therefore not equal to six billion downloads.
We started offering docker images to the world with curl 7.65.3, July 19 2019. Six billion pulls in 1832 days makes an average of 38 pulls/second through all this time. Less than five years.
How do I know the pull counter reached six billion? I asked their API:
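Roughly like this (a sketch; the endpoint is Docker Hub’s public v2 repository API and jq just extracts the counter):

# ask Docker Hub for the repository metadata and pick out the pull counter
curl -s https://hub.docker.com/v2/repositories/curlimages/curl/ | jq .pull_count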
We do not pay Docker anything for this service of theirs. They also do not pay anything to us for our service. The Docker Sponsored OSS program lists conditions that might make us disqualified for being part of it, but as long as you don’t tell them I won’t. And hey, at least the first six billion pulls have been served.
Other repositories
You can also opt to pull the container from other repositories like quay and GitHub. I have not included their pull counters in this.
Jan Gampe took things to the next level by actually making this cross-stitch out of the pattern I previously posted online. The flowers really gave it an extra level of charm I think.
As a cross-stitch
As a pillow
This quote is from a comment by an upset user on my blog, replying to one of my previous articles about curl.
Fact check: while curl is my hobby, I also work on curl as a full-time job. It is a business and I serve and communicate with many customers on a daily basis. curl provides service to way more than a billion people. I claim that every human being on the planet that is Internet-connected uses devices or services every day that run curl.
The pattern
curl in San Francisco
Meanwhile, another “curl craft” seen in the wild recently is this ad in San Francisco (photo by diego).
hi, Stytch team member here who worked on the PUT request ad from Daniel’s post — feel free to AMA
TL;DR on ‘why pay to put a curl request on an ad’ is what you all have already said — (1) unique concentration of tech in SF; and (2) to specifically engage software engineers. More on each…
(1) We wouldn’t have run this ad in Sydney, or New York, or LA. SF definitely has an uniquely tech-oriented culture, and in particular has lots of startups in our ideal customer persona (ICP) at Stytch – in this case, engineers building B2B SaaS apps.
But in addition to the people who live in SF, even more software engineers visit the city for conferences, or events, or to fundraise. For example, while our ads are up over the next month, SF will host Stripe Sessions and POSTCon (two entire conferences focused around APIs), plus RSA (security focused).
(2) And although even with that, only a small segment of the pop will understand the ads, those people might be intrigued enough to actually look at them – and our ultimate goal is to get more engineers to look at our code & our docs. Another perk is that engineers can’t use ad blockers if the ad is on a bus shelter 🙂
So that was a bit of the thinking for us – on why SF & why a PUT request on a billboard. We’re also making the physical ads into an anchor for a ‘marketing moment’ for Stytch — so pair offline ads with digital marketing, as well. So if we’re successful, maybe you’ll see more on that, soon.
Here follows a brief description of how you can detect if the curl package would ever try to pull an xz.
xz (and its library liblzma) was presumably selected as a target because it is an often used component and, by extension via systemd, it is often used by openssh in several Linux distros. libcurl is probably an even more widely used software component and, if infected, could potentially serve as an effective vessel to distribute evil into the world.
Conceivably, the xz attackers have infiltrated more than one other Open Source project to cover their bases. Which ones?
No inexplicable binary blobs
First, you can verify that there are no binary blobs stored in git that could host an encrypted attack payload, planted there for the future.
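One rough way to check this yourself is to let file(1) classify every file git tracks and flag anything it does not consider text (a heuristic sketch, not an exhaustive audit):

# list all tracked files, classify them, and show the ones not reported as text
git ls-files -z | xargs -0 file | grep -vE 'text|empty'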
Every file in the curl git repository has a benign meaning and purpose. As part of the products, the documentation, tooling or the test suites etc.
Without any secret “hide-out” in the git repository, you know that any backdoor needs to be provided either in plain code or using some crazy weird steganography. Or get inserted into the tarballs with content not present in git – read on for how to verify that this is not happening.
No disabled fuzzers
The xz attack could have been detected by proper fuzzing (and valgrind use) which is why the attacker made sure to sneakily disable such automated checks of code.
While somewhat hard to verify, you can make sure that no such activities have been done in curl’s fuzzing or in curl’s automated and CI testing.
No hidden payloads in tarballs
In the curl project we generate several files as part of the release process and those files end up in the release tarball. This means that not all files in the tarball are found in the git repository. (Because we don’t commit generated files.)
The generated files are produced with a small set of tools, and these tools use the source code available in git at the release tag. Anyone can check out the same code from that same release tag, make sure to have the corresponding tools (and versions) installed and then generate a version of the tarball themselves – to verify that this tarball indeed becomes an identical copy of the public release.
That process verifies that the tarballs we ship are generated only with legitimate tools and that the release contents originate only from the files present in git. No secret sauce added in the process. No malicious code can get inserted.
Reproducible tarballs
We have recently improved reproducibility as a direct result of the post xz-attack debate, both to make sure that a repeated tarball creation actually produces the exact same result, and to make it easier for others to verify our release tarballs. With more documentation (releases now contain documentation of exactly which tools and versions that generated the tarball) and by making it easier to run the exact same virtual machine and tool setup as the one that created the release. We aim to soon provide a Dockerfile to make this process even smoother.
We also verify tarball reproducibility in a CI job: generating a release tarball with a given timestamp produces the identical binary output when done again for the same timestamp.
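A rough outline of such a verification, assuming the tool versions from docs/RELEASE-TOOLS.md are installed (the script invocation and timestamp handling below are assumptions; the release documentation is the authoritative description):

git clone https://github.com/curl/curl.git && cd curl
git checkout curl-8_8_0        # check out the release tag
./maketgz 8.8.0                # regenerate the tarball (assumed script; bit-exact output also needs the original release timestamp)
sha256sum curl-8.8.0.tar.gz /path/to/downloaded/curl-8.8.0.tar.gz   # the two checksums should match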
Signed tarballs
As an extra detail, everyone can also verify that the released tarballs are in fact shipped by me Daniel personally, as they are always signed with my GPG key as part of the release process. This should at least prove that new releases are made with the same keys as previous ones were, which should with a reasonable probability be me.
The signatures also help verify that the tarballs have not been tampered with in transit, from the point I generated them to the moment they land in your download directory. curl downloads are normally distributed via a third-party CDN which we normally trust of course, but if it would ever be breached or similar, a modified tarball would be detected when the digital signature is verified.
We do not provide checksums for the tarballs simply because providing checksums next to the downloads adds almost no extra verification. If someone can tamper with the tarballs, they can probably update the webpage with a fake checksum as well.
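A minimal sketch of such a check (the key URL below is an assumption; the curl download page documents the correct key to use):

curl -O https://curl.se/download/curl-8.8.0.tar.gz
curl -O https://curl.se/download/curl-8.8.0.tar.gz.asc
curl -s https://daniel.haxx.se/mykey.asc | gpg --import   # assumed location of the signing key
gpg --verify curl-8.8.0.tar.gz.asc curl-8.8.0.tar.gz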
Signed commits
All commits to curl done by me are signed, so you can verify that I did them. Not all committers in the project sign their commits, unfortunately. We hope to increase the share going forward. Over the last 365 days, 73% of the curl commits were signed.
These signatures only verify that the commits were done by a maintainer of the curl project (or someone who controls that account). A maintainer you may not trust, who might not be known under their real name and whose country of residence you do not even know. And of course, even a trusted maintainer can suddenly go rogue.
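You can have git check these signatures yourself, assuming the signing keys are in your keyring:

git log --show-signature -5    # show and verify signatures on the last five commits
git verify-commit HEAD         # verify the signature on a single commit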
Is the content in git benign?
The process above only verifies that tarballs are indeed generated (only) from contents present in git and that they are unaltered from the moment I made them.
How do you know that the content in git does not contain any backdoors or other vulnerabilities?
Without trusting anyone else’s opinions and without just relying on the fact that you can run the test suite, fuzzers and static code analyzers without finding anything, you can review it. Or pay someone else to review it.
We have had curl audited several times by external organizations, but can you trust claimed random audits?
Anonymous contributors
We regularly accept contributions from anonymous and pseudonymous contributors in curl – and we always have. Our policy says that if a contribution is good: if it passes review and all tests run green, we have no reason to deny it – in the name of progress and improvement. That is also why we accept even single-letter typo fixes: even a very small fix is a step in the right direction.
A (to me) surprisingly large number of contributions are done by people who do not state a full real name. They may choose to be anonymous for various reasons – we do not ask. Maybe they fear retaliation if they propose something that ends up buggy? Sometimes people want to hide their affiliation/origin so that their contribution is not associated with the organization they work at. Another reason sometimes mentioned is that women do it to avoid revealing themselves as female. And so on. As I said: we do not ask, so I cannot tell for sure.
Anonymous maintainers
We do not have anonymous maintainers, but we don’t actually have rules against it.
Right now, we have 18 members in the GitHub curl organization with the rights to push commits. I have not met all of them. I have not even seen the faces of all of them. They have all proven themselves worthy of their administrative rights based on their track record. I cannot know if any of them is using a false identity and I do not ask, nor keep track of, in which country they reside. A former top maintainer in the curl project even landed a large amount of changes under a presumed/false name during several years.
If a curl maintainer suddenly goes rogue and attempts to land malicious content, our only effective remedy is review. To rely on the curl community to have eyes on the changes and to test things.
Can curl be targeted?
I think it would be very hard but I can of course not rule it out. Possibly I am also blind to our weaknesses because I have lived with them for so long.
Everyone can help the greater ecosystem by verifying a package or two. If we all tighten all screws just a little bit more, things will get better.
Vulnerabilities
I maintain that planting a backdoor in curl code is so infuriatingly hard to achieve that efforts and energy are probably much rather spent on finding security vulnerabilities and ways to exploit them. After all, we have 155 published past vulnerabilities in curl so far, out of which 42 have been at severity high or critical.
I can be fairly sure that none of those 42 somewhat serious issues was deliberately planted, because just about every one of them was found in code that I personally authored…
People often ask. But I have never seen a backdoor attempt in curl. Maybe that is just me being naive.
We keep track of bugfixes done to curl. All bugfixes ever done. A while back I also went back and populated the lists with details from all the releases of the precursors of curl: httpget and urlget. Each and every change made since November 1996.
The rate of bugfixes has been increasing over the years. I think partly in terms of actual bugs being squashed and fixes being merged, but also partly because we have gotten much better at keeping meticulous logs and at writing better release notes.
A bugfix can be a single letter typo fix in a document, a spell-fix in a source code comment or it can fix a serious security vulnerability. From high to low, from important to a small subtle detail. The counter does not measure value, it is just a counter.
When we shipped the recent curl version, 8.6.0, the counter said 9,888 shipped bugfixes. The other day, when 8.7.0 and 8.7.1 shipped, the counter was upped to surpass 10,000 and now says: 10,051.
These bugfixes happened thanks to 3,134 contributors, out of which 1,252 persons have authored commits merged into the curl source repository.
This journey started with httpget. The first ever release of httpget 0.1 that was made public happened on November 11 1996. Today, that is exactly 10,000 days ago.
We only have git commits stored from late 1999, but that counter is almost at 32,000 now. Making a little less than every third commit ever done a logged bugfix.
Number of bugfixes and bugfix rate over time in the curl project.
How I do the release notes
This is a highly scripted task.
It starts with this: every commit of the RELEASE-NOTES file in git that brings it up to date needs to use the single word “synced” as its commit message. The commit that syncs it.
Further, I have an alias in my ~/.gitconfig file that says:
[alias]
latest = log @^{/RELEASE-NOTES:.synced}..
This allows me to invoke git latest to get a list of the latest changes done in the repository since I most recently synced the RELEASE-NOTES.
Sync
When that list starts to grow, typically roughly every four to ten days or so, I invoke the release-notes.pl script we have in the curl git repository. This script gets all the changes since the previous sync and inserts them into the RELEASE-NOTES file, complete with a correct reference to the associated GitHub issue or pull-request.
The actual bullet point text it inserts comes from the first line of the corresponding commit message. The links come from parsing commit messages and finding keywords and references, following the style the project dictates they should be used in. This is one reason why it is important to write good commit messages following the correct style in the project. It makes the release notes job easier and the results better.
The script does not know what is a change, what is a bugfix or what is not even worthy of mentioning. It just adds all changes to the top of the list of changes (and includes a convenient separator so that it is easy to spot the newly added ones) and the next step for me is then to manually go over the list, delete the ones that are not intended to be mentioned there and move the few changes into the correct section of the release notes.
I run release-notes.pl cleanup which then sorts the lists alphabetically and removes dangling references (which are leftovers from the lines I removed).
Contributors
We keep track of, try to say thanks to and give credit to every contributor that helps out in the project. No matter the size of the contribution. When someone has reported a bug, the reporter is credited in the commit message of the bugfix. We also give credit to co-authors and people assisting in solving the issues etc. To make sure we mention and give credit to the contributors and keep track of them beyond what git itself does.
We can also add names manually to the release notes file, like if we had forgotten to mention them in a commit message. Then I run the contributors.sh script. It reads the list of names currently in the RELEASE-NOTES and then scans all the git changes since the previous sync and generates an updated list of all git authors, committers and everyone else who are credited, and it outputs an updated list (and contributor counter). That updated list is then pasted into RELEASE-NOTES.
In recent years, in a normal eight week release cycle, we typically feature 60 to 80 named contributors in this file. Of course, top contributors in the project tend to get mentioned in just about every release notes file, as they just have to help out and contribute once every 56 days to appear there.
On release days, we update the docs/THANKS file (using the contrithanks.sh script) where all contributors who ever helped out are mentioned and saved for the future. That list of people is also made visible on the thanks page on the curl website.
Counters
At the top of the release notes we have a few counters displayed. It looks similar to:
Public curl releases: 255
Command line options: 258
curl_easy_setopt() options: 304
Public functions in libcurl: 93
Contributors: 3119
After the list of contributors has been pasted into the current release notes, I invoke the delta script, which shows a lot of curl git repository statistics since the previous release tag. That output includes the numbers shown at the top of the release notes, so if they are different now I update the release notes accordingly with the updated data. Most frequently, the contributor counter has been bumped.
Commit
Having now:
included the lists of bugfixes and changes
updated the contributors
updated the counters
the RELEASE-NOTES file is committed to git using “synced” as the commit message. Until it is time to sync it again.
Because of this work, we can offer the pending release notes on the website, as it is the work in progress file with the changes we have already logged that is targeted to be included in the next release.
Release
Of course, on release days I make sure to do a final update so that all the last changes get into the file before release, as the file then ends up in the release tarball, which is locked, signed and stored in the archives.
After a release, I just manually erase the lists from the file and clear the list of names and commit. Then we start rebuilding it again with new stuff in the new release cycle.
the 255th and 256th releases
5 changes
56 days (total: 9,504)
162 bug-fixes (total: 10,050)
246 commits (total: 31,931)
0 new public libcurl function (total: 93)
0 new curl_easy_setopt() option (total: 304)
0 new curl command line option (total: 258)
92 contributors, 56 new (total: 3,133)
37 authors, 15 new (total: 1,252)
4 security fixes (total: 155)
Versions
I first released 8.7.0, but immediately someone pointed out that one of the files in the tarballs was broken, so I fixed the issue, created a new set of tarballs, bumped the version and uploaded the new set. The new release is 8.7.1 but of course it has the same set of changes. We just pretend we did not upload 8.7.0.
Release presentation
Security
CVE-2024-2004: Usage of disabled protocol. (low) When a protocol selection parameter option disables all protocols without adding any then the default set of protocols would remain in the allowed set due to an error in the logic for removing protocols.
CVE-2024-2398: HTTP/2 push headers memory-leak. (medium) When an application tells libcurl it wants to allow HTTP/2 server push, and the amount of received headers for the push surpasses the maximum allowed limit (1000), libcurl aborts the server push. When aborting, libcurl inadvertently does not free all the previously allocated headers and instead leaks the memory.
CVE-2024-2379: QUIC certificate check bypass with wolfSSL. (low) libcurl skips the certificate verification for a QUIC connection under certain conditions, when built to use wolfSSL. If told to use an unknown/bad cipher or curve, the error path accidentally skips the verification and returns OK, thus ignoring any certificate problems.
CVE-2024-2466: TLS certificate check bypass with mbedTLS. (medium) libcurl did not check the server certificate of TLS connections done to a host specified as an IP address, when built to use mbedTLS.
Changes
configure: add --disable-docs flag. This skips the step generating the manpages, which for many people is unnecessary.
CURLINFO_USED_PROXY: return bool whether the proxy was used. Useful when having a filter that only lets some transfers use the proxy.
write-out: add ‘%{proxy_used}’. The same as above but for the tool (see the sketch after this list).
digest: support SHA-512/256. Support more modern digest authentication.
DoH: add trace configuration. Now you get more DoH tracing/logging using the general trace mechanism.
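A quick sketch of the new write-out variable in use (the proxy address here is just an example); it reports whether the proxy was actually used for the transfer:

# proxy configured and used:
curl -x http://127.0.0.1:3128 -sS -o /dev/null -w '%{proxy_used}\n' https://example.com/

# proxy configured but bypassed by the no-proxy filter:
curl -x http://127.0.0.1:3128 --noproxy example.com -sS -o /dev/null -w '%{proxy_used}\n' https://example.com/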
Bugfixes
Some of the bugfixes from this cycle that might be worth noticing:
configure: find libpsl with pkg-config. Makes configure better at finding libpsl and making use of the correct flags and sub-dependencies when linking with it.
configure: find rustls with pkg-config. Similar adjustment but for rustls.
cookie: if psl fails, reject the cookie. A run-time failure should not allow the cookie through.
curl: exit on config file parser errors. We can insist on the config file to be correct as otherwise something unintended might go through.
curl: make --libcurl output better CURLOPT_*SSLVERSION. This option takes a bitmask made out of two separate enum ranges.
file: use xfer buf for file:// transfers. The main effect being that it can use a larger buffer which can make faster transfers.
http: better error message for HTTP/1.x response without status line
https-proxy: use IP address and cert with IP in alt names. When connecting to an HTTPS proxy using an IP address, with a URL that also used an IP address, and those two addresses were of different IP versions, curl would get it wrong.
mprintf: fix format prefix I32/I64 for windows compilers
OpenSSL QUIC: adapt to v3.3.x. Pending improvements in OpenSSL are going to enhance curl’s ability to do HTTP/3 using it.
paramhlp: fix CRLF-stripping files with “-d @file”. curl would do this wrong for line endings consisting of CR only.
rustls: make curl compile with 0.12.0. Adjusted to use the modified APIs.
schannel: fix hang on unexpected server close
sendf: ignore response body to HEAD. A regression made curl complain if a HEAD request would get body data.
smtp: fix STARTTLS. Another regression fixed.
strtoofft: fix the overflow check. The previous overflow check was relying on undefined behavior. This is in code only for platforms without a proper native parser for 64 bit sized numbers.
TLS: start shutdown only when peer did not already close.
curl: only parse etag + content-disposition for 2xx.
curl: accept a blank -w “”
curl: handle non-existing (out of range) short-options
curl: change precedence of server Retry-After time
curl: shorter --help texts. With some polish to make the output look nicer, in particular “curl --help all”.
transfer.c: break receive loop in speed limited transfers, to make libcurl adapt more precisely to the network speed limit set by the application.