On 110 operating systems

In November 2022, after I had been keeping track and adding names to this slide for a few years already, we could boast about curl having run on 89 different operating systems, and only one year later we celebrated reaching 100 operating systems.

This time I am back with another update, and here is the official list of the 110 operating systems that have run curl.

I don’t think curl is unique in having reached this many operating systems, but I think it is a rare thing and I think it is even rarer that we actually have tracked all these names down to have them mentioned – and counted.

Disclaimers

For several of these cases, no patches or improvements were ever sent back to the curl project and we do not know how much or how little work was required to make them happen.

The exact definition of “operating system” in this context is vague, but separate Linux distributions do not count as separate operating systems.

There are probably more systems to include. Please tell me if you have run curl on something not currently mentioned.

AIxCC curl details

At the AIxCC competition at DEF CON 33 earlier this year, teams competed against each other to find vulnerabilities in provided Open Source projects by using (their own) AI powered tools.

An added challenge was that the teams were also tasked with having their tooling generate patches for the found problems, and the competitors could then try to poke holes in those patches, which, if successful, would reduce the score for the patching team.

Injected vulnerabilities

In order to give the teams actual and perhaps even realistic flaws to find, the organizers injected flaws into existing source code. I was curious about how exactly this was done, as curl was one of the projects they used for this in the finals, so I had a look and figured I would let you know, in case you are also curious.

Would your tools find these vulnerabilities?

Other C based projects used for this in the finals included OpenSSL, little-cms, libexif, libxml2, libavif, freerdp, dav1d and wireshark.

The curl intro

First, let’s paste their description of the curl project here to enjoy their heart-warming words.

curl is a command-line tool and library for transferring data with URLs, supporting a vast array of protocols including HTTP, HTTPS, FTP, SFTP, and dozens of others. Written primarily in C, this Swiss Army knife of data transfer has been a cornerstone of internet infrastructure since 1998, powering everything from simple web requests to complex API integrations across virtually every operating system. What makes curl particularly noteworthy is its incredible protocol support–over 25 different protocols–and its dual nature as both a standalone command-line utility and a powerful library (libcurl) that developers can embed in their applications. The project is renowned for its exceptional stability, security focus, and backward compatibility, making it one of the most widely deployed pieces of software in the world. From IoT devices to major web services, curl quietly handles billions of data transfers daily, earning it a reputation as one of the most successful and enduring open source projects ever created.

Five curl “tasks”

There is this website providing (partial) information about all the challenges in the final, or as they call them: tasks. Their site for this is very flashy and cyber I’m sure, but I find it super annoying. It doesn’t provide all the details, but enough to give us some basic insight into what the teams were up against.

Task 9

The organizers wrote a new protocol handler into curl for supporting the “totallyfineprotocl” (yes, with a typo) and within that handler code they injected a rather crude NULL pointer assignment shown below. The result variable is an integer containing zero at that point in the code.
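
The actual injected snippet is not reproduced here, but the pattern described would look something like this hypothetical sketch (the function name is made up, this is not the competition code):

```c
#include <stdint.h>

/* Hypothetical sketch of the kind of flaw described above, not the actual
   injected code: 'result' holds zero, so the cast produces a NULL pointer
   and the assignment crashes. */
static void totallyfine_crash(void)
{
  int result = 0;                   /* zero at this point in the code */
  *(char *)(uintptr_t)result = 1;   /* NULL pointer assignment */
}
```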

Task 10

This task had two vulnerabilities injected.

The first one is an added parser in the HTTP code for the response header X-Powered-by:, where the code copies the header field value into a fixed-size 64-byte buffer, so if the contents are larger than that, it becomes a heap buffer overflow.
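
As an illustration, the described pattern would be something along these lines (a hypothetical sketch, not the actual injected code):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the described flaw: the header value is copied
   into a fixed 64-byte heap buffer without any length check. */
static char *copy_powered_by(const char *value)
{
  char *buf = malloc(64);
  if(buf)
    strcpy(buf, value);  /* heap buffer overflow if value is 64 bytes or longer */
  return buf;
}
```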

The second one is curiously almost a duplicate of task 9, using code for another new protocol handler.

Task 20

Two vulnerabilities. The first one inserts a new authentication method into the DICT protocol code, containing a debug handler/message with a format string vulnerability. The curl internal sendf() function takes printf() formatting options.
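
For illustration, the classic format string mistake looks like this (a hypothetical sketch using plain printf() instead of curl’s internal functions):

```c
#include <stdio.h>

/* Hypothetical illustration of the format string flaw described: a
   user-controlled string passed where a printf-style format is expected. */
static void debug_message(const char *userinput)
{
  printf(userinput);        /* wrong: format specifiers in userinput get interpreted */
  printf("%s", userinput);  /* safe: the input is treated as plain data */
}
```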

The second is hard to understand based on the incomplete code they provide, but the gist of it is that the code uses an array of seconds in text format, which it indexes with the given “current second” without taking leap seconds into account, which would access the stack out of bounds if tm->tm_sec is ever larger than 59:
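
Since the provided code is incomplete, here is only a hypothetical sketch of that pattern, not the actual injected code:

```c
#include <time.h>

/* Hypothetical sketch of the described pattern: a 60-entry stack array is
   indexed by tm_sec, which can legitimately be 60 during a leap second. */
static char second_marker(const struct tm *tm)
{
  char seconds[60] = {0};      /* one slot per second, 0..59 */
  return seconds[tm->tm_sec];  /* reads beyond the stack array when tm_sec > 59 */
}
```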

Task 24

Third time’s the charm? Here’s the maybe not so sneaky NULL pointer dereference in a third made-up protocol handler, quite similar to the previous two.

Task 44

This task is puzzling to me because it is listed as “0 vulnerabilities” and there are no vulnerability details listed or provided. Is this a challenge no one cracked? A flaw on the site? A trick question?

Modern tools find these

Given what I have recently seen modern tools from Aisle and ZeroPath etc can deliver, I suspect lots of tools can find these flaws now. As seen above, they were all rather straightforward and not particularly hidden or deeply layered. I think for future competitions they need to up their game. The caveat of course is that I didn’t look much at the tasks related to other projects; maybe they were harder?

Of course making the problems harder to find will also make more work for the organizers.

I suspect a real obstacle for the teams in finding these issues had to be the amount of other potential issues their tools also found and reported; some rightfully and some not quite as correctly. Remember how ZeroPath gave us over 600 potential issues on curl’s master repository just recently. I have no particular reason to think that the other projects would have fewer, at least not ones of comparable size.

[Addition after first post] I was told that a general idea for how to inject proper and sensible bugs for the competition was to re-insert flaws from old CVEs, as those are genuine problems that actually existed in the project in the past. I don’t know why they ended up not doing this (for curl).

Reports?

I have unfortunately not seen much written in terms of reports and details from the competition from the competing teams. I am still waiting for details on some of their scans on curl.

A royal gold medal

The Royal Swedish Academy of Engineering Sciences (IVA) awards me a gold medal in 2025 for my work on curl. (English version of IVA article)

This academy, established in 1919 by the Swedish king Gustav V, has been awarding great achievers for over one hundred years, and the simple idea behind the awards is, as quoted from their website:

Gold medals are awarded every year to people who, through outstanding deeds, have contributed to creating a better society.

I am of course humbled and greatly honored to have been selected as a recipient of said award this year. To be recognized as someone who has contributed to creating a better society, selected by top people in competition with persons of remarkable track records and achievements. Not too shabby for a wannabe engineer like myself who did not even attend university.

There have been several software and tech related awardees for this prize before, but from what I can tell I am the first Open Source person to receive this recognition by the academy.

Justification

English version:

Daniel Stenberg, software developer, is awarded IVA’s Gold Medal for his outstanding contributions to software development, where he has played a central role in internet infrastructure and open source software. Through his work with curl – a tool now used by billions of devices worldwide – he has enabled reliable and secure data transfer over the internet, not only between traditional computer programmes but also across smartphones, vehicles, satellites and spacecraft.

The original Swedish “motivering”:

Systemutvecklare Daniel Stenberg tilldelas IVAs Guldmedalj för sina insatser inom mjukvaruutveckling där han haft en central betydelse för internetinfrastruktur och fri programvara. Genom sitt arbete med curl, verktyget som i dag används av miljarder enheter världen över, har han möjliggjort tillförlitlig och säker dataöverföring över internet. Inte bara mellan program i traditionella datorer utan allt från smartphones och bilar, till satelliter och rymdfarkoster.

The ceremony

The associated award ceremony, when the physical medal is handed over, happens this Friday at the Stockholm City Hall's Blue Hall, the same venue used for the annual Nobel Prize banquet.

I have invited my wife and my two adult kids to participate in those festivities.

A second medal indeed

Did I not already receive a gold medal? Why yes, I did eight years ago. Believe me, it does not get old. This is something I can get used to. But yes: it is beyond crazy to get one medal in your life. Getting two is simply incomprehensible.

This is also my third award received within this calendar year so I completely understand if you already feel bored by my blog posts constantly banging my own drum. See European Open Source Achievement Award and Developer of the year for the two previous ones.

The medal

I wanted to include a good high resolution image of the medal in this post, but I failed to find one. I suppose I will just have to take a few shots myself after Friday and do a follow-up post!

chart: which host, which protocol

A flow chart describing some of the steps and decisions made within curl when an HTTP URL is provided, regarding hostnames, protocol and port numbers.

This flow chart ignores proxies, authentication considerations and use of unix domain sockets to keep things simpler.

URL

An initial step is of course to extract the hostname part from the URL. The hostname in a URL can be provided as a plain IP address or as a name. If a numerical IPv4 or IPv6 address is not provided in the URL, curl checks if the hostname is provided using IDN (Internationalized Domain Names) and if so, it converts the name into punycode that it can then continue with.

Existing connection

Given the protocol, the hostname and the port number, curl checks if it has an existing connection alive that is suitable for use. Reusing an existing connection is preferred, as it is the fastest way to start the new transfer. Connection reuse is done based on the provided name and not the IP address, so that curl can skip resolving it if there already is a connection available.
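
You can see this on the command line: when a single curl invocation gets several URLs to the same host, the later transfers reuse the connection from the first one. For example (with a placeholder hostname):

curl https://example.com/one https://example.com/two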

–connect-to

When trying to connect to a host, curl first checks if there are any tricks selected, like this option that makes curl actually resolve and connect to hostname B even when asked to connect to host A.
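
For example, a command like this (with made-up hostnames) asks for https://example.com/ but makes curl actually connect to staging.example.com instead:

curl --connect-to example.com:443:staging.example.com:443 https://example.com/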

alt-svc

curl might have a populated alt-svc cache from previous transfers. It is basically a mapping for specific HTTP versions and hostnames over to another HTTP version and hostname for a certain amount of time. This can change hostname A into hostname B.
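
On the command line, the alt-svc cache is enabled by pointing curl to a cache file, something like this (the filename is just a placeholder):

curl --alt-svc altsvc-cache.txt https://example.com/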

–resolve

This is an option that populates the DNS cache with one or more user provided IP addresses for a given hostname.
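
For example, this command (using a documentation address) makes curl use 192.0.2.1 for example.com port 443 without doing any DNS lookup for it:

curl --resolve example.com:443:192.0.2.1 https://example.com/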

DNS cache

Before curl resolves a hostname into a set of IP addresses, it checks if it already has the information in its DNS cache, as that is usually much faster than having to ask for that data again. Entries are typically only kept in this cache for a minute until evicted.
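
In libcurl, the lifetime of these cache entries can be adjusted with the CURLOPT_DNS_CACHE_TIMEOUT option (the default is the one minute mentioned above). A minimal sketch, with a placeholder URL:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* keep resolved addresses in the DNS cache for 120 seconds instead of
       the default 60 */
    curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 120L);
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```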

Resolving

When curl resolves a hostname, it wants the A, AAAA and HTTPS DNS records data. A and AAAA provide a list of IP addresses to try to connect to, and the HTTPS record provides HTTP version information, port number, ECH config and possibly more.

HSTS

curl might also have an HSTS cache, which is another map for when plain HTTP accesses should instead be internally upgraded to HTTPS. This changes the protocol to use and the default port number.
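
On the command line, the HSTS cache is enabled with a cache file, something like this (placeholder filename and hostname). If the host is present in that cache, curl upgrades the request to HTTPS even though the URL says http://:

curl --hsts hsts-cache.txt http://example.com/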

Racing

Depending on what IP versions and HTTP versions the above steps have determined curl should try to use, curl starts a connection race with potentially quite a few parallel connection attempts, each started a little delayed after the previous.

  1. QUIC connect attempt over IPv6 starts first
  2. QUIC connect attempt over IPv4 runs as number two
  3. TCP connect attempt over IPv6 is third in line
  4. TCP connect attempt over IPv4 is the fourth

Of course, if any of them cannot be done or fails, it is immediately skipped and the next one in line starts. Each of them may also trigger the next attempt if the previous one has not connected within a certain time.

The first contender to successfully connect to the host wins and the other attempts are quickly discarded.
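
The delay between the IPv6 and IPv4 attempts can be tuned on the command line with the --happy-eyeballs-timeout-ms option; for example, to shorten it to 100 milliseconds:

curl --happy-eyeballs-timeout-ms 100 https://example.com/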

TLS handshake

If the protocol is HTTPS (which it always is if HTTP/3 is selected), the TLS handshake is performed after the TCP connection is established. For HTTP/3, the TLS handshake is integrated into the QUIC connection setup.

The TLS handshake can make curl reuse an existing session, decide ALPN, use ECH and send early data.

The session id/ticket handling is also a cache curl holds that allows for faster reconnects to hosts it has connected to before.

Connection

Once curl has an established connection to use, it starts with sending off the HTTP request, which begins the transfer.

The chart

A new breed of analyzers

(See how I cleverly did not mention AI in the title!)

You know we have seen more than our fair share of slop reports sent to the curl project so it seems only fair that I also write something about the state of AI when we get to enjoy some positive aspects of this technology.

Let’s try doing this in a chronological order.

The magnitude of things

curl is almost 180,000 lines of C89 code, excluding blank lines. About 637,000 words in C and H files.

To compare, the original novel War and Peace (a thick book) consisted of 587,000 words.

The first ideas and traces for curl originated in the httpget project, started in late 1996. Meaning that there is a lot of history and legacy here.

curl does network transfers for 28 URL schemes, it has run on over 100 operating systems and on almost 30 CPU architectures. It builds with a wide selection of optional third party libraries.

We have shipped over 270 curl releases for which we have documented a total of over 12,500 bugfixes. More than 1,400 humans have contributed with commits merged into the repository, over 3,500 humans are thanked for having helped out.

It is a very actively developed project.

It started with sleep

On August 11, 2025 a vulnerability was reported against curl that would turn out to be legitimate and would later be published as CVE-2025-9086. The reporter of this was the Google Big Sleep team, a team that claims to use “an AI agent developed by Google DeepMind and Google Project Zero, that actively searches and finds unknown security vulnerabilities in software”.

This was the first ever report we have received that seems to have used AI to accurately spot and report a security problem in curl. Of course, we don’t know how much AI and how much human work were involved in the research and the report. The entire reporting process felt very human.

krb5-ftp

In mid September 2025 we got a new security vulnerability reported against curl, from a security researcher we had not been in contact with before.

The report, which accurately identified a problem, was not turned into a CVE only because of sheer luck: the code didn’t work for other reasons, so the vulnerability couldn’t actually be reached. As a direct result of this lesson, we ripped out support for krb5-ftp.

ZeroPath

The reporter of the krb5-ftp problem is called Joshua Rogers. He contacted us and graciously forwarded us a huge list of more potential issues that he had extracted. As I understand it, this was mostly done with the help of ZeroPath, a code analyzer with AI powers.

In the curl project we continuously run compilers with maximum pickiness enabled, we throw scan-build, clang-tidy, CodeSonar, Coverity, CodeQL and OSS-Fuzz at the code, and we always address and fix every warning and complaint they report. So it was a little surprising that this tool could suddenly produce over two hundred new potential problems. But it sure did. And it was only the beginning.

At three there is a pattern

As we started to plow through the huge list of issues from Joshua, we received yet another security report against curl. This time by Stanislav Fort from Aisle (using their own AI powered tooling and pipeline for code analysis). Getting security reports is not uncommon for us, we tend to get 2-3 every week, but on September 23 we got another one we could confirm was a real vulnerability. Again, an AI powered analysis tool had been used. (At the time I write this blog entry, this particular issue has not yet been disclosed so I can’t link to it.)

A shift in the wind

As I was amazed by the quality and insights in some of the issues in the initial list Joshua sent over, I tooted about it on Mastodon, which was later picked up by Hacker News, The Register, Elektroniktidningen and more.

These newly reported issues feel quite similar in nature to the defects code analyzers typically report: small mistakes, omissions, flaws, bugs. Most of them are just plain variable mixups, return code confusions, small memory leaks in weird situations, state transition mistakes and variable type conversions possibly leading to problems, etc. Remarkably few of them are complete false positives.

The quality of the reports makes this feel like a new generation of issue identification. Like a ladder of tool evolution from the old days, where each new step has taken things up a notch:

  1. At some point, I think starting in the early 2000s, the C compilers got better at actually warning about and detecting many mistakes that they had just silently allowed back in the dark ages.
  2. Then the code analyzers took us from there to the next level and found more mistakes in the code.
  3. We added fuzzing to the mix in the mid 2010s and found a whole slew of problems we never realized we had.
  4. Now this new breed, almost like a new category, of analyzers that seem to connect the dots better and see patterns previous tools and analyzers have not been able to. And tell us about the discrepancies.

25% something

Out of that initial list, we merged about 50 separately identifiable bugfixes. The rest were some false positives, but also lots of minor issues that we just didn’t think were worth poking at, or that we didn’t quite agree with.

A minor tsunami

We (primarily Stefan Eissing and myself) worked hard to get through that initial list from Joshua within only a couple of days. A list we mistakenly thought was “it”.

Joshua then spiced things up for us by immediately delivering a second list with 47 additional issues. Followed by a third list with yet another 158 potential problems. At the same time Stanislav did a similar thing and delivered two lists with a total of around twenty possible issues.

Don’t get me wrong. This is good. The issues are of high quality, even the ones we dismiss often contain some insights, and the rate of obvious false positives has remained low and quite manageable. Every bug we find and fix makes curl better. Every fix improves a piece of software that impacts and empowers a huge portion of the world.

The total number of suspected issues submitted by these two gentlemen is now over four hundred. A fair pile of work for us curl maintainers!

Because these reported issues might include security-sensitive problems, we have decided not to publish them, but to limit access to the reporters and the curl security team.

As I write this, we are still working our way through these reports but it feels reasonable to assume that we will get even more soon…

All code

An obvious and powerful benefit this tool seems to have compared to others is that it scans all source code without having a build. That means it can detect problems in all backends used in all build combinations. Old style code analyzers require a proper build to analyze and since you can build curl in countless combinations with a myriad of backend setups (where several are architecture or OS specific), it is literally impossible to have all code analyzed with such tools.

Also, these tools can pull in (parts of) third party libraries as well and find issues in the borderland between curl and its dependencies.

I think this is one primary reason it found so many issues: it checked lots of code barely any other analyzers have investigated.

A few examples

To illustrate the level of “smartness” in this tool, allow me to show a few examples that I think show it off. These are issues reported against curl in the last few weeks and they have all been fixed. Beware that you might have to understand a thing or two about what curl does to properly follow here.

A function header comment was wrong

It correctly spotted that the documentation in the function header incorrectly said an argument is optional when in reality it isn’t. The fix was to correct the comment.

# `Curl_resolv`: NULL out-parameter dereference of `*entry`

* **Evidence:** `lib/hostip.c`. API promise: "returns a pointer to the entry in the `entry` argument (**if one is provided**)." However, code contains unconditional writes: `*entry = dns;` or `*entry = NULL;`.
* **Rationale:** The API allows `entry == NULL`, but the implementation dereferences it on every exit path, causing an immediate crash if a caller passes `NULL`.

I could add that the fact that it takes comments so seriously can also trick it into reporting wrong things when the comments are outdated and state bad “facts”. Which of course shouldn’t happen, because comments should not lie!

code breaks the telnet protocol

It figured out that a piece of telnet code actually wouldn’t comply with the telnet protocol and pointed it out. Quite impressively I might add.

Telnet subnegotiation writes unescaped user-controlled values (tn->subopt_ttype, tn->subopt_xdisploc, tn->telnet_vars) into temp (lines 948–989) without escaping IAC (0xFF)
In lib/telnet.c (lines 948–989) the code formats Telnet subnegotiation payloads into temp using msnprintf and inserts the user-controllable values tn->subopt_ttype (lines 948–951), tn->subopt_xdisploc (lines 960–963), and v->data from tn->telnet_vars (lines 976–989) directly into the suboption data. The buffer temp is then written to the socket with swrite (lines 951, 963, 995) without duplicating CURL_IAC (0xFF) bytes. Telnet requires any IAC byte inside subnegotiation data to be escaped by doubling; because these values are not escaped, an 0xFF byte in any of them will be interpreted as an IAC command and can break the subnegotiation stream and cause protocol errors or malfunction.
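
For illustration, the escaping the telnet protocol requires is simply doubling every 0xFF byte before it goes on the wire. A minimal sketch (not curl's actual code):

```c
#include <stddef.h>

#define TELNET_IAC 0xff

/* Minimal sketch of IAC escaping, not curl's actual code: every 0xFF byte
   in subnegotiation data must be doubled. 'out' must have room for up to
   twice 'len' bytes. Returns the number of bytes written. */
static size_t escape_iac(const unsigned char *in, size_t len,
                         unsigned char *out)
{
  size_t o = 0;
  size_t i;
  for(i = 0; i < len; i++) {
    out[o++] = in[i];
    if(in[i] == TELNET_IAC)
      out[o++] = TELNET_IAC;  /* double it, as the protocol requires */
  }
  return o;
}
```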

no TFTP address pinning

Another case where it seems to know the best practice for a TFTP implementation (pinning the used IP address for the duration of the transfer) and it detected that curl didn’t apply this best practice in its code, so it correctly complained:

No TFTP peer/TID validation

The TFTP receive handler updates state->remote_addr from recvfrom() on every datagram and does not validate that incoming packets come from the previously established server address/port (transfer ID). As a result, any host able to send UDP packets to the client (e.g., on-path attacker or local network adversary) can inject a DATA/OACK/ERROR packet with the expected next block number. The client will accept the payload (Curl_client_write), ACK it, and switch subsequent communication to the attacker’s address, allowing content injection or session hijack. Correct TFTP behavior is to bind to the first server TID and ignore, or error out on, packets from other TIDs.
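
The fix pattern the report points at is conceptually simple: remember the peer address from the first server response and drop datagrams arriving from anywhere else. A rough sketch of such a check (not curl's actual code):

```c
#include <string.h>
#include <sys/socket.h>

/* Rough sketch, not curl's actual code: compare the source address of an
   incoming datagram with the address pinned from the first server response
   and ignore the packet if they differ. The naive memcmp() works when both
   addresses were filled in by recvfrom() the same way. */
static int from_pinned_peer(const struct sockaddr_storage *pinned,
                            socklen_t pinned_len,
                            const struct sockaddr_storage *from,
                            socklen_t from_len)
{
  return (pinned_len == from_len) &&
         !memcmp(pinned, from, (size_t)from_len);
}
```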

memory leaks no one else reported

Most memory leaks are reported when someone runs code and notices that not everything is freed in some specific circumstance. We of course test for leaks all the time in tests, but in order to see a leak in a test we need to run that exact case, and there are many code paths that are hard to reach in tests.

Apart from doing tests you can of course find leaks by manually reviewing code, but history and experience tell us that is an error-prone method.

# GSSAPI security message: leaked `output_token` on invalid token length

* **Evidence:** `lib/vauth/krb5_gssapi.c:205--207`. Short quote:
```c
if(output_token.length != 4) { ... return CURLE_BAD_CONTENT_ENCODING; }
```
The `gss_release_buffer(&unused_status, &output_token);` call occurs later at line 215, so this early return leaks the buffer from `gss_unwrap`.
* **Rationale:** Reachable with a malicious peer sending a not-4-byte security message; repeated handshakes can cause unbounded heap growth (DoS).
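
The corresponding fix pattern is to release the buffer on the early-return path as well, something along these lines (a sketch, not the exact committed patch):

```c
/* sketch of the fix pattern: free the gss_unwrap() output before the early
   return too */
if(output_token.length != 4) {
  gss_release_buffer(&unused_status, &output_token);
  return CURLE_BAD_CONTENT_ENCODING;
}
```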

This particular bug looks straightforward and in hindsight easy enough to spot, but it has existed like this in plain sight in the code for over a decade.

More evolution than revolution

I think I maybe shocked some people when I stated that the AI tooling helped us find 22, 70 and then 100 bugs, etc. I suspect people in general are not aware of, and do not think about, the kind of bugfix frequency we work at in this project. Fixing several hundred bugs per release is a normal rate for us. Sure, this cycle we will probably reach a new record, but I still don’t gasp for breath because of this.

I don’t consider this new tooling a revolution. It does not massively or drastically change code or how we approach development. It is however an excellent new project assistant. A powerful tool that highlights code areas that need more attention. A much appreciated evolutionary step.

I might of course be speaking too early. Perhaps it will develop a lot more and it can then turn into a revolution.

Ethical and moral decisions

The AI engines burn the forests and they are built by ingesting other people’s code and work. Is it morally and ethically right to use AI for improving Open Source in this way? It is a question to wrestle with and I’m sure the discussion will go on. At least this use of AI does not generate duplicates of someone else’s code for us to use, but it certainly takes lessons from and finds patterns based on others’ code. But so do we all, I hope.

Starting from a decent state

I can imagine that curl is a pretty good codebase to use a tool of this caliber on, as curl is old and mature and all the minor nits and defects have been polished away. It is a project where we have a high bar and we want to raise it even higher. We love the opportunity to get additional help and figure out where we might have slipped. Then fix those slips and try again. Over and over until the end of time.

AIxCC

At the DEF CON 33 conference which took place in August 2025, DARPA ran a competition called the AI Cyber Challenge or AIxCC for short. In this contest, the competing teams used AI tools to find artificially injected vulnerabilities in projects – with zero human intervention. One of the projects used in the finals that the teams looked for problems in, was… curl!

I have been promised a report or a list of findings from that exercise, as presumably the teams found something more than just the fake inserted problems. I will report back when that happens.

Going forward

We do not yet have any AI powered code analyzer in our CI setup, but I am looking forward to adding such. Maybe several.

We can ask GitHub Copilot for pull-request reviews, but from the little I’ve tried it for reviews, it is far from comparable to the reports I have received from Joshua and Stanislav, and quite frankly it has been mostly underwhelming. We do not use it. Of course, that can change and it might turn into a powerful tool one day.

We now have an established constructive communication setup with both these reporters, which should enable a solid foundation for us to improve curl even more going forward.

I personally still do not use any AI at all during development – apart from occasional small experiments. Partly because they all seem to force me into using VS code and I totally lose all my productivity with that. Partly because I’ve not found it very productive in my experiments.

Interestingly, this productive AI development happens pretty much concurrently with the AI slop avalanche we also see, proving that one AI is not necessarily like the other AI.

How I maintain release notes for curl

I believe a good product needs clear and thorough documentation. I think shipping a quality product requires you to provide detailed and informative release notes. I try to live up to this in the curl project, and this is how we do it.

A video presentation about how Daniel updates and maintains the curl RELEASE NOTES.

Scripts are your friends

Some of the scripts I use to maintain the RELEASE NOTES and the associated documentation.

maketgz

A foundational script to make things work smoothly is the single-invoke script that puts a release tarball together from what is currently in the file system. We can run this in a cronjob and easily generate daily snapshots that look exactly like a release would have looked if we had done one at that point. Our script for this purpose is called maketgz. We have a containerized version of it, which runs in a specific docker setup, called dmaketgz. This version of the script builds a fully reproducible release tarball.

If you want to verify that all the contents of a release tarball only originate from the git repository and the associated release tools, we provide a script for that purpose: verify-release.

release-notes.pl

An important piece of documentation for each release is of course the RELEASE-NOTES file, which details exactly what changes and fixes have been done since the previous release. It also gives proper credit to all the people who were involved and helped make the release this particular way.

We use a quite simple git commit message standard for curl. It details how the first line should be constructed and how to specify meta-data in the message. Sticking to this message format allows us to write scripts and do automation around the git history.
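
A made-up example commit message following that standard could look like this, where the first line names the affected area and the trailer lines carry the meta-data the scripts pick up (names and issue numbers here are invented):

http2: fix handling of a trailing response header

Reported-by: Jane Doe
Fixes #12345
Closes #12346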

When I invoke the release-notes.pl script, it performs a git log command that lists all changes done in the repository since the previous commit of the RELEASE-NOTES file with the commit message “synced”. Those changes are then parsed: the first line is used as a release notes entry, and issue tracker references within the message are used for linking the changes to allow users to track their origins.

The script cannot itself actually know if a commit is a change, a bugfix or something else, so after it has been invoked I have to go over the updated release notes file manually. I check the newly added entries, remove the ones that are irrelevant and move the lines referring to changes over to the changes list.

I then run release-notes.pl cleanup, which cleans up the release notes file – it sorts the bugfixes list alphabetically and removes pending orphaned references no longer used (for previously listed entries that were deleted in the process mentioned above).
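
In practice this is just two invocations of the same script, something like this (assuming it lives in the scripts directory of the curl source tree):

./scripts/release-notes.pl
./scripts/release-notes.pl cleanup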

contributors.sh

When invoked, this script extracts all contributors to the project since the most recent release (tag): commit authors, committers and everyone given credit in the commit messages done since then, plus all committers and authors in the web repository over the same period. It also includes the names already mentioned in the existing RELEASE-NOTES file.

It cleans up the names, runs them through the THANKS-filter and then outputs each unique name in a convenient way and format suitable for easy copy and paste into RELEASE-NOTES.

delta

The delta script outputs data and counters about the current state of the repository compared to the most recent release.

Invoking the script in a terminal shows something like this:

= Since curl-8_12_1 Feb 13 08:18:33 2025 +0100 =
Elapsed time: 10.4 days (total 9837 / 10331)
Commits: 122 (total 34405)
Commit authors: 14, 1 new (total 1343)
Contributors: 19, 8 new (total 3351)
New public functions: 0 (total 96)
New curl_easy_setopt() options: 0 (total 306)
New command line options: 0 (total 267)
Changes logged: 0
Bugfixes logged: 67 (6.44 per day)
Added files: 10 (total 4058)
Deleted files: 2 (delta: 8)
Files changed: 328 (8.08%)
Lines inserted: 7798
Lines deleted: 6318 (delta: 1480)

With this output, I can update the counters at the top of the RELEASE NOTES file.

I then commit the RELEASE-NOTES file with the commit message “RELEASE-NOTES: synced” so that the automation knows exactly when it was last updated.

As a courtesy to curious users and developers, we always keep an updated version of the current in progress release notes document on the curl website: https://curl.se/dev/release-notes.html.

Repetition

In my ~/.gitconfig file I have a useful alias that helps me:

[alias]
latest = log @^{/RELEASE-NOTES:.synced}..

This lets me easily list all changes done in the repository since I last updated the release notes file. I often list them like this:

git latest --oneline

This lists all the commits as one line per commit. If the list is large enough, maybe 20-30 lines or so, and there have been at least a few days since the previous update, I might update the release notes.

Whenever there is a curl release, I also make sure the release notes document is fully updated and properly synced for that.