
The end of the curl bug-bounty

tldr: an attempt to reduce the terrible reporting.

There is no longer a curl bug-bounty program. It officially stops on January 31, 2026.

After having had a few half-baked previous takes, in April 2019 we kicked off the first real curl bug-bounty with the help of Hackerone, and while it stumbled a bit at first it has been quite successful I think.

We attracted skilled researchers who reported plenty of actual vulnerabilities for which we paid fine monetary rewards. We have certainly made curl better as a direct result of this: 87 confirmed vulnerabilities and over 100,000 USD paid as rewards to researchers. I’m quite happy and proud of this accomplishment.

I would like to especially highlight the awesome Internet Bug Bounty project, which has paid the bounties for us for many years. We could not have done this without them. Also of course Hackerone, who has graciously hosted us and been our partner through these years.

Thanks!

How we got here

Looking back, I think we can say that the downfall of the bug-bounty program started slowly in the second half of 2024 but accelerated badly in 2025.

We saw an explosion in AI slop reports combined with a lower quality even in the reports that were not obvious slop – presumably because they too were actually misled by AI but with that fact just hidden better.

Maybe the first five years made it possible for researchers to find and report the low-hanging fruit. In previous years we had a rate of somewhere north of 15% of the submissions ending up confirmed vulnerabilities. Starting in 2025, the confirmed rate plummeted to below 5%. Not even one in twenty was real.

The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live.

I have also started to get the feeling that a lot of the security reporters submit reports with a bad faith attitude. These “helpers” try too hard to twist whatever they find into something horribly bad and a critical vulnerability, but they rarely actively contribute to actually improve curl. They can go to extreme efforts to argue and insist on their specific current finding, but not to write a fix or work with the team on improving curl long-term etc. I don’t think we need more of that.

It is these three bad trends combined that make us take this step: the mind-numbing AI slop, humans doing worse than ever and the apparent will to poke holes rather than to help.

Actions

In an attempt to do something about the sorry state of curl security reports, this is what we do:

  • We no longer offer any monetary rewards for security reports – no matter the severity – in an attempt to remove the incentive for submitting made-up lies.
  • We stop using Hackerone as the recommended channel to report security problems. To make the change immediately obvious and because without a bug-bounty program we don’t need it.
  • We refer everyone to submit suspected curl security problems on GitHub using their Private vulnerability reporting feature.
  • We continue to immediately ban and publicly ridicule everyone who submits AI slop to the project.

Maintain curl security

We believe that we can maintain and continue to evolve curl security in spite of this change. Maybe even improve thanks to this, as hopefully this step helps prevent more people pouring sand into the machine. Ideally we reduce the amount of wasted time and effort.

I believe the best and most valued of our security reporters will still tell us when they find security vulnerabilities.

Instead

If you suspect a security problem in curl going forward, we advise you to head over to GitHub and submit it there.

Alternatively, you can send an email with the full report to security @ curl.se.

In both cases, the report is received and handled privately by the curl security team. But with no monetary reward offered.

Leaving Hackerone

Hackerone was good to us and they have graciously allowed us to run our program on their platform for free for many years. We thank them for that service.

As we now drop the rewards, we feel it makes a clear cut and displays a clearer message to everyone involved by also moving away from Hackerone as a platform for vulnerability reporting. It makes the change more visible.

Future disclosures

It is probably going to be harder for us to publicly disclose every incoming security report in the same way we have done it on Hackerone for the last year. We need to work out something to make sure that we can keep doing it at least imperfectly, because I believe in the goodness of such transparency.

We stay on GitHub

Let me emphasize that this change does not impact our presence and mode of operation with the curl repository and its hosting on GitHub. We hear about projects having problems with low-quality AI slop submissions on GitHub as well, in the form of issues and pull-requests, but for curl we have not (yet) seen this – and frankly I don’t think switching to a GitHub alternative saves us from that.

Other projects do better

Compared to others, we seem to be affected by the sloppy security reports to a higher degree than the average Open Source project.

With the help of Hackerone, we got numbers of how the curl bug-bounty has compared with other programs over the last year. It turns out curl’s program has seen more volume and noise than other public open source bug bounty programs in the same cohort. Over the past four quarters, curl’s inbound report volume has risen sharply, while other bounty-paying open source programs in the cohort, such as Ruby, Node, and Rails, have not seen a meaningful increase and have remained mostly flat or declined slightly. In the chart, the pink line represents curl’s report volume, and the gray line reflects the broader cohort.

We suspect the idea of getting money for it is a big part of the explanation. It brings in real reports, but it also makes it too easy to be annoying, with little to no penalty for the submitter. The reputation system and available program settings were not sufficient for us to prevent sand from getting into the machine.

The exact reason why we suffer more of this abuse than others remains a subject for further speculation and research.

If the volume keeps up

There is a non-zero risk that our guesses are wrong and that the volume and security report frequency will keep up even after these changes go into effect.

If that happens, we will deal with it then and take further appropriate steps. I prefer not to overdo things or overplan already now for something that ideally does not happen.

We won’t charge

People keep suggesting that one way to deal with the report tsunami is to charge security researchers a small amount of money for the privilege of submitting a vulnerability report to us. A curl reporters security club with an entrance fee.

I think that is a less good solution than just dropping the bounty. Some of the reasons include:

  • Charging people money in an international context is complicated and a maintenance burden.
  • Dealing with charge-backs, refunds and other complaints and friction adds work.
  • It would limit who could or would submit issues – including some who actually find legitimate ones.

Maybe we need to do this later anyway, but we stay away from it for now.

Pull requests are less of a problem

We have seen other projects and repositories suffer similar AI-induced problems for pull requests, but this has not been a problem for the curl project. I believe that for PRs we have much better means to sort out the weeds automatically, since we have tools, tests and scanners that verify such contributions. We don't need to waste any human time on a pull request until it is good enough to get green check-marks from 200 CI jobs.

Related

I will do a talk at FOSDEM 2026 titled Open Source Security in spite of AI that of course will touch on this subject.

Future

We never say never. This is now and we might have reasons to reconsider and make a different decision in the future. If we do, we will let you know. These changes are applied now with the hope that they will have a positive effect for the project and its maintainers. If that turns out to not be the outcome, we will of course continue and apply further changes later.

Media

Since I created the pull request for updating the bug-bounty information for curl on January 14, almost two weeks before we merged it, various media picked up the news and published articles. Long before I posted this blog post.

Also discussed (indirectly) on Hacker News.

libcurl memory use some years later

One of the trickier things in software is gradual degradation: development that slowly drifts in the wrong direction over time, never triggering any alarms or upsetting users. Then one day you take a closer look and realize that an area that used to be so fine several years ago no longer is.

Memory use is one of those things. It is easy to gradually add more and larger allocations over time as we add features and make new cool architectural designs.

Memory use

curl and libcurl literally run in billions of installations and it is important for us that we keep memory use and allocation counts to a minimum. It needs to run on small machines and it needs to scale to a large number of parallel connections without draining available resources.

So yes, even in 2026 it is important to keep allocations small and as few as possible.

A line in the sand

In July 2025 we added a test case to curl's test suite (3214) that simply checks the sizes of fifteen important structs. Each struct has a fixed upper limit which it may not surpass without causing the test to fail.

Of course we can adjust the limits when we need to, as it might be entirely okay to grow them when the features and functionalities motivate that, but this check makes sure that we do not mistakenly grow the sizes simply because of a mistake or bad planning.
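
The idea is simple enough to show in a few lines. This is not the actual test 3214 (which checks curl's internal structs), just a self-contained sketch where the struct names and limits are made up for illustration:

#include <stdio.h>

/* stand-in structs; the real test checks curl internals that
   are not visible outside the library */
struct example_easy { char blob[5352]; };
struct example_conn { char blob[912]; };

#define CHECK_SIZE(type, max)                                \
  do {                                                       \
    if(sizeof(struct type) > (size_t)(max)) {                \
      printf("FAIL: struct %s is %u bytes, limit %u\n",      \
             #type, (unsigned int)sizeof(struct type),       \
             (unsigned int)(max));                           \
      rc = 1;                                                \
    }                                                        \
  } while(0)

int main(void)
{
  int rc = 0;
  /* made-up upper limits; adjust deliberately, not by accident */
  CHECK_SIZE(example_easy, 6000);
  CHECK_SIZE(example_conn, 1000);
  return rc;
}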

Do we degrade?

It is of course a question of balance. How much memory is a feature and added performance worth? Every libcurl user probably has their own answer to that, but I decided to take a look at how we do today and compare with data I blogged five years ago.

The point in time I decided to compare with here, curl 7.75.0, is a fun reference because it was a release where I had given curl's memory use some focused effort and minimization work. libcurl memory use was then smaller and more optimized than it had been for almost a decade before that.

struct sizes

The struct sizes always vary depending on which features are enabled, but in my tests here they are “maximized”, with as many features and backends enabled as possible.

Let’s take a look at three important structs. The multi handle, the easy handle and the connectdata struct. Now compared to then, five years ago.

struct        8.19.0-DEV (now)  7.75.0 (past)
multi                      816            416
easy                      5352           5272
connectdata                912           1472

As seen in the table, two of the structs have grown and one has shrunk. Let's see what impact that might have.

If we assume a libcurl-using application doing 10 parallel transfers that have 20 concurrent connections open, libcurl five years ago needed:

1472 x 20 + 5272 x 10 + 416 = 82,576 bytes for that

While libcurl in current git needs:

912 x 20 + 5352 x 10 + 816 = 72,576 bytes.

Incidentally that is exactly 10,000 bytes less, five years and many new features later.

This said, part of the reason the structs change is that we move data between them and to other structs. The few mentioned here are not the whole picture.

Downloading a single HTTP 512MB file

curl http://localhost/512M --out-null

Using a bleeding edge curl build, this command line on my 64-bit Linux Debian host does 107 allocations, which at the peak amount to 133,856 bytes.

Compare that to five years ago, when it needed 131,680 bytes done in a mere 96 allocations.

curl now needs 1.6% more memory for this, done with 11% more allocation calls.

I believe the current amounts are still okay considering we have refactored, developed and evolved the library significantly over the same period.

As a comparison, downloading the same file twenty times in parallel over HTTP/1 using the same curl build needs 2,222 allocations but only a total of 308,613 bytes allocated at peak. Twenty times the number of allocations, but only about 2.3 times the peak amount, compared to the single file download.

Caveat: this measures clear text HTTP downloads. Almost everything transferred these days uses TLS, and if you add TLS to this transfer, curl itself does only a few additional allocations, but more importantly the TLS library involved allocates much more memory and does many more allocations. I just consider those allocations to be someone else's optimization work.

Visualized

I generated a few graphs that illustrate memory use changes in curl over time based on what I described above.

The “easy handle” is the handle an application creates and that is associated with each individual transfer done with libcurl.

The “multi handle” is a handle that holds one or more easy handles. An application has at least one of these and adds its easy handles to it; alternatively, when the easy interface is used, the easy handle keeps one of its own internally.

The “connectdata” is an internal struct for each existing connection libcurl knows about. A normal application that makes multiple transfers, either serially or in parallel, tends to end up with at least a few of these, since libcurl by default keeps connections in a pool for reuse by subsequent transfers.
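
To make the mapping concrete, here is a minimal libcurl sketch (error handling omitted, URLs made up): curl_multi_init() allocates the multi handle, each curl_easy_init() allocates an easy handle, and the connections the transfers open become connectdata structs kept in the pool:

#include <curl/curl.h>

int main(void)
{
  CURLM *multi;
  CURL *e1, *e2;
  int running;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  multi = curl_multi_init();     /* the single multi handle */
  e1 = curl_easy_init();         /* one easy handle... */
  e2 = curl_easy_init();         /* ...per transfer */

  curl_easy_setopt(e1, CURLOPT_URL, "https://example.com/a");
  curl_easy_setopt(e2, CURLOPT_URL, "https://example.com/b");
  curl_multi_add_handle(multi, e1);
  curl_multi_add_handle(multi, e2);

  /* drive both transfers; the connections they open live on as
     connectdata structs in the connection pool */
  do {
    curl_multi_perform(multi, &running);
    if(running)
      curl_multi_poll(multi, NULL, 0, 1000, NULL);
  } while(running);

  curl_multi_remove_handle(multi, e1);
  curl_multi_remove_handle(multi, e2);
  curl_easy_cleanup(e1);
  curl_easy_cleanup(e2);
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}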

Here is data from the internal tracking of memory allocations done when the curl tool is invoked to download a 512 megabyte file from a locally hosted HTTP server. (Generally speaking though, downloading a larger size does not use more memory.)

Conclusion

I think we are doing alright: neither the struct sizes nor the memory use has gone bad. We offer more features and better performance than ever, but keep memory spend at a minimum.

Now with MQTTS

Back in 2020 we added MQTT support to curl.

When curl 8.19.0 ships in the beginning of March 2026, we have also added MQTTS: MQTT done securely over TLS.

This bumps the number of supported transfer protocols to 29 not too long after the project turned 29 years old.

What’s MQTT?

Wikipedia describes it as a lightweight, publish–subscribe, machine-to-machine network protocol for message queue/message queuing service. It is designed for connections with remote locations that have devices with resource constraints or limited network bandwidth, such as in the Internet of things (IoT). It must run over a transport protocol that provides ordered, lossless, bi-directional connections—typically, TCP/IP.
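
curl treats MQTT subscribe as a download and publish as an upload done with -d. Assuming the new mqtts:// scheme works like the existing mqtt:// support, just over TLS (the hostname and topic here are made up), usage should look something like this:

curl mqtts://example.com/home/bedroom/temp          # subscribe, print payloads
curl -d 75 mqtts://example.com/home/bedroom/temp    # publish "75" to the topic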

Coming protocol support reduction

If things go as planned, the number of supported protocols will decrease soon as we have RTMP scheduled for removal later in the spring of 2026.

More HTTP/3 focus, one backend less

In the curl project we have a long tradition of offering multiple optional backends for specific protocols. In this spirit we have added experimental support for a number of different HTTP/3 + QUIC backends over time. A while ago we dropped one of those experiments, the msh3 backend.

Today we clean up even more and remove support for yet another backend: the OpenSSL-QUIC stack. We are now down to only two supported HTTP/3 alternatives: the ngtcp2 + nghttp3 combo, or quiche. Of those two, the quiche backend is still considered experimental.

The first release shipping with this change will be curl 8.19.0.
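
If you want to know what a given curl build offers, curl -V lists both the components it was built with (such as ngtcp2 and nghttp3, or quiche) and an HTTP3 entry in its feature list, and a transfer can ask for HTTP/3 explicitly (example.com is of course just a placeholder):

curl -V | grep -i http3              # does this build offer HTTP/3?
curl --http3 https://example.com/    # request HTTP/3 for a transfer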

OpenSSL-QUIC

This is the QUIC stack implemented and provided by OpenSSL. To make matters a little complicated, this is a separate thing from the QUIC API that OpenSSL also offers. The former is a full QUIC implementation; the latter is an API powerful enough to let a separate QUIC implementation use OpenSSL for its cryptographic and TLS needs.

A quick recap how history unfolded

2019 – BoringSSL introduced an API for QUIC. QUIC implementations picked it up and it worked. A pull request was made for OpenSSL to provide the same API, so that QUIC stacks everywhere could use OpenSSL.

2021 – OpenSSL eventually declined to merge the pull request and announced they would instead implement their own QUIC stack – one that nobody had asked for.

2023 – OpenSSL 3.2 shipped with support for their own QUIC stack. It was broken in many ways.

2025 – OpenSSL version 3.4.1 was released and by now the QUIC stack worked decently. In OpenSSL 3.5.0 they announced a QUIC API that finally allowed independent QUIC stacks to use OpenSSL.

Experimental

Skilled contributors added support for OpenSSL-QUIC to curl primarily to allow people using OpenSSL to still be able to use HTTP/3.

OpenSSL's own QUIC implementation only ever reached experimental state in curl, meaning that we explicitly and strongly discourage users from using it in production, and we reserve the right to change functionality and more between versions.

There are three reasons why it did not graduate from experimental and they are also the reasons why we think we are better off without offering support for it:

  1. The API is lacking. We have communicated with the OpenSSL-QUIC team since even before the API first shipped and it still does not offer the knobs and controls we would like in order to make it a competitive QUIC alternative. We don't feel they care much.
  2. The performance is bad. And by bad I mean really bad. The leading QUIC implementation alternative, ngtcp2, transfers data much faster in all benchmarks and comparisons. Sometimes up to a factor of three difference.
  3. The memory use is abysmal. The additional memory required to do transfers with OpenSSL-QUIC compared to ngtcp2 can reach a factor of twenty.

A drawing

This makes the curl backend situation simpler in the HTTP/3 and QUIC department as the image below tries to show.

curl 8.18.0

Download curl from curl.se!

Release presentation

Numbers

the 272nd release
5 changes
63 days (total: 10,155)
391 bugfixes (total: 13,376)
758 commits (total: 37,486)
0 new public libcurl function (total: 100)
0 new curl_easy_setopt() option (total: 308)
0 new curl command line option (total: 273)
69 contributors, 36 new (total: 3,571)
37 authors, 14 new (total: 1,430)
6 security fixes (total: 176)

Security

This time there are no fewer than six separate vulnerabilities announced.

Changes

There are a few this time, mostly around dropping support for various dependencies:

  • drop support for VS2008 (Windows)
  • drop Windows CE / CeGCC support
  • drop support for GnuTLS < 3.6.5
  • gnutls: implement CURLOPT_CAINFO_BLOB (example below)
  • openssl: bump minimum OpenSSL version to 3.0.0
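
Regarding the CURLOPT_CAINFO_BLOB item above: that option lets an application provide its CA bundle from memory instead of from a file, and it now works with the GnuTLS backend too. A minimal sketch, with placeholder PEM data:

#include <string.h>
#include <curl/curl.h>

int main(void)
{
  /* placeholder; a real CA bundle in PEM format goes here */
  static const char pem[] =
    "-----BEGIN CERTIFICATE-----\n"
    "...\n"
    "-----END CERTIFICATE-----\n";
  CURL *curl;
  struct curl_blob ca;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();

  ca.data = (void *)pem;
  ca.len = strlen(pem);
  ca.flags = CURL_BLOB_COPY;   /* tell libcurl to keep its own copy */

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(curl, CURLOPT_CAINFO_BLOB, &ca);
  curl_easy_perform(curl);

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}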

Bugfixes

See the release presentation video for a walk-through of some of the most important/interesting fixes done for this release, or go check out the full list in the changelog.

6,000 curl stickers

I am heading to FOSDEM again at the end of January. I go there every year and I have learned that there is a really sticker-happy audience there. The last few times I have been there, I have given away several thousand curl stickers.

As I realized I did not actually have a few thousand stickers left, I had to restock. I consider stickers a fun and somewhat easy way to market the curl project. It helps us get known and seen out there in the world.

The stickers are paid for by curl donations. Thanks to all of you who have donated!

This time I ordered the stickers from stickerapp.se. They have a rather fancy web UI editor and tools to make sure the stickers become exactly the way I want them. I believe the total order price was actually slightly cheaper than the previous provider I used.

I ordered five classic curl sticker designs and I introduced a new one. Here is the full set:

Die cut curl logo 7.5cm x 2.8cm – the classic “small” curl logo sticker. (bottom left in the photo)

Die cut curl logo 10cm x 3.7cm – the slightly larger curl logo sticker. (top row in the photo)

Rounded rectangle 7.5cm x 4.1cm – yes we curl, the curl symbol and my face (mid left in the photo)

Oval 7.5cm x 4cm – with the curl logo (bottom right in the photo)

Round 2.5cm x 2.5cm – small curl symbol. (in the middle of the photo). My favorite. Perfect for the backside of a phone. Fits perfectly in the logo on the lid of a Framework laptop.

Round 4cm x 4cm – curl symbol in a slightly larger round version. The new sticker variant in the set. (on the right side in the middle row in the photo)

The quality and feel of the products are next to identical to previous sticker orders. They look great!

I got 1,000 copies of each variant this time.

The logo

The official curl logo, the curl symbol, the colors and everything related is freely available and anyone is welcome to print their own stickers at will: https://curl.se/logo/

How to get one?

I bring curl stickers to all events I go to. Ask me!

There is no way to buy stickers from me or from the curl project. I encourage you to look me up and ask for one or a few. At FOSDEM I try to make sure the wolfSSL stand has plenty to hand out, since it is a fixed geographical point that might be easier to find than me.

no strcpy either

Some time ago I mentioned that we went through the curl source code and eventually got rid of all strncpy() calls.

strncpy() is a weird function with a crappy API. It might not null terminate the destination and it pads the target buffer with zeroes. Quite frankly, most code bases are probably better off completely avoiding it because each use of it is a potential mistake.

In that particular rewrite, when we made strncpy calls extinct, we made sure to either copy the full string properly or return an error. It is rare that copying a partial string is the right choice, and when it is, we can just as well memcpy it and handle the null terminator explicitly. This meant there was no case for using strlcpy or anything like it either.

But strcpy?

strcpy, however, has its valid uses and a less bad and confusing API. The main challenge with strcpy is that a call does not specify the length of the target buffer nor of the source string.

This is normally not a problem because in a C program strcpy should only be used when we have full control of both.

But normally and always are not necessarily the same thing. We are all human and we all make mistakes. Using strcpy implies that there is at least one, maybe two, buffer size checks done prior to the function invocation. In a good situation.

Over time however – let's imagine we have code that lives on for decades – when code is maintained, patched, improved and polished by many different authors with different mindsets and approaches, those size checks and the function invocation may glide apart. The further away from each other they get, the bigger the risk that something happens in between that nullifies one of the checks or changes the conditions for the strcpy.

Enforce checks close to code

To make sure that the size checks cannot be separated from the copy itself, we the other day introduced a string copy replacement function that takes the target buffer, target size, source buffer and source string length as arguments, and performs the copy only if the string plus its null terminator fits in the target.

This made it possible to implement the replacement using memcpy(). Now we can completely ban the use of strcpy in curl source code, like we already did with strncpy.

Using this function is a little more work and more cumbersome than strcpy since it needs more information, but we believe the upsides of this approach outweigh the extra pain involved. I suppose we will see how that fares down the road. Let's come back in a decade and see how things developed!

void curlx_strcopy(char *dest,
                   size_t dsize,
                   const char *src,
                   size_t slen)
{
  DEBUGASSERT(slen < dsize);
  if(slen < dsize) {
    /* the string plus its null terminator fits in the target */
    memcpy(dest, src, slen);
    dest[slen] = 0;
  }
  else if(dsize)
    /* it does not fit: make the target a blank string instead */
    dest[0] = 0;
}

the strcopy source
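
Usage then looks like this minimal (made-up) example. Both the target size and the source length have to be brought to the call itself, which is exactly the point:

#include <string.h>

void example(void)
{
  char buf[16];
  const char *name = "curl";
  /* the size checks travel with the copy and cannot drift away */
  curlx_strcopy(buf, sizeof(buf), name, strlen(name));
}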

AI slop

An additional minor positive side-effect of this change is of course that it should effectively prevent the AI chatbots from reporting strcpy uses in curl source code and insisting they are insecure if anyone asks (as people still apparently do). It has been proven numerous times already that strcpy in source code is like a honey pot for generating hallucinated vulnerability claims.

Still, this will just make them find something else to make up a report about, so there is probably no net gain. AI slop is not a game we can win.

A curl 2025 review

Let’s take a look back and remember some of what this year brought.

commits

At more than 3,400 commits, we did 40% more commits in curl this year than in any single previous year!

At some point during 2025, all the other authors in the project together passed me in total lines added to the curl repository. Meaning that out of all the lines ever added in the curl repository, I have now added less than half.

More than 150 individuals authored commits we merged during the year. Almost one hundred of them were first-timers. Thirteen authors wrote ten or more commits.

Viktor Szakats did the most commits per month in almost every month of 2025.

Stefan Eissing has now done the latest commit for 29% of the product source code lines – where my share is 36%.

Some 598 authors have contributions still “surviving” in the product code. This is down from 635 at the end of last year.

tests

We have 232 more tests at the end of this year compared to last December (now at 2,179 separate test cases), and for the first time ever we have more than twelve test cases per thousand lines of product source code.

(Sure, counting test cases is rather pointless and weird since a single test can be small or big, simple or complex etc, but that’s the only count we have for this.)

releases

The eight releases we did through the year are a fairly average count:

  • 8.12.0
  • 8.12.1
  • 8.13.0
  • 8.14.0
  • 8.14.1
  • 8.15.0
  • 8.16.0
  • 8.17.0

No major revolution happened this year in terms of big features or changes.

We reduced source code complexity a lot. We stopped using some more functions we deem were often causes of errors or confusion. We increased performance. We reduced the number of allocations used.

We added experimental support for HTTPS-RR, the DNS record.

The bugfix frequency rate beat new records towards the end of the year as nearly 450 bugfixes shipped in curl 8.17.0.

This year we started doing release candidates. For every release we upload a series of candidates before the actual release so that people can help us and test what is almost the finished version. This helps us detect and fix regressions before the final release rather than immediately after.

Command line options

We end the year with 6 more curl command line options than we had last new year’s eve; now at 273 in total.

  • 8.17.0: --knownhosts
  • 8.16.0: --out-null, --parallel-max-host, --follow
  • 8.14.0: --sigalgs
  • 8.13.0: --upload-flags
  • 8.12.0: --ssl-sessions

man page

The curl man page continued to grow and is now more than 500 lines longer than last year (7,090 lines), which means that even counted as man page lines per command line option, it grew from 24.7 to 26.

Lines of code

libcurl grew by a mere 100 lines of code over the year while the command line tool gained 1,150 new lines.

libcurl is now a little over 149,000 lines. The command line tool has 25,800 lines.

Most of the commits clearly went into improving the products rather than expanding them. See also the dropped support section below.

QUIC

This year OpenSSL finally introduced and shipped an API that allows QUIC stacks to use vanilla OpenSSL, starting with version 3.5.

As a direct result of this, the use of the OpenSSL QUIC stack has been marked as deprecated in curl and is queued for removal early next year.

As we also removed msh3 support during 2025, we are looking towards a 2026 with supporting only two QUIC and HTTP/3 backends in curl.

Security

This year the number of AI slop security reports for curl really exploded, and the curl security team has gotten a lot of extra load because of it. We have been mentioned in media a lot during the year as a result.

The reports not evidently made with AI help have also gotten significantly worse quality-wise, while the total volume has increased – a lot. This too adds to our collective load.

We published nine curl CVEs during 2025, all at severity low or medium.

AI improvements

A new breed of AI-powered high quality code analyzers, primarily ZeroPath and Aisle Research, started pouring in bug reports to us with potential defects. We have fixed several hundred bugs as a direct result of those reports – so far.

This is in addition to the regular set of code analyzers we run against the code and for which we of course also fix the defects they report.

Web traffic

At the end of the year 2025 we see 79 TB of data getting transferred monthly from curl.se. This is up from 58 TB (+36%) for the exact same period last year.

We don’t have logs or analysis so we don’t know for sure what all this traffic is, but we know that only a tiny fraction is actual curl downloads. A huge portion of this traffic is clearly not human-driven.

GitHub activity

More than two hundred pull requests were opened each month in curl’s GitHub repository.

For a brief moment during the fall we reached zero open issues.

We have over 220 separate CI jobs that in the end of the year spend more than 25 CPU days per day verifying our ongoing changes.

Dashboard

The curl dashboard expanded a lot. I removed a few graphs that were not accurate anymore, but the net total change is still that we went up from 82 graphs in December 2024 to 92 separate illustrations in December 2025. Now with a total of 259 individual plots (+25).

Dropped support

We removed old/legacy things from the project this year, in an effort to remove laggards, to keep focus on what’s important and to make sure all of curl is secure.

  • Support for Visual Studio 2005 and older (removed in 8.13.0)
  • Secure Transport (removed in 8.15.0)
  • BearSSL (removed in 8.15.0)
  • msh3 (removed in 8.16.0)
  • winbuild build system (removed in 8.17.0)

Awards

It was a crazy year in this aspect (as well) and I was honored with several awards.

I also dropped out of the Microsoft MVP program during the year, a program I was accepted into in October 2024.

Conferences / Talks

I attended and spoke at these eight conferences, in five countries. My talks are always related to curl in one way or another.

  • FOSDEM
  • foss-north
  • curl up
  • Open Infra Forum
  • Joy of Coding
  • FrOSCon
  • Open Source Summit Europe
  • EuroBSDCon

Podcasts

I participated in these podcasts during the year. Always related to curl.

  • Security Weekly
  • Open Source Security
  • Day Two DevOps
  • Netstack.FM
  • Software Engineering Radio
  • OsProgrammadores

20,000 issues on GitHub

The curl project moved over its source code hosting to GitHub in March 2010, but we kept the main bug tracker running like before – on Sourceforge.

It took us a few years, but in 2015 we finally ditched the Sourceforge version fully. We adopted and switched over to the pull request model and we labeled the GitHub issue tracker the official one to use for curl bugs. Announced on the curl website proper on March 9 2015.

GitHub holds issues and pull requests in the same number series, and since a few years back discussions are also added to the mix. This number is another pointless one, but it is large, so let's celebrate it anyway!

Issue one in curl’s GitHub repository is from October 2010.

Issue 100 is from May 18, 2014.

Issue 500 is from Oct 20, 2015.

Issue 10,000 was created November 29, 2022. That meant 9,500 issues created in 2,597 days. 3.7 issues/day on average over seven years.

Issue 20,000 (a pull request really) was created today, on December 16, 2025. 10,000 more issues created in 1,113 days. 9 issues/day over the last three years.

The pace at which primarily new pull requests are submitted has certainly gone up over recent years, as this graph clearly shows. (Since the current month is only halfway through, the drop at the right end of the plot is expected.)

We work hard in the project to keep the number of open issues and pull requests low even when the frequency rises.

It can also be noted that issues and pull requests are typically closed fast. Out of the ones closed via instructions in the git commit message, the trend looks like below: half of them are closed within 6 hours.

Of course, these graphs are updated daily and shown on the curl dashboard.

Note: we have not seen the AI slop tsunami in the issues and pull requests as we do on Hackerone. This growth is entirely human made and benign.

Parsing integers in C

In the standard libc API set there are multiple functions provided that convert ASCII numbers to integers.

They are handy and easy to use, but also error-prone and quite lenient in what they accept and silently just swallow.

atoi

atoi() is perhaps the most common and basic one. It converts from a string to signed integer. There is also the companion atol() which instead converts to a long.

Some of the problems they have: they return 0 instead of an error, they have no checks for under- or overflow, and in the atol() case there is the added challenge that long has different sizes on different platforms. So neither of them can reliably be used for 64-bit numbers. They also don't tell you where in the string the number ended.
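
A small demonstration of how indistinguishable success and failure are with atoi(): every call below "works":

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  printf("%d\n", atoi("0"));        /* 0 - a real zero */
  printf("%d\n", atoi("junk"));     /* 0 - an error, looks identical */
  printf("%d\n", atoi("  +42abc")); /* 42 - whitespace, sign and
                                       trailing garbage all accepted */
  /* atoi("9999999999999999999") - overflow is undefined behavior */
  return 0;
}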

Using these functions opens up your parser to undetected and unhandled errors or weird input. We write better and stricter parsers when we avoid these functions.

strtol

This function, along with its siblings strtoul() and strtoll() etc, is more capable. They have overflow detection and they can detect errors – like if there is no digit at all to parse.

However, these functions too happily swallow leading whitespace, and they allow a + or – in front of the number. The long versions have the problem that long is not universally 64-bit, and the long long versions have the problem that they are not universally available.

The overflow and underflow detection with these functions is quite quirky: it involves errno and forces us to spend multiple extra lines of checks on every invocation just to be sure we catch those cases.
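
For completeness, this is roughly what a careful strtol() call needs to look like to catch all of that, and why we would rather not repeat it at every call site (parse_long is a hypothetical helper):

#include <errno.h>
#include <stdlib.h>

/* parse a base-10 long carefully; returns 0 on success */
static int parse_long(const char *str, long *out)
{
  char *end;
  long val;
  errno = 0;
  val = strtol(str, &end, 10);
  if(end == str)       /* no digits found at all */
    return 1;
  if(errno == ERANGE)  /* the value overflowed or underflowed long */
    return 1;
  if(*end)             /* trailing garbage after the number */
    return 1;
  /* note: leading whitespace and a sign are still silently accepted */
  *out = val;
  return 0;
}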

curl code

I think we in the curl project, as well as more or less the entire world, have learned through the years that it is usually better to be strict when parsing protocols and data, rather than to be lenient, accept many things and guess what was otherwise maybe meant.

As a direct result of this, we make sure that curl parses and interprets data exactly as that data is meant to look, and we error out as soon as we detect that the data is wrong. For security and for solid functionality, syntactically incorrect data is not accepted.

This also implies that all number parsing has to be exact, handle overflows and maximum allowed values correctly and conveniently, and detect errors. Our parsing always supports up to 64-bit numbers.

strparse

I have previously blogged about how we have implemented our own set of parsing functions in curl, and these also include number parsing.

curlx_str_number() is the most commonly used of the ones we have created. It parses a string and stores the value in a 64-bit variable (a type which in curl code is always present and always 64-bit). It also has a max value argument so that it returns an error if the number is too large. And it of course also errors out on overflows etc.

This function of ours does not allow any leading whitespace and certainly no prefixing pluses or minuses. If they should be allowed, the surrounding parsing code needs to explicitly allow them.

The curlx_str_number function is most probably a little slower than the functions it replaces, but I don't think the difference is huge, and the convenience and the added strictness are most welcome. We write better code and parsers this way. More secure. (curlx_str_number source code)
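
The core idea is easy to sketch. This is not curl's actual implementation (see the linked source for that), just an illustration of the strictness: digits only, no whitespace, no sign, and a caller-provided maximum enforced while parsing:

#include <stdint.h>

/* sketch: strictly parse an unsigned decimal number <= max.
   returns 0 on success and advances *str past the digits. */
static int str_number(const char **str, int64_t *nump, int64_t max)
{
  const char *p = *str;
  int64_t num = 0;
  if(*p < '0' || *p > '9')
    return 1;                 /* must start with a digit */
  while(*p >= '0' && *p <= '9') {
    int d = *p++ - '0';
    /* reject before multiplying, so num can never overflow */
    if(num > max / 10 || (num == max / 10 && d > max % 10))
      return 1;               /* the value would exceed max */
    num = num * 10 + d;
  }
  *nump = num;
  *str = p;
  return 0;
}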

History

As of yesterday, November 12 2025, all of those weak function calls have been wiped out from the curl source code. The drop seen in early 2025 was when we got rid of all the strtol() variations. Yesterday we finally got rid of the last atoi() calls.

(Daily updated version of the graph.)

curlx

The function mentioned above uses a ‘curlx’ prefix. We use this prefix in curl code for functions that exist in the libcurl source code but that can be used by the curl tool as well – sharing the same code without it being offered as part of the libcurl API.

A thing we do to reduce code duplication and share code between the library and the command line tool.