Approaching zero bugs?

In this era of powerful tools for finding software bugs, we now see tools uncover a lot of problems at high speed. This causes trouble for developers, as dealing with the ever-growing list of issues is hard. It may take longer to address the problems than to find them – not to mention to get the fixes into releases, and then yet another extended time passes until users out in the wild actually get that updated version into their hands.

For tools to find many bugs fast, the bugs have to already exist in the source code. These new tools don’t add or create the problems. They just find them, filter them out and bring them to the surface. A better filter in the pool catches more rubbish.

The more bugs we fix, the fewer bugs remain in the code – assuming the developers manage to fix problems at a decent pace.

For every bugfix we merge, there is a risk that the change itself introduces one or more new, separate problems. We also tend to keep adding features and changing behavior as we want to improve our products, and when doing so we occasionally slip up and introduce new problems as well.

Source code analysis tools are a concept as old as source code itself. There have always been tools that tried to identify coding mistakes. They just recently got better, so they can now find more mistakes.

These new tools, like the old ones, don’t find all the problems. Even the modern tools sometimes suggest fixes for the problems they find that are incomplete or downright buggy.

Undoubtedly, code analyzer tooling will improve further. The tools of tomorrow will find even more bugs, including ones the current generation of tools missed when they scanned the code yesterday.

Of course, we now also introduce these tools in CI and general development pipelines, which should make us land better code with fewer mistakes going forward. Ideally.

If we assume that we fix bugs faster than we introduce new ones, and we assume that the AI tools can improve further, the question becomes how much more they can improve and for how long that improvement can continue. Will the tools find 10% more bugs? 100%? 1000%? Will the improvement continue gradually for the next two, ten or fifty years? Can they actually find all bugs?

Can we reach the utopia where we have no bugs left in a given software project and when we do merge a new one, it gets detected and fixed almost instantly?

Are we close?

If we assume that there is at least a theoretical chance to reach that point, how would we know when we reach it? Or even just if we are getting closer?

I propose that one way to measure if we are getting closer to zero bugs is to check the age of reported and fixed bugs. If the tools are this good, we should soon only be fixing bugs we introduced very recently.

In the curl project we don’t keep track of the age of regular bugs, but we do for vulnerabilities. The worst kind of bugs. If the tools can find almost all problems, they should soon only be finding very recently added vulnerabilities too. The age of new finds should plummet and go towards zero.

If newly reported vulnerabilities are getting younger, the average and median age of the total collection should go down over time.

Average age of vulnerabilities

The average and median time vulnerabilities had existed in the curl source code by the time they were found and reported to the project.
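As a rough illustration of how such an age metric can be computed – with entirely made-up sample dates, not real curl vulnerability records – a sketch in Python could look like this:

```python
from datetime import date
from statistics import mean, median

# Hypothetical sample data: (date the flaw was introduced, date it was
# reported). These are illustrative entries, not real curl records.
vulns = [
    (date(2014, 3, 1), date(2023, 9, 15)),
    (date(2019, 6, 20), date(2024, 2, 2)),
    (date(2021, 11, 5), date(2024, 8, 30)),
]

# Age of each vulnerability in days at the time it was reported.
ages = [(reported - introduced).days for introduced, reported in vulns]

print(f"average age: {mean(ages):.0f} days, median age: {median(ages)} days")
```

If the tools really were closing in on zero bugs, both of these numbers would be expected to shrink with every new batch of reports.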

Bugfixes

When the tools have found most problems, there should be fewer bugs left to fix. The bugfix rate should drop rapidly – independently of how you count them or how liberal we are in deciding exactly what counts as a bugfix.

Given the data from the curl project, there do not seem to be fewer bugfixes – yet. Maybe the bugfix rate goes up before it goes down?

We are not close

Given the look of these graphs, I don’t think we are close to zero bugs yet. Neither of these two curves even seems to have started falling.

Yes, these graphs are based on data from a single project, which makes them far too weak to draw statistical conclusions from, but this is all I have to work with.

So when?

I think the answer mostly comes down to what you believe the tooling can do and how good it can eventually become.

I don’t know. I will keep fixing bugs.

Inspired

In appendix A of the book Root cause: Stories and lessons from two decades of Backend Engineering Bugs, author Hussein Nasser has these wonderful words to say about me:

Daniel Stenberg is a Swedish engineer and the creator of curl (cURL), one of the most widely used tools and libraries for fetching content over various protocols. I’ve always admired Daniel’s work, reading his blogs and watching his talks on YouTube. He is one of the engineers who inspired me to start my own YouTube channel and teach backend engineering.

It warms my heart to read this. Words like this give me energy and motivation. My work has meaning.

curl 8.20.0

You always find the new curl releases on the curl site!

Release presentation

Numbers

the 274th release
8 changes
49 days (total: 10,761)
282 bugfixes (total: 13,922)
521 commits (total: 38,545)
0 new public libcurl functions (total: 100)
0 new curl_easy_setopt() options (total: 308)
0 new curl command line options (total: 273)
73 contributors, 45 new (total: 3,664)
28 authors, 12 new (total: 1,463)
8 security fixes (total: 188)

Security

As mentioned elsewhere, the security reporting volume has been intense lately. We publish eight new curl vulnerabilities this time.

Changes

  • now uses a thread pool and queue for resolving
  • NTLM is disabled by default
  • dropped support for CMake 3.17 and older
  • dropped support for c-ares older than 1.16.0
  • SMB is disabled by default
  • added CURLMNWC_CLEAR_ALL for all network changes
  • dropped RTMP support

Bugfixes

The official count says 282 bugfixes were merged during this 49-day cycle. See the changelog for all the details.

Pending Removals

Planned upcoming removals include:

  • local crypto implementations
  • NTLM
  • SMB
  • TLS-SRP support

If you are concerned about any of these, speak up on the curl-library ASAP.

Next release

Unless we messed up this one and need to do a patch release, the pending next release is scheduled to happen on June 24.

High-Quality Chaos

As I have been preparing slides for my coming talk at foss-north on April 28, 2026, I figured I could take the opportunity to share a glimpse of the current reality here on my blog. The high-quality chaos era, as I call it.

No more AI slop

I complained and I complained about the high frequency junk submissions to the curl bug-bounty, which grew really intense during 2025 and early 2026 – to the degree that we shut it down completely on February 1st this year. At the time, we speculated whether that would be sufficient or if the flood would go on.

Now we know.

Higher volume, higher quality

In March 2026, the curl project went back to HackerOne, once we had figured out that GitHub was not good enough.

From that day, the nature of the security report submissions has changed.

The slop situation is not a problem anymore.

The report frequency is higher than ever. Recently it’s been about double the rate we had through 2025, which already was more than double from previous years.

The quality is higher. The rate of confirmed vulnerabilities is back to and even surpassing the 2024 pre-AI level, meaning somewhere in the 15-16% range.

In addition to that, the share of reports that identify a bug, meaning that they aren’t vulnerabilities but still some kind of problem, is significantly higher than before.

Everything is AI now

Almost every security report now uses AI to some degree. You can tell by the way they are worded, how the report is phrased, and by the fact that we now easily get very detailed duplicates in ways that would not happen had they been written by humans.

The difference compared to before, however, is that the reports are now mostly of very high quality.

The reporters rarely mention exactly which AI tool or model they used (and really, we don’t care), but the evidence is strong that they used such help.

We are not unique

I did a quick, unscientific poll on Mastodon to see if other Open Source projects see the same trends – and man, do they! Friends from the following projects confirmed that they too see this trend. Of course the exact numbers and volumes vary, but it shows it’s not unique to any specific project.

Apache httpd, BIND, curl, Django, Elasticsearch Python client, Firefox, git, glibc, GnuTLS, GStreamer, Haproxy, Immich, libssh, libtiff, Linux kernel, OpenLDAP, PowerDNS, python, Prometheus, Ruby, Sequoia PGP, strongSwan, Temporal, Unbound, urllib3, Vikunja, Wireshark, wolfSSL, …

I bet this list is just a random selection of projects that happened to see my question. You will find many more experiencing and confirming this reality.

An explosion

When we ship curl 8.20.0 in the middle of next week – end of April 2026 – we expect to announce at least six new vulnerabilities. Assuming that the trend keeps up for at least the rest of the year, and I think that is a fair assumption, we are looking at a record number of CVEs published by the curl project this year.

We might publish closer to 50 curl vulnerabilities in 2026.
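A quick back-of-the-envelope check of that estimate – my own arithmetic, assuming the current pace of one release per 49-day cycle and six to eight vulnerabilities per release holds for a full year:

```python
# Rough extrapolation only: assumes the current release cadence and the
# six-to-eight vulnerabilities per release seen around curl 8.20.0 hold
# for a whole year. Not official project data.
days_per_cycle = 49
cycles_per_year = 365 / days_per_cycle  # roughly 7.4 release cycles per year

for vulns_per_release in (6, 8):
    per_year = vulns_per_release * cycles_per_year
    print(f"{vulns_per_release} per release -> roughly {per_year:.0f} CVEs per year")
```

Under these assumptions the estimate lands somewhere between roughly 45 and 60 CVEs for the year, consistent with the "closer to 50" guess.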

Given this universal trend, I cannot see how this pattern would not also show up in many other projects.

Where does it end?

The tools are still improving. We keep adding flaws when we do bugfixes and add new features.

Someone has suggested it might go like it did with fuzzing: that we will see a plateau within a few years. I suppose we just have to see how it goes.

This avalanche is going to make maintainer overload even worse. Some projects will have a hard time handling this kind of backlog expansion without added maintainers to help.

This is probably a good time for the bad guys, who can easily find just as many problems themselves using the same tools, before all the projects find the time, manpower and energy to fix everything.

Then everyone needs to update to the newly released, fixed versions of all the packages, which we know tends to take even longer.

We are up for a bumpy ride.