Category Archives: Open Source

Open Source, Free Software, and similar

Death by a thousand slops

I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.

This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.

The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions), as we have averaged about two security report submissions per week. As of early July, only about 5% of the 2025 submissions had turned out to be genuine vulnerabilities. The valid rate has decreased significantly compared to previous years.

We have run the curl Bug Bounty since 2019 and I have previously considered it a success based on the amount of genuine and real security problems we have gotten reported and thus fixed through this program. 81 of them to be exact, with over 90,000 USD paid in awards.

End of the road?

While we are not going to do anything rushed or in panic immediately, there are reasons for us to consider changing the setup. Maybe we need to drop the monetary reward?

I want us to use the rest of the year 2025 to evaluate and think. The curl bounty program continues to run and we deal with everything as before while we ponder about what we can and should do to improve the situation. For the sanity of the curl security team members.

We need to reduce the amount of sand in the machine. We must do something to drastically reduce the temptation for users to submit low quality reports. Be it with AI or without AI.

The curl security team consists of seven team members. I encourage the others to also chime in to back me up (so that we act right in each case). Every report thus engages 3-4 persons. Perhaps for 30 minutes, sometimes up to an hour or three. Each.

I personally spend an insane amount of time on curl already, so wasting three hours still leaves time for other things. My fellow team members however are not full time on curl. They might only have three hours per week for curl. Not to mention the emotional toll it takes to deal with these mind-numbing stupidities.

Times eight the last week alone.

Reputation doesn’t help

On HackerOne the users get their reputation lowered when we close reports as not applicable. That is only really a mild “threat” to experienced HackerOne participants. For new users on the platform that is mostly a pointless exercise as they can just create a new account next week. Banning those users is similarly a rather toothless threat.

Besides, there seem to be so many of them that even if one goes away, there are a thousand more.

HackerOne

It is not super obvious to me exactly how HackerOne should change to help us combat this. It is however clear that we need them to do something. Offer us more tools and knobs to tweak, to save us from drowning. If we are to keep the program with them.

I have yet again reached out. We will just have to see where that takes us.

Possible routes forward

People mention charging a fee for the right to submit a security vulnerability (that could be paid back if the report turns out to be a proper one). That would probably slow them down significantly, sure, but it seems like a rather hostile way for an Open Source project that aims to be as open and available as possible. Not to mention that we don’t have any infrastructure set up for this – and neither does HackerOne. And managing money is painful.

Dropping the monetary reward part would make it much less interesting for the general populace to do random AI queries in desperate attempts to report something that could generate income. It of course also removes the traction for some professional and highly skilled security researchers, but maybe that is a hit we can/must take?

As a lot of these reporters seem to genuinely think they help out, apparently blatantly tricked by the marketing of the AI hype-machines, it is not certain that removing the money from the table is going to completely stop the flood. We need to be prepared for that as well. Let’s burn that bridge if we get to it.

The AI slop list

If you are still innocently unaware of what AI slop means in the context of security reports, I have collected a list of a number of reports submitted to curl that help showcase the problem. Here’s a snapshot of the list from today:

  1. [Critical] Curl CVE-2023-38545 vulnerability code changes are disclosed on the internet. #2199174
  2. Buffer Overflow Vulnerability in WebSocket Handling #2298307
  3. Exploitable Format String Vulnerability in curl_mfprintf Function #2819666
  4. Buffer overflow in strcpy #2823554
  5. Buffer Overflow Vulnerability in strcpy() Leading to Remote Code Execution #2871792
  6. Buffer Overflow Risk in Curl_inet_ntop and inet_ntop4 #2887487
  7. bypass of this Fixed #2437131 [ Inadequate Protocol Restriction Enforcement in curl ] #2905552
  8. Hackers Attack Curl Vulnerability Accessing Sensitive Information #2912277
  9. (“possible”) UAF #2981245
  10. Path Traversal Vulnerability in curl via Unsanitized IPFS_PATH Environment Variable #3100073
  11. Buffer Overflow in curl MQTT Test Server (tests/server/mqttd.c) via Malicious CONNECT Packet #3101127
  12. Use of a Broken or Risky Cryptographic Algorithm (CWE-327) in libcurl #3116935
  13. Double Free Vulnerability in libcurl Cookie Management (cookie.c) #3117697
  14. HTTP/2 CONTINUATION Flood Vulnerability #3125820
  15. HTTP/3 Stream Dependency Cycle Exploit #3125832
  16. Memory Leak #3137657
  17. Memory Leak in libcurl via Location Header Handling (CWE-770) #3158093
  18. Stack-based Buffer Overflow in TELNET NEW_ENV Option Handling #3230082
  19. HTTP Proxy Bypass via CURLOPT_CUSTOMREQUEST Verb Tunneling #3231321
  20. Use-After-Free in OpenSSL Keylog Callback via SSL_get_ex_data() in libcurl #3242005
  21. HTTP Request Smuggling Vulnerability Analysis – cURL Security Report #3249936

How I do it

A while ago I received an email with this question.

I’ve been subscribed to your weekly newsletter for a while now, receiving your weekly updates every Friday. I’m writing because I admire your consistency, focus, and perseverance. I can’t help but wonder, with admiration, how you manage to do it.

Since this is a topic I receive questions about semi-regularly, I decided I would attempt to answer it. I have probably touched the subject in previous blog posts as well.

Work

Let me start out by defining what I consider my primary work to be. Or perhaps I should call it my mission because it goes way beyond just “work”. curl is irrevocably a huge part of me and my life.

  • I drive the curl project. Guide, develop, review, comment, admin, debug, merge, commit, support, assess security reports, lead, release, talk about it, inspire etc.
  • It does not necessarily mean that I do the largest number of commits to curl every month. We have a set of very skilled and devoted committers that can do a lot without me.
  • I keep up with relevant Internet protocol developments and make sure to give feedback on what I think is good and bad, in particular from a small player’s/library’s view that is sometimes a bit different than the tech giants’ takes. This means participating actively in some IETF groups and keeping myself informed about what is happening in a number of other HTTP, web and browser oriented communities.
  • I keep up with related technologies and Open Source projects to understand how to navigate. I file issues, comments and pull requests to neighbor projects that we use – to strengthen them (and by association the combination of curl + them) and to increase the chances that they will help us out in similar fashion.
  • I use my position as lead developer of curl to blog and speak up about things I think need to be said, explained or giggled at. Be it stupid emails, bad uses of AI or inefficient security organizations. Ideally this occasionally helps other people and projects as well.

As the lead of a successful Open Source project I acknowledge and am aware that we (I mean curl) might get more attention than some others, and that we are used as or considered a “model” sometimes, making it even more important to do things right. From my language use in public to source code decisions. I try to live up to these expectations.

A part of my job is to make companies become paying customers so that I can afford working on curl – and once they have become customers I need to every now and then attend to support tickets from them. I can work full-time on curl thanks to my commercial customers.

Why

I have a strong sense of loyalty and commitment. When I join a project or a cause, I typically stick around and do my share of the job until it is finished.

I enjoy programming and software development – and I have done so ever since I first learned about programming as a teen in the mid 1980s. It is fun to create something that is useful and that can be used by others, but I also like solving the puzzles and challenges that come up in the process.

When the software project you work on never finishes, and is used by a crazy number of users, it gives you a sense of responsibility and pride. An even bigger incentive to make sure it actually works as intended. A desire to please the users. All the users.

Even after having reached many billions of installations there are still challenges to push the project further and harder on every possible front. Make it the best documented one. Make it an exemplary Open Source project. Make it newcomer friendly. Add more tests. Make sure not a single project in the world can claim they ship better security advisories. Work really hard on making it the most secure network library there is. While at the same time being welcoming and friendly to new contributors.

If there is any area that curl is not best-in-class, we should put in more work and improve curl in that area. While at the same time keep up and polish it in all other aspects.

This is what drives me. This is what I want.

How

Getting top scores in every possible (imaginary and real) scorecard is accomplished through good old engineering. Do the job. Test. Iterate. Fail. Fix. Add tests. Do it again. Over and over.

A normal work day I sit down at my desk at about 8 in the morning and start. I iterate over issues, pull-requests and the everyday curl maintenance. I post silly messages on Mastodon and I chat with friends on IRC.

I try to end my regular work days at around 18:00, but I may go longer or shorter some days depending on what I feel like or if it’s “floorball day”. (I leave early on Wednesdays to go play with friends.)

As I live in Sweden and have many North-American colleagues and customers, I have occasional evening meetings to deal with the nine hour time difference to their west coast.

At some time between 22:00 and 23:00 I sit down in front of my computer again for the evening shift. I continue working on issues, fix bugs and review pull-requests. At 1am I sleep.

That adds up to maybe 50-55 hours of work in a normal week: regular work hours plus plenty of spare time. Because this is the passion of my life. It is my job and my hobby. Because I want to. I love it. It is not a setup and number of hours I ask nor expect anyone else to do.

I have worked like this since early 2019 when I started doing curl full-time.

Independent

One explanation how this all works is that curl is independent. Truly independent in most senses of the word.

No companies control or own curl in any way. Yet every company is welcome to participate.

curl is not part of any foundation or umbrella organization. We range free.

curl is extremely liberally licensed.

On motivation

One of the hardest questions to answer is how I can keep up the motivation and still consider this fun and exciting after all this time.

First let’s not pretend that it always feels fun and thrilling. Sometimes it actually feels a bit boring and done. There is no shame in that and it is not strange or odd. Such periods come and go. When they come, I might do less curl for a while. Or maybe find a corner of the project that is not important but could be fun to poke at. I have learned that these periods come and go.

What motivates me is that everyone runs and uses curl and libcurl. Positive feedback is fuel that can keep me running for a long time. Making curl a leading tool that shoulders and carries a lot of digital infrastructure makes me feel a purpose. When there is a bug reported, I can feel almost hurt and sometimes ashamed and I need to get it fixed. curl is supposed to be one of the best in all categories and if it ever is not, I will work hard on making it so.

The social setup around Open Source and a success such as curl also makes it fun. I work full-time from home without geographical proximity to any other curl regulars. But I don’t need that. We can joke around in chat, we help each other in issues and pull-requests and we can do bad puns in video meetings. Contrary to “normal” job colleagues, these people are here because they want to be, and they believe in and strive for something similar to what I do – and they are spread out across the world.

I feel that I work for the curl users. The users doing internet transfers. As opposed to any big company, tech giants or anyone else who could otherwise dictate direction. It’s highly motivational to be working for the users. Sure, the entities paying my wages are primarily a few huge companies, but the setup still makes this work and I still feel and act on the users’ behalf. Those companies have exactly no say in how we run the Open Source project.

I take criticism about curl personally because I have put so much of myself into it and as the BDFL for decades a lot of what it is today is ultimately the result of my choices.

Leading the troops

I try to lead by example. I still do a fair amount of development, debugging and architectural design in the project. I follow and perform the same steps I expect from the other contributors.

I’m a believer in lowering friction in the project, but still not relaxing the requirements: we still need tests and documentation for everything we do. Entering the project should be easy and welcoming, even if it can be hard to actually get a change merged.

I believe in reducing bureaucracy and formalities so that we can focus on development and getting things done. We don’t have or need manager levels or titles. We have things to do, people who do things and we have people that can review, comment and eventually merge those improvements. If there are fewer people participating during some periods, then things just get done slower.

I invite discussions and participation and I encourage the same approach from my fellow contributors. When we want to do things, change things, improve things, we should inform and invite the greater community for comments, feedback and help. Oftentimes they may not have a lot to say, but we should still continue to ask for their opinions.

I use a direct and non-complicated communication style. I want to be friendly, I don’t curse, I focus on speaking about their suggestions and not the person. To the point rather than convoluted. When insulted, I try to not engage (which I sometimes fail at). But I also want to have a zero tolerance policy against bad behavior and abuse to enable the positive spirit to remain.

Like everyone else, I sometimes fail in my ambitions of how I want to behave and lead the project. Hopefully that happens less and less frequently over time.

I give this my everything

I think most of what has made curl good and successful has happened because I and the team around curl have worked hard on making it so. It has not happened by chance or by accident.

Family

I have a loving and understanding family. My wife and I celebrated our 25th anniversary earlier this year. My two kids are grown-ups now – both were born after I started working on curl.

Sponsor my laptop!

I need to get myself a new laptop. My existing one is from 2017 and was already then not the most powerful one.

It recently started to shut itself off when running on battery, and during the two most recent curl up meetings it proved rather sluggish, unable to save a live camera recording while also streaming it without stuttering or other problems.

A framework laptop

I plan to get a new 13″ one from Framework, and a semi-beefy one from there runs at about 2,500 USD. I’m looking at roughly this configuration.

The curl fund pays

For the first time ever, the curl fund is going to help pay for this. The curl fund is all donations and sponsorships gathered. Money we only spend to improve curl and curl related activities. All my machines I have ever used to develop curl on up until now have been paid for by me personally.

You can help!

For this special occasion, we have created a small “crowd-source” like effort. You can help sponsor me this device and we have a special little collectors’ pool for it here:

https://opencollective.com/curl/contribute/laptop-90642

If we get more than 1,000 USD donated to this, I can upgrade my laptop config. More CPU, more memory, more storage perhaps.

If this effort gets less than 1,000 USD donated, then I will stick with the original “base” setup.

For everyone who donates 200 USD (or more) I offer space on the laptop cover for the donor to decide exactly what I should put there (in terms of stickers etc).

This program will run for a week as a start.

A developer’s device

I do my main curl development on a desktop PC in my home office. I use my laptop primarily when away, on travels and on vacations. I bring it to talks (10-15 a year) where I typically talk about curl or curl adjacent topics. I occasionally use it to live-stream with, like from our annual curl up meetings.

I have decided to go with Framework because I like their concept and I hear good things about them.

I run Linux. I prefer Debian. That is what I intend to use on this one as well.

The fund

We have a few gracious sponsors of the curl project that donate money to us on a regular basis. Their money is what pays for this if nobody else wants to participate.

Updates

It took nine minutes after I published this to get the first 200 USD donation.

We reached 1,000 USD already within the first hour. I am looking at upgrading the setup. Starting probably with the CPU.

90 minutes in, “A friendly golem” changed the game when they donated 1,750 USD in one go and we are now at a total of 3,770 USD! I think I can max out the config now.

July 13, 17:21: The order has been placed. Said to be delivered within 5 days.

Thanks

Thank you everyone for chipping in. Truly amazing. I will keep you posted on the thing and follow up with some photos and a review later.

Cybersecurity Risk Assessment Request

With the new EU legislation, the Cyber Resilience Act (CRA), there are new responsibilities and requirements put on manufacturers of digital products and services in Europe.

Going forward these manufacturers must be able to know and report the exact contents of their software, called a Software Bill of Materials (SBOM), and they have requirements to check for vulnerabilities in those components etc. This implies that they need to have full control and knowledge about all of the Open Source components in their stack. (See the CRA Hub for a good resource on CRA for Open Source people.)

As a maintainer of a software component that is widely used, I have been curious to see how this will materialize for us. Today I got a first glimpse of what I can only guess will happen more going forward.

This multi-billion-dollar Fortune 500 company, with which I have no contract and have had no previous communication, sent me this email asking for a lot of curl information. A slightly redacted version is shown below.

Now that my curiosity has been satisfied a little bit, I instead await the future and long to see how many more of these will come. And how they will respond to my replies.

CRA_request_counter = 1;

The request

Hello,

I hope this message finds you well.

As part of our ongoing efforts to comply with the EU Cyber Resilience Act (CRA), we are currently conducting a cybersecurity risk assessment of third-party software vendors whose products or components are integrated into our systems.

To support this initiative, we kindly request your input on the following questions related to your software product “libcurl” with version 7.87.0. Please provide your responses directly in the table below and do reply to all added in this email,

Additional Information:

  • Purpose: This security assessment is part of our due diligence and regulatory compliance obligations under the EU CRA.
  • Confidentiality: All information shared will be treated as confidential and used solely for the purpose of this assessment.
  • Contact: Should you have any questions or need further clarification, please feel free to reach out by replying directly to this email.

We kindly request your response by Friday, July 25, 2025, to ensure timely completion of our assessment process. Thank you for your cooperation and continued partnership in maintaining a secure and resilient digital environment.

My reaction and response

I am not their vendor without having a more formal relationship established and I am certainly not going to spend a few hours of my spare time gathering a lot of information for them for free for their commercial benefit.

They “kindly” want me to respond within two weeks.

Their use of double quotes around “libcurl” feels odd, and they claim to be using a version that is now more than 2.5 years old.

Most if not all of the information they are asking for is already publicly and openly accessible and readable. I suspect they want the information in this more formal way to make it appear more reliable or trustworthy perhaps. Or maybe it just follows their processes better.

(It also reminded me of the NASA emails.)

I responded like this

Hello,

I will be happy to answer all curl and libcurl related questions and assist you with this inquiry as soon as we have a support contract setup. You can get the process started immediately by emailing support@wolfssl.com.

Thanks, I’m looking forward to future cooperation.

/ Daniel

I will let you know if they take me up on my offer.

The screenshot

This snapshot of how it looked also shows the actual nine-question form table.

Why the company name is redacted

Since I’m looking forward to eventually doing business with this company, I don’t want them to feel targeted or “ridiculed”. I also suspect that there will be many more emails like this going forward. The company name is not the interesting part of this story.

more views on curl vulnerabilities

This is an intersection of two of my obsessions: graphs and vulnerability data for the curl project.

In order to follow every imaginable angle of development, progression and (possible) improvement in the curl project, we track and log lots of metadata.

In order to educate and inform users about past vulnerabilities, but also as a means for the project team to find patterns and learn from past mistakes, we extract and document every detail.

Do we improve?

The grand question. Let’s get back to this a little later. Let’s first walk through some of the latest additions to the collection of graphs on the curl dashboard.

The data here is mostly based on the 167 published curl vulnerabilities to date.

vulnerability severity distribution

Twenty years ago, we got very few vulnerability reports. The ones we got were only for the most serious problems and lots of the smaller problems were just silently fixed without being considered anything else than bugs.

Over time, security awareness has become more widespread and nowadays many more problems are reported. Because people are more vigilant, more people are looking and problems are now more often considered security problems. In recent years also because we offer monetary rewards.

This development is clearly visible in this new graph showing the severity distribution among all confirmed curl vulnerabilities through time. It starts out with the first report being a critical one, adding only high severity ones for a few years until the first low appears in 2006. Today, we can see that almost half of all reports so far have been graded medium severity. The dates on the X-axis are when the reports were submitted to us.

Severity distribution in code

One of the tricky details with security reports is that they tend to identify a problem that has existed in code already for quite some time. For a really long time even in many cases. How long you may ask? I know I did.

I created a graph to illustrate this data already years ago, but it was a little quirky and hard to figure out. What you learn after a while of trying to illustrate data over time as a graph is that sometimes you need to try a few different ways and layouts before it eventually “speaks” to you. This is one of those cases.

For every confirmed vulnerability report we receive, we backtrack and figure out exactly which release was the first to ship the vulnerability. For flaws from the last decades we also identify the exact commit that introduced it and of course the exact commit that fixed it. This way, we know the exact age of every vulnerability we ever had.

Hold on to something now, because here comes an information dense graph if there ever was one.

  • There is a dot in the graph for every known vulnerability
  • The X-axis is the date the vulnerability was fixed
  • The Y-axis is the number of years the flaw existed in code before we fixed it
  • The color of each dot indicates the severity level of the vulnerability (see the legend)

To guide the viewer, there are also a few diagonal lines. They show the release dates of a number of curl versions. I’ll explain below how they help.

Now, look at the graph here and I’ll continue below.

Yes, you are reading it right. If you count the dots above the twenty year line, you realize that no less than twelve of the flaws existed in code that long before being found and fixed. Above the fifteen year line there are almost too many to even count.

If you check how many dots are close to the “4.0” diagonal line, you see how many bugs found throughout the decades were introduced in code not long after the initial curl release. The other diagonal lines help us see around which particular versions other bugs were introduced.

The green dotted median line we see bouncing around is drawn where there are exactly as many older reports as there are newer. It hovered around seven years for several recent years but has fallen to about six recently. It is probably too early to tell if this is indeed a long-term evolution or just a temporary blip.

The average age is even higher, about eight years.
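
To make the arithmetic behind those two numbers concrete, here is a minimal sketch (not the actual dashboard tooling, and with made-up ages) of how the median and average can be computed from one age-in-years value per known flaw:

    /* minimal sketch: median and average vulnerability age, hypothetical data */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_double(const void *a, const void *b)
    {
      double x = *(const double *)a, y = *(const double *)b;
      return (x > y) - (x < y);
    }

    int main(void)
    {
      /* hypothetical ages in years, one entry per vulnerability */
      double age[] = { 2.1, 6.4, 11.8, 0.7, 19.3, 7.2, 5.5 };
      size_t n = sizeof(age) / sizeof(age[0]);
      double sum = 0.0;
      size_t i;

      qsort(age, n, sizeof(double), cmp_double);
      for(i = 0; i < n; i++)
        sum += age[i];

      /* median: the middle value, or the mean of the two middle ones */
      double median = (n % 2) ? age[n/2] : (age[n/2 - 1] + age[n/2]) / 2.0;
      printf("median: %.1f years, average: %.1f years\n", median, sum / n);
      return 0;
    }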

You can spot a cluster of fixed issues in 2016. It remains the year with the largest number of vulnerabilities reported and fixed in curl: 24. Partly because of a security audit.

A key take-away here is that vulnerabilities linger a long time before they are found. It means that whatever we change in code today, we cannot see the exact effect on vulnerability frequency until many years into the future. We cannot even know exactly how much time we need before we can tell for sure.

Current knowledge, applied to old data

The older the project gets, the more we learn about mistakes we made in the past. The more we realize that some of the past releases were quite riddled with vulnerabilities. Something nobody knew back then.

For every release ever made from the first curl release in 1998 we increase a counter for every vulnerability we now know was present. Make it a different color depending on vulnerability severity.
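
Expressed as code, the counting idea is simple. A minimal sketch (with hypothetical data and release indices, not the real dashboard script) could look like this:

    /* minimal sketch of the per-release counting, with hypothetical data:
       a vulnerability is "present" in a release if it was introduced in or
       before that release and only fixed in a later one */
    #include <stdio.h>

    struct vuln {
      int introduced; /* index of the first release that shipped the flaw */
      int fixed;      /* index of the release that fixed it */
      int severity;   /* 0=low, 1=medium, 2=high, 3=critical */
    };

    int main(void)
    {
      struct vuln vulns[] = { {0, 3, 1}, {1, 2, 2}, {2, 5, 0} };
      int nvulns = sizeof(vulns) / sizeof(vulns[0]);
      int nreleases = 6;
      int r, v;

      for(r = 0; r < nreleases; r++) {
        int present = 0;
        for(v = 0; v < nvulns; v++)
          if(vulns[v].introduced <= r && r < vulns[v].fixed)
            present++; /* a per-severity counter would drive the colors */
        printf("release %d: %d known vulnerabilities present\n", r, present);
      }
      return 0;
    }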

If we lay all this out in a graph, it becomes an interesting “mountain range” style look. At the end of 2013, we shipped a release that contained no less than (what we now know were) 87 security problems.

In this image we can spot that around 2017, the amount of high severity flaws present in the code decreased and they have been almost extinct since 2019. We also see how the two critical flaws thankfully only existed for brief periods.

However. Recalling that the median time for a vulnerability to exist before getting reported is six years, we know that there is a high probability that at least the rightmost 6-10 years of the graph are going to look different when we redraw this same graph 6-10 years into the future. We simply don’t know how different it will be.

Did we do anything different in the project starting 2017? I have not been able to find any major distinct thing that stands out. We still only had a dozen CI builds but we started fuzzing curl that year. Maybe that is the change that is now visible?

C mistakes

curl is written in C and C is not a memory-safe language. People keep suggesting that we should rewrite it in other languages. In jest and for real. (Spoiler: we won’t rewrite it in any language.)

To get a feel for how much the language itself impacts our set of vulnerabilities, we analyze every flaw and assess if it is likely to have been avoided had we not used C. By manual review. This helps us satisfy our curiosity. Let me be clear that the mistakes are still ours and not because of the language. They are our mistakes that the language did not stop or prevent.

To also get a feel for how or if this mistake rate changes over time, I decided to use the same mountain layout as the previous graph: iterate over all releases and this time count the vulnerabilities they had, but separate them only between C mistakes and non-C mistakes. In the graph, the number of C mistakes is shown in a red-brown shade.

C mistakes among the vulnerabilities present in code

The dotted line shows the share of the total that is C mistakes, and the Y axis for that is on the right side.

Again, since it takes six years to get half of the reports, we must treat at least the rightmost side of the graph as temporary, as it will change going forward.

The trend looks like we are reducing the share of C bugs though. I don’t think there is anything that suggests that such bugs would be harder to detect than others (quite the opposite actually), so even if we know the graph will change, we can probably say with some certainty that the C mistake rate has indeed been reduced over the last six to seven years? (See also writing C for curl on how we work consciously on this.)

Do we improve?

I think (hope?) we are, even if the graphs are still not reliably showing this. We can come back here in 2030 or so and verify. It would be annoying if we weren’t.

We do much more testing than ever: more test cases, more CI jobs with more build combinations, using more and better analyzer tools. Combined with concerted efforts to make us write better code that helps us reduce mistakes.

keeping tabs on curl’s memory use

One of the harder things to look out for in a software project is slow or gradual decay over a long period of time. Like if we gradually make a library 1% slower or use 2% more memory every other month.

Sometimes it is totally acceptable to make code slower and use more memory because everything we do is a balance and sometimes we want new features or improved performance that might have to use more memory etc.

We don’t want the growth or slowing down to happen without it being an explicit decision and known trade-off. If we know what the trade-off is, we can reconsider and turn down a feature because we deem the cost too high. Or we accept it because the feature is useful.

In the curl project we make a concerted effort to keep memory use and allocations to a minimum and we are proud of our work. But we also continuously try to encourage and involve more contributors and it is easy to sometimes slip and do something in the code that maybe is not the wisest idea – memory wise.

Memory

In curl we have recently introduced a number of different checks to help us remain aware of the exact memory allocation and use situation.

An added complication for us is that curl builds and runs on numerous architectures, with lots of features on and off and with different sets of third party libraries. It means that internal struct sizes are rarely exactly the same in two different builds, and code paths differ and may allocate data differently. We must make all memory limit checks with a certain amount of flexibility and margin.

Per test-case

We have introduced a system where we can specify exact limits for a single test case: this test may not do more than N allocations and it may not have more than Z bytes allocated concurrently.

We do this in debug builds only, where we have wrapper functions for all memory functions used in curl, so doing this accounting is quite easy.

The idea is to set fairly strict memory limits in a number of selected typical test cases. We don’t use them in all test cases because when we in the future deem we want to allow increased memory use, it could easily become inconvenient and burdensome.

There are also default limits brought with this, so that tests that really need many allocations (more than 1,000) or an unusually large amount of memory (more than 1MB concurrently) have to declare that in the test case or fail because of the suspicious behavior.
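
For illustration only, this is roughly what such accounting wrappers can look like. The names and details below are made up for this sketch (and it ignores strict alignment concerns); it is not curl’s actual debug-build implementation:

    /* sketch: malloc/free wrappers that count allocations and track peak use */
    #include <stdlib.h>
    #include <string.h>

    static size_t num_allocs;  /* total number of allocations made */
    static size_t bytes_now;   /* bytes currently allocated */
    static size_t bytes_peak;  /* highest concurrent amount seen */

    void *counting_malloc(size_t size)
    {
      /* store the size in front of the returned area so free can account it */
      unsigned char *p = malloc(size + sizeof(size_t));
      if(!p)
        return NULL;
      memcpy(p, &size, sizeof(size_t));
      num_allocs++;
      bytes_now += size;
      if(bytes_now > bytes_peak)
        bytes_peak = bytes_now;
      return p + sizeof(size_t);
    }

    void counting_free(void *ptr)
    {
      if(ptr) {
        unsigned char *p = (unsigned char *)ptr - sizeof(size_t);
        size_t size;
        memcpy(&size, p, sizeof(size_t));
        bytes_now -= size;
        free(p);
      }
    }

    /* a test harness can then verify, per test case, that num_allocs and
       bytes_peak stay below the limits that the test case declares */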

Primary struct sizes

A second size check was added in a new dedicated test case: it verifies that a number of important internal structs are sized within their allowed limits.

Keeping such struct sizes in check is important because we allocate a certain struct for each easy handle, each multi handle and for each concurrent connection etc. Because applications sometimes want to use a lot of those (from hundreds to several thousands), it is important that we keep them small.

This new test case makes sure that we don’t accidentally enlarge these structs and make users suffer. Maybe as a secondary effect, we can also use this test case and come back in ten years and see how much the sizes changed.
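
A stripped down sketch of what such a check can look like follows here. The struct names and limits are hypothetical and only meant to show the principle of keeping sizes under a limit with some margin per build:

    /* sketch: verify that selected structs stay within their allowed limits */
    #include <stdio.h>

    struct easy_handle_example { char blob[96]; };  /* stand-in struct */
    struct connection_example  { char blob[160]; }; /* stand-in struct */

    #define CHECK_SIZE(type, max)                                        \
      do {                                                               \
        if(sizeof(type) > (size_t)(max)) {                               \
          fprintf(stderr, "sizeof(" #type ") = %zu exceeds %zu\n",       \
                  sizeof(type), (size_t)(max));                          \
          errors++;                                                      \
        }                                                                \
      } while(0)

    int main(void)
    {
      int errors = 0;
      /* the limits include margin for differences between builds */
      CHECK_SIZE(struct easy_handle_example, 128);
      CHECK_SIZE(struct connection_example, 192);
      return errors ? 1 : 0;
    }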

Memory allocated by others

While we work hard on reducing and keeping curl’s own memory use in check, curl also normally uses a number of third party libraries for fundamental parts of its operations: for TLS, compression and more. The memory monitoring and checks I write about in this post are however explicitly designed and intended to not check or include memory allocated and used by such third parties because we cannot easily affect them. It is up to every such library’s dev team to work on their code towards their own goals that may not be the same as ours.

This is of course frustrating at the same time. Downloading https://curl.se/ using the curl tool uses around 134 allocations done from curl and libcurl code. If curl is built with OpenSSL 3.5.0, the total number of allocations such a command performs is over 54,000. Down from OpenSSL 3.4.1 which used over 200K!

Different TLS libraries clearly have totally different characteristics here. Rustls for example performed the same simple use case needing just 2,176 allocations and a much smaller peak usage at the same time.

My friends working on wolfSSL have several different configure options to tweak and optimize the malloc patterns. The full build I tested with used more allocations than OpenSSL 3.5.0 but less than half the peak amount.
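
For reference, the kind of simple transfer being compared above corresponds roughly to this minimal libcurl program. The measurements were made with the curl command line tool, which does quite a bit more, so treat this only as an approximation of the use case:

    /* a minimal single transfer of https://curl.se/, body written to stdout */
    #include <curl/curl.h>

    int main(void)
    {
      CURLcode res = CURLE_OK;
      CURL *curl;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/");
        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return (int)res;
    }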

Still worth it

I am a strong believer in each project making their best and keeping their own backyard clean and tidy.

Sure, curl does less than 0.3% of the allocations by itself when downloading https://curl.se using the latest OpenSSL version for TLS. This is still not a reason for us to be sloppy or to lower our guard rails. Instead I hope that we can lead by example.

This is what makes us proud as engineers and it makes our users trust us and appreciate what we ship.

People can use other TLS libraries. TLS library developers can improve their allocation patterns. And perhaps most importantly: in many cases the number of allocations or amount of used memory do not matter much.

Transfer speed checks next?

We want to add similar checks and verification for transfer speeds but that is an entirely different challenge and something that is being worked on separately from these changes.

Credits

Top image by LoggaWiggler from Pixabay

curl user survey 2025 analysis

I’m pleased to announce that once again I have collected the results, generated the graphs and pondered over conclusions to make after the annual curl user survey.

Get the curl user survey 2025 analysis here

Take-aways

I don’t think I spoil it too much if I say that there isn’t much drastic news in this edition. I summed up ten key findings from it, but they are all more or less expected:

  1. Linux is the primary curl platform
  2. HTTPS and HTTP remain the most used protocols
  3. Windows 11 is the most used Windows version people run curl on
  4. 32 bit x86 is used by just 7% of the users running curl on Windows
  5. all supported protocols are used by at least some users
  6. OpenSSL remains the most used TLS backend
  7. libssh2 is the most used SSH backend
  8. 85% of respondents scored curl 5 out of 5 for “security handling”
  9. Mastodon is a popular communication channel, and is wanted more
  10. The median used curl version is just one version away from the latest

On the process

Knowing that it is quite a bit of work, it took me a while just to get started this time – but when I finally did, I decided to go about it a little differently this year.

This time, the twelfth time I faced this task, I converted the job into a programming challenge. I took it upon myself to generate all graphs with gnuplot and write the entire document using markdown (and write suitable glue code for everything necessary in between). This way, it should be easier to reuse large portions of the logic and framework in future years, and it also helped me generate all the graphs in a more consistent and streamlined way.

The final report could then eventually be rendered into single page HTML and PDF versions with pandoc; using 100% Open Source and completely avoiding the use of any word processor or similar. Pretty nice.

As a bonus, this document format makes it super flexible and easy should we need to correct any mistakes and generate updated follow-up versions etc in a very clean way. Just like any other release.

Get the curl user survey 2025 analysis here

A website section

It also struck me that we never actually created a single good place on the curl website for the survey. I thus created such a section on the site and made sure it features links to all the previous survey reports I have created over the years.

That new website section is what this blog post now points to for the 2025 analysis. This should thus also make it easier for any curious readers to find the old documents.

Get the curl user survey 2025 analysis here

Enjoy!

A family of forks

curl supports getting built with eleven different TLS libraries. Six of these libraries are OpenSSL or forks of OpenSSL. Allow me to give you a glimpse of their differences, similarities and some insights into what it takes to support them all.

SSLeay

It all started with SSLeay. This was the first SSL library I learned existed, and we added the first HTTPS support to curl using this library in the spring of 1998. Apparently the SSLeay project had been started already back in 1995.

This was back in the days we still only had SSL; TLS would come later.

OpenSSL

This project was created (forked) from the ashes of SSLeay in late 1998 and curl supported it already from its start. SSLeay was abandoned.

OpenSSL always had a quirky, inconsistent and extremely large API set (a good chunk of that was inherited from SSLeay), that is further complicated by documentation that is sparse at best and leaves a lot to the users’ imagination and skill to dig through source code to get the last details answered (still today in 2025). In curl we keep getting occasional problems reported with how we use this library even decades in. Presumably this is the same for every OpenSSL user out there.

The OpenSSL project is often criticized for having dropped the ball on performance since they went to version 3 a few years back. They have also been slow and/or unwilling to adopt new TLS technologies like QUIC and ECH.

In spite of all this, OpenSSL has become a dominant TLS library especially in Open Source.

LibreSSL

Back in the days of Heartbleed, LibreSSL forked off and became its own project. They trimmed off things they thought didn’t belong in the library and created their own TLS library API, and a few years in, Apple started shipping curl on macOS built with LibreSSL. They have some local patches on their build to make it behave differently than others.

LibreSSL was late to offer QUIC, it does not support SSLKEYLOGFILE or ECH, and it generally seems to be even slower than OpenSSL to implement new things these days.

curl has worked perfectly with LibreSSL since it was created.

BoringSSL

Forked off by Google in the Heartbleed days. Done by Google for Google, without any public releases, they have cleaned up the prototypes and variable types a lot and were leading the QUIC API push. In general, most new TLS inventions have since been implemented and supported by BoringSSL before the other forks.

Google uses this in Android and other places.

curl has worked perfectly with BoringSSL since it was created.

AmiSSL

A fork or flavor of OpenSSL done for the sole purpose of making it build and run properly on AmigaOS. I don’t know much about it but included it here for completeness. It seems to be more or less a port of OpenSSL for Amiga.

curl works with AmiSSL when built for AmigaOS.

QuicTLS

As OpenSSL dragged their feet and refused to provide the QUIC API the other forks did back in the early 2020s (for reasons I have yet to see anyone explain), Microsoft and Akamai forked OpenSSL and produced QuicTLS, which has since tried to be a light-weight fork that mostly just adds the QUIC API in the same style BoringSSL and LibreSSL support it. Light-weight in the sense that they were tracking upstream closely and did not intend to deviate from it in other ways than the QUIC API.

With OpenSSL 3.5 they finally shipped a QUIC API, but one that is different from the QUIC API the forks (including QuicTLS) provide. I believe this triggered QuicTLS to reconsider their direction going forward, but we are still waiting to see exactly how. (Edit: see below for a comment from Rich Salz about this.)

curl has worked perfectly with QuicTLS since it was created.

AWS-LC

This is a fork off BoringSSL maintained by Amazon. As opposed to BoringSSL, they do actual (frequent) releases and therefore seem like a project even non-Amazon users could actually use and rely on – even though their stated purpose for existing is to maintain a secure libcrypto that is compatible with software and applications used at AWS. Strikingly, they maintain more than “just” libcrypto though.

This fork has shown a lot of activity recently, even in the core parts. Benchmarks done by the HAProxy team in May 2025 show that AWS-LC outperforms OpenSSL significantly.

The API AWS-LC offers is not identical to BoringSSL’s.

curl has worked perfectly with AWS-LC since early 2023.

Family Tree

The family life

Each of these six different forks has its own specifics, APIs and features that also change and vary over their different versions. We continue to support these six forks for now, as people still seem to use them and the maintenance is manageable.

We support all of them using the same single source code with an ever-growing #ifdef maze, and we verify builds using the forks in CI – albeit only with a limited set of recent versions.
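
To give a flavor of what that maze looks like, here is a simplified illustration (not a copy of curl’s actual code) using the identification macros these projects define in their headers:

    /* simplified illustration of telling the OpenSSL family members apart */
    #include <openssl/opensslv.h>

    const char *tls_flavor(void)
    {
    #if defined(OPENSSL_IS_BORINGSSL)
      return "BoringSSL";
    #elif defined(OPENSSL_IS_AWSLC)
      return "AWS-LC";
    #elif defined(LIBRESSL_VERSION_NUMBER)
      return "LibreSSL";
    #else
      return "OpenSSL (or a close fork such as QuicTLS)";
    #endif
    }

The real checks are of course more fine-grained than this, since individual features appear and disappear in specific versions of each fork.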

Over time, the forks seem to be slowly drifting apart more and more. I don’t think it has yet become a concern, but we are of course monitoring the situation and might at some point have to do some internal refactors to cater for this.

Future

I can’t foresee what is going to happen. If history is a lesson, we seem to be heading towards more forks rather than fewer, but every reader of this blog post of course now ponders over how much duplicated effort is spent on all these forks and the implied inefficiencies of that. On the libraries themselves but also on users such as curl.

I suppose we just have to wait and see.

Dropping some TLS laggards

In the curl project we have a long tradition of supporting a range of different third party libraries that provide similar functionality. The person who builds curl needs to decide which of the backends they want to use out of the provided alternatives. For example when selecting which TLS library to use.

This is a fundamental and appreciated design principle of curl. It allows different users to make different choices and priorities depending on their use cases.

Up until May 2025, curl has supported thirteen different TLS libraries. They differ in features, footprint, speed and licenses.

Raising the bar

We implicitly tell users that they can use one of the libraries from this list to get good curl functionality. The libraries we support have met our approval. They passed the tests. They are okay.

As we support a large number of them, we can raise the bar and gradually increase the requirements we set for them to remain approved. For the good of our users. To make sure that the ones we support truly are good quality choices to build upon – ideally for years to come.

TLS 1.3

The latest TLS version is called TLS 1.3 and the corresponding RFC 8446 was published in August 2018, almost seven years ago. While there are no known major problems or security issues with the predecessor version 1.2, a modern TLS library that has not yet implemented support for TLS 1.3 is a laggard. It is behind.

We take this opportunity to raise the bar and say that starting June 2025, curl only supports TLS libraries that support TLS 1.3 (in their modern versions). The first curl release shipping with this change is the pending 8.15.0 release, scheduled for mid July 2025.

This move has been announced, planned and repeatedly communicated for over a year. It should not come as a surprise, even if I have no doubt that it will be considered one by some.

This makes sure that users and applications that decide to lean on curl are more future-proof. We no longer recommend using one of the laggards.
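
On a related note, applications can also opt in to requiring TLS 1.3 for their own transfers at run-time, with backends that support it, using the CURLOPT_SSLVERSION option. A minimal sketch:

    /* refuse to negotiate anything older than TLS 1.3 for this transfer */
    #include <curl/curl.h>

    int main(void)
    {
      CURLcode res = CURLE_OK;
      CURL *curl = curl_easy_init();

      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                         (long)CURL_SSLVERSION_TLSv1_3);
        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return (int)res;
    }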

Removed

This action affects these two specific TLS backends:

  • BearSSL
  • Secure Transport

BearSSL

This embedded and small footprint focused library is probably best replaced by wolfSSL or mbedTLS.

Secure Transport

This is a native library in Apple operating systems that has been deprecated by Apple themselves for a long time. There is no obvious native replacement for this, but we would probably recommend either wolfSSL or an OpenSSL fork. Apple themselves have used LibreSSL for their curl builds for a long time.

The main feature users might miss from Secure Transport that is not yet provided by any other backend is the ability to use the native CA store on the Apple operating systems – iOS, macOS etc. We expect this feature to get implemented for other TLS backends soon.

Network framework

On Apple operating systems, there is a successor to Secure Transport: the Network framework. This is however much more than just a TLS layer and because of their design decisions and API architecture it is totally unsuitable for curl’s purposes. It does not expose/use sockets properly and the only way to use it would be to hand over things like connecting, name resolving and parts of the protocol management to it, which is totally unacceptable and would be a recipe for disaster. It is therefore highly unlikely that curl will again have support for a native TLS library on Apple operating systems.

Eleven remaining TLS backends in curl

In the order we added them.

  1. OpenSSL
  2. GnuTLS
  3. wolfSSL
  4. SChannel
  5. libressl – an OpenSSL fork
  6. BoringSSL – an OpenSSL fork
  7. mbedTLS
  8. AmiSSL – an OpenSSL fork
  9. rustls
  10. quictls – an OpenSSL fork
  11. AWS-LC – an OpenSSL fork

Eight removed TLS backends

With these two new removals, the set of TLS libraries we have removed support for over the years are, in the order we removed them:

  1. QsoSSL
  2. axTLS
  3. PolarSSL
  4. MesaLink
  5. NSS
  6. gskit
  7. BearSSL
  8. Secure Transport

Going forward

Currently we have no plans for removing support for any other TLS backends, but we of course reserve the right to do so when we feel the need, for the good of the project and our users.

We similarly have no plans to add support for any additional TLS libraries, but if someone would bring such work to the project for one of the few remaining quality TLS libraries that exist that curl does not already support, then we would most probably welcome such an effort.

What we can’t measure

The curl project is an independent Open Source project. Our ambition is to do internet transfers right and securely with the features “people” want. But how do we know if we do this successfully or not?

Possibly one rough way to measure whether users are happy would be to know if the number of users goes up or down.

How do we know?

Number of users

We don’t actually know how many users we have – which devices, tools and services are powered by our code. We don’t know how many users install curl. We also don’t know how many install it and then immediately uninstall it again because there is something about it they don’t like.

Most of our users install curl and libcurl from a distribution, unless they already had it installed there from the beginning without them having to do anything. They don’t download anything from us. Most users likely never visit our website for any purpose.

No telemetry nor logs

We cannot do and will never try to do any kind of telemetry in the command line tool or library, so there is no automated way we can actually know how much any of them are used unless we are told explicitly.

We can search the web, guess and ask around.

Tarball downloads

We can estimate how many people download the curl release tarballs from the website every month, but that is a nearly meaningless number. What does over a million downloads per month mean in this context? Presumably a fair share of these are just repeated CI jobs.

A single download of a curl tarball can be used to build curl for a long time, for countless products, and get installed in several billions of devices – or never get used anywhere. Or somewhere in between. We will never know.

GitHub

Our GitHub repository has a certain number of stars. This number does not mean anything, as just a random subset of developers ever see it, and just some of those decide to do the rather meaningless act of starring it. The git repository has been forked on GitHub several thousand times but that’s an almost equally pointless number.

We can get stats for how often our source code git repository is cloned, but then again that number probably gets heavily skewed as CI use of it goes up and down.

Binary downloads

We offer curl binaries for Windows, but since we run a website entirely without logs, those downloads are bundled with the tarballs in our rough stats. We only know how many objects in the 1M-10M size range are downloaded over a period of time. Besides, Windows ships with curl bundled so most Windows users never download anything from us.

We provide curl containers and since they are hosted by others, we can get some “pull” numbers. They mostly tell us people use the containers – but growing and shrinking trends don’t help us much as we don’t know who or why.

Ecosystems

Because libcurl is a fairly low-level C library, it is usually left outside of all ecosystems. With most infrastructure tooling for listing, counting and tracking dependencies etc, libcurl is simply left out and invisible. As if it is not actually used. Presumably it is just assumed to be part of the operating system or something.

These tools are typically done for the Python, Node, Java, Rust, Perl, etc ecosystems where dependencies are easy to track via their package systems. Therefore, we cannot easily check how many projects or products depend on libcurl with these tools. Because that number would be strangely low.

Users

I try to avoid talking about number of users because for curl and libcurl I can’t really tell what a user is. curl is used directly by users, sure, but it is also used in countless scripts that run without a user directly running it.

libcurl is used many magnitudes more than the curl tool, and it is a component built into devices, tools and services that often operate independently of any user being present.

Installations

I tend to make my (wild) guesses about the number of (lib)curl installations even though that is also highly error-prone.

I don’t know even nearly all the tools, games, devices and services that use libcurl because most of them never tell me or anyone else. They don’t have to. If we find out while searching the web or someone points us to a credit mention then we know. Otherwise we don’t.

I don’t know how many of those libcurl using applications exist in the world. New versions come, old versions die.

The largest volume libcurl users are most probably the mobile phones: libcurl is part of the operating systems in Apple’s iOS and in both Google’s and Samsung’s default Android setup. Probably in a few of the other popular Androids as well.

Since the libcurl API is not exposed by the mobile phone operating systems, a large amount of mobile phone applications subsequently build their own libcurl and ship with their apps, on both iOS and Android. This way, a single mobile phone can easily contain a dozen different libcurl installations, depending on exactly what set of apps that are used.

There are an estimated seven billion smartphones and one billion tablets in the world. Do they all have five applications on average that bundle libcurl? Who knows. If they do, that makes roughly eight billion devices times six installations each (five bundled copies plus the one in the operating system).

Also misleading

Staring at and focusing on that outrageously large number is also complicated and may not be a particularly good indicator that we are on the right path. Those ten or perhaps forty-eight billion libcurl installations are controlled and done by basically just a handful of applications and companies. Should some of them switch over to an alternative, the number would dwindle immediately. And similarly, if we got twice that number of new users but on low volume installations (compared to smart phones everything is low volume), the total number of installations would not really change, but we might have more satisfied users.

Maybe the best indicator of us keeping on the right track is the number of different users or applications that use libcurl – and then we would count Android, iOS and the mobile YouTube application as three. Of course we have no means to even guess how many different users there are. That is also a very time-specific question, as maybe there are a few new ones since yesterday and tomorrow a few existing users may ditch libcurl for something else.

We just don’t know and we can’t tell. With no expectations of this to change.

Success

In many ways this is of course a success beyond our wildest dreams and a luxury position many projects only dream of. Don’t read this blog post as a complaint in any way. It just describes a challenge and reality.

The old fashioned way

With no way to automatically or even half-decently guess how we are doing, we instead do it the old way. We rely on users to tell us what they think. We work on issues, we respond to questions and we do an annual survey. We try to be open to feedback and listen to how people and users want modern internet transfers done.

We make an effort to ship quality products and run a tight ship. To score top marks in each and every way you can evaluate a software project and our products.

Hopefully this will keep us on the right track. Let me know if you ever think we veer off.