The annual curl users and developers meeting, curl up, takes place May 23-24 2026 in Prague, Czechia.
We are in fact returning to the same city and the exact same venue as in 2025. We liked it so much!
curl up
This is a cozy and friendly event that normally attracts around 20-30 attendees. We gather in a room over a weekend and we talk curl. The agenda is usually set up with a number of talks across the two days, and each talk ends with a follow-up Q&A and discussion session. So no big conference thing, just a bunch of friends around a really large table. Over a weekend.
Anyone is welcome to attend – for free – and everyone is encouraged to submit a talk proposal – anything that is curl and Internet transfer related goes.
We make an effort to attract and lure the core curl developers and the most active contributors of recent years into the room. We do this by reimbursing their travel and hotel expenses.
Agenda
The agenda is a collaborative effort and we are going to work on putting it together from now all the way until the event, in order to make sure we make the best of the weekend and we get to talk to and listen to all the curl related topics we can think of!
Meeting up in the real world as opposed to doing video meetings helps us get to know each other better, allows us to socialize in ways we otherwise never can do and in the end it helps us work better together – which subsequently helps us write better code and produce better outcomes!
It also helps us meet and welcome newcomers and casual contributors. Showing up at curl up is an awesome way to dive into the curl world wholeheartedly and in the deep end.
Sponsor
Needless to say this event costs money to run. We pay our top people to come, we pay for the venue and pay for food.
We would love to have your company mentioned as a top sponsor of the event, or perhaps sponsor a social dinner on the Saturday? Get in touch and let’s get it done!
Attend!
Everyone is welcome and encouraged to attend – at no cost. We only ask that you register in advance (the registration is not open yet).
Video
We always record all sessions on video and make them available after the fact. You can catch up on previous years’ curl up sessions on the curl website’s video section.
We also live-stream all the sessions on curl up during both days. To be found on my twitch channel: curlhacker.
Friendly
Our events are friendly to everyone. We abide by the code of conduct and we have never had anyone come even close to violating it.
When we announced the end of the curl bug-bounty at the end of January 2026, we simultaneously moved over and started accepting curl security reports on GitHub instead of its previous platform.
This move turns out to have been a mistake and we are now undoing that part of the decision. The reward money is still gone, there is no bug-bounty, no money for vulnerability reports, but we return to accepting and handling curl vulnerability and security reports on Hackerone. Starting March 1st 2026, this is now (again) the official place to report security problems to the curl project.
This zig-zagging is unfortunate but we do it with the best of intentions. In the curl security team we were naively thinking that since so many projects are already using this setup it should be good enough for us too since we don’t have any particular special requirements. We wrongly thought. Now I instead question how other Open Source projects can use this. It feels like an area and use case for Open Source projects that is under-focused: proper, secure and efficient vulnerability reporting without bug-bounty.
What we want from a security reporting system
To illustrate what we are looking for, I made a little list that should show that we’re not looking for overly crazy things.
Incoming submissions are reports that identify security problems.
The reporter needs an account on the system.
Submissions start private; only accessible to the reporter and the curl security team
All submissions must be disclosed and made public once dealt with. Both correct and incorrect ones. This is important. We are Open Source. Maximum transparency is key.
There should be a way to discuss the problem amongst security team members, the reporter and per-report invited guests.
It should be possible to post security-team-only messages that the reporter and invited guests cannot see
For confirmed vulnerabilities, an advisory will be produced that the system could help facilitate
If there’s a field for CVE, make it possible to provide our own. We are after all our own CNA.
Closed and disclosed reports should be clearly marked as invalid/valid etc
Reports should have a tagging system so that they can be marked as “AI slop” or other terms for statistical and metric reasons
Abusive users should be possible to ban/block from this program
Additional (customizable) requirements for the privilege of submitting reports is appreciated (rate limit, time since account creation, etc)
What’s missing in GitHub’s setup?
Here is a list of nits and missing features we stumbled over on GitHub that, had we figured them out ahead of time, might have made us go about this differently. This list might interest fellow maintainers having the same thoughts and ideas we had. I have provided this feedback to GitHub as well – to make sure they know.
GitHub sends the whole report over email/notification with no way to disable this. SMTP and email are known to be insecure and cannot assure end-to-end protection. This risks leaking secrets early to the entire email chain.
We can’t disclose invalid reports (and make them clearly marked as such)
Per-repository default collaborators on GitHub Security Advisories is annoying to manage, as we now have to manually add the security team for each advisory or have a rather quirky workflow scripting it. https://github.com/orgs/community/discussions/63041
We can’t edit the CVE number field! We are a CNA, we mint our own CVE records so this is frustrating. This adds confusion.
We want to (optionally) get rid of the CVSS score + calculator in the form as we actively discourage using those in curl CVE records
The lack of working CI jobs in private forks means we will effectively not use such forks, but this is not a big obstacle for us because of our vulnerability work process. https://github.com/orgs/community/discussions/35165
No “quote” in the discussions? That looks… like an omission.
We want to use GitHub’s security advisories as the report to the project, not as the final advisory (we write that ourselves), which might get confusing: even for the confirmed ones, the project advisories (hosted elsewhere) are the official ones, not the ones on GitHub.
No advisory count is displayed next to “security” up in the tabs, like for issues and pull requests. This makes it hard to see progress/updates.
When looking at an individual advisory, there is no direct button/link to go back to the list of current advisories
In an advisory, you can only “report content”, there is no direct “block user” option like for issues
There is no way to add private comments for the team-only, as when discussing abuse or details not intended for the reporter or other invited persons in the issue
There is a lack of short (internal) identifier or name per issue, which makes it annoying and hard to refer to specific reports when discussing them in the security team. The existing identifiers are long and hard to differentiate from each other.
Quite weirdly, you cannot get @nick completion help in comments for people who were added to the advisory via a team you added to it?
There are no labels, like for issues and pull requests, which makes it impossible for us to for example mark the AI slop ones or other things, for statistics, metrics and future research
Email?
Sure, we could switch to handling them all over email but that also has its set of challenges. Including:
Hard to keep track of the state of each current issue when a number of them are managed in parallel. Even just to see how many cases are still currently open or in need of attention.
Hard to publish and disclose the invalid ones, as they never cause an advisory to get written and we rather want the initial report and the full follow-up discussion published.
Hard to adapt to or use a reputation system beyond just the boolean “these people are banned”. I suspect that we over time need to use more crowdsourced knowledge or reputation based on how the reporters have behaved previously or in relation to other projects.
Onward and upward
Since we dropped the bounty, the inflow tsunami has dried up substantially. Perhaps partly because of our switch over to GitHub? Perhaps it just takes a while for all the sloptimists to figure out where to send the reports now, and perhaps by going back to Hackerone we again open the gates for them? We just have to see what happens.
We will keep iterating and tweaking the program, the settings and the hosting providers going forward to improve. To make sure we ship a robust and secure set of products, and that the team doing so can keep doing exactly that.
Gitlab, Codeberg and others are GitHub alternatives and competitors, but few of them offer this kind of security reporting feature. That makes them bad alternatives or replacements for us for this particular service.
Last spring I wrote a blog post about our ongoing work in the background to gradually simplify the curl source code over time.
This is a follow-up: a status update of what we have done since then and what comes next.
In May 2025 I had just managed to get the worst function in curl down to complexity 100, and the average score of all curl production source code (179,000 lines of code) was at 20.8. We had 15 functions still scoring over 70.
Almost ten months later we have reduced the most complex function in curl from 100 to 59. Meaning that we have simplified a vast number of functions. Done by splitting them up into smaller pieces and by refactoring logic. Reviewed by humans, verified by lots of test cases, checked by analyzers and fuzzers.
The current 171,000 lines of code now have an average complexity of 15.9.
Complexity
The complexity score in this case is just the cold and raw metric reported by the pmccabe tool. I decided to use that as the absolute truth, even if of course a human could at times debate and argue about its claims. It makes it easier to just obey the tool, and it quite frankly does a decent job at this, so it’s not a problem.
How to simplify
In almost all cases the main problem with complex functions is that they do a lot of things in a single function – too many – where the functionality performed could or should rather be split into several smaller sub functions. In almost every case it is also immediately obvious that when splitting a function into two, three or more sub functions with smaller and more specific scopes, the code gets easier to understand and each smaller function is subsequently easier to debug and improve.
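As a toy illustration of the idea (hypothetical Python, not actual curl code, which is C): one branchy function split into smaller helpers with more specific scopes, keeping the behavior identical while lowering the complexity score of each individual function.

```python
# Hypothetical illustration of splitting a complex function - not curl code.
# The monolithic version packs all branching into one function.

def parse_port_monolithic(spec):
    """Parse 'host:port' with a default port of 80. All logic in one place."""
    if not spec:
        return None
    if ":" in spec:
        host, _, port = spec.rpartition(":")
        if not host:
            return None
        if not port.isdigit():
            return None
        num = int(port)
        if num < 1 or num > 65535:
            return None
        return (host, num)
    return (spec, 80)

# The same behavior, split up: each function now has a smaller scope
# and a lower cyclomatic complexity score, and is easier to test alone.

def _valid_port(port):
    return port.isdigit() and 1 <= int(port) <= 65535

def parse_port_split(spec):
    if not spec:
        return None
    if ":" not in spec:
        return (spec, 80)
    host, _, port = spec.rpartition(":")
    if host and _valid_port(port):
        return (host, int(port))
    return None
```

Both functions accept and reject exactly the same inputs; only the structure differs.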
Development
I don’t know how far we can take the simplification and what the ideal average complexity score of the curl code base might be. At some point it becomes counter-effective: making functions even smaller just makes it harder to follow code flows and absorb the proper context into your head.
Graphs
To illustrate our simplification journey, I decided to render graphs with a date axis starting at 2022-01-01 and ending today. Slightly over four years, representing a little under 10,000 git commits.
First, a look at the complexity of the worst-scoring function in curl production code over the last four years. Comparing with P90 and P99.
The most complex function in curl over time
Identifying the worst function might not say too much about the code in general, so another check is to see how the average complexity has changed. This is calculated like this:
All functions get a complexity score by pmccabe
Each function has a number of lines
For all functions, add function-score × function-length to a total complexity score, and in the end divide that total by the total number of lines used by all functions. Also do the same for a median score.
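The line-weighted average described above can be sketched like this (the scores and line counts below are made-up illustration numbers, not real pmccabe output for curl):

```python
# Sketch of the line-weighted average complexity calculation described above.
# Input: (complexity_score, line_count) per function - hypothetical numbers,
# not real pmccabe output.
import statistics

functions = [
    (59, 120),  # most complex function: score 59, 120 lines
    (12, 40),
    (3, 10),
    (8, 25),
]

total_weighted = sum(score * lines for score, lines in functions)
total_lines = sum(lines for _, lines in functions)
average = total_weighted / total_lines

# A line-weighted median: the score of the "middle" line when every
# line carries its function's complexity score.
per_line_scores = []
for score, lines in functions:
    per_line_scores.extend([score] * lines)
median = statistics.median(per_line_scores)

print(round(average, 1), median)  # 39.9 59
```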
Average and median complexity per source code line in curl, over time.
When 2022 started, the average was about 46 and as can be seen, it has been dwindling ever since, with a few steep drops when we have merged dedicated improvement work.
One way to complement the average and median lines, to offer a better picture of the state, is to investigate the complexity distribution throughout the source code.
How big a portion of the curl source code is how complex
This reveals that the most complex quarter of the code in 2022 has since been simplified. Back then 25% of the code scored above 60, and now all of the code is below 60.
It also shows that during 2025 we managed to clean up all the dark functions, meaning the end of 100+ complexity functions. Never to return, or at least that is the plan.
Does it matter?
We don’t really know. We believe less complex code is generally good for security and code readability, but it is probably still too early for us to be able to actually measure any particular positive outcome of this work (apart from fancy graphs). Also, there are many more ways to judge code than by this complexity score alone. Like having sensible APIs both internal and external and making sure that they are properly and correctly documented etc. The fact that they all interact together and they all keep changing, makes it really hard to isolate a single factor like complexity and say that changing this alone is what makes an impact.
Additionally: maybe just the refactor itself and the attention to the functions when doing so either fix problems or introduce new problems, that is then not actually because of the change of complexity but just the mere result of eyes giving attention on that code and changing it right then.
Maybe we just need to allow several more years to pass before any change from this can be measured?
The title of my ending keynote at FOSDEM February 1, 2026.
As the last talk of the conference, at 17:00 on the Sunday, lots of people had already left, and presumably many of those remaining were quite tired and ready to call it a day.
Still, the 1500 seats in Janson got occupied and there was even a group of more people outside wanting to get in that had to be refused entry.
The video recording
Thanks to the awesome FOSDEM video team, the recording was made available this quickly after the presentation.
In January 2025 I received the European Open Source Achievement Award. The physical manifestation of that prize was a trophy made of translucent acrylic (or something similar). The blog post above has a short video where I show it off.
In the year that has passed since, we have established an organization for how to do the awards going forward, in the European Open Source Academy, and we have arranged the creation of actual medals for the awardees.
That was the medal we gave the award winners last week at the award ceremony where I handed Greg his prize.
I was not prepared for it, but as a direct consequence I was handed a medal this year, in recognition of the award I got last year, because now there is a medal. A retroactive medal, if you wish. It felt almost like getting the award again. An honor.
The box · The backside · Front
The medal design
The medal is made in a shiny metal, roughly 50mm in diameter. In the middle of it is a modern version (with details inspired by PCB looks) of the Yggdrasil tree from old Norse mythology – the “World Tree”. A source of life, a sacred meeting place for gods.
In a circle around the tree are twelve stars, to visualize the EU and European connection.
On the backside, the year and the name are engraved above an EU flag, and the same circle of twelve stars is used there as a margin too, like on the front side.
The medal has a blue and white ribbon, to enable it to be draped over the head and hung from the neck.
The box is a sturdy thing in a dark blue velvet-like covering with European Open Source Academy printed on it next to the academy’s logo. The same motif is also on the inside of the top part of the box.
Many
I do feel overwhelmed and I acknowledge that I have received many medals by now. I still want to document them and show them in detail to you, dear reader. To show appreciation; not to boast.
I had the honor and pleasure to hand over this prize to its first real laureate during the award gala on Thursday evening in Brussels, Belgium.
This annual award ceremony is one of the primary missions of the European Open Source Academy, of which I have been the president since last year.
As an academy, we hand out awards and recognition to multiple excellent individuals who help make Europe the home of excellent Open Source. Fellow esteemed academy members joined me at this joyful event to perform these delightful duties.
As I stood on the stage, after a brief video about Greg was shown, I introduced him as this year’s worthy laureate. I have included those words below. Congratulations again, Greg. We are lucky to have you.
Me introducing Greg Kroah-Hartman
There are tens of millions of open source projects in the world, and there are millions of open source maintainers. Many more would count themselves as at least occasional open source developers. These are the quiet builders of Europe’s digital world.
When we work on open source projects, we may spend most of our waking hours deep down in the weeds of code, build systems, discussing solutions, or tearing our hair out because we can’t figure out why something happens the way it does, as we would prefer it didn’t.
Open source projects can work a little like worlds on their own. You live there, you work there, you debate with the other humans who similarly spend their time on that project. You may not notice, think, or even care much about other projects that similarly have a set of dedicated people involved. And that is fine.
Working deep in the trenches this way makes you focus on your world and maybe remain unaware and oblivious to champions in other projects. The heroes who make things work in areas that need to work for our lives to operate as smoothly as they, quite frankly, usually do.
Greg Kroah-Hartman, however, our laureate of the Prize for Excellence in Open Source 2026, is a person whose work does get noticed across projects.
Our recognition of Greg honors his leading work on the Linux kernel and in the Linux community, particularly through his work on the stable branch of Linux. Greg serves as the stable kernel maintainer for Linux, a role of extraordinary importance to the entire computing world. While others push the boundaries of what Linux can do, Greg ensures that what already exists continues to work reliably. He issues weekly updates containing critical bug fixes and security patches, maintaining multiple long-term support versions simultaneously. This is work that directly protects billions of devices worldwide.
It’s impossible to overstate the importance of the work Greg has done on Linux. In software, innovation grabs headlines, but stability saves lives and livelihoods. Every Android phone, every web server, every critical system running Linux depends on Greg’s meticulous work. He ensures that when hospitals, banks, governments, and individuals rely on Linux, it doesn’t fail them. His work represents the highest form of service: unglamorous, relentless, and essential.
Without maintainers like Greg, the digital infrastructure of our world would crumble. He is, quite literally, one of the people keeping the digital infrastructure we all depend on running.
As a fellow open source maintainer, Greg and I have worked together in the open source security context. Through my interactions with him and people who know him, I learned a few things:
Greg is competent: a custodian and maintainer of many parts and subsystems of the Linux kernel tree and its development for decades.
Greg has a voice. He doesn’t bow to pressure or take the easy way out. He has integrity.
Greg is persistent. He has been around and done hard work for the community for decades.
Greg is a leader. He shares knowledge, spreads the word, and talks to crowds. In a way that is heard and appreciated. He is a mentor.
An American by origin, Greg now calls Europe his home, having lived in the Netherlands for many years. While on this side of the pond, he has taken on an important leadership role in safeguarding and advocating for the interests of the open source community. This is most evident through his work on the Cyber Resilience Act, through which he has educated and interacted with countless open source contributors and advocates whose work is affected by this legislation.
We — if I may be so bold — the Open Source community in Europe — and yes, the whole world, in fact — appreciate your work and your excellence. Thank you, Greg. Please come on stage and collect your award.
In addition to the medal, Greg was given this funky-looking award “thing” with the tree symbol of the European Open Source Academy.
Daniel to the left, Greg Kroah-Hartman on the right
The whole event
Here is the entire ceremony, from start to finish.
We are doing another curl + distro online meeting this spring in what now has become an established annual tradition. A two-hour discussion, meeting, workshop for curl developers and curl distro maintainers.
The objective for these meetings is simply to make curl better in distros. To make distros do better curl. To improve curl in all and every way we think we can, together.
A part of this process is to get to see the names and faces of the people involved and to grease the machine to improve cross-distro collaboration on curl related topics.
Anyone who feels this is a subject they care about is welcome to join. We aim for the widest possible definition of distro and we don’t attempt to define the term.
The 2026 version of this meeting is planned to take place in the early evening European time, morning west coast US time. With the hope that it covers a large enough amount of curl interested people.
The plan is to do this on March 26, and all the details, planning and discussion items are kept on the dedicated wiki page for the event.
Please add your own discussion topics that you want to know or talk about, and if you feel inclined, add yourself as an intended participant. Feel free to help make this invite reach the proper people.
We introduced curl’s -J option, also known as --remote-header-name back in February 2010. A decent amount of years ago.
The option is used in combination with -O (--remote-name) when downloading data from an HTTP(S) server and instructs curl to use the filename in the incoming Content-Disposition: header when saving the content, instead of the filename of the URL passed on the command line (if provided). That header would later be explained further in RFC 6266.
The idea is that for some URLs the server can provide a more suitable target filename than what the URL contains from the beginning. Like when you do a command similar to:
curl -O -J https://example.com/download?id=6347d
Without -J, the content would be saved in a target output file called ‘download’ – since curl strips off the query part.
With -J, curl parses the server’s response header that contains a better filename; in the example below, fun.jpg.
Content-Disposition: filename="fun.jpg";
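What curl does here is of course implemented in C; the filename extraction can be sketched in Python like this (a simplified illustration, real Content-Disposition parsing has more corner cases):

```python
# Simplified sketch of extracting a filename from a Content-Disposition
# header value - not curl's actual implementation.
import os
import re

def filename_from_content_disposition(header_value):
    """Return the bare filename from a filename="..." or filename=token
    parameter, or None. Directory parts are stripped, mirroring curl's
    precaution against the server smuggling in a path."""
    match = re.search(r'filename\s*=\s*"([^"]*)"', header_value)
    if not match:
        match = re.search(r'filename\s*=\s*([^;\s]+)', header_value)
    if not match:
        return None
    name = os.path.basename(match.group(1).replace("\\", "/"))
    return name or None

print(filename_from_content_disposition('filename="fun.jpg";'))     # fun.jpg
print(filename_from_content_disposition('filename="../../etc/x"'))  # x
```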
But redirects?
The approach mentioned above works pretty well, but has several limitations. One of them is obvious: if the site, instead of providing a Content-Disposition header, only redirects the client to a new URL to download from, curl does not pick up the new name but keeps using the one from the originally provided URL.
This is not what most users want and not what they expect. As a consequence, we have had this potential improvement mentioned in the TODO file for many years. Until today.
We have now merged a change that makes curl with -J pick up the filename from Location: headers, and use that filename if no Content-Disposition header is provided.
This means that if you now rerun a similar command line as the one above, but allow it to follow redirects:
curl -O -J -L https://example.com/download?id=6347d
And that site redirects curl to the actual download URL for the tarball you want to download:
HTTP/1.1 301 Moved Permanently
Location: https://example.org/release.tar.gz
… curl now saves the contents of that transfer in a local file called release.tar.gz.
If there is both a redirect and a Content-Disposition header, the latter takes precedence.
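The selection order can be modeled roughly like this (a simplified sketch, not curl's actual implementation):

```python
# Rough model of the -J filename selection order: Content-Disposition
# wins, then the last Location URL, then the original URL. A sketch,
# not curl's actual (C) implementation.
import os
from urllib.parse import urlsplit

def pick_filename(original_url, location_url=None, disposition_name=None):
    if disposition_name:                # Content-Disposition takes precedence
        return os.path.basename(disposition_name)
    url = location_url or original_url  # else fall back to the redirect target
    path = urlsplit(url).path           # the query part ('?id=...') is stripped
    return os.path.basename(path)

print(pick_filename("https://example.com/download?id=6347d"))
print(pick_filename("https://example.com/download?id=6347d",
                    location_url="https://example.org/release.tar.gz"))
print(pick_filename("https://example.com/download?id=6347d",
                    location_url="https://example.org/release.tar.gz",
                    disposition_name="fun.jpg"))
```

This prints download, release.tar.gz and fun.jpg: the three cases described above.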
The filename is set remotely
Since this gets the filename from the server’s response, you give up control of the name to someone else. This can of course potentially mess things up for you. curl ignores all provided directory names and only uses the filename part.
If you want to save the download in a dedicated directory other than the current one, use --output-dir.
As an additional precaution, using -J implies that curl avoids clobbering (overwriting) any existing file with the same name, unless you also use --clobber.
What name did it use?
Since the final name used for storing the data is selected based on the contents of a header sent by the server, using this option in a scripting scenario introduces a challenge: what filename did curl actually use?
A user can easily extract this information with curl’s -w option and its %{filename_effective} variable. Like this:
curl -O -J -L -w '%{filename_effective}' https://example.com/download?id=6347d
This command line outputs the used filename to stdout.
Tweak the command line further to instead direct that name to stderr or to a specific file etc. Whatever you think works.
Remaining restrictions
The content-disposition RFC mentioned above details a way to provide a filename encoded as UTF-8 using something like the below, which includes a U+20AC Euro sign:
Content-Disposition: attachment; filename*=UTF-8''%e2%82%ac%20rates
curl still does not support this filename* style of providing names. This limitation remains because curl cannot currently convert that provided name into a local filename using the provided characters – with certainty.
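Decoding such a value is mechanical in itself, as the sketch below shows; the hard part curl faces is safely mapping the decoded string onto the local file system’s encoding and rules:

```python
# Sketch of decoding an RFC 5987/6266 extended filename* value, e.g.
# UTF-8''%e2%82%ac%20rates. This only shows the decoding step; it does
# not solve curl's real problem of safely turning the result into a
# local filename.
from urllib.parse import unquote

def decode_ext_value(ext_value):
    # The format is charset'language'percent-encoded-value
    charset, _, encoded = ext_value.split("'", 2)
    return unquote(encoded, encoding=charset or "utf-8")

print(decode_ext_value("UTF-8''%e2%82%ac%20rates"))  # € rates
```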
Room for future improvement!
Ships
This -J improvement ships in curl 8.19.0, coming in March 2026.
There is no longer a curl bug-bounty program. It officially stops on January 31, 2026.
After a few half-baked earlier attempts, in April 2019 we kicked off the first real curl bug-bounty with the help of Hackerone, and while it stumbled a bit at first, I think it has been quite successful.
We attracted skilled researchers who reported plenty of actual vulnerabilities for which we paid fine monetary rewards. We have certainly made curl better as a direct result of this: 87 confirmed vulnerabilities and over 100,000 USD paid as rewards to researchers. I’m quite happy and proud of this accomplishment.
I would like to especially highlight the awesome Internet Bug Bounty project, which has paid the bounties for us for many years. We could not have done this without them. Also of course Hackerone, who has graciously hosted us and been our partner through these years.
Thanks!
How we got here
Looking back, I think we can say that the downfall of the bug-bounty program started slowly in the second half of 2024 but accelerated badly in 2025.
We saw an explosion in AI slop reports combined with a lower quality even in the reports that were not obvious slop – presumably because they too were actually misled by AI but with that fact just hidden better.
Maybe the first five years made it possible for researchers to find and report the low-hanging fruit. In previous years we had somewhere north of 15% of the submissions ending up confirmed vulnerabilities. Starting in 2025, the confirmed rate plummeted to below 5%. Not even one in twenty was real.
The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live.
I have also started to get the feeling that a lot of the security reporters submit reports with a bad faith attitude. These “helpers” try too hard to twist whatever they find into something horribly bad and a critical vulnerability, but they rarely actively contribute to actually improve curl. They can go to extreme efforts to argue and insist on their specific current finding, but not to write a fix or work with the team on improving curl long-term etc. I don’t think we need more of that.
These three bad trends combined make us take this step: the mind-numbing AI slop, humans doing worse than ever, and the apparent will to poke holes rather than to help.
Actions
In an attempt to do something about the sorry state of curl security reports, this is what we do:
We no longer offer any monetary rewards for security reports – no matter the severity. In an attempt to remove the incentives for submitting made-up lies.
We stop using Hackerone as the recommended channel to report security problems. To make the change immediately obvious and because without a bug-bounty program we don’t need it.
We continue to immediately ban and publicly ridicule everyone who submits AI slop to the project.
Maintain curl security
We believe that we can maintain and continue to evolve curl security in spite of this change. Maybe even improve thanks to this, as hopefully this step helps prevent more people pouring sand into the machine. Ideally we reduce the amount of wasted time and effort.
I believe the best and our most valued security reporters still will tell us when they find security vulnerabilities.
Instead
If you suspect a security problem in curl going forward, we advise you to head over to GitHub and submit them there.
Alternatively, you can send an email with the full report to security @ curl.se.
In both cases, the report is received and handled privately by the curl security team. But with no monetary reward offered.
Leaving Hackerone
Hackerone was good to us and they have graciously allowed us to run our program on their platform for free for many years. We thank them for that service.
As we now drop the rewards, we feel it makes a clear cut and displays a clearer message to everyone involved by also moving away from Hackerone as a platform for vulnerability reporting. It makes the change more visible.
Future disclosures
It is probably going to be harder for us to publicly disclose every incoming security report in the same way we have done it on Hackerone for the last year. We need to work out something to make sure that we can keep doing it at least imperfectly, because I believe in the goodness of such transparency.
We stay on GitHub
Let me emphasize that this change does not impact our presence and mode of operation with the curl repository and its hosting on GitHub. We hear about projects having problems with low-quality AI slop submissions on GitHub as well, in the form of issues and pull-requests, but for curl we have not (yet) seen this – and frankly I don’t think switching to a GitHub alternative saves us from that.
Other projects do better
Compared to others, we seem to be affected by the sloppy security reports to a higher degree than the average Open Source project.
With the help of Hackerone, we got numbers of how the curl bug-bounty has compared with other programs over the last year. It turns out curl’s program has seen more volume and noise than other public open source bug bounty programs in the same cohort. Over the past four quarters, curl’s inbound report volume has risen sharply, while other bounty-paying open source programs in the cohort, such as Ruby, Node, and Rails, have not seen a meaningful increase and have remained mostly flat or declined slightly. In the chart, the pink line represents curl’s report volume, and the gray line reflects the broader cohort.
Inbound Report Volume on Hackerone: curl compared to OSS peers
We suspect the idea of getting money for it is a big part of the explanation. It brings in real reports, but also makes it too easy to be annoying, with little to no penalty for the user. The reputation system and available program settings were not sufficient for us to prevent sand from getting into the machine.
The exact reason why we suffer more of this abuse than others remains a subject for further speculation and research.
If the volume keeps up
There is a non-zero risk that our guesses are wrong and that the report volume will keep up even after these changes take effect.
If that happens, we will deal with it then and take further appropriate steps. I prefer not to overdo things or over-plan now for something that ideally never happens.
We won’t charge
People keep suggesting that one way to deal with the report tsunami is to charge security researchers a small amount of money for the privilege of submitting a vulnerability report to us. A curl reporters security club with an entrance fee.
I think that is a worse solution than just dropping the bounty. Some of the reasons include:
Charging people money in an international context is complicated and a maintenance burden.
Dealing with charge-backs, refunds and other complaints and friction adds work.
It would limit who could or would submit issues – including some who actually find legitimate issues.
Maybe we need to do this later anyway, but we stay away from it for now.
Pull requests are less of a problem
We have seen other projects and repositories suffer similar AI-induced problems with pull requests, but this has not been a problem for the curl project. I believe that for PRs we have much better means to weed out bad submissions automatically, since we have tools, tests and scanners that verify such contributions. We do not need to spend any human time on a pull request until it is good enough to get green check-marks from 200 CI jobs.
We never say never. This is now and we might have reasons to reconsider and make a different decision in the future. If we do, we will let you know. These changes are applied now with the hope that they will have a positive effect for the project and its maintainers. If that turns out to not be the outcome, we will of course continue and apply further changes later.
Media
Since I created the pull request updating the bug-bounty information for curl on January 14, almost two weeks before we merged it, various media picked up the news and published articles – long before I posted this blog post.
One of the trickier things in software is gradual degradation: development that slowly moves in the wrong direction over time, never triggering alarms or upsetting users. Then one day you take a closer look and realize that an area that was perfectly fine several years ago no longer is.
Memory use is one of those things. It is easy to gradually add more and larger allocations over time as we add features and make new cool architectural designs.
Memory use
curl and libcurl literally run in billions of installations and it is important to us that we keep memory use and allocation counts to a minimum. libcurl needs to run on small machines and it needs to scale to large numbers of parallel connections without draining available resources.
So yes, even in 2026 it is important to keep allocations small and as few as possible.
A line in the sand
In July 2025 we added a test case to curl's test suite (3214) that simply checks the sizes of fifteen important structs. Each struct has a fixed upper size limit that it may not surpass without causing the test to fail.
Of course we can adjust the limits when we need to, as it might be entirely okay to grow a struct when features and functionality motivate it, but this check makes sure that we do not grow the sizes simply because of a mistake or bad planning.
Do we degrade?
It is of course a question of balance: how much memory is a feature or added performance worth? Every libcurl user probably has their own answer to that, but I decided to take a look at how we do today and compare with data I blogged about five years ago.
curl 7.75.0, the release I compare against here, is a fun reference point because at that time I had given memory use in curl some focused minimization effort. libcurl's memory use was then smaller and more optimized than it had been for almost a decade.
struct sizes
The struct sizes always vary depending on which features are enabled, but in my tests here they are “maximized”, with as many features and backends enabled as possible.
Let’s take a look at three important structs. The multi handle, the easy handle and the connectdata struct. Now compared to then, five years ago.
              8.19.0-DEV (now)   7.75.0 (past)
multi                      816             416
easy                      5352            5272
connectdata                912            1472
As seen in the table, two of the structs have grown and one has shrunk. Let’s see what impact that might have.
If we assume a libcurl-using application doing 10 parallel transfers over 20 concurrent open connections, libcurl five years ago needed:
1472 x 20 + 5272 x 10 + 416 = 82,576 bytes for that
While libcurl in current git needs:
912 x 20 + 5352 x 10 + 816 = 72,576 bytes.
Incidentally that is exactly 10,000 bytes less, five years and many new features later.
This said, part of the reason the structs change is that we move data between them and to other structs. The few mentioned here are not the whole picture.
Downloading a single HTTP 512MB file
curl http://localhost/512M --out-null
Using a bleeding edge curl build, this command line on my 64-bit Linux Debian host makes 107 allocations and needs at most 133,856 bytes.
Five years ago, it needed 131,680 bytes, in a mere 96 allocations.
curl now needs 1.6% more memory for this, done with 11% more allocation calls.
I believe the current amounts are still okay considering we have refactored, developed and evolved the library significantly over the same period.
As a comparison, downloading the same file twenty times in parallel over HTTP/1 using the same curl build needs 2,222 allocations but only a total of 308,613 bytes allocated at peak. Twenty times the number of allocations, but only about 2.3 times the peak memory, compared to the single file download.
Caveat: this measures clear-text HTTP downloads. Almost everything transferred these days uses TLS, and if you add TLS to this transfer, curl itself does only a few additional allocations – but the TLS library involved allocates much more memory and makes many more allocations. I just consider those allocations to be someone else’s optimization work.
Visualized
I generated a few graphs that illustrate memory use changes in curl over time based on what I described above.
The “easy handle” is the handle an application creates, associated with each individual transfer done with libcurl.
curl easy handle size changes over time
The “multi handle” is a handle that holds one or more easy handles. An application either has at least one of these and adds its easy handles to it, or each easy handle keeps one of its own internally.
curl multi handle size changes over time
The “connectdata” is an internal struct for each connection libcurl knows about. A normal application that makes multiple transfers, serially or in parallel, tends to keep at least a few of these around, since libcurl by default maintains a connection pool for reuse by subsequent transfers.
curl connectdata struct size over time
Here is data from the internal tracking of memory allocations done when the curl tool downloads a 512 megabyte file from a locally hosted HTTP server. (Generally speaking, downloading a larger file does not use more memory.)
curl downloading a 512MB file needs this much memory and allocations
Conclusion
I think we are doing alright: neither the struct sizes nor the memory use has degraded. We offer more features and better performance than ever, while keeping memory use at a minimum.