Category Archives: cURL and libcurl

curl and/or libcurl related

curl adds parallel host control

I’m convinced a lot of people have not yet figured out that curl has supported parallel downloads for six years by now.

Provided a practically unlimited number of URLs, curl can be asked to get them in parallel. It then keeps N transfers alive for as long as there are N or more transfers left to complete, where N is a configurable number that defaults to 50.
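
As a quick illustration, that cap is controlled with the existing --parallel-max option; the URL range below is of course just a placeholder:

curl --parallel --parallel-max 10 https://example.com/file[1-100].txt --remote-name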

Concurrently transferring data from potentially a large number of different hosts can drastically shorten transfer times and who doesn’t prefer to complete their download job sooner rather than later?

Limit connections per host

At times however, you may want to do a lot of transfers in parallel for speed, but maybe you prefer to limit how many connections curl uses per hostname among all the URLs?

This per-host limit is a feature libcurl has offered applications for a long time and now the time has come for curl tool users to also enjoy its powers.
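
For the libcurl side of that statement, here is a minimal sketch (error handling omitted) of how an application can set this kind of cap on a multi handle with the long-standing CURLMOPT_MAX_HOST_CONNECTIONS option:

#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURLM *multi = curl_multi_init();

  /* allow at most six connections per host, similar to --parallel-max-host 6 */
  curl_multi_setopt(multi, CURLMOPT_MAX_HOST_CONNECTIONS, 6L);

  /* add easy handles here and drive them with curl_multi_perform()
     or curl_multi_poll() as usual */

  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}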

Per host should perhaps be called per origin if we spoke web lingo, because it rather limits the number of connections to the same protocol + hostname + port number. We call that host here for simplicity.

To set a cap on how many connections curl is allowed to use for each specific server use --parallel-max-host [number].

For example, if you want to download ten million images from this site, but never use more than six connections:

curl --parallel --parallel-max-host 6 https://example.com/[1-10000000].jpg --remote-name

Connections

Pay special attention to the exact term: this limits the number of connections used to each host. If the transfers are done using HTTP/2 or HTTP/3, they can be done using many streams over just one or a few connections so doing 50 or 200 transfers in parallel should still be perfectly doable even with a limited number of connections. Not so much with HTTP/1.
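
To make that concrete, here is a hedged sketch: assuming the server negotiates HTTP/2 and supports multiplexing, a command line like this can keep up to 200 transfers going while never opening more than two connections per host:

curl --parallel --parallel-max 200 --parallel-max-host 2 --http2 https://example.com/img[1-1000].jpg --remote-name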

Ships in 8.16.0

This command line option will become available in the pending curl version 8.16.0 release.

option parsing in curl

We have always had a custom command line option parser in curl. It is fast and uncomplicated and gives us the perfect mix of flexibility and function. It also saves us from importing or using code with another license.

In one aspect it has behaved slightly differently from many other command line parsers: the way it accepts arguments to long options.

Long options are options provided by name, prefixed with two dashes, as opposed to the single-letter short options. Example:

curl --user-agent "curl/2000" https://example.com/

The example above tells curl to use the user agent curl/2000 in the transfer. The argument to the --user-agent option is separated from the option name with a space.

When instead using the short version of the same option, the argument can be specified with or without a space in between:

curl -A curl/2000 https://example.com/

or

curl -Acurl/2000 https://example.com/

What about equals sign?

A common paradigm and syntax style for accepting long options in command line tools is the “equals sign” approach. When you provide an argument to a long option, you append an equals sign followed by the argument directly to the option, with no space. Like this:

curl --user-agent="curl/2000" https://example.com/

This example uses double quotes but they are of course not necessary if there is no space or similar in the argument.

Bridging the gap

To make life easier for future users, curl now also supports this latter style, starting in curl 8.16.0. With this syntax accepted, curl follows a more commonly used convention and should therefore cause fewer surprises, making it easier to write curl command lines.

I emphasize that this change is an improvement for future users, because I really don’t think it is a good idea for most users to switch to this syntax immediately. This is of course because all the older curl versions that are still widely used around the world do not support it.

I think it is better if we wait a year or two until we start using this option style in curl documentation and example command lines. To give time for users to upgrade to a version that has support for it.

Output nothing with --out-null

Downloading data from a remote URL is probably the single most common operation people do with curl.

Often, users add various additional options to the command line to extract information from that transfer while the actually fetched data is not interesting. Sometimes they don’t get accurate meta-data unless the full download is made, sometimes they run performance measurements where the actual content is not important, and so on. Users sometimes have reasons for not saving their downloads.

They do downloads where the actual downloaded content is tossed away. On GitHub alone, we can find almost one million command lines doing such curl invocations.

curl of course offers multiple ways to discard the downloaded data, but maybe the most straightforward one is to write the contents to a null device such as /dev/null on *nix systems or NUL: on Windows. Like this:

curl https://example.com/ --output /dev/null

or using the short option

curl https://example.com/ -o /dev/null

In many cases we can accomplish the same thing with a shell redirect – which also redirects multiple URLs at once:

curl https://example.com/ >/dev/null

Improving nothing

The command line above is perfectly fine, works, and has done so for decades. It does however have two drawbacks:

  1. Lack of portability. curl runs on most operating systems and most options and operations work identically, to the degree that you can often copy command lines back and forth between machines without thinking much about it. Outputting data to /dev/null is however not terribly portable and trying that operation on Windows for example will cause the command line to fail.
  2. Performance. It may not look like much, but completely avoiding writing the data instead of writing it to /dev/null makes benchmarks show a measurable improvement. So if you don’t want the data, why not do the operation faster rather than slower?

The shell redirect approach has the same drawbacks.
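
The same idea applies to libcurl applications: instead of writing to a null device, you can install a write callback that simply throws the data away. A minimal sketch using the documented CURLOPT_WRITEFUNCTION option (the callback name is made up and error handling is omitted):

#include <curl/curl.h>

/* pretend the data was consumed so the transfer continues, but never store it */
static size_t discard(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  (void)ptr;
  (void)userdata;
  return size * nmemb;
}

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *handle = curl_easy_init();
  curl_easy_setopt(handle, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, discard);
  curl_easy_perform(handle);
  curl_easy_cleanup(handle);
  curl_global_cleanup();
  return 0;
}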

Usage

The new option is used as follows. It takes one --out-null occurrence per URL whose output you want to discard.

curl https://example.com/ --out-null

This allows you to, for example, send one to null and save the other:

curl https://example.com/ --out-null https://example.net/ --output save-data

Coming in 8.16.0

This command line option debuts in curl 8.16.0, shipping in September 2025.

Credits

Stefan Eissing brought this option and also benchmarked it.

Carving out msh3

I hope that by now most readers of my blog have understood that curl, and libcurl specifically, is built as a transfer core with a set of different backends plugged in, backends powered by different third party libraries.

The exact set of backends used in a particular build is decided by the person that builds curl.
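
The TLS backend, for example, is picked with configure flags at build time. A hedged sketch of a few of the documented alternatives (exact paths and extra flags vary between systems):

./configure --with-openssl
./configure --with-gnutls
./configure --with-wolfssl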

Which backends curl supports varies over time (and platform). We like adding support for more backends and letting users decide which ones to use, as this allows us to take a survival-of-the-fittest approach. What does not work in the long run, or what isn’t actually used, we can deprecate and remove again. Ideally this helps us select the better ones for the future.

HTTP/3

For the last few years curl has supported the HTTP/3 protocol powered by one out of four different backends:

  1. nghttp3 + ngtcp2
  2. quiche
  3. nghttp3 + OpenSSL-QUIC
  4. msh3 + msquic

(All except the first listed combination are still labeled experimental.)
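
As a hedged sketch, a build using the first combination can be configured roughly like this (assuming a QUIC-capable TLS library and pointing the flags at wherever the libraries are installed):

./configure --with-openssl --with-nghttp3 --with-ngtcp2

Running curl --version on the resulting binary shows which backends it ended up with: the first line lists the libraries (for example nghttp3 and ngtcp2) and the Features line includes HTTP3 when the support is built in.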

Dropping msh3

In this quartet, there is one option that stands out a little: the last one. The msh3 powered backend was brought in and merged into the curl source tree a few years ago with the hope that it would end up a good choice for people on Windows, since it is the only one in the list that can be built to use the native Windows TLS solution Schannel.

Unfortunately, this work was never finalized. It never worked correctly in curl, and the API and architecture of msh3 make it quirky and cumbersome to integrate – and quite frankly we can’t seem to drum up any interest in testing or improving this backend.

As we have three other working backends, all of which can also build and run on Windows, we see no benefit in dragging msh3 along. In fact, there is a cost in maintenance, in keeping the build working, the tests running and so on, that we would rather avoid. In particular since we seem to be doing it for virtually no gain.

I want to stress that I don’t think there is anything wrong with msh3 nor its underlying msquic library. They simply have not been made to work properly in curl.

Updated backend map

The msh3 backend has now been removed from git in the current master branch and this is how the HTTP/3 offering will look in the coming curl 8.16.0 release.

Hello Sprout

Sprout is the name of my new machine that just arrived. The crowd-funded laptop. Since this beauty is graciously sponsored by a large crowd of people I felt I should share a little bit of its journey and entry into my life.

First I needed a name for it, and since it is small and is meant to grow with me a bit, I think Sprout feels apt.

The crowd-funding

Starting the initiative on a Saturday afternoon might not have been the cleverest way to get the widest possible reach, but it seems it did not matter. We reached the goal of 3,500 USD within 90 minutes and people have kept on donating even after that; the counter is now at 7,000 USD. Amazing.

As mentioned: all surplus ends up in the general curl fund and will be used solely and exclusively to cover expenses that benefit and favor curl and its development. That is a promise. The curl fund is also completely open and transparent so everyone who wants to can in fact monitor our finances to verify this.

Specs

I decided to go with a Framework laptop because I like and want to support their concept of modular and upgradable laptops. After the overwhelming funding round, I decided to go with the top of the line AMD CPU alternative they offer, 96GB of RAM and 4TB of storage. This should make the laptop last a while I think.

  • CPU: AMD Ryzen AI 9 HX 370. Up to 5.1 GHz. 12 cores, 24 threads.
  • Graphics (integrated): AMD Radeon 890M. Up to 2.9GHz. 16 Graphics Cores
  • Wifi: AMD RZ717 Wi-Fi 7
  • Display: 13.5″ 2880×1920 120Hz matte display (3:2 ratio)
  • Memory: DDR5-5600 – 96GB (2 x 48GB)
  • Storage: WD_BLACK SN850X NVMe – M.2 2280 – 4TB
  • Laptop Bezel: Framework Laptop 13 Bezel – Black
  • Keyboard: Swedish/Finnish (2nd Gen)
  • Dimensions: 15.85mm x 296.63mm x 228.98mm
  • Weight: 1.3 Kg

Outputs

The laptop has four slots available for ports. I have USB-C, USB-A, HDMI and external Ethernet modules. I bought a few more than four, because I don’t know which exact setup I will prefer and they are interchangeable so I can change them according to the situation I’m in.

Dimensions compared to the old

My old laptop was a Lenovo T470S 14″.

Dimensions: 18.8 mm x 331 mm x 226.8 mm
Weight 1.32 kg

So the new one is 3 mm thinner, 3 cm narrower, and pretty much the same depth (+2 mm) and weight.

Assembling

Ordered without Windows installed (of course), this thing arrived like an IKEA flat-pack and there was some assembly required. The necessary screwdriver comes included and I could complete the task in under ten minutes. Not at all complicated.

Linux

I noticed two different Linux distributions offered as “easy installs” with guides from Framework, but as neither of them was Debian I opted for the more complicated route.

Debian

I downloaded a DVD iso image for Debian testing, copied it onto a USB stick and booted up Sprout with it. The installation went like a breeze and it detected the Wifi networking just fine.

Once the system came up for real without the USB stick, I edited the necessary files and took it up to current Debian Unstable over wifi with no problems.
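
For the curious: “edited the necessary files” roughly means pointing apt at unstable. A hedged sketch, assuming the installer wrote a classic /etc/apt/sources.list (newer installs may use a deb822-style debian.sources file instead):

deb http://deb.debian.org/debian unstable main contrib non-free non-free-firmware

followed by

sudo apt update && sudo apt full-upgrade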

Initial glitches

I experienced some glitches (X or the keyboard or something would stop accepting input after 5-15 minutes of use), which I first thought were due to an older Linux kernel, as friends told me I might need 6.15+ for proper hibernation support and Debian unstable only has a 6.12 one just now. I switched to the Debian experimental kernel (6.16-rc7) but the issue remained. Hm?

I then remembered I hadn’t upgraded the laptop BIOS to its latest version yet, and after having invoked

fwupdmgr refresh --force
fwupdmgr get-updates
fwupdmgr update

and done a reboot, it first seemed to have fixed the problems but I was wrong. Is it X11 related? I have now switched my desktop to Plasma/Wayland to see if it fixes the problem. I might switch around a little bit more if I see it again because it is clearly a software glitch and not a hardware problem. Hardly Framework’s fault but instead more of a thing that happens occasionally when you run bleeding edge stuff. I’ll sort it out.

Console

Having a small but high DPI screen and trying to use the console with its default (tiny) font is next to impossible, at least with my aging eyes, so I spent a few minutes figuring out how to use setfont and then how to invoke dpkg-reconfigure console-setup.
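
If you want to do the same, a hedged sketch of the two approaches: a quick one-off test, or making the change permanent via console-setup (the exact font face and size depend on what your system has installed):

setfont -d

doubles the current console font, while editing /etc/default/console-setup to something like FONTFACE="Terminus" and FONTSIZE="16x32" and then running

sudo dpkg-reconfigure console-setup

makes the larger font stick across reboots.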

I find it a little curious that the Debian installer doesn’t offer any easy option to do this already at install time.

A message

A few days after I had received my laptop I received a package via FedEx, and as I opened it I found this lovely note and some presents from Framework!

I know some of my followers tagged and mentioned Framework during the crowdfunding campaign but I of course didn’t expect anything from that.

The thing that looks like a CD-R among the gifts is actually a mouse mat, slightly larger than a CD. The small packages are USB-C modules for the laptop.

This little message still holds and shows more appreciation than what I have received from most companies that ever used my Open Source. It’s not a high bar. I truly appreciate it – said entirely without sarcasm.

Impressions and Performance

Just to give you a small idea of the performance difference, I decided to compare a simple but common operation I do. Build curl. It basically requires three command lines:

autoreconf -fi

This invokes a series of tools to setup the build.

Sprout: 4.8 seconds

Old: 9.3 seconds

Diff: 1.9 times faster

configure --with-openssl

A long series of single-threaded tests of the environment. Lots of invocations of gcc to check for features, functions etc.

Sprout: 10.4 seconds

Old: 11.1 seconds

Diff: 1.1 times faster

make -sj

This invokes gcc and forks off lots of new processes. The old machine’s 4 threads vs the new 24 threads probably plays a role here.

Sprout: 8.9 seconds

Old: 60.6 seconds

Diff: 6.8 times faster

(My desktop PC does the same in under 4 seconds.)

Keyboard

This is not a full-time development machine for me. I have never been fully productive on a laptop and I don’t expect to be on this new one either. I don’t think a laptop keyboard exists that can satisfy me the way a proper one can.

The Framework one does not have dedicated page up/down keys for example. The keys still feel decently fine to press and I think I will adjust to the layout over time.

Stickers

I offered everyone who donated 200 USD or more for the laptop sticker space on my cover, but so far not a single one has reached out to make this a reality. To honor my promise I intend to wait a little while before I put my first stickers on it.

For reference this is what my old laptop looks like.

curl 8.15.0

Welcome to another curl release. A shorter cycle this time, so we did not have time to merge many changes: there is just one logged. See below.

This is the 269th release featuring 269 command line options.

Numbers

the 269th release
1 change
42 days (total: 9,980)
233 bugfixes (total: 12,282)
334 commits (total: 35,572)
0 new public libcurl function (total: 96)
0 new curl_easy_setopt() option (total: 308)
0 new curl command line option (total: 269)
57 contributors, 29 new (total: 3,460)
37 authors, 16 new (total: 1,392)
0 security fix (total: 167)

Change

Removed support for Secure Transport and BearSSL.

Bugfixes

We managed to yet again land over 230 documented bugfixes (5.5 per day!). Read about them in the full changelog. A set of them are discussed in the release video.

Death by a thousand slops

I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.

This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.

The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions), as we have averaged about two security report submissions per week. As of early July, only about 5% of the 2025 submissions had turned out to be genuine vulnerabilities. The valid rate has decreased significantly compared to previous years.

We have run the curl Bug Bounty since 2019 and I have previously considered it a success based on the amount of genuine and real security problems we have gotten reported and thus fixed through this program. 81 of them to be exact, with over 90,000 USD paid in awards.

End of the road?

While we are not going to do anything rushed or in panic immediately, there are reasons for us to consider changing the setup. Maybe we need to drop the monetary reward?

I want us to use the rest of the year 2025 to evaluate and think. The curl bounty program continues to run and we deal with everything as before while we ponder about what we can and should do to improve the situation. For the sanity of the curl security team members.

We need to reduce the amount of sand in the machine. We must do something to drastically reduce the temptation for users to submit low quality reports. Be it with AI or without AI.

The curl security team consists of seven team members. I encourage the others to also chime in to back me up (so that we act right in each case). Every report thus engages 3-4 persons. Perhaps for 30 minutes, sometimes up to an hour or three. Each.

I personally spend an insane amount of time on curl already, so wasting three hours still leaves time for other things. My fellow team members however are not full time on curl. They might only have three hours per week for curl. Not to mention the emotional toll it takes to deal with these mind-numbing stupidities.

Times eight the last week alone.

Reputation doesn’t help

On HackerOne the users get their reputation lowered when we close reports as not applicable. That is only really a mild “threat” to experienced HackerOne participants. For new users on the platform that is mostly a pointless exercise as they can just create a new account next week. Banning those users is similarly a rather toothless threat.

Besides, there seem to be so many of them that even if one goes away, there are a thousand more.

HackerOne

It is not super obvious to me exactly how HackerOne should change to help us combat this. It is however clear that we need them to do something. Offer us more tools and knobs to tweak, to save us from drowning. If we are to keep the program with them.

I have yet again reached out. We will just have to see where that takes us.

Possible routes forward

People mention charging a fee for the right to submit a security vulnerability (that could be paid back if the report turns out to be valid). That would probably slow them down significantly, sure, but it seems like a rather hostile approach for an Open Source project that aims to be as open and accessible as possible. Not to mention that we don’t have any current infrastructure set up for this – and neither does HackerOne. And managing money is painful.

Dropping the monetary reward part would make it much less interesting for the general populace to do random AI queries in desperate attempts to report something that could generate income. It of course also removes the traction for some professional and highly skilled security researchers, but maybe that is a hit we can/must take?

As a lot of these reporters seem to genuinely think they help out, apparently blatantly tricked by the marketing of the AI hype-machines, it is not certain that removing the money from the table is going to completely stop the flood. We need to be prepared for that as well. Let’s burn that bridge if we get to it.

The AI slop list

If you are still innocently unaware of what AI slop means in the context of security reports, I have collected a list of reports submitted to curl that help showcase the problem. Here’s a snapshot of the list from today:

  1. [Critical] Curl CVE-2023-38545 vulnerability code changes are disclosed on the internet. #2199174
  2. Buffer Overflow Vulnerability in WebSocket Handling #2298307
  3. Exploitable Format String Vulnerability in curl_mfprintf Function #2819666
  4. Buffer overflow in strcpy #2823554
  5. Buffer Overflow Vulnerability in strcpy() Leading to Remote Code Execution #2871792
  6. Buffer Overflow Risk in Curl_inet_ntop and inet_ntop4 #2887487
  7. bypass of this Fixed #2437131 [ Inadequate Protocol Restriction Enforcement in curl ] #2905552
  8. Hackers Attack Curl Vulnerability Accessing Sensitive Information #2912277
  9. (“possible”) UAF #2981245
  10. Path Traversal Vulnerability in curl via Unsanitized IPFS_PATH Environment Variable #3100073
  11. Buffer Overflow in curl MQTT Test Server (tests/server/mqttd.c) via Malicious CONNECT Packet #3101127
  12. Use of a Broken or Risky Cryptographic Algorithm (CWE-327) in libcurl #3116935
  13. Double Free Vulnerability in libcurl Cookie Management (cookie.c) #3117697
  14. HTTP/2 CONTINUATION Flood Vulnerability #3125820
  15. HTTP/3 Stream Dependency Cycle Exploit #3125832
  16. Memory Leak #3137657
  17. Memory Leak in libcurl via Location Header Handling (CWE-770) #3158093
  18. Stack-based Buffer Overflow in TELNET NEW_ENV Option Handling #3230082
  19. HTTP Proxy Bypass via CURLOPT_CUSTOMREQUEST Verb Tunneling #3231321
  20. Use-After-Free in OpenSSL Keylog Callback via SSL_get_ex_data() in libcurl #3242005
  21. HTTP Request Smuggling Vulnerability Analysis – cURL Security Report #3249936

How I do it

A while ago I received an email with this question.

I’ve been subscribed to your weekly newsletter for a while now, receiving your weekly updates every Friday. I’m writing because I admire your consistency, focus, and perseverance. I can’t help but wonder, with admiration, how you manage to do it.

Since this is a topic I receive questions about semi-regularly, I decided I would attempt to answer it. I have probably touched the subject in previous blog posts as well.

Work

Let me start out by defining what I consider my primary work to be. Or perhaps I should call it my mission because it goes way beyond just “work”. curl is irrevocably a huge part of me and my life.

  • I drive the curl project. Guide, develop, review, comment, admin, debug, merge, commit, support, assess security reports, lead, release, talk about it, inspire etc.
  • It does not necessarily mean that I do the most number of commits to curl every month. We have a set of very skilled and devoted committers that can do a lot without me.
  • I keep up with relevant Internet protocol developments and make sure to give feedback on what I think is good and bad, in particular from a small player’s/library’s view that is sometimes a bit different than the tech giants’ takes. This means participating actively in some IETF groups and keeping myself informed about what is happening in a number of other HTTP, web and browser oriented communities.
  • I keep up with related technologies and Open Source projects to understand how to navigate. I file issues, comments and pull requests with neighboring projects that we use – to strengthen them (and by association the curl + them combination) and to increase the chances that they will help us out in a similar fashion.
  • I use my position as lead developer of curl to blog and speak up about things I think need to be said, explained or giggled at. Be it stupid emails, bad uses of AI or inefficient security organizations. Ideally this occasionally helps other people and projects as well.

As a successful Open Source project I acknowledge and am aware that we (I mean curl) might get more attention than some others, and that we are used as or considered a “model” sometimes, making it even more important to do things right. From my language use in public to source code decisions. I try to live up to these expectations.

A part of my job is to make companies become paying customers so that I can afford working on curl – and once they have become customers I need to every now and then attend to support tickets from them. I can work full-time on curl thanks to my commercial customers.

Why

I have a strong sense of loyalty and commitment. When I join a project or a cause, I typically stick around and do my share of the job until it is finished.

I enjoy programming and software development – and I have done so ever since I first learned about programming as a teen in the mid 1980s. It is fun to create something that is useful and that can be used by others, but I also like solving the puzzles and challenges that come up in the process.

When the software project you work on never finishes, and is used by a crazy number of users, it gives you a sense of responsibility and pride. An even bigger incentive to make sure it actually works as intended. A desire to please the users. All the users.

Even after having reached many billions of installations there are still challenges to push the project further and harder on every possible front. Make it the best documented one. Make it an exemplary Open Source project. Make it newcomer friendly. Add more tests. Make sure not a single project in the world can claim they ship better security advisories. Work really hard on making it the most secure network library there is. While at the same time being welcoming and friendly to new contributors.

If there is any area where curl is not best-in-class, we should put in more work and improve curl in that area. While at the same time keeping it up and polishing it in all other aspects.

This is what drives me. This is what I want.

How

Getting top scores in every possible (imaginary and real) scorecard is accomplished through good old engineering. Do the job. Test. Iterate. Fail. Fix. Add tests. Do it again. Over and over.

On a normal work day I sit down at my desk at about 8 in the morning and start. I iterate over issues, pull-requests and the everyday curl maintenance. I post silly messages on Mastodon and I chat with friends on IRC.

I try to end my regular work days at around 18:00, but I may go longer or shorter some days depending on what I feel like or if it’s “floorball day”. (I leave early on Wednesdays to go play with friends.)

As I live in Sweden and have many North-American colleagues and customers, I have occasional evening meetings to deal with the nine hour time difference to their west coast.

At some time between 22:00 and 23:00 I sit down in front of my computer again for the evening shift. I continue working on issues, fix bugs and review pull-requests. At 1am I sleep.

That adds up to maybe 50-55 hours of work in a normal week: all work hours plus plenty of spare time. Because this is the passion of my life. It is my job and my hobby. Because I want to. I love it. It is not a setup and number of hours I ask nor expect anyone else to do.

I have worked like this since early 2019 when I started doing curl full-time.

Independent

One explanation for how this all works is that curl is independent. Truly independent in most senses of the word.

No companies control or own curl in any way. Yet every company is welcome to participate.

curl is not part of any foundation or umbrella organization. We range free.

curl is extremely liberally licensed.

On motivation

One of the hardest questions to answer is how I can keep up the motivation and still consider this fun and exciting after all this time.

First, let’s not pretend that it always feels fun and thrilling. Sometimes it actually feels a bit boring and done. There is no shame in that and it is not strange or odd. When such periods come, I might do less curl for a while, or maybe find a corner of the project that is not important but could be fun to poke at. I have learned that these periods come and go.

What motivates me is that everyone runs and uses curl and libcurl. Positive feedback is fuel that can keep me running for a long time. Making curl a leading tool that shoulders and carries a lot of digital infrastructure makes me feel a purpose. When there is a bug reported, I can feel almost hurt and sometimes ashamed and I need to get it fixed. curl is supposed to be one of the best in all categories and if it ever is not, I will work hard on making it so.

The social setup around Open Source and a success such as curl also makes it fun. I work full-time from home without geographical proximity to any other curl regulars. But I don’t need that. We can joke around in chat, we help each other in issues and pull-requests and we can do bad puns in video meetings. Unlike “normal” job colleagues, these people are here because they want, believe in and strive for something similar to what I do – and they are spread out across the world.

I feel that I work for the curl users. The users doing internet transfers. As opposed to any big company, tech giants or anyone else who could otherwise dictate direction. It’s highly motivational to be working for the users. Sure, the entities paying my wages are primarily a few huge companies, but the setup still makes this work and I still feel and act on the users’ behalf. Those companies have exactly no say in how we run the Open Source project.

I take criticism about curl personally because I have put so much of myself into it and as the BDFL for decades a lot of what it is today is ultimately the result of my choices.

Leading the troops

I try to lead by example. I still do a fair amount of development, debugging and architectural design in the project. I follow and perform the same steps I expect from the other contributors.

I’m a believer in lowering friction in the project, but still not relaxing the requirements: we still need tests and documentation for everything we do. Entering the project should be easy and welcoming, even if it can be hard to actually get a change merged.

I believe in reducing bureaucracy and formalities so that we can focus on development and getting things done. We don’t have or need manager levels or titles. We have things to do, people who do things and we have people that can review, comment and eventually merge those improvements. If there are fewer people participating during periods, then things just get done slower.

I invite discussions and participation and I encourage the same approach from my fellow contributors. When we want to do things, change things, improve things, we should inform and invite the greater community for comments, feedback and help. Oftentimes they may not have a lot to say, but we should still continue to ask for their opinions.

I use a direct and non-complicated communication style. I want to be friendly, I don’t curse, I focus on speaking about their suggestions and not the person. To the point rather than convoluted. When insulted, I try to not engage (which I sometimes fail at). But I also want to have a zero tolerance policy against bad behavior and abuse to enable the positive spirit to remain.

Like everyone else, I sometimes fail in my ambitions of how I want to behave and lead the project. Hopefully that happens less and less frequently over time.

I give this my everything

I think most of what has made curl good and successful has happened because I and the team around curl have worked hard on making it so. It has not happened by chance or by accident.

Family

I have a loving and understanding family. My wife and I celebrated our 25th anniversary earlier this year. My two kids are grown-ups now – both were born after I started working on curl.

Sponsor my laptop!

I need to get myself a new laptop. My existing one is from 2017 and was already then not the most powerful one.

It recently started to shut itself off when running on battery, and during the two most recent curl up meetings it has proven rather sluggish and unable to save a live camera recording while also streaming it, without stuttering or other problems.

A framework laptop

I plan to get a new 13″ one from Framework, and a semi-beefy one from there runs at about 2,500 USD. I’m looking at roughly this configuration.

The curl fund pays

For the first time ever, the curl fund is going to help pay for this. The curl fund is all donations and sponsorships gathered. Money we only spend to improve curl and curl related activities. All my machines I have ever used to develop curl on up until now have been paid for by me personally.

You can help!

For this special occasion, we have created a small “crowd-source” like effort. You can help sponsor this device for me and we have a special little collectors pool for it here:

https://opencollective.com/curl/contribute/laptop-90642

If we get more than 1,000 USD donated to this, I can upgrade my laptop config. More CPU, more memory, more storage perhaps.

If this effort gets less than 1,000 USD donated, then I will stick with the original “base” setup.

For everyone who donates 200 USD (or more) I offer space on the laptop cover where the donor decides exactly what I should put there (in terms of stickers etc).

This program will run for a week as a start.

A developer’s device

I do my main curl development on a desktop PC in my home office. I use my laptop primarily when away, on travels and on vacations. I bring it to talks (10-15 a year) where I typically talk about curl or curl adjacent topics. I occasionally use it to live-stream with, like from our annual curl up meetings.

I have decided to go with Framework because I like their concept and I hear good things about them.

I run Linux. I prefer Debian. That is what I intend to use on this one as well.

The fund

We have a few gracious sponsors of the curl project that donate money to us on a regular basis. Their money is what pays for this if nobody else wants to participate.

Updates

It took nine minutes after I published this to get the first 200 USD donation.

We reached 1,000 USD already within the first hour. I am looking at upgrading the setup. Starting probably with the CPU.

Ninety minutes in, “A friendly golem” changed the game when they donated 1,750 USD in one go and we are at a total of 3,770 USD! I think I can max out the config now.

July 13, 17:21: The order has been placed. Said to be delivered within 5 days.

Thanks

Thank you everyone for chipping in. Truly amazing. I will keep you posted on the thing and follow up with some photos and a review later.

Cybersecurity Risk Assessment Request

With the new EU legislation, the Cyber Resilience Act (CRA), there are new responsibilities and requirements put on manufacturers of digital products and services in Europe.

Going forward, these manufacturers must be able to know and report the exact contents of their software, called a Software Bill of Materials (SBOM), and they are required to check for vulnerabilities in those components etc. This implies that they need to have full control and knowledge of all the Open Source components in their stack. (See the CRA Hub for a good resource on CRA for Open Source people.)

As a maintainer of a software component that is widely used, I have been curious to see how this will materialize for us. Today I got a first glimpse of what I can only guess will happen more going forward.

This multi-billion-dollar Fortune 500 company, with which I have no contract and have had no previous communication, sent me this email asking for a lot of curl information. A slightly redacted version is shown below.

Now that my curiosity has been satisfied a little bit, I instead await the future and long to see how many more of these will come. And how they will respond to my replies.

CRA_request_counter = 1;

The request

Hello,

I hope this message finds you well.

As part of our ongoing efforts to comply with the EU Cyber Resilience Act (CRA), we are currently conducting a cybersecurity risk assessment of third-party software vendors whose products or components are integrated into our systems.

To support this initiative, we kindly request your input on the following questions related to your software product “libcurl” with version 7.87.0. Please provide your responses directly in the table below and do reply to all added in this email,

Additional Information:

  • Purpose: This security assessment is part of our due diligence and regulatory compliance obligations under the EU CRA.
  • Confidentiality: All information shared will be treated as confidential and used solely for the purpose of this assessment.
  • Contact: Should you have any questions or need further clarification, please feel free to reach out by replying directly to this email.

We kindly request your response by Friday, July 25, 2025, to ensure timely completion of our assessment process. Thank you for your cooperation and continued partnership in maintaining a secure and resilient digital environment.

My reaction and response

I am not their vendor without a more formal relationship established, and I am certainly not going to spend a few hours of my spare time gathering a lot of information for them for free, for their commercial benefit.

They “kindly” want me to respond within two weeks.

Their use of double quotes around “libcurl” feels odd, and they claim to be using a version that is now more than 2.5 years old.

Most if not all of the information they are asking for is already publicly and openly accessible and readable. I suspect they want the information in this more formal way to make it appear more reliable or trustworthy perhaps. Or maybe it just follows their processes better.

(It also reminded me of the NASA emails.)

I responded like this

Hello,

I will be happy to answer all curl and libcurl related questions and assist you with this inquiry as soon as we have a support contract setup. You can get the process started immediately by emailing support@wolfssl.com.

Thanks, I’m looking forward to future cooperation.

/ Daniel

I will let you know if they take me up on my offer.

The screenshot

This snapshot of how it looked also shows the actual nine-question form table.

Why the company name is redacted

I’m looking forward to eventually doing business with this company, so I don’t want them to feel targeted or “ridiculed”. I also suspect that there will be many more emails like this going forward. The company name is not the interesting part of this story.