Category Archives: cURL and libcurl

curl and/or libcurl related

c10kday

From March 20, 1998, when the first curl release was published, to this day, August 5, 2025, is exactly 10,000 days. We call it the curl-10000-day. Or just c10kday. c ten K day.

We want to celebrate this occasion by collecting and sharing stories. Your stories about curl. Your favorite memories. When you used curl for the first time. When curl saved your situation. When curl rescued your lost puppy. What curl has meant or perhaps still means to you, your work, your business, or your life. We want to favor and prioritize the good, the fun, the nostalgic and the emotional stories but it is of course up to your discretion.

We have created this thread in curl’s GitHub Discussion section for this purpose, so please go there and submit your story or read what others have shared.

https://github.com/curl/curl/discussions/17930

In the curl factory this day is nothing special. We keep hammering out new features and bugfixes – just like we always do.

Thanks for flying curl.

Even happier eyeballs

Back in 2012, the Happy Eyeballs RFC 6555 was published. It details how a sensible Internet client should proceed when connecting to a server. It basically goes like this:

Give the IPv6 attempt priority, then with a delay start a separate IPv4 connection in parallel with the IPv6 one; then use the connection that succeeds first.

We also tend to call this connection racing, since it is like a competition where multiple attempts compete trying to “win”.

In a normal name resolve, a client may get a list of several IPv4 and IPv6 addresses to try. curl would then pick the first, try that, and if that fails, move on to the next, etc. If a whole address family fails, it would start on the other immediately.

v2

The updated Happy Eyeballs v2 RFC 8305 was published in 2017. It focused a lot on having the client start its connections earlier in the process, preferably as DNS responses arrive rather than waiting for the whole hostname resolve phase to end first.

This is complicated for lots of clients because there is no established (POSIX) API for doing such name resolves, so for a portable network library like libcurl we could not follow most of the new advice in this spec.

QUIC added a dimension

In 2012 we did not have QUIC on the map, and not practically in 2017 either, so those eyeballing specs did not include such details.

Even later, HTTP/3 was documented to require an alt-svc response header before a client would know that the server speaks HTTP/3, and only then could it attempt QUIC with it and expect it to work.

While curl supports the alt-svc response approach, that information arrives far too late for many users – and it is especially damning for a command line tool as opposed to a browser, since lots of users just do single shot transfers and then never get to use HTTP/3 at all.
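As a small sketch of how that approach plays out in practice (the cache file name here is just an example), curl can persist Alt-Svc information between invocations with its --alt-svc option, and only a later request that finds HTTP/3 advertised in the cache can actually use it:

# first invocation uses HTTP/1.1 or HTTP/2 and records any Alt-Svc header it receives
curl --alt-svc altsvc-cache.txt https://example.com/

# a later invocation can switch to HTTP/3, if the server advertised it earlier
curl --alt-svc altsvc-cache.txt https://example.com/

The first, cold invocation never gets to use HTTP/3 at all.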

To combat that drawback, we decided that adding QUIC to the mix should add a separate connection competition, to allow faster and earlier use of QUIC.

Start the QUIC-IPv6 connect attempt first, then in order the QUIC-IPv4, TCP-IPv6 and finally the TCP-IPv4.

To users, this typically makes for a very smooth operation where the client just automatically connects to the “best” alternative without them having to make any particular choices or decisions. It gracefully and transparently adapts to situations where IPv6 or UDP have problems etc.

v3 and HTTPS-RR

With the introduction of HTTPS-RR, there are also more ways introduced to get IP addresses for hosts and there is now ongoing work within the IETF on making a v3 of the Happy Eyeballs specification detailing how exactly everything should be put together.

We are of course following that development to monitor and learn how we should adapt and polish curl connects further.

Parallel more

While waiting on the happy eyeballs v3 work to land in a document, Stefan Eissing took it upon himself to further tweak how curl behaves in an attempt to find the best connection even faster. Using more parallelism.

Starting in curl 8.16.0, curl will start the first IPv6 and the first IPv4 connection attempts exactly like before, but then, if none of them have connected after 200 milliseconds curl continues to the next address in the list and starts another attempt in parallel.

An illustration

Let’s take a look at an example of curl connecting to a server, let’s call the server curl.se. The numbers below show the order of things after the DNS response has been received.

  1. The first connect attempt starts using the first IPv6 address from the DNS response. If it has not succeeded within 200 milliseconds…
  2. The second attempt starts in parallel, using the first IPv4 address. Now two connect attempts are running and if neither have succeeded in yet another 200 milliseconds…
  3. A second IPv6 connect attempt is started in parallel, using the second IPv6 address from the list. Now three connect attempts are racing. If none of them succeeds in another 200 milliseconds…
  4. A second IPv4 race starts, using the second IPv4 address from the list.
  5. … and this can continue, if this is a really slow or problematic server with many IP addresses.

Of course, each failed attempt makes curl immediately move to the next address in the list until all alternatives have been tested.

Add QUIC to that

The illustration above can be seen as “per transport”. If only TCP is wanted, there is a single such race going on. With potentially quite a few parallel attempts in the worst cases.

If instead HTTP/3 or a lower HTTP version is wanted, curl first starts a QUIC connection race as illustrated and then after 200 milliseconds it starts a similar TCP race in parallel to the QUIC one! Both run at the same time, the first one to connect wins.
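As an illustration rather than a recommendation, this combined race is what you get when explicitly asking for HTTP/3 with the existing --http3 option, while --http3-only skips the TCP race entirely:

# race QUIC and TCP connect attempts, use whichever wins
curl --http3 https://example.com/

# attempt QUIC/HTTP/3 only, with no TCP fallback
curl --http3-only https://example.com/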

A little table to illustrate when the different connect attempts start, in the case where either QUIC or TCP is okay:

Time (ms)   QUIC                     TCP
0           Start IPv6 connect       -
200         Start IPv4 connect       Start IPv6 connect
400         Start 2nd IPv6 connect   Start IPv4 connect
600         Start 2nd IPv4 connect   Start 2nd IPv6 connect
800         Start 3rd IPv6 connect   Start 2nd IPv4 connect

So in the case of trying to connect to a server that does not respond and that has more than two IPv6 and more than two IPv4 addresses, there could be nine connection attempts running after 801 milliseconds.

200 ms can be changed

The 200 milliseconds delay mentioned above is just the default time. It can easily be changed both in the library and with the command line tool.
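On the command line that delay is controlled with the existing --happy-eyeballs-timeout-ms option (libcurl applications use CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS). A sketch with an arbitrarily chosen 100 millisecond delay:

# wait only 100 ms before starting the next parallel connect attempt
curl --happy-eyeballs-timeout-ms 100 https://example.com/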

Credit

Image by Ilona Ilyés from Pixabay (heavily cropped)

curl adds parallel host control

I’m convinced a lot of people have not yet figured out that curl has supported parallel downloads for six years by now.

Provided a practically unlimited number of URLs, curl can be asked to get them in a parallel fashion. It then makes sure to keep N transfers alive for as long as there are N or more transfers left to complete, where N is a custom number but 50 by default.

Concurrently transferring data from potentially a large number of different hosts can drastically shorten transfer times and who doesn’t prefer to complete their download job sooner rather than later?

Limit connections per host

At times however, you may want to do a lot of transfers, and you want to do them in parallel for speed, but perhaps you prefer to limit how many connections curl should use for each hostname among all the URLs?

This per-host limit is a feature libcurl has offered applications for a long time and now the time has come for curl tool users to also enjoy its powers.

Per host should perhaps be called per origin if we spoke web lingo, because it rather limits the number of connections to the same protocol + hostname + port number. We call that host here for simplicity.

To set a cap on how many connections curl is allowed to use for each specific server use --parallel-max-host [number].

For example, if you want to download ten million images from this site, but never use more than six connections:

curl --parallel --parallel-max-host 6 https://example.com/[1-10000000].jpg --remote-name

Connections

Pay special attention to the exact term: this limits the number of connections used to each host. If the transfers are done using HTTP/2 or HTTP/3, they can be done using many streams over just one or a few connections so doing 50 or 200 transfers in parallel should still be perfectly doable even with a limited number of connections. Not so much with HTTP/1.
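As an illustration (the numbers are arbitrary), the per-host cap can be combined with the total parallelism cap and HTTP/2, letting many transfers share the few connections that are allowed:

# up to 100 transfers in flight, but never more than 2 connections per host
curl --parallel --parallel-max 100 --parallel-max-host 2 --http2 https://example.com/[1-1000].jpg --remote-name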

Ships in 8.16.0

This command line option will become available in the pending curl version 8.16.0 release.

option parsing in curl

We have always had a custom command line option parser in curl. It is fast and uncomplicated and gives us the perfect mix of flexibility and function. It also saves us from importing or using code with another license.

In one aspect it has behaved slightly differently than many other command line parsers: the way it accepts arguments to long options.

Long options are the options provided using a name that starts with two dashes and is usually longer than a single letter. Example:

curl --user-agent "curl/2000" https://example.com/

The example above tells curl to use the user agent curl/2000 in the transfer. The argument to the --user-agent option is provided separated by a space.

When instead using the short version of the same option, the argument can be specified with a space in between or not:

curl -A curl/2000 https://example.com/

or

curl -Acurl/2000 https://example.com/

What about equals sign?

A common paradigm and syntax style for accepting long options in command line tools is the “equals sign” approach. When you provide an argument to a long option you do this by appending an equals sign followed by the argument to the option; with no space. Like this:

curl --user-agent="curl/2000" https://example.com/

This example uses double quotes but they are of course not necessary if there is no space or similar in the argument.

Bridging the gap

To make life easier for future users, curl now also supports this latter style – starting in curl 8.16.0. With this syntax supported, curl accepts a more commonly used style and should therefore cause fewer surprises for users, making it easier to write curl command lines.

I emphasize that this change is an improvement for future users, because I really don’t think it is a good idea for most users to switch to this syntax immediately. This is of course because all the older curl versions that are still widely used around the world do not support it.

I think it is better if we wait a year or two until we start using this option style in curl documentation and example command lines. To give time for users to upgrade to a version that has support for it.

Output nothing with --out-null

Downloading data from a remote URL is probably the single most common operation people do with curl.

Often, users then add various additional options to the command line to extract information from that transfer but may also decide that the actually fetched data is not interesting. Sometimes they don’t get the accurate meta-data if the full download is not made, sometimes they run performance measurements where the actual content is not important, and so on. Users sometimes have reasons for not saving their downloads.

They do downloads where the actual downloaded content is tossed away. On GitHub alone, we can find almost one million command lines doing such curl invocations.

curl of course offers multiple ways to discard the downloaded data, but maybe the most straightforward way is to write the contents to a null device such as /dev/null on *nix systems or NUL: on Windows. Like this:

curl https://example.com/ --output /dev/null

or using the short option

curl https://example.com/ -o /dev/null

In many cases we can accomplish the same thing with a shell redirect – which also redirects multiple URLs at once:

curl https://example.com/ >/dev/null

Improving nothing

The command line above is perfectly fine, works well and has been doing so for decades. It does however have two drawbacks:

  1. Lack of portability. curl runs on most operating systems and most options and operations work identically, to the degree that you can often copy command lines back and forth between machines without thinking much about it. Outputting data to /dev/null is however not terribly portable and trying that operation on Windows for example will cause the command line to fail.
  2. Performance. It may not look like much, but completely avoiding writing the data instead of writing it to /dev/null makes benchmarks show a measurable improvement. So if you don’t want the data, why not do the operation faster rather than slower?

The shell redirect approach has the same drawbacks.

Usage

The new option is used as follows. It needs one --out-null occurrence per URL whose output should be discarded.

curl https://example.com/ --out-null

This allows you to for example send one to null and save the other:

curl https://example.com/ --out-null https://example.net/ --output save-data

Coming in 8.16.0

This command line option debuts in curl 8.16.0, shipping in September 2025.

Credits

Stefan Eissing brought this option. He also benchmarked it.

Carving out msh3

I hope that by now most readers of my blog have understood that curl, and libcurl specifically, is an architecture with a transfer core with a set of different backends plugged in. Backends powered by different third party libraries.

The exact set of backends used in a particular build is decided by the person that builds curl.

Which backends curl supports varies over time (and per platform). We like adding support for more backends and letting users decide which ones to use, as this allows us to approach it with a survival-of-the-fittest attitude. What does not work in the long run, or what isn’t actually used, we can deprecate and remove again. Ideally this helps us select the better ones for the future.

HTTP/3

For the last few years curl has supported the HTTP/3 protocol powered by one out of four different backends:

  1. nghttp3 + ngtcp2
  2. quiche
  3. nghttp3 + OpenSSL-QUIC
  4. msh3 + msquic

(All combinations except the first listed one are still labeled experimental.)

Dropping msh3

In this quartet, there is one option that stands out a little: the last one. The msh3 powered backend was brought in and merged into the curl source tree a few years ago with the hope that it would end up being a good choice for people on Windows, since it is the only choice in the list that can be built to use the native Windows TLS solution SChannel.

Unfortunately, this work was never finalized. It never worked correctly in curl, and the API and architecture of msh3 make it quirky and cumbersome to integrate – and quite frankly we cannot seem to drum up any interest from people to test or to work on improving this backend.

As we have three other working backends, all of which also can build and run on Windows, we see no benefit in dragging msh3 along. In fact, there is a cost in maintenance, in keeping the build working, in keeping the tests running etc. that we would rather avoid. In particular as we seem to be doing it for virtually no gain.

I want to stress that I don’t think there is anything wrong with msh3 nor its underlying msquic library. They simply have not been made to work properly in curl.

Updated backend map

The msh3 backend has now been removed from git in the current master branch, and this is how the HTTP/3 offering will look in the coming curl 8.16.0 release.

Hello Sprout

Sprout is the name of my new machine that just arrived. The crowd-funded laptop. Since this beauty is graciously sponsored by a large crowd of people I felt I should share a little bit of its journey and entry into my life.

First I needed a name for it, and since it is small and is meant to grow with me a bit, I think Sprout feels apt.

The crowd-funding

Starting the initiative on a Saturday afternoon might not have been the most clever way to get the widest possible reach, but it seems it did not matter. We reached the goal of 3,500 USD within 90 minutes and people have kept on donating even after that; the counter is now at 7,000 USD. Amazing.

As mentioned: all surplus ends up in the general curl fund and will be used solely and exclusively to cover expenses that benefit and favor curl and its development. That is a promise. The curl fund is also completely open and transparent so everyone who wants to can in fact monitor our finances to verify this.

Specs

I decided to go with a Framework laptop because I like and want to support their concept of modular and upgradable laptops. After the overwhelming funding round, I decided to go with the top of the line AMD CPU alternative they offer, 96GB of RAM and 4TB of storage. This should make the laptop last a while I think.

  • CPU: AMD Ryzen AI 9 HX 370. Up to 5.1 GHz. 12 cores, 24 threads.
  • Graphics (integrated): AMD Radeon 890M. Up to 2.9GHz. 16 Graphics Cores
  • Wifi: AMD RZ717 Wi-Fi 7
  • Display: 13.5″ 2880×1920 120Hz matte display (3:2 ratio)
  • Memory: DDR5-5600 – 96GB (2 x 48GB)
  • Storage: WD_BLACK SN850X NVMe – M.2 2280 – 4TB
  • Laptop Bezel: Framework Laptop 13 Bezel – Black
  • Keyboard: Swedish/Finnish (2nd Gen)
  • Dimensions: 15.85mm x 296.63mm x 228.98mm
  • Weight: 1.3 Kg

Outputs

The laptop has four slots available for ports. I have USB-C, USB-A, HDMI and external Ethernet modules. I bought a few more than four, because I don’t know which exact setup I will prefer and they are interchangeable so I can change them according to the situation I’m in.

Dimensions compared to the old

My old laptop was a Lenovo T470S 14″.

Dimensions: 18.8 mm x 331 mm x 226.8 mm
Weight 1.32 kg

So the new one is 3 mm thinner, 3 cm narrower, pretty much the same depth (+2 mm) and pretty much the same weight.

Assembling

Ordered without Windows installed (of course), this thing arrived like an IKEA flat-pack and there was some assembly required. The necessary screwdriver comes included and I could complete the task in under ten minutes. Not at all complicated.

Linux

I noticed two different Linux distributions offered as “easy installs” with guides from Framework, but as none of them were Debian I opted to take the more complicated route.

Debian

I downloaded a DVD iso image for Debian testing, copied it onto a USB stick and booted up Sprout with it. The installation went like a breeze and it detected the Wifi networking just fine.

Once the system came up for real without the USB stick, I edited the necessary files and took it up to current Debian Unstable over wifi with no problems.
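For the curious, the usual recipe amounts to something like this, assuming the apt sources still point at the testing suite (a sketch, not necessarily the exact edits):

# point apt at unstable instead of testing, then upgrade everything
sudo sed -i 's/testing/unstable/g' /etc/apt/sources.list
sudo apt update && sudo apt full-upgrade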

Initial glitches

I experienced some glitches (X or the keyboard or something would stop accepting input after 5-15 minutes of use), which I first thought was due to an older Linux kernel as I had friends tell me that I might need 6.15+ for proper hibernation support and Debian unstable only has a 6.12 one just now. I switched to the Debian experimental kernel (6.16-rc7) but the issue remained. Hm?

I then remembered I hadn’t upgraded the laptop BIOS to its latest version yet, and after having invoked

fwupdmgr refresh --force
fwupdmgr get-updates
fwupdmgr update

and done a reboot, it first seemed to have fixed the problems but I was wrong. Is it X11 related? I have now switched my desktop to Plasma/Wayland to see if it fixes the problem. I might switch around a little bit more if I see it again because it is clearly a software glitch and not a hardware problem. Hardly Framework’s fault but instead more of a thing that happens occasionally when you run bleeding edge stuff. I’ll sort it out.

Console

Having a small but high DPI screen and trying to use the console with its default (tiny) font is next to impossible, at least with my aging eyes, so I spent a few minutes figuring out how to use setfont and then to invoke dpkg-reconfigure console-setup.
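A minimal sketch of what that amounts to, assuming the Terminus console fonts are installed (exact font file names vary between systems):

# try a larger console font for the current session
setfont /usr/share/consolefonts/Uni2-TerminusBold32x16.psf.gz

# pick a bigger font permanently via the interactive configuration
sudo dpkg-reconfigure console-setup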

I find it a little curious that the Debian installer doesn’t offer any easy option to do this already at install time.

A message

A few days after I had received my laptop I received a package via FedEx, and as I opened it I found this lovely note and some presents from Framework!

I know some of my followers tagged and mentioned Framework during the crowdfunding campaign but I of course didn’t expect anything from that.

The thing that looks like a CD-R among the gifts is actually a mouse mat, slightly larger than a CD. The small packages are USB-C modules for the laptop.

This little message still holds and shows more appreciation than I have received from most companies that ever used my Open Source. It’s not a high bar. I truly appreciate it – said entirely without sarcasm.

Impressions and Performance

Just to give you a small idea of the performance difference, I decided to compare a simple but common operation I do. Build curl. It basically requires three command lines:

autoreconf -fi

This invokes a series of tools to setup the build.

Sprout: 4.8 seconds

Old: 9.3 seconds

Diff: 1.9 times faster

configure --with-openssl

A long series of single-threaded tests of the environment. Lots of invokes of gcc to check for features, functions etc.

Sprout: 10.4 seconds

Old: 11.1 seconds

Diff: 1.1 times faster

make -sj

This invokes gcc and forks off lots of new processes. The old machine’s 4 threads vs the new 24 threads probably plays a role here.

Sprout: 8.9 seconds

Old: 60.6 seconds

Diff: 6.8 times faster

(My desktop PC does the same in under 4 seconds.)

Keyboard

This is not a full-time development machine for me and I have never been fully productive on a laptop and I don’t expect to be on this new one either. I don’t think a laptop keyboard exists that can satisfy me the way a proper one can.

The Framework one does not have dedicated page up/down keys for example. The keys still feel decently fine to press and I think I will adjust to the layout over time.

Stickers

I offered everyone who donated 200 USD or more for the laptop sticker space on my cover, but so far not a single one has reached out to make this reality. To honor my promise I intend to wait a little while before I put my first stickers on it.

For reference this is what my old laptop looks like.

curl 8.15.0

Welcome to another curl release. A shorter cycle this time so we did not have time to merge many changes: there is just one logged. See below.

This is the 269th release featuring 269 command line options.

Release presentation

Numbers

the 269th release
1 change
42 days (total: 9,980)
233 bugfixes (total: 12,282)
334 commits (total: 35,572)
0 new public libcurl function (total: 96)
0 new curl_easy_setopt() option (total: 308)
0 new curl command line option (total: 269)
57 contributors, 29 new (total: 3,460)
37 authors, 16 new (total: 1,392)
0 security fix (total: 167)

Change

Removed support for Secure Transport and BearSSL.

Bugfixes

We managed to yet again land over 230 documented bugfixes (5.5 per day!). Read about them in the full changelog. A set of them are discussed in the release video.

Death by a thousand slops

I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.

This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.

The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions), as we have averaged about two security report submissions per week. As of early July, only about 5% of the submissions made in 2025 had turned out to be genuine vulnerabilities. The valid rate has decreased significantly compared to previous years.

We have run the curl Bug Bounty since 2019 and I have previously considered it a success based on the amount of genuine and real security problems we have gotten reported and thus fixed through this program. 81 of them to be exact, with over 90,000 USD paid in awards.

End of the road?

While we are not going to do anything rushed or in panic immediately, there are reasons for us to consider changing the setup. Maybe we need to drop the monetary reward?

I want us to use the rest of the year 2025 to evaluate and think. The curl bounty program continues to run and we deal with everything as before while we ponder about what we can and should do to improve the situation. For the sanity of the curl security team members.

We need to reduce the amount of sand in the machine. We must do something to drastically reduce the temptation for users to submit low quality reports. Be it with AI or without AI.

The curl security team consists of seven team members. I encourage the others to also chime in to back me up (so that we act right in each case). Every report thus engages 3-4 persons. Perhaps for 30 minutes, sometimes up to an hour or three. Each.

I personally spend an insane amount of time on curl already, so wasting three hours still leaves time for other things. My fellows however are not full time on curl. They might only have three hours per week for curl. Not to mention the emotional toll it takes to deal with these mind-numbing stupidities.

Times eight the last week alone.

Reputation doesn’t help

On HackerOne the users get their reputation lowered when we close reports as not applicable. That is only really a mild “threat” to experienced HackerOne participants. For new users on the platform that is mostly a pointless exercise as they can just create a new account next week. Banning those users is similarly a rather toothless threat.

Besides, there seem to be so many of them that even if one goes away, there are a thousand more.

HackerOne

It is not super obvious to me exactly how HackerOne should change to help us combat this. It is however clear that we need them to do something. Offer us more tools and knobs to tweak, to save us from drowning. If we are to keep the program with them.

I have yet again reached out. We will just have to see where that takes us.

Possible routes forward

People mention charging a fee for the right to submit a security vulnerability (that could be paid back if the report turns out to be a proper one). That would probably slow them down significantly, sure, but it seems like a rather hostile way for an Open Source project that aims to be as open and available as possible. Not to mention that we don’t currently have any infrastructure set up for this – and neither does HackerOne. And managing money is painful.

Dropping the monetary reward part would make it much less interesting for the general populace to do random AI queries in desperate attempts to report something that could generate income. It of course also removes the traction for some professional and highly skilled security researchers, but maybe that is a hit we can/must take?

As a lot of these reporters seem to genuinely think they help out, apparently blatantly tricked by the marketing of the AI hype-machines, it is not certain that removing the money from the table is going to completely stop the flood. We need to be prepared for that as well. Let’s burn that bridge if we get to it.

The AI slop list

If you are still innocently unaware of what AI slop means in the context of security reports, I have collected a list of a number of reports submitted to curl that help showcase the problem. Here’s a snapshot of the list from today:

  1. [Critical] Curl CVE-2023-38545 vulnerability code changes are disclosed on the internet. #2199174
  2. Buffer Overflow Vulnerability in WebSocket Handling #2298307
  3. Exploitable Format String Vulnerability in curl_mfprintf Function #2819666
  4. Buffer overflow in strcpy #2823554
  5. Buffer Overflow Vulnerability in strcpy() Leading to Remote Code Execution #2871792
  6. Buffer Overflow Risk in Curl_inet_ntop and inet_ntop4 #2887487
  7. bypass of this Fixed #2437131 [ Inadequate Protocol Restriction Enforcement in curl ] #2905552
  8. Hackers Attack Curl Vulnerability Accessing Sensitive Information #2912277
  9. (“possible”) UAF #2981245
  10. Path Traversal Vulnerability in curl via Unsanitized IPFS_PATH Environment Variable #3100073
  11. Buffer Overflow in curl MQTT Test Server (tests/server/mqttd.c) via Malicious CONNECT Packet #3101127
  12. Use of a Broken or Risky Cryptographic Algorithm (CWE-327) in libcurl #3116935
  13. Double Free Vulnerability in libcurl Cookie Management (cookie.c) #3117697
  14. HTTP/2 CONTINUATION Flood Vulnerability #3125820
  15. HTTP/3 Stream Dependency Cycle Exploit #3125832
  16. Memory Leak #3137657
  17. Memory Leak in libcurl via Location Header Handling (CWE-770) #3158093
  18. Stack-based Buffer Overflow in TELNET NEW_ENV Option Handling #3230082
  19. HTTP Proxy Bypass via CURLOPT_CUSTOMREQUEST Verb Tunneling #3231321
  20. Use-After-Free in OpenSSL Keylog Callback via SSL_get_ex_data() in libcurl #3242005
  21. HTTP Request Smuggling Vulnerability Analysis – cURL Security Report #3249936

How I do it

A while ago I received an email with this question.

I’ve been subscribed to your weekly newsletter for a while now, receiving your weekly updates every Friday. I’m writing because I admire your consistency, focus, and perseverance. I can’t help but wonder, with admiration, how you manage to do it.

Since this is a topic I receive questions about semi-regularly, I decided I would attempt to answer it. I have probably touched the subject in previous blog posts as well.

Work

Let me start out by defining what I consider my primary work to be. Or perhaps I should call it my mission because it goes way beyond just “work”. curl is irrevocably a huge part of me and my life.

  • I drive the curl project. Guide, develop, review, comment, admin, debug, merge, commit, support, assess security reports, lead, release, talk about it, inspire etc.
  • That does not necessarily mean that I make the most commits to curl every month. We have a set of very skilled and devoted committers who can do a lot without me.
  • I keep up with relevant Internet protocol developments and make sure to give feedback on what I think is good and bad, in particular from a small player’s/library’s view that is sometimes a bit different than the tech giants’ takes. This means participating actively in some IETF groups and keeping myself informed about what is happening in a number of other HTTP, web and browser oriented communities.
  • I keep up with related technologies and Open Source projects to understand how to navigate. I give feedback, file issues, comment on and send pull requests to neighboring projects that we use – to strengthen them (and by association the combination curl + them) and to increase the chances that they will help us out in similar fashion.
  • I use my position as lead developer of curl to blog and speak up about things I think need to be said, explained or giggled at. Be it stupid emails, bad uses of AI or inefficient security organizations. Ideally this occasionally helps other people and projects as well.

As a successful Open Source project I acknowledge and am aware that we (I mean curl) might get more attention than some others, and that we are used as or considered a “model” sometimes, making it even more important to do things right. From my language use in public to source code decisions. I try to live up to these expectations.

A part of my job is to make companies become paying customers so that I can afford working on curl – and once they have become customers I need to every now and then attend to support tickets from them. I can work full-time on curl thanks to my commercial customers.

Why

I have a strong sense of loyalty and commitment. When I join a project or a cause, I typically stick around and do my share of the job until it is finished.

I enjoy programming and software development – and I have done so ever since I first learned about programming as a teen in the mid 1980s. It is fun to create something that is useful and that can be used by others, but I also like solving the puzzles and challenges that come up in the process.

When the software project you work on never finishes, and is used by a crazy number of users, it gives you a sense of responsibility and pride. An even bigger incentive to make sure it actually works as intended. A desire to please the users. All the users.

Even after having reached many billions of installations there are still challenges to push the project further and harder on every possible front. Make it the best documented one. Make it an exemplary Open Source project. Make it newcomer friendly. Add more tests. Make sure not a single project in the world can claim they ship better security advisories. Work really hard on making it the most secure network library there is. While at the same time being welcoming and friendly to new contributors.

If there is any area that curl is not best-in-class, we should put in more work and improve curl in that area. While at the same time keep up and polish it in all other aspects.

This is what drives me. This is what I want.

How

Getting top scores in every possible (imaginary and real) scorecard is accomplished through good old engineering. Do the job. Test. Iterate. Fail. Fix. Add tests. Do it again. Over and over.

A normal work day I sit down at my desk at about 8 in the morning and start. I iterate over issues, pull-requests and the everyday curl maintenance. I post silly messages on Mastodon and I chat with friends on IRC.

I try to end my regular work days at around 18:00, but I may go longer or shorter some days depending on what I feel like or if it’s “floorball day”. (I leave early on Wednesdays to go play with friends.)

As I live in Sweden and have many North-American colleagues and customers, I have occasional evening meetings to deal with the nine hour time difference to their west coast.

At some time between 22:00 and 23:00 I sit down in front of my computer again for the evening shift. I continue working on issues, fix bugs and review pull-requests. At 1am I sleep.

It makes me do maybe 50-55 hours of work per normal week. I call it all work hours plus plenty of spare time. Because this is the passion of my life. It is my job and my hobby. Because I want to. I love it. It is not a setup and number of hours I ask nor expect anyone else to do.

I have worked like this since early 2019 when I started doing curl full-time.

Independent

One explanation for how this all works is that curl is independent. Truly independent in most senses of the word.

No companies control or own curl in any way. Yet every company is welcome to participate.

curl is not part of any foundation or umbrella organization. We range free.

curl is extremely liberally licensed.

On motivation

One of the hardest questions to answer is how I can keep up the motivation and still consider this fun and exciting after all this time.

First let’s not pretend that it always feels fun and thrilling. Sometimes it actually feels a bit boring and done. There is no shame in that and it is not strange or odd. Such periods come and go. When they come, I might do less curl for a while. Or maybe find a corner of the project that is not important but could be fun to poke at. I have learned that these periods come and go.

What motivates me is that everyone runs and uses curl and libcurl. Positive feedback is fuel that can keep me running for a long time. Making curl a leading tool that shoulders and carries a lot of digital infrastructure makes me feel a purpose. When there is a bug reported, I can feel almost hurt and sometimes ashamed and I need to get it fixed. curl is supposed to be one of the best in all categories and if it ever is not, I will work hard on making it so.

The social setup around Open Source and a success such as curl also makes it fun. I work full-time from home without geographical proximity to any other curl regulars. But I don’t need that. We can joke around in chat, we help each other in issues and pull-requests and we can do bad puns in video meetings. Contrary to “normal” job colleagues, these people are here because they want, believe and strive for something similar to me – and they are spread out across the world.

I feel that I work for the curl users. The users doing internet transfers. As opposed to any big company, tech giants or anyone else who could otherwise dictate direction. It is highly motivating to be working for the users. Sure, the entities paying my wages are primarily a few huge companies, but the setup still makes this work and I still feel and act on the users’ behalf. Those companies have exactly no say in how we run the Open Source project.

I take criticism about curl personally because I have put so much of myself into it and as the BDFL for decades a lot of what it is today is ultimately the result of my choices.

Leading the troops

I try to lead by example. I still do a fair amount of development, debugging and architectural design in the project. I follow and perform the same steps I expect from the other contributors.

I’m a believer in lowering friction in the project, but still not relaxing the requirements: we still need tests and documentation for everything we do. Entering the project should be easy and welcoming, even if it can be hard to actually get a change merged.

I believe in reducing bureaucracy and formalities so that we can focus on development and getting things done. We don’t have or need manager levels or titles. We have things to do, people who do things and we have people that can review, comment and eventually merge those improvements. If there are fewer people participating during periods, then things just get done slower.

I invite discussions and participation and I encourage the same approach from my fellow contributors. When we want to do things, change things, improve things, we should inform and invite the greater community for comments, feedback and help. Oftentimes they may not have a lot to say, but we should still continue to ask for their opinions.

I use a direct and non-complicated communication style. I want to be friendly, I don’t curse, I focus on speaking about their suggestions and not the person. To the point rather than convoluted. When insulted, I try to not engage (which I sometimes fail at). But I also want to have a zero tolerance policy against bad behavior and abuse to enable the positive spirit to remain.

Like everyone else, I sometimes fail in my ambitions of how I want to behave and lead the project. Hopefully that happens less and less frequently over time.

I give this my everything

I think most of what has made curl good and successful has happened because I and the team around curl have worked hard on making it so. It has not happened by chance or by accident.

Family

I have a loving and understanding family. My wife and I celebrated our 25th anniversary earlier this year. My two kids are grown-ups now – both were born after I started working on curl.