Tag Archives: cURL and libcurl

curl tells the %time

The curl command line option --write-out, or just -w for short, is a powerful and flexible way to extract information from transfers done with the tool. It was introduced back in version 6.5 in early 2000.

This option takes an argument in which you can add “variables” that hold all sorts of different information: timing, speed, sizes, header contents and more.

Some users have outright started to use the -w output for logging the performed transfer, and for that purpose a little detail was missing: the ability to output the time the transfer completed. After all, most log lines feature the time in one way or another.

Starting in curl 8.16.0, curl -w knows the time and lets the user specify exactly how to format it in the output. Suddenly this output is a whole notch better for logging purposes.

%time{format}

Since log files also tend to use different time formats, I decided I did not want to pick a fixed format and risk that a huge portion of users would consider it the wrong one, so I went straight for strftime formatting: the user controls the time format using standard %-flags: different ones for year, month, day, hour, minute, second and so on.

Some details to note:

  1. The time is provided in UTC, not local time.
  2. It also supports %f for microseconds, an extension beyond the POSIX set that is already used by Python and possibly others.
  3. %z and %Z (for time zone offset and name) had to be fixed to become portable and identical across systems and platforms.

Example

Here’s a sample command line outputting the time the transfer completed:

curl -w "%time{%a %b %e %Y - %H:%M:%S.%f} %{response_code}\n" https://example.com -o saved

When I ran this command line it gave me this output:

Wed Aug 6 2025 - 12:43:45.160382 200
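
Since the {format} part is plain strftime syntax, a log-friendly ISO 8601 style timestamp works just as well. A small sketch (the %{url_effective} variable is a pre-existing -w variable, not part of this new feature; combine whichever variables your logging needs):

# one log-style line per transfer: UTC timestamp, effective URL, response code
curl -w "%time{%Y-%m-%dT%H:%M:%SZ} %{url_effective} %{response_code}\n" https://example.com -o saved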

Credits

The clock image by Alexa from Pixabay

Follow redirects but differently

In the early days of curl development we (I suppose it was me personally but let’s stick with we so that I can pretend the blame is not all on me) made the possibly slightly unwise decision to make the -X option change the HTTP method for all requests in a curl transfer, even when -L is used – and independently of what HTTP responses the server returns.

That decision made me write blog posts and inform people all over about how using -X superfluously causes problems.

In curl 8.16.0, we introduce a different take on the problem, or better yet, a solution really: a new command line option that offers a modified behavior. Possibly the behavior people thought curl had all along.

Just learn to use --follow going forward (in curl 8.16.0 and later).

This option works fine together with -X and will adjust the method in any subsequent requests according to the HTTP response code.

A long time ago I wrote separately about the different HTTP response codes and what they mean in terms of changing (or not) the method.
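
As a rough sketch of what that means in practice: with --follow, a 303 response is supposed to switch the next request over to GET, while a 307 or 308 keeps the original method. Something like this (using the same example.com stand-in as elsewhere):

# POST that follows redirects; the method used for the next request depends
# on the response code: 303 switches to GET, 307/308 keep the POST
curl --follow -d "name=daniel" https://example.com/submit -o saved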

--location remains the same

Since we cannot break existing users and scripts, we had to leave the existing --location option working exactly like it always has. This option is mutually exclusive with --follow, so only pick one.

QUERY friendly

Part of the reason for this new option is to make sure curl can follow redirects correctly for other HTTP methods than the good old fashioned GET, POST and PUT. We already see PATCH used to some extent, but perhaps more important is the work on the spec for the new QUERY method. It is a flavor of POST, but with a few minor yet important differences. Possibly enough for me to write a separate blog post about, but right now we can stick to it being “like POST”, in particular from an HTTP client’s perspective.

We want curl to be able to do a “post” but with a QUERY method and still follow redirects correctly. The -L and -X combination does not support this.

curl can be made to issue a proper QUERY request and follow redirects correctly like this:

curl -X QUERY --follow -d sendthis https://example.com/

Thank you for flying curl!

c10kday

From March 20, 1998, when the first curl release was published, to this day, August 5, 2025, is exactly 10,000 days. We call it the curl-10000-day. Or just c10kday. c ten K day.

We want to celebrate this occasion by collecting and sharing stories. Your stories about curl. Your favorite memories. When you used curl for the first time. When curl saved your situation. When curl rescued your lost puppy. What curl has meant or perhaps still means to you, your work, your business, or your life. We want to favor and prioritize the good, the fun, the nostalgic and the emotional stories but it is of course up to your discretion.

We have created this thread in curl’s GitHub Discussion section for this purpose, so please go there and submit your story or read what others have shared.

https://github.com/curl/curl/discussions/17930

In the curl factory this day is nothing special. We keep hammering out new features and bugfixes – just like we always do.

Thanks for flying curl.

Even happier eyeballs

Back in 2012, the Happy Eyeballs RFC 6555 was published. It details how a sensible Internet client should proceed when connecting to a server. It basically goes like this:

Give the IPv6 attempt priority, then with a delay start a separate IPv4 connection in parallel with the IPv6 one; then use the connection that succeeds first.

We also tend to call this connection racing, since it is like a competition where multiple attempts compete trying to “win”.

In a normal name resolve, a client may get a list of several IPv4 and IPv6 addresses to try. curl would then pick the first, try that, and if that fails, move on to the next and so on. If a whole address family fails, it would start on the other one immediately.

v2

The updated Happy Eyeballs v2 RFC 8305 was published in 2017. It focused a lot on having the client start its connect attempts earlier in the process, preferably as DNS responses arrive instead of waiting for the whole hostname resolve phase to end first.

This is complicated for lots of clients because there is no established (POSIX) API for doing such name resolves, so for a portable network library like libcurl we could not follow most of the new advice in this spec.

QUIC added a dimension

In 2012 we did not have QUIC on the map, and not practically in 2017 either, so those eyeballing specs did not include such details.

Even later, HTTP/3 was documented to require an alt-svc response header before a client would know if the server speaks HTTP/3 and only then could it attempt QUIC with it and expect it to work.

While curl supports the alt-svc response approach, that information arrives far too late for many users – and it is especially damning for a command line tool as opposed to a browser, since lots of users just do single-shot transfers and then never get to use HTTP/3 at all.

To combat that drawback, we decided that adding QUIC to the mix should add a separate connection competition. To allow faster and earlier use of QUIC.

Start the QUIC-IPv6 connect attempt first, then in order the QUIC-IPv4, TCP-IPv6 and finally the TCP-IPv4.

To users, this typically makes for a very smooth operation where the client just automatically connects to the “best” alternative without having to make any particular choices or decisions. It gracefully and transparently adapts to situations where IPv6 or UDP have problems etc.
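
On the command line, this QUIC + TCP race is what you get when you ask for HTTP/3 with fallback allowed. A sketch, assuming a curl build with an HTTP/3 backend enabled:

# try HTTP/3 over QUIC, but keep the TCP attempts as fallback in the race
curl --http3 https://curl.se/ -o saved

# QUIC only, no TCP fallback attempts
curl --http3-only https://curl.se/ -o saved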

v3 and HTTPS-RR

With the introduction of HTTPS-RR, there are also more ways to get IP addresses for hosts, and there is now ongoing work within the IETF on a v3 of the Happy Eyeballs specification, detailing exactly how everything should be put together.

We are of course following that development to monitor and learn how we should adapt and polish curl connects further.

Parallel more

While waiting on the happy eyeballs v3 work to land in a document, Stefan Eissing took it upon himself to further tweak how curl behaves in an attempt to find the best connection even faster. Using more parallelism.

Starting in curl 8.16.0, curl will start the first IPv6 and the first IPv4 connection attempts exactly like before, but then, if none of them has connected after 200 milliseconds, curl continues to the next address in the list and starts another attempt in parallel.

An illustration

Let’s take a look at an example of curl connecting to a server, let’s call the server curl.se. The numbers below show the order of things after the DNS response has been received.

  1. The first connect attempt starts using the first IPv6 address from the DNS response. If it has not succeeded within 200 milliseconds…
  2. The second attempt starts in parallel, using the first IPv4 address. Now two connect attempts are running and if neither have succeeded in yet another 200 milliseconds…
  3. A second IPv6 connect attempt is started in parallel, using the second IPv6 address from the list. Now three connect attempts are racing. If none of them succeeds in another 200 milliseconds…
  4. A second IPv4 connect attempt starts, using the second IPv4 address from the list.
  5. … and this can continue, if this is a really slow or problematic server with many IP addresses.

Of course, each failed attempt makes curl immediately move to the next address in the list until all alternatives have been tested.

Add QUIC to that

The illustration above can be seen as “per transport”. If only TCP is wanted, there is a single such race going on. With potentially quite a few parallel attempts in the worst cases.

If instead HTTP/3 or a lower HTTP version is wanted, curl first starts a QUIC connection race as illustrated and then after 200 milliseconds it starts a similar TCP race in parallel to the QUIC one! Both run at the same time, the first one to connect wins.

A little table to illustrate when the different connect attempts start when either QUIC or TCP is okay:

Time (ms)  QUIC                     TCP
0          Start IPv6 connect
200        Start IPv4 connect       Start IPv6 connect
400        Start 2nd IPv6 connect   Start IPv4 connect
600        Start 2nd IPv4 connect   Start 2nd IPv6 connect
800        Start 3rd IPv6 connect   Start 2nd IPv4 connect
So when trying to connect to a server that does not respond and that has more than two IPv6 and IPv4 addresses each, there could be nine connection attempts running after 801 milliseconds.

200 ms can be changed

The 200 milliseconds delay mentioned above is just the default. It can easily be changed, both with the library and with the command line tool.
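
For the command line tool, the knob is the long-standing --happy-eyeballs-timeout-ms option (libcurl offers a matching setopt). A quick sketch:

# start the next connect attempt after 100 ms instead of the default 200 ms
curl --happy-eyeballs-timeout-ms 100 https://curl.se/ -o saved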

Credit

Image by Ilona Ilyés from Pixabay (heavily cropped)

curl adds parallel host control

I’m convinced a lot of people have not yet figured out that curl has supported parallel downloads for six years by now.

Provided a practically unlimited number of URLs, curl can be asked to get them in a parallel fashion. It then makes sure to keep N transfers alive for as long as there are N or more transfers left to complete, where N is a custom number but 50 by default.

Concurrently transferring data from potentially a large number of different hosts can drastically shorten transfer times and who doesn’t prefer to complete their download job sooner rather than later?
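
As a quick illustration (the URL range here is made up): --parallel (or -Z) switches on parallel transfers and --parallel-max adjusts that default cap of 50:

# fetch 100 files with at most 10 transfers in flight at any given time
curl --parallel --parallel-max 10 "https://example.com/[1-100].html" --remote-name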

Limit connections per host

At times however, you may want to do a lot of transfers, and you want to do them in parallel for speed, but maybe you prefer to limit how many connections curl uses for each hostname among all the URLs?

This per-host limit is a feature libcurl has offered applications for a long time and now the time has come for curl tool users to also enjoy its powers.

Per host should perhaps be called per origin if we spoke web lingo, because it rather limits the number of connections to the same protocol + hostname + port number. We call that host here for simplicity.

To set a cap on how many connections curl is allowed to use for each specific server, use --parallel-max-host [number].

For example, if you want to download ten million images from this site, but never use more than six connections:

curl --parallel --parallel-max-host 6 https://example.com/[1-10000000].jpg --remote-name

Connections

Pay special attention to the exact term: this limits the number of connections used to each host. If the transfers are done using HTTP/2 or HTTP/3, they can be done using many streams over just one or a few connections so doing 50 or 200 transfers in parallel should still be perfectly doable even with a limited number of connections. Not so much with HTTP/1.
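
To illustrate that difference: forcing HTTP/1.1 means every transfer needs its own connection, so a per-host connection cap effectively also caps the per-host parallelism (again, the URL range is made up):

# with HTTP/1.1, at most 4 of these transfers can run against this host at once
curl --http1.1 --parallel --parallel-max-host 4 "https://example.com/[1-1000].jpg" --remote-name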

Ships in 8.16.0

This command line option will become available in the pending curl version 8.16.0 release.

option parsing in curl

We have always had a custom command line option parser in curl. It is fast and uncomplicated and gives us the perfect mix of flexibility and function. It also saves us from importing or using code with another license.

In one aspect it has behaved slightly differently from many other command line parsers: the way it accepts arguments to long options.

Long options are options provided using a name that starts with two dashes and is usually longer than a single letter. Example:

curl --user-agent "curl/2000" https://example.com/

The example above tells curl to use the user agent curl/2000 in the transfer. The argument to the --user-agent option is separated from the option with a space.

When instead using the short version of the same option, the argument can be specified with a space in between or not:

curl -A curl/2000 https://example.com/

or

curl -Acurl/2000 https://example.com/

What about equals sign?

A common paradigm and syntax style for accepting long options in command line tools is the “equals sign” approach. When you provide an argument to a long option, you append an equals sign followed by the argument to the option, with no space. Like this:

curl --user-agent="curl/2000" https://example.com/

This example uses double quotes but they are of course not necessary if there is no space or similar in the argument.

Bridging the gap

To make life easier for future users, curl now also supports this latter style – starting in curl 8.16.0. With this syntax supported, curl accepts a more commonly used style and should therefore cause fewer surprises for users. To make it easier to write curl command lines.

I emphasize that this change is an improvement for future users, because I really don’t think it is a good idea for most users to switch to this syntax immediately. This is of course because all the older curl versions that are still widely used around the world do not support it.

I think it is better if we wait a year or two until we start using this option style in curl documentation and example command lines. To give time for users to upgrade to a version that has support for it.

Output nothing with --out-null

Downloading data from a remote URL is probably the single most common operation people do with curl.

Often, users then add various additional options to the command line to extract information from that transfer but may also decide that the actually fetched data is not interesting. Sometimes they do not get accurate meta-data unless the full download is made, sometimes they run performance measurements where the actual content is not important, and so on. Users sometimes have reasons for not saving their downloads.

They do downloads where the actual downloaded content is tossed away. On GitHub alone, we can find almost one million command lines doing such curl invocations.

curl of course offers multiple ways to discard the downloaded data, but maybe the most straightforward way is to write the contents to a null device such as /dev/null on *nix systems or NUL: on Windows. Like this:

curl https://example.com/ --output /dev/null

or using the short option

curl https://example.com/ -o /dev/null

In many cases we can accomplish the same thing with a shell redirect – which also redirects multiple URLs at once:

curl https://example.com/ >/dev/null

Improving nothing

The command line above is perfectly fine and has worked for decades. It does however have two drawbacks:

  1. Lack of portability. curl runs on most operating systems and most options and operations work identically, to the degree that you can often copy command lines back and forth between machines without thinking much about it. Outputting data to /dev/null is however not terribly portable and trying that operation on Windows for example will cause the command line to fail.
  2. Performance. It may not look like much, but completely avoiding writing the data instead of writing it to /dev/null makes benchmarks show a measurable improvement. So if you don’t want the data, why not do the operation faster rather than slower?

The shell redirect approach has the same drawbacks.

Usage

The new option is used as follows. Note that it takes one --out-null occurrence per URL you want to discard.

curl https://example.com/ --out-null

This allows you to, for example, discard one download and save the other:

curl https://example.com/ --out-null https://example.net/ --output save-data

Coming in 8.16.0

This command line option debuts in curl 8.16.0, shipping in September 2025.

Credits

Stefan Eissing brought this option and also benchmarked it.

Carving out msh3

I hope that by now most readers of my blog have understood that curl, and libcurl specifically, is an architecture with a transfer core and a set of different backends plugged in. Backends powered by different third party libraries.

The exact set of backends used in a particular build is decided by the person that builds curl.

Which backends curl supports varies over time (and platform). We like adding support for more backends and letting users decide which ones to use, as this allows us to approach it with a survival-of-the-fittest attitude. What does not work in the long run, or what is not actually used, we can deprecate and remove again. Ideally this helps us select the better ones for the future.

HTTP/3

For the last few years curl has supported the HTTP/3 protocol powered by one out of four different backends:

  1. nghttp3 + ngtcp2
  2. quiche
  3. nghttp3 + OpenSSL-QUIC
  4. msh3 + msquic

(All except the first listed combination are still labeled experimental.)

Dropping msh3

In this quartet, there is one option that stands out a little: the last one. The msh3 powered backend was brought in and merged into the curl source tree a few years ago with the hope that this solution would end up being a good choice for people on Windows, since it is the only choice in the list that can be built to use the native Windows TLS solution SChannel.

Unfortunately, this work was never finalized. It never worked correctly in curl, and the API and architecture of msh3 make it quirky and cumbersome to integrate – and quite frankly we cannot seem to drum up any interest from people to test or work on improving this backend.

As we have three other working backends, all of which can also build and run on Windows, we see no benefit in dragging msh3 along. In fact, there is a cost in maintenance, in keeping the build working and the tests running etc, that we would rather avoid. In particular since we seem to be doing it for virtually no gain.

I want to stress that I don’t think there is anything wrong with msh3 nor its underlying msquic library. They simply have not been made to work properly in curl.

Updated backend map

The msh3 backend has now been removed from git in the current master branch and this is how the HTTP/3 offering will look in the coming curl 8.16.0 release.

Death by a thousand slops

I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.

This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.

The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions), as we have averaged about two security report submissions per week. As of early July, about 5% of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid rate has decreased significantly compared to previous years.

We have run the curl Bug Bounty since 2019 and I have previously considered it a success based on the amount of genuine and real security problems we have gotten reported and thus fixed through this program. 81 of them to be exact, with over 90,000 USD paid in awards.

End of the road?

While we are not going to do anything rushed or in panic immediately, there are reasons for us to consider changing the setup. Maybe we need to drop the monetary reward?

I want us to use the rest of the year 2025 to evaluate and think. The curl bounty program continues to run and we deal with everything as before while we ponder what we can and should do to improve the situation. For the sanity of the curl security team members.

We need to reduce the amount of sand in the machine. We must do something to drastically reduce the temptation for users to submit low quality reports. Be it with AI or without AI.

The curl security team consists of seven team members. I encourage the others to also chime in to back me up (so that we act right in each case). Every report thus engages 3-4 persons. Perhaps for 30 minutes, sometimes up to an hour or three. Each.

I personally spend an insane amount of time on curl already, so wasting three hours still leaves time for other things. My fellows however are not full-time on curl. They might only have three hours per week for curl. Not to mention the emotional toll it takes to deal with these mind-numbing stupidities.

Times eight the last week alone.

Reputation doesn’t help

On HackerOne the users get their reputation lowered when we close reports as not applicable. That is only really a mild “threat” to experienced HackerOne participants. For new users on the platform that is mostly a pointless exercise as they can just create a new account next week. Banning those users is similarly a rather toothless threat.

Besides, there seem to be so many of them that even if one goes away, there are a thousand more.

HackerOne

It is not super obvious to me exactly how HackerOne should change to help us combat this. It is however clear that we need them to do something. Offer us more tools and knobs to tweak, to save us from drowning. If we are to keep the program with them.

I have yet again reached out. We will just have to see where that takes us.

Possible routes forward

People mention charging a fee for the right to submit a security vulnerability (that could be paid back if the report turns out to be proper). That would probably slow them down significantly, sure, but it seems like a rather hostile approach for an Open Source project that aims to be as open and available as possible. Not to mention that we don’t have any infrastructure currently set up for this – and neither does HackerOne. And managing money is painful.

Dropping the monetary reward part would make it much less interesting for the general populace to do random AI queries in desperate attempts to report something that could generate income. It of course also removes the traction for some professional and highly skilled security researchers, but maybe that is a hit we can/must take?

As a lot of these reporters seem to genuinely think they help out, apparently blatantly tricked by the marketing of the AI hype-machines, it is not certain that removing the money from the table is going to completely stop the flood. We need to be prepared for that as well. Let’s burn that bridge if we get to it.

The AI slop list

If you are still innocently unaware of what AI slop means in the context of security reports, I have collected a list of reports submitted to curl that help showcase the problem. Here’s a snapshot of the list from today:

  1. [Critical] Curl CVE-2023-38545 vulnerability code changes are disclosed on the internet. #2199174
  2. Buffer Overflow Vulnerability in WebSocket Handling #2298307
  3. Exploitable Format String Vulnerability in curl_mfprintf Function #2819666
  4. Buffer overflow in strcpy #2823554
  5. Buffer Overflow Vulnerability in strcpy() Leading to Remote Code Execution #2871792
  6. Buffer Overflow Risk in Curl_inet_ntop and inet_ntop4 #2887487
  7. bypass of this Fixed #2437131 [ Inadequate Protocol Restriction Enforcement in curl ] #2905552
  8. Hackers Attack Curl Vulnerability Accessing Sensitive Information #2912277
  9. (“possible”) UAF #2981245
  10. Path Traversal Vulnerability in curl via Unsanitized IPFS_PATH Environment Variable #3100073
  11. Buffer Overflow in curl MQTT Test Server (tests/server/mqttd.c) via Malicious CONNECT Packet #3101127
  12. Use of a Broken or Risky Cryptographic Algorithm (CWE-327) in libcurl #3116935
  13. Double Free Vulnerability in libcurl Cookie Management (cookie.c) #3117697
  14. HTTP/2 CONTINUATION Flood Vulnerability #3125820
  15. HTTP/3 Stream Dependency Cycle Exploit #3125832
  16. Memory Leak #3137657
  17. Memory Leak in libcurl via Location Header Handling (CWE-770) #3158093
  18. Stack-based Buffer Overflow in TELNET NEW_ENV Option Handling #3230082
  19. HTTP Proxy Bypass via CURLOPT_CUSTOMREQUEST Verb Tunneling #3231321
  20. Use-After-Free in OpenSSL Keylog Callback via SSL_get_ex_data() in libcurl #3242005
  21. HTTP Request Smuggling Vulnerability Analysis – cURL Security Report #3249936

How I do it

A while ago I received an email with this question.

I’ve been subscribed to your weekly newsletter for a while now, receiving your weekly updates every Friday. I’m writing because I admire your consistency, focus, and perseverance. I can’t help but wonder, with admiration, how you manage to do it.

Since this is a topic I receive questions about semi-regularly, I decided I would attempt to answer it. I have probably touched the subject in previous blog posts as well.

Work

Let me start out by defining what I consider my primary work to be. Or perhaps I should call it my mission because it goes way beyond just “work”. curl is irrevocably a huge part of me and my life.

  • I drive the curl project. Guide, develop, review, comment, admin, debug, merge, commit, support, assess security reports, lead, release, talk about it, inspire etc.
  • It does not necessarily mean that I do the most commits to curl every month. We have a set of very skilled and devoted committers that can do a lot without me.
  • I keep up with relevant Internet protocol developments and make sure to give feedback on what I think is good and bad, in particular from a small player’s/library’s view that is sometimes a bit different than the tech giants’ takes. This means participating actively in some IETF groups and keeping myself informed about what is happening in a number of other HTTP, web and browser oriented communities.
  • I keep up with related technologies and Open Source projects to understand how to navigate. I contribute feedback, issues, comments and pull requests to neighboring projects that we use – to strengthen them (and by association the combination of curl + them) and to increase the chances that they will help us out in similar fashion.
  • I use my position as lead developer of curl to blog and speak up about things I think need to be said, explained or giggled at. Be it stupid emails, bad uses of AI or inefficient security organizations. Ideally this occasionally helps other people and projects as well.

As a successful Open Source project I acknowledge and am aware that we (I mean curl) might get more attention than some others, and that we are used as or considered a “model” sometimes, making it even more important to do things right. From my language use in public to source code decisions. I try to live up to these expectations.

A part of my job is to make companies become paying customers so that I can afford working on curl – and once they have become customers I need to every now and then attend to support tickets from them. I can work full-time on curl thanks to my commercial customers.

Why

I have a strong sense of loyalty and commitment. When I join a project or a cause, I typically stick around and do my share of the job until it is finished.

I enjoy programming and software development – and I have done so ever since I first learned about programming as a teen in the mid 1980s. It is fun to create something that is useful and that can be used by others, but I also like solving the puzzles and challenges that come up in the process.

When the software project you work on never finishes, and is used by a crazy amount of users it gives you a sense of responsibility and pride. An even bigger incentive to make sure it actually works as intended. A desire to please the users. All the users.

Even after having reached many billions of installations there are still challenges to push the project further and harder on every possible front. Make it the best documented one. Make it an exemplary Open Source project. Make it newcomer friendly. Add more tests. Make sure not a single project in the world can claim they ship better security advisories. Work really hard on making it the most secure network library there is. While at the same time being welcoming and friendly to new contributors.

If there is any area that curl is not best-in-class, we should put in more work and improve curl in that area. While at the same time keep up and polish it in all other aspects.

This is what drives me. This is what I want.

How

Getting top scores in every possible (imaginary and real) scorecard is accomplished through good old engineering. Do the job. Test. Iterate. Fail. Fix. Add tests. Do it again. Over and over.

On a normal work day I sit down at my desk at about 8 in the morning and start. I iterate over issues, pull-requests and the everyday curl maintenance. I post silly messages on Mastodon and I chat with friends on IRC.

I try to end my regular work days at around 18:00, but I may go longer or shorter some days depending on what I feel like or if it’s “floorball day”. (I leave early on Wednesdays to go play with friends.)

As I live in Sweden and have many North-American colleagues and customers, I have occasional evening meetings to deal with the nine hour time difference to their west coast.

At some time between 22:00 and 23:00 I sit down in front of my computer again for the evening shift. I continue working on issues, fix bugs and review pull-requests. At 1am I sleep.

It makes me do maybe 50-55 hours of work per normal week. I call it all work hours plus plenty of spare time. Because this is the passion of my life. It is my job and my hobby. Because I want to. I love it. It is not a setup and number of hours I ask nor expect anyone else to do.

I have worked like this since early 2019 when I started doing curl full-time.

Independent

One explanation for how this all works is that curl is independent. Truly independent in most senses of the word.

No companies control or own curl in any way. Yet every company is welcome to participate.

curl is not part of any foundation or umbrella organization. We range free.

curl is extremely liberally licensed.

On motivation

One of the hardest questions to answer is how I can keep up the motivation and still consider this fun and exciting after all this time.

First let’s not pretend that it always feels fun and thrilling. Sometimes it actually feels a bit boring and done. There is no shame in that and it is not strange or odd. Such periods come and go. When they come, I might do less curl for a while. Or maybe find a corner of the project that is not important but could be fun to poke at. I have learned that these periods come and go.

What motivates me is that everyone runs and uses curl and libcurl. Positive feedback is fuel that can keep me running for a long time. Making curl a leading tool that shoulders and carries a lot of digital infrastructure makes me feel a purpose. When there is a bug reported, I can feel almost hurt and sometimes ashamed and I need to get it fixed. curl is supposed to be one of the best in all categories and if it ever is not, I will work hard on making it so.

The social setup around Open Source and a success such as curl also makes it fun. I work full-time from home without geographical proximity to any other curl regulars. But I don’t need that. We can joke around in chat, we help each other in issues and pull-requests and we can do bad puns in video meetings. Contrary to “normal” job colleagues, these people are here because they want to be, because they believe in and strive for something similar to what I do – and they are spread out across the world.

I feel that I work for the curl users. The users doing internet transfers. As opposed to any big company, tech giants or anyone else who could otherwise dictate direction. It’s highly motivational to be working for the users. Sure, the entities paying my wages are primarily a few huge companies, but the setup still makes this work and I still feel and act on the users’ behalf. Those companies have exactly no say in how we run the Open Source project.

I take criticism about curl personally because I have put so much of myself into it and as the BDFL for decades a lot of what it is today is ultimately the result of my choices.

Leading the troops

I try to lead by example. I still do a fair amount of development, debugging and architectural design in the project. I follow and perform the same steps I expect from the other contributors.

I’m a believer in lowering friction in the project, but still not relaxing the requirements: we still need tests and documentation for everything we do. Entering the project should be easy and welcoming, even if it can be hard to actually get a change merged.

I believe in reducing bureaucracy and formalities so that we can focus on development and getting things done. We don’t have or need manager levels or titles. We have things to do, people who do things and we have people that can review, comment and eventually merge those improvements. If there are fewer people participating during periods, then things just get done slower.

I invite discussions and participation and I encourage the same approach from my fellow contributors. When we want to do things, change things, improve things, we should inform and invite the greater community for comments, feedback and help. Oftentimes they may not have a lot to say, but we should still continue to ask for their opinions.

I use a direct and uncomplicated communication style. I want to be friendly, I don’t curse, and I focus on the suggestion rather than the person. To the point rather than convoluted. When insulted, I try not to engage (which I sometimes fail at). But I also want to have a zero tolerance policy against bad behavior and abuse, to let the positive spirit remain.

Like everyone else, I sometimes fail in my ambitions of how I want to behave and lead the project. Hopefully that happens less and less frequently over time.

I give this my everything

I think most of what has made curl good and successful has happened because I and the team around curl have worked hard on making it so. It has not happened by chance or by accident.

Family

I have a loving and understanding family. My wife and I celebrated our 25th anniversary earlier this year. My two kids are grown-ups now – both were born after I started working on curl.