Category Archives: cURL and libcurl


changelog changes

On the curl website we of course list exactly which changes go into each and every single release we do. In recent years I have even gone back and made sure we provide this information for every single release ever done. At the moment that means 258 releases, listing over 10,000 bugfixes and almost 1,000 changes. From 1996 until today.

This is literally a wall of changes.

Since we keep doing somewhere between 150 and 250 bugfixes per release, and we do a new release every eight weeks, the page with all these changes keeps growing quite fast.

Right now, the HTML of this page is at 1.1 megabytes.

Use case

Most typically, I think the use case for users visiting the changelog is to view the changes that were done in one specific curl release. Possibly checking out a few different ones. Very few users actually want tens of thousands of lines of text to scroll through. I believe.

Enter single release changelogs

To make sure that people can read the changes for a single release only, and to reduce the amount of data a user needs to download in order to view those single release changes, I worked on a setup that generates separate individual changelog pages for every release. They are easy to bookmark, they load fast, they contain only information about the specific release and they make it easy to skip back and forth between past and future releases.

I deployed these changes today and if you go to https://curl.se/ch/ now, you will see the changelog for the most recent release only.

The all changes changelog remains

The changelog showing everything will remain and is still an option to browse. I personally use it at times when I want to control-f and look for a change that I know was done in some previous curl version, but I cannot remember exactly which one. This all-changes page remains only a click away if you would rather view that one instead of the single-version pages.

Design

I am not a web developer and I am not a web designer. I know just enough HTML and CSS to be able to publish these things, but I do not do fancy and I am fully aware that I am not good at making “nice” or “attractive” designs. I focus on usable and practical.

As per curl website standards these pages are all static content using no JavaScript and only a few small images. Excellent for rendering fast and for caching well in the CDN.

Known vulnerabilities

I did not especially mention this before, but only a few days ago I added direct links from each version header to the page of known vulnerabilities for that specific version, and that link is of course now also present in the single version changelog pages. Next to the link that goes directly to the release presentation video.

Feedback?

If you find problems or have ideas on how to further improve the curl website, let us know!

curl 8.9.0

Numbers

the 258th release
11 changes
63 days (total: 9,623)
260 bugfixes (total: 10,531)
423 commits (total: 32,704)
0 new public libcurl functions (total: 94)
1 new curl_easy_setopt() option (total: 306)
4 new curl command line options (total: 263)
80 contributors, 38 new (total: 3,209)
47 authors, 16 new (total: 1,288)
2 security fixes (total: 157)

Download the new curl release from curl.se as always.

Release presentation

At 10:00 CEST (08:00 UTC) I will do a live-streamed release presentation of curl 8.9.0 on Twitch. Afterwards, this paragraph will be replaced with a link to the recorded video of it.

Security

Today we fix two security vulnerabilities and publish all details about them.

  • CVE-2024-6197: freeing stack buffer in utf8asn1str. (severity medium) libcurl’s ASN1 parser has this utf8asn1str() function used for parsing an ASN.1 UTF-8 string. It can detect an invalid field and return an error. Unfortunately, when doing so it also invokes free() on a 4-byte local stack buffer.
  • CVE-2024-6874: macidn punycode buffer overread. (severity low) libcurl’s URL API function curl_url_get() offers punycode conversions, to and from IDN. When asked to convert a name that is exactly 256 bytes long, libcurl ends up reading outside a stack-based buffer when built to use the macidn IDN backend. The conversion function then fills up the provided buffer exactly – but does not null terminate the string.
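
As a reminder of the API involved in that second CVE: this is roughly how an application asks the URL API for a punycode conversion of an IDN host name. A minimal sketch, where the host name and the absent error handling are just for the example:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURLU *u = curl_url();
      char *host = NULL;

      /* parse a URL with an IDN (non-ASCII) host name */
      curl_url_set(u, CURLUPART_URL, "https://räksmörgås.se/", 0);

      /* ask for the host name converted to punycode */
      if(curl_url_get(u, CURLUPART_HOST, &host, CURLU_PUNYCODE) == CURLUE_OK) {
        printf("punycode host: %s\n", host);
        curl_free(host);
      }
      curl_url_cleanup(u);
      return 0;
    }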

Changes

  • --ip-tos (IP Type of Service / Traffic Class). Lets users set this IP header field to a number.
  • --mptcp. Asks curl to enable the Multipath TCP option for this connection, which if the server also allows it may make the TCP connection go over multiple network paths.
  • --vlan-priority. Makes curl set the VLAN priority field for its IP traffic. This is typically a field used in the network layer below IP (think Ethernet), so it is not likely to survive through IP routers. A local network thing.
  • --keepalive-cnt (and CURLOPT_TCP_KEEPCNT). Specifies how many keepalive probes curl should send before it considers the connection to be dead. See the sketch after this list.
  • --write-out ‘%{num_retries}’ is a new variable for the info output that outputs the number of retries that were done for the previous transfer (when --retry was used).
  • gnutls now supports CA caching. For applications using libcurl, this can really speed up doing serial TLS connections.
  • mbedtls supports CURLOPT_CERTINFO. Returns certificate information to the application.
  • noproxy patterns need to be comma separated. Space separation is no longer enough.
  • Support binding a connection to both an interface and an IP, not just one of them.
  • The URL API added CURLU_NO_GUESS_SCHEME, to allow an application to figure out if the scheme for a previously parsed URL was set or guessed.
  • wolfssl now supports CA caching.
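
To show what the new keepalive option amounts to on the libcurl side: a minimal sketch using the new CURLOPT_TCP_KEEPCNT option together with the pre-existing TCP keepalive options. The URL and the numbers are arbitrary examples:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* enable TCP keepalive probing for this transfer */
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
        /* wait 120 seconds of idle time before the first probe */
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 120L);
        /* then probe every 60 seconds */
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 60L);
        /* new in 8.9.0: consider the connection dead after three
           unanswered probes */
        curl_easy_setopt(curl, CURLOPT_TCP_KEEPCNT, 3L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }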

Bugfixes

In no other release ever before in curl’s long history have there been this many bugfixes: 260. Some of my favorites are:

  • cmake: 26 separate bugfixes
  • configure: 10 separate bugfixes
  • --help category cleanup and list categories in --help
  • allow etag and content-disposition for 3xx reply
  • docs: countless fixes, polish and corrections
  • show name and keywords for failed tests in summary
  • avoid using GetAddrInfoExW with impersonation
  • URL encode the canonical path for aws-sigv4
  • fix DoH cleanup
  • fix memory leak and zero-length HTTPS RR crash in DoH
  • allow DoH transfers to override max connection limit
  • fix ß with AppleIDN
  • fix compilation with OpenSSL 1.x with md4 disabled
  • do a final progress update on connect failure
  • multi: fix pollset during RESOLVING phase
  • enable UDP GRO for QUIC
  • require at least OpenSSL 3.3 for QUIC
  • add shutdown support for HTTP/3 (QUIC)
  • fix CRLF conversion of input
  • fixed starttls for SMTP
  • change TCP keepalive from ms to seconds on DragonFly BSD
  • support TCP keepalive parameters on Solaris <11.4
  • shutdown TLS and TCP better
  • gnutls: pass in SNI name, not hostname when checking cert
  • gnutls: rectify the TLS version checks for QUIC
  • mbedtls v3.6.0 workarounds
  • several x509 asn.1 parser fixes

Next

Because the 8.9.0 release cycle took an extra week, the next one is going to be one week shorter. We do this by shortening the feature window to just two weeks this time, which might impact how many new features and changes we manage to merge.

We have a large amount of pull requests for changes already pending merge, waiting for the release window to open.

If all goes well, the next release will be named 8.10.0 and ship on September 11, 2024.

curl for QNX

Starting now, there are official curl releases for QNX hosted on the curl.se website. See https://curl.se/qnx.

QNX is a commercial real-time operating system and these curl release packages are produced as a result of a business arrangement.

The plan is to from now on ship curl tarballs for three different QNX versions, with each archive containing curl and libcurl built for several different targets. The curl for QNX releases should normally ship in sync with the regular releases, but they can also be updated out of sync if need be.

Every curl release from here on out will be packaged for QNX and made available.

curl and libcurl have been functional on QNX for decades – the first mention of curl and QNX together that I could find is from October 2000. curl releases for QNX were previously packaged and provided to end users by the QNX team themselves.

This move will allow QNX users to get the latest curl faster and help them keep up better with curl development. For features, bugfixes and perhaps most of all security.

We will also make sure that curl keeps building fine for QNX straight from the tarball.

The complete set of build and setup scripts for curl on QNX is maintained in the curl-for-qnx git repository. Of course we will appreciate submitted issues and pull requests in that repository as well.

This commercial agreement is between Blackberry and wolfSSL. I am employed by wolfSSL. If you want your operating system to have equally fancy and always up-to-date releases, you know who to contact.

wcurl is here

Users tell us that remembering which curl options to use when they just want to download the contents of a URL is hard. This is one often repeated reason why some users reach for wget instead of curl on the command line: it downloads the data from the URL without you needing to provide any extra arguments. Without you needing to remember which option(s) to use.

In the curl user survey of 2024, it was again mentioned several times.

Enter wcurl

Samuel Henrique decided to do something about it. Today he announced that he not only created wcurl, a curl wrapper aimed at meeting this exact need, he also created a Debian package for it and made sure wcurl now ships as part of the curl package. Starting in 8.8.0-2. I already have it on my Debian unstable installations.

wcurl is implemented as a shell script that uses curl. It also ships with its own manpage.

Take it for a spin. Tell the team what you think!

Discussion

Hacker news

long term curl versions

In the curl project we ship new releases based on the master branch of our git repository, with a clean and linear commit history. We have never maintained an old branch for long term support or stability. Instead we promise to not break user behavior nor the ABI or API. All users should be able to always upgrade to the latest.

A never-ending stream of releases you can always upgrade to; a new one every eight weeks.

We build infrastructure you can lean on.

But

Sometimes reality does not match our intentions and we ship regressions.

Sometimes users are too scared that there might be a regression, so they refrain from upgrading. Risk averse is probably how they view themselves.

Sometimes users, organizations and Linux distros have policies that say they do not upgrade versions. These are usually based on how software in general works: there needs to be a single fixed policy for managing software versions, and curl gets treated the same way as everything else.

For those situations and other related scenarios, repeating the top paragraphs does not help.

long term branches by others

In practice, just about every major Linux distributor maintains one or more stable curl branches. They backport security fixes to those versions to keep them secure for their users. Some vendors also merge selected bugfixes into their branches.

Every Linux distributor picks the particular curl version they stick to by themselves, without coordination with other distros. They all do it at different times and they all have their own specific criteria and work processes for doing this. This, in combination with curl’s frequent releases, tends to make them all pick different versions for their different branches. And keep them alive for different lengths of time.

Some vendors maintain their stable branches for extended periods of time. Ten years and beyond happens.

These long-lived branches may eventually end up having literally hundreds of patches applied to them. The curl builds done from these branches still report as version x.y.z but in reality they are mutated versions that can be significantly different compared to the original x.y.z version that the curl project shipped in a tarball back in the day.

That’s the comfort you get for picking (and paying?) a Linux distribution. (Yes, you also easily get stuck with an ancient version because of this.)

Some users also simply get stuck on older versions for other reasons and do not security-patch them (by ignorance or incompetence), making them more and more insecure over time.

Reality

At the time I write this post, the curl release with the largest number of known security vulnerabilities has 85 published CVEs.

By asking users and by looking at logs on various servers, we know that just about every curl version we have shipped in the last dozen years or so remains in use somewhere. We can only hope that most of them are security patched.

In reality, every release we do becomes a long term release for someone.

Long term support?

Every once in a while a discussion pops up in or close to the curl project about whether we should consider starting to maintain one or more LTS branches.

We have never completely dismissed those ideas. We are however acutely aware of the extra effort and energy such an endeavor requires, so we have so far shrugged it off. But should there come users and sponsors willing to help make it happen, we would not be shy about implementing something.

After all, the ones most interested in LTS branches are usually people and companies with an economic gain to be had – businesses using and relying on a rock solid curl. They should then also be able to help pay for this.

If you or the company you work for would be interested in something like this, please reach out and we can get the conversation going. Maybe we can do something to improve the lives of people out there?

Until then, we stick to a single release branch.

Credits

Image by Julius Silver from Pixabay

Inside 22,734 Steam games

About a year ago I blogged about games that use curl. In that post I listed a bunch of well-known titles I knew use curl and there was a list of 136 additional games giving credit to curl.

Kind of amazing that over one hundred games decided to use curl!

At the time, lots of people told me that number was probably way too low, and while I kind of had that feeling as well, it was just a feeling and nothing else. We cannot be absolutely certain unless there is data or evidence to actually back it up.

The speculation could stop this week, when someone provided me with a link to a database of Steam titles (Steam, as in the video game service). SteamDB is a third-party site that among other things extracts data and figures out which “SDKs” are used by Steam games. They maintain a list of game titles on Steam using curl.

Since that list is capped at 10,000 titles, I had to filter it and add up the number of titles based on release year. Out of the 91,559 titles they currently list in their database, 22,734 are identified to be using curl: 24.8%.

Not too shabby for a hobby.

Discussion

Hacker news

curl user survey 2024 analysis

As tradition dictates, I have spent many hours walking through the responses to this year’s curl user survey. I have sorted tables, rendered updated graphs and tried to wrap my head around what all these numbers might mean and what conclusions and lessons we should draw.

I present the results, the collected answers to the survey, mostly raw, without a lot of analysis or decisions. This is to allow everyone who takes the time to read through to form their own opinions and thoughts. It also gives me more time to glance over the numbers many more times before I make up my mind about possible outcomes.

The 2024 user survey analysis document

If you find any mistakes or omissions in this document, let me know and I will fix them and upload corrected versions.

63 pages and 14,800 words. Enjoy!

Why curl closes PRs on GitHub

Contributors to the curl project on GitHub tend to notice quite quickly that pull requests submitted to curl do not generally appear as “merged” with the accompanying purple blob; instead they are said to be “closed”. This has been happening since 2015 and is probably not going to change anytime soon.

Let me explain why this happens.

I blame GitHub

GitHub’s UI does not allow us to review or comment on commit messages for pull requests. Therefore, it is hard to insist that contributors provide the correct message, using the proper language in the correct format.

If you make a pull request based on a single commit, the initial PR message is based on the commit message, but when follow-up fixes are done and perhaps force-pushed, the PR message is not updated to match the commit message’s updates.

Commit messages with style

I believe having good commit messages following a fixed style and syntax helps the project. It makes the git history better and easier to browse. It allows us to write tools and scripts around git and the git history. For example, we generate release notes and project stat graphs basically straight from the git log.

We also like and use a strictly linear history in curl, meaning that all commits are rebased on the master branch. Lots of the scripting mentioned above depends on this fact.

Manual merges

In order to make sure the commit message is correct, and in fact that the entire commit looks correct, we merge pull requests manually. That means that we pull down the pull request into a local git repository and clean up the commit message to adhere to project standards.

And then we push the commit to the git repository. One or more of the commit messages in such a push then typically contain lines like:

Fixes #[number] and Closes #[number]. Those are instructions to GitHub and we use them like this:

Fixes means that this commit fixed an issue that was reported in the GitHub issue with that id. When we push a commit with that instruction, GitHub closes that issue.

Closes means that we merged a pull request with this id. (GitHub has no way for us to tell it that we merged the pull request.) This instruction makes GitHub close the corresponding pull request: “[committer] closed this in [commit hash]”.
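
As an illustration, the tail end of a commit message pushed this way could look like the following. The area prefix, summary, name and numbers are all made up for the example:

    http2: fix a miscounted flow control window

    Reported-by: Jane Doe
    Fixes #14001
    Closes #14002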

We do not let GitHub dictate how we do git. We use git and expect GitHub to reflect our git activity.

We COULD but we won’t

We could in theory fix and clean up the commits locally and manually exactly the way we do now, then force-push them to the remote branch and then use the merge button on the GitHub site, and they would appear as “merged”.

That is however a clunky, annoying and time-consuming extra step that not only requires that we (always) push code to other people’s branches, it also triggers a whole new round of CI jobs. This, only to get a purple blob instead of a red one. Not worth it.

If GitHub allowed it, I would disable the merge button in the GitHub PR UI for curl, since it basically cannot be used correctly in the project.

Squashing all the commits in the PR is also not something we want, since in many cases the changes should be kept as more than one commit, and each commit needs its own dedicated and correct commit message.

What GitHub could do

GitHub could offer a Merged keyword in the exact same style as Fixes and Closes, one that just tells the service that we took care of this PR and merged it as this commit. It’s on me. My responsibility. I merged it. It would help users and contributors to better understand that their closed PR was in fact merged as that commit.

It would also have saved me from having to write this blog post.

Discussion

Hacker news

Addendum

In some post-publish discussions I have seen people ask about credits. This method of merging commits does not break or change how the authors are credited for their work. The commit authors remain the commit authors, and the one doing the commit (which is me when I do them) is stored separately, like git always does. Doing the pushes manually this way does in no way change this model. GitHub will even count the commits correctly for the committer – assuming they use an email address their GitHub account knows about (I think).

HTTP/3 in curl mid 2024

Time for another checkup. Where are we right now with HTTP/3 support in curl for users?

I think curl’s situation is symptomatic for a lot of other HTTP tools and libraries. HTTP/3 has been and continues to be a much tougher deployment journey than HTTP/2 was.

curl supports four alternative HTTP/3 solutions

You can enable HTTP/3 for curl using one of these four different approaches. We provide multiple different ones to let “the market” decide and to allow different solutions to “compete” with each other so that users eventually can get the best one. The one they prefer. That saves us from the hard problem of trying to pick a winner early in the race.

More details about the four different approaches follow below.

Why is curl not using HTTP/3 already?

It already does, if you build it yourself with the right set of third party libraries. Also, the curl for Windows binaries provided by the curl project support HTTP/3.

For Linux and other distributions and operating system packagers, a big challenge remains: the most widely used TLS library (OpenSSL) does not offer the widely accepted QUIC API that most other TLS libraries provide. (Remember that HTTP/3 uses QUIC, which uses TLS 1.3 internally.) This lack of an API prevents existing QUIC libraries from working with OpenSSL as their TLS solution, forcing everyone who wants to use a QUIC library to use another TLS library – because curl does not easily allow itself to get built using multiple TLS libraries. Having a separate TLS library for QUIC than for the other TLS based protocols is not supported.

Debian is running an experiment to enable HTTP/3 in their shipped version of curl by switching it to GnuTLS (and building with ngtcp2 + nghttp3).

HTTP/3 backends

To get curl to speak HTTP/3 there are three different components that need to be provided, apart from the adjustments in the curl code itself:

  • TLS 1.3 support for QUIC
  • A QUIC protocol library
  • An HTTP/3 protocol library

Illustrated

Below, you can see the four different HTTP/3 solutions supported by curl in different columns. All except the right-most solution are considered experimental.

From left to right:

  1. the quiche library does both QUIC and HTTP/3 and it works with BoringSSL for TLS
  2. msh3 is an HTTP/3 library that uses msquic for QUIC and either an OpenSSL fork family member or Schannel for TLS
  3. nghttp3 is an HTTP/3 library that in this setup uses OpenSSL‘s QUIC stack, which does both QUIC and TLS
  4. nghttp3 for HTTP/3 using ngtcp2 for QUIC can use a range of different TLS libraries: the OpenSSL fork family, GnuTLS and wolfSSL. (picotls is supported too, but curl itself does not support picotls for other TLS use)
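
No matter which of these backends curl was built with, an application asks for HTTP/3 the same way. A minimal sketch, with error handling omitted:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* attempt HTTP/3, with fallback to earlier HTTP versions if
           the QUIC connection cannot be established */
        curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                         (long)CURL_HTTP_VERSION_3);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }

The command line tool equivalent is the --http3 option.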

ngtcp2 is ahead

ngtcp2 + nghttp3 was the first QUIC and HTTP/3 combination that shipped non-beta versions that work solidly with curl, and that is the primary reason it is the solution we recommend.

The flexibility in TLS solutions in that vertical is also attractive as this allows users a wide range of different libraries to select from. Unfortunately, OpenSSL has decided to not participate in that game so this setup needs another TLS library.

OpenSSL QUIC

OpenSSL 3.2 introduced a QUIC stack implementation that is not labeled “beta” – the second solution curl can use. In OpenSSL 3.3 they improved it further. Since early 2024, curl can get built to use this library for HTTP/3 as explained above.

However, the API OpenSSL provides for doing transfers is lacking. It lacks vital functionality, which makes it inefficient and basically forces curl to sometimes busy-loop to figure out what to do next. This fact, and perhaps additional problems, make the OpenSSL QUIC implementation significantly slower than the competition. Another reason to advise users to maybe use another solution.

We keep communicating with the OpenSSL team about what we think needs to happen and what they need to provide in their API so that we can do QUIC efficiently. We hope they will improve their API going forward.

Stefan Eissing produced the nice comparison graphs that I have borrowed from his performance presentation at curl up 2024. (Stefan also blogged about h3 performance in curl earlier.) They compare three HTTP/3 curl backends against each other. (msh3 is not included because it does not work well enough in curl.)

As you can see below, in several test setups OpenSSL achieves only roughly half the performance of the other backends, in both requests per second and raw transfer speed. This is on localhost, so these are basically CPU bound transfers.

I believe OpenSSL needs to work on their QUIC performance in addition to providing an improved API.

quiche and msh3

quiche is still labeled beta and it only works with BoringSSL, which makes it harder to use in a lot of situations.

msh3 does not work at all right now in curl after a refactor a while ago.

HTTP/3 is a CPU hog

This is not news to anyone following protocol development. I have been repeating this over and over in every HTTP/3 presentation I have done – and I have done a few by now, but I think it is worth repeating and I also think Stefan’s graphs for this show the situation in a crystal clear way.

HTTP/3 is slow in terms of transfer performance when you are CPU bound. In most cases of course, users are not CPU bound because typically networks are the bottlenecks and instead the limited bandwidth to the remote site is what limits the speed on a particular transfer.

HTTP/3 is typically faster at completing a handshake, thanks to QUIC, so an HTTP/3 transfer can often get the first byte transmitted sooner than any other HTTP version (over TLS) can.

To show how this looks with more of Stefan’s pictures, let’s first show the faster handshakes from his machine somewhere in Germany. These tests were using a curl 8.8.0-DEV build, from a while before curl 8.8.0 was released.

Nope, we cannot explain why google.com actually turned out worse with HTTP/3. It can be added that curl.se is hosted by Fastly’s CDN, so this is really comparing curl against three different CDN vendors’ implementations.

Again: these are CPU bound transfers so what this image really shows is the enormous amounts of extra CPU work that is required to push these transfers through. As long as you are not CPU bound, your transfers should of course run at the same speeds as they do with the older HTTP versions.

These comparisons show curl’s treatment of these protocols; they are not generic protocol comparisons (if such are even possible). We cannot rule out that curl might have some issues or weird solutions in the code that could explain part of this. While we certainly always have areas for improvement remaining, I personally do not think we have any significant performance blockers lurking. We cannot be sure though.

OpenSSL-QUIC stands out here as well, in the not so attractive end.

HTTP/3 deployments

w3techs, Mozilla and Cloudflare data all agree that somewhere around 28-30% of the web traffic is HTTP/3 right now. This is a higher rate than HTTP/1.1 for browser traffic.

An interesting detail about this 30% traffic share is that all the big players and CDNs (Google, Facebook, Cloudflare, Akamai, Fastly, Amazon etc) run HTTP/3, and I would guess that they combined normally have a much higher share of all the web traffic than 30%. Meaning that there is a significant amount of browser web traffic that could use HTTP/3 but still does not. Unfortunately I don’t have the means to figure out explanations for this.

HTTPS stack overview

In case you need a reminder, here is how an HTTPS stack works.

My BDFL guiding principles

The thing about me being a BDFL for curl is that it has the D in there. I have the means and ability to push for or veto just about anything I like or do not like in the project, should I decide to. In my public presentations about curl I emphasize that I truly try to be a benevolent dictator, but then I presume quite a few dictators would say and believe the same, whether or not it appears true to the outside world.

I think we can say with some certainty that dictatorships are not the ideal way of running a country, and the same might go for Open Source projects.

In curl we keep using this model because it works, and changing it to something else would be a large and complicated process that we have not wanted to start, because we have not had any strong reason to. There is anecdotal evidence that this way of running the project works somewhat.

A significant difference between being a dictator for an Open Source project compared to a country is however the ease with which every citizen could just leave one day and start a new clone country, with all the same citizens and the same layout, just without the dictator. I am easily replaceable and made into past tense if I were to abuse my role.

So there is this inherent force to push me to do good for the project even if I am a “dictator”.

As a BDFL of curl…

This is what I think the curl project should focus on. What I want the curl project to be. These are the ten commandments I think should remain our guiding principles. I think this is what makes curl. My benevolent guidelines.

My ten guiding principles for curl

  1. Be open and friendly
  2. Ship rock-solid products
  3. Be a leader in Open Source
  4. Maintain a security first focus
  5. Provide top-notch documentation
  6. Remain independent
  7. Respond timely
  8. Keep up with the world
  9. Stay on the bleeding edge
  10. Respect feedback

This list of ten areas perhaps contains things every open source project wants to focus on and excel in, but I think that is irrelevant here. These are the ten key and core focus points for me when I work on curl. We should be best-in-class in each and every one of them.

Let me elaborate

1. Be open and friendly to all contributors, new and old

I think open source projects have a lot to gain by making efforts in being friendly and approachable. We were all newcomers into a project once. We gain more, and better, contributors by remaining a friendly and open project.

This does not mean that we should tolerate abuse or trolling.

I try to lead this by example. I do not always succeed.

2. Ship rock-solid products for the universe to depend upon

Reviews, tests, analyzers, fuzzers, audits, bug bounty programs etc are means to make sure the code runs smoothly everywhere. Studying protocol specs and inter-operating with servers and other clients on the Internet ensures that the products we make work as expected by billions of end users. On this planet and beyond. If our products cannot carry the world on their shoulders, we fail.

Rock-solid also means we are reliable over time – we do not break users’ scripts and applications. We maintain ABI/API compatibility. The command line options the curl tool introduces are supported until the end of time.

3. Be a leader in Open Source, follow every best practice

As true believers in the powers of Open Source we lead by example. We are here to show that you not only can do all development and everything in the open using open source practices, but that it also makes the project thrive and deliver state of the art outcomes.

4. Always keep users secure, maintain a security first focus

Provide features and functionality with user and protocol security in focus. Address security concerns and reports without delays. Document every past mistake in thorough detail. Help users do secure and safe Internet transfers – by default.

5. Provide industry-leading quality documentation

A key to successful usage of our products is to give users the means and ability to use them fully: we need to document how everything works and how to use it, with enough clarity and detail that users understand and are empowered.

No comparable software project, open or proprietary, can compete with the quality and amount of documentation we provide.

6. Roam free, independent from all companies and organizations

curl shall forever remain independent. curl is not part of any umbrella organization and it is not owned or controlled by any company. This makes us entirely independent and free to do what we think is best for the community, for our users and for Internet transfers in general. Its license shall remain as it is, and it ensures that curl remains free. Copyright holders are individuals; there is no assignment or licensing of copyrights involved.

7. Respond timely on issues and questions

We shall strive to respond to issues, reports and questions sent to the project within a reasonable time. To show that we care and to help users solve their problems. We want users to solve their problems sooner rather than later.

It does not mean that we always can fix the problems or give a good answer immediately. Sometimes we just have to say that we can’t fix it for now.

8. Remain the internet transfer choice, keep up with the world

In the curl project we should keep up with protocol development, updates and changes. The way we do Internet transfers changes over time and curl needs to keep up to remain relevant.

We also need to write our protocol implementations sensibly, knowing that we are being watched and that our way of doing things is often copied, referred to and relied upon by other Internet clients.

This also implies that we are never done and that we can always improve. In every aspect of the project.

9. Offer bleeding edge protocol support to aid early adopters

When new protocols or ways to do protocols are introduced to the world, curl can play a great role in providing and offering early support for them. Through the years this has helped countless other implementers and even protocol authors, and with this we help improve the world around us. It also helps us get early feedback on our implementation and thus ship better code earlier.

10. Listen to and respect community feedback

I might be a dictator, but this dictatorship would not work if I and the rest of the curl maintainers did not listen to what the users and the greater curl community have to say. We need to stay agile and have a sense of what people want our products to do and to not do. Now and in the future.

An open source project can always get forked the second it makes a bad turn and somehow gives up on, sells out or betrays its users. Me being a dictator does not protect us from that. We need to stay responsive, listening and caring. We are here for our users.

Flag?

If curl were an evil empire, I figure we would sport this flag: