
Don’t trust, verify

Software and digital security should rely on verification rather than trust. I want to strongly encourage more users and consumers of software to verify curl. Ideally, you should require at least this level of verifiability from the other software components in your dependency chains as well.

Attacks are omnipresent

Every source code commit and every software release carries risks. Risks also exist entirely independent of those events.

Some of the things a widely used project can fall victim to include:

  • A skilled and seemingly friendly member of the project team (a “Jia Tan”) deliberately merges malicious content disguised as something else.
  • An established committer is breached without knowing it, and their commits or releases now contain tainted bits.
  • A rando convinces us to merge what looks like a bugfix but is a small step in a long chain of tiny pieces building up a planted vulnerability or even a backdoor.
  • Someone blackmails or extorts an existing curl team member into performing changes not otherwise accepted in the project.
  • A change by an established and well-meaning project member that adds a feature or fixes a bug mistakenly creates a security vulnerability.
  • The website on which tarballs are normally distributed gets hacked, and evil alternative versions of the latest release are provided, spreading malware.
  • Credentials of a known curl project member are breached and misinformation gets distributed appearing to come from a known and trusted source. Via email, social media or websites. It could even be this blog!
  • Something in this list is backed up by an online deep-fake video in which a known project member seemingly repeats something incorrect to aid a malicious actor.
  • A tool used in CI, hosted by a cloud provider, is hacked and runs something malicious.
  • While the primary curl git repository has downtime, someone online (impersonating a curl team member?) offers a temporary “curl mirror” that contains tainted code.

Any of these could of course also happen in combination and in rapid sequence.

You can verify

curl, mostly in the shape of libcurl, runs in tens of billions of devices. Clearly one of the most widely used software components in the world.

People ask me how I sleep at night given the vast amount of nasty things that could occur virtually at any point.

There is only one way to combat this kind of insomnia: do everything possible and do it openly and transparently. Make it a little better this week than it was last week. Do software engineering right. Provide means for everyone to verify what we do and what we ship. Iterate, iterate, iterate.

If even just a few users verify that they got a curl release signed by the curl release manager, and verify that the release contents are untainted and only contain bits that originate from the git repository, then we are in a pretty good state. We need enough independent outside users doing this that one of them can blow the whistle if anything at any point looks wrong.

I can’t tell you who these users are, or in fact if they actually exist, as they are and must be completely independent from me and from the curl project. We do however provide all the means and we make it easy for such users to do this verification.
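As a minimal sketch of what one layer of such verification can look like (the file name and digest in the test below are placeholders, not real curl values): compare a downloaded artifact's digest against a checksum obtained through an independent channel. The actual curl releases are additionally GPG-signed by the release manager, which a `gpg --verify` on the accompanying signature file checks more strongly.

```python
# Sketch: verify a downloaded release tarball against a checksum
# obtained through an independent channel (a signed announcement,
# a checksum pinned in your own build config, etc).
import hashlib

def sha256_of(path):
    """Stream the file so large tarballs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    """Abort if the artifact does not match the pinned checksum."""
    actual = sha256_of(path)
    if actual != expected_hex:
        raise SystemExit(f"checksum mismatch for {path}: {actual}")
    return True
```

In a build pipeline, the expected digest would live in your own configuration, so a swapped tarball fails the build rather than silently entering your dependency chain.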

We must verify

The few outsiders who verify that nothing was tampered with in the releases can only validate that the releases are made from what exists in git. It is our own job to make sure that what exists in git is the real thing. The secure and safe curl.

We must do a lot to make sure that whatever we land in git is okay. Here’s a list of activities we do.

  1. we have a consistent code style (invalid style causes errors). This reduces the risk for mistakes and makes it easier to debug existing code.
  2. we ban and avoid a number of “sensitive” and “hard-to-use” C functions (use of such functions causes errors)
  3. we have a ceiling for complexity in functions to keep them easy to follow, read and understand (failing to do so causes errors)
  4. we review all pull requests before merging, both with humans and with bots. We link back commits to their origin pull requests in commit messages.
  5. we ban use of “binary blobs” in git to not provide means for malicious actors to bundle encrypted payloads (trying to include a blob causes errors)
  6. we actively avoid base64 encoded chunks as they too could function as ways to obfuscate malicious contents
  7. we ban most uses of Unicode in code and documentation to avoid easily mixed up characters that look like other characters. (adding Unicode characters causes errors)
  8. we document everything to make it clear how things are supposed to work. No surprises. Lots of documentation is tested and verified in addition to spellchecks and consistent wording.
  9. we have thousands of tests and we add test cases for (ideally) every functionality. Finding “white spots” and adding coverage is a top priority. curl runs on countless operating systems and CPU architectures, and it can be built in billions of different configurations: not every combination is practical to test
  10. we build curl and run tests in over two hundred CI jobs that are run for every commit and every PR. We do not merge commits that have unexplained test failures.
  11. we build curl in CI with the most picky compiler options enabled and we never allow compiler warnings to linger. We always use -Werror that converts warnings to errors and fail the builds.
  12. we run all tests using valgrind and several combinations of sanitizers to find and reduce the risk for memory problems, undefined behavior and similar
  13. we run all tests as “torture tests”, where each test case is rerun to have every invoked fallible function call fail once each, to make sure curl never leaks memory or crashes due to this.
  14. we run fuzzing on curl: non-stop as part of Google’s OSS-Fuzz project, but also briefly as part of the CI setup for every commit and PR
  15. we make sure that the CI jobs we have for curl never “write back” to curl. They access the source repository read-only and even if they would be breached, they cannot infect or taint source code.
  16. we run zizmor and other code analyzer tools on the CI job config scripts to reduce the risk of us running or using insecure CI jobs.
  17. we are committed to always fix reported vulnerabilities in the following release. Security problems never linger around once they have been reported.
  18. we document everything and every detail about all curl vulnerabilities ever reported
  19. our commitment to never breaking ABI or API allows all users to easily upgrade to new releases. This enables users to run recent security-fixed versions instead of legacy insecure versions.
  20. our code has been audited several times by external security experts, and the few issues that have been detected in those were immediately addressed
  21. we require two-factor authentication on GitHub for all committers
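The torture-test idea in item 13 can be sketched in a few lines. This is an illustration of the technique only, not curl's actual test harness: rerun the code under test once per fallible call, forcing exactly that call to fail, and check that every run fails gracefully.

```python
# Sketch of "torture testing": rerun a test, making each fallible
# call fail once, to verify the code handles every failure path.

class FaultInjector:
    """Counts calls to a fallible operation; call number fail_at fails."""
    def __init__(self, fail_at):
        self.fail_at = fail_at
        self.count = 0

    def allocate(self):
        self.count += 1
        if self.count == self.fail_at:
            return None          # simulate an allocation failure
        return object()

def code_under_test(alloc):
    """Allocates two resources; must survive failure of either one."""
    a = alloc.allocate()
    if a is None:
        return False
    b = alloc.allocate()
    if b is None:
        return False             # in C, 'a' would be freed here
    return True

def torture(total_calls):
    """Run once per fallible call site, failing a different one each time."""
    results = []
    for k in range(1, total_calls + 1):
        results.append(code_under_test(FaultInjector(fail_at=k)))
    return results

print(torture(2))  # each injected failure is handled -> [False, False]
```

In curl's case the failing calls are memory allocations and other fallible functions, and the pass criterion is that no run leaks memory or crashes.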

All this is done in the open with full transparency and full accountability. Anyone can follow along and verify that we follow through on this.

Require this for all your dependencies.

Not paranoia

We plan for the event when someone actually wants to and tries to hurt us and our users, badly. Or when that happens by mistake. A successful attack on curl could in theory reach widely.

This is not paranoia. This setup allows us to sleep well at night.

This is why users still rely on curl after thirty years in the making.

Documented

I recently added a verify page to the curl website explaining some of what I write about in this post.

One hundred weirdo emails

  1. My email address is spelled out in the curl license
  2. The curl license appears in many products
  3. Some people have problems with their products and need someone to email
  4. A few of these discover my email in their product
  5. Occasionally, the person in need of help emails me about their product.
  6. I collect some of those and make them public

I hope I don’t have to spell it out but I will do it anyway: in these cases I don’t know anything about their products and I cannot help them. Quite often I first have to search around just to figure out what the product the person is asking me about actually is or does.

Over the years I have collected such emails that end up in my inbox. Out of those I have received, I have cherry-picked my favorites: the best, the weirdest, the most offensive and the most confused ones, and I put them up online. A few of them also triggered separate blog posts of their own in the past.

They help us remember that the world is complicated and hard to understand.

Today, my online collection reached the magical number: 100 emails. The first one in the stash was received in 2009 and the latest arrived just the other day. I expect I’ll keep adding occasional new ones going forward as well.

Enjoy!

NTLM and SMB go opt-in

The NTLM authentication method was always a beast.

It is a proprietary protocol designed by Microsoft which was reverse engineered a long time ago. That effort resulted in the online documentation that I based the curl implementation on back in 2003. I then also wrote the NTLM code for wget while at it.

NTLM broke with the HTTP paradigm: it is made to authenticate the connection instead of the request, which is what HTTP authentication is supposed to do and what all the other methods do. This might sound like a tiny and insignificant detail, but it has a major impact in all HTTP implementations everywhere. Indirectly it is also the cause for quite a few security related issues in HTTP code, because NTLM needs many special exceptions and extra unique treatments.

curl has recorded no less than seven past security vulnerabilities in NTLM related code! While that may not be only NTLM’s fault, it certainly does not help.

The connection-based concept also makes the method incompatible with HTTP/2 and HTTP/3. NTLM requires services to stick to HTTP/1.

NTLM (v1) uses super weak cryptographic algorithms (DES and MD5), which makes it a bad choice even when disregarding the other reasons.

We are slowly deprecating NTLM in curl, but we are starting out by making it opt-in. Starting in curl 8.20.0, NTLM is disabled by default in the build unless specifically enabled.

Microsoft themselves have already deprecated NTLM. The wget project looks like it is about to make its NTLM support opt-in as well.

SMB

curl only supports SMB version 1. This protocol uses NTLM for authentication, and NTLM is equally bad there. Without NTLM enabled in the build, SMB support will also get disabled.

But also: SMBv1 is in itself a weak protocol that is barely used by curl users, so this protocol is also opt-in starting in curl 8.20.0. You need to explicitly enable it in the build to get it added.
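For builders who do need these protocols, opting in would look something like the following. The flag names here are assumptions based on curl's existing build toggles, not confirmed 8.20.0 options; check `./configure --help` or the curl build documentation for the authoritative names.

```
# Explicitly opting in to NTLM and SMB when building curl 8.20.0
# or later. Option names are assumptions -- verify against the
# build docs for your curl version.
./configure --enable-ntlm --enable-smb

# CMake equivalent, in the existing CURL_DISABLE_* style:
cmake -DCURL_DISABLE_NTLM=OFF -DCURL_DISABLE_SMB=OFF .
```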

Not removed yet

I want to emphasize that we have not removed support for these ancient protocols, we just strongly discourage their use. I believe this is a first step down the ladder that will eventually lead to their complete removal.

bye bye RTMP

In May 2010 we merged support for the RTMP protocol suite into curl, in our desire to support the world’s internet transfer protocols.

RTMP

The protocol is an example of the spirit of an earlier web: back when we still thought we would have different transfer protocols for different purposes. Before HTTP(S) truly became the one protocol that rules them all.

RTMP was made by Adobe and used by Flash applications. Remember those? RTMP is an ugly proprietary protocol that simply never saw much use in Open Source.

The common Open Source implementation of this protocol is done in the rtmpdump project. In that project they produce a library, librtmp, which curl has been using all these years to handle the actual binary bits over the wire. Build curl to use librtmp and it can transfer RTMP:// URLs for you.

librtmp

In our constant pursuit to improve curl, to find spots that are badly tested and to identify areas that could be weak from a security and functionality stand-point, our support of RTMP was singled out.

Here I would like to stress that I’m not suggesting that this is the only area in need of attention or improvement, but this was one of them.

As I looked into the RTMP situation I realized that we had no (zero!) tests of our own that actually verify RTMP with curl. It could thus easily break when we refactor things. Something we do quite regularly. I mean refactor (but also breaking things). I then took a look upstream into the librtmp code and associated project to investigate what exactly we are leaning on here. What we implicitly tell our users they can use.

I quickly discovered that the librtmp project does not have a single test either. They have not done a release in many years, which means that most Linux distros have packaged their code straight from the repository. (The project insists that there is nothing to release, which seems contradictory.)

Are there perhaps any librtmp tests in the pipe? There has not been a single commit in the project within the last twelve months, and when I asked one of their leading team members about the situation, it was made clear to me that there are no tests in the pipe for the foreseeable future either.

How about users?

In November 2025 I explicitly asked for RTMP users on the curl-library mailing list, and one person spoke up who uses it for testing.

In the 2025 user survey, 2.2% of the respondents said they had used RTMP within the last year.

The combination of few users and untested code is a recipe for pending removal from curl unless someone steps up and improves the situation. We therefore announced that we would remove RTMP support six months into the future, unless someone cried out and stepped up to improve the RTMP situation.

We repeated this we-are-going-to-drop-RTMP message in every release note and release video done since then, to make sure we do our best to reach anyone actually still using RTMP and caring about it.

If anyone would come out of the shadows now and beg for its return, we can always discuss it – but that will of course require work and adding test cases before it would be considered.

Compatibility

Can we remove support for a protocol and still claim API and ABI backwards compatibility with a clean conscience?

This is the first time in modern days we remove support for a URL scheme and we do this without bumping the SONAME. We do not consider this an incompatibility primarily because no one will notice. It is only a break if it actually breaks something.

(RTMP in curl actually could be done using six separate URL schemes, all of which are no longer supported: rtmp, rtmpe, rtmps, rtmpt, rtmpte, rtmpts.)

The official number of URL schemes supported by curl is now down to 27: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, MQTTS, POP3, POP3S, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS.

When

The commit that actually removed RTMP support has been merged. We had the protocol supported for almost sixteen years. The first curl release without RTMP support will be 8.20.0, planned to ship on April 29, 2026.

One hundred curl graphs

In the spring of 2020 I decided to finally do something about the lack of visualizations for how the curl project is performing, development wise.

What does the lines-of-code growth look like? How many command line options have we had over time, and how many people have done more than 10 commits per year over time?

I wanted to have something that visually would show me how the project is doing, from different angles, viewpoints and probes. In my mind it would be something like a complicated medical device monitoring a patient that a competent doctor could take a glance at and assess the state of the patient’s health and welfare. This patient is curl, and the doctors would be fellow developers like myself.

GitHub offers some rudimentary graphs but I found (and still find) them far too limited. We also ran gitstats on the repository so there were some basic graphs to get ideas from.

Make it myself

I did a look-around to see what existing frameworks and setups existed that I could base this on, as I was convinced I would have to do quite some customizing myself. Nothing I saw was close enough to what I was looking for. I decided to make my own, at least for a start.

I decided to generate static images for this, rather than add some JavaScript framework I don’t know how to use to the website. Static daily images are excellent for both load speed and CDN caching. Since we already deny running JavaScript on the site, that saved me from having to work against such a framework. SVG images are vector based and should scale nicely.

SVG is also a better format from a download size perspective, as PNG almost always produces much larger files for this kind of image.

When this started, I imagined that it would be a small number of graphs mostly showing timelines with plots growing from lower left to upper right. It would turn out to be a little naive.

gnuplot

I knew some basics about gnuplot from before as I had seen images and graphs generated by others in the past. Since gitstats already used it I decided to just dive in deeper and use this. To learn it.

gnuplot is a 40 year old (!) command line tool that can generate advanced graphs and data visualizations. It is a powerful tool, which also means that not everything is simple to understand and use at once, but there is almost nothing in terms of graphs, plots and curves that it cannot handle in one way or another.

I happened to meet Lee Phillips online who graciously gave me a PDF version of his book aptly named gnuplot. That really helped!

Produce data to feed gnuplot

I decided that for every graph I want to generate, I first gather and format the data with one script, then render an image in a separate independent step using gnuplot. It made it easy to work on them in separate steps and also subsequently tune them individually and to make it easy to view the data behind every graph if I ever think there’s a problem in one etc.
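A minimal sketch of the second, rendering step of that pipeline could look like the following gnuplot script. The file names, data format and plot here are made up for illustration; they are not the actual curl dashboard scripts.

```gnuplot
# plot.gp -- render a hypothetical commits-per-month graph from a
# data file prepared by a separate script. Assumed input format:
# "YYYY-MM-DD count" per line (illustrative, not curl's real data).
set terminal svg size 800,480
set output "commits.svg"
set xdata time
set timefmt "%Y-%m-%d"
set format x "%Y"
set title "commits per month"
plot "commits.dat" using 1:2 with lines notitle
```

Keeping data preparation and rendering separate means each graph's underlying numbers can be inspected on their own whenever a plot looks suspicious.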

It took me about two weeks of on-and-off background work to get a first set of graphs visualizing curl development status.

I then created the glue scripting necessary to add a first dashboard with the existing graphs to the curl website. Static HTML showing static SVG images.

On March 20, 2020 the first version of the dashboard showed no less than twenty separate graphs. I refer to “a graph” as a separate image, possibly showing more than one plot/line/curve. That first dashboard version had twenty graphs using 23 individual plots.

Since then, we display daily updated graphs there.

The data

All data used for populating the graphs is open and available, and I happily use whatever is available:

  • git repository (source, tags, etc)
  • GitHub issues
  • mailing list archives
  • curl vulnerability data
  • hackerone reports
  • historic details from the curl past

Open and transparent as always.

Then it grew

Every once in a while since then I get to think of something else in the project, the code, development, the git history, community, emails etc that could be fun or interesting to visualize and I add a graph or two more to the dashboard. Six years after its creation, the initial twenty images have grown to one hundred graphs including almost 300 individual plots.

Most of them show something relevant, while a few of them are in the more silly and fun category. It’s a mix.

Graph 100

The 100th graph was added on March 15, 2026 when I brought back the “vulnerable releases” graph (appearing on the site on March 16 for the first time). It shows the number of known vulnerabilities each past release has. I removed it previously because it became unreadable, but in this new edition I made it only show the label for every 4th release which makes it slightly less crowded than otherwise.

This day we also introduce a new 8-column display mode.

Custom but available

Many of the graphs are internal and curl specific of course. The scripts for this, and the entire dashboard, remain written specifically for curl and curl’s circumstances and data. They would need some massaging and tweaking in order to work for someone else.

All the scripts are of course open and available for everyone.

I used to also offer all the CSV files generated to render the graphs in an easily accessible form on the site, but this turned out to be work done for virtually no audience, so I removed that again. If you replace the .svg extension with .csv, you can still get most of the data – if you know.

Data is knowledge

The graphs and illustrations are not only silly and fun. They also help us see development from different angles and viewpoints, and they help us draw conclusions, or at least try to. As an established and old project that makes an effort to do things right, some of what we learn from this curl data might be useful in other projects as well. Maybe we can even use it as a basis when we decide what to do next.

I personally have used these graphs in countless blog posts, Mastodon threads and public curl presentations. They help communicate curl development progress.

The jokes

On Mastodon I keep joking about being a graphaholic, and often when I have presented yet another graph added to the collection, someone has asked the almost mandatory question: how about a graph of the number of graphs on the dashboard?

Early on I wrote up such a script as well, to immediately fulfill that request. On March 14, 2026, I decided to add it as a permanent graph on the dashboard.

The next-level joke (although some would argue that this is not fun anymore) is to ask me for a graph showing the number of graphs for graphs. As I aim to please, I have that as well, although it is not on the dashboard.

More graphs

I am certain I (we?) will add more graphs over time. If you have good ideas for what source code or development details we should and could illustrate, please let me know.

Links

The git repository: https://github.com/curl/stats/

Daily updated curl dashboard: https://curl.se/dashboard.html

curl gitstats: https://curl.se/gitstats/

chicken nuget

Background: nuget.org is a Microsoft owned and run service that allows users to package software and upload it to nuget so that other users can download it. It is targeted at .NET developers but there is really no filter on what you can offer through their service.

Three years ago I reported on how nuget was hosting and providing ancient, outdated and insecure curl packages. Random people download a curl tarball, build curl and then upload it to nuget, and nuget then offers those curl builds to the world – forever.

To properly celebrate the three year anniversary of that blog post, I went back to nuget.org, entered curl into the search bar and took a look at the results.

I immediately found at least seven different packages where people were providing severely outdated curl versions. The most popular of those, rmt_curl, reports that it has been downloaded almost 100,000 times over the years and has still been downloaded almost 1,000 times per week in recent weeks. It is still happening. The packages I reported three years ago are gone, but now there is a new set of equally bad ones. No lessons learned.

rmt_curl claims to provide curl 7.51.0, a version we shipped in November 2016. Right now it has 64 known vulnerabilities and we have done more than 9,000 documented bugfixes since then. No one in their right mind should ever download or use this version.

Conclusion: the state of nuget is just as sad now as it was three years ago, and this triggered another someone-is-wrong-on-the-internet moment for me. I felt I should do my duty and tell them. Again. Surely they will act this time! Surely they think of the security of their users?

Trusting randos

The entire nuget concept is set up and destined to end up like this: random users on the internet put something together, upload it to nuget, and then the rest of the world downloads and uses those things – trusting that whatever the description says is accurate and well-meaning. Maybe there are some additional security scans done in the background, but I don’t see how anyone can know that the packages don’t contain any backdoors, trojans or other nasty deliberate attacks.

And whatever has been uploaded once seems to then be offered in perpetuity.

I reported this again

Like three years ago I listed a bunch of severely outdated curl packages in my report. nuget says I can email them a report, but that just sent me a bounce back saying they don’t accept email reports anymore. (Sigh, and yes I reported that as a separate issue.)

I was instead pointed over to the generic Microsoft security reporting page where there is not even any drop-down selection to use for “nuget” so I picked “.NET” instead when I submitted my report.

“This is not a Microsoft problem”

Almost identically to three years ago, my report was closed within less than 48 hours. It’s not a nuget problem they say.

Thank you again for submitting this report to the Microsoft Security Response Center (MSRC).

After careful investigation, this case has been assessed as not a vulnerability and does not meet Microsoft’s bar for immediate servicing. None of these packages are Microsoft owned, you will need to reach out directly to the owners to get patched versions published. Developers are responsible for removing their own packages or updating the dependencies.

In other words: they don’t think it is nuget’s responsibility to keep the packages they host secure and safe for their users. I should instead report these things individually to every outdated package provider – who, if they cared, would have removed or updated these packages many years ago already.

Also, that would imply a never-ending whack-a-mole game for me, since people obviously keep doing this. I think I have better things to do with my life.

Outdated efforts

In the cases I reported, the packages seem to be of the kind that once had the attention and energy of someone who kept them up to date with curl releases for a while, then stopped, and since then the packages on nuget have just collected dust and gone stale.

Still, apparently users keep finding and downloading them, even if maybe not at terribly high numbers.

Thousands of fooled users per week is thousands too many.

How to address

The uploading users are perfectly allowed to do this, legally, and nuget is perfectly allowed to host these packages as per the curl license.

I don’t have a definite answer to what exactly nuget should do to address this problem once and for all, but as long as they allow packages uploaded nine years ago to still get downloaded today, it seems they are asking for this. They contribute and aid users getting tricked into downloading and using insecure software, and they are indifferent to it.

A rare few applications that were uploaded nine years ago might actually still be okay but those are extremely rare exceptions.

Conclusion

The last time I reported this nuget problem nothing happened on the issue until I tweeted about it. This time around, a well-known Microsoft developer (who shall remain nameless here) saw my Mastodon post about this topic when mirrored over to Bluesky and pushed for the case internally – but not even that helped.

The nuget management thinks this is okay.

If I were into puns I would probably call them chicken nuget for their unwillingness to fix this. Maybe just closing our eyes and pretending it doesn’t exist will just make it go away?

Absolutely no one should use nuget.

curl 8.19.0

Release presentation

Numbers

the 273rd release
8 changes
63 days (total: 10,712)
264 bugfixes (total: 13,640)
538 commits (total: 38,024)
0 new public libcurl function (total: 100)
0 new curl_easy_setopt() option (total: 308)
0 new curl command line option (total: 273)
77 contributors, 48 new (total: 3,619)
37 authors, 21 new (total: 1,451)
4 security fixes (total: 180)

Security

We stopped the bug-bounty but it has not stopped people from finding vulnerabilities in curl.

Changes

  • We stopped the bug-bounty. It’s worth repeating, even if it was no code change.
  • The cmake build got a CURL_BUILD_EVERYTHING option
  • Initial support for MQTTS was merged
  • curl now supports fractions for --limit-rate and --max-filesize
  • curl’s -J option now uses the redirect name as a backup
  • we no longer support OpenSSL-QUIC
  • on Windows, curl can now get built to use the native CA store by default
  • the minimum Windows version curl supports is now Vista (up from XP)

Pending removals

The following upcoming changes might be worth noticing. See the deprecate documentation for details.

  • NTLM support becomes opt-in
  • RTMP support is getting dropped
  • SMB support becomes opt-in
  • Support for c-ares versions before 1.16 goes away
  • Support for CMake 3.17 and earlier gets dropped
  • TLS-SRP support will be removed

Next

We plan to ship the next curl release on April 29. See you then!

Dependency tracking is hard

curl and libcurl are written in C. They are rather low-level components, present in many software systems.

They are typically not part of any ecosystem at all. They’re just a tool and a library.

In lots of places on the web when you mention an Open Source project, you will also get the option to mention in which ecosystem it belongs. npm, go, rust, python etc. There are easily at least a dozen well-known and large ecosystems. curl is not part of any of those.

Recently there’s been a push for PURLs (Package URLs), for example when describing your specific package in a CVE. A package URL only works when the component is part of an ecosystem. curl is not. We can’t specify curl or libcurl using a PURL.
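For comparison, this is roughly what Package URLs look like for components that do live in an ecosystem. The coordinates below are arbitrary illustrations, and the helper function is mine, not part of any PURL library:

```python
# Sketch: a Package URL (PURL) names a package within an ecosystem
# as pkg:type/namespace/name@version. This works for ecosystem-hosted
# packages, but there is no ecosystem "type" that naturally covers a
# plain C library like libcurl installed outside any package manager.
def purl(ptype, name, version, namespace=None):
    middle = f"{namespace}/{name}" if namespace else name
    return f"pkg:{ptype}/{middle}@{version}"

print(purl("npm", "left-pad", "1.3.0"))         # pkg:npm/left-pad@1.3.0
print(purl("pypi", "requests", "2.31.0"))       # pkg:pypi/requests@2.31.0
print(purl("github", "curl", "8.9.0", "curl"))  # pkg:github/curl/curl@8.9.0
```

The type component is the ecosystem itself, which is exactly the piece that has no good answer for a component that is "just a library" on the system.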

SBOM generators and related scanners use package managers to generate lists of used components and their dependencies. This makes these tools quite frequently just miss and ignore libcurl. It’s not listed by the package managers. It’s just in there, ready to be used. Like magic.

It is similarly hard for these tools to figure out that curl in turn also depends on and uses other libraries. At build-time you select which – but as we in the curl project primarily just ship tarballs with source code, we cannot tell anyone what dependencies their builds have. The additional libraries libcurl itself uses are all similarly outside of the standard ecosystems.

Part of the explanation for this is also that libcurl and curl are often shipped bundled with the operating system, or are sometimes perceived to be part of the OS.

Most graphs, SBOM tools and dependency trackers therefore stop at the binding or system that uses curl or libcurl, but without including curl or libcurl. The layer above so to speak. This makes it hard to figure out exactly how many components and how much software is depending on libcurl.

A perfect way to illustrate the problem is to check GitHub and see how many among its vast collection of many millions of repositories depend on curl. After all, curl exists in some thirty billion installations, so clearly it is used a lot. (Most of them being libcurl of course.)

It lists one dependency for curl.

What makes this even more amusing is that it looks like this single dependent repository (Pupibent/spire) lists curl as a dependency by mistake.

10K curl downloads per year

The Linux Foundation, the organization that we want to love but that so often makes that a hard bargain, has created something they call “Insights” where they gather lots of metrics on Open Source projects.

I held back so I never blogged and taunted OpenSSF for their scorecard attempts that were always lame and misguided. This Insights thing looks like their next attempt to “grade” and “rate” Open Source. It is so flawed and full of questionable details that I decided there is no point in me listing them all in a blog post – it would just be too long and boring. Instead I will just focus on a single metric. The one that made me laugh out loud when I saw it.

Package downloads

They claim curl was downloaded 10,467 times the last year. (source)

What does “a download” mean? They refer to statistics from ecosyste.ms, which is an awesome site and service, but it has absolutely no idea about curl downloads.

How often is curl “downloaded”?

curl release tarballs are downloaded from curl.se at a rate of roughly 250,000 / month.

curl images are currently pulled from docker at a rate of around 400,000 – 700,000 / day. curl is pulled from quay.io at roughly the same rate.

curl’s git repository is cloned roughly 32,000 times / day

curl is installed from Linux and BSD distributions at an unknown rate.

curl, in the form of libcurl, is bundled in countless applications, games, devices, cars, TVs, printers and services, and we cannot even guess how often it is downloaded as such an embedded component.

curl has been installed by default on every Windows and macOS system for many years now.

But no, 10,467 they say.

curl up 2026

The annual curl users and developers meeting, curl up, takes place May 23-24 2026 in Prague, Czechia.

We are in fact returning to the same city and the exact same venue as in 2025. We liked it so much!

curl up

This is a cozy and friendly event that normally attracts around 20-30 attendees. We gather in a room through a weekend and we talk curl. The agenda is usually set up with a number of talks through the two days, and each talk ends with a follow-up Q&A and discussion session. So no big conference thing, just a bunch of friends around a really large table. Over a weekend.

Anyone is welcome to attend – for free – and everyone is encouraged to submit a talk proposal – anything that is curl and Internet transfer related goes.

We make an effort to attract and lure the core curl developers and the most active contributors of recent years into the room. We do this by reimbursing their travel and hotel expenses.

Agenda

The agenda is a collaborative effort and we are going to work on putting it together from now all the way until the event, in order to make sure we make the best of the weekend and we get to talk to and listen to all the curl related topics we can think of!

Help us improve the Agenda in the curl-up wiki: https://github.com/curl/curl-up/wiki/2026

Human to human

Meeting up in the real world as opposed to doing video meetings helps us get to know each other better, allows us to socialize in ways we otherwise never can do and in the end it helps us work better together – which subsequently helps us write better code and produce better outcomes!

It also helps us meet and welcome newcomers and casual contributors. Showing up at curl up is an awesome way to dive into the curl world wholeheartedly and in the deep end.

Sponsor

Needless to say this event costs money to run. We pay our top people to come, we pay for the venue and pay for food.

We would love to have your company mentioned as top sponsor of the event or perhaps a social dinner on the Saturday? Get in touch and let’s get it done!

Attend!

Everyone is welcome and encouraged to attend – at no cost. We only ask that you register in advance (the registration is not open yet).

Video

We always record all sessions on video and make them available after the fact. You can catch up on previous years’ curl up sessions on the curl website’s video section.

We also live-stream all the sessions on curl up during both days. To be found on my twitch channel: curlhacker.

Friendly

Our events are friendly to everyone. We abide by the code of conduct, and we have never had anyone come even close to violating it.