Category Archives: Open Source

Open Source, Free Software, and similar

From suspicion to published curl CVE

Every curl security report starts out with someone submitting an issue to us on https://hackerone.com/curl. The reporter tells us what they suspect and what they think the problem is. This report is kept private, visible only to the curl security team and the reporter while we work on it.

In recent months we have gotten 3-4 security reports per week. The program has run for over six years now, with almost 600 reports accumulated.

On average, someone on the team makes a first response to the report within the first hour.

Assess

The curl security team currently consists of seven long-time, experienced curl maintainers. We immediately start to analyze and assess the received issue and its claims. Most reports do not identify actual security problems and are quickly dismissed and closed. Some identify plain bugs that are not security issues, and we then move the discussion over to the public bug tracker instead.

This part can take anything from hours up to multiple days and usually involves several curl security team members.

If we think the issue might have merit, we ask follow-up questions, test code that reproduces the problem and discuss with the reporter.

Valid

A small fraction of the incoming reports are actually considered valid security vulnerabilities. We work together with the reporter to reach a good understanding of exactly what is required for the bug to trigger and what the flaw can lead to. Together we set a severity for the problem (low, medium, high, critical) and we work out a first patch – which also helps make sure we understand the issue. Unless the problem is deemed serious, we tend to sync the publication of the new vulnerability with the pending next release. Our normal release cycle is eight weeks, so we are never farther than 56 days away from the next release.

Fix

For security issues we deem to be severity low or medium, we create a pull request for the problem in the public repository – but we don’t mention the security angle in the public communication around it. This way, we also make sure that the fix gets test exposure and time to get polished before the pending next release. Over the last five or so years, only two of about eighty confirmed security vulnerabilities have been rated a higher severity than medium. Fixes for vulnerabilities we consider to be severity high or critical are instead merged into the git repository when there are approximately 48 hours left before the pending release – to limit the exposure time before it is announced properly. We need to merge it into the public repository before the release because our entire test infrastructure and verification system is based on public source code.

Advisory

Next, we write up a detailed security advisory that explains the problem, exactly what the mistake is and how it can lead to something bad – including all the relevant details we can think of. This includes version ranges for affected curl versions, the exact git commit that introduced the problem and the commit that fixed it – plus credits to the reporter, the patch author etc. We have the ambition to provide the best security advisories you can find in the industry. (We also provide them in JSON format etc on the site for the rare few users who care about that.) We of course want the original reporter involved as well, to make sure we cover all angles of the problem accurately.

CVE

As we are a CNA (CVE Numbering Authority), we reserve and manage CVE IDs for our own issues ourselves.

Pre-notify

About a week before the pending release, when we will also publish the CVE, we inform the distros@openwall mailing list about the issue, including the fix and when it is going to be released. This gives Open Source operating systems a little time to prepare their releases and adjust for the CVE we will publish.

Publish

On the release day we publish the CVE details and we ship the release. We then also close the HackerOne report and disclose it to the world. We disclose all HackerOne reports once closed for maximum transparency and openness. We also inform all the curl mailing lists and the oss-security mailing list about the new CVE. Sometimes we of course publish more than one CVE for the same release.

Bounty

Once the HackerOne report is closed and disclosed to the world, the vulnerability reporter can claim a bug bounty from the Internet Bug Bounty which pays the researcher a certain amount of money based on the severity level of the curl vulnerability.

(The original text I used for this blog post was previously provided to the interview I made for Help Net Security. Tweaked and slightly extended here.)

The team

The heroes in the curl security team, who usually do all this work in silence and without much ado, are currently (in no particular order):

  • Max Dymond
  • Dan Fandrich
  • Daniel Gustafsson
  • James Fuller
  • Viktor Szakats
  • Stefan Eissing
  • Daniel Stenberg

Developer of the year

Developers Day is a fairly new annual Swedish gala organized by the Stockholm-based company Developers Bay; this is its third year running.

They have an ambition to highlight and celebrate Swedish software developers (or perhaps it is developers based in Sweden?) and hand out a series of awards for that purpose.

A jury of six people receives nominations throughout the year and then decides which of them get awards in six different categories.

Awarded

This year, I was graciously nominated for, and subsequently awarded, Developer of the year at the award gala on September 12, 2025.

The motivation, as shown in Swedish in the image above, translates into something like:

This year’s winner is a developer with a lifelong passion for technology and problem solving. His interest was awakened already in the 1980s with a Commodore 64 and has since grown into a career characterized by curiosity and drive. After starting his professional life at IBM, the developer has contributed to the open source world for a long time – both as a coder and as an ambassador for open collaboration. For this year’s winner, development is also a way to understand people, and the most challenging part of technology is the collaboration between them. He created curl, one of the world’s most installed software products, with over 20 billion installations.

Getting recognition for my work and many years in software development is truly awesome and heartwarming. It energizes me and motivates me to go further. Clearly I must be doing something right!

I aspire to make top quality software entirely free and Open Source. I want to provide stellar tools and means for my fellow developers that make them productive and allow them to build awesome things. I try to explain what I do, how things work and how I think things should be done, to perhaps in some small ways push the world in the appropriate direction.

The award

Yeah, this is probably a little navel-gazing and inside baseball, as this is just a (small) company and its associated network giving out awards within a relatively small Swedish community, decided by jury members who, going by their public bios, do not have terribly long or extensive experience out in the big wide world.

Of the year? A quite legitimate question is what special action or activity I have done in 2025 to earn the honor this particular year and not last year or next, but I think it simply boils down to the fact that someone nominated me this year.

Best developer? Comparing different persons working with completely different things in completely different areas and saying that one of them is “best” is certainly futile and of course not actually possible. We have numerous excellent developers in Sweden.

In spite of that, getting recognition in the form of an award is simply wonderful.

Thank you!

curl 8.16.0

Welcome to one of the more feature-packed curl releases we have had in a while. Exactly eight weeks since we shipped 8.15.0.

Release presentation

Numbers

the 270th release
17 changes
56 days (total: 10,036)
260 bugfixes (total: 12,538)
453 commits (total: 36,025)
2 new public libcurl functions (total: 98)
0 new curl_easy_setopt() options (total: 308)
3 new curl command line options (total: 272)
76 contributors, 39 new (total: 3,499)
32 authors, 17 new (total: 1,410)
2 security fixes (total: 169)

Security

We publish two severity-low vulnerabilities in sync with this release:

  • CVE-2025-9086 identifies a bug in the cookie path handler that can make curl get confused and override a secure cookie with a non-secure one using the same name – if the planets all happen to align correctly.
  • CVE-2025-10148 points out a mistake in the WebSocket implementation that makes curl not update the frame mask correctly for each new outgoing frame – as it is supposed to.

Changes

We have a long list of changes this time (a small example using a few of the new command line options follows after the list):

  • curl gets a --follow option
  • curl gets an --out-null option
  • curl gets a --parallel-max-host option to limit concurrent connections per host
  • --retry-delay and --retry-max-time accept decimal seconds
  • curl gets support for --longopt=value
  • curl -w now supports %time{}
  • now libcurl caches negative name resolves
  • ip happy eyeballing: keep attempts running
  • bump minimum mbedtls version required to 3.2.0
  • add curl_multi_get_offt() for getting multi related information
  • add CURLMOPT_NETWORK_CHANGED to signal network changed to libcurl
  • use the NETRC environment variable (first) if set
  • bump minimum required mingw-w64 to v3.0 (from v1.0)
  • smtp: allow suffix behind a mail address for RFC 3461
  • make default TLS version be minimum 1.2
  • drop support for msh3
  • support CURLOPT_READFUNCTION for WebSocket
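Here is a rough, hypothetical command line combining a few of the new options named above – --follow, --out-null, a decimal --retry-delay and the new --longopt=value syntax. The URL is just a placeholder and the exact behavior of the new options is as described in the list, so check the documentation before relying on this sketch:

curl --follow --out-null --retry 3 --retry-delay=2.5 https://example.com/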

Bugfixes

The official bugfix count surpassed 250 this cycle and we have documented them all in the changelog, including links to most issues or pull-requests where they originated.

See the release presentation for a walk-through of some of the perhaps most interesting ones.

preparing for the worst

One of the mantras I keep repeating is how we in the curl project keep improving, keep polishing and keep tightening every bolt there is. No one can do everything right from day one, but given time and will, we can over time get a lot of things lined up in neat and tidy lines.

And yet new things creep up all the time that can be improved and taken yet a little further.

An exercise

Back in the spring of 2025 we had an exercise at our curl up meeting in Prague. Jim Fuller played out an imaginary, life-like scenario for a bunch of curl maintainers. In this role-played major incident we got to consider how we would behave and what we would do in the curl project if something like Heartbleed or a serious breach were to occur.

It was a bit of an eye-opener for several of us. We realized we should probably get some more details written down and planned for.

Plan ahead

We of course arrange for things and work on procedures and practices to the best of our abilities to make sure that there will never be any such major incident in the curl project. However, as we are all human and we all make mistakes, it would be foolish to think that we are somehow immune to incidents of the highest possible severity level. Rationally, we should just accept the fact that even though the risk is hopefully really slim, it exists. It could happen.

What if the big bad happens

We have now documented some guidelines about what exactly constitutes a major incident, how it is declared, the roles we need to shoulder while it is ongoing – with a focus on both internal and external communication – and how we declare that the incident has ended. It is straightforward and quite simple.

Feel free to criticize or improve those if you find omissions or mistakes. I imagine that if we ever get to actually use these documented steps because of such a project-shaking event, we will get reasons to improve it. Until then we just need to apply our imagination and make sure it seems reasonable.

giants, standing on the shoulders of

This was the title of my keynote at the Open Source Summit Europe 2025 conference in Amsterdam that I delivered on August 25, 2025.

The giants, in effect large parts of modern infrastructure, stand on the shoulders of Open Source projects and their maintainers. But maybe these projects and people are not treated in optimal ways.

The slides are available.

My slot allowed me 15 minutes and I completed my presentation with around two minutes left. I have rarely received so many positive comments as after this talk.

Credits

The painting in the top image, The Colossus (also known as The Giant), is traditionally attributed to Francisco de Goya but might actually have been made by his friend Asensio Juliá.

Dropping old OpenSSL

curl added support for OpenSSL immediately when it was first released, switching away from SSLeay, back in the late 1990s.

We have since supported it over the decades as both OpenSSL and curl have developed.

A while back the OpenSSL project stopped updating their 1.0.x and 1.1.x public branches. This means that unless you are paying someone for support and rely only on the public open versions, these OpenSSL releases are going to decay and soon become insecure choices. Nothing to rely on.

As a direct result of this, the curl project has decided to drop support for OpenSSL 1.0.2 and 1.1.1 soon.

We stop supporting OpenSSL 1.0.2 in December 2025.

We stop supporting OpenSSL 1.1.1 in June 2026.

Starting in June 2026, we plan to only support OpenSSL 3 and later. Of course with the caveat that we might change our minds or schedule as we go along and things happen.

All pending removals from curl are listed here.

Contract support remains

Part of the reason for dropping this support is that basically the only users still running these versions are those already paying for OpenSSL support.

We will offer commercial support for curl with OpenSSL 1.1.1 for as long as customers want it, even when support gets removed from the public curl version.

The forks remain

This news is for OpenSSL support only and does not affect the forks. We intend to keep supporting the whole fork family AmiSSL, AWS-LC, BoringSSL, LibreSSL and QuicTLS going forward as well.

car brands running curl

Seven years ago I wrote about how a hundred million cars were running curl, and as I brought up that blog post in a discussion recently, I came to reflect on how the world might have changed since. Is curl perhaps used in more cars now?

Yes it is.

With the help of friendly people on Mastodon, and a little bit of Googling, the current set of car brands known to have cars running curl contains 47 names. Most of the world’s top brands:

Acura, Alfa Romeo, Audi, Baojun, Bentley, BMW, Buick, Cadillac, Chevrolet, Chrysler, Citroen, Dacia, Dodge, DS, Fiat, Ford, GMC, Holden, Honda, Hyundai, Infiniti, Jeep, Kia, Lamborghini, Lexus, Lincoln, Mazda, Mercedes, Mini, Nissan, Opel, Peugeot, Polestar, Porsche, RAM, Renault, Rolls Royce, Seat, Skoda, Smart, Subaru, Suzuki, Tesla, Toyota, Vauxhall, Volkswagen, Volvo

I think it is safe to claim that curl now runs in several hundred million cars.

How do we know?

This is based on curl or curl’s copyright being listed in documentation and/or shown on screen on the car’s infotainment system.

The manufacturers need to provide that information per the curl license. Even if some of course still don’t.

Some brands are missing

For brands missing in the list, we don’t know their status. There are many more car brands that we suspect probably also run and use curl, but for which we have not found enough evidence. If you find some, please let me know!

What curl are they running?

These are all using libcurl, not the command line tool. It is not uncommon for them to run fairly old versions.

What are they using curl for?

I can’t tell for sure as they don’t tell me. Presumably though, a modern car does a lot of Internet transfers for all sorts of purposes and curl is a reliable library for doing that. Download firmware images, music, maps or media. Upload statistics, messages, high-scores etc. Modern cars are full-blown computers and mobile phones combined – of course they transfer data.

Brands, not companies

The list contains 47 brands right now. They are however manufactured by a smaller number of companies, as most car companies sell cars under multiple different brands. So maybe 15 car companies?

Additionally, many of these companies buy their software from a provider who bundles it up for them. Several of these companies probably get their software from the same suppliers. So maybe there are only 7 different ones?

I have still chosen to list and talk about the brands because those are the consumer facing names used in everyday conversations, and they are the names we mere mortals are most likely to recognize.

Not a single sponsor or customer

Ironically enough, while curl runs in practically every new modern car that comes out of the factories, not a single one of the companies producing the cars, or the software they run, is a sponsor of curl or a customer of curl support. Not one.

An Open Source sustainability story in two slides

Yes they are allowed to

We give away curl for free for everyone to use at no cost and there is no obligation for anyone to pay anyone for this. These companies are perfectly in their rights to act like this.

You could possibly argue that companies should think about their own future and make sure that dependencies they rely on and would like to keep using, also survive so that they can keep depending on these components going forward as well. But obviously that is not how this works.

curl is liberally licensed under an MIT-like license.

What to do

I want curl to remain Open Source and I really like providing it in a way, under a liberal license, that makes it possible for it to get used everywhere. I mean, if we measure by how widely used a piece of software is, I think we can agree that curl is a top candidate.

I would like the economics and financials around the curl project to work out anyway, but maybe that is a utopia we can never reach. Maybe we eventually will have to change the license or something to entice or force a different behavior.

HTTP is not simple

I often hear or see people claim that HTTP is a simple protocol. Primarily of course from people without much experience or familiarity with actual implementations. I think I personally also thought along those lines back when I started working with the protocol.

After having personally devoted nearly three decades to writing client-side code doing HTTP, and having been involved in the IETF work on all the HTTP specs produced since 2008 or so, I think I am in a decent position to give a more expanded view of it. HTTP is not a simple protocol. Far from it. Even if we presume that people actually mean HTTP/1 when they say that.

HTTP/1 may appear simple for several reasons: it is readable text, the most simple use case is not overly complicated, and existing tools like curl and browsers help make HTTP easy to play with.

The HTTP idea and concept can perhaps still be considered simple and even somewhat ingenious, but the actual machinery is not.

But yes, you can telnet to a HTTP/1 server and enter a GET / command manually and see a response. However I don’t think that is enough to qualify the entire thing as simple.
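For illustration, a minimal hand-typed exchange could look something like this (the server response here is made up; a real server sends more headers):

GET / HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 5

hello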

I don’t believe anyone has tried to claim that HTTP/2 or HTTP/3 are simple. In order to properly implement version two or three, you pretty much have to also implement version one, so in that regard they accumulate complexity and bring quite a lot of extra challenges in their own respective specifications.

Let me elaborate on some aspects of the HTTP/1 protocol that make me say it is not simple.

newlines

HTTP is not only text-based, it is also line-based – the header parts of the protocol, that is. A line can be arbitrarily long as there is no limit in the specs – but implementations need to enforce a limit to prevent DoS etc. How long can a line be before a server rejects it? Each line ends with a carriage-return and linefeed. But in some circumstances only a linefeed is enough.

Also, headers are not UTF-8, they are octets and you must not assume that you can just arbitrarily pass through anything you like.

whitespace

Text-based protocols easily get this problem. Between fields there can be one or more whitespace characters. Some of these are mandatory, some are optional. In many cases HTTP also uses tokens that can either be a sequence of characters without any whitespace, or text within double quotes (“). In some cases they are always within quotes.

end of body

There is not one single way to determine the end of a HTTP/1 download – the “body” as we say in protocol lingo. In fact, there are not even two. There are at least three (Content-Length, chunked encoding and Connection: close). Two of them require that the HTTP client parses content size provided in text format. These many end-of-body options have resulted in countless security related problems involving HTTP/1 over the years.
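As a minimal sketch (an illustration of the principle, not curl’s actual code), the decision an HTTP/1 client has to make after parsing the response headers could look something like this in C:

/* simplified view of the relevant response header state */
struct response {
  int chunked;                        /* Transfer-Encoding: chunked was seen */
  int has_content_length;             /* a Content-Length header was seen */
  unsigned long long content_length;  /* its parsed value */
};

enum body_end { BODY_CHUNKED, BODY_CONTENT_LENGTH, BODY_UNTIL_CLOSE };

static enum body_end how_body_ends(const struct response *r)
{
  if(r->chunked)
    return BODY_CHUNKED;          /* read chunk-size lines until the zero-sized final chunk */
  if(r->has_content_length)
    return BODY_CONTENT_LENGTH;   /* read exactly content_length bytes */
  return BODY_UNTIL_CLOSE;        /* read until the server closes the connection */
}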

parsing numbers

Numbers provided as text are slow to parse and sometimes error-prone. Special care needs to be taken to avoid integer overflows, handle whitespace, +/- prefixes, leading zeroes and more. While easy to read for humans, less ideal for machines.
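As a small C sketch of the kind of care this takes, here is a hypothetical parser for a Content-Length style value – again an illustration of the principle, not curl’s actual parser:

#include <ctype.h>
#include <errno.h>
#include <stdlib.h>

/* Parse a decimal length from a header value. Returns 0 on success. */
static int parse_content_length(const char *s, unsigned long long *out)
{
  char *end;
  if(!isdigit((unsigned char)*s))
    return -1;                /* reject '+', '-', whitespace and empty input */
  errno = 0;
  *out = strtoull(s, &end, 10);
  if(errno == ERANGE)
    return -1;                /* the value overflowed */
  while(*end == ' ' || *end == '\t')
    end++;                    /* tolerate trailing blanks */
  if(*end)
    return -1;                /* trailing garbage after the digits */
  return 0;
}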

folding headers

As if the arbitrary-length headers with unclear line endings were not enough, they can also be “folded” – in two ways. First: a proxy can merge multiple headers into a single one, comma-separated – except for some headers (like cookies) that cannot be merged. Then, a server can send a header as a continuation of the previous header by adding leading whitespace. This is rarely used (and discouraged in recent spec versions), but it is a protocol detail that an implementation needs to care about because it does occur.
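A contrived example of the two forms, using a made-up X-Example header name. Two separate headers

X-Example: one
X-Example: two

may be combined by an intermediary into the single header

X-Example: one, two

while obsolete line folding lets a sender continue a header value on the next line by starting that line with whitespace:

X-Example: a value that
 continues here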

never-implemented

HTTP/1.1 ambitiously added features that at the time were not used or deployed onto the wide Internet. So while the spec describes how, for example, HTTP pipelining works, trying to use it in the wild is asking for a series of problems and is nothing but a road to sadness.

Later HTTP versions added features that better fulfilled the criteria that Pipelining failed to: mostly in the way of multiplexing.

The 100 response code is in similar territory: specified, but rarely actually used. It complicates life for new implementations. The fact that there is a discussion this week about particulars in the 100 response state handling, twenty-eight years after it was first published in a spec, I think tells us something.

so many headers

The HTTP/1 spec details a lot of headers and their functionality, but that is not enough for a normal current HTTP implementation. This is because things like cookies, authentication, new response codes and much more that an implementation may want to support today are features outside of the main spec, described in additional separate documents. Some details, like NTLM, are not even found in RFC documents.

Thus, a modern HTTP/1 client needs to implement and support a whole range of additional things and headers to work fine across the web. “HTTP/1.1” is mentioned in at least 40 separate RFC documents. Some of them are quite complex by themselves.

not all methods are alike

While the syntax should ideally work exactly the same independently of which method (sometimes referred to as verb) is used, that is not how reality works.

For example, if the method is GET we can indeed also send a body in the request, similar to how we typically do with POST and PUT, but because this was never properly spelled out in the past it is not interoperable today – doing it is just a recipe for failure in a high enough share of attempts across the web.

This is one of the reasons why there is now work on a new HTTP method called QUERY which is basically what GET + request body should have been. But that does not simplify the protocol.

not all headers are alike

Because of the organic way several headers were created, deployed and evolved, a proxy for example cannot just blindly combine two headers into one, as the generic rules say it could, because there are headers that specifically don’t follow those rules and need to be treated differently. Like for example cookies.

spineless browsers

Remember how browser implementations of protocols always tend to prefer showing the user something and guessing the intention rather than showing an error, because if they were stringent and strict they would risk users switching to another browser that is not.

This impacts how the rest of the world gets to deal with HTTP, as users then come to expect that what works with the browsers should surely also work with non-browsers and their HTTP implementations.

This makes interpreting and understanding the spec secondary compared to just following what the major browsers have decided to do in particular circumstances. They may even change their stances over time and they may at times contradict explicit guidance in the specs.

size of the specs

The first HTTP/1.1 spec, RFC 2068 from January 1997, was 52,165 words in its plain text version – almost tripling the size of the HTTP/1.0 document RFC 1945 at merely 18,615. A clear indication of how the perhaps simple HTTP/1.0 was no longer simple in 1.1.

In June 1999, the updated RFC 2616 added several hundred lines and clocked in at 57,897 words. Almost 6K more words.

A huge amount of work was then undertaken within the IETF, and in the following fifteen years the single-document HTTP/1.1 spec was converted into six separate documents.

RFC 7230 to RFC 7235 were published in June 2014 and hold a total of 90,358 words. The spec had grown another 56%. That is comparable to an average-sized novel in number of words.

The whole spec was subsequently rearranged and reorganized again to better cater for the new HTTP versions, and the latest update was published in June 2022. The HTTP/1.1 parts had by then been compacted into three documents, RFC 9110 to RFC 9112, with a total of 95,740 words.

For the sake of argument, let’s say we can read two hundred words per minute when plowing through this. That is probably a little slower than average reading speed, but I imagine we read standards documents a little slower than we read novels, for example. Let’s also say that 10% of the words are cruft we don’t need to read.

If we read only the three latest HTTP/1.1 related RFC documents non-stop, it would still take more than seven hours (95,740 words × 0.9 / 200 words per minute ≈ 430 minutes).

Must die?

In a recent conference talk with this click-bait title, it was suggested that HTTP/1 is so hard to implement correctly that we should all stop using it.

Necessarily so?

All this, and yet there are few other Internet protocols that can compete with HTTP/1 in terms of use, adoption and popularity. HTTP is a big shot on the internet. Maybe this level of complication has been necessary to reach this success?

Comparing with other popular protocols still in use, like DNS or SMTP, I think we can see similar patterns: they started out as something simple a long time ago. Decades later: not so simple anymore.

Perhaps this is just life happening?

Conclusion

HTTP is not a simple protocol.

The future is likely just going to be even more complicated as more things are added to HTTP over time – for all versions.

curl tells the %time

The curl command line option --write-out, or just -w for short, is a powerful and flexible way to extract information from transfers done with the tool. It was introduced back in version 6.5 in early 2000.

This option takes an argument in which you can add “variables” that hold all sorts of different information, from time information, to speed, sizes, header content and more.

Some users have outright started to use the -w output for logging the performed transfer, and for that purpose there was a little detail missing: the ability to output the time the transfer completed. After all, most log lines feature the time in one way or another.

Starting in curl 8.16.0, curl -w knows the time and now allows the user to specify exactly how to output that time in the output. Suddenly this output is a whole notch better for logging purposes.

%time{format}

Since log files also tend to use different time formats, I decided I didn’t want to use a fixed format and risk that a huge portion of users would think it is the wrong one, so I went straight for strftime formatting: the user controls the time format using standard %-flags – different ones for year, month, day, hour, minute, second etc.

Some details to note:

  1. The time is provided using UTC, not local.
  2. It also supports %f for microseconds, which is a POSIX extension already used by Python and possibly others.
  3. %z and %Z (for time zone offset and name) had to be fixed to be portable and identical across systems and platforms.

Example

Here’s a sample command line outputting the time the transfer completed:

curl -w "%time{%a %b %e %Y - %H:%M:%S.%f} %{response_code}\n" https://example.com -o saved

When I ran this command line it gave me this output:

Wed Aug 6 2025 - 12:43:45.160382 200

Credits

The clock image by Alexa from Pixabay