Category Archives: cURL and libcurl

curl and/or libcurl related

curl distro report

On March 21 2024 we had a curl distro meeting where people from at least ten different distros and curl project members had a video meeting and talked curl and distro related topics for a while.

Here is my summary of what we talked about and concluded.

Attendees

We had about 25 persons attending. At least the following organizations had representation:

  • curl
  • Debian
  • Mageia
  • RHEL/Fedora
  • Windows 10/11
  • MacPorts
  • Homebrew
  • Yocto Project
  • AlmaLinux
  • Arch Linux
  • Rocky Linux

There were also a few interested people present without any particular association.

Agenda

Daniel went through a few slides and talked about vulnerabilities, curl features, testing, issues, long term support etc.

PSL

Be aware: you most probably want PSL (Public Suffix List) support enabled in your curl build if your users ever use cookies, as it lets curl reject cookies set for so-called public suffixes.

HTTP/3

We had a discussion around the problems for distros to enable HTTP/3 because of the TLS situation. One way to somewhat untangle the situation would be to support using a different TLS library for QUIC than for everything else, but that is also a lot of work and probably brings its own set of unique problems as well.

curl’s support for OpenSSL’s QUIC (together with nghttp3) and OpenSSL’s upcoming improvements in that area (coming in OpenSSL 3.3) are for many users perhaps the most viable route to HTTP/3.

Tests

Distros seem to mostly run the curl test suite to verify that curl works for them – on each platform that they ship on. It was also noted that some distros’ habit of also running tests on all dependencies helps them catch things.

curl has introduced parallel tests over the last few years and we encourage distros to try them out to possibly speed up the tests substantially.

Maintaining old curl versions

(the curl project does not maintain old versions/branches, it only releases new releases off the master branch)

Distributions have long-lived branches with curl versions they stick to for years. We spent a long time brain-storming around what can be done to improve the situation for everyone and to make things easier and more streamlined for distros to do this. A problem is that distros tend to have different priorities, schedules and selection criteria, which make them end up selecting different curl versions to stick to.

Therefore, at any given time, there is a large number of old curl versions that get security fixes and serious bugfixes applied by distros.

That is also a reason why introducing some sort of long term branch support in the curl project itself might not help much: that branch/version might not actually suit very many distros, and trying to get everyone to agree on a specific one would be challenging.

But still: a backported fix to curl version N might be easy enough to also make work for version N-1 rather than starting from the beginning with the patch that was done against the latest release. Coordination and awareness around what patches have been made could help everyone.

We discussed the possibility of hosting back-ported (security) patches in repositories managed by upstream to make it easier for distros to share such efforts. To be discussed further on the mailing list. Could be worth trying to see if we can make it work in a productive way.

Learning about issues

We also identified that an area for improvement is cross-distro communication when it comes to learning about issues against various curl versions. When a user submits an issue against curl version Y on distro X, sometimes distro Z has already fixed it – perhaps with a backport.

Regressions on latest release

A special kind of issue is a regression on the latest curl version. Sometimes such fixes are done upstream but the distros don’t necessarily notice if they do not trigger a dot release: when the change is small enough that upstream does not consider it worthy of a patch release, but the distro considers it patch worthy.

Communication

Several of the topics touched on how things could be improved by better communication between curl and distros, and between distros, about their curl work and related issues.

We are setting up this new mailing list: curl-distros with the sole purpose of facilitating information exchange curl <=> distros and distro <=> distro in curl related questions. Patches, bugfixes, challenges, anything.

Subscribe to the list here:

https://lists.haxx.se/listinfo/curl-distros

Distro pointers

The curl project has created a DISTROS document in the curl git repository that contains pointers to the curl home, curl patches and curl issues for all distros that we can find information about.

The PR for this: https://github.com/curl/curl/pull/13178

Doing it again

We have a mailing list created now for increased communication, but we discussed perhaps doing this kind of meeting again on an annual schedule.

Maybe do some kind of meetup in association with FOSDEM? We will sync that on the mailing list for sure.

Thanks!

Thanks a lot to everyone who participated! I felt that we got quite a lot of value out of this and I hope this was the beginning of more communication and improved collaboration going forward. For the benefit of curl users.

curl turns 26 today

years++;

It feels like it was not very long ago that we had the big curl 25 year celebrations. I still have plenty of fluid left in my 25 year old whiskey from last year and I believe I will treat myself to a drink from that tonight.

I have worked on curl full-time and in my spare time for another year. We have taken this old thing further forward: we refurbished lots of internals over the last year while also adding a bunch of improvements, to make sure curl stays fit and remains a rock solid Internet transfer foundation for many more years.

The other day I created an image in jest based on an old Java installer screenshot.

The image says 8 billion but it might just as well be 10 or 14, we just don’t know and can’t tell. I made the image say 8 simply because it was easy to modify the original 3 into an 8.

I often repeat the number twenty billion installations for curl, but many of those installations typically run in the same device. Most mobile phones, for example, include two to ten separate curl installations – because many apps bundle their own in addition to the one the OS itself uses.

And I don’t know that there are twenty billion either. Maybe there are just eighteen. Or forty.

Cheers to another year.

getting started with libcurl

I am doing another webinar on March 28 2024, introducing newcomers to doing Internet transfers using the libcurl API.

Starting at 10am Pacific time. 17:00 UTC. 18:00 CET.

Agenda

  • libcurl basics
  • synchronous transfers
  • getting transfer information
  • concurrent transfers
  • URL parser
  • Q&A

The plan is to spend about 30 minutes going through the topics in the agenda and then take as long as necessary to let the attendees ask any and all questions they may have about curl, the libcurl API and Internet transfers.

Sign up here

I have run this webinar before. The setup will be similar but not identical to previous runs. I believe attending the webinar is way better than watching a video recording of it in particular because you get the opportunity to interact and ask questions. Whatever detail you think is unclear or you would like to know more about, I can tell you all about it.

Using the libcurl API is not complicated, but it is a powerful machine and not everything is immediately obvious or straightforward.

See you on March 28.

(The event will be recorded and made available after the fact, and so will the slides.)

the Apple curl security incident 12604

tldr: Apple thinks it is fine. I do not.

On December 28 2023, bugreport 12604 was filed in the curl issue tracker. We get a lot of issues filed most days so this fact alone was hardly anything out of the ordinary. We read the reports, investigate, and ask follow-up questions to see what we can learn and what we need to address.

The title stated the problem in this case quite clearly: flag --cacert behavior isn’t consistent between macOS and Linux. It was filed by Yuedong Wu.

The friendly reporter showed how the curl version bundled with macOS behaves differently than curl binaries built entirely from open source. Even when running the same curl version on the same macOS machine.

The curl command line option --cacert provides a way for the user to tell curl: this is the exact set of CA certificates to trust when doing the following transfer. If the TLS server cannot provide a certificate that can be verified with that set of certificates, curl should fail and return an error.

This particular behavior and functionality in curl has been established for many years (the option was added to curl in December 2000) and is of course provided to let users know that they communicate with a known and trusted server. A pretty fundamental part of what TLS does, really.

When this command line option is used with the curl version shipped by Apple on macOS, curl seems to fall back to checking the system CA store if the provided set of CA certs fails the verification. A secondary check that was not asked for, is not documented and quite frankly comes as a complete surprise. Therefore, when a user runs the check with a trimmed and dedicated CA cert file, it will not fail if the system CA store contains a cert that can verify the server!

This is a security problem because now suddenly certificate checks pass that should not pass.

I reported this as a security problem in an email sent to Product Security at Apple on December 29 2023, 08:30 UTC. It’s not a major problem, but it is an issue.

Apple says it is fine

On March 8, 2024 Apple Product Security responded with their wisdom:

Hello,

Thank you again for reporting this to us and allowing us time to investigate.

Apple’s version of OpenSSL (LibreSSL) intentionally uses the built-in system trust store as a default source of trust. Because the server certificate can be validated successfully using the built-in system trust store, we don't consider this something that needs to be addressed in our platforms.

Best regards,
KC
Apple Product Security

Case closed.

I disagree

Obviously I think differently. This undocumented feature makes CA cert verification with curl on macOS totally unreliable and inconsistent with documentation. It tricks users.

Be aware.

Since this is not a security vulnerability in the curl version we ship, we have not issued a CVE or anything for this problem. The problem is strictly speaking not even in curl code. It comes with the version of LibreSSL that Apple ships and builds curl to use on their platforms.

Discussion

hacker news

curl’s built-in manual without nroff

On December 14 1998 we released curl 5.2.

The project was still early back then and lots of things had not settled yet. In that release, which came only two weeks after 5.1, we introduced the --manual option, or -M for short.

Long before I started working on curl I learnt to value and appreciate Unix manpages. I more or less learned C programming using them, and I certainly learned my first ways around Unix shells and command lines reading manpages. The first Unix I spent a lot of time on was AIX, in the early 1990s, several years before I first used Linux.

Since some systems don’t have the fine concept of manpages, I decided I would help those users by bundling the curl man page into the tool itself. You can ask curl to show the curl manpage, with the -M option. The entire thing, looking very similar – mostly just lacking font details such as bold, italics and underline.

How do you bundle a manpage?

I suppose there are many ways to go about making such a thing happen. In our case, we were already making and shipping a manpage in the nroff manpage format, so it became a question of generating a text version using that page as a source and then converting the text version into C source code.

Converting a manpage to text was done with nroff, an ancient Unix tool that existed on virtually every Unix flavor already back then. It seemed like a no-brainer to go with it, so that is what the curl build system would use.

Once the build scripts were tweaked it continued to just work. It became problematic only on platforms that lacked nroff – but to help smooth over that obstacle we also shipped the generated source file in distribution tarballs.

nroff really?

nroff is a quirky tool. It generates output differently based on environment details, and over the years it also subtly changed its output several times, which forced us to adjust the scripts as well.

Still, for as long as the curl manpage was primarily written in nroff format, it was challenging to generate the ASCII version any other way. We stuck with nroff.

Source format change

Earlier this year I blogged about how we finally changed the format of all the curl documentation files that create man pages in the curl project. We switched over to using markdown all over.

Even after that switch we still generated the built-in manual with nroff from the curl.1 manpage, that then was created entirely from a large set of source files written in markdown. The manpage was generated by our own custom tool.

The time was ripe

With firm control of the input file format and generating the output entirely with our own tool, it became a viable (and attractive) option to tweak the tool to offer an alternative output format. Allow it to render the output either as a manpage formatted file, or as an ASCII text file. Without involving or using nroff.

The time had come. We had suffered long enough. It was time to address this friction in the build system.

Yesterday, I merged the pull request that finally, after 25 years, 2 months and 21 days, removed the use of nroff from the curl build scripts.

The curl -M output after this change is not 100% identical, but it is close enough and looks very good and similar in style to before. I did not actually even try to make it a complete clone. In fact, when we generate the output directly from markdown instead of going via the manpage, we can actually make a better text-only version than we could before.

I opted to keep a justified right margin of the text, because that is what it has always used, and after some casual initial comparisons I think it looks better than without an aligned right column.

nroff does hyphenation of words, which helps somewhat to make justified text easier and nicer, and our own script does not – at least not until I have figured out a decent way to do it. For example, if the word “variable” is the last word on a line, it could be written as “vari-” at the end of the line and “able” could start the next line. I believe doing it badly is worse than not doing it at all.

Building this is easier

It is now (much) easier to build this from source, even on esoteric platforms like Windows.

I don’t think a single person will miss the old way of doing this.

curl HTTP/3 security audit

An external security audit focused especially on curl’s HTTP/3 components and associated source code was recently concluded by Trail of Bits. In particular it covered the HTTP/3 related curl code that uses and interfaces with the ngtcp2 and nghttp3 libraries, as that is so far the only HTTP/3 backend in curl that is not labeled as experimental. The audit was sponsored by the Sovereign Tech Fund via OSTIF.

The audit revealed no major discoveries or security problems but led to improved fuzzing, and a few additional areas were noted as suitable to improve going forward. Maybe in particular in the fuzzing department. (If you’re looking for somewhere to contribute to curl, there’s your answer!)

The audit revealed that we had accidentally drastically shrunk the fuzzing coverage a while back without even noticing – which we of course immediately rectified. Once fixed, we fortunately did not see an explosion of new issues (phew!), which confirmed that we had not messed up in any particular way while the fuzzing ability had been limited. But still: several man weeks of professional code inspection and no serious flaws were detected. I am thrilled over this fact.

Because of curl’s use of third party libraries for doing QUIC and HTTP/3, the report advises that there should be follow-up audits of the involved libraries. Fair proposal, but that is of course something that is beyond what we as a project can do.

Trail of Bits is professional and a pleasure to work with. Now having done it twice, I have nothing but good things to say about the team we have worked with.

From curl’s side, I would like to also highlight and thank Stefan Eissing and Dan Fandrich for participating in the process.

The full report is available on the curl website, here.

The third

This is (quite fittingly, since it is for HTTP/3) the third external security audit performed on curl source code, even if this one was more limited in scope than the previous ones done in 2016 and 2022. Quite becomingly, the number of detected important issues has decreased with every new audit. We love scrutiny and we take security seriously. I think this shows in the audit reports.

Related

OSTIF’s blog about the audit.

Image

The top image is a mashup of the official curl logo and the official IETF HTTP/3 logo. Done by me.

DISPUTED, not REJECTED

I keep insisting that the CVE system is broken and that the database of existing CVEs hosted by MITRE (and imported into lots of other databases) is full of questionable content and plenty of downright lies. A primary explanation for us being in this ugly situation is that it is simply next to impossible to get rid of invalid CVEs.

First this

I already wrote about the bogus curl CVE-2020-19909 last year and how our request to reject it was denied, because someone nameless at MITRE obviously knows the situation much better than any curl developer. That situation forces us, the curl project, to provide documentation explaining how this is a documented CVE that is not a vulnerability. Completely contrary to the very idea of CVEs.

A sane system would have a concept where rubbish is scrubbed off.

Now this

The curl project registered for and became a CNA in mid January 2024 to ideally help us filter out bad CVE input better. The future will tell if this effort works or not. (It was also recently highlighted that the Linux kernel is now also a CNA for similar reasons and I expect to see many more Open Source projects go the same route.)

However, in late December 2023, just weeks before we became CNA, someone (anonymous again) requested a CVE Id from MITRE for a curl issue. Sure enough they were immediately given CVE-2023-52071, according to how the system works.

This CVE was made public on January 30 2024, and the curl project was of course immediately made aware of it. A quick glance at the specifics was all we needed: this is another bogus claim. This is not a security problem, and again, establishing that fact does not require an experienced curl developer – it is quite easily discoverable.

Given the history of previous bogus CVEs, I was soon emailed by CVE db companies asking me for confirmations about this CVE and I was of course honest and told them that no, this is not a security problem. Do not warn your users about this.

We are a CNA now, meaning that we should be able to control curl issues better, even if this CVE was requested before we were officially given the keys to the kingdom. We immediately requested this CVE to be rejected. On the grounds that it was wrongly assigned in the first place.

“Will provide some confusion”

In the first response from MITRE to our rejection request, they insisted that:

We discussed this internally and believe it does deserve a CVE ID. If we transfer, and Curl REJECTS, then the reporter will likely come back to us and dispute which will provide some confusion for the public.

They actually think putting DISPUTED on the issue is less confusing to the public than rejecting it, because rejecting risks an appeal from the original reporter?

They say in this response that they think it actually deserves a CVE Id. If there was any way to have a conversation with these guys, I would like to ask them what grounds they base this on. Then lecture them on how the world works.

This communication has only been done indirectly with MITRE via our root CNA (Red Hat).

DISPUTED vs REJECTED

So it did not fly.

According to the MITRE guidelines: When one party disagrees with another party’s assertion that a particular issue is a vulnerability, a CVE Record assigned to that issue may be designated with a “DISPUTED” tag.

If someone says the earth is flat, we need to say that fact is disputed? No it is not. It is plain wrong. Incorrect. Bad. Stupid. Silly. Remove-the-statement worthy.

This meant I needed to take the fight to the next level. This policy is not good enough and it needs to be adjusted. This is not a disagreement on the facts: I insist that this is not a vulnerability to begin with. It was wrongly assigned a CVE in the first place. It feels ridiculous that the burden of proof falls on me to show that this is not a security problem instead of the other way around. If someone had just had the spine to ask the original submitter to explain, prove, hint or suggest how this is a vulnerability, no CVE would ever have been created for this in the first place. Because that person could not have done that.

The plain truth is that there is no system for doing this. There is no requirement on the individual to actually back up or explain what they claim. The system is designed for good-faith reporters against bad-faith product organizations – so that bad companies cannot shut down whistleblowers, basically. Instead it allows irresponsible or bad-faith reporters to populate the CVE database with rubbish.

Once the CVE is in, the product organization, like curl here, is not allowed to REJECT it. We have to go the lame route and say that the facts in the CVE are DISPUTED. We are apparently in disagreement whether the totally incorrect claim is totally incorrect or not. Bizarre.

Did I mention this is a broken system?

Elevated

Being a CNA at least means we have a foot in the door. An issue has been filed against the policy and guidelines and it has been elevated at MITRE via our root CNA (Red Hat). I cannot say if this eventually will make a difference or not, but I have decided to “take one for the team” and spend this time and effort on this case in the belief that if we manage to nudge the process ever so slightly in the right direction, it could be worth it.

For the sake of everyone. For the sake of my sanity.

Documented

In the curl documentation for CVE-2023-52071, which we unwillingly have to provide even though the issue is bogus, I have included this whole story, including quoting the motivations from my email to MITRE as to why this CVE should be rejected in spite of the current procedure not allowing us to do so.

Future

Hopefully, supposedly, ideally, crossing my fingers, future CVEs against curl or libcurl will immediately be passed via us since we are now a CNA. This is how it is supposed to work. We will of course immediately and with no mercy reject and refuse all attempts in filing silly CVEs for issues that aren’t vulnerabilities.

The “elevated issue” above might (hopefully) lead to non-CNA organizations getting an increased ability to filter off junk from the system – and then perhaps lessen the need for the entire world to become CNAs. I am not overly optimistic that we will reach that position anytime soon, as clearly the system has worked like this for a long time and I expect resistance to change.

I can almost guarantee that I will write more blog posts about CVEs in the future. Hopefully when I have great news about updated CVE rejection policies.

Update

(Feb 23, 21:33 UTC) The CVE records have now been updated by MITRE and according to NVD for example, this CVE is now REJECTED. Wow.

I was not told about this; someone in a discussion thread mentioned it.

Contingency planning for me and curl

This is a frequently asked question: how will I handle the situation if/when I step away from the curl project? What happens if I get run over by a bus, or just go on a permanent holiday, tomorrow? What’s the contingency plan?

You would perhaps think that it could affect a few more things that I work on than just curl, but I rarely get questions about any other things or projects. But okay, I have since long accepted that curl is the single thing people are most likely to associate with me.

I’m not leaving

Let me start by saying that I have no plans to leave the curl project any time soon. curl is such a huge part of my life I would not know what to do if I did not spend a large chunk of it thinking about, talking about, blogging about and working on curl development. I am not ruling out that I might step back as a leader of the project in a distant future, but it sure does not feel like it will happen within the nearest decade.

I am far from done yet. curl is not done yet. The Internet has not stopped evolving yet.

Also: the most likely way I will leave the project in a distant future is slowly and in a controlled manner where I can make sure that everyone gets everything they need before I would completely disappear into the shadows.

This is not a solo show

I also want to stress that curl is not a solo mission. We have surpassed 1200 commit authors in total and we average 25 commit authors every month, with about 10 new committers arriving every month. My share of all commits has been continuously shrinking for many years.

Documented

A healthy and thriving open source project should stand on its own legs and not rely on the presence or responses of single contributors. Everything should be documented and explained: how things work in the code, but also how processes work and how decisions are made. Someone who arrives at the project, alone in the middle of the night without network access, should be able to figure out everything without having to ask anyone.

I work hard at documenting everything curl as much and as well as possible. My ambition is to have curl stand out as one of the best documented projects/products – no matter what you compare it against.

Distributed responsibilities

If a single maintainer vanishes tomorrow, the project should survive it fine. Redundancy is key and we must make sure that we have a whole team of people with the necessary rights and knowledge to “carry the torch forward”. We invite new maintainers to the team every once in a while so that we are at least a dozen or so that can do things like merging code into the repository or updating the website. Many of those rarely exercise that right, but they have it and they can.

A single maintainer’s sudden absence can certainly be a blow to the project, but it should not be lethal.

My “BDFL role” in curl is not enforced by locking others out. There is a whole team that can do just about everything in the project that I do. When and if they want to.

Accounts

I have logins and credentials to some services that the whole team does not. I use them to upload curl releases, manage the website and similar. My accounts. If I am gone tomorrow, getting into my accounts will offer challenges to those who want to shoulder those responsibilities. I have a few trusted dedicated individuals appointed to hopefully manage that in the unlikely event that ever becomes necessary.

BDFL

(Benevolent Dictator For Life)

I may be a sort of dictator in the project, but I prefer to see myself as a “lead developer” as I hardly ever veto anything and I always encourage discussions and feedback rather than decreeing my opinions or ways of working onto others. I strive to be benevolent. I do not claim to always know the correct or proper way to do things.

When I leave, there is no dedicated prince or appointed heir that will take over after me, royal family style. Sure, someone else in the ranks of existing maintainers might step up and become a new project leader, but it could also very well become a group sharing the load, or something else. It is not up to me to decide or control that. It is not decided ahead of time and it will not be.

Similarly, I don’t try to carve my vision of curl into stone tablets to pass on to the next generation. When I am gone, the people who remain will need to steer the ship and have their own visions and ideas. The kids get to make their own choices.

Legacy

I don’t care about how or if people remember me or not. I try my best to do good now and I hope my efforts and work make a net positive to the world. If so, that is good enough for me.

FOSDEM 2024: you too could have made curl

This is the video recording of my talk with this title, given on February 4, 2024 at 10:00 in the K1.105 room at FOSDEM 2024. The room can hold some 800 people but there were a few hundred seats still unoccupied. Several people I met up with later have insisted that 10 am on a Sunday is way too early for attending talks…

When I was about to start my talk, the slides would not show on the projector. Yeah, sigh. Nothing surprising maybe, but you always hope you can avoid these problems – in particular in the last moment with a huge audience waiting.

There was a separate video monitor laptop that clearly showed that my laptop output the correct thing – in a proper resolution (1280 x 720 as per auto-negotiation) – but the projector refused to play ball. The live stream could also see my output, so the problem was somewhere between the video box and the projector.

Several people eventually got involved, things were rebooted multiple times, cables were yanked and replugged again. Only after I installed arandr and force-updated the resolution of my HDMI output to 1920×1080 did the projector suddenly show my presentation. (Later on I was told that people had had the same problem in this room the day before…)

That was about nine minutes of technical difficulties that is cut out from the recording. Nine minutes to test my nerves and presentation finesse as I had to adapt.