curl annual user survey 2022

For the eighth consecutive year, we are running the curl user survey, which we usually kick off around this time of the year.

Tell us how you use curl!

This is the best, and frankly the only, way the curl project has to get real feedback from people about which features are used and which are not, as well as other details in the project that can help us navigate our future and decide what to do next. And what not to do next.

curl runs no ads, has no trackers, users don’t report anything back and the project has no website logs. We are in many respects completely blind as to what users do with curl and what they think of it. Unless we ask. This is us asking.

How is curl working for you?

[Go to survey]

Please ask your curl-using friends to also stop by and tell us their views!

[The survey analysis]

Credits

Image by Andreas Breitling from Pixabay

A tale of a trailing dot

Trailing dots on host names in URLs is the gift that keeps on giving.

Let me take you through a winding story of how the dot is handled differently in different places through the stack of an Internet client. The evil trailing dot.

DNS

When a given host name is to be resolved to an IP address on a networked computer, there are dedicated functions to use. The host name example.com resolves to a number of IP addresses.

If you add a dot to the end of that host name, it does not change what is resolved. “example.com.” resolves to the same set of addresses as “example.com” does. (But putting two dots at the end will make it fail.)

The reason it works like this comes from how DNS is built up of different “labels” (which, when written in text, are separated with dots), and a trailing dot is just an empty final label, the same as no dot at all. So, in the DNS protocol there are no trailing dots, so to speak. Two dots at the end, however, create a zero-length label between them, and that is not allowed.
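
You can watch this behavior with the standard getaddrinfo() resolver call that clients use. A minimal sketch, with example.com standing in for any name; the exact error for the double-dot case may vary between resolvers:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

/* resolve a host name and print every address it maps to */
static void resolve(const char *host)
{
  struct addrinfo hints, *res, *ai;
  int rc;
  memset(&hints, 0, sizeof(hints));
  hints.ai_socktype = SOCK_STREAM;
  rc = getaddrinfo(host, NULL, &hints, &res);
  if(rc) {
    printf("%s: %s\n", host, gai_strerror(rc));
    return;
  }
  for(ai = res; ai; ai = ai->ai_next) {
    char buf[INET6_ADDRSTRLEN];
    void *addr = (ai->ai_family == AF_INET)
      ? (void *)&((struct sockaddr_in *)ai->ai_addr)->sin_addr
      : (void *)&((struct sockaddr_in6 *)ai->ai_addr)->sin6_addr;
    printf("%s -> %s\n", host, inet_ntop(ai->ai_family, addr, buf, sizeof(buf)));
  }
  freeaddrinfo(res);
}

int main(void)
{
  resolve("example.com");   /* some set of addresses */
  resolve("example.com.");  /* the exact same set of addresses */
  resolve("example.com.."); /* fails: the empty label is not allowed */
  return 0;
}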

People accustomed to fiddling with DNS are used to ending Fully Qualified Domain Names (FQDN) with a trailing dot.

Resolving names

In addition to the name actually being resolved (sent to a DNS resolver), native resolver functions usually attach a semantic difference to resolving “hello” versus “hello.” (without or with the trailing dot). The trailing dot means the name is specified in full and is to be used exactly as given, while a name without a trailing dot may be retried with a domain name appended to it, or even a list of domain names, until one resolves. This makes people want to use a trailing dot at times, to avoid those domain lookups.

HTTP names

HTTP clients that want to work with a given URL need to extract the name part from the URL and use that name to:

  1. resolve the host name to a list of IP addresses to connect to
  2. pass that name in the Host: or :authority: request headers, so that the HTTP server knows which specific server the client speaks to – as it may run multiple servers on the same IP address

The HTTP spec says the name in the Host header should be used verbatim from the URL; the trailing dot should be included if it was present in the URL. This allows a server to host different content for “example.com” and “example.com.”, even if many servers by default treat them as the same. Some hosts just redirect the dot version to the non-dot one. Some hosts return an error.

The HTTP client certainly connects to the same set of addresses for both.

For a lot of HTTP traffic, having the trailing dot there or not makes no difference. But they can be made to make a difference. And boy, they can certainly make a difference internally…
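
With a curl that keeps the dot (7.82.0 and later, as described further down), the difference is easy to see in the verbose output:

$ curl -sv http://example.com/ -o /dev/null 2>&1 | grep '^> Host'
> Host: example.com

$ curl -sv http://example.com./ -o /dev/null 2>&1 | grep '^> Host'
> Host: example.com.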

Cookies

Cookies are passed back and forth over HTTP using dedicated request and response headers. When a server wants to pass a cookie to the client, it can specify which particular domain it is valid for, and the client will send cookies back to a server only when the domain it speaks to matches the domain the cookies were set for.

The cookie spec RFC 6265 section 5.1.2 defines the host name in a way that makes it ignore trailing dots. Cookies set for a domain with a dot are valid for the same domain without one and vice versa.
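
A minimal sketch of what such canonicalization can look like in C – lowercase the name and drop a single trailing dot before any domain matching is done. This is illustrative only, not curl’s actual implementation:

#include <ctype.h>
#include <string.h>

/* canonicalize a host name for cookie domain matching: lowercase it
   and ignore one trailing dot, so "example.COM." matches cookies set
   for "example.com" */
static void cookie_canonicalize(char *host)
{
  size_t i;
  size_t len = strlen(host);
  if(len > 1 && host[len - 1] == '.')
    host[len - 1] = '\0';
  for(i = 0; host[i]; i++)
    host[i] = (char)tolower((unsigned char)host[i]);
}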

SNI

When speaking to an HTTPS server, a client passes on the name of the remote server during the TLS handshake, in the SNI (Server Name Indication) field, so that the server knows which instance the client wants to speak to. What about the trailing dot, you wonder? The spec (RFC 6066) says:

The hostname is represented as a byte string using ASCII encoding without a trailing dot.

Meaning that an HTTPS server cannot – in the TLS layer – make a distinction between a server for “example.com.” and “example.com”. Different hosts for HTTP, the same host for HTTPS.

curl’s dotted past

In the curl project, we – as everyone else – have struggled with the trailing dot over time.

As-is

We started out being mostly oblivious about the effects of the trailing dot and most of the code just treated it as part of the host name and it would be in the host name everywhere. Until one day someone pointed out that the SNI field does not approve of it. We fixed that.

Strip it

In 2014, curl started to always cut off a trailing dot internally if one was provided in the URL. The dot rarely makes a difference, stripping it made the host name work fine with SNI, and for HTTPS it is difficult in practice to treat the two forms differently.

Keep it

In 2022, someone found a web site that actually requires a trailing dot in the Host: header to respond correctly and reported it to the curl project.

Sigh. We back-pedaled on the eight-year-old decision and decided to keep the dot in the name internally, but strip it for the purpose of the SNI field. This seems to be how the browsers do it. We released curl 7.82.0 with this change. The site that needed the trailing dot kept in the Host: header could now be retrieved with curl. Yay.

As a bonus, curl now also lowercases the SNI name field, because that is what the browsers do, even if the spec says the field is supposed to be treated case-insensitively. That habit has made sure there are servers on the Internet that won’t work properly unless the SNI name is lowercase…
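
In OpenSSL terms, that preparation could look like this sketch – strip one trailing dot and lowercase the name before handing it to the TLS layer. Again illustrative only, not curl’s actual code:

#include <ctype.h>
#include <string.h>
#include <openssl/ssl.h>

/* prepare and set the SNI field: one trailing dot removed and the
   name lowercased, as described above; 'name' is a writable copy
   owned by the caller */
static int set_sni_name(SSL *ssl, char *name)
{
  size_t i;
  size_t len = strlen(name);
  if(len && name[len - 1] == '.')
    name[len - 1] = '\0';
  for(i = 0; name[i]; i++)
    name[i] = (char)tolower((unsigned char)name[i]);
  return SSL_set_tlsext_host_name(ssl, name) == 1;
}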

In your face

That back-pedal for 7.82.0, when we brought back the dot into the host name field, turned out to be incomplete, though that was not totally obvious nor immediately apparent.

When we brought back the trailing dot into the name field, we accidentally broke several internal name checks.

The checks broke in the cookie handling of domains even though cookies, as mentioned above, are supposed to not care about trailing dots.

To understand this, we have to back up a little bit and talk about how cookies and cookie domains work.

Public Suffixes

Cookies are strange beasts, and because the server can tell the client which domain a cookie applies to, a client needs to check that the server does not try to set cookies too broadly or for other domains. It does not stop there: there is also the “Public Suffix List” (PSL), a list of known domains for which setting a cookie is not accepted. (The list is also used for limiting other things in browsers, but those are out of scope here.) One widely known example domain is “co.uk”. A server should not be allowed to set a cookie for “co.uk”, as that cookie would then basically be sent back for every web site that exists in the UK.

The PSL is a maintained list with a huge number of domains in it. To manage those and to make sure tools like curl can check for them in a convenient way, a dedicated library was made for this several years ago: libpsl. curl has optionally used this since 2015.
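
Checking a name against the list with libpsl is straightforward. A small sketch, assuming a libpsl built with its built-in list (compile with -lpsl):

#include <stdio.h>
#include <libpsl.h>

int main(void)
{
  const psl_ctx_t *psl = psl_builtin();
  if(!psl)
    return 1; /* this libpsl was built without the built-in list */
  /* prints 1: "co.uk" is a public suffix, cookies must not be set for it */
  printf("co.uk: %d\n", psl_is_public_suffix(psl, "co.uk"));
  /* prints 0: "example.co.uk" is a regular registrable domain */
  printf("example.co.uk: %d\n", psl_is_public_suffix(psl, "example.co.uk"));
  return 0;
}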

I said optional

That public suffix list is huge, which is a primary reason why many users still opt to build curl without support for it. This means curl needs to provide fallback functionality for builds where libpsl is not present, typically in a lot of embedded systems.

Without knowledge of the PSL, curl will not reject cookies for “co.uk”, but it should still reject cookies for “.uk” or “.com”: even without PSL knowledge it knows that setting cookies for top-level domains is not okay.

How did the fallback check curl uses without PSL verify whether a given domain is just a TLD?

It checked – if there is a dot present in the name, then it is not a TLD.
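
In sketch form, the old fallback logic amounted to something like this, and the comment marks exactly where it goes wrong:

#include <stdbool.h>
#include <string.h>

/* the flawed fallback idea: no dot present means the name is a TLD.
   "com" is correctly flagged, but "com." contains a dot and slips
   through, even though it names the exact same TLD */
static bool is_tld(const char *domain)
{
  return strchr(domain, '.') == NULL;
}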

CVE-2022-27779

Axel Chong figured out that for a curl build without PSL knowledge, the server could set a cookie for a TLD if you just made sure to end the name with a dot.

With the 7.82.0 change in place, where curl keeps the trailing dot in the host name, combined with a cookie set for a TLD with a trailing dot, the two have matching tail ends. This means that curl would send such cookies to every server under that TLD. The broken TLD check was benign all those years, until we let the trailing dot in. This is security vulnerability CVE-2022-27779.

CVE-2022-30115

It did not stop there. Axel did not stop there. Since curl now keeps the trailing dot in the name and did not do it before, there was a second important string comparison that broke in unexpected ways that Axel figured out and reported. A second vulnerability introduced by the same change.

HSTS is the mechanism that lets curl store a “cache” of host names and keep it around, so that if you do a subsequent transfer to one of those host names again before the entry expires, curl goes directly to HTTPS even if HTTP is used in the URL, as a way to avoid the insecure clear-text redirect step some URLs otherwise rely on.

The new treatment of trailing dots, which effectively lets users provide the same host name in two different spellings that resolve to the exact same addresses, exposed that the HSTS code did not handle (ignore) the trailing dot properly. If you let curl store HSTS info for a host name without a trailing dot, you could then bypass the HSTS treatment by using the same host name with a trailing dot. Or vice versa. This is security vulnerability CVE-2022-30115.
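
The fix boils down to making the HSTS cache lookup treat the two spellings as the same host. A sketch of such a comparison, again not curl’s actual code:

#include <string.h>
#include <strings.h>

/* compare two host names while ignoring one trailing dot on either
   side, so "example.com" and "example.com." hit the same HSTS entry */
static int hsts_host_match(const char *a, const char *b)
{
  size_t alen = strlen(a);
  size_t blen = strlen(b);
  if(alen && a[alen - 1] == '.')
    alen--;
  if(blen && b[blen - 1] == '.')
    blen--;
  return (alen == blen) && !strncasecmp(a, b, alen);
}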

alt-svc

The code for alt-svc also needed adjustment for the dot, but fortunately that was “just” a bug and had no security impact.

All three separate areas in which trailing dots caused problems have been fixed in curl 7.83.1, and all of them are now tested and verified with an extended set of tests to make sure they keep handling the dots correctly.

Someone called it a dot release.

Is this the end of dot problems?

I don’t know, but it seems unlikely. The trailing dot has kept haunting us for a long time now, so I would say chances are good that there are both more flaws lingering and more future changes pending, which can then send the cycle around another loop or two.

I suppose we will find out. Stay tuned!

curl 7.83.1 it burns

Welcome to this patch release of curl, shipped only 14 days after the previous version. We decided to cut the release cycle short because of the several security vulnerabilities that were pointed out. See below for details. There are no new features added in this release.

It burns. Mostly in our egos.

Release video

Numbers

the 208th release
0 changes
14 days (total: 8,818)

41 bug-fixes (total: 7,857)
65 commits (total: 28,573)
0 new public libcurl function (total: 88)
0 new curl_easy_setopt() option (total: 295)

0 new curl command line option (total: 247)
20 contributors, 6 new (total: 2,632)
13 authors, 3 new (total: 1,030)
6 security fixes (total: 121)
Bug Bounties total: 22,660 USD

Security

Axel Chong reported three issues, Harry Sintonen two and Florian Kohnhäuser one. An avalanche of security reports. Let’s have a look.

curl removes wrong file on error

CVE-2022-27778 describes how the brand new command line options --remove-on-error and --no-clobber, when used together, could end up making curl remove the wrong file: the very file curl was told not to clobber.

cookie for trailing dot TLD

CVE-2022-27779 is the first of two issues this time that identified a problem with how curl handles trailing dots since the 7.82.0 version. This flaw lets a site set a cookie for a TLD with a trailing dot, which might then make curl send it back for all sites under that TLD.

percent-encoded path separator in URL host

In CVE-2022-27780 the reporter figured out how to abuse curl’s URL parser and its recently added support for decoding percent-encoded host names.

CERTINFO never-ending busy-loop

CVE-2022-27781 details how a malicious server can trick a curl built with NSS into getting stuck in a busy-loop when returning a carefully crafted certificate.

TLS and SSH connection too eager reuse

CVE-2022-27782 identifies a set of TLS and SSH config parameters that curl did not consider when reusing a connection, which could end up giving an application a reused connection for a transfer that really should not have reused it.

HSTS bypass via trailing dot

CVE-2022-30115 is very similar to the cookie TLD one, CVE-2022-27779. A user can make curl first store HSTS info for a host name without a trailing dot, and then in subsequent requests bypass the HSTS treatment by adding the trailing dot to the host name in the URL.

Bug-fixes

The security fixes above took a lot of my effort this cycle, but there are a few additional fixes I could mention.

urlapi: address (harmless) UndefinedBehavior sanitizer warning

In our regular attempts to remove warnings and errors, we fixed this warning that was on the border of a false positive. We want to be able to run with sanitizers warning-free so that every real warning we get can be treated accordingly.

gskit: fixed bogus setsockopt calls

A set of setsockopt() calls in the gskit.c backend was found to be defective and had not worked since their introduction several years ago.

define HAVE_SSL_CTX_SET_EC_CURVES for libressl

Users of the libressl backend can now set curves correctly as well. OpenSSL and BoringSSL users already could.

x509asn1: make do_pubkey handle EC public keys

The libcurl private asn1 parser (used for some TLS backends) did not have support for these before.

Meeting the Cyber Safety Review Board

Three Open Source hackers were invited to this meeting with the CSRB and I was one of them.

The board with this name is part of CISA, a US government effort that received a presidential order to work on “Improving the Nation’s Cybersecurity“. Where “the Nation” here is the US.

I’m not in the US and I’m not a US citizen but I felt I should help out when asked and I was able to.

On April 21 2022, I joined the video meeting together with an OpenSSL and a Tomcat contributor and several members of the board. (I am not naming any names of participants in this post because I have not asked for permission nor do I think the names are important here.)

For about an hour we talked to the board about how we develop Open Source, how we take on security problems and how we work on making sure we do things as securely as we can. It was striking how similarly the three of us looked at the issues and how we work in our projects, despite the projects all being different and having their own specifics.

As projects, we believe we have pretty well-established and working procedures for getting problems reported and we think we fix the issues fairly swiftly. We ship fixes, advisories and updates not long after the issues get known. The CVE system where we register and publish security vulnerabilities in a global registry is working adequately. (I’m not saying things are perfect.)

The main problem

It was pretty clear to me that we agreed that the biggest problem in the Open Source supply chain today is the slow uptake in patching vulnerable software.

Lots of vendors and products have made no preparations, and have no plans, for how to handle upgrades when vulnerabilities are found. Many of those that do act do so at such glacial speed that users of their products remain exposed to attackers for a long period after the flaws have been fixed and become known.

My own analysis is that such vendors of course do this because it’s the cheapest way. Plain capitalistic reasons.

Addressing this is hard

If we had any easy fixes for this, we would already have them in progress. We were also asked by the board what kind of systems we would not like to see.

Will Software Bill Of Materials (SBOM) fix this? Maybe it can help, by exposing to the world what software and versions are used in products, but it will certainly depend on how it is used and enforced. If done too heavy-handedly, it risks causing overhead and added complications, but at the other end it might end up too wishy-washy.

Ended there

This was just an hour of conversation with a few follow-up clarifying emails. I hope that we were able to provide insights into how Open Source is made but I have no illusions of us changing anything in drastic ways.

I felt honored to represent “my kind” and to help share knowledge of Open Source with parts of the world that might not always be informed about it.

now on HTTP/3

The first mention of QUIC on this blog was back when I posted about the HTTP workshop of July 2015. Today, this blog is readable over the protocol QUIC subsequently would turn into. (Strictly speaking, it turned into QUIC + HTTP/3 but let’s not be too literal now.)

The other day Fastly announced that all their customers now can enable HTTP/3, and since this blog and the curl site are graciously running on the Fastly network I went ahead and enabled the protocol.

Within minutes and with almost no mistakes, I could load content over HTTP/3 using curl or browsers. Wooosh.

The name HTTP/3 wasn’t adopted until late 2018, and the HTTP/3 RFC has still not been published. Some of the specifications for QUIC have been, however.

curling curl with h3

Considered “18+”

Vodafone UK has taken it upon themselves to make the world better by marking this website (daniel.haxx.se) “adult content”. I suppose in order to protect the children.

It was first reported to me on May 2, with this screenshot from a Vodafone customer:

And later followed up with some more details from another user in this screenshot:

Customers can opt out of this “protection” and then apparently Vodafone will no longer block my site.

How

I was graciously given more logs (my copy) showing DNS resolves and curl command line invokes.

It shows that this filter is for this specific host name only, not for the entire haxx.se domain.

It also shows that the DNS resolves are unaffected as they returned the expected Fastly IP addresses just fine. I suspect they have equipment that inspects outgoing traffic that catches this TLS connection based on the SNI field.

As the log shows, they then make their server do a TLS handshake in which they respond with a certificate that has daniel.haxx.se in the CN field.

The curl verbose output shows this:

* SSL connection using TLSv1.2 / ECDHE-ECDSA-CHACHA20-POLY1305
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=daniel.haxx.se
*  start date: Dec 16 13:07:49 2016 GMT
*  expire date: Dec 16 13:07:49 2026 GMT
*  issuer: C=ES; ST=Madrid; L=Madrid; O=Allot; OU=Allot; CN=allot.com/emailAddress=info@allot.com
*  SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
> HEAD / HTTP/1.1
> Host: daniel.haxx.se
> User-Agent: curl/7.79.1
> Accept: */*
> 

The allot.com clue is the technology they use for this filtering. To quote their website, you can “protect citizens” with it.

I am not unique; clearly this has also hit other website owners. I have no idea if there is any way to appeal against this classification, but if you are a Vodafone UK customer, I would be happy if you appealed and maybe linked me to a public issue about it.

Update

I was pointed to the page where you can request to unblock specific sites so I have done that now (at 12:00 May 2).

Update on May 3

My unblock request for daniel.haxx.se is apparently “on hold” according to the web site.

I got an email from an anonymous (self-proclaimed) insider who says he works at Allot, the company doing this filtering for Vodafone. In this email, he says

Most likely, Vodafone is using their parental control a threat protection module which works based on a DNS resolving.

and then

After the business logic decides to block the website, it tells the DNS server to reply with a custom IP to a server that always shows a block page, because how HTTPS works, there is no way to trick it, either with Self-signed certificate, or using a signed certificate for a different domain, hence the warning.

What is weird here is that this explanation does not quite match what I have seen in the logs provided to me. They showed this filtering clearly not being DNS based, since the DNS resolves returned the exact same IP addresses that a non-filtered resolver does.

Someone on Vodafone UK could of course easily test this by simply using a different DNS server, like 1.1.1.1 or 8.8.8.8.

Discussed on hacker news.

Update May 3, 2024: unblocked

Two years later the situation was the same and I wrote about it on Mastodon:

Just hours later I was emailed by a person who explained they are employed by Vodafone and they forwarded my post internally. The blocking should thereby be gone. The original block was wrongly applied and then my unblocking request from two years ago “never reached the responsible team”.

Another win for complaining in public.

Uncurled

– Everything I know and learned about running and maintaining Open Source projects for three decades.

For several years now, I have had a blog post series in mind to describe something about what people could expect to happen in Open Source projects. I already had a few half-started blog post drafts for some subtopics.

I couldn’t really make up my mind how to craft a series of blog posts about this wide topic in a sensible way so I kept postponing it for later. I did this for years.

A book, it has to be a book

It just dawned on me one day: the only way to present all this in a comprehensible way, one that also can hold all the thoughts I would like it to have, is to put it into a book. By book, I mean a document. An essay. A collection of pages. A booklet maybe. I don’t know how many words it might end up becoming and I have no illusions of it ever ending up in print.

I mean to write the document in the open and provide it for free, online. Open Source style.

Day one

I grabbed my original draft for my blog series “You can expect this in your Open Source project”. I had worked on that document in the background for a long time, adding some little thing here and there over the years, and it now had maybe twenty-five “lessons” listed with a short paragraph of text next to each.

I also had started three blog posts based on such lessons that were in pending state here on daniel.haxx.se in my queue of drafts.

I first copied the content from those potential blog posts back into the text file, then deleted the drafts and converted the entire file to markdown.

I then grouped the “lessons” I had listed in the markdown file and moved them into a few different sections: what to expect, code, money, people and project. I split those five main areas into separate files.

How hard can it be?

I didn’t want to do a lot of work before I put the thing into git, and I didn’t want to run any private git repository so I had to make a new repo with a name. I went with “How hard can it be” as a working title and created the repo on GitHub. On April 6 I made the first git push with initial contents to that repository.

The first external contributor appeared after just a few minutes, with a first pull-request fixing typos. Clearly people follow me on GitHub, spotted the creation of the repository and checked out what it was. I hadn’t told anyone or given any pointers.

I started expanding on subjects in the book.

Let’s get a real title

In the evening of April 7 I posted this question on Twitter:

"If I write a booklet collecting everything I know and learned about running and maintaining Open Source projects for three decades, what should I call it?"

I got a flood of replies. Lots of good ones and also lots of fun and sarcastic ones. The one that I think really spoke to me the best was also the shortest: Uncurled.

  • It’s short and sweet
  • It includes a reference to curl without saying it is “a curl book” (it isn’t)
  • The topic is a bit about “untangling” and curl is a project that probably has taught me the most of what I include here
  • It sounds a little like “debriefed” from the curl project, and it is…
  • I can put it up on the domain name un.curl.dev

I figured I could possibly go with a longer subtitle that could explain the book more: “Everything I know and learned about running and maintaining Open Source projects”.

A name

I renamed the GitHub repository and added a description there. I created the URL (by adding the “un” CNAME entry in the “curl.dev” domain) and I set up gitbook.com to render the content to appear on un.curl.dev.

With a little more thought, and after spilling some beans about my plans in my weekly report on April 8 (but not leaking the URL or repo to anyone yet), people provided some more ideas and I added more content.

10,000 words

By the evening of April 9, I surpassed 10,000 words of content, with the contents and the order of everything still pretty much in flux and not yet sorted out.

20,000 words

On April 25, I surpassed 20,000 words. It starts to look like something I can announce soon.

Getting there, but not done

The uncurled book is now in a state where I think I can show it off without feeling embarrassed. I believe I will still need to work on it more going forward, to add and polish content and make it more coherent and less of a collection of snippets. I hope that I can settle down over time and gradually slow down the pace of change. It will of course also depend a lot on the feedback I get.

Cover

Since it doesn’t exist physically and probably never will, I don’t think it actually needs a cover image, but it would probably be cool to still have one to use as an image and symbol for the book. If someone has a good idea or feels artistically inclined to make one, let me know!

curl 7.83.0 headers bonanza

Welcome to the third curl release of the year.

Release presentation

curl 7.83.0 release presentation

Numbers

the 207th release
6 changes
53 days (total: 8,804)

125 bug-fixes (total: 7,816)
185 commits (total: 28,507)
2 new public libcurl functions (total: 88)
0 new curl_easy_setopt() option (total: 295)

2 new curl command line options (total: 247)
60 contributors, 29 new (total: 2,626)
35 authors, 13 new (total: 1,027)
4 security fixes (total: 115)
0 USD paid in Bug Bounties (total: 16,900 USD)

Security fixes

The reason the Bug Bounty amount above is still at zero dollars for this cycle is that the rewards have not been set yet. There will be money handed out for all of them.

CVE-2022-22576– OAUTH2 bearer bypass in connection re-use

curl might reuse the wrong connection when OAUTH2 bearer tokens are used.

CVE-2022-27774 – Credential leak on redirect

When curl follows a redirect to another protocol or to another port number, it could keep sending the credentials over the new connection and thus leak sensitive information to the wrong party.

CVE-2022-27775 – Bad local IPv6 connection reuse

curl could reuse the wrong connection when asked to connect to an IPv6 address using a zone id, as the zone id was not correctly checked when picking a connection from the pool.

CVE-2022-27776 – Auth/cookie leak on redirect

curl’s system to avoid sending custom auth and cookies to other hosts after redirects did not take port number or protocol into account, and could leak sensitive information to the wrong party.

Changes

While the changes number six, I will group them under four subtitles.

Cherry-pick headers

(These features have all landed as experimental to start with, so you need to make sure to enable them in the build if you want to play with them.)

Two new functions have been introduced: curl_easy_header() and curl_easy_nextheader(). They allow applications to get the contents of specific HTTP headers or iterate over all of them after a transfer has been done. Applications could access headers before as well, but these functions bring a new level of ease and flexibility.
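
In application code, usage can look like this minimal sketch (in a build with the experimental headers API enabled, per the note above; error handling kept short):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    struct curl_header *h;
    struct curl_header *prev = NULL;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* headers are enough here */
    if(curl_easy_perform(curl) == CURLE_OK) {
      /* get one specific header from the most recent request */
      if(curl_easy_header(curl, "Content-Type", 0, CURLH_HEADER, -1, &h) ==
         CURLHE_OK)
        printf("Content-Type: %s\n", h->value);

      /* or iterate over all response headers */
      while((h = curl_easy_nextheader(curl, CURLH_HEADER, -1, prev))) {
        printf("%s: %s\n", h->name, h->value);
        prev = h;
      }
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}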

The command line tool was also extended to use these functions to allow easy header output via the --write-out option, both for individual headers and for all headers as a JSON object. Read further.
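
For example, using the new --write-out additions:

# print a single response header
curl -sI -o /dev/null -w '%header{content-type}\n' https://example.com/

# print all response headers as a JSON object
curl -sI -o /dev/null -w '%{header_json}\n' https://example.com/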

--no-clobber

A long-time TODO item has now been made reality. Using this option, you can ask curl to not overwrite a local file even if you have specified it as an output file name on a curl command line.
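
For example:

# if index.html already exists, curl saves to index.html.1 instead of
# overwriting it
curl --no-clobber -O https://example.com/index.html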

--remove-on-error

The second of the new command line options: it tells curl to remove the possibly partial file it may have downloaded when it detects and returns an error.
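
For example:

# if the transfer fails mid-way, delete the partial out.bin instead of
# leaving it behind
curl --remove-on-error -o out.bin https://example.com/big.bin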

msh3

This is the third supported HTTP/3 backend.

Bug-fixes

curl: error out if -T and -d are used for the same URL

One of them implies PUT and the other implies POST; they cannot both be used for the same target URL, and starting now curl errors out properly with a message saying so.
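
For example, a command line like this is now rejected with a clear error message instead of silently doing something surprising:

# asks for both PUT (-T) and POST (-d) against the same URL
curl -T localfile -d 'name=value' https://example.com/upload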

system.h: ifdefs for MCST-LCC compiler

Yet another compiler is now supported by default when you build curl.

curl: fix segmentation fault for empty output file names

curl also now generally behaves better in this situation, telling the user why it errors out.

http2: RST the stream if we stop it on our own will

When an application stops a transfer being done over HTTP/2, the stream was not properly shut down from curl’s side, which could end up wasting data that the server kept sending but that the client would not receive anymore!

http: close the stream (not connection) on time condition abort

For a special kind of transfer abort, due to a failed time condition, curl would always close the connection to stop the transfer instead of just closing the stream. This of course made no difference on HTTP/1, but for later HTTP versions the connection should be kept alive even in this condition.

http: streamclose “already downloaded”

Another case of curl deciding the connection shouldn’t continue, when it in fact should be kept alive for HTTP/2 and HTTP/3.

http: reject header contents with nul bytes

HTTP headers cannot legally contain these bytes per the protocol specification, and as hyper already rejects such responses it made sense to unify the implementations and refuse them in native code as well. It might also save us from future badness.

http: return error on colon-less HTTP headers

Similar to the change above, HTTP/1 headers must have colons, so curl will now consider it a broken transfer if a header arrives without one. This makes curl much pickier of course, but should not affect any “real” HTTP transfers.

mqtt: better handling of TCP disconnect mid-message

A nasty busy-loop could occur if the connection was cut off at the wrong time during an MQTT transfer.

ngtcp2: numerous improvements

HTTP/3 with ngtcp2 was greatly enhanced during this cycle in several ways. Check out the changelog for the specific details and do try it out!

tls: make mbedtls and NSS check for h2, not nghttp2

As a leftover from the past, we still checked whether HTTP/2 support is present using the wrong #ifdef in a few places in the code. nghttp2 is no longer the only HTTP/2 library we can use.

curl: escape ‘?’ in code generated with --libcurl

It turns out you could otherwise sneakily insert trigraphs into the generated code and fool it:

curl --libcurl client.c --user-agent "??/\");char c[]={'i','d',' ','>','x',0},m[]={'r',0};fclose(popen(c,m));//" http://example.invalid

curl up 2022 San Francisco

On June 6 2022, we will gather a bunch of curl aficionados in the Firehouse at the Fort Mason Centre in San Francisco, USA.

All details can be found here. We will add more info and details as we get closer to the event.

curl up is the annual curl developers and users “conference” where we meet up over a day and talk curl, curl related topics and share ideas about curl, its present and its future. It is also really the only time of the year where we actually get to meet fellow curl hackers in person. The only day of the year that is completely devoted to curl. The best kind of day!

For the last two years we have not run the conference, for covid reasons, but now we are back. This is the first time we arrange the event outside of Europe.

While I fully realize this geographic choice will prevent some of our European friends and contributors from attending, it will also allow North Americans to join the fun for the first time.

We help contributors attend

To better allow and encourage top curl contributors to attend this event, no matter where they live, we will help cover travel and lodging expenses for any and all top-100 curl committers who want to come.

Sign up

Head over to the curl up 2022 page to find the link and details.

Agenda

Over the coming month I hope we can create an agenda with curl talks from several people. I need your ideas and your talks. We have started to collect some ideas for the 2022 agenda.

Tell us what you want to hear and what you want to share with us!

Who will be there?

I will of course be there and I hope we can attract a decent set of additional contributors, but also curl users and fans of all kinds and types.

Yes I can enter the country

Lots of you remember my struggles in the past to get permission to enter the US, but that was resolved a while ago. No problems remain.

Credits

Image by David Mark from Pixabay

msh3 as the third h3 backend

The brand new support for the msh3 library was merged into curl’s git repository on April 10, which means curl now supports no less than three different HTTP/3 backends.

When you build curl, you have the option to build it with HTTP/3 support enabled. The HTTP/3 support in curl is still considered experimental and is therefore not enabled by default.

The HTTP/3 support in curl depends on the presence of and support from third party libraries. You need to select and enable a specific HTTP/3 backend when you build curl. curl has previously done HTTP/3 using either quiche or ngtcp2 + nghttp3. Starting now, there is yet another option to consider: the msh3 library.

The msh3 library itself uses msquic for doing QUIC. This is a multi-platform library that uses Schannel for TLS on Windows and OpenSSL/quictls on other platforms. The Schannel part probably makes this solution particularly interesting for curl users on Windows.
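
For the curious, enabling it at build time looks roughly like this; the exact flags and prerequisites are described in docs/HTTP3.md, and the msh3 install path here is just a placeholder:

./configure --with-msh3=/path/to/msh3 --with-openssl
make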
