Category Archives: Open Source

Open Source, Free Software, and similar

Project curl governance

Over time, we’ve slowly been adjusting the curl project and its documentation so that we might at some point actually qualify for the CII open source Best Practices badge at silver level.

We qualified at the base level a while ago, as one of the first projects to do so.

Recently, one of the issues we fixed was documenting the governance of the curl project: how exactly the curl project is run, what the key roles are and how decisions are made. That document is now in our git repo.

curl

The curl project is what I would call a fairly typical smallish open source project with a quite active and present project leader (me). We have a small set of maintainers who are each allowed to, and do, independently merge commits to git (via pull-requests).

Any decision or any code change that was done or is about to be done can be brought up for questioning or discussion on the mailing list. Nothing is ever really seriously written in stone (except our backwards compatible API). If we made the wrong decision in the past, we should reconsider now.

Oh right, we also don’t have any legal entity. There’s no company or organization behind this or holding any particular rights. We’re not part of any umbrella organization. We’re all just individuals distributed over the globe.

Contributors

No active contributor or maintainer (that I know of) gets paid to work on curl regularly. No company has any particular say or weight to decide where the project goes next.

Contributors fix bugs and add features as part of their daily jobs or in their spare time. We get code submissions from well over a hundred unique authors every year.

Dictator

As the founder of the project and the author of more than half of all commits, I am what others would call a Benevolent Dictator. I can veto things and I can merge things in spite of objections, although I avoid that as far as possible.

I feel that I generally have people’s trust and that the community expects me to be able to take decisions and drive this project in an appropriate direction, in a fashion that has worked out fine for the past twenty years.

I post all my patches (except occasional minuscule changes) as pull-requests on github before merge, to allow comments, discussions, reviews and to make sure they don’t break any tests.

I announce and ask for feedback on changes or larger things that I want to do, on the mailing list for wider attention, to bring up discussions and fish for additional ideas or have people point out obvious mistakes. Many times, my calls for opinions or objections are met with silence and I will then take that as “no objections” and move forward in a way I deem sensible.

Every now and then I blog about specific curl features or changes we work on, to highlight them and help out the user community “out there” to discover and learn what curl can do, or might be able to do soon.

I’m doing this primarily in my spare time. My employer also lets me spend some work hours on curl.

Long-term

One of the prime factors that has made curl and libcurl successful and end up as one of the world’s most widely used software components, I’m convinced, is that we don’t break stuff.

By this I mean that once we’ve introduced functionality, we struggle hard to maintain that functionality from that point on and into the future. When we accept code and features into the project, we do this knowing that the code will likely remain in our code for decades to come. Once we’ve accepted the code, it becomes our responsibility and now we’ll care for it dearly for a long time forward.

Since we’re so few developers and maintainers in the project, I can also add that I’m very much aware that in many cases adopting code and merging patches means that I will have to fix the remaining bugs and generally care for the code in the coming years.

Changing governance?

I’m dictator of the curl project for practical reasons, not because I consider it an ideal way to run projects. If there were more people involved who cared enough about what and how we’re doing things we could also change how we run the project.

But until I sense such an interest, I don’t think the current model is bad – and our conquering the world over the recent years could also be seen as proof that the project at least sometimes goes in a direction that users approve of. And we are after all best practices certified.

I realize I come off sounding like a real-world dictator when I say things like this, but I genuinely believe that our governance is based on necessity and on what works, not on a conviction that it has to be done this way.

I’ve run the project since its inception in 1998. One day I’ll get bored or get run over by a bus. Then, at the very least, the project will need another way to be run…

Silver level?

We’re only two requirements away from Best Practices Silver level compliance and we’ve been discussing a bit lately (or perhaps: I’ve asked the question) whether the last criteria are actually worth the trouble for us or not.

  1. We need to enforce “Signed-off-by” lines in commits to maintain the Developer Certificate of Origin. This is easy in itself and I’ve only held off on it this long because we’ve had zero interest in or requirements for this from contributors and users. Added administration for little gain.
  2. We’re asked to provide an assurance case: “a description of the threat model, clear identification of trust boundaries, an argument that secure design principles have been applied, and an argument that common implementation security weaknesses have been countered.” – This is work we haven’t done and a document we don’t have. And again: nobody has actually ever asked for this outside of this certificate form.

Do you think we should put in the extra effort and check off the final two requirements as well? Do you think they actually make the project better?

A hundred million cars run curl

One of my hobbies is to collect information about where curl is used. The following car brands feature devices, infotainment and/or navigation systems that use curl – in one or more of their models.

These are all brands about which I’ve found information online (for example curl license information), received photos of or otherwise been handed information by what I consider reliable sources (like involved engineers).

Do you have curl in a device installed in another car brand?

List of car brands using curl

Baojun, BMW, Buick, Cadillac, Chevrolet, Ford, GMC, Holden, Hyundai, Mazda, Mercedes, Nissan, Opel, Renault, Seat, Skoda, Subaru, Suzuki, Tesla, Toyota, VW and Vauxhall.

Altogether, this is a pretty amazing number of installations. This list contains eight (8) of the top-10 car brands in the world in 2017! And all of the top-3 brands. By my rough estimate, something like 40 million cars sold in 2017 had curl in them. Presumably almost as many in 2016 and a little more in 2018 (based on car sales stats).

Not too shabby for a little spare time project.

How to find curl in your car

Sometimes the curl open source license is included in a manual (it includes my name and email, offering more keywords to search for). That’s how I’ve found out about many of these uses purely online.

Sometimes the curl license is included in the “open source license” screen within the actual infotainment system. Those tend to list hundreds of different components and without any search available, you often have to scroll for many minutes until you reach curl or libcurl. I occasionally receive photos of such devices.

Related: why is your email in my car and I have toyota corola.

Update: I added Tesla and Hyundai to the list after the initial post. The latter of those brands is a top-10 brand which bumped the counter of curl users to 8 out of the top-10 brands!

How to DoH-only with Firefox

Firefox has supported DNS-over-HTTPS (aka DoH) since version 62.

You can instruct your Firefox to only use DoH and never fall back to the native resolver; this is the mode we call trr-only. Without any other ability to resolve host names, this is a little tricky, so this guide is here to help you. (This situation might improve in the future.)

In trr-only mode, nobody on your local network or at your ISP can snoop on your name resolves. The SNI part of HTTPS connections is still clear text though, so eavesdroppers on the path can still figure out which hosts you connect to.

There’s a name in my URI

A primary problem for trr-only is that we usually want to use a host name in the URI for the DoH server (we typically need it to be a name so that we can verify the server’s certificate against it), but we can’t resolve that host name until DoH is set up to work. A catch-22.

There are currently two ways around this problem:

  1. Tell Firefox the IP address of the name that you use in the URI. We call it the “bootstrapAddress”. See further below.
  2. Use a DoH server that is provided on an IP-number URI. This is rather unusual. There’s for example one at 1.1.1.1.

Setup and use trr-only

There are three prefs to focus on (they’re all explained elsewhere):

network.trr.mode – set this to the number 3.

network.trr.uri – set this to the URI of the DoH server you want to use. This should be a server you trust and want to hand over your name resolves to. The Cloudflare one we’ve previously used in DoH tests with Firefox is https://mozilla.cloudflare-dns.com/dns-query.

network.trr.bootstrapAddress– when you use a host name in the URI for the network.trr.uri pref you must set this pref to an IP address that host name resolves to for you. It is important that you pick an IP address that the name you use actually would resolve to.

Example

Let’s pretend you want to go full trr-only and use a DoH server at https://example.com/dns (it’s a pretend URI; it doesn’t work).

Figure out the bootstrapAddress with dig. Resolve the host name from the URI:

$ dig +short example.com
93.184.216.34

or, if you prefer to be classy, use the IPv6 address (only do this if IPv6 is actually working for you):

$ dig -t AAAA +short example.com
2606:2800:220:1:248:1893:25c8:1946

dig might give you a whole list of addresses back, and then you can pick any one of them in the list. Only pick one address though.

Go to “about:config” and paste the copied IP address into the value field for network.trr.bootstrapAddress. Now TRR / DoH should be able to get going. When you can see web pages, you know it works!

DoH-only means only DoH

If you happen to start Firefox behind a captive portal while in trr-only mode, the connections to the DoH server will fail and no name resolves can be performed.

In those situations, normally Firefox’s captive portal detector would trigger and show you the login page etc, but when no names can be resolved and the captive portal can’t respond with a fake response to the name lookup and redirect you to the login, it won’t get anywhere. It gets stuck. And currently, there’s no good visual indication anywhere that this is what happens.

You simply can’t get out of a captive portal with trr-only. You then probably have to temporarily switch mode, log in to the portal and switch the mode back to 3 again.

If you “unlock” the captive portal with another browser/system, Firefox’s regular retries while in trr-only will soon detect that and things should start working again.

much faster curl uploads on Windows with a single tiny commit

These days, operating system kernels provide TCP/IP stacks that can do really fast network transfers. It’s not even unusual for ordinary people to have gigabit connections at home and of course we want our applications to be able to take advantage of them.

I don’t think many readers here will be surprised when I say that fulfilling this desire turns out to be much easier said than done in the Windows world.

Autotuning?

Since Windows 7 / 2008R2, Windows implements send buffer autotuning. Simply put, the faster the transfer and the longer the RTT of the connection, the larger the buffer it uses (up to a max), so that more un-acked data can be outstanding and the system can saturate even really fast links.

Turns out this useful feature isn’t enabled when applications use non-blocking sockets. The send buffer isn’t increased at all then.

Internally, curl uses non-blocking sockets and most of the code is platform agnostic, so it wouldn’t be practical to switch that off for a particular system. The code is pretty much independent of the target that will run it, and with this latest find we have also started to understand why it doesn’t always perform as well on Windows as on other operating systems: the upload buffer (SO_SNDBUF) is a fixed size and simply too small to perform well in a lot of cases.

Applications can still enlarge the buffer, if they’re aware of this bottleneck, and get better performance without having to change libcurl, but I doubt many of them do. And really, libcurl should perform as well as it possibly can by itself, without any tuning required from application authors.
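
For applications that are aware of the issue, here is a minimal sketch of what such a workaround could look like, using libcurl’s CURLOPT_SOCKOPTFUNCTION callback to grow SO_SNDBUF on every socket libcurl creates (the URL and the 1 MB size are just example values, not recommendations):

#include <curl/curl.h>
#ifdef _WIN32
#include <winsock2.h>
#else
#include <sys/socket.h>
#endif

/* called by libcurl for each socket it creates, before it is used */
static int enlarge_sndbuf(void *clientp, curl_socket_t fd, curlsocktype purpose)
{
  int size = 1024 * 1024; /* example value: a 1 MB send buffer */
  (void)clientp;
  if(purpose == CURLSOCKTYPE_IPCXN)
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (const char *)&size, sizeof(size));
  return CURL_SOCKOPT_OK;
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); /* read upload data from stdin */
    curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, enlarge_sndbuf);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}

To be clear, this is not what the patch described below does; it is just the kind of application-side tuning that shouldn’t be necessary.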

Users testing this out

Daniel Jelinski brought a fix for this that repeatedly polls Windows during uploads to ask for a suitable send buffer size and then resizes it on the go if it deems a new size is better. In order to figure out whether this patch is indeed a good idea or if there’s a downside for some, we went wide and called out for users to help us.

The results were amazing. With speedups of up to almost 7 times, exactly those newer Windows versions that supposedly have autotuning can obviously benefit substantially from this patch. The median test still saw uploads more than twice as fast with the patch. Pretty amazing really. And beyond weird that this crazy thing should be required to get ordinary sockets to perform properly on an updated operating system in 2018.

Windows XP isn’t affected at all by this fix, and we’ve seen tests running as VirtualBox guests in NAT-mode also not gain anything, but we believe that’s VirtualBox’s “fault” rather than Windows or the patch.

Landing

The commit is merged into curl’s master git branch and will be part of the pending curl 7.61.1 release, which is due to ship on September 5, 2018. I think it can serve as an interesting case study to see how long it takes until Windows 10 users get their versions updated to include this.

Table of test runs

The columns show the Windows version, the test time for the run with the unmodified curl, the time for the patched one, how much time the patched run needed as a percentage of the first, a column with comments and finally the speedup multiple for that test.

Thank you everyone who helped us out by running these tests!

Version | Time vanilla | Time patched | New time | Comment | Speedup
6.0.6002 | 15.234 | 2.234 | 14.66% | Vista SP2 | 6.82
6.1.7601 | 8.175 | 2.106 | 25.76% | Windows 7 SP1 Enterprise | 3.88
6.1.7601 | 10.109 | 2.621 | 25.93% | Windows 7 Professional SP1 | 3.86
6.1.7601 | 8.125 | 2.203 | 27.11% | 2008 R2 SP1 | 3.69
6.1.7601 | 8.562 | 2.375 | 27.74% | | 3.61
6.1.7601 | 9.657 | 2.684 | 27.79% | | 3.60
6.1.7601 | 11.263 | 3.432 | 30.47% | Windows 2008R2 | 3.28
6.1.7601 | 5.288 | 1.654 | 31.28% | | 3.20
10.0.16299.309 | 4.281 | 1.484 | 34.66% | Windows 10, 1709 | 2.88
10.0.17134.165 | 4.469 | 1.64 | 36.70% | | 2.73
10.0.16299.547 | 4.844 | 1.797 | 37.10% | | 2.70
10.0.14393 | 4.281 | 1.594 | 37.23% | Windows 10, 1607 | 2.69
10.0.17134.165 | 4.547 | 1.703 | 37.45% | | 2.67
10.0.17134.165 | 4.875 | 1.891 | 38.79% | | 2.58
10.0.15063 | 4.578 | 1.907 | 41.66% | | 2.40
6.3.9600 | 4.718 | 2.031 | 43.05% | Windows 8 (original) | 2.32
10.0.17134.191 | 3.735 | 1.625 | 43.51% | | 2.30
10.0.17713.1002 | 6.062 | 2.656 | 43.81% | | 2.28
6.3.9600 | 2.921 | 1.297 | 44.40% | Windows 2012R2 | 2.25
10.0.17134.112 | 5.125 | 2.282 | 44.53% | | 2.25
10.0.17134.191 | 5.593 | 2.719 | 48.61% | | 2.06
10.0.17134.165 | 5.734 | 2.797 | 48.78% | run 1 | 2.05
10.0.14393 | 3.422 | 1.844 | 53.89% | | 1.86
10.0.17134.165 | 4.156 | 2.469 | 59.41% | had to use the HTTPS endpoint | 1.68
6.1.7601 | 7.082 | 4.945 | 69.82% | over proxy | 1.43
10.0.17134.165 | 5.765 | 4.25 | 73.72% | run 2 | 1.36
5.1.2600 | 10.671 | 10.157 | 95.18% | Windows XP Professional SP3 | 1.05
10.0.16299.547 | 1.469 | 1.422 | 96.80% | in a VM running on Linux | 1.03
5.1.2600 | 11.297 | 11.046 | 97.78% | XP | 1.02
6.3.9600 | 5.312 | 5.219 | 98.25% | | 1.02
5.2.3790 | 5.031 | 5 | 99.38% | Windows 2003 | 1.01
5.1.2600 | 7.703 | 7.656 | 99.39% | XP SP3 | 1.01
10.0.17134.191 | 1.219 | 1.531 | 125.59% | FTP | 0.80
TOTAL | 205.303 | 102.271 | 49.81% | | 2.01
MEDIAN | | | 43.51% | | 2.30

curl 7.61.0

Yet again we say hello to a new curl release that has been uploaded to the servers and sent off into the world. Version 7.61.0 (full changelog). It has been exactly eight weeks since 7.60.0 shipped.

Numbers

the 175th release
7 changes
56 days (total: 7,419)

88 bug fixes (total: 4,538)
158 commits (total: 23,288)
3 new curl_easy_setopt() options (total: 258)

4 new curl command line options (total: 218)
55 contributors, 25 new (total: 1,766)
42 authors, 18 new (total: 596)
1 security fix (total: 81)

Security fixes

SMTP send heap buffer overflow (CVE-2018-0500)

A stupid heap buffer overflow that can be triggered when the application asks curl to use a smaller download buffer than default and then sends a larger file – over SMTP. Details.

New features

The trailing dot zero in the version number reveals that we added some news this time around – again.

More microsecond timers

Over several recent releases we’ve introduced ways to extract timer information from libcurl that use integers to return time information with microsecond resolution, as a complement to the ones we already offer using doubles. This gives better precision and avoids forcing applications to use floating point math.
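
As an illustration, here is a minimal sketch using one of these getinfo options, CURLINFO_TOTAL_TIME_T, which returns a curl_off_t number of microseconds (the URL is just an example):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_off_t total_us = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    if(curl_easy_perform(curl) == CURLE_OK &&
       curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME_T, &total_us) == CURLE_OK)
      /* integer microseconds, no floating point math involved */
      printf("total time: %" CURL_FORMAT_CURL_OFF_T " us\n", total_us);
    curl_easy_cleanup(curl);
  }
  return 0;
}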

Bold headers

The curl tool now outputs header names using a bold typeface!

Bearer tokens

The auth support now allows applications to set a specific bearer token to pass on.
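
A minimal sketch of how an application could use this, assuming the new CURLAUTH_BEARER bit together with the existing CURLOPT_XOAUTH2_BEARER option (the URL and token are made up):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/data");
    /* pick bearer auth and hand over the token itself */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_BEARER);
    curl_easy_setopt(curl, CURLOPT_XOAUTH2_BEARER, "my-secret-token");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}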

TLS 1.3 cipher suites

As TLS 1.3 has a different set of cipher suites, using different names, than previous TLS versions, an application that doesn’t know whether the server supports TLS 1.2 or TLS 1.3 can’t set the ciphers with the single pre-existing option, since that would use the 1.2 names and not work for 1.3. The new option for libcurl is called CURLOPT_TLS13_CIPHERS.
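
Here is a hedged sketch of setting both option families, so the transfer works whichever TLS version gets negotiated (cipher names shown in OpenSSL style; this requires a TLS backend that supports the new option):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* TLS 1.2 (and earlier) suites via the pre-existing option */
    curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST,
                     "ECDHE-RSA-AES128-GCM-SHA256");
    /* TLS 1.3 suites via the new option, using the 1.3 naming scheme */
    curl_easy_setopt(curl, CURLOPT_TLS13_CIPHERS,
                     "TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}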

Disallow user name in URL

There’s now a new option that can tell curl to not acknowledge and support user names in the URL. User names in URLs can bring some security issues, since they’re often sent or stored in plain text, and if .netrc support is enabled, a script accepting externally set URLs could risk exposing the privately set password.
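
A minimal sketch of what that could look like for an application that accepts URLs from the outside (argv[1] here stands in for any externally provided URL):

#include <curl/curl.h>

int main(int argc, char *argv[])
{
  CURL *curl = curl_easy_init();
  if(curl && argc > 1) {
    /* refuse URLs in the style of https://user@example.com/ outright */
    curl_easy_setopt(curl, CURLOPT_DISALLOW_USERNAME_IN_URL, 1L);
    curl_easy_setopt(curl, CURLOPT_URL, argv[1]);
    /* with a user name present in the URL, the transfer now fails
       instead of the name (and possibly a .netrc password) being used */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}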

Awesome bug-fixes this time

Some of my favorites include…

Resolve local host names faster

When curl is built to use the threaded resolver, which is the default choice, it will now resolve locally available host names faster. Locally as in present in /etc/hosts or in the OS cache etc.

Use latest PSL and refresh it periodically

curl can now be built to use an external PSL (Public Suffix List) file so that it can get updated independently of the curl executable and thus better keep in sync with the list and the reality of the Internet.

Rumors say there are Linux distros that might start providing and updating the PSL file in a separate package, much like they already provide CA certificates.

fnmatch: use the system one if available

The somewhat rare FTP wildcard matching feature has always had its own internal fnmatch implementation, but now we’ve finally ditched that in favour of the system fnmatch() function on platforms that provide one. It shrinks the footprint and removes an attack surface – we’ve had a fair share of tiresome fuzzing issues in the custom fnmatch code.
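
For reference, this is roughly the kind of shell-style matching the feature boils down to, expressed with the POSIX fnmatch() we now prefer (the pattern and file name are made-up examples):

#include <stdio.h>
#include <fnmatch.h>

int main(void)
{
  /* fnmatch() returns 0 when the string matches the shell-style pattern */
  if(fnmatch("*.txt", "report-2018.txt", 0) == 0)
    printf("match\n");
  else
    printf("no match\n");
  return 0;
}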

axTLS: not considered fit for use

In an effort to slowly increase our requirements on the third party code that we might tell users to build curl with, we’ve made curl fail to build if asked to use the axTLS backend. This is because we have serious doubts about the quality and commitment of that code and project. This is just step one. If no one yells and fights for axTLS’ future in curl going forward, we will remove all traces of axTLS support from curl exactly six months after step one was merged. There are plenty of other and better TLS backends to use!

Detailed in our new DEPRECATE document.

TLS 1.3 used by default

When negotiating the TLS version in the TLS handshake, curl will now allow TLS 1.3 by default. Previously you needed to explicitly allow that. TLS 1.3 support is not yet present everywhere, so whether it is used will depend on the TLS library and version that your curl is built with.

Coming up?

We have several changes and new features lined up for next release. Stay tuned!

First, however, we will most probably schedule a patch release, as we have two rather nasty HTTP/2 bugs filed that we want fixed. Once we have them fixed in a way we like, I think we’d like to see those fixes go out in a patch release before the next pending feature release.

curl survey 2018 analysis

This year, 670 individuals spent some of their valuable time on our survey and filled in answers that help us guide what to do next. What’s good, what’s bad, what to remove and where to emphasize efforts more.

It’s taken me a good while to write up this analysis but hopefully the results here can be used all through the year as a reminder what people actually think and how they use curl and libcurl.

A new question this year was which continent the respondent lives in, which ended up with an unexpectedly strong Euro focus.

What didn’t trigger any surprises, though, was the question of which protocols users are using, which basically mirrored previous years’ surveys identically. HTTP and HTTPS are the king duo by far.

Read the full 34 page analysis PDF.

Some other interesting take-aways:

  • One person claims to use curl to handle 19 protocols! (out of 23)
  • One person claims to use curl on 11 different platforms!
  • Over 5% of the users argue for a rewrite in rust.
  • Windows is now the second most common platform to use curl on.

quic wg interim Kista

The IETF QUIC working group had its fifth interim meeting the other day, this time in Kista, Sweden hosted by Ericsson. For me as a Stockholm resident, this was ridiculously convenient. Not entirely coincidentally, this was also the first quic interim I attended in person.

We were thirty-something people gathered in a room without windows, with another dozen or so participants joining remotely. This being a meeting in a series, most people already knew each other from before, so the atmosphere was relaxed and friendly. Lots of the participants have also been involved in other protocol development and standards work before. Many familiar faces.

Schedule

As QUIC is supposed to be done “soon”, the emphasis is now largely on closing issues, postponing some stuff to “QUICv2” and making sure to get decisions on outstanding question marks.

Kazuho did a quick run-through with some info from the interop days prior to the meeting.

After MT’s initial explanation of where we’re at for the upcoming draft-13, Ian took us on a deep dive into the Stream 0 Design Team report. This is a pretty radical change of the wire format of the quic protocol and of how TLS is handled.

The existing draft-12 approach is suggested to instead become a new layering (both layouts were illustrated with diagrams in the slides).

What’s perhaps the most interesting takeaway here is that the new format doesn’t use TLS records anymore – but it simplifies a lot of other things. Not using TLS records but still doing TLS means that a QUIC implementation needs to get data from the TLS layer using APIs that existing TLS libraries don’t typically provide. PicoTLS, Minq, BoringSSL and NSS already have or will soon provide the necessary APIs. Slightly behind, OpenSSL should offer it in a nightly build soon, but the impression is that it is still a bit away from an actual OpenSSL release.

EKR continued the theme. He talked about the quic handshake flow and among other things explained how 0-RTT and early data work. Taken out of that context, I consider one of his slides fairly funny because it makes it look far from simple to me. But it shows the communication in the different layers, how the acks go, etc.

HTTP

Mike then presented the state of HTTP over quic. The frames are no longer that similar to the HTTP/2 versions. Work is being done to ensure that the HTTP layer doesn’t need to refer to or “grab” stream IDs from the transport layer.

There was a rather lengthy discussion about how to handle “placeholder streams”, like the ones Firefox uses over HTTP/2 to create “anchors” on which to make dependencies but which are never actually used over the wire. The nature of the quic transport makes those impractical and we talked about what alternatives there are that could still offer similar functionality.

The subject of priorities and dependencies and if the relative complexity of the h2 model should be replaced by something simpler came up (again) but was ultimately pushed aside.

QPACK

Alan presented the state of QPACK, the HTTP header compression algorithm for hq (HTTP over QUIC). It is not wire compatible with HPACK anymore and there have been some recent improvements and clarifications done.

Alan also did a great step-by-step walk-through of how QPACK works, adding headers to the dynamic table and working with its indices etc. I thought it was very clarifying.

The discussion about the static table for the compression basically ended with us agreeing that we should just agree on a fairly small fixed table without a way to negotiate the table. Mark said he’d try to get some updated header data from some server deployments to get another data set than just the one from WPT (which is from a single browser).

Interop-testing of QPACK implementations can be done by encoding + shuffling + decoding a HAR file and comparing the results with the source data. Just do it – and talk to Alan!

And the first day was over. A fully packed day.

ECN

Magnus started off with some heavy stuff, talking about Explicit Congestion Notification in QUIC, how it is intended to work and some remaining issues.

He also got into the subject of ACK frequency and how the current model isn’t ideal in every situation, causing it to behave as illustrated in an image from Magnus’ slide set.

Interestingly, it turned out that several of the implementers already basically had implemented Magnus’ proposal of changing the max delay to min(RTT/4, 25 ms) independently of each other!

mvfst deployment

Subodh took us on a journey with some great insights from Facebook’s internal deployment of mvfst, their QUIC implementation. Getting some real-life feedback is useful and with over 100 billion requests/day, it seems they did give this a good run.

Since their usage and stack for this is a bit use case specific I’m not sure how relevant or universal their performance numbers are. They showed roughly the same CPU and memory use, with a 70% RPS rate compared to h2 over TLS 1.2.

He also entertained us with some “fun issues” from bugs and debugging sessions they’ve done and learned from. Awesome.

The story highlights the need for more tooling around QUIC to help developers and deployers.

Load balancers

Martin talked about load balancers and servers, and how they could or should communicate to work correctly with routing and connection IDs.

The room didn’t seem overly thrilled about this work and mostly offered other ways to achieve the same results.

Implicit Open

During the last session of the day, and of the entire meeting, mt went through a few things that still needed discussion or closure: stateless reset and the rather big bike shed issue, implicit open. The latter is the question of whether opening a stream with ID N + 1 implicitly also opens the stream with ID N. I believe we ended with a slight preference for the implicit approach and this will be taken to the list for a consensus call.

Frame type extensibility

How should the QUIC protocol allow extensibility? The oldest still-open issue in the project can be solved or satisfied in numerous different ways and the discussion waved back and forth for a while, debating the merits and downsides of various approaches, until the group more or less agreed on a fairly simple and straightforward approach where extensions announce support for a feature which then may or may not involve one or more new frame types (to be kept in a registry).

We proceeded to discuss other issues right up until “closing time”, which was set to 16:00 today. This was just two days of pushing forward, but it still felt quite intense and my personal impression is that a lot of good progress was made here that took the protocol a good step forward.

The facilities were lovely and Ericsson was a great host for us. The Thursday afternoon cakes were great! Thank you!

Coming up

There’s an IETF meeting in Montreal in July and there’s a planned next QUIC interim probably in New York in September.

Inside Firefox’s DOH engine

DNS over HTTPS (DOH) is a feature where a client shortcuts the standard native resolver and instead asks a dedicated DOH server to resolve names.

Compared to regular unprotected DNS lookups done over UDP or TCP, DOH increases privacy, security and sometimes even performance. It also makes it easy to use a name server of your choice for a particular application instead of the one configured globally (often by someone else) for your entire system.

DNS over HTTPS is quite simply the same regular DNS packets (RFC 1035 style) normally sent in clear-text over UDP or TCP but instead sent with HTTPS requests. Your typical DNS server provider (like your ISP) might not support this yet.

To get the finer details of this concept, check out Lin Clark’s awesome cartoon explanation of DNS and DOH.

This new Firefox feature is planned to get ready and ship in Firefox release 62 (early September 2018). You can test it already now in Firefox Nightly by setting preferences manually as described below.

This article will explain some of the tweaks, inner details and the finer workings of the Firefox TRR implementation (TRR == Trusted Recursive Resolver) that speaks DOH.

Preferences

All preferences (go to “about:config”) for this functionality are located under the “network.trr” prefix.

network.trr.mode – set which resolver mode you want.

0 – Off (default). Use standard native resolving only (don’t use TRR at all)
1 – Race native against TRR. Do them both in parallel and go with the one that returns a result first.
2 – TRR first. Use TRR first, and only if the name resolve fails use the native resolver as a fallback.
3 – TRR only. Only use TRR. Never use the native (after the initial setup).
4 – Shadow mode. Runs the TRR resolves in parallel with the native for timing and measurements but uses only the native resolver results.
5 – Explicitly off. Also off, but selected off by choice and not default.

network.trr.uri – (default: none) set the URI for your DOH server. That’s the URL Firefox will issue its HTTP request to. It must be a HTTPS URL (non-HTTPS URIs will simply be ignored). If “useGET” is enabled, Firefox will append “?ct&dns=….” to the URI when it makes its HTTP requests. For the default POST requests, they will be issued to exactly the specified URI.

“mode” and “uri” are the only two prefs you need to set to activate TRR. The rest of them, listed below, are for tweaking behavior.

We list some publicly known DOH servers here. If you prefer to, it is easy to setup and run your own.

network.trr.credentials – (default: none) set credentials that will be used in the HTTP requests to the DOH end-point. It is the right side content, the value, sent in the Authorization: request header. Handy if you for example want to run your own public server and yet limit who can use it.

network.trr.wait-for-portal – (default: true) this boolean tells Firefox to first wait for the captive portal detection to signal “okay” before TRR is used.

network.trr.allow-rfc1918 – (default: false) set this to true to allow RFC 1918 private addresses in TRR responses. When set false, any such response will be considered a wrong response that won’t be used.

network.trr.useGET – (default: false) When the browser issues a request to the DOH server to resolve host names, it can do that using POST or GET. By default Firefox will use POST, but by toggling this you can enforce GET to be used instead. The DOH spec says a server MUST support both methods.

network.trr.confirmationNS – (default: example.com) At startup, Firefox will first check an NS entry to verify that TRR works, before it gets enabled for real and used for name resolves. This preference sets which domain to check. The verification only checks for a positive answer, it doesn’t actually care what the response data says.

network.trr.bootstrapAddress – (default: none) by setting this field to the IP address of the host name used in “network.trr.uri”, you can bypass using the system native resolver for it. This avoids that initial (native) name resolve for the host name mentioned in the network.trr.uri pref.

network.trr.blacklist-duration – (default: 60) is the number of seconds a name will be kept in the TRR blacklist until it expires and can be tried again. The default duration is one minute. (Update: this has been cut down from previous longer defaults.)

network.trr.request-timeout – (default: 3000) is the number of milliseconds a request to and corresponding response from the DOH server is allowed to spend until considered failed and discarded.

network.trr.early-AAAA – (default: false) For each normal name resolve, Firefox issues one HTTP request for A entries and another for AAAA entries. The responses come back separately and can come in any order. If the A records arrive first, Firefox will – as an optimization – continue and use those addresses without waiting for the second response. If the AAAA records arrive first, Firefox will only continue and use them immediately if this option is set to true.

network.trr.max-fails – (default: 5) If this many DoH requests in a row fails, consider TRR broken and go back to verify-NS state. This is meant to detect situations when the DoH server dies.

network.trr.disable-ECS – (default: true) If set, TRR asks the resolver to disable ECS (EDNS Client Subnet – the method where the resolver passes on the subnet of the client asking the question). Some resolvers will use ECS to the upstream if this request is not passed on to them.

Split-horizon and blacklist

With regular DNS, it is common to have clients in different places get different results back. This can be done since the servers know from where the request comes (which also enables quite a degree of spying) and they can then respond accordingly. When switching to another resolver with TRR, you may experience that you don’t always get the same set of addresses back. At times, this causes problems.

As a precaution, Firefox features a system that detects if a name can’t be resolved at all with TRR and can then fall back and try again with just the native resolver (in the so-called TRR-first mode). Ending up in this scenario is of course slower and leaks the name over clear-text UDP, but this safety mechanism exists to avoid users ending up in a black hole where certain sites can’t be accessed. Names that cause such TRR failures are then put in an internal dynamic blacklist so that subsequent uses of that name automatically avoid DNS-over-HTTPS for a while (see the blacklist-duration pref to control that period). Of course this fall-back is not in use if TRR-only mode is selected.

In addition, if a host’s address is retrieved via TRR and Firefox subsequently fails to connect to that host, it will redo the resolve without DOH and retry the connect again just to make sure that it wasn’t a split-horizon situation that caused the problem.

When a host name is added to the TRR blacklist, its domain also gets checked in the background to see if that whole domain perhaps should be blacklisted to ensure a smoother ride going forward.

Additionally, “localhost” and all names in the “.local” TLD are sort of hard-coded as blacklisted and will never be resolved with TRR. (Unless you run TRR-only…)

TTL as a bonus!

With the implementation of DNS-over-HTTPS, Firefox now gets the TTL (Time To Live, how long a record is valid) value for each DNS address record and can store and use that for expiry time in its internal DNS cache. Having accurate lifetimes improves the cache as it then knows exactly how long the name is meant to work and means less guessing and heuristics.

When using the native name resolver functions, this time-to-live data is normally not provided, and Firefox does in fact not use the TTL on platforms other than Windows; on Windows it has to perform some rather awkward quirks to get the TTL from DNS for each record.

Server push

It is still left to see how useful this will become in real life, but DOH servers can push new or updated DNS records to Firefox. HTTP/2 server push means responses to requests the client didn’t send but that the server thinks the client might appreciate anyway, delivered as if the client had asked for those resources.

These pushed DNS records will be treated as regular name resolve responses and feed the Firefox in-memory DNS cache, making subsequent resolves of those names happen instantly.

Bootstrap

You specify the DOH service as a full URI with a name that needs to be resolved, and in a cold start Firefox won’t know the IP address of that name and thus needs to resolve it first (or use the provided address you can set with network.trr.bootstrapAddress). Firefox will then use the native resolver for that, until TRR has proven itself to work by resolving the network.trr.confirmationNS test domain. Firefox will also by default wait for the captive portal check to signal “OK” before it uses TRR, unless you tell it otherwise.

As a result of this bootstrap procedure, and if you’re not in TRR-only mode, you might still get a few native name resolves done at initial Firefox startups. Just telling you this so you don’t panic if you see a few show up.

CNAME

The code is aware of CNAME records and will “chase” them down and use the final A/AAAA entry with its TTL as if there were no CNAMEs present and store that in the in-memory DNS cache. This initial approach, at least, does not cache the intermediate CNAMEs nor does it care about the CNAME TTL values.

Firefox currently allows no more than 64(!) levels of CNAME redirections.

about:networking

Enter that address in the Firefox URL bar to reach the debug screen with a bunch of networking information. If you then click the DNS entry in the left menu, you’ll get to see the contents of Firefox’s in-memory DNS cache. The TRR column says true or false for each name if that was resolved using TRR or not. If it wasn’t, the native resolver was used instead for that name.

Private Browsing

When in private browsing mode, DOH behaves similarly to regular name resolving: it keeps DNS cache entries separate from the regular ones, and the TRR blacklist is then only kept in memory and not persisted to disk. The DNS cache is flushed when the last PB session is exited.

Tools

I wrote up dns2doh, a little tool for creating DOH requests and responses, which can be used to build your own toy server and to generate requests to send with curl or similar.

It allows you to manually issue a type A (regular IPv4 address) DOH request like this:

$ dns2doh --A --onlyq --raw daniel.haxx.se | \
curl --data-binary @- \
https://dns.cloudflare.com/.well-known/dns \
-H "Content-Type: application/dns-udpwireformat"

I also wrote doh, which is a small stand-alone tool (based on libcurl) that issues requests for the A and AAAA records of a given host name from the given DOH URI.

Why HTTPS

Some people giggle and think of this as a massive layer violation. Maybe it is, but doing DNS over HTTPS makes a lot of sense compared to for example using plain TLS:

  1. We get transparent and proxy support “for free”
  2. We get multiplexing and the use of persistent connections from the get go (this can be supported by DNS-over-TLS too, depending on the implementation)
  3. Server push is a potential real performance booster
  4. Browsers often already have a lot of existing HTTPS connections to the same CDNs that might offer DOH.

Further explained in Patrick McManus’ The Benefits of HTTPS for DNS.

It still leaks the SNI!

Yes, the Server Name Indication field in the TLS handshake is still clear-text, but we hope to address that as well in the future with efforts like encrypted SNI.

Bugs?

File bug reports in Bugzilla! (in “Core->Networking:DNS” please)

If you want to enable HTTP logging and see what TRR is doing, set the MOZ_LOG environment variable component and level to “nsHostResolver:5”. The TRR implementation source code in Firefox lives in netwerk/dns.

Caveats

Credits

While I have written most of the Firefox TRR implementation, I’ve been greatly assisted by Patrick McManus, Valentin Gosu, Nick Hurley and others in the Firefox Necko team.

DOH in curl?

Since I am also the lead developer of curl, people have asked. The work on DOH for curl has not really started yet, but I’ve collected some thoughts on how DNS-over-HTTPS could be implemented in curl, and the doh tool I mentioned above has the basic function blocks already written.

Other efforts to enhance DNS security

There have been other DNS-over-HTTPS protocols and efforts. Recently there was one offered by at least Google that was a JSON style API. That’s different.

There’s also DNS-over-TLS which shares some of the DOH characteristics, but lacks for example the nice ability to work through proxies, do multiplexing and share existing connections with standard web traffic.

DNScrypt is an older effort that encrypts regular DNS packets and sends them over UDP or TCP.

curl, http2 and quic on the Changelog

Three years ago I talked on a changelog episode about curl just having turned 17 years old and what it has meant for me etc.

Fast forward three years and 146 changelog episodes later: curl has now turned 20 years old and I was again invited to join the lovely hosts of the changelog podcast, Adam and Jerod.

Changelog episode 299

We talked curl of course, but we also spent time talking about where HTTP/2 is, how QUIC is coming along and a little about why and how its UDP nature makes things a little different. If you’re into either curl or web transport, I hope you’ll find it interesting.

The curl 7 series reaches 60

curl 7.60.0 is released. Remember 7.59.0? This latest release cycle was a week longer than normal since the last was one week shorter and we had this particular release date adapted to my traveling last week. It gave us 63 days to cram things in, instead of the regular 56 days.

7.60.0 is a crazy version number in many ways. We’ve been working on the version 7 series since virtually forever (the year 2000) and there’s no version 8 in sight any time soon. This is the 174th curl release ever.

I believe we shouldn’t allow the minor number to go above 99 (because I think it would cause serious confusion among users), so we should come up with a scheme to switch to version 8 before 7.99.0 gets old. If we keep doing a new minor version every eight weeks, which seems like the fastest route, math tells us that’s a mere 6 years away.

Numbers

In the 63 days since the previous release, we have done and had..

3 changes
111 bug fixes (total: 4,450)
166 commits (total: 23,119)
2 new curl_easy_setopt() options (total: 255)

1 new curl command line option (total: 214)
64 contributors, 36 new (total: 1,741)
42 authors (total: 577)
2 security fixes (total: 80)

What good does 7.60.0 bring?

Our tireless and fierce army of security researchers keeps hammering away at every angle of our code and this has again unveiled vulnerabilities in previously released curl code:

  1. FTP shutdown response buffer overflow: CVE-2018-1000300

When you tell libcurl to use a larger buffer size, that larger buffer size is not used for the shutdown of an FTP connection, so if the server then sends back a huge response during that sequence, it would overflow a heap-based buffer.

  2. RTSP bad headers buffer over-read: CVE-2018-1000301

The header parser function would sometimes not restore a pointer back to the beginning of the buffer, which could lead to a subsequent function reading outside the buffer and causing a crash or a potential information leak.

There are also two new features introduced in this version:

HAProxy protocol support

HAProxy has pioneered this simple protocol for clients to pass on meta-data to the server about where they come from; it is designed to allow systems to chain proxies / reverse-proxies without losing information about the originating client. Now you can make your libcurl-using application switch this on with CURLOPT_HAPROXYPROTOCOL and from the command line with curl’s new --haproxy-protocol option.
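
In libcurl terms, switching it on is a single boolean option; here is a minimal sketch (the URL is made up and the receiving server of course needs to expect the PROXY header):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://backend.example.com/");
    /* send a HAProxy PROXY protocol header before the regular request */
    curl_easy_setopt(curl, CURLOPT_HAPROXYPROTOCOL, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}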

Shuffling DNS addresses

Over six years ago, I blogged about how round robin DNS doesn’t really work these days. Once upon a time, the gethostbyname() family of functions actually returned addresses in a somewhat random order, which made clients use them in an almost random fashion and therefore spread out over the different addresses. Since getaddrinfo() took over as the name resolving function, it also introduced address sorting and prioritizing, in a way that effectively breaks the round robin approach.

Now, you can get this feature back with libcurl. Set CURLOPT_DNS_SHUFFLE_ADDRESSES to have the list of addresses shuffled after they have been resolved, before they’re used. If you’re connecting to a service that offers several IP addresses and you want to connect to one of those addresses in a semi-random fashion, this option is for you.

There’s no command line option to switch this on. Yet.
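
For libcurl users, here is a minimal sketch of switching the shuffling on (the host name is made up and would typically be one that resolves to several addresses):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://multi-homed.example.com/");
    /* shuffle the resolved address list before connecting */
    curl_easy_setopt(curl, CURLOPT_DNS_SHUFFLE_ADDRESSES, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}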

Bug fixes

We did many bug fixes for this release as usual, but some of my favorite ones this time around are…

improved pending transfers for HTTP/2

libcurl-using applications that add more transfers than what can be sent over the wire immediately (usually because the application has set some limit on the parallelism libcurl is allowed) can have transfers held “pending” by libcurl. They’re basically kept in a separate queue until there’s a chance to send them off. They will then be attempted when the streams that are currently in progress end.
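
To illustrate how transfers end up pending, here is a hedged sketch of an application that caps parallelism with CURLMOPT_MAX_TOTAL_CONNECTIONS and then adds more transfers than that (the URL and the numbers are made up):

#include <curl/curl.h>

#define NTRANSFERS 10

int main(void)
{
  CURL *easy[NTRANSFERS];
  CURLM *multi = curl_multi_init();
  int i, running;

  /* allow at most 2 connections; further transfers are kept pending */
  curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 2L);

  for(i = 0; i < NTRANSFERS; i++) {
    easy[i] = curl_easy_init();
    curl_easy_setopt(easy[i], CURLOPT_URL, "https://example.com/");
    curl_multi_add_handle(multi, easy[i]);
  }

  /* drive the transfers; libcurl starts pending ones as others finish */
  do {
    curl_multi_perform(multi, &running);
    curl_multi_wait(multi, NULL, 0, 1000, NULL);
  } while(running);

  for(i = 0; i < NTRANSFERS; i++) {
    curl_multi_remove_handle(multi, easy[i]);
    curl_easy_cleanup(easy[i]);
  }
  curl_multi_cleanup(multi);
  return 0;
}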

The algorithm for retrying the pending transfers was quite naive and brute-force, which made it terribly slow and ineffective when there are many transfers waiting in the pending queue. This slowed down the transfers unnecessarily.

With the fixes we’ve landed in 7.60.0, the algorithm is less stupid, which leads to much less overhead and, for this setup, much faster transfers.

curl_multi_timeout values with threaded resolver

When using a libcurl version that is built to use a threaded resolver, there’s no socket to wait for during the name resolving phase so we’ve often recommended users to just wait “a short while” during this interval. That has always been a weakness and an unfortunate situation.

Starting now, curl_multi_timeout() will return suitable timeout values during this period so that users no longer have to re-implement that logic themselves. The timeouts will slowly increase to make sure fast resolves are detected quickly while slow resolves don’t consume too much CPU.
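
Here is a minimal sketch of the event loop pattern this improves, where the application asks curl_multi_timeout() how long it may wait (the URL is made up; error handling trimmed):

#include <curl/curl.h>

int main(void)
{
  CURLM *multi = curl_multi_init();
  CURL *easy = curl_easy_init();
  int running = 0;

  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
  curl_multi_add_handle(multi, easy);

  do {
    long timeout_ms = -1;
    int numfds = 0;

    /* ask libcurl how long we may wait; during threaded name resolving
       this now returns sensible, slowly increasing values */
    curl_multi_timeout(multi, &timeout_ms);
    if(timeout_ms < 0)
      timeout_ms = 1000; /* libcurl has no preference, use our own cap */

    curl_multi_wait(multi, NULL, 0, (int)timeout_ms, &numfds);
    curl_multi_perform(multi, &running);
  } while(running);

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  return 0;
}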

much faster cookies

The cookie code in libcurl was keeping them all in a linear linked list. That’s fine for small amounts of cookies or perhaps if you don’t manipulate them much.

Users with several hundred cookies, or even thousands, will in 7.60.0 notice a speed increase that in some situations is of several orders of magnitude, now that the internal representation has changed to use hash tables and some good cleanups were made.

HTTP/2 GOAWAY-handling

We figured out some problems in libcurl’s handling of GOAWAY, like when an application wants to do a bunch of transfers over a connection that suddenly gets a GOAWAY, so that libcurl needs to create a new connection to do the rest of the pending transfers over.

Turns out nginx ships with a config option named http2_max_requests that sets the maximum number of requests it allows over the same connection before it sends GOAWAY over it (and it defaults to 1000). This option isn’t very well explained in their docs and it seems users won’t really know what good values to set it to, so this is probably the primary reason clients see GOAWAYs where there’s no apparent good reason for them.

Setting the value to a ridiculously low value at least helped me debug this problem and improve how libcurl deals with it!

Repair non-ASCII support

We’ve supported transfers with libcurl on non-ASCII platforms since early 2007. Non-ASCII here basically means EBCDIC, but the code hasn’t been limited to those.

However, due to this being used by only a small number of users and the fact that our test infrastructure doesn’t exercise this feature well enough, we slipped recently and broke libcurl for the non-ASCII users. Work was put in and changes were landed to make sure that libcurl works again on these systems!

Enjoy 7.60.0! In 56 days there should be another release to play with…