All posts by Daniel Stenberg

My 10th FOSDEM

I didn’t present anything during last year’s conference, so I submitted my DNS-over-HTTPS presentation proposal early on for this year’s FOSDEM. Someone suggested it was generic enough that I should ask for the main track rather than the DNS room, and so I did. Then time passed, “HTTP/3” was officially coined as a real term in November 2018, and after the Mozilla devroom’s deadline had been extended by a week I filed my second proposal. I might possibly even have been an hour or two after the deadline. I hoped at least one of them would be accepted.

Not only were both my proposed talks accepted, I was also approached and couldn’t decline the honor of participating in the DNS privacy panel. Ok, three slots in the same FOSDEM is a new record for me, but hey, surely that’s no problem for a grown-up…

HTTP/3

I of course hoped there would be interest in what I had to say.

I spent the time immediately before my talk with a coffee in the awesome newly opened cafeteria, to get a moment of calm before I started. I then headed over to the U2.208 room maybe half an hour before the start time.

It was packed. Quite literally there were hundreds of persons waiting in the area outside the U2 rooms and there was this totally massive line of waiting visitors queuing to get into the Mozilla room once it would open.

The “Sorry, this room is FULL” sign is commonly seen at FOSDEM.

People don’t know who I am by my appearance, so I certainly didn’t get any special treatment while waiting for my talk to start. I stood in line with the rest, and when the time for my presentation drew closer I just had to excuse myself, leave my friends behind and push through the crowd. I got a “sorry, it’s full” from a conference admin before one of the room organizers recognized me as the speaker of the next talk, and I could walk past a very long line of humans who would eventually not be able to get in. The room could fit 170 souls, and every single seat was occupied when I started my presentation just a few minutes late.

This presentation could have filled a much larger room. Two years ago my HTTP/2 talk filled up the 300 seat room Mozilla had that year.

Video

Video from my HTTP/3 talk. Duration 1 hour.

The slides from my HTTP/3 presentation.

DNS over HTTPS

I tend to need a little “landing time” after having done a presentation, to cool off and come back to normal senses and adrenaline levels again. I got myself a lunch and a beer and chatted with friends in the cafeteria (again). During this conversation it struck me that I had forgotten something in my coming presentation, so I added a slide that I felt would improve it (the screenshot showing “about:networking#dns” output with DoH enabled). In what felt like no time, it was again time to move. I walked over to Janson, the giant hall that fits 1,470 persons, entered a few minutes ahead of my scheduled time and began setting up my machine.

I started off with a little technical glitch: the projector was correctly detected and set up as a second screen on my laptop, but it used a resolution that was too high for it. After a short moment of panic I lowered that screen’s resolution manually and the image appeared fine. Phew! With a slightly raised pulse, I watched the room fill up. Almost full. I estimate over 90% of the seats were occupied.

The DNS over HTTPS talk seen from far back. Photo by Steve Holme.

This was a brand new talk with all new material and I performed it for the largest audience I think I’ve ever talked in front of.

Video

Video of my DNS over HTTPS presentation. Duration 50 minutes.

To no surprise, my talk triggered questions and objections. I spent a while in the corridor behind Janson afterward, discussing DoH details, the future of secure DNS and other subtle points of the different protocols involved. In the end I think I managed pretty well, and I had expected more arguments and tougher questions. This is after all the single topic I’ve received more abuse and name-calling for than anything else I’ve worked on in my 20+ years in Internet protocols. (After all, I now often refer to myself and what I do as webshit.)

My DNS over HTTPS slides.

DNS Privacy panel

I never really intended to involve myself in DNS privacy discussions, but due to the constant misunderstandings and mischaracterizations (both on purpose and by ignorance) sometimes spread about DoH, I’ve felt a need to stand up for it a few times. I think that was a contributing factor to me getting invited to the DNS privacy panel that the organizers of the DNS devroom set up.

There are several problems and challenges left to solve before we’re in a world with mostly secure DNS done correctly. DoH is one attempt to raise the bar. I was glad to have had the opportunity to really spell out my view of things before the DNS privacy panel.

Sitting next to these giants from the DNS world, Stéphane Bortzmeyer and Bert Hubert, I discussed DoT, DoH, DNS centralization, user choice, quad-dns-hosters and more. The discussion never got very heated; instead I think it showed that we’re all largely in agreement that we need more secure DNS and that there are obstacles on the way forward that we need to keep working to overcome. Moderator Jan-Piet Mens did an excellent job I think, passing the word around, juggling the questions and taking in questions from the audience.

Video

Video from the DNS Privacy panel. Duration 30 minutes.

Ten years, ten slots

Appearing in three scheduled slots during the same FOSDEM was a bit much, and it effectively kept me from attending many other talks. They were all great fun to do though, and I appreciate people giving me the chance to share my knowledge and views with the world. As usual, everything was very nicely organized and handled. The videos of each presentation are linked to above.

I met many people, old and new friends. I handed out a lot of curl stickers and I enjoyed talking to people about my recently announced new job at wolfSSL.

After ten consecutive annual visits to FOSDEM, I have appeared in ten program slots!

I fully intend to go back to FOSDEM again next year. For all the friends, the waffles, the chats, the beers, the presentations and then for the waffles again. Maybe I will even present something…

I’m on team wolfSSL

Let me start by saying thank you to all and everyone who sent me job offers or otherwise reached out with suggestions and interesting career moves. I received more than twenty different offers, and almost every one of them was a truly good option that I could have said yes to and still landed a good job. What a luxury challenge, having to select something from that! Publicly announcing that I was leaving Mozilla turned out to be a great ego boost.

I took some time off to really reflect and contemplate on what I wanted from my next career step. What would the right next move be?

I love working on open source. Internet protocols, transfers and libraries written in C are pure fun to me. Could I get all that and still keep working from home, not sacrifice my wage and perhaps integrate working on curl better into my day-to-day job?

I talked to different companies. Very interesting companies too, where I have friends and people who like me and who really wanted to get me working for them, but in the end there was one offer with a setup that stood out. One offer that checked basically every box on my wish-list.

wolfSSL

On February 5, 2019 I’m starting my new job at wolfSSL. My short and sweet period as unemployed is over and now it’s full steam ahead again! (Some members of my family have expressed that they haven’t really noticed any difference between me having a job and me not having a job as I spend all work days the same way nevertheless: in front of my computer.)

Starting now, we offer commercial curl support and various services for and around curl that companies and organizations previously really haven’t been able to get. Time I do not spend on curl-related activities for paying customers I will spend on other networking libraries in the wolfSSL “portfolio”. I’m sure I will be able to keep busy.

I’ve met Larry at wolfSSL in person many times over the years, and every year at FOSDEM I’ve made certain to say hello to my wolfSSL friends at the booth they’ve had there for years. They’re truly old-time friends.

wolfSSL is mostly a US-based company – I’m the only Swede on the team and the only one based in Sweden. My new colleagues all of course know just as well as you that I’m prevented from traveling to the US. All future physical meetings with my work mates will happen in other countries.

Commercial curl support!

We offer all sorts of commercial support for curl. I’ll post separately with more details around this.

HTTP/3 talk on video

Yesterday, I attracted enough of an audience to fill up the largest presentation room GOTO 10 has, which means about one hundred interested souls.

The subject of the day was HTTP/3. The event was filmed with a Mevo camera and I captured the presentation directly from my laptop as well, then stitched together the two sources into this final version late last night. As you’ll notice, the sound isn’t awesome and the rest of the “production” isn’t exactly top notch either, but hey, I don’t think it matters too much.

I’ll talk about HTTP/3 (Photo by Jon Åslund)
I’m Daniel Stenberg. I was handed a medal from the Swedish king in 2017 for my work on… (Photo by OpenTokix)
HTTP/2 vs HTTP/3 (Photo by OpenTokix)
Some of the challenges to deploy HTTP/3 are… (Photo by Jonathan Sulo)

The slide set can also be viewed on slideshare.

QUIC and missing APIs

I trust you’ve heard by now that HTTP/3 is coming. It is the designated next HTTP version, targeted to be published as an RFC in July 2019. Not very far off.

HTTP/3 will not be done over TCP. It will only be performed over QUIC, a transport protocol replacement for TCP that is always encrypted. There’s no clear-text version of QUIC.

TLS 1.3

The encryption in QUIC is based on TLS 1.3 technologies which I believe everyone thinks is a good idea and generally the correct decision. We need to successively raise the bar as we move forward with protocols.

However, QUIC is not only a transport protocol that does encryption by itself – where TLS is typically (and was designed as) a protocol that runs on top of TCP – it also ended up with a design that requires APIs from the TLS layer that the traditional TLS-over-TCP use case doesn’t need!

New TLS APIs

A QUIC implementation needs to extract traffic secrets from the TLS connection and it needs to be able to read/write TLS messages directly – not using the TLS record layer. TLS records are what’s used when we send TLS over TCP. (This was discussed and decided back around the time for the QUIC interim in Kista.)

These operations need APIs that are still missing in, for example, the very popular OpenSSL library, but also in other commonly used ones like GnuTLS and LibreSSL. And of course in Schannel and Secure Transport.

Libraries known to already have done the job and expose the necessary mechanisms include BoringSSL, NSS, quicly, PicoTLS and Minq. All of those are, incidentally, less mainstream TLS libraries with a more limited number of application users. They’re also more or less developed by people who are also actively engaged in the QUIC protocol development.
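
To give a flavor of what these new APIs look like, here’s a minimal sketch modeled on BoringSSL’s QUIC hooks as they look at the time of writing – the exact struct and function names may well differ between versions and certainly between libraries, so treat this as an illustration rather than a reference. The QUIC stack registers callbacks that receive the traffic secrets and the raw handshake messages, bypassing the TLS record layer entirely:

    #include <openssl/ssl.h>  /* BoringSSL */

    /* Called when TLS derives new traffic secrets; the QUIC stack
       turns these into its own packet protection keys. */
    static int set_encryption_secrets(SSL *ssl,
                                      enum ssl_encryption_level_t level,
                                      const uint8_t *read_secret,
                                      const uint8_t *write_secret,
                                      size_t secret_len)
    {
      /* install keys for this encryption level in the QUIC stack */
      return 1;
    }

    /* Called with raw handshake messages; QUIC ships these in CRYPTO
       frames instead of TLS records. */
    static int add_handshake_data(SSL *ssl,
                                  enum ssl_encryption_level_t level,
                                  const uint8_t *data, size_t len)
    {
      return 1;
    }

    static int flush_flight(SSL *ssl)
    {
      return 1;
    }

    static int send_alert(SSL *ssl, enum ssl_encryption_level_t level,
                          uint8_t alert)
    {
      return 1;
    }

    static const SSL_QUIC_METHOD quic_method = {
      set_encryption_secrets, add_handshake_data, flush_flight, send_alert
    };

    /* ... and after creating the SSL object: */
    /* SSL_set_quic_method(ssl, &quic_method); */

It is exactly this kind of callback interface that the traditional TLS-over-TCP libraries have never needed to offer.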

The QUIC libraries in progress now typically use one of the TLS libraries that are already adapted, or do what ngtcp2 does: host a custom-patched version of OpenSSL that brings the needed functionality.

Matt Caswell of the OpenSSL development team acknowledged this situation already back in September 2017, but so far we haven’t seen this result in updated code shipped in a released version.

curl and QUIC

curl is TLS library agnostic and can get built with around 12 different TLS libraries – one or many actually, as you can build it to allow users to select the TLS backend at run-time!
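
A minimal sketch of how that run-time selection looks from an application, using the curl_global_sslset() API that was added in libcurl 7.56.0:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      const curl_ssl_backend **avail;
      int i;

      /* asking for an unknown backend id fails, but still fills in
         the list of backends this libcurl build supports */
      curl_global_sslset((curl_sslbackend)-1, NULL, &avail);
      for(i = 0; avail[i]; i++)
        printf("available: %s\n", avail[i]->name);

      /* select a backend by id; this must be done before any other
         libcurl use in the program */
      if(curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL) !=
         CURLSSLSET_OK)
        fprintf(stderr, "OpenSSL backend not available in this build\n");

      return 0;
    }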

OpenSSL is without competition the most popular choice to build curl with, outside of proprietary operating systems like macOS and Windows 10. But even the vendor-built and provided macOS and Windows versions use libraries that lack the APIs for this.

With our current keen interest in QUIC and HTTP/3 support for curl, we’re about to run into an interesting TLS situation. How exactly is someone going to build curl to simultaneously support both traditional TLS based protocols as well as QUIC going forward?

I don’t have a good answer to this yet. Right now (assuming we would have the code ready in our end, which we don’t), we can’t ship QUIC or HTTP/3 support enabled for curl built to use the most popular TLS libraries! Hopefully by the time we get our code in order, the situation has improved somewhat.

This will slow down QUIC deployment

I’m personally convinced that this little API problem will be friction enough when going forward that it will slow down and hinder QUIC deployment at least initially.

When the HTTP/2 spec shipped in May 2015, it introduced a dependency on the then fairly new TLS extension called ALPN. That caused headaches for server admins for a long time, since ALPN wasn’t supported by the OpenSSL versions that were typically installed and used at the time – you had to upgrade OpenSSL to version 1.0.2 to get it.

At that time, almost four years ago, OpenSSL 1.0.2 had already been released, so the problem could be solved by simply upgrading. This time, the API we’re discussing here is not even in a beta version of OpenSSL and thus hasn’t shipped in any release yet. That’s far worse than the HTTP/2 situation, and that one took a few years to ride out.

Will we get these APIs into an OpenSSL release to test before the QUIC specification is done? If the schedule sticks, there’s about six months left…

My talks at FOSDEM 2019

I’ll be celebrating my 10th FOSDEM when I travel down to Brussels again in early February 2019. That’s ten years in a row. It’ll also be the 6th year I present something there, as I’ve done these seven talks in the past:

My past FOSDEM appearances

2010. I talked Rockbox in the embedded room.

2011. libcurl, seven SSL libs and one SSH lib in the security room.

2015. Internet all the things – using curl in your device. In the embedded room.

2015. HTTP/2 right now. In the Mozilla room.

2016. an HTTP/2 update. In the Mozilla room.

2017. curl. On the main track.

2017. So that was HTTP/2, what’s next? In the Mozilla room.

DNS over HTTPS – the good, the bad and the ugly

On the main track, in Janson at 15:00 on Saturday 2nd of February.

DNS over HTTPS (aka “DoH”, RFC 8484) introduces a new transport protocol to do secure and private DNS messaging. Why it was made, how it works and how it leaves users free (to resolve names).

The presentation will discuss reasons why DoH was deemed necessary and interesting to ship and deploy and how it compares to alternative technologies that offer similar properties. It will discuss how this protocol “liberates” users and offers stronger privacy (than the typical status quo).

How to enable and start using DoH today.

It will also discuss some downsides with DoH and what you should consider before you decide to use a random DoH server on the Internet.
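
As a taste of the “enable and start using DoH today” part: with curl 7.62.0 or later you can point a transfer at a DoH server and have the name resolution go over HTTPS. A minimal libcurl sketch – the DoH URL below is just a placeholder, pick a server you actually trust:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* resolve the host name with DNS-over-HTTPS instead of the
           system resolver */
        curl_easy_setopt(curl, CURLOPT_DOH_URL,
                         "https://doh.example/dns-query");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }

The command line tool exposes the same thing as the --doh-url option.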

HTTP/3

In the Mozilla room, at 11:30 on Saturday 2nd of February.

HTTP/3 is the coming HTTP version.

This time TCP is replaced by the new transport protocol QUIC and things are different yet again! This is a presentation about HTTP/3 and QUIC, with a following Q&A about everything HTTP.

HTTP/3 is the designated name for the coming next version of the protocol that is currently under development within the QUIC working group in the IETF.

HTTP/3 is designed to improve in areas where HTTP/2 still has some shortcomings, primarily by changing the transport layer. HTTP/3 is the first major protocol to step away from TCP and instead it uses QUIC. I’ll talk about HTTP/3 and QUIC. Why the new protocols are deemed necessary, how they work, how they change how things are sent over the network and what some of the coming deployment challenges will be.

DNS Privacy panel

In the DNS room, at 11:55 on Sunday 3rd of February.

This isn’t strictly a prepared talk or presentation but I’ll still be there and participate in the panel discussion on DNS privacy. I hope to get most of my finer points expressed in the DoH talk mentioned above, but I’m fully prepared to elaborate on some of them in this session.

A curl 2018 retrospective

Another year reaches its calendar end and a new year awaits around the corner. In the curl project we’ve had another busy and eventful year. Here’s a look back at some of the fun we’ve done during 2018.

Releases

We started out the year with the 7.58.0 release in January, and we managed to squeeze in another six releases during the year. In total we count 658 documented bug-fixes and 31 changes. The total number of bug-fixes was actually slightly lower this year compared to last year’s 683. An average of 1.8 bug-fixes per day is still not too shabby.

Authors

I’m very happy to say that we again managed to break our previous record, as 155 unique authors contributed code. 111 of them for the first time in the project, and 126 did fewer than three commits during the year. Basically this means we merged code from a brand new author every three days throughout the year!

The list of “contributors”, where we also include helpers, bug reporters, security researchers etc, increased by another 169 new names this year to a total of 1,829 in the last release of the year. Of course we also got a lot of help from people who were already mentioned in there!

Will we be able to reach 2000 names before the end of 2019?

Commits

At the time of this writing, almost two weeks before the end of the year, we’re still behind the last few years with 1051 commits done this year. 1381 commits were done in 2017.

Daniel’s commit share

I personally authored 535 (50.9%) of all commits during 2018. Marcel Raad did 65 and Daniel Gustafsson 61. Over time I maintain my share of the changes done in the project; possibly I’ve even increased it slightly the last few years. This graph shows my share of the commits layered on top of the number of commits done.

Vulnerabilities

This year we got exactly the same number of security problems reported as we did last year: 12. Some of the problems were one-offs due to curl being added to the OSS-Fuzz project in 2018; it has taken a while for it to really hit some of our soft spots, and as we’ve seen a slow-down in reports from there, it’ll be interesting to see if 2019 will be a brighter year in this department. (In total, OSS-Fuzz is credited for having found six security vulnerabilities in curl to date.)

During the year we managed to both introduce a new bug bounty program and retract that very same program again when it shut down almost at once! 🙁

Lines of code

Counting all lines in the git repo in the src, lib and include directories, they grew nearly 6,000 lines (3.7%) during the year to 155,912. The primary code growing activities this year were:

  1. DNS-over-HTTPS support
  2. The new URL API and using that internally as well

Deprecating legacy

In July we created the DEPRECATE.md document to keep track of the things we’re stowing away in the cyberspace attic. During the year we cut off axTLS support as a first example of this deprecation procedure. HTTP pipelining, the global DNS cache and HTTP/0.9 accepted by default are the features next in line marked for removal, and the first two are already disabled in code.

curl up

We had our second curl conference this year, in Stockholm. It was a blast again and I’m already looking forward to curl up 2019 in Prague.

Sponsor updates

Yours truly quit Mozilla and with that we lost them as a sponsor of the curl project. We have however gotten several new backers and sponsors over the year since we joined opencollective, and can receive donations from there.

Governance

Together with a bunch of core team members I put together a two-step proposal that I posted back in October:

  1. we join an umbrella organization
  2. we create a “board” to decide over money

As the first step turned out to be a very slow operation (i.e. we’ve applied, but the process has not gone very far yet), we haven’t made step 2 happen yet either.

2019

Things that didn’t happen in 2018 but very well might happen in 2019 include:

  1. Some first HTTP/3 and QUIC code attempts in curl
  2. HSTS support? A pull request for this has been lingering for a while already.

Note: the numbers for 2018 in this post were extracted and graphs were prepared a few weeks before the actual end of year, so some of the data quite possibly changed a little bit since.

HTTP/3 talk in Stockholm on January 22

HTTP/3 – the coming HTTP version

This time TCP is replaced by the new transport protocol QUIC and things are different yet again! This is a presentation by Daniel Stenberg about HTTP/3 and QUIC, with a following Q&A about everything HTTP.

The presentation will be done in English. It will be recorded and possibly live-streamed. Organized by me, together with our friends at goto10. It is free of charge, but you need to register.

When

17:30 – 19:00
January 22, 2019

Goto 10: Hörsalen, Hammarby Kaj 10D plan 5

Register here!

Fancy map to goto 10


Why is curl different everywhere?

At a talk I did a while ago, someone from the back of the audience raised this question. I found it to be such a great question that I decided to spend a few minutes and explain how this happens and why.

In this blog post I’ll stick to discussing the curl command line tool. “curl” is often also used as a shortcut for the library but let’s focus on the tool here.

When you use a particular curl version installed in a system near you, chances are that it differs slightly from the curl your neighbor runs or even the one that you use in the machines at work.

Why is this?

Versions

We release a new curl version every eight weeks. On average we ship over thirty releases in a five-year period.

A lot of people use curl versions that are a few years old, some even many years old. There are easily more than 30 different curl versions in active use at any given moment.

Not every curl release introduces changes and new features, but it is very common, and every release at the very least corrects a lot of bugs from previous versions. New features and fixed bugs make curl different between releases.

Linux/OS distributions also tend to patch their curl versions at times, and they of course all have different criteria and workflows, so the exact same curl version built and shipped by two different vendors can still differ!

Platforms

curl builds on almost every platform you can imagine. When you build curl for your platform, it is designed to use the features, native APIs and functions available there, and those will indeed differ between systems.

curl also relies on a number of different third party libraries. The set of libraries a particular curl build uses varies by platform, but even more so due to the decisions of the person or group that built that particular curl executable. The exact set, and the exact versions of each of those third party libraries, will change curl’s feature set, from small and subtle changes up to really noticeable differences.

TLS libraries

As a special third party library, I want to especially highlight the importance of the TLS library that curl is built to use. It changes not only which SSL and TLS versions curl supports, but also how CA certificates are handled, and it provides the crypto support for authentication schemes such as NTLM – and more. Not to mention that TLS libraries of course also develop over time, so if curl is built to use an older release, it probably has less support for later features and protocol versions.

Feature shaving

When building curl, you can switch features on and off to a very large extent, making it possible to quite literally build it in several million different combinations. The organizations, people and companies that build curl to ship with their operating systems or their package distribution systems decide what feature set they want or don’t want for their users. One builder’s decisions and thought process certainly don’t have to match another’s. With the same curl version and the same TLS library on the same operating system, two curl builds might thus still end up different!
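
One way to see what a particular build ended up with is to ask it. “curl -V” prints this from the command line, and from code, libcurl’s curl_version_info() exposes the same data. A minimal sketch:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

      printf("curl version: %s\n", info->version);
      printf("TLS backend:  %s\n",
             info->ssl_version ? info->ssl_version : "none");
      printf("HTTP/2:       %s\n",
             (info->features & CURL_VERSION_HTTP2) ? "yes" : "no");
      printf("libz:         %s\n",
             (info->features & CURL_VERSION_LIBZ) ? "yes" : "no");
      return 0;
    }

Run this against two different installed curls and you may well get two different answers.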

Build your own!

If you aren’t satisfied with the version or feature-set of your own locally installed curl – build your own!

7.63.0 – another step down the endless path

This curl release was developed and put together over a period of six weeks (two weeks less than usual). This was done to accommodate my personal travel plans – and to avoid doing a release too close to Christmas in case we would ship any security fixes. Ironically, we have no security advisories this time!

Numbers

the 178th release
3 changes
42 days (total: 7,572)
79 bug fixes (total: 4,837)
122 commits (total: 23,799)
0 new public libcurl functions (total: 80)
1 new curl_easy_setopt() option (total: 262)
0 new curl command line options (total: 219)
51 contributors, 21 new (total: 1,829)
31 authors, 14 new (total: 646)
0 security fixes (total: 84)

Changes

With the new CURLOPT_CURLU option, an application can now pass in an already parsed URL to libcurl instead of a string.

When using libcurl’s URL API, introduced in 7.62.0, the result is held in a “handle”, and that handle is what can now be passed straight into libcurl when setting up a transfer.
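
A minimal sketch of the flow (note that the application keeps ownership of the URL handle and has to keep it alive for the duration of the transfer):

    #include <curl/curl.h>

    int main(void)
    {
      /* parse the URL once, with the URL API from 7.62.0 */
      CURLU *url = curl_url();
      curl_url_set(url, CURLUPART_URL, "https://example.com/path", 0);

      CURL *curl = curl_easy_init();
      if(curl) {
        /* hand the pre-parsed handle straight to the transfer */
        curl_easy_setopt(curl, CURLOPT_CURLU, url);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      curl_url_cleanup(url);
      return 0;
    }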

In the command line tool, the --write-out option got the ability to optionally redirect its output to stderr. Previously it always went to a given file or stdout, and many people found that a bit limiting.

Interesting bug-fixes

Weirdly enough we found and fixed a few cookie-related bugs this time. I say “weirdly” because you’d think this is functionality that’s been around for a long time and should’ve been battle tested and hardened quite a lot already. As usual, I’m only covering some bugs here. The full list is in the changelog!

Cookie saving – one cookie bug we fixed was related to libcurl not saving a cookie jar when no cookies were kept in memory (any more). This turned out to be changed behavior resulting from our more aggressive expiry of old cookies since a while back. One user had a use case where they would load cookies from a cookie jar and then expect the cookies to be updated and written back to the jar, overwriting the old one – but when no cookies were left internally, libcurl didn’t touch the file, and the application thus reread the old cookies again on the next invoke. Since this was subtly changed behavior, libcurl will now save an empty jar in this situation to make sure such apps will notice the blank jar.

Cookie expiry – for received cookies that get ‘Max-Age=0’ set, curl would treat the zero value the same way as any other number and therefore let the cookie continue to exist during the whole second it arrived (time() + 0, basically). The cookie RFC (6265) is actually rather clear that receiving a zero for this parameter is a special case that means the cookie should expire immediately, and now curl does exactly that.
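
To illustrate the difference – this is not curl’s actual code, just a sketch of the two behaviors:

    #include <time.h>

    time_t cookie_expiry(long max_age, time_t now)
    {
      /* old behavior: Max-Age=0 was computed like any other value,
         so the cookie stayed valid for the rest of the current
         second: return now + max_age; */

      /* fixed behavior: zero (or less) means expire immediately, so
         return a timestamp that has long since passed */
      if(max_age <= 0)
        return 1; /* one second past the epoch */
      return now + max_age;
    }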

Timeout handling – when calling curl_easy_perform() to do a transfer and asking libcurl to time out that transfer after, say, 5.1 seconds, if the transfer hadn’t completed in that time and the connection was in fact totally idle, a recent regression would make libcurl not figure this out until a full 6 seconds had elapsed.
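
In application terms, this is roughly the kind of setup that was affected – a sub-second-precision deadline set with the standard CURLOPT_TIMEOUT_MS option:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* abort the whole transfer after 5.1 seconds */
        curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 5100L);
        if(curl_easy_perform(curl) == CURLE_OPERATION_TIMEDOUT) {
          /* with the regression, an idle connection could overshoot
             the deadline by most of a second before this fired */
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }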

NSS – we fixed several minor issues in the NSS back-end this time. Perhaps the most important one was for when the installed NSS library has been built with TLS 1.3 disabled while curl was built knowing about TLS 1.3, as then things like the ‘--tlsv1.2’ option would still cause errors. Now curl falls back correctly. Fixes were also made to make sure curl again works with NSS versions back to 3.14.

OpenSSL – session resumption was changed in TLS 1.3 compared to earlier TLS versions, and curl now supports it when built with OpenSSL.

snprintf – curl has always had its own implementation of the *printf() family of functions, for portability reasons. First, snprintf() traditionally wasn’t universally available, but also, different implementations have different support for things like 64 bit integers or size_t fields, and they disagree on return values. Since curl’s snprintf() implementation doesn’t use the same return code as POSIX or other common implementations, we decided we shouldn’t use the same name, so that we don’t fool readers of the code into believing that they are fully compatible. For that reason, we now also “ban” the use of snprintf() in the curl code.
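
The return value difference is easy to demonstrate with the standard function alone: C99 snprintf() returns the length the output would have had if the buffer had been large enough, not the number of bytes actually stored. That mismatch is exactly the kind of thing that fools readers when an incompatible implementation hides under the same name (curl’s own version goes by the name msnprintf() internally):

    #include <stdio.h>

    int main(void)
    {
      char buf[6];

      /* C99 snprintf() returns the would-be length, not the number
         of bytes actually written to buf */
      int n = snprintf(buf, sizeof(buf), "%s", "hello world");

      printf("n = %d, buf = \"%s\"\n", n, buf);
      /* prints: n = 11, buf = "hello" */
      return 0;
    }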

URL parsing – there were several regressions from the URL parsing work introduced in curl 7.62.0. That was the first release to offer the new URL API for applications, and we also switched the internals over to the new code at the same time. Perhaps the funniest error was how a short host name plus port number (hello:80) was accidentally treated as a “scheme” by the parser, and since the scheme was unknown the URL was rejected. The numerical IPv6 address parser was also badly broken – I take the blame for not writing good enough test cases for it, which made me not realize this in time. Two related regressions that came from the URL work broke HTTP Digest auth and some LDAP transfers.

DoH over HTTP/1 – DNS-over-HTTPS was simply not enabled in the build if HTTP/2 support wasn’t there, which was an unnecessary restriction and now h2-disabled builds will also be able to resolve host names using DoH.

Trailing dots in host name – an old favorite subject came back to haunt us. Starting in this version, curl will keep any trailing dot in the host name when it resolves the name, and strip it off for all the other uses where the name gets passed on: for cookies, for the HTTP Host: header and for the TLS SNI field. This, since most resolver APIs make a difference between resolving “host” and “host.”, and we previously didn’t acknowledge or support the two variants.
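
A hypothetical helper (not curl’s actual code) shows the idea: resolve using the name exactly as given, but strip the dot before the name goes anywhere else:

    #include <string.h>

    /* strip a trailing dot before using the name for cookies, the
       Host: header or TLS SNI - but not before resolving it */
    static void strip_trailing_dot(char *host)
    {
      size_t len = strlen(host);
      if(len > 1 && host[len - 1] == '.')
        host[len - 1] = '\0';
    }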

HTTP/2 – when we enabled HTTP/2 by default for more transfers in 7.62.0, we of course knew that it could force more latent bugs to float up to the surface and get noticed. We made curl understand the HTTP_1_1_REQUIRED error when received over HTTP/2 and then retry over HTTP/1.1, and if NTLM is selected as the authentication to use, curl now forces HTTP/1.

Next release

We already have suggested new features lined up waiting to get merged, so the next version is likely to be called 7.64.0. It is scheduled to ship on February 6th 2019.

HTTP/3 Explained

I’m happy to announce that the booklet HTTP/3 Explained is now ready for the world. It is entirely free and open and is available in several different formats to fit your reading habits. (It is not available on dead trees.)

The book describes what HTTP/3 and its underlying transport protocol QUIC are, why they exist, what features they have and how they work. The book is meant to be readable and understandable for most people with a rudimentary level of network knowledge or better.

These protocols are not done yet; there aren’t even any implementations of them in the main browsers yet! The book will be updated and extended along the way as things change, implementations mature and the protocols settle.

If you find bugs, mistakes, something that needs to be explained better/deeper or otherwise want to help out with the contents, file a bug!

It was just a short while ago I mentioned the decision to change the name of the protocol to HTTP/3. That triggered me to refresh my document in progress and there are now over 8,000 words there to help.

The entire HTTP/3 Explained contents are available on github.

If you haven’t caught up with HTTP/2 quite yet, don’t worry. We have you covered for that as well, with the http2 explained book.