
http2 explained in markdown

After twelve releases and over 140,000 downloads of my explanatory document “http2 explained”, I eventually did the right thing and converted the entire book over to markdown syntax and put it up on gitbook.com.

Better output formats: now EPUB, MOBI and PDF, and everything gets regenerated on every commit.

Better collaboration: GitHub and regular pull requests work fine with text content, unlike weird binary word-processor file formats.

Easier for translators. With plain-text commits to aid in tracking changes, and with the images kept in a separate directory, writing and maintaining translated versions of the book should be less tedious.

I’m amazed and thrilled that we already have Chinese, Russian, French and Spanish translations and I hear news about additional languages in the pipe.

I haven’t yet decided what to do about “releases” now that we update everything on every push, so the latest version is always available to read. Go to http://daniel.haxx.se/http2/ to find out the latest about the document and to read the most up-to-date version.

Thanks everyone who helps out. You’re the best!

Content over HTTP/2

Roughly a week ago, on August 19, cdn77.com announced that they are the “first CDN to publicly offer HTTP/2 support for all customers, without ‘beta’ limitations”. They followed up just hours later with a demo site showing off how HTTP/2 might perform side by side with an HTTP/1.1 example. And yes, the big competitor CDNs do not seem to be offering HTTP/2 support yet.

Their demo site was initially criticized for not being realistic and for making HTTP/2 look far better than it would in a more typical real-life scenario, and it was subsequently updated fairly quickly. It is useful to compare it with the similarly designed, previously existing demo sites hosted by Akamai and the Go project.

cdn77’s offering is built on nginx’s alpha patch for HTTP/2 that was announced just two weeks ago. I believe nginx’s full HTTP/2 release is still planned to happen by the end of this year.

I’ve talked with cdn77’s Jakub Straka and their lead developer Honza about their HTTP/2 efforts, and since I suspect there are a few others in my audience who are similarly curious, I’m offering this interview-style posting here, intertwined with my own comments and thoughts. This is not meant as a big ad for the company, but since they’re early players in this field I figure their views and comments on this are worth reading!

I’ve been in touch with more than one person who has expressed surprise and awe over the fact that they’re running this early nginx patch in production, so I had to ask them about that. Below, their comments are all prefixed with CDN77.

nginx

CDN77: “Yes, we are running the alpha patch, which is basically a slightly modified SPDY. In the past we have been in touch with the Nginx team and exchanged tips and ideas, the same way we plan to work on the alpha patch in the future.

We’re actually pretty careful when deploying new and potentially unstable packages into production. We have separate servers for http2 and we are monitoring them 24/7 for any exceptions. We also have dedicated developers who debug any issues we are facing with these machines. We would never jeopardize the stability of our current network.”

I’m no expert on either server-side HTTP/2 or nginx in particular, but I think I read somewhere that the nginx HTTP/2 patch removes the SPDY support in favor of the new protocol.

CDN77: “You are right. HTTP/2 patch rewrites SPDY into the HTTP/2, so the SPDY is no longer supported after applying the patch. Since we have HTTP/2 running on separate servers, we still have SPDY support on the rest of the network.”

Did the team at cdn77 consider using something other than nginx for HTTP/2 at all, like the promising newcomer h2o?

CDN77: “Not at all. Nginx is a clear choice for us. Its architecture and modularity is awesome. It is also very reliable and it has a pretty long history.”

On scale

Can you share some of the biggest hurdles you had to overcome to deploy HTTP/2 on this scale with nginx?

CDN77: “Since nobody has tried the patch in such a scale ever before, we had to make sure it will hold even under pressure and needed to create a load heavy testing environment. We used servers from our partner company 10gbps.io and their 10G uplinks to create intensive ghost traffic. Also, it was important to make sure that supporting tools and applications are HTTP/2 ready – not all of them were. We needed to adjust the way we monitor and control servers in few cases.

There are a few bugs in Nginx that appear mainly in association with the longer-lived connections. They cause issues with the application layer and consume more resources. To be able to accommodate HTTP/2 and still keep necessary network redundancies, we needed to upgrade our network significantly.”

I read this as an indication that the nginx patch isn’t perfected just yet, rather than a sign that http2 itself is special. Perhaps also that http2 connections have a larger footprint in nginx than good old http1 connections do.

Jakub mentioned they see average “performance savings” on the order of 20 to 60 percent depending on the site and its content with the switch to h2, but their traffic amounts haven’t been that large yet:

CDN77: “So far, only a fraction of the traffic is running via HTTP/2, but that is understandable since we launched the support few days ago. On the first day, only about 0.45% of the traffic was HTTP/2 and a big part of this was our own demo site. Over the weekend, we saw impressive adoption rate and the total HTTP/2 traffic accounts for more than 0.8% now, all that with the portion of our own traffic in this dropping dramatically. We expect to be pushing around 1.2% – 1.5% of total traffic over HTTP/2 till the end of this week.”

Understandably, it is ramping up. Still, Firefox telemetry is showing at least 10% of the total traffic over HTTP/2 already.

Future of HTTPS and HTTP/2?

While I’m talking to a CDN operator, I figured I should poll their view on HTTPS going forward! Will the fact that all browsers only support h2 over HTTPS push more of your customers, and your traffic in general, over to HTTPS, you think?

CDN77: “This is not easy to predict. There is encryption overhead, but HTTP/2 comes with header compression and it is binary. So at this single point, the advantages and disadvantages zero out. Also, the use of HTTPS is rising rapidly even on the older protocol, so we don’t consider this an issue at all.”
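Since browsers only speak h2 over TLS, the protocol choice happens in the ALPN step of the handshake. Here’s a little Python sketch (purely my own illustration, nothing cdn77-specific) of how one can check which protocol a given server is willing to negotiate:

    import socket
    import ssl

    def negotiated_protocol(host, port=443):
        """Connect over TLS and return the ALPN protocol the server picked."""
        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(["h2", "http/1.1"])
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.selected_alpn_protocol()  # "h2" if HTTP/2 is offered

    # Hypothetical example host: prints "h2" if the server offers HTTP/2 over TLS
    print(negotiated_protocol("www.example.com"))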

From a CDN perspective, and as someone who just deployed this on a fairly large scale, what’s your general perception of what http2 means going forward?

CDN77: “We believe that this is a huge step forward in how we distribute content online and as a CDN company, we are especially excited as it concerns the very core of our business. From the new features, we have great expectations about cache invalidation that is being discussed right now.”

Thanks to Jakub, Honza and Tomáš of cdn77 for providing answers and info. We live in exciting times.

The last HTTP Workshop day

The workshop days have been really intense so far and this last and fourth day was no different. We started out the morning with the presentation Caching, Intermediation and the Modern Web by Martin Thomson (Mozilla), describing his idea of a “blind cache” and how it could help offer caching in an HTTPS world. It of course brought a lot of discussion and further brainstorming on the ideas and on how various people in the room thought they could be improved or changed.

Immediately following that, Martin continued with a second presentation describing for us a suggested new encryption format for HTTP based on the JWE format and how it could possibly be used.

The room then debated connection coalescing (with HTTP/2) for a while and some shared their experiences and thoughts on the topic. It is an area where over-sharing based on the wrong assumptions certainly can lead to tears and unhappiness, but the few in the room who have actually implemented this seemed to have considered most of the problems people could foresee.

Support for trailers in HTTP was brought up and we discussed their virtues for a while versus the possible problems and caveats of supporting them. We also explored the idea of using HTTP/2 push instead of trailers to let servers send such meta-data, which then doesn’t necessarily have to follow the transfer but can in fact be sent during it!

Resumed uploads is a topic that comes back every now and then and that has some interest. (It is probably one of the most frequently requested protocol features I get asked about.) It was brought up as something we should probably discuss further, and especially when discussing the next generation HTTP.

At some point in the future we will start talking about HTTP/3. We had a long discussion with the whole team here on what HTTP/3 could entail and we also explored general future HTTP and HTTP/2 extensions and more. A massive list of possible future work was created. The list ended up with something like 70 different things to discuss or work on, but of course most of those things will never actually become reality.

With so much possible or potential work ahead, we need to involve more people who want to, and can, consider writing specs. To show how easy it apparently can be, Martin demoed how to write a first Internet-Draft using the fancy Internet Draft Template Repository. Go check it out!

Poul-Henning Kamp brought up the topic of “CO2 usage of the Internet” and argued that current and future protocol work needs to consider the environmental impact and how “green” protocols are. Ilya Grigorik (Google) showed off numbers from httparchive.org’s data and demoed how easy it is to use its BigQuery feature to extract useful information and statistics out of the vast amount of data gathered there. Brad Fitzpatrick (Google) showed off his awesome tool h2i and how we can use it to poke at and test HTTP/2 server implementations in a really convenient, almost telnet-style command-line way.

Finally, Mark Nottingham (Akamai) showed off his redbot.org service that runs HTTP against a site, checks its responses, reports in detail exactly what it responds with and why, and provides a bunch of analysis and information based on that.

Such an eventful day really had to be rounded off with a bunch of beers and so we did. The HTTP Workshop of the summer 2015 ended. The event was great. The attendees were great. The facilities and the food were perfect. I couldn’t ask for more. Thanks for arranging such a great happening!

I’ll round off by showing off my laptop lid after the two new stickers of the week were applied. (The HTTP Workshop one and an Apache one I got from Roy):

laptop-stickers

… I’ll get up early tomorrow morning and fly back home.

A third day of HTTP Workshopping

I’ve met a bunch of new faces and friends here at the HTTP Workshop in Münster. Several who I’ve only seen or chatted with online before and some that I never interacted with until now. Pretty awesome really.

Out of the almost forty HTTP fanatics present at this workshop, five people are from Google, four from Mozilla (including myself) and Akamai has three employees here. Those are the top three companies. There are a few others with two representatives, but most people here are the only guys from their company. Yes, they are all guys. We are all guys. The male dominance at this event is really extreme; we’ve discussed this sad circumstance during breaks and it hasn’t gone unnoticed.

This particular day started out grand with Eric Rescorla (of Mozilla) talking about HTTP Security in his marvelous high-speed style. Lots of talk about the state of HTTPS usage on the web right now, HTTPS trends, TLS 1.3 details and when it is coming, and we got into a lot of discussion about HTTP deprecation and what can and cannot be done, etc.

Next up was a presentation about HTTP Privacy and Anonymity by Mike Perry (from the Tor project) about lots of aspects of what the Tor guys consider regarding fingerprinting, correlation, network side-channels and similar things that can be used to attempt to track users or usage over the Tor network. We got into details about what recent protocols like HTTP/2 and QUIC “leak” or open up for fingerprinting and what (if anything) can or could be done to mitigate the effects.

Evolving HTTP Header Fields by Julian Reschke (of Green Bytes) then followed, discussing all the variations of header syntax that we have in HTTP and how it really is not possible to write a generic parser that can handle them, with a suggestion on how to unify this and introduce a common format for future new headers. Julian’s suggestion to use JSON for this ignited a discussion about header formats in general and what should or could be done for HTTP/3 and if keeping support for the old formats is necessary or not going forward. No real consensus was reached.

Willy Tarreau (from HAProxy) then took us into the world of HTTP infrastructure scaling and load balancing, showed us at the microsecond level how fast a load balancer can be and how much extra work adding HTTPS can mean, and ended with a couple of suggestions for what he thinks could have helped his scenario. That then turned into a general discussion and network-architecture brainstorm on what can be done, how it could be improved and what TLS and other protocols could possibly do to help. Cramming every possible gigabit out of load balancers certainly is a challenge.

Talking about cramming bits, Kazuho Oku got the final slot of the day, showing how he’s managed to get his picohttpparser to parse HTTP/1 headers at a speed that is only slightly slower than strlen() – including a raw dump of the x86 assembler the compiler turns the code into. What could possibly be a better way to end a day full of protocol geekery?

Google graciously sponsored the team dinner in the evening at a Peruvian place in town! Yet another fully packed day has ended.

I’ll top off today’s summary with a picture of the gift Mark Nottingham (who’s herding us through these days) was handing out today to make us stay keen and alert (Mark pointed out to me that this was a gift from one of our Japanese friends here):

kitkat

HTTP Workshop, second day

All 37 of us gathered again on the 3rd floor in the Factory hotel here in Münster. Day two of the HTTP Workshop.

Jana Iyengar (from Google) kicked off this morning with his presentations on HTTP and the Transport Layer, and QUIC. A very interesting area if you ask me – if you’re interested in this, you really should check out the video recording from the bar BoF they did on this topic at the recent Prague IETF. It is clear that a team with dedication, a clear use-case, a fearless approach to not necessarily maintaining “layers” and handy control of widely used servers and clients can do funky experiments with new transport protocols.

I think there was general agreement with Jana’s statement that “Engagement with the transport community is critical” for us to really be able to bring better web protocols now and in the future. Jana’s excellent presentations were interrupted countless times with questions, elaborations, concerns and sub-topics from attendees.

Gaetano Carlucci followed up with a presentation of their QUIC evaluations, showing how it performs in various situations, like under packet loss, in comparison to HTTP/2. Lots of transport-related discussions followed.

We rounded off the afternoon with a walk through the city (the rain stopped just minutes before we took off) to the town center where we tried some of the local beers while arguing their individual qualities. We then took off in separate directions and had dinner in smaller groups across the city.

snackstation

The HTTP Workshop started

So we started today. I won’t get into any live details or quotes from the day since it has all been informal and we’ve all agreed to not expose snippets from here without checking properly first. There will be a detailed report put together from this event afterwards.

The most critical piece of information, however, is that we must not walk on the red parts of the sidewalks here in Münster, as that’s the bicycle lane, and they (the bicyclists) can be ruthless there.

We’ve had a bunch of presentations today with associated Q&A and follow-up discussions. Roy Fielding (HTTP spec pioneer) started out the series with a look at HTTP full of historic details and views from the past, where we are and what we’ve gone through over the years. Patrick McManus (of Firefox HTTP networking) took us through some of the quirks of what a modern-day browser has to do to speak HTTP and topped it off with a quiz regarding Firefox metrics. Did you know that 31% of all Firefox HTTP requests get fulfilled by the cache, or that 73% of all Firefox HTTP/2 connections are used more than once but only 7% of the HTTP/1 ones?

Poul-Henning Kamp (author of Varnish) brought us an intermediary’s point of view on HTTP/2, a slightly pessimistic one, not totally unlike what he’s published before. Stefan Eissing (from Green Bytes) entertained us by talking about his work on writing mod_h2 for Apache httpd (and how it might be included in the coming 2.4.x release) and we got to discuss a bit around timing measurements and their difficulties.

We rounded off the afternoon with a priority and dependency tree discussion topped off with a walk-through of numbers and slides from Kazuho Oku (author of H2O) on how dependency-trees really help and from Moto Ishizawa (from Yahoo! Japan) explaining Firefox’s (Patrick’s really) implementation of dependencies for HTTP/2.

We spent the evening having a 5-course (!) meal at a nice Italian restaurant while trading war stories about HTTP, networking and the web. Now it is close to midnight and it is time to reload and get ready for another busy day tomorrow.

I’ll round off with a picture of where most of the important conversations were had today:

kafeestation

HTTP Workshop 2015, day -1

I’ve traveled to a rainy and gray Münster, Germany, today and checked in to my hotel for the coming week and the HTTP Workshop. Tomorrow is the first day and I’m looking forward to it, probably a little too much.

There is a whole bunch of attendees coming. Simply put, most of the world’s best brains and the most eager implementers of the HTTP stacks that are in use today and will be in use tomorrow (with a bunch of notable absentees of course but you know you’ll be missed). I’m happy and thrilled to be able to take part during this coming week.

More HTTP framing attempts

Previously, in my exciting series “improving the HTTP framing checks in Firefox” we learned that I landed a patch, got it backed out, struggled to improve the checks and finally landed the fixed version only to eventually get that one backed out as well.

And now I’ve landed my third version. The amendment I made this time:

For HTTP content that is content-encoded and compressed, I learned that with deflate compression there is basically no good way for us to know if the content gets prematurely cut off: deflate streams lack the footer too often for it to make any sense to check for one. gzip streams however end with a footer, so incomplete ones are easier to reliably detect. (As was discovered before, the Content-Length: is far too often not updated by the server, so it instead wrongly shows the uncompressed size.)

This (deflate vs gzip) knowledge is now used by the patch, meaning that deflate compressed downloads can be cut off without the browser noticing…
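To illustrate why the gzip case is the detectable one, here is a small Python sketch (purely my own illustration – the actual Firefox code is C++): if the 8-byte gzip footer (CRC-32 plus uncompressed size) is cut off, the decompressor never reaches its end-of-stream state, whereas a deflate response too often carries no such footer to check at all.

    import zlib

    data = b"x" * 100000

    # Produce a gzip stream: DEFLATE data followed by an 8-byte footer.
    comp = zlib.compressobj(wbits=16 + zlib.MAX_WBITS)
    gz = comp.compress(data) + comp.flush()

    def looks_complete(stream):
        """True only if the gzip decompressor reached a proper end of stream."""
        d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)
        try:
            d.decompress(stream)
        except zlib.error:
            return False
        return d.eof

    print(looks_complete(gz))        # True  - intact transfer
    print(looks_complete(gz[:-8]))   # False - the cut-off is detectable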

Will this version of the fix actually stick? I don’t know. There’s lots of bad voodoo out there in the HTTP world and I’m putting my finger right in the middle of some of it with this change. I’m pretty sure I’ve not written my last blog post on this topic just yet… If it sticks this time, it should show up in Firefox 39.

bolt-cutter

Tightening Firefox’s HTTP framing – again

Call me crazy, but I’m at it again. First, a little recap of the previous episodes in this exciting saga:

Chapter 1: I closed the 10+ year old bug that made the Firefox download manager fail to detect failed downloads, simply because Firefox didn’t care if the HTTP/1.1 Content-Length was larger than what was actually saved – for example after the connection was cut off. There were additional details, but that was the bigger part.

Chapter 2: After the fix had been included all the way into a public release, we got a whole slew of bug reports as soon as Firefox 33 shipped, and we had to revert parts of what I did.

Chapter 3.

Will it land before it turns 11 years old? The bug was originally submitted 2004-03-16.

Since chapter two of this drama brought back the original bugs, we still have to do something about them. I fully understand if not that many readers can even keep up with all this back and forth and juggling of HTTP protocol details, but this time we’re putting the stricter frame checks back, with a few extra conditions that allow a few violations to remain but detect and react to others!

Here’s how I addressed this issue. I wanted to make the checks stricter but still allow some common protocol violations.

In particular I needed to allow two flaws that have proven to be somewhat common in the wild and were the reasons the previous fix got backed out again:

A – HTTP chunk-encoded responses that lack the final 0-sized chunk.

B – HTTP gzipped responses where the Content-Length doesn’t match the size of the actual contents.

So, in order to allow A + B and yet be able to detect prematurely cut-off transfers, I decided to do the following (a rough sketch in code follows the list):

  1. Detect incomplete chunks when the transfer has ended. So, if a chunk-encoded transfer ends exactly on a chunk boundary, we consider that fine. Good: this allows case (A) to be considered fine. Bad: it means we won’t detect a certain amount of cut-offs.
  2. When receiving a gzipped response, we consider a gzip stream that doesn’t end fine according to the gzip decompressing state machine to be a partial transfer. IOW: if a gzipped transfer ends fine according to the decompressor, we do not check for size misalignment. This allows case (B) as long as the content could be decoded.
  3. When receiving HTTP that isn’t content-encoded/compressed (as in case 2) and isn’t chunked (as in case 1), perform the size comparison between Content-Length: and the actual size received, and consider a mismatch to mean an NS_ERROR_NET_PARTIAL_TRANSFER error.
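Here is the promised rough sketch of that decision logic, in Python just for illustration – the real thing is of course C++ inside Firefox’s networking code, and the predicate names below are my own made-up simplifications:

    def classify_finished_response(chunked, ended_on_chunk_boundary,
                                   gzipped, gzip_stream_ended_ok,
                                   content_length, bytes_received):
        """Return "ok" or "partial" for a finished HTTP/1.1 response body."""
        if chunked:
            # Rule 1: a missing final 0-sized chunk is tolerated, but stopping
            # in the middle of a chunk counts as a cut-off transfer.
            return "ok" if ended_on_chunk_boundary else "partial"
        if gzipped:
            # Rule 2: trust the gzip decompressor; if the stream ended cleanly,
            # any Content-Length mismatch is ignored.
            return "ok" if gzip_stream_ended_ok else "partial"
        # Rule 3: plain, non-chunked responses must match the announced
        # Content-Length ("partial" maps to NS_ERROR_NET_PARTIAL_TRANSFER).
        if content_length is not None and bytes_received != content_length:
            return "partial"
        return "ok"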

Prefs

When my first fix was backed out, it was actually not removed but was just put behind a config string (a pref, as we call it) named “network.http.enforce-framing.http1”. If you set that to true, Firefox will behave as it did with my original fix applied. It makes the HTTP/1.1 framing fairly strict and standards compliant. In order not to mess with that setting, which has now been around for a while (I’ve also had it set to true in my browser for a while and have not seen any problems doing it this way), I decided to introduce my new changes behind a separate pref.

“network.http.enforce-framing.soft” is the new pref, set to true by default with my patch. It makes Firefox do the detections outlined in 1 – 3, and setting it to false disables those checks again.

Now I only hope there won’t ever be any chapter 4 in this story… If things go well, this will appear in Firefox 38.

Chromium

But how do they solve these problems in the Chromium project? They have slightly different heuristics (with the small disclaimer that I haven’t read their code for this in a while so details may have changed). First of all, they do not allow a missing final 0-chunk. Then, they basically allow any sort of misaligned size when the content is gzipped.

Update: this patch was subsequently backed out again due to several bug reports about it. I have yet to analyze exactly what went wrong.