Call me crazy, but I’m at it again. First, a quick recap of the previous episodes in this exciting saga:
Chapter 1: I closed the 10+ year old bug that made the Firefox download manager fail to detect failed downloads, simply because Firefox didn’t care if the HTTP 1.1 Content-Length was larger than what was actually saved, for example after the connection had been cut off prematurely. There were additional details, but that was the bigger part.
Chapter 2: After the fix had been included all the way into a public release, we got a whole slew of bug reports as soon as Firefox 33 shipped, and we had to revert parts of my fix.
Will it land before it turns 11 years old? The bug was originally submitted 2004-03-16.
Since chapter two of this drama brought back the original bugs, we still have to do something about them. I fully understand if not many readers can keep up with all this back and forth and juggling of HTTP protocol details, but this time we’re putting back the stricter frame checks, with a few extra conditions that allow some violations to remain while detecting and reacting to others.
Here’s how I addressed the issue: I wanted to make the checks stricter while still allowing some common protocol violations. In particular, I needed to allow two flaws that have proven to be somewhat common in the wild and were the reasons the previous fix was backed out:
A – HTTP chunk-encoded responses that lack the final 0-sized chunk.
B – HTTP gzipped responses where the Content-Length is not the same as the actual contents.
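To make the two flaws concrete, here is a small illustration in Python. The byte strings and numbers are made up for the example, not captures from real servers:

```python
import gzip

# Case A: a chunk-encoded body that stops right after its last data
# chunk, without the terminating "0\r\n\r\n" chunk a compliant sender
# would append:
case_a = (
    b"7\r\n"
    b"Mozilla\r\n"
    b"9\r\n"
    b"developer\r\n"
    # missing here: b"0\r\n\r\n"
)

# Case B: a gzipped body that decodes fine, but whose on-the-wire byte
# count has nothing to do with what the (bogus) Content-Length claimed:
case_b_body = gzip.compress(b"hello world")
case_b_content_length = 12345  # wrong on purpose

assert gzip.decompress(case_b_body) == b"hello world"  # decodes cleanly
assert len(case_b_body) != case_b_content_length       # size mismatch
```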
So, in order to allow both A and B and yet be able to detect prematurely cut-off transfers, I decided to:
1. Detect incomplete chunks when the transfer has ended. If a chunk-encoded transfer ends exactly on a chunk boundary, we consider that fine. Good: this allows case (A) to be considered fine. Bad: it means we won’t detect a certain class of cut-offs.
2. When receiving a gzipped response, consider a gzip stream that doesn’t end cleanly according to the gzip decompressor’s state machine to be a partial transfer. In other words: if a gzipped transfer ends fine according to the decompressor, we do not check for size misalignment. This allows case (B) as long as the content could be decoded.
3. When receiving HTTP that is neither content-encoded/compressed (which check 2 handles) nor chunked (which check 1 handles), compare the Content-Length: header with the actual size received and treat a mismatch as an NS_ERROR_NET_PARTIAL_TRANSFER error.
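The three checks can be sketched roughly like this in Python. This is not the actual C++ Necko code; the function name, parameters, and integer return codes are invented for illustration:

```python
import zlib

# Invented stand-ins for the real nsresult codes:
NS_OK = 0
NS_ERROR_NET_PARTIAL_TRANSFER = -1

def classify_transfer(body, *, chunked, ended_on_chunk_boundary,
                      gzipped, content_length):
    if chunked:
        # Check 1: tolerate a missing final 0-chunk, but flag a
        # transfer that stops in the middle of a chunk.
        if ended_on_chunk_boundary:
            return NS_OK
        return NS_ERROR_NET_PARTIAL_TRANSFER
    if gzipped:
        # Check 2: trust the decompressor. If the gzip stream ends
        # cleanly, ignore any Content-Length mismatch.
        d = zlib.decompressobj(16 + zlib.MAX_WBITS)  # gzip wrapper
        try:
            d.decompress(body)
        except zlib.error:
            return NS_ERROR_NET_PARTIAL_TRANSFER
        return NS_OK if d.eof else NS_ERROR_NET_PARTIAL_TRANSFER
    # Check 3: plain body, so compare against Content-Length.
    if content_length is not None and len(body) != content_length:
        return NS_ERROR_NET_PARTIAL_TRANSFER
    return NS_OK
```

For example, a gzipped body that decodes fully would pass even with a wrong Content-Length, while the same body cut short would be classified as a partial transfer.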
When my first fix was backed out, it was not actually removed but just put behind a config string (a pref, as we call it) named “network.http.enforce-framing.http1”. If you set that to true, Firefox behaves as it did with my original fix applied, making the HTTP/1.1 framing fairly strict and standards compliant. That setting has now been around for a while (and I’ve had it set to true in my own browser for some time without seeing any problems), so rather than messing with it, I decided to introduce my new changes behind a separate pref.
“network.http.enforce-framing.soft” is the new pref, set to true by default with my patch. It makes Firefox perform the detections outlined in 1–3 above; setting it to false disables those checks again.
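For anyone wanting to experiment, prefs like these can be flipped in about:config, or set in a user.js file; a sketch (note that the soft pref only exists in builds carrying this patch):

```js
// user.js – Firefox prefs, one call per pref
user_pref("network.http.enforce-framing.http1", false); // strict framing off (the default)
user_pref("network.http.enforce-framing.soft", true);   // the new softer checks
```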
Now I only hope there won’t ever be any chapter 4 in this story… If things go well, this will appear in Firefox 38.
But how does the Chromium project solve these problems? They use slightly different heuristics (with the small disclaimer that I haven’t read their code for this in a while, so details may have changed). First of all, they do not allow a missing final 0-chunk. Beyond that, they basically allow any sort of misaligned size when the content is gzipped.
Update: this patch was subsequently backed out again due to several bug reports about it. I have yet to analyze exactly what went wrong.
4 thoughts on “Tightening Firefox’s HTTP framing – again”
Neat! For the GZIP case, are you checking the GZIP-footer’s specified size and CRC? (Most browsers’ GZIP implementations seem to allow those to be wrong.)
FWIW, IE10’s handling of these scenarios is described here: http://blogs.msdn.com/b/ieinternals/archive/2012/07/16/content-length-and-transfer-encoding-validation-in-ie10-download-manager-couldnt-be-downloaded-retry-cancel.aspx
Eric, the check basically just makes sure that the gzip state machine ended “nicely”
(https://mxr.mozilla.org/mozilla-central/source/netwerk/streamconv/converters/nsHTTPCompressConv.cpp#97) and if so it skips the size check later on.
Ending nicely just means the decompressor reached the end of the gzip stream by the time the request stops. The gzip response size is then completely ignored and not compared to Content-Length or anything else.
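Python’s zlib can illustrate what “ending nicely” means here (this is not Firefox’s code, just an analogous check): the decompressor only reports end-of-stream once the complete gzip stream, including its CRC32/size trailer, has been consumed.

```python
import gzip
import zlib

data = gzip.compress(b"some payload")

# A stream that was received in full: the decompressor reaches
# end-of-stream, so it "ended nicely".
complete = zlib.decompressobj(16 + zlib.MAX_WBITS)  # gzip wrapper mode
complete.decompress(data)
print(complete.eof)   # True

# The same stream with part of its 8-byte trailer cut off: no
# end-of-stream, so it looks like a partial transfer.
cut_off = zlib.decompressobj(16 + zlib.MAX_WBITS)
cut_off.decompress(data[:-6])
print(cut_off.eof)    # False
```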
It seems silly to require a final 0-sized chunk when the content-length header is present. But on the other hand it also seems silly to use chunked encoding when the content-length is known. 🙂
Anders: Content-Length _with_ chunked-encoding is very rare and is not allowed as per the spec.