Category Archives: Technology

Really everything related to technology

HTTP/2 adoption, end of 2015

When I asked the people around me in March 2015 to guess the expected HTTP/2 adoption by now, we ended up with about 10% as a group. OK, the question was vaguely phrased – and what does it really mean? Let’s take a look at some aspects of where we are now.

Perhaps the biggest flaw in the question was that it didn’t specify HTTPS. All of today’s browsers only implement HTTP/2 over HTTPS, so even if every HTTPS site in the world supported HTTP/2, that would still be far from all HTTP requests. Admittedly, browsers aren’t the only HTTP clients…

During the fall of 2015, both nginx and Apache shipped release versions with HTTP/2 support. nginx made it slightly harder for people by forcing users to select either SPDY or HTTP/2 (a technical choice made by them, not really mandated by the protocols), while also still telling users that SPDY is the safer choice.

Let’s Encrypt finally launching its public beta in early December also helps HTTP/2, by removing one of the most annoying HTTPS obstacles: the cost and manual administration of server certs.

Amount of Firefox responses

This is the easiest metric to get since Mozilla offers public access to the data. It is skewed because the telemetry is opt-in and we know that certain kinds of users are less likely to enable it (the more privacy-aware, or those in enterprise environments, for example). It also measures share by volume of responses, which gives popular sites more weight.

Firefox 43 counts no less than 22% of all HTTP responses as HTTP/2 (based on data from Dec 8 to Dec 16, 2015).

Out of all the HTTP traffic Firefox 43 generates, about 63% is HTTPS, which means that almost 35% of all Firefox HTTPS responses are HTTP/2 (the 22% of the total divided by the 63% that is HTTPS)!

Firefox 43 is also negotiating HTTP/2 four times as often as it ends up with SPDY.

Amount of browser traffic

One estimate of how large a share of browsers support HTTP/2 is the caniuse.com number: roughly 70% on a global level. Another metric is the one published by KeyCDN at the end of October 2015. When they enabled HTTP/2 by default for their HTTPS customers worldwide, the share of users negotiating HTTP/2 turned out to be 51%. More than half!

Cloudflare, however, claims the share of browsers with support is at a mere 26%. That’s a really big difference and I personally don’t buy their numbers, as they’re way too pessimistic and give some popular browsers very small market shares. For example: Chrome 41–49 at a mere 15% of the world market, really?

I think the key is rather that it all boils down to what you measure – as always.

Amount of the top-sites in the world

Netcraft bundles SPDY with HTTP/2 in their October report, but it says that “29% of SSL sites within the thousand most popular sites currently support SPDY or HTTP/2, while 8% of those within the top million sites do.” (note the “of SSL sites” in there)

That’s now slightly old data; it came out almost exactly when Apache first shipped its HTTP/2 support in a public release, and nginx hadn’t even had it for a full month yet.

Facebook eventually enabled HTTP/2 in November 2015.

Amount of “regular” sites

There’s still no ideal service that scans a large portion of the Internet to measure adoption level. The httparchive.org site is about to switch to a Chrome-based spider (from IE), and once that goes live I hope we will get better data.

W3Techs’ report says 2.5% of web sites supported it in early December – less than SPDY!

I like how isthewebhttp2yet.com looks so far and I’ve provided them with my personal opinions and feedback on what I think they should do to make that the preferred site for this sort of data.

Using the shodan search engine, we could see that in mid December 2015 there were about 115,000 servers on the Internet using HTTP/2. That’s 20,000 (~24%) more than the isthewebhttp2yet site says. Shodan doesn’t really show percentages, but its numbers could be interpreted to say that slightly over 6% of HTTP/1.1 sites also support HTTP/2.

On Dec 3rd 2015, Cloudflare enabled HTTP/2 for all its customers and they claimed they doubled the number of HTTP/2 servers on the net in that single move. (The shodan numbers seem to disagree with that statement.)

Amount of system lib support

iOS 9 supports HTTP/2 in its native HTTP library. That’s so far the leader in the system libraries department. Does Mac OS X have something similar?

I had expected Windows’ WinINet or other HTTP libs to be up there as well, but I can’t find any details about it online. I hear the Android HTTP libs are not up to snuff either, but since okhttp is now part of Android to some extent, I guess proper HTTP/2 on Android is not too far away?

Amount of HTTP API support

I hear very little about HTTP API providers accepting HTTP/2 in addition to, or even instead of, HTTP/1.1. My perception is that this is basically not happening at all yet.

Next-gen experiments

If you’re using a modern Chrome browser today against a Google service, you’re already (mostly) using QUIC instead of HTTP/2. So you aren’t really adding to the HTTP/2 client side numbers, but you aren’t adding to the HTTP/1.1 numbers either.

QUIC and other QUIC-like (UDP-based with the entire stack in user space) protocols are destined to grow and get used even more as we go forward. I’m convinced of this.

Conclusion

Everyone was right! It is mostly a matter of what you meant and how to measure it.

Future

Recall the words on the Chromium blog: “We plan to remove support for SPDY in early 2016”. For Firefox we haven’t said anything that absolute, but I doubt that Firefox will support SPDY for very long after Chrome drops it.

A 2015 retrospective

Wow, another year has passed. Summing up some things I did this year.

Commits

I don’t really have a good global commit count for the year, but GitHub counts 1,300 commits and I believe the vast majority of my commits are hosted there. Most of them are in curl and curl-oriented projects.

We did 8 curl releases during the year featuring a total of 575 bug fixes. The almost 1,200 commits were authored by 107 different individuals.

Books

I continued working on http2 explained during the year, and after having changed to markdown format it is now available in more languages than ever thanks to our awesome translators!

I started my second book project in the fall of 2015, under the working title everything curl. It is a much larger book effort than the HTTP/2 book: having just passed 23,500 words, which make over 110 pages in the PDF version, almost half of the planned sections are still left to write…

Twitter

I almost doubled my number of twitter followers during this year, now at 2,850 something. While this is a pointless number, reaching out slightly further does have the advantage that I get better responses, which makes me appreciate and get more out of twitter.

Stackoverflow

I’ve continued to respond to questions there, and my total count is now at 550 answers, out of which I wrote about 80 this year. The top-scored answer I wrote during 2015 is for a question that isn’t phrased like one: Apache and HTTP2.

Keyboard use

I’ve pressed a bit over 6.4 million keys on my primary keyboard during the year, and 10.7% of the keys were pressed on weekends.

During the 2,900+ hours in which at least one key press was registered, I averaged 2,206 key presses per hour.

The most excessive key-banging hour of the year started on September 21 at 14:00 and ended with me having reached 10,875 key presses.

The most excessive day was June 9, during which I pushed 63,757 keys.

Talks

These are all 16 occasions on which I talked in front of an audience during 2015. As you will see, the list of topics was fairly limited…

Daniel talking at Apachecon 2015

curl and HTTP/2 by default

Followers of this blog know that I’ve dabbled with HTTP/2 stuff for quite some time, and curl got its initial support for the new protocol version already in September 2013.

curl shipped “proper” HTTP/2 support, as it looks in the final specification, for both the command line tool and the libcurl library before any browser did in a release version. (Firefox was the first browser to ship HTTP/2 in a release version, followed by Chrome. Both did so in early 2015.)

libcurl features an option that lets the application select which HTTP version to use, and it has included HTTP/2 since back then. The command line tool got a corresponding command line option (aptly named --http2) to switch on this protocol version.
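
As a minimal sketch of what this looks like for an application (the option and value names below are the real libcurl ones; the URL is just an example):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* explicitly ask for HTTP/2; curl still falls back to
           HTTP/1.1 if the server can't negotiate it */
        curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                         CURL_HTTP_VERSION_2_0);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }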

This goes hand in hand with curl’s general philosophy: it just does the basics and you have to specifically switch on the extra features you want to use. This conservative approach makes it very reliable protocol-wise and gives applications a large degree of control. The downside is of course that fewer people switch on certain features since they’re simply not aware of them. In the case of HTTP/2 it is further complicated by the fact that only a subset of users have an HTTP/2-capable tool and library, since they may run outdated versions, or recent versions that were built without the necessary prerequisites present (basically the nghttp2 library).
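
Since a given libcurl may or may not have been built with nghttp2 present, an application can check at run-time whether the build it runs against actually has HTTP/2. A small sketch using libcurl’s curl_version_info():

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      /* query run-time information about the libcurl in use */
      curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

      if(info->features & CURL_VERSION_HTTP2)
        printf("this libcurl was built with HTTP/2 support\n");
      else
        printf("no HTTP/2 in this libcurl build\n");
      return 0;
    }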

By default?

libcurl is even more conservative than the curl tool, so switching the default for the library isn’t really on the agenda yet. We are very careful about modifying behavior, so we’re saving that for later. But what about upping the curl tool a notch?

We could switch the default to HTTP/2 as soon as the tool has the powers built in. But for regular clear text HTTP, using the Upgrade: header has a few drawbacks. It makes the requests larger. It complicates matters that most servers don’t do upgrades on HTTP POST requests (and a few others), so there might be several requests before an upgrade actually happens even against a server that supports HTTP/2. And perhaps the strongest reason: we have already found servers that (wrongly, I would claim) reject requests carrying Upgrade: headers they don’t understand. All this taken together, Upgrade over plain HTTP would break too many requests that work fine with HTTP/1.1. And then we haven’t even considered how bad the breakage would be with explicit or transparent proxies in the mix…
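
For reference, this is roughly what such a clear text upgrade attempt looks like on the wire, as specified by RFC 7540 (the host name and the settings payload here are placeholders):

    GET / HTTP/1.1
    Host: example.com
    Connection: Upgrade, HTTP2-Settings
    Upgrade: h2c
    HTTP2-Settings: <base64url-encoded SETTINGS payload>

A server that supports it answers “HTTP/1.1 101 Switching Protocols” and continues in HTTP/2; a server that doesn’t is supposed to simply ignore the extra headers, but as noted above, some instead reject the whole request.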

By default!

To help users with this problem of HTTP upgrades not being feasible by default, we’ve just landed a new alternative to the “set HTTP version” option that asks for HTTP/2 for HTTPS requests only and sticks to HTTP/1.1 for clear text HTTP requests. This option will ship in the next release, to be called 7.47.0, and can of course be tested out before that with git or daily snapshot builds.

Setting this option is next to risk-free, as the HTTP/2 negotiation in TLS is based on one or two TLS extensions (NPN and ALPN) that both have proper fallbacks to HTTP/1.1.
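
In libcurl terms, the new alternative is a value for the existing CURLOPT_HTTP_VERSION option, named CURL_HTTP_VERSION_2TLS (the value that ships in 7.47.0). A minimal sketch of an application opting in, with an example URL:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* HTTP/2 for HTTPS URLs; plain HTTP stays on HTTP/1.1 */
        curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                         CURL_HTTP_VERSION_2TLS);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }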

Said and done. The curl tool now sets this option by default. Using the curl tool on an HTTPS:// URL will from now on use HTTP/2, as soon as both the libcurl it uses and the server it connects to support it!

We will of course keep our eyes and ears open to see if this causes any problems. Let us know what you see!

Hosting RMS again!

I’m thrilled to once again have the honor of organizing a lecture and talk in Stockholm by the legendary RMS himself. (Remember the last time?)

On January 25 2016, RMS will talk about “For a Free Digital Society” in the large Aula Magna room at Stockholm University that seats almost 1200 persons.

See http://www.foss-sthlm.se/rms2016.html for the full invitation and sign-up. Registration is voluntary, but it helps us understand the interest and size of the audience.

Aula Magna. Photo by Kjell Ericson, taken just before the event started the last time.

The most popular curl download – by a malware

During October 2015 the curl web site sent out 1127 gigabytes of data. This was the first time we crossed the terabyte limit within a single month.

Looking at the stats a little closer, I noticed that in July 2015 a particular single package started to get very popular. The exact URL was

http://curl.haxx.se/gknw.net/7.40.0/dist-w32/curl-7.40.0-devel-mingw32.zip

Curious. In October it alone was downloaded more than 300,000 times, accounting for over 70% of the site’s bandwidth. Why?

The downloads came from what appear to be many different locations. They didn’t send any HTTP Referer headers and they used different User-Agent headers. I couldn’t really see a search bot gone haywire or a malicious robot stuck in a crazy mode.

After I shared some of this data in our IRC channel (#curl on freenode), Björn Stenberg stumbled over this AVG slide set, describing how a particular malware works when it infects a computer. Downloading that particular file is a step in its procedure to create a trojan that runs on the host system – see slide 11 for the curl details. The slides also mention that an updated version of the malware comes bundled with the curl library already, which I guess means the hits we see on the curl site are made by the older versions still being run.

Of course, we can’t be completely sure this is the source of the increased downloads of this particular file, but it seems highly likely.

I renamed the file just now to see what happens.

Evil use of good code

We can of course not prevent evil uses of our code. We provide source code and we even host some binaries of curl and libcurl, and both good and bad actors are able to take advantage of our offerings.

This rename won’t prevent a dedicated hacker, but hopefully it can prevent a few new victims from getting this malware running on their machines.

Update: the hacker news discussion about this post.

TCP tuning for HTTP

I’m the author of a brand new internet-draft that I submitted just the other day. The title is TCP Tuning for HTTP, and the intent is to gather a set of current best practices for HTTP implementers, to share and distribute knowledge we’ve gathered over the years. It covers clients, servers and intermediaries, for HTTP/1.1 as well as HTTP/2.
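
To give a flavor of the kind of advice collected: one classic item in this area is disabling Nagle’s algorithm, so that small writes (HTTP/2 control frames, for example) go out immediately instead of waiting to be coalesced. A minimal sketch using the standard sockets API (my illustration here, not text from the draft itself):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* disable Nagle's algorithm on a connected TCP socket so that
       small writes are sent right away instead of being delayed */
    static int enable_nodelay(int sockfd)
    {
      int one = 1;
      return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                        &one, sizeof(one));
    }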

I’m now awaiting, expecting and looking forward to feedback, criticisms and additional content for this document so that it can become the resource I’d like it to be.

How to contribute to this?

  1. ideally, send your feedback to the HTTPbis mailing list,
  2. or submit an issue or pull-request on github for the draft.md,
  3. or simply email me your comments: daniel <at> haxx.se

I’ve been participating first passively and more and more actively over the years within the IETF, mostly in the HTTPbis working group. I think open protocols and open standards are important and I like being part of making them reality. I have the utmost respect and admiration for those who are involved in putting the RFCs together and thus improve the world we live in, step by step.

For a long while I’ve been wanting to step up and “pull my weight” too, to become a better participant in this area, and I’m happy to now finally take this step. Hopefully this is just the first step of many more to come.

(Psssst: While gathering feedback and updating the git version, the current work in progress version of the draft is always visible here.)

h2 performance at Velocity NYC

Tuesday October 13th 2015 I co-presented a talk at the Velocity conference in NYC together with Ragnar Lönn of Loadimpact. Ragnar is a friend of mine and another Swede.

Daniel and Ragnar at Velocity

The presentation was split up in two parts: I laid out the foundations of HTTP/2 in the first part, and Ragnar then presented the results of his performance study in the second.

I think an interesting takeaway from the study is the following.

Existing sites usually have a lot of resources that need to be downloaded. An average site has around one hundred now, and the number is increasing. Those resources often have dependencies or trigger subsequent transfers: an HTML file gets parsed and a CSS file is downloaded, and once the CSS is downloaded it gets parsed and the images specified in it are downloaded. It easily gets even more “steps” like that with javascript that triggers more javascript, which renders parts of the page, which causes more resources to get downloaded.

The Velocity room

Nothing new there, right? But when switching a site like that over to HTTP/2, the performance gain will be capped at a certain percentage no matter how much latency you have to the site, because what limits such a site is the time it takes to get to the end of the slowest “dependency chain”. With a chain that is, say, four resources deep, you pay at least four round-trips no matter how many streams HTTP/2 multiplexes. It is less of an issue with HTTP/1.1, since for resources from the same site browsers won’t do more than 6 requests in parallel anyway (on the 6 separate TCP connections they use).

It becomes evident that in order to make such a site really benefit from HTTP/2, the site would have to be modified ever so slightly so that it delivers its contents with shorter chains, allowing browsers to get more of the resources earlier, in parallel rather than serially.

The actual talk

Splitting a presentation in two parts between two speakers is more difficult than doing it all yourself. I think we did a decent job, and we ended the presentation early. That let us answer a lot of questions; we were in fact quite bombarded with them, all relevant and well considered, and I think we managed to bring more to the room thanks to them. A lot of the questions were about HTTP/2 and deployments in general though, and not exactly about the performance study part of the presentation.

The audience gave us an average score of 3.74 out of 5. Not too shabby. The room seated 360 persons but it wasn’t completely filled up.

GOTO Copenhagen

I was invited to speak at the GOTO Copenhagen conference that took place on October 5-6, 2015: a conference previously unknown to me that attracted over a thousand attendees to a hotel in central Copenhagen. According to the info desk, about 800 of these were from Denmark.

My talk was about HTTP/2 (again), which I guess won’t make any reader of this raise an eyebrow. I’d say there were about 200 persons in the audience, as the room was fairly full. Probably one of the bigger audiences I’ve talked HTTP/2 to so far.

Talked HTTP/2 at ApacheCon

I was invited as one of the speakers at the ApacheCon core conference in Budapest, Hungary on October 1-2, 2015.

Daniel at ApacheCon 2015

I was once again spreading the news about HTTP/2: why it was made, how it works, and of course updated numbers on adoption right now.

The talk was unfortunately not filmed, but I’ve put my slides for this version of my talk online. Readers of this blog and those who’ve seen my presentations before will recognize large parts of it.

Following my talk came a talk about mod_http2, the Apache module for HTTP/2 that will be coming in the upcoming 2.4.17 release of Apache httpd, explained by its author Stefan Eissing. The name of the module was actually a bit of a surprise to me, since it had been known as just mod_h2 for its entire lifetime up until now.

William A Rowe took us through the state of TLS for the main Apache servers, and yeah, the state seems to be pretty good and they’re coming along really well. TLS and then HTTPS is important, as that’s really a prerequisite for HTTP/2.

I also got to listen to Mark Thomas explain the agonies of making Tomcat support HTTP/2, and then perhaps especially how ALPN and a good set of ciphers are hard to get in Java.

Jean-Frederic Clere then explained how to activate HTTP/2 on all the Apache servers (Tomcat, httpd and Traffic Server) and a little about their HTTP/2 state, followed by an explanation of how they worked on Tomcat to make it use OpenSSL for the TLS layer (including ALPN), to avoid the deadlock of getting decent TLS support in Java.

All in all, a great track and splendid talks with deep technical content. Exactly the way I like it. Thanks everyone. Apachecon certainly delivered for me! Twas fun.

libbrotli is brotli in lib form

Brotli is this cool new compression algorithm that Firefox now supports in Content-Encoding, that Chrome will support too soon, and that Eric Lawrence wrote up a nice summary about.

So I’d love to see brotli supported as a Content-Encoding in curl too; then we basically just have to write some conditional code to detect the brotli library, add the adaptation code for it, and we should be in a good position. But…

There is (was) no brotli library!

It turns out the brotli team just writes their code to be linked with their own tools, without making any library, nor making it easy for third party applications to install and use it.

An unmotivated circle saw

We can’t have it like that! I rolled up my imaginary sleeves (imaginary since my swag tshirt doesn’t really have sleeves) and I now offer libbrotli to the world. It is just a bunch of files and a build system that sucks in the brotli upstream repo as a submodule and then builds a decoder library (brotlidec) and an encoder library (brotlienc) out of it. So there’s no code of our own here – just building on top of the great stuff done by others.

It’s not complicated. It’s nothing fancy. But you can configure, make and make install two libraries, and I can now go on and write a curl adaptation for this library so that we can get brotli support done. Ideally, this (making a library) is something the brotli project will do on its own at some point, but until they do I don’t mind handling this.
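
To give an idea of what using the decoder library can look like, here is a hedged sketch of one-shot decompression. It uses the BrotliDecoderDecompress() call and the <brotli/decode.h> header from current brotli upstream; the exact names wrapped by libbrotli at any given time may differ:

    #include <stdio.h>
    #include <stdint.h>
    #include <brotli/decode.h>

    int main(void)
    {
      /* in real use this would be e.g. a Content-Encoding: br
         response body; a single zero byte is just a placeholder */
      const uint8_t compressed[] = { 0 };
      uint8_t output[4096];
      size_t output_size = sizeof(output);

      if(BrotliDecoderDecompress(sizeof(compressed), compressed,
                                 &output_size, output) ==
         BROTLI_DECODER_RESULT_SUCCESS)
        printf("decoded %zu bytes\n", output_size);
      else
        printf("decode failed\n");
      return 0;
    }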

As always, dive in and try it out, file any issues you find and send us your pull-requests for everything you can help us out with!