libbrotli is brotli in lib form

Brotli is a cool new compression algorithm that Firefox now supports in Content-Encoding, that Chrome soon will too, and that Eric Lawrence wrote up a nice summary about.

So I’d love to see brotli supported as a Content-Encoding in curl too, and then we basically just have to write some conditional code to detect the brotli library, add the adaptation code for it and we should be in a good position. But…

There is (was) no brotli library!

It turns out the brotli team just writes their code to be linked with their tools, without making any library or making it easy to install and use for third-party applications.

We can’t have it like that! I rolled up my imaginary sleeves (imaginary since my swag t-shirt doesn’t really have sleeves) and I now offer libbrotli to the world. It is just a bunch of files and a build system that sucks in the brotli upstream repo as a submodule and then builds a decoder library (brotlidec) and an encoder library (brotlienc) out of them. So there’s no code of our own here. Just building on top of the great stuff done by others.

It’s not complicated. It’s nothing fancy. But you can configure, make and make install two libraries, and I can now go on and write a curl adaptation for this library so that we can get brotli support done. Ideally, this (making a library) is something the brotli project will do on their own at some point, but until they do I don’t mind handling this.
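For the curious, it is meant to be the usual routine. A rough sketch of the steps (the repository URL and the autogen.sh bootstrap step here are assumptions, so check the README in the repo for the exact commands):

$ git clone --recursive https://github.com/bagder/libbrotli

$ cd libbrotli

$ ./autogen.sh

$ ./configure

$ make

$ make install

The --recursive flag is what pulls in the brotli upstream repo as the submodule the build then works from.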

As always, dive in and try it out, file any issues you find and send us your pull-requests for everything you can help us out with!

daniel weekly 42, switching off Nagle

Topics

See you at ApacheCon on Friday!

FOSDEM 2016

14% HTTP/2 thanks to nginx?

Brotli everywhere! Firefox, libbrotli

The --libcurl flaw is fixed (and it was GONE from github for a few hours)

http2 explained in Swedish

No, the cheat sheet cannot be in the man page. But…

bug of the week: the http/2 performance fix

TCP_NODELAY in the HTTP/2 FAQ

option of the week: -k

Talking at the GOTO Conference next week

daniel weekly 41, now in markdown

Episode 41, just out:

Topics

me on kodsnack

115 days with RFC

http2 explained in markdown, translations. Swedish?

The curl google tech talk

curl -X

the curl and wget war

curl vs Wget

a curl cheat sheet

curl feature freeze period, release october 7

ApacheCon, October 2

Bug of the week: Downloading a long sequence of URLs results in high CPU usage and slowness

Option of the week: -O

Blabbed about curl at Google

I was invited by Robert Nyman to speak at Google in their Stockholm offices on August 26th, in his Google Tech Talk series.

“curl – a hobby project with a billion users” was the humble title of the talk.

curl is like a Swiss army-knife for HTTP and internet transfers. For over 17 years the project has been run by volunteers and now counts perhaps more than one billion users. Daniel takes us through how it started, how it works and why it never gets done.

Already back in June all the 70 seats were taken, and by the time the talk happened there were more than twice as many people on the waiting list! A totally mind-blowing interest that I mostly credit to Robert’s reach and ability to gather people.

Here’s the video of the talk:

To my great surprise and joy, I got this awesome gift from the host:

(It is an LG Watch Urbane.)

a curl cheat sheet

MOAR DOX!

Whenever we ask curl users what they lack in our project and what we should improve, the response is always clear: documentation.

Here are the most common curl command line options for doing HTTP operations. This is the web page rendered as an image; click it to get to the web version.

curl cheat sheet
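To give a flavor of what the sheet condenses, here are a few of the kinds of HTTP one-liners it covers (the URLs are just placeholders):

Follow redirects and save the file under its remote name:

$ curl -L -O https://example.com/file.tar.gz

POST form data:

$ curl -d 'name=daniel' https://example.com/form

Send a custom request header:

$ curl -H 'Accept: application/json' https://example.com/api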

The source for the document/image is on github. As always, you can help improve it.

Update: see the refreshed version!

Yours truly on “kodsnack”

Kodsnack is a Swedish-speaking weekly podcast with a small team of web/app developers discussing their experiences and thoughts on and around software development.

I was invited to participate a week ago or so, and I had a great time. Not surprisingly, the topics at hand revolved a lot around curl, Firefox and HTTP/2. The recorded episode went live today.

You can find kodsnack episode 120 here, and again, it is all Swedish.

The curl and wget war

“To be honest, I often use wget to download files”

… some people tell me in a lowered voice, as if they were revealing one of their deepest family secrets to me. This is usually done with a slightly scared and a little ashamed look in their eyes – yet still intrigued, as if it took some effort to say it straight to my face. How will I respond to that!?

I enjoy maintaining a notion that there is a “war” between curl and wget. Like the classics emacs vs vi or KDE vs GNOME. That we’re two rivals competing for some awesome prize, with both teams glaring at the other and throwing the occasional insult over the wall. Mostly because people believe it, and I sort of like the image it projects in my brain. So I keep making jokes about it when I can.


In reality though, where some of us spend our lives, there is no such war. There’s no conflict or backstabbing going on. We’re quite simply two open source projects busy doing our own things and we’ve both been doing it for almost two decades. I consider the current wget maintainer, Giuseppe, a friend and I’m friends with the two former maintainers as well.

We have more things in common than what separates us. We’re both members of the fairly exclusive HTTP/FTP command line tool club, which doesn’t have that many members.

We don’t have a lot of developer overlap; there are but a few occasional contributors sending patches to both projects, and I’m one of them. The curl tool has some functional overlap with wget, but really, I strongly recommend everyone to always use the best tool for the job and to use the tool they prefer. If wget does the job, use it. If it does the job better than curl, then switch to wget.

There’s been a line in the curl FAQ for over 15 years: “Never, during curl’s development, have we intended curl to replace wget or compete on its market.” and it tells the truth. We are believers in the Unix philosophy that each tool does what it does best and that you get your job done best by combining the right set of tools. In the curl project we make one command line tool and we make it as good as we can, but we still urge our users to use the best tool for the job even when that means not using our tool.

All this said, there are plenty of things, protocols and features that curl does that you cannot find in wget, and that wget doesn’t do. I’ve detailed some differences in my curl vs wget document. Some things that both can do are much easier to do with curl, or offer you more control or power than the wget counterpart. Those are the things you should use curl for. Use the best tool for the job.

What takes the most effort in the curl project (and frankly what gets used by the largest number of users in the world) is the making of the libcurl transfer library, to which there is no alternative in the wget project. Writing a stable multi-platform library with a sensible and solid API is much harder and a lot more work than writing a command line tool.

OK, I’ll stop tip-toeing and answer the question you really wanted to know while enduring all this text up until this point:

When do you suggest I use wget instead of curl?

For me, wget is for recursive gets and for doing more persistent and patient retries when continuing transfers over really bad connections and networks. But then you really must take my bias into account and ignore anything I say, because I live and breathe the curl life.
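In wget terms, that means command lines roughly like these (the URLs are just placeholders and the exact retry tuning is a matter of taste):

$ wget --recursive --no-parent https://example.com/docs/

$ wget --continue --tries=0 --waitretry=10 https://example.com/big-file.iso

The first one mirrors a directory tree recursively; the second keeps resuming a partial download and retries indefinitely over a flaky network.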

Unnecessary use of curl -X

I’ve grown a bit tired of the web filling up with curl command line examples showing superfluous uses of -X. So I’m putting code where my mouth is.

Starting with curl 7.45.0 (due to ship October 7th 2015), the tool will help users understand that their use of -X (or --request) is very often unnecessary or even downright wrong. If you use -X to specify the same method that would be used anyway, and you have verbose mode enabled, curl will inform you about it and gently push you to stop doing it.

Example:

$ curl -I -XHEAD http://example.com --verbose

The -I option (dash capital i) asks curl to issue a HEAD request. Adding -X HEAD to that command line asks for it again. This option sequence will now make curl say:

Note: Unnecessary use of -X or --request, HEAD is already inferred.

It’ll also inform the user similarly if you do -XGET on a normal fetch or -XPOST when using one of the -d options. Like this:

$ curl -v -d hello -XPOST http://example.com

Note: Unnecessary use of -X or --request, POST is already inferred.

curl will still continue to work exactly like before though; these are only informational texts that don’t alter any behavior. And again, it only says this if verbose mode is enabled.

What -X does

When doing HTTP with curl, the -X option changes the actual method string in the HTTP request. That’s all it does. It does not change behavior accordingly. It’s the perfect option when you want to send a DELETE or TRACE or some other method that curl has no native support for, and you want an easy way to send it. You can use it to make curl send a GET with a request body, or you can use it to have the -d option work even when you want to send a PUT. All good uses.
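For example, these are the kinds of command lines where -X earns its keep (the URLs are just placeholders):

$ curl -X DELETE https://example.com/resource/123

$ curl -X PUT -d '{"name": "daniel"}' https://example.com/resource/123

$ curl -X GET -d 'search=terms' https://example.com/unusual-api

In all three, -X does real work: curl has no dedicated option for DELETE, -d would otherwise imply POST, and a GET with a request body cannot be expressed any other way.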

Why superfluous -X usage is bad

I know several users out there will disagree with this. That’s also why this is only shown in verbose mode and why it only says “Note:” about it. For now.

There are a few problems with the superfluous uses of -X in curl:

One of the most obvious problems is that if you also tell curl to follow HTTP redirects (using -L or --location), the -X option will also be used on the redirected-to requests, which may not at all be what the server asks for or what the user expected. Dropping the -X will make curl adhere to what the server asks for. And if you want to alter what method to use on a redirect, curl already has dedicated options for that, named --post301, --post302 and --post303 – see the example below!
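A concrete illustration (the URL is just a placeholder): if you POST with -d and want the request to remain a POST even after a 301 redirect, the dedicated option is the way to go:

$ curl -L -d 'name=daniel' --post301 https://example.com/submit

Writing -XPOST instead would force POST onto every redirected-to request too, no matter what method the redirect actually asks for.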

But even without following redirects, just throwing in an extra -X “to clarify” leads users into believing that -X serves a function there when it doesn’t. It leads the user to use that -X in his or her next command line too, which then may use redirects or something else that makes it unsuitable.

Perhaps the biggest mistake you can make with -X, and one that now actually leads to curl showing a “warning”, is using -XHEAD on an ordinary command line (one that isn’t using -I). Like this (I’ll display it crossed over to make it abundantly clear that this is a bad command line):

$ curl -XHEAD http://example.com/

… which makes curl act as if it is sending a GET while it actually sends a HEAD. A response to a HEAD never has a body, although it states the size of the body exactly like a GET response would, which mostly leads to curl sitting there waiting for a response body that simply won’t arrive, so it hangs.

Starting with this change, this is the warning it’ll show for the above command line:

Warning: Setting custom HTTP method to HEAD may not work the way you want.
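The fix is of course to drop the -X and use the option that actually means HEAD:

$ curl -I http://example.com/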

http2 explained in markdown

After twelve releases and over 140,000 downloads of my explanatory document “http2 explained”, I eventually did the right thing and converted the entire book over to markdown syntax and put it up on gitbook.com.

Better output formats: epub, MOBI and PDF now, and everything is regenerated on every commit.

Better collaboration: github and regular pull requests work fine with text content, instead of weird binary word processor file formats.

Easier for translators: with plain text commits to aid in tracking changes, and with the images in a separate directory, writing and maintaining translated versions of the book should be less tedious.

I’m amazed and thrilled that we already have Chinese, Russian, French and Spanish translations and I hear news about additional languages in the pipe.

I haven’t yet decided what to do about “releases” now, as we update everything on every push so the latest version is always available to read. Go to http://daniel.haxx.se/http2/ to find out the latest about the document and the most updated version of it.

Thanks everyone who helps out. You’re the best!

HTTP/2 – 115 days with the RFC

Back in March 2015, I asked friends for a forecast on how much HTTP traffic will be HTTP/2 by the end of the year, and as a group we arrived at about 10%. Are we getting there? Remember that RFC 7540 was published on May 15th, so it is still less than 4 months old!

The HTTP/2 implementations page now lists almost 40 reasonably up-to-date implementations.

Browsers

Since then, all the browsers used by the vast majority of people have stated that they have, or soon will have, HTTP/2 support (Firefox, Chrome, Edge, Safari and Opera – including Firefox and Chrome on Android and Safari on iPhone). Even OS support is coming: iOS 9 gets support as we speak and the Windows HTTP library is getting HTTP/2 support. The adoption rate so far is not limited by the clients.

Unfortunately, the wget Summer of Code project to add HTTP/2 support failed.

(I have high hopes for getting an HTTP/2-enabled curl into Debian soon, as they’ve just packaged a new enough nghttp2 library. If things go well, this leads the way for other distros too.)
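Once you have a curl built against a new enough nghttp2, trying it out is a one-liner (this of course assumes the server side speaks HTTP/2 over HTTPS, which nghttp2’s own site does):

$ curl --http2 -I https://nghttp2.org/

Without the --http2 option, or without the nghttp2 backing at build time, curl simply keeps doing HTTP/1.1 as before.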

Servers

Server-side, we will see Apache’s mod_h2 module ship in a public release soon (possibly in an httpd 2.4 series release), nginx has the alpha patch I’ve already mentioned, and Apache Traffic Server (ATS) has shipped h2 support for a while already – my friends tell me that 6.0 has fixed numerous of their initial bugs. IIS 10 for Windows 10 was released on July 29th 2015 and supports HTTP/2. H2O and nghttp2 have shipped HTTP/2 for a long time by now. I would say that the infrastructure offering is starting to look really good, and around the end of the year it’ll look even better than today.

Of course we’re still only seeing HTTP/2 deployed over HTTPS, so HTTP/2 cannot currently get more popular than HTTPS is, but there’s also no real reason for a site using HTTPS today not to provide HTTP/2 in the near future. I think there’s a real possibility that we go above 10% use already in 2015, and at least for browser traffic to HTTPS sites we should be able to expect that almost every single HTTPS site will go HTTP/2 during 2016.

The delayed start of letsencrypt has also delayed more and easier HTTPS adoption.

Still catching up

I’m waiting to see the intermediaries really catch up. I believe Varnish, Squid and HAProxy are all planning to support it to at least some extent, but I’ve not yet seen any of them release a version with HTTP/2 enabled.

I hear there’s still not a good HTTP/2 story on Android and its stock HTTP library, although you can in fact run libcurl HTTP/2 enabled even there, and I believe there are other stand-alone libs for Android that support HTTP/2 too, like OkHttp for example.

Firefox numbers

The latest stable Firefox release right now is version 40. It counts 13% HTTP/2 responses among all HTTP responses. Counted as a share of the transactions going over HTTPS, the share is roughly 27% (13 / 47, since Firefox 40 counts 47% of its transactions as HTTPS)!

This share is of course heavily influenced by a few very high volume sites, but there are also several very high volume sites that have not yet gone HTTP/2, like Facebook, Yahoo, Amazon, Wikipedia and more…

The IPv6 comparison

Right, it is not a fair comparison, but… The first IPv6 RFC has been out for almost twenty years and the adoption is right now at about 8.4% globally.