All posts by Daniel Stenberg

copy as curl

Using curl to perform an operation a user just did with their browser is one of the more common things people ask for help with.

How do you get a curl command line that fetches a resource exactly like the browser just did, nice and easy? Both Chrome and Firefox have provided this feature for quite some time already!

From Firefox

Load the site with Firefox’s network tool open (“Web Developer->Network”). When you see the HTTP traffic, right-click the specific request you want to repeat and select “Copy as cURL” in the menu that appears, like the screenshot below shows. This generates a curl command line and puts it on your clipboard, and you can then paste it into your favorite shell window. This feature is available by default in all Firefox installations.

[Screenshot: “Copy as cURL” in Firefox’s network tool]
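
For illustration, the generated command line typically looks something like this (a made-up example, not copied from a real site; the exact URL, headers and cookies depend on the request you copied):

    curl 'https://example.com/index.html' \
      -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:42.0) Gecko/20100101 Firefox/42.0' \
      -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
      -H 'Accept-Language: en-US,en;q=0.5' \
      -H 'Cookie: session=abc123' \
      --compressed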

From Chrome

When you pop up More tools->Developer tools in Chrome and select the Network tab, you see the HTTP traffic used to get the resources of the site. Right-click the line of the specific resource you’re interested in, select “Copy as cURL”, and it generates a command line on your clipboard. Paste that into a shell to get a curl command line that makes the same transfer. This feature is available by default in all Chrome and Chromium installations.

[Screenshot: “Copy as cURL” in Chrome’s network tab]

On Firefox, without using the devtools

If this is something you’d like to do more often, you probably find it a bit inconvenient and cumbersome to pop up the developer tools just to get a command line copied. Then cliget is the perfect add-on for you, as it adds a new option to the right-click menu so you can get a command line generated really quickly, like in this example where I right-click an image in Firefox:

[Screenshot: cliget in Firefox’s right-click menu]

This post was not bought

[Image: coins]

At times I post blog articles that push the view counter up to and beyond 50,000 views. This puts me in a position where I get offers from companies to mention them or to “cooperate” on further blog posts that would somehow push their agenda or businesses.

I also get simpler offers of adding random ads or “text only information” to specific individual pages on my sites, which some SEO person out there has figured out could attract an audience searching for specific terms.

I’ve even gotten offers from a company to sell off my server logs. Allegedly to help them work on anti-fraud so possibly for a good cause, but still…

This is by no means a “big” blog or site, yet I get a steady stream of individuals and companies offering me money to give up a piece of my soul. I can only imagine what more popular sites get, and it is clear that someone with a less strict standpoint than mine could easily make an extra income that way.

I turn down all those examples of “easy money”.

I want to be able to look you, my dear readers, straight in the eyes when I say that what’s written here are my own words and that the opinions revealed are my own – even if you of course may not agree with me, and I may make mistakes and be completely wrong at times, or even many times. You can rest assured that I made those mistakes on my own and that I was not paid by anyone to make them.

I’ve also removed ads from most of my sites and I don’t run external analytics scripts, minimizing the privacy intrusions and optimizing the contents: the stuff downloaded from my sites is what your browser needs to render the pages. Not heaps of useless crap to show ads or to help anyone track you (in order to show more targeted ads).

I don’t judge others’ actions based on how I decide to run my blog. I’m in a fortunate position to take this stand, I realize that.

Still biased of course

This all said, I’m still employed by a company (Mozilla) that pays my salary, and I work on several projects that are dear to me, so of course I am biased on some subjects. I don’t claim to have an objective view on things and I don’t even try to have that. When I write posts here, they come colored by my background and by what I am.

The most popular curl download – by malware

During October 2015 the curl web site sent out 1127 gigabytes of data. This was the first time we crossed the terabyte limit within a single month.

Looking at the stats a little closer, I noticed that in July 2015 a particular single package started to get very popular. The exact URL was

http://curl.haxx.se/gknw.net/7.40.0/dist-w32/curl-7.40.0-devel-mingw32.zip

Curious. In October it alone was downloaded more than 300,000 times, accounting for over 70% of the site’s bandwidth. Why?

The downloads came from what appear to be many different locations. They didn’t send any HTTP referer headers and they used a variety of User-Agent headers, so it didn’t look like a search bot gone haywire or a malicious robot stuck in some crazy mode.
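
If you want to poke at something like this yourself and you have the raw access logs, a rough sketch like this works, assuming a standard combined log format (this is just an illustration, not a description of how the curl site stats are actually produced):

    # how many times was the file downloaded?
    grep 'curl-7.40.0-devel-mingw32.zip' access.log | wc -l

    # how varied are the User-Agent strings?
    grep 'curl-7.40.0-devel-mingw32.zip' access.log \
      | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head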

After I shared some of this data in our IRC channel (#curl on freenode), Björn Stenberg stumbled over this AVG slide set, describing how a particular piece of malware works when it infects a computer. Downloading that particular file is a step in its procedure to create a trojan that will run on the host system – see slide 11 for the curl details. The slides also mention that an updated version of the malware comes bundled with the curl library already, which I guess means the hits we see on the curl site come from older versions still being run.

Of course, we can’t be completely sure this is the source of the increased downloads of this particular file, but it seems highly likely.

I renamed the file just now to see what happens.

Evil use of good code

We can of course not prevent evil uses of our code. We provide source code and we even host some curl and libcurl binaries, and both good and bad actors are able to take advantage of what we offer.

This rename won’t prevent a dedicated hacker, but hopefully it can prevent a few new victims from getting this malware running on their machines.

Update: the Hacker News discussion about this post.

TCP tuning for HTTP

I’m the author of a brand new internet-draft that I submitted just the other day. The title is TCP Tuning for HTTP, and the intent is to gather a set of current best practices for HTTP implementers and to share and distribute the knowledge we’ve gathered over the years: for clients, servers and intermediaries, and for HTTP/1.1 as well as HTTP/2.
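
To give a flavor of the kind of knobs this area is about, here are two illustrative examples and not a summary of the draft itself: the first is an existing curl option, the second a Linux sysctl.

    # make curl set TCP_NODELAY on its connections, switching off Nagle's algorithm
    curl --tcp-nodelay https://example.com/

    # stop Linux from shrinking the congestion window on idle keep-alive connections
    sudo sysctl -w net.ipv4.tcp_slow_start_after_idle=0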

I’m now awaiting, expecting and looking forward to feedback, criticisms and additional content for this document so that it can become the resource I’d like it to be.

How to contribute to this?

  1. Ideally, send your feedback to the HTTPbis mailing list,
  2. or submit an issue or pull request on github for the draft.md,
  3. or simply email me your comments: daniel <at> haxx.se

Over the years I’ve been participating within the IETF, first passively and then more and more actively, mostly in the HTTPbis working group. I think open protocols and open standards are important and I like being part of making them reality. I have the utmost respect and admiration for those who are involved in putting the RFCs together and thus improve the world we live in, step by step.

For a long while I’ve been wanting to step up and “pull my weight” too, to become a better participant in this area, and I’m happy to now finally take this step. Hopefully this is just the first step of many more to come.

(Psssst: while I gather feedback and update the version in git, the current work-in-progress version of the draft is always visible here.)

h2 performance at Velocity NYC

Tuesday October 13th 2015 I co-presented a talk at the Velocity conference in NYC together with Ragnar Lönn of Loadimpact. Ragnar is a friend of mine and another Swede.

[Photo: Daniel and Ragnar at Velocity]

The presentation was split into two parts: I laid out the foundations of HTTP/2 in the first part, and Ragnar then presented the results of his performance study in the second.

I think an interesting takeaway from the study is the following.

Existing sites usually have a lot of resources that need to be downloaded; an average site has around one hundred now and the number is increasing. Those resources often have dependencies or trigger subsequent transfers: an HTML file gets parsed, then a CSS file is downloaded, and once the CSS is downloaded it gets parsed and the images specified in it are downloaded. It easily gets even more “steps” like that with JavaScript, which can trigger more JavaScript that renders parts of the page and causes more resources to get downloaded.

[Photo: the conference room at Velocity]

Nothing new there, right? But when switching a site like that over to HTTP/2, the performance gain is capped at a certain percentage no matter how much latency you have to the site, because what limits such a site is the time it takes to get to the end of the slowest “dependency chain”. It is less of an issue with HTTP/1.1, since for resources from the same site browsers won’t do more than 6 requests in parallel anyway (on the 6 separate TCP connections they use).

It becomes evident that in order to make such a site really benefit from HTTP/2, the site would have to be modified ever so slightly so that it would deliver its contents with shorter chains and allow the browsers to get more of the resources earlier, in parallel rather than serially.
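
To make that concrete with made-up numbers: if the slowest chain is HTML, then CSS, then a web font, then an image, that is four serialized fetches, so with a 100 ms round trip per step the page cannot finish much faster than roughly 400 ms, no matter how many of the other resources HTTP/2 multiplexes in parallel over its single connection.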

The actual talk

Splitting a presentation into two parts with two speakers is more difficult than doing it all yourself. I think we did a decent job, and we ended the presentation early, which let us answer a lot of questions; we were actually quite bombarded with them, all relevant and well considered, and I think we managed to bring more to the room thanks to them. A lot of the questions were about HTTP/2 and deployments in general, though, and not all strictly about the performance study in the presentation.

The audience gave us an average score of 3.74 out of 5. Not too shabby. The room seated 360 persons but it wasn’t completely filled up.

GOTO Copenhagen

I was invited to speak at the GOTO Copenhagen conference that took place on October 5-6, 2015. A conference previously unknown to me, it attracted over a thousand attendees to a hotel in central Copenhagen. According to the info desk, about 800 of them were from Denmark.

My talk was about HTTP/2 (again), which I guess won’t make any reader of this raise an eyebrow. I’d say there were about 200 people in the audience, as the room was fairly full. Probably one of the bigger audiences I’ve talked HTTP/2 to so far.

Talked HTTP/2 at ApacheCon

I was invited as one of the speakers at the ApacheCon core conference in Budapest, Hungary on October 1-2, 2015.

[Photo: Daniel at ApacheCon 2015]

I was once again spreading the news about HTTP/2: why it was made, how it works and, of course, updated numbers on adoption right now.

The talk was unfortunately not filmed, but I’ve put my slides for this version of my talk online. Readers of this blog and those who’ve seen my presentations before will recognize large parts of it.

My talk was followed by a talk about mod_http2, the Apache module for HTTP/2 that will be coming in the upcoming 2.4.17 release of Apache httpd, explained by its author Stefan Eissing. The name of the module was actually a bit of a surprise to me, since it had been known as just mod_h2 for its entire lifetime up until now.

William A. Rowe took us through the state of TLS for the main Apache servers, and yeah, the state seems to be pretty good and they’re coming along really well. TLS, and with it HTTPS, is important as that’s really a prerequisite for HTTP/2.

I also got to listen to Mark Thomas explain the agonies of making Tomcat support HTTP/2, and then perhaps especially how ALPN and a good set of ciphers are hard to get in Java.

Jean-Frederic Clere then explained how to activate HTTP/2 on all the Apache servers (Tomcat, httpd and Traffic Server) and a little about their HTTP/2 state, followed by an explanation of how they worked on Tomcat to make it use OpenSSL for the TLS layer (including ALPN), to work around the lack of decent TLS support in Java.

All in all, a great track and splendid talks with deep technical content. Exactly the way I like it. Thanks everyone. ApacheCon certainly delivered for me! Twas fun.

libbrotli is brotli in lib form

Brotli is this cool new compression algorithm that Firefox now supports in Content-Encoding, that Chrome will soon support too, and that Eric Lawrence wrote up a nice summary about.

So I’d love to see brotli supported as a Content-Encoding in curl too, and then we basically just have to write some conditional code to detect the brotli library and add the adaptation code for it, and we should be in a good position. But…

There is (was) no brotli library!

It turns out the brotli team just writes their code to be linked with their own tools, without making any library or making it easy for third-party applications to install and use.

[Image: an unmotivated circle saw]

We can’t have it like that! I rolled up my imaginary sleeves (imaginary since my swag t-shirt doesn’t really have sleeves) and I now offer libbrotli to the world. It is just a bunch of files and a build system that pulls in the brotli upstream repo as a submodule and then builds a decoder library (brotlidec) and an encoder library (brotlienc) out of it. So there’s no code of our own here. Just building on top of the great stuff done by others.

It’s not complicated. It’s nothing fancy. But you can configure, make and make install two libraries, and I can now go on and write a curl adaptation for this library so that we can get brotli support done. Ideally, this (making a library) is something the brotli project will do on their own at some point, but until they do I don’t mind handling it.
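
Building it is the usual autotools dance, roughly along these lines, assuming the github.com/bagder/libbrotli repo (check its README for the exact steps):

    git clone https://github.com/bagder/libbrotli
    cd libbrotli
    git submodule update --init   # pulls in the brotli upstream sources
    ./autogen.sh
    ./configure
    make
    sudo make install             # installs the brotlidec and brotlienc libraries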

As always, dive in and try it out, file any issues you find and send us your pull-requests for everything you can help us out with!

daniel weekly 42, switching off Nagle

Topics

See you at ApacheCon on Friday!

FOSDEM 2016

14% HTTP/2 thanks to nginx?

Brotli everywhere! Firefox, libbrotli

The --libcurl flaw is fixed (and it was GONE from github for a few hours)

http2 explained in Swedish

No, the cheat sheet cannot be in the man page. But…

bug of the week: the http/2 performance fix

TCP_NODELAY in the HTTP/2 FAQ

option of the week: -k

Talking at the GOTO Conference next week

daniel weekly 41, now in markdown

Episode 41, just out:

Topics

me on kodsnack

115 days with RFC

http2 explained in markdown, translations. Swedish?

The curl google tech talk

curl -X

the curl and wget war

curl vs Wget

a curl cheat sheet

curl feature freeze period, release october 7

ApacheCon, October 2

Bug of the week: Downloading a long sequence of URLs results in high CPU usage and slowness

Option of the week: -O