All posts by Daniel Stenberg

target-independent libcurl headers

We write libcurl to be very portable. It can be built and run on virtually every operating system with a CPU architecture that is at least 32 bits, from some of the most legacy Unixes of the early 90s to the most recent updates of all the popular systems, including the widespread mobile platforms.

Type sizes on different architectures

In the early 2000s we added support to libcurl for “large files” (back in the days when that support wasn’t always present in operating systems) and large variable types (beyond 32 bits), made to work for applications and libcurl alike, and to work the same way for libcurl-using applications independently of which platform you compile the code on.

We started out using compiler/system defines to figure out for example the size of the native “off_t” type to know if it was 32 bit or 64 bit. That turned out to be problematic as users accidentally ended up in situations where the library considered a type to be one size and the application considered it to be another, leading to unexpected behaviors at best or downright crashes and misery.
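To make the failure mode concrete, here is a minimal, hypothetical C sketch (not curl code) of how a type whose size depends on build settings can make a library and an application disagree on data layout:

#include <stdio.h>
#include <sys/types.h>

/* If the library was built seeing a 64-bit off_t but the application is
   compiled with a 32-bit one (or vice versa), the two sides disagree on the
   size and layout of anything that uses the type. */
struct transfer_state {
  off_t progress;   /* 4 or 8 bytes depending on compiler flags/platform */
  int   flags;
};

int main(void)
{
  /* On a 32-bit Linux system this typically prints 8 by default and 16 when
     built with -D_FILE_OFFSET_BITS=64. */
  printf("sizeof(struct transfer_state) = %u\n",
         (unsigned int)sizeof(struct transfer_state));
  return 0;
}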

Determine at lib build-time

The fix to that run-time size-of-variables confusion was to generate a fixed “outcome” at build-time that would then be used by both the library and applications so that they could never again disagree on this. The obvious downside here was that we had to generate this target specific information into public headers for the library (known as curl/curlbuild.h). We didn’t like doing it this way, but this approach was a better situation than before as it caused fewer headaches for users.

Instead, we had now created problems for system packagers who wanted to provide one set of curl headers and allow users to build, for example, either a 32 bit or a 64 bit version of their application – they had to generate two sets of curl headers. The same issue hit anyone keeping the headers on a shared file system used by many different systems. Inconvenient. But as this solution didn’t hurt too many people, was cumbersome to fix and yet possible to work around, it has remained in the curl project since August 7, 2008 (commit 14240e9e109fe6af1).

Determine at app build-time

In March 2017 (commit 9506d01ee5) we introduced a new take on this problem: a new header that checks system defines and determines all the necessary information at the time the application is compiled, instead of at the time libcurl is compiled. We call it curl/system.h.
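As a rough idea of what such a header can do (a simplified sketch with made-up names, not the actual contents of curl/system.h), it can select the 64-bit type purely from defines that the compiler provides when the application is built:

/* Simplified sketch: pick a 64-bit offset type from compiler/platform
   defines visible at application compile time, so the library and the
   application can never disagree. The real curl/system.h covers far more
   compilers and platforms than this. */
#if defined(_MSC_VER)
typedef __int64 example_off_t;
#define EXAMPLE_FORMAT_OFF_T "I64d"
#elif defined(__LP64__) || defined(_LP64)
typedef long example_off_t;            /* long is 64 bits on LP64 systems */
#define EXAMPLE_FORMAT_OFF_T "ld"
#else
typedef long long example_off_t;       /* fall back to long long */
#define EXAMPLE_FORMAT_OFF_T "lld"
#endif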

The goal was to replace the generated curlbuild.h header, but since it would cause serious problems if this new header produced any different results (like variable type sizes) than the old one, it was a risky move. We needed extra seat-belts for this.

We therefore added the new header next to the old one, in parallel, and introduced a test case in the curl test suite that verifies the output from the two systems and makes sure that they agree, keeping both present and coexisting in the curl source tree. The curl/system.h file was of course not yet used for anything real, but it was tested by everyone who runs the test suite – to make sure it isn’t awful.

We think the new header file has now proven itself worthy. We have not gotten any recent reports on problems with test 1541. It is time to cut out the old header system and launch the new!

Starting in release curl 7.55.0, due to be released on August 9, 2017, the header files will finally again be truly platform agnostic. It took us nine years but we finally did it! The bulk of the change is made in this commit.

Just another detail in the machinery.

HTTP Workshop s03e02

(Season three, episode two)

Previously, on the HTTP Workshop. Yesterday ended with a much appreciated group dinner and now we’re back energized and eager to continue blabbing about HTTP frames, headers and similar things.

Martin from Mozilla talked about how “connection management is hard“. Parts of the discussion were around the HTTP/2 connection coalescing that I’ve blogged about before. The ORIGIN frame is a draft for a suggested way for servers to more clearly announce which origins they can answer for on a connection, which should reduce how often 421 responses are needed. The ORIGIN frame overrides DNS and will allow coalescing even for origins that don’t otherwise resolve to the same IP addresses. Other topics included the Alt-Svc header, a suggested CERTIFICATE frame and how an HTTP/2 server knows which origins it can do PUSH for.

A lot of positive words were expressed about the ORIGIN frame. Wildcard support?

Willy from HA-proxy talked about his Memory and CPU efficient HPACK decoding algorithm. Personally, I think the award for the best slides of the day goes to Willy’s hand-drawn notes.

Lucas from the BBC talked about usage data for iplayer: how much data and how many requests they serve, and how their largest share of users are “non-browsers”. Lucas mentioned their work on writing a libcurl adaptation to make gstreamer use it instead of libsoup. Lucas’s talk triggered a lengthy discussion on what the needs are and how (if at all) you can divide clients into browsers and non-browsers.

Wenbo from Google spoke about Websockets and showed usage data from Chrome. The median websockets connection time is 20 seconds, around 10% are shorter than 0.5 seconds, and at the 97th percentile they live over an hour. The connection success rates for Websockets are depressingly low when done in the clear, while the situation is better when done over HTTPS. For some reason the success rate on Mac seems to be extra low, and Firefox telemetry seems to agree. Websockets over HTTP/2 (or not) is an old hot topic that brought us back to reiterate issues we’ve debated a lot before. This time we also got a lovely and long side track into web push and how that works.

Roy talked about Waka, an HTTP replacement protocol idea and concept that Roy has been carrying around for a long time (he started this in 2001) and which he is now coming back to do actual work on. A big part of the discussion focused on the wakli compression ideas: what the idea is and how it could be done and evaluated. Also, Roy is not a fan of content negotiation and wants it done differently, so he’s addressing that in Waka.

Vlad talked about his suggestion for how to do cross-stream compression in HTTP/2 to significantly enhance the compression ratio when, for example, switching to many small resources over h2 compared to a single huge resource over h1. The security aspects of this feature were what caught most people’s attention and drove the following discussion. How can we make sure this doesn’t leak sensitive information? What protocol mechanisms exist, or can we invent, to help make this work in a way that is safer (by default)?

Trailers. This is a favorite topic that we’ve discussed before and that has now resurfaced. There are people around the table who’d like to see support for trailers, and we discussed the same topic at the HTTP Workshop in 2016 as well. The corresponding issue on trailers filed in the fetch github repo shows a lot of the concerns.

Julian brought up the subject of “7230bis” – when and how do we start the work. What do we want from such a revision? Fixing the bugs seems like the primary focus. “10 years is too long until update”.

Kazuho talked about “HTTP/2 attack mitigation” and how to handle clients doing many parallel slow POST requests to a CDN, when the origin server behind it runs a new separate process for each upload.

And with this, the day and the workshop 2017 was over. Thanks to Facebook for hosting us. Thanks to the members of the program committee for driving this event nicely! I had a great time. The topics, the discussions and the people – awesome!

HTTP Workshop – London edition. First day.

The HTTP workshop series is back for a third time this northern hemisphere summer. The selected location for the 2017 version is London and this time we’re down to a two-day event (we seem to remove a day every year)…

Nothing in this blog entry is a quote to be attributed to a specific individual; these are my interpretations and paraphrasings of things said or presented. Any mistakes or errors are all mine.

At 9:30 this clear Monday morning, 35 persons sat down around a huge table in a room in the Facebook offices. Most of us are the same familiar faces that have already participated in one or two HTTP workshops, but we also have a set of people this year who haven’t attended before. Getting fresh blood into these discussions is certainly valuable. Most major players are represented, including Mozilla, Google, Facebook, Apple, Cloudflare, Fastly, Akamai, HA-proxy, Squid, Varnish, BBC, Adobe and curl!

Mark (independent, co-chair of the HTTP working group as well as the QUIC working group) kicked it all off with a presentation on quic and where it is right now in terms of standardization and progress. The upcoming draft-04 is becoming the first implementation draft even though the goal for interop is set basically at handshake and some very basic data interaction. The quic transport protocol is still in huge flux and things have not settled enough for it to be interoperable to a very high level right now.

Jana from Google presented on quic deployment over time and how it right now accounts for about 7% of internet traffic. The Android Youtube app’s switch to QUIC last year showed a huge bump in usage numbers. Quic is a lot about reducing latency and numbers show that users really do get a reduction. By that nature, it improves the situation most for those who currently have the worst connections.

It doesn’t solve first world problems, this solves third world connection issues.

The currently observed 2x CPU usage increase for QUIC connections as compared to h2+TLS is mostly blamed on the Linux kernel, which apparently is not nearly as good at this job as it should be. Things have clearly been more optimized for TCP over the years, leaving room for improvement in the UDP areas going forward. “Making kernel bypassing an interesting choice”.

Alan from Facebook talked about header compression for quic and presented data, graphs and numbers on how HPACK(-for-quic), QPACK and QCRAM compare when used for quic in different networking conditions and scenarios. Those are the three current header compression alternatives open for quic, and Alan first explained the basics behind them and then how they compare when run in his simulator. The current HPACK version (adapted to quic) seems to be out of the question for head-of-line-blocking reasons, while the QCRAM suggestion seems to run well but has two main flaws: it requires an awkward layering violation and a possibly annoying reframing requirement on resends. Clearly some more experiments can be done, possibly with a hybrid where some QCRAM ideas are brought into QPACK. Alan hopes to get his simulator open sourced in the coming months, which will then allow more people to experiment and reproduce his numbers.

Hooman from Fastly talked about problems and challenges with HTTP/2 server push, the 103 early hints HTTP response and cache digests. This took the discussions on push into the weeds and into the dark protocol corners we’ve been in before, and all sorts of ideas and suggestions were brought up. Some of them have been discussed before without having been resolved yet, and some ideas were new, at least to me. The general consensus seems to be that push is fairly complicated and there are a lot of corner cases and murky areas that haven’t been clearly documented, but it is a feature that is now being used, and for the CDN use case it can help with a lot more than “just an RTT”. But is perhaps the 103 response good enough for most of the cases?

The discussion on server push and how well it fares is something the QUIC working group is interested in, since the question was asked already this morning if a first version of quic could be considered to be made without push support. The jury is still out on that I think.

ekr from Mozilla spoke about TLS 1.3 and 0-RTT: what the TLS 1.3 handshake looks like and how applications and servers can take advantage of the new 0-RTT and “0.5-RTT” features. TLS 1.3 has already passed WGLC and there are now “only” a few issues pending to get solved. Taking advantage of 0-RTT in an HTTP world opens up interesting questions and issues as HTTP request resends and retries are becoming increasingly prevalent.

Next: day two.

curl survey 2017 – analysis

The results are in. The curl user survey 2017 was up for a little over two weeks and attracted answers from a total of 513 individuals. This was a much better turnout than last year’s disappointment – thank you everyone!

The 2017 survey analysis as a 40 page PDF

This year we learned that the distribution curve for the number of protocols people use curl for looks like this:

And the interest in getting even more protocols supported is still high, if not very high, and I think the most requested protocol is a bit surprising:

The outcome of the survey is the analysis document, in which I’ve summarized my thoughts and added a bunch of graphs and other diagrams that illustrate the numbers, in particular compared to previous years’ results. It became a 40 page thing as I’ve tried to be detailed and also somewhat elaborate in commenting on and reacting to a lot of the write-in suggestions and comments!

If you want to draw your own conclusions or just verify mine, I also offer you the following source material:

  1. The pristine 2017 CSV file as downloaded from Google, with all the results from the survey.
  2. To compare with last year, I also offer you the 2016 CSV file.
  3. During the work of producing the analysis document, I imported the 2017 CSV file into libreoffice and fiddled with a lot of numbers and graphs. Most of that didn’t end up in the document, but you can find the raw 2017 survey libreoffice calc file and verify the outcomes or the formulas used.

Number of semicolon separated words in libreoffice calc

=LEN(TRIM(A1))-LEN(SUBSTITUTE(TRIM(A1),";",""))+1

The length of the entire A1 field (white space trimmed) minus the length of the A1 field with all the semicolons removed, plus one.

A string in A1 that looks like “A;B;C” will give the number 3.

I had to figure this out and I thought I’d store it here for later retrieval. Maybe someone else will enjoy it as well. The same formula works in Excel too I think.

curl: 5000 stars

The curl project has been hosted on github since March 2010, and we just now surpassed 5000 stars. A star on github of course being a completely meaningless measurement for anything. But it does show that at least 5000 individuals have visited the page and shown appreciation this low impact way.

5000 is not a lot compared to the really popular projects.

On August 12, 2014 I celebrated us passing 1000 stars with a beer. Hopefully it won’t be another seven years until I can get my 10000 stars beer!

The curl user survey 2017

The annual survey for curl and libcurl users is open. The 2017 edition has some minor edits since last year but is mostly the same set of questions used before, to help us detect changes and trends over time.

If you use curl or libcurl, in any way, shape or form, please consider spending a few minutes of your precious time on this. Your input helps us understand where we are and in which direction we should go next.

Fill in the form!

The poll is open fourteen days from Friday May 12th until midnight (CEST) May 27th 2017. All data we collect is non-personal and anonymous.

To get some idea of what sort of information we extract and collect from the results, have a look at the analysis of last year’s survey.

The subsequent analysis of the 2017 user survey.

Improving timers in libcurl

A few years ago I explained the timer and timeout concepts of the libcurl internals. A decent prerequisite for this post really.

Today, I introduced “timer IDs” internally in libcurl so that every expiring timer we use has to specify which timer it is with an ID, and we only have a set number of IDs to select from. Turns out there are only about 10 of them. 10 different ones. Per easy handle.

With this change, we now only allow one running timer for each ID, which then makes it possible for us to change timers during execution so that they never “fire” in vain like they used to do (since previously we couldn’t stop or remove them before they expired). This change makes event loops slightly more efficient, since they now end up getting far fewer “spurious” timeouts that happened only because we had no infrastructure to prevent them.

Another benefit of only keeping one single timer for each ID in the list of timers is that the dynamic list of running timers immediately becomes much shorter. This is because many times the same timer ID was used again and we would then add a new node to the list, so a timer that had one purpose would expire twice (or more). Now it no longer works like that. In some typical uses I’ve tested, I’ve seen the list shrink from a maximum of 7-8 nodes down to a maximum of 1 or 2.

Finally, since we now have a finite number of timers that can be set at any given time and we know the maximum number is fairly small, I could make the timer code completely skip using dynamic memory. Allocating 10 tiny structs as part of the main handle allocation is more efficient than doing tiny mallocs() for them one by one. In a basic comparison test I’ve run, this reduced the total number of allocations from 82 to 72 for “curl localhost”.
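A minimal sketch of the idea (with illustrative names, not libcurl’s actual internals): with a small fixed set of timer IDs, the timers can live in a fixed-size array inside the handle, and setting an ID again simply overwrites its slot.

/* Illustrative sketch, not libcurl source: a fixed set of timer IDs and one
   slot per ID means no dynamic allocation and no stale, duplicate timers. */
typedef enum {
  TIMER_CONNECT,
  TIMER_REQUEST_TIMEOUT,
  TIMER_SPEEDCHECK,
  /* ...roughly ten IDs in total... */
  TIMER_LAST                 /* number of IDs == size of the array below */
} timer_id;

struct timer_slot {
  long long expire_at_us;    /* absolute expiry time in microseconds, 0 = unset */
};

struct easy_handle {
  struct timer_slot timers[TIMER_LAST];  /* allocated with the handle itself */
};

/* Re-arming an ID replaces the old expiry, so the old one can never "fire" */
static void timer_set(struct easy_handle *h, timer_id id, long long expire_at_us)
{
  h->timers[id].expire_at_us = expire_at_us;
}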

This change will be included in the pending curl release targeted to ship on June 14th 2017, possibly called version 7.54.1.

Those are all in a tree

As explained previously, the above description of timers covers the set of timers kept for each individual easy handle. With libcurl you can add an unlimited number of easy handles to a multi handle (to perform lots of transfers in parallel), and the multi handle then keeps a self-balancing splay tree with the nearest-in-time timer of each individual easy handle as nodes in the tree, so that it can quickly and easily figure out which handle needs attention next and when in time that is.

The illustration below shows a captured imaginary moment in time when there are five easy handles in different colors, all doing their own separate transfers. Each easy handle has three private timers set. The tree contains five nodes and the root of the tree is the node representing the easy handle that needs to be taken care of next (in time). It also means we immediately know exactly how much time there is left until libcurl needs to act next.

Expiry

As soon as “time N” occurs or expires, libcurl takes care of what the yellow handle needs to do and then removes that timer node from the tree. The yellow handle then advances its next timer to first in line, and the tree gets re-adjusted accordingly so that the new yellow first-node gets re-inserted at the right place in the tree.

In our imaginary case here, the new yellow time N (formerly known as N + 1) is now later in time than L, but before M. L is now nearest in time and the tree has now adjusted to look something like this:

Since the tree only really cares about the root timer for each handle, you also see how adding a new timeout to a single easy handle that isn’t the next in time is a really quick operation: it just adds a node to a linked list kept per that specific handle. That linked list now has a maximum length capped at the total number of different timers: 10.
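Putting the pieces together, here is a rough sketch of the data layout described above (names are illustrative, not libcurl’s internals): each easy handle keeps a short sorted list of its pending timers, and the multi handle’s splay tree holds one node per easy handle, keyed on that handle’s nearest expiry.

/* Illustrative sketch, not libcurl source. */
struct pending_timer {
  long long             expire_at_us;  /* absolute expiry time */
  int                   id;            /* one of the ~10 timer IDs */
  struct pending_timer *next;          /* sorted by time, at most ~10 entries */
};

struct easy_handle {
  struct pending_timer *soonest;       /* head = this handle's nearest timer */
};

struct splay_node {                    /* node in the multi handle's tree */
  long long           key_us;          /* == handle->soonest->expire_at_us */
  struct easy_handle *handle;
  struct splay_node  *smaller;         /* subtree with earlier expiry times */
  struct splay_node  *larger;          /* subtree with later expiry times */
};

/* When the root node expires: act on its handle, pop that handle's soonest
   timer, and re-insert the node keyed on the new head (if any remains). */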

Straight-forward!

A curl delivery network

I’ve run my own public web sites on hardware I’ve administered myself for over twenty years now. I’ve hosted the curl web site myself since its inception.

The curl web site at curl.haxx.se has recently been delivering roughly 1.5 terabytes of data to the world per month. The CA bundle we convert to PEM from the Mozilla source code is alone downloaded more than 100,000 times per day. Occasional blog entries I’ve posted here on my blog have climbed very fast on popular sites such as Hacker News and Reddit, and have resulted in intense visitor storms hitting this same server – sometimes reaching visitor counts above 200,000 “uniques” – most of them within the first few hours of publication. At times, those visitor spikes have effectively brought the server to its knees.

Yes, my personal web site and the curl web site share the same physical server. It also hosts more than a dozen other sites and numerous services for our own pleasure and fun, providing services for a handful of different open source projects. So when the server has to cease doing work because it runs out of memory or hits other resource constraints, that causes interruptions all over. Oh yes, and my email doesn’t reach me.

Inconvenient and annoying.

The server

Haxx owns and runs this co-located server that we have a busload of web servers on – for the good of the projects and people that run things on it. This machine’s worst bottleneck is available RAM and perhaps I/O performance. Every time the server slows down to a crawl due to network traffic overload we discuss how we should upgrade it. Installing a new machine and transferring over all the sites and services is work. Work that none of us at Haxx are very happy to volunteer to do. So it hasn’t been done yet, and frankly the server handles the daily load just fine and without even a blink. Which is ninety nine point something percent of the time…

Haxx pays for a certain amount of network traffic so as long as we’re below some threshold we remain paying the same monthly fee. We don’t want to increase the traffic by magnitudes as that would cost more.

The specific machine, which sits deep inside a server room in Stockholm, Sweden, is a five(?) year old Dell Poweredge E310 with an Intel Xeon X3440 at 2.53GHz and 8GB of RAM. This model is shown in the image at the top.

Alternatives that haven’t helped

Why not a mirror system? We had a fair number of curl site mirrors a few years ago, but it never worked well because they were always less reliable than the main site and they often turned stale and out of sync with the master site, which eventually just hurt users.

They also trick visitors into bookmarking or otherwise going back to the mirror site instead of the real one, and there were always the annoying people who couldn’t resist filling the mirror with ads and stuff. Plus, they didn’t help much with the storms hitting the main site.

Why not a cloud server? Because with the amount of services, servers and various things we do on our server, it would be inconvenient and expensive. But perhaps even more because we started out like this so we have invested time and energy into the infrastructure as it works right now. And I enjoy rowing my own boat!

The CDN

Fastly reached out and graciously offered to help us handle the load. Both on account of the traffic amounts, but also to save our machine from struggling this hard the next time I write something that tickles people’s curiosity (or rage) to the level where several thousand visitors want to read the same article at the same time.

Starting now, the curl.haxx.se and the daniel.haxx.se web sites are fronted by Fastly. It should give web site visitors from all over the world faster response times and it will make the site more reliable and less likely to have problems due to traffic load going forward.

In case you’re not familiar with what a CDN is, a simplified explanation would say it is a globally distributed network of reverse proxy servers deployed in multiple data centers. These CDN servers front the Internet and will to the largest extent possible serve the visitors with the right content directly from their own caches instead of them reaching the actual lowly backend server I run that hosts the original content. Fastly has lots of servers across the globe for this purpose. Users who are a long way away from Sweden will probably be the ones who will notice this change the most, as you may suddenly find haxx.se content much closer (network round-trip wise) than before.

Standards

These new servers will host the sites over HTTPS just like before, and they will require TLS 1.2 and SNI. They will work over IPv6 and support HTTP/2. Network standards-wise, there shouldn’t be any step down – and honestly, I haven’t exactly been on the cutting edge of these technologies myself for these sites in the past.

Editing the site

We will keep editing and maintaining the site like before. It is made up of an old system with templates and include files that generate mostly static web pages. The site is mostly available on github and using that, you can build a local version for development and trying out changes before they land.

Hopefully, this move to Fastly will only make the site faster and more reliable. If you notice any glitches or experience any problems with the site, please let us know!