curl says bye bye to pipelining

HTTP/1.1 Pipelining is the protocol feature where the client sends off a second HTTP/1.1 request before the answer to the previous request has (completely) arrived from the server. It is defined in the original HTTP/1.1 spec as a way to avoid waiting times and reduce latency.
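
As a rough sketch (not from the spec or from curl's code), this is what a pipelining client does on the wire: it writes two requests back to back on one connection and then reads the responses, which the server has to return in the same order. The host and paths here are placeholders and error handling is left out.

```c
/* Rough sketch of HTTP/1.1 pipelining at the socket level (illustrative
 * only; example.com and the paths are placeholders, error checks omitted). */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
  struct addrinfo hints = {0}, *res;
  hints.ai_socktype = SOCK_STREAM;
  getaddrinfo("example.com", "80", &hints, &res);
  int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
  connect(fd, res->ai_addr, res->ai_addrlen);

  /* Both requests are written before any response has been read.
   * "Connection: close" on the last one makes the server close when done. */
  const char *reqs =
    "GET /first HTTP/1.1\r\nHost: example.com\r\n\r\n"
    "GET /second HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
  write(fd, reqs, strlen(reqs));

  /* The responses must arrive in request order: /first, then /second.
   * How well servers actually handle this is part of why pipelining
   * turned out to be so fragile in practice. */
  char buf[4096];
  ssize_t n;
  while ((n = read(fd, buf, sizeof(buf))) > 0)
    fwrite(buf, 1, (size_t)n, stdout);

  close(fd);
  freeaddrinfo(res);
  return 0;
}
```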

HTTP/1.1 Pipelining was badly supported by curl for a long time, in the sense that we had a series of known bugs and it was a fragile feature without enough tests. Pipelining is also fairly tricky to debug due to its timing sensitivity: very often, enabling debug output or similar completely changes the nature of the behavior and the problem can no longer be reproduced!

HTTP pipelining was never enabled by default by the large desktop browsers due to all the issues with it, like broken server implementations and the like. Both Firefox and Chrome dropped pipelining support entirely a long time ago. Over time, curl in fact became more and more lonely in supporting pipelining.

The bad state of HTTP pipelining was a primary driving factor behind HTTP/2 and its multiplexing feature. HTTP/2 multiplexing is truly “pipelining done right”. It is far more solid and practical, and it solves the use case with better performance and fewer downsides and problems. (curl enables multiplexing by default since 7.62.0.)
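
For applications using libcurl's multi interface, a minimal sketch of what “multiplexing by default” means in practice could look like the following. The URLs are placeholders and error checking is left out; explicitly setting CURLMOPT_PIPELINING to CURLPIPE_MULTIPLEX is redundant on 7.62.0 and later but is shown for clarity.

```c
/* Minimal sketch: two transfers on one multi handle can share a single
 * HTTP/2 connection through multiplexing. URLs are placeholders and
 * error handling / easy-handle cleanup are omitted for brevity. */
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURLM *multi = curl_multi_init();

  /* The default since 7.62.0; set explicitly here only for illustration. */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

  const char *urls[] = { "https://example.com/a", "https://example.com/b" };
  for (int i = 0; i < 2; i++) {
    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, urls[i]);
    /* Prefer HTTP/2 over TLS, falling back to 1.1 if the server lacks it */
    curl_easy_setopt(easy, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2TLS);
    curl_multi_add_handle(multi, easy);
  }

  int running = 1;
  while (running) {
    curl_multi_perform(multi, &running);
    if (running)
      curl_multi_wait(multi, NULL, 0, 1000, NULL);
  }

  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}
```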

In 2019, pipelining should be abandoned and HTTP/2 should be used instead.

Starting with this commit, to be shipped in release 7.65.0, curl no longer has any code that supports HTTP/1.1 pipelining. It has already been disabled in the code since 7.62.0, so applications and users on a recent version should not notice any difference.

Pipelining was always offered on a best-effort basis and there was never any guarantee that requests would actually be pipelined, so we can remove this feature entirely without breaking API or ABI promises. Applications that ask libcurl to use pipelining can still do that, it just won’t have any effect.
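
As a small illustration of that last point (a sketch, not anything shipped with curl), the old pipelining knob can still be set; on recent libcurl it is simply accepted and ignored:

```c
/* Sketch: an application still asking for HTTP/1.1 pipelining. On 7.62.0
 * and later the request is accepted but has no effect, and as of 7.65.0
 * the pipelining code is gone entirely, so the option is a no-op. */
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURLM *multi = curl_multi_init();

  /* Still compiles and returns CURLM_OK, but does not pipeline anything */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_HTTP1);

  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}
```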

8 thoughts on “curl says bye bye to pipelining”

  1. Pipelining sounds like a very simple concept. I had no idea it wasn’t used.

    What made it a hard problem to solve?

    1. Fundamental limitations of HTTP 1.1 meant that in practice pipelining cost a lot in overhead and complexity with few real-world benefits.

      Even with multiple requests sent from the client, all operations are still processed sequentially on the server. The first request on the connection is always processed first and blocks the others (head-of-line blocking) until it is done. Also, only certain things can be pipelined: for example, GET and HEAD requests can be pipelined, POST requests can never be pipelined, and other methods only some of the time.

      For all this and the other problems and complexities it introduced, the only situation that pipelining really helps with is bandwidth-limited clients on a high-latency network. This is not an uncommon situation but, to be brief, HTTP/2 addresses it and HTTP/3 aka QUIC “fixes” it (though perhaps introducing other problems) by, essentially, figuring out how to have non-blocking threading.

      If there’s any takeaway, it’s that pipelining, threading, concurrency or whatever you want to call it is rarely as simple as it seems.

    2. Pipelining in HTTP 1.1 suffers from a problem called head-of-line blocking that was difficult to solve without changing the wire protocol, see https://en.m.wikipedia.org/wiki/Head-of-line_blocking . Pipelining also made it difficult to correctly implement middle proxies, for example L7 load balancers, because the load balancer must preserve the order of the responses sent back over the connection.

  2. > In 2019, pipelining should be abandoned and HTTP/2 should be used instead.

    This is incorrect. HTTP/3 should be used instead.

    1. > In 2019, pipelining should be abandoned and HTTP/2 should be used instead.

      No.

      HTTP/2 and (much more so) HTTP/3 are just Google’s attempts to further increase the complexity of the Web stack (already too high) and further centralize control over it.

      They are not content with controlling the two main browser engines (Mozilla Servo and Chromium) and a large percentage of the world’s content (through their services and their cloud offerings); they want to raise the server complexity just like they did with the browser’s.

      The simple solution to this sort of issue is to introduce a new archive MIME type that can transfer chunks of a website at once, including several HTML pages, images, CSS and so on.

      One might even decouple authentication and encryption, signing the archive without encrypting it, to make the archives cacheable on HTTP proxies, reducing server load and surveillance issues.

      The solution is so simple that every competent Web developer can see they are not implementing it because… they want to centralize everything.

  3. It’s a shame that HTTP/2 implementations leaked all over the open source landscape then. They clearly lost control over it.
