Category Archives: cURL and libcurl

curl and/or libcurl related

A hundred million cars run curl

One of my hobbies is to collect information about where curl is used. The following car brands feature devices, infotainment and/or navigation systems that use curl – in one or more of their models.

These are all brands for which I’ve found information online (for example curl license information), received photos of, or otherwise been handed information about by what I consider reliable sources (like involved engineers).

Do you have curl in a device installed in another car brand?

List of car brands using curl

Baojun, BMW, Buick, Cadillac, Chevrolet, Ford, GMC, Holden, Hyundai, Mazda, Mercedes, Nissan, Opel, Renault, Seat, Skoda, Subaru, Suzuki, Tesla, Toyota, VW and Vauxhall.

Altogether, this is a pretty amazing number of installations. This list contains eight (8) of the top-10 car brands in the world in 2017, including all of the top-3 brands. By my rough estimate, something like 40 million cars sold in 2017 had curl in them. Presumably almost as many in 2016 and a little more in 2018 (based on car sales stats).

Not too shabby for a little spare time project.

How to find curl in your car

Sometimes the curl open source license is included in a manual (it includes my name and email, offering more keywords to search for). That’s how I’ve found out about many of these uses purely online.

Sometimes the curl license is included in the “open source license” screen within the actual infotainment system. Those tend to list hundreds of different components and without any search available, you often have to scroll for many minutes until you reach curl or libcurl. I occasionally receive photos of such devices.

Related: “why is your email in my car” and “I have toyota corola”.

Update: I added Tesla and Hyundai to the list after the initial post. The latter of those brands is a top-10 brand which bumped the counter of curl users to 8 out of the top-10 brands!

much faster curl uploads on Windows with a single tiny commit

These days, operating system kernels provide TCP/IP stacks that can do really fast network transfers. It’s not even unusual for ordinary people to have gigabit connections at home and of course we want our applications to be able to take advantage of them.

I don’t think many readers here will be surprised when I say that fulfilling this desire turns out much easier said than done in the Windows world.

Autotuning?

Since Windows 7 / 2008R2, Windows implements send buffer autotuning. Simply put, the faster the transfer and the longer the RTT of the connection, the larger the buffer it uses (up to a max), so that more un-acked data can be outstanding and thus enable the system to saturate even really fast links.

Turns out this useful feature isn’t enabled when applications use non-blocking sockets. The send buffer isn’t increased at all then.

Internally, curl uses non-blocking sockets and most of the code is platform agnostic, so it wouldn’t be practical to switch that off for a particular system. The code is pretty much independent of the target that will run it, and with this latest find we have also started to understand why curl doesn’t always perform as well on Windows as on other operating systems: the upload buffer (SO_SNDBUF) is a fixed size and simply too small to perform well in a lot of cases.

Applications can still enlarge the buffer, if they’re aware of this bottleneck, and get better performance without having to change libcurl, but I doubt many of them do. And really, libcurl should perform as well as it possibly can by itself, without any necessary tuning by the application authors.
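For illustration, here is a minimal sketch (not libcurl code, and the 512 KB figure is an arbitrary example; a good value depends on bandwidth and RTT) of how an application could do that enlargement itself on the sockets libcurl creates, using the CURLOPT_SOCKOPTFUNCTION callback:

#ifdef _WIN32
#include <winsock2.h>
#else
#include <sys/socket.h>
#endif
#include <curl/curl.h>

/* called by libcurl right after each socket is created: grow SO_SNDBUF */
static int sockopt_cb(void *clientp, curl_socket_t fd, curlsocktype purpose)
{
  int size = 512 * 1024; /* example value only */
  (void)clientp;
  (void)purpose;
  setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (const char *)&size, sizeof(size));
  return CURL_SOCKOPT_OK;
}

/* ... and in the transfer setup: */
/* curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb); */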

Users testing this out

Daniel Jelinski brought a fix for this that repeatedly polls Windows during uploads to ask for a suitable send buffer size, and then resizes it on the go if it deems a new size is better. In order to figure out if this patch is indeed a good idea, or if there’s a downside for some, we went wide and called out for users to help us.
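To give a rough idea of the general approach (this is a simplified sketch for illustration, not the actual patch), Windows exposes a documented ioctl, SIO_IDEAL_SEND_BACKLOG_QUERY, that reports the send backlog size the stack currently considers ideal, and the send buffer can then be grown to match:

#include <winsock2.h>
#include <mstcpip.h>  /* SIO_IDEAL_SEND_BACKLOG_QUERY */

/* query the ideal send backlog and enlarge SO_SNDBUF if it is smaller;
   simplified illustration of the technique, error handling trimmed */
static void update_sndbuf(SOCKET s)
{
  ULONG ideal = 0;
  DWORD outlen = 0;
  if(WSAIoctl(s, SIO_IDEAL_SEND_BACKLOG_QUERY, NULL, 0,
              &ideal, sizeof(ideal), &outlen, NULL, NULL) == 0) {
    int cur = 0;
    int curlen = sizeof(cur);
    if(getsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&cur, &curlen) == 0 &&
       (ULONG)cur < ideal)
      setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char *)&ideal, sizeof(ideal));
  }
}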

The results were amazing. With speedups of up to almost 7x, exactly those newer Windows versions that supposedly have autotuning can obviously benefit substantially from this patch. The median test still performed uploads more than twice as fast with the patch. Pretty amazing really. And beyond weird that this crazy thing should be required to get ordinary sockets to perform properly on an updated operating system in 2018.

Windows XP isn’t affected at all by this fix, and we’ve seen tests running as VirtualBox guests in NAT-mode also not gain anything, but we believe that’s VirtualBox’s “fault” rather than Windows or the patch.

Landing

The commit is merged into curl’s master git branch and will be part of the pending curl 7.61.1 release, which is due to ship on September 5, 2018. I think it can serve as an interesting case study to see how long it takes until Windows 10 users get their versions updated to this.

Table of test runs

The table shows the Windows version, the test time with unmodified (vanilla) curl, the time with the patched curl, the patched time as a percentage of the vanilla time, a comment, and finally the speedup multiple for that test.

Thank you everyone who helped us out by running these tests!

Version | Time vanilla (s) | Time patched (s) | Patched/vanilla | Comment | Speedup
6.0.6002 | 15.234 | 2.234 | 14.66% | Vista SP2 | 6.82
6.1.7601 | 8.175 | 2.106 | 25.76% | Windows 7 SP1 Enterprise | 3.88
6.1.7601 | 10.109 | 2.621 | 25.93% | Windows 7 Professional SP1 | 3.86
6.1.7601 | 8.125 | 2.203 | 27.11% | 2008 R2 SP1 | 3.69
6.1.7601 | 8.562 | 2.375 | 27.74% | | 3.61
6.1.7601 | 9.657 | 2.684 | 27.79% | | 3.60
6.1.7601 | 11.263 | 3.432 | 30.47% | Windows 2008R2 | 3.28
6.1.7601 | 5.288 | 1.654 | 31.28% | | 3.20
10.0.16299.309 | 4.281 | 1.484 | 34.66% | Windows 10, 1709 | 2.88
10.0.17134.165 | 4.469 | 1.64 | 36.70% | | 2.73
10.0.16299.547 | 4.844 | 1.797 | 37.10% | | 2.70
10.0.14393 | 4.281 | 1.594 | 37.23% | Windows 10, 1607 | 2.69
10.0.17134.165 | 4.547 | 1.703 | 37.45% | | 2.67
10.0.17134.165 | 4.875 | 1.891 | 38.79% | | 2.58
10.0.15063 | 4.578 | 1.907 | 41.66% | | 2.40
6.3.9600 | 4.718 | 2.031 | 43.05% | Windows 8 (original) | 2.32
10.0.17134.191 | 3.735 | 1.625 | 43.51% | | 2.30
10.0.17713.1002 | 6.062 | 2.656 | 43.81% | | 2.28
6.3.9600 | 2.921 | 1.297 | 44.40% | Windows 2012R2 | 2.25
10.0.17134.112 | 5.125 | 2.282 | 44.53% | | 2.25
10.0.17134.191 | 5.593 | 2.719 | 48.61% | | 2.06
10.0.17134.165 | 5.734 | 2.797 | 48.78% | run 1 | 2.05
10.0.14393 | 3.422 | 1.844 | 53.89% | | 1.86
10.0.17134.165 | 4.156 | 2.469 | 59.41% | had to use the HTTPS endpoint | 1.68
6.1.7601 | 7.082 | 4.945 | 69.82% | over proxy | 1.43
10.0.17134.165 | 5.765 | 4.25 | 73.72% | run 2 | 1.36
5.1.2600 | 10.671 | 10.157 | 95.18% | Windows XP Professional SP3 | 1.05
10.0.16299.547 | 1.469 | 1.422 | 96.80% | in a VM running on Linux | 1.03
5.1.2600 | 11.297 | 11.046 | 97.78% | XP | 1.02
6.3.9600 | 5.312 | 5.219 | 98.25% | | 1.02
5.2.3790 | 5.031 | 5 | 99.38% | Windows 2003 | 1.01
5.1.2600 | 7.703 | 7.656 | 99.39% | XP SP3 | 1.01
10.0.17134.191 | 1.219 | 1.531 | 125.59% | FTP | 0.80
TOTAL | 205.303 | 102.271 | 49.81% | | 2.01
MEDIAN | | | 43.51% | | 2.30

curl 7.61.0

Yet again we say hello to a new curl release that has been uploaded to the servers and sent off into the world. Version 7.61.0 (full changelog). It has been exactly eight weeks since 7.60.0 shipped.

Numbers

the 175th release
7 changes
56 days (total: 7,419)
88 bug fixes (total: 4,538)
158 commits (total: 23,288)
3 new curl_easy_setopt() options (total: 258)
4 new curl command line options (total: 218)
55 contributors, 25 new (total: 1,766)
42 authors, 18 new (total: 596)
1 security fix (total: 81)

Security fixes

SMTP send heap buffer overflow (CVE-2018-0500)

A stupid heap buffer overflow that can be triggered when the application asks curl to use a smaller download buffer than default and then sends a larger file – over SMTP. Details.

New features

The trailing dot zero in the version number reveals that we added some news this time around – again.

More microsecond timers

Over several recent releases we’ve introduced ways to extract timer information from libcurl that use integers to return time information with microsecond resolution, as a complement to the ones we already offer using doubles. This gives better precision and avoids forcing applications to use floating point math.
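As a small illustration, here is roughly how an application reads the total transfer time with microsecond resolution, assuming the *_TIME_T flavor of the getinfo options (such as CURLINFO_TOTAL_TIME_T) that these releases have been adding:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_off_t total_us = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    if(curl_easy_perform(curl) == CURLE_OK &&
       curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME_T, &total_us) == CURLE_OK)
      printf("transfer took %" CURL_FORMAT_CURL_OFF_T " microseconds\n",
             total_us);
    curl_easy_cleanup(curl);
  }
  return 0;
}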

Bold headers

The curl tool now outputs header names using a bold typeface!

Bearer tokens

The auth support now allows applications to set the specific bearer tokens to pass on.
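A hedged sketch of what that can look like from an application, assuming the HTTP bearer support added around this release is enabled with the CURLAUTH_BEARER bit together with CURLOPT_XOAUTH2_BEARER for the token itself (the URL and token below are made up):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api");
    /* ask for bearer auth and provide the token to send */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_BEARER);
    curl_easy_setopt(curl, CURLOPT_XOAUTH2_BEARER, "my-secret-token");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}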

TLS 1.3 cipher suites

As TLS 1.3 uses a different set of cipher suites, with different names, than previous TLS versions, an application that doesn’t know whether the server supports TLS 1.2 or TLS 1.3 can’t set the ciphers with the single existing option, since the names for 1.2 would not work for 1.3. The new option for libcurl is called CURLOPT_TLS13_CIPHERS.
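For example, an application can now set both lists and let the handshake pick whichever applies. A sketch (the suite names shown are OpenSSL-style and depend on the TLS backend your libcurl uses):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* ciphers for TLS 1.2 and earlier */
    curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST,
                     "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256");
    /* cipher suites for TLS 1.3 */
    curl_easy_setopt(curl, CURLOPT_TLS13_CIPHERS,
                     "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}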

Disallow user name in URL

There’s now a new option that can tell curl to not acknowledge and support user names in the URL. User names in URLs can bring some security issues, since they’re often sent or stored in plain text, and if .netrc support is enabled, a script accepting externally set URLs could risk exposing the privately set password.
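Enabling it from libcurl is a one-liner; a minimal sketch (the command line equivalent is --disallow-username-in-url):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* refuse URLs that carry a user name, such as https://user@example.com/ */
    curl_easy_setopt(curl, CURLOPT_DISALLOW_USERNAME_IN_URL, 1L);
    curl_easy_setopt(curl, CURLOPT_URL, "https://user@example.com/");
    curl_easy_perform(curl); /* this transfer is expected to fail */
    curl_easy_cleanup(curl);
  }
  return 0;
}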

Awesome bug-fixes this time

Some of my favorites include…

Resolve local host names faster

When curl is built to use the threaded resolver, which is the default choice, it will now resolve locally available host names faster. Locally, as in present in /etc/hosts or in the OS cache, etc.

Use latest PSL and refresh it periodically

curl can now be built to use an external PSL (Public Suffix List) file so that it can get updated independently of the curl executable and thus better keep in sync with the list and the reality of the Internet.

Rumors say there are Linux distros that might start providing and updating the PSL file in a separate package, much like they provide CA certificates already.

fnmatch: use the system one if available

The somewhat rare FTP wildcard matching feature always had its own internal fnmatch implementation, but now we’ve finally ditched that in favour of the system fnmatch() function on platforms that have one. It shrinks the footprint and removes an attack surface – we’ve had a fair share of tiresome fuzzing issues in the custom fnmatch code.

axTLS: not considered fit for use

In an effort to slowly increase our requirements on third party code that we might tell users to build curl to use, we’ve made curl fail to build if asked to use the axTLS backend. This is because we have serious doubts about the quality and commitment of that code and that project. This is just step one. If no one yells and fights for axTLS’ future in curl going forward, we will remove all traces of axTLS support from curl exactly six months after step one was merged. There are plenty of other and better TLS backends to use!

Detailed in our new DEPRECATE document.

TLS 1.3 used by default

When negotiating TLS version in the TLS handshake, curl will now allow TLS 1.3 by default. Previously you needed to explicitly allow that. TLS 1.3 support is not yet present everywhere so it will depend on the TLS library and its version that your curl is using.

Coming up?

We have several changes and new features lined up for next release. Stay tuned!

First, we will however most probably schedule a patch release, as we have two rather nasty HTTP/2 bugs filed that we want fixed. Once we have them fixed in a way we like, I think we’d like to see those go out in a patch release before the next pending feature release.

curl survey 2018 analysis

This year, 670 individuals spent some of their valuable time on our survey and filled in answers that help us guide what to do next. What’s good, what’s bad, what to remove and where to emphasize efforts more.

It’s taken me a good while to write up this analysis but hopefully the results here can be used all through the year as a reminder what people actually think and how they use curl and libcurl.

A new question this year was on which continent the respondent lives, which ended up with an unexpectedly strong European focus.

What didn’t trigger any surprises though was the question of what protocols users are using, which basically identically mirrored previous years’ surveys. HTTP and HTTPS are the king duo by far.

Read the full 34 page analysis PDF.

Some other interesting take-aways:

  • One person claims to use curl to handle 19 protocols! (out of 23)
  • One person claims to use curl on 11 different platforms!
  • Over 5% of the users argue for a rewrite in Rust.
  • Windows is now the second most common platform to use curl on.

curl, http2 and quic on the Changelog

Three years ago I talked on a changelog episode about curl just having turned 17 years old and what it has meant for me etc.

Fast forward three years and 146 changelog episodes later: curl has now turned 20 years old and I was again invited and joined the lovely hosts of the Changelog podcast, Adam and Jerod.

Changelog episode 299

We talked curl of course but we also spent time talking about where HTTP/2 is and how QUIC is coming around and a little about why and how its UDP nature makes things a little different. If you’re into either curl or web transport, I hope you’ll find it interesting.

The curl 7 series reaches 60

curl 7.60.0 is released. Remember 7.59.0? This latest release cycle was a week longer than normal since the previous one was a week shorter, and this particular release date was adapted to my traveling last week. It gave us 63 days to cram things in, instead of the regular 56 days.

7.60.0 is a crazy version number in many ways. We’ve been working on the version 7 series since virtually forever (the year 2000) and there’s no version 8 in sight any time soon. This is the 174th curl release ever.

I believe we shouldn’t allow the minor number to go above 99 (because I think it will cause serious confusion among users) so we should come up with a scheme to switch to version 8 before 7.99.0 gets old. If we keep doing a new minor version every eight weeks, which seems like the fastest route, math tells us that’s a mere 6 years away.

Numbers

In the 63 days since the previous release, we have done and had..

3 changes
111 bug fixes (total: 4,450)
166 commits (total: 23,119)
2 new curl_easy_setopt() options (total: 255)

1 new curl command line option (total: 214)
64 contributors, 36 new (total: 1,741)
42 authors (total: 577)
2 security fixes (total: 80)

What good does 7.60.0 bring?

Our tireless and fierce army of security researchers keeps hammering away at every angle of our code and this has again unveiled vulnerabilities in previously released curl code:

1. FTP shutdown response buffer overflow: CVE-2018-1000300

When you tell libcurl to use a larger buffer size, that larger buffer size is not used for the shutdown of an FTP connection, so if the server sends back a huge response during that sequence, it could overflow a heap-based buffer.

2. RTSP bad headers buffer over-read: CVE-2018-1000301

The header parser function would sometimes not restore a pointer back to the beginning of the buffer, which could lead to a subsequent function reading outside the buffer and causing a crash or a potential information leak.

There are also two new features introduced in this version:

HAProxy protocol support

HAProxy pioneered this simple protocol for clients to pass on metadata to the server about where the connection comes from; it is designed to allow systems to chain proxies / reverse proxies without losing information about the originating client. Now you can make your libcurl-using application switch this on with CURLOPT_HAPROXYPROTOCOL, and from the command line with curl’s new --haproxy-protocol option.
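In libcurl terms it is a single boolean option; a minimal sketch:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* send the HAProxy PROXY protocol header before the regular request */
    curl_easy_setopt(curl, CURLOPT_HAPROXYPROTOCOL, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}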

Shuffling DNS addresses

Over six years ago, I blogged on how round robin DNS doesn’t really work these days. Once upon a time the gethostbyname() family of functions actually returned addresses in a sort of random fashion, which made clients use them in an almost random fashion and therefore spread out over the different addresses. As getaddrinfo() took over as the name resolving function, it also introduced address sorting and prioritizing, in a way that effectively breaks the round robin approach.

Now, you can get this feature back with libcurl. Set CURLOPT_DNS_SHUFFLE_ADDRESSES to have the list of addresses shuffled once resolved, before they’re used. If you’re connecting to a service that offers several IP addresses and you want to connect to one of those addresses in a semi-random fashion, this option is for you.
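Switching it on looks like this; a minimal sketch:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* shuffle the resolved address list before libcurl tries to connect */
    curl_easy_setopt(curl, CURLOPT_DNS_SHUFFLE_ADDRESSES, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}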

There’s no command line option to switch this on. Yet.

Bug fixes

We did many bug fixes for this release as usual, but some of my favorite ones this time around are…

improved pending transfers for HTTP/2

libcurl-using applications that add more transfers than can be sent over the wire immediately (usually because the application has set some limit on the parallelism libcurl will do) can have transfers held “pending” by libcurl. They’re basically kept in a separate queue until there’s a chance to send them off; libcurl then attempts to start them when the streams that are in progress end.
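To make the scenario concrete, here is a hedged sketch of how an application ends up with pending transfers: cap the connection count on the multi handle and then add more transfers than the cap allows (the exact limits and URL are just examples):

#include <curl/curl.h>

#define NTRANSFERS 10

int main(void)
{
  CURLM *multi = curl_multi_init();
  CURL *easy[NTRANSFERS];
  int i, running = 1;

  /* only let two connections run; the other transfers are kept pending */
  curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 2L);

  for(i = 0; i < NTRANSFERS; i++) {
    easy[i] = curl_easy_init();
    curl_easy_setopt(easy[i], CURLOPT_URL, "https://example.com/");
    curl_multi_add_handle(multi, easy[i]);
  }
  while(running) {
    int numfds;
    curl_multi_perform(multi, &running);
    curl_multi_wait(multi, NULL, 0, 1000, &numfds);
  }
  for(i = 0; i < NTRANSFERS; i++) {
    curl_multi_remove_handle(multi, easy[i]);
    curl_easy_cleanup(easy[i]);
  }
  curl_multi_cleanup(multi);
  return 0;
}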

The algorithm for retrying the pending transfers was quite naive and brute-force, which made it terribly slow and ineffective when many transfers are waiting in the pending queue. This slowed down the transfers unnecessarily.

With the fixes we’ve landed in 7.60.0, the algorithm is less stupid, which leads to much less overhead and, for this setup, much faster transfers.

curl_multi_timeout values with threaded resolver

When using a libcurl version that is built to use a threaded resolver, there’s no socket to wait for during the name resolving phase so we’ve often recommended users to just wait “a short while” during this interval. That has always been a weakness and an unfortunate situation.

Starting now, curl_multi_timeout() will return suitable timeout values during this period so that users no longer have to re-implement that logic themselves. The timeout values slowly increase, to make sure fast resolves are detected quickly while slow resolves don’t consume too much CPU.
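In practice this means an application’s wait loop can simply trust the value curl_multi_timeout() returns again; a minimal sketch of such a loop:

#include <curl/curl.h>

int main(void)
{
  CURLM *multi = curl_multi_init();
  CURL *easy = curl_easy_init();
  int running = 1;

  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
  curl_multi_add_handle(multi, easy);

  while(running) {
    long timeout_ms = -1;
    int numfds;
    curl_multi_timeout(multi, &timeout_ms);
    if(timeout_ms < 0)
      timeout_ms = 1000; /* libcurl has no suggestion: pick a default */
    /* wait for activity, or at most as long as libcurl suggested */
    curl_multi_wait(multi, NULL, 0, (int)timeout_ms, &numfds);
    curl_multi_perform(multi, &running);
  }
  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  return 0;
}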

much faster cookies

The cookie code in libcurl was keeping them all in a linear linked list. That’s fine for small amounts of cookies or perhaps if you don’t manipulate them much.

Users with several hundred cookies, or even thousands, will in 7.60.0 notice a speed increase that in some situations is of several orders of magnitude, now that the internal representation has changed to use hash tables and some good cleanups were made.

HTTP/2 GOAWAY-handling

We figured out some problems in libcurl’s handling of GOAWAY, such as when an application wants to do a bunch of transfers over a connection that suddenly gets a GOAWAY, so that libcurl needs to create a new connection to do the rest of the pending transfers over.

Turns out nginx ships with a config option named http2_max_requests that sets the maximum number of requests it allows over the same connection before it sends GOAWAY over it (and it defaults to 1000). This option isn’t very well explained in their docs and it seems users won’t really know what good values to set it to, so this is probably the primary reason clients see GOAWAYs where there’s no apparent good reason for them.

Setting the value to a ridiculously low value at least helped me debug this problem and improve how libcurl deals with it!

Repair non-ASCII support

We’ve supported transfers with libcurl on non-ASCII platforms since early 2007. Non-ASCII here basically means EBCDIC, but the code hasn’t been limited to those.

However, due to this being used by only a small number of users and our test infrastructure not testing this feature well enough, we slipped recently and broke libcurl for the non-ASCII users. Work was put in and changes were landed to make sure that libcurl works again on these systems!

Enjoy 7.60.0! In 56 days there should be another release to play with…

curl user survey 2018

The curl user survey 2018 is up. If you ever use curl or libcurl, please donate some of your precious time and provide your answers!

The curl user survey is an annual tradition since 2014 and it is one of our primary ways to get direct feedback from a larger audience about what’s good, what’s bad and what to focus on next in the curl project. Your input really helps us!

2018 survey

The survey will be up and available to fill in for 14 days, from May 15th until the end of May 28th. Please help us share this and ask your curl using friends to join in as well.

If you submitted data last year, make sure you didn’t miss the analysis of the 2017 survey.

Would you like some bold with those headers?

Displaying HTTP headers for a URL on the screen is one of those things people commonly use curl for.

curl -I example.com

To help your eyes separate header names from the corresponding values, I’ve been experimenting with a change that makes the header names show in a bold typeface while the header values to the right of the colons use the standard font.
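For the curious: on terminals that understand ANSI/VT100 escape sequences, bold is just a pair of escape codes around the text. This little sketch shows the general idea (it is not curl’s actual implementation, which also has to deal with Windows consoles and non-terminal output):

#include <stdio.h>

int main(void)
{
  const char *name = "Content-Type";
  const char *value = "text/html; charset=UTF-8";
  /* \x1b[1m switches bold on, \x1b[0m resets the attributes again */
  printf("\x1b[1m%s:\x1b[0m %s\n", name, value);
  return 0;
}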

Sending a HEAD request to the curl site could look like this:

This seemingly small change required an unexpectedly large surgery.

Now I want to turn this into a discussion about whether this is good enough, whether we need more customization, how to make the code behave on Windows, and perhaps what an option to explicitly enable/disable this should be named.

If you have ideas for any of that or other things around this feature, do comment in the PR.

The feature window for the next curl release is already closed so this change will not be considered for real until curl 7.61.0 at the earliest. Due for release in July 2018. So lots of time left to really “bike shed” all the details!

Update: the PR was merged into master on May 21st.

curl up 2018 summary

curl up 2018

The event that occurred this past weekend was the second time we gathered a bunch of curl enthusiasts and developers in the same physical room to discuss the past, the present and the future from a curl perspective.

Stockholm, Sweden, was the center of gravity this time when Goto 10 hosted our merry collective. Spring has finally arrived here and as the sun was out a lot, it made it a lovely weekend weather wise too. As a bonus, the little coffee shop on the premises was open all through both days. Just for us!

This time we were 22 peeps coming in from Sweden, Germany, UK, Spain, the US, and Czechia.

This is what it looked like (photos by me):

Talks

We had a bunch of presentations over the two days, done by a bunch of people. I recorded the screen and voice for most of them, and they’re available online. (Apologies for only recording a part of the screen for many of them!)

The talks were around how things work in curl or in the curl project, how people have used curl and a bit about upcoming technologies that we hope to get curl to support (soon?): QUIC, DOH, Alt-Svc, tests, CI, proxies, libcurl in Apache, using curl on a CDN, fuzzing curl, parsing email with curl etc.

Quiz

We rounded off the Saturday with a twelve question curl quiz. The winner, Fernando, managed to hit the right answer in 8 questions and did it faster than the competition. He got himself a signed copy of Everything curl the second print edition as a prize!

Sponsors

46 Elks was graciously sponsoring us with awesome food and t-shirts.

Sticker Mule provided us with stickers.

Goto 10 let us occupy their place during the weekend when it is otherwise closed!

This event was possible only thanks to their help!

2019

Several people asked me about next year already. I certainly hope we can run another curl up in 2019, but I don’t know yet where this should happen. Ideally, I would like to move it around to different countries to give different people the ability to show up easier, but I also value having a local “host” that can provide the room and facilities for us. I’ll send out probing questions about the 2019 location later this year. If you have a usable office or another suitable place that could host us, (preferably outside of Germany or Sweden), feel most welcome and encouraged to contact me!

(me, photographed by Christian Schmitz)

curl another host

Sometimes you want to issue a curl command against a server, but you don’t really want curl to resolve the host name in the given URL and use that, you want to tell it to go elsewhere. To the “wrong” host, which in this case of course happens to be the right host. Because you know better.

Don’t worry. curl covers this as well, in several different ways…

Fake the host header

The classic and easy-to-understand way to send a request to the wrong HTTP host is to simply send a different Host: header so that the server will provide a response for that given name.

If you run your “example.com” HTTP test site on localhost and want to verify that it works:

curl --header "Host: example.com" http://127.0.0.1/

curl will also make cookies work for example.com in this case, but it will fail miserably if the page redirects to another host and you enable redirect-following (--location) since curl will send the fake Host: header in all further requests too.

The --header option cleverly cancels the built-in provided Host: header when a custom one is provided so only the one passed in from the user gets sent in the request.

Fake the host header better

We’re using HTTPS everywhere these days and just faking the Host: header is not enough then. An HTTPS server also needs to get the server name provided already in the TLS handshake so that it knows which cert etc to use. The name is provided in the SNI field. curl also needs to know the correct host name to verify the server certificate against (server certificates are rarely registered for an IP address). curl extracts the name to use in both those cases from the provided URL.

As we can’t just put the IP address in the URL for this to work, we reverse the approach and instead give curl the proper URL but with a custom IP address to use for the host name we set. The --resolve command line option is our friend:

curl --resolve example.com:443:127.0.0.1 https://example.com/

Under the hood this option populates curl’s DNS cache with a custom entry for “example.com” port 443 with the address 127.0.0.1, so when curl wants to connect to this host name, it finds your crafted address and connects to that instead of the IP address a “real” name resolve would otherwise return.

This method also works perfectly when following redirects since any further use of the same host name will still resolve to the same IP address and redirecting to another host name will then resolve properly. You can even use this option multiple times on the command line to add custom addresses for several names. You can also add multiple IP addresses for each name if you want to.

Connect to another host by name

As shown above, --resolve is awesome if you want to point curl to a specific known IP address. But sometimes that’s not exactly what you want either.

Imagine you have a host name that resolves to a number of different addresses, possibly a set of front end servers for the same site/service. Not completely unheard of. Now imagine you want to issue your curl command to one specific server out of the front end servers. It’s a server that serves “example.com” but the individual server is called “host-47.example.com”.

You could resolve the host name in a first step before curl is used and use --resolve as shown above.

Or you can use --connect-to, which instead works on a host name basis. Using this, you can make curl replace a specific host name + port number pair with another host name + port number pair before the name is resolved!

curl --connect-to example.com:443:host-47.example.com:443 https://example.com/

Crazy combos

Most options in curl are individually controlled which means that there’s rarely logic that prevents you from using them in the awesome combinations that you can think of.

--resolve, --connect-to and --header can all be used in the same command line!

Connect to an HTTPS host running on localhost, use the correct name for SNI and certificate verification, but still ask for a separate host in the Host: header? Sure, no problem:

curl --resolve example.com:443:127.0.0.1 https://example.com/ --header "Host: diff.example.com"

All the above with libcurl?

When you’re done playing with the curl options as described above and want to convert your command lines to libcurl code instead, your best friend is called --libcurl.

Just append --libcurl example.c to your command line, and curl will generate the C code template for you in that given file name. Based on that template, making use of that code correctly is usually straightforward and you’ll get all the options to read up on in a convenient way.
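As a hedged sketch of where that usually ends up: the libcurl counterpart of --resolve is CURLOPT_RESOLVE (and --connect-to has CURLOPT_CONNECT_TO), both taking a curl_slist of entries:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  /* same format as the command line option: HOST:PORT:ADDRESS */
  struct curl_slist *host = curl_slist_append(NULL, "example.com:443:127.0.0.1");

  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  curl_slist_free_all(host);
  return 0;
}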

Good luck!

Update: thanks to @Manawyrm, I fixed the ndash issues this post originally had.