Category Archives: Open Source

Open Source, Free Software, and similar

libcurl gets a URL API

libcurl has done internet transfers specified as URLs for a long time, but the URLs you’d tell libcurl to use would always just get parsed and used internally.

Applications that pass in URLs to libcurl would of course still very often need to parse URLs, create URLs or otherwise handle them, but libcurl has not been helping with that.

At the same time, the under-specification of URLs has led to a situation where there’s really no stable document anywhere describing how URLs are supposed to work and basically every implementer is left to handle the WHATWG URL spec, RFC 3986 and the world in between all by themselves. Understanding how their URL parsing libraries, libcurl, other tools and their favorite browsers differ is complicated.

By offering applications access to libcurl’s own URL parser, we hope to tighten a problematic, vulnerable area for applications where the URL parsing library believes one thing and libcurl another. This can, and sometimes has, led to security problems. (See for example Exploiting URL Parser in Trending Programming Languages! by Orange Tsai)

Additionally, since libcurl deals with URLs and virtually every application using libcurl already does some amount of URL fiddling, it makes sense to offer it in the “same package”. In the curl user survey 2018, more than 40% of the users said they’d use a URL API in libcurl if it had one.

Handle based

Create a handle, operate on the handle and then clean up the handle when you’re done with it. A pattern that is familiar to existing users of libcurl.

So first you just make the handle.

/* create a handle */
CURLU *h = curl_url();

Parse a URL

Give the handle a full URL.

/* "set" a URL in the handle */
curl_url_set(h, CURLUPART_URL,
"https://example.com/path?q=name", 0);

If the parser finds a problem with the given URL, it returns an error code detailing the error. The flags argument (the zero in the function call above) allows the user to tweak some parsing behaviors. It is a bitmask and all the bits are explained in the curl_url_set() man page.
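A minimal sketch of checking that return code, continuing the snippet above (it assumes <stdio.h> is included):

CURLUcode rc = curl_url_set(h, CURLUPART_URL,
                            "https://example.com/path?q=name", 0);
if(rc != CURLUE_OK) {
  /* the URL was rejected; the CURLUcode value details why */
  fprintf(stderr, "URL parse error: %d\n", rc);
}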

A parsed URL gets split into its components, parts, and each such part can be individually retrieved or updated.

Get a URL part

Get a separate part from the URL by asking for it. This example gets the host name:

/* extract host from the URL */
char *host;
curl_url_get(h, CURLUPART_HOST, &host, 0);

/* use it, then free it */
curl_free(host);

As the example here shows, extracted parts must be specifically freed with curl_free() once the application is done with them.

curl_url_get() can extract all the parts from the handle by specifying the correct id in the second argument: scheme, user, password, port number and more. One of the “parts” it can extract is a bit special: CURLUPART_URL. It returns the full URL back (normalized and using proper syntax).

curl_url_get() also has a flags option to allow the application to specify certain behavior.

Set a URL part

/* set a URL part */
curl_url_set(h, CURLUPART_PATH, "/index.html", 0);

curl_url_set() lets the user set or update any and all of the individual parts of the URL.

curl_url_set() can also update the full URL, and it accepts a relative URL in case an absolute one was already set. It will then apply the relative URL onto the former one and “transition” to the new absolute URL. Like this:

/* first an absolute URL */
curl_url_set(h, CURLUPART_URL,
     "https://example.org:88/path/html", 0);

/* .. then we set a relative URL "on top" */
curl_url_set(h, CURLUPART_URL,
     "../new/place", 0);

Duplicate a handle

It might be convenient to set up a handle once and then make copies of that…

CURLU *n = curl_url_dup(h);

Cleanup the handle

When you’re done working with this URL handle, free it and all its related resources.

curl_url_cleanup(h);

Ship?

This API is marked as experimental for now and ships for the first time in libcurl 7.62.0 (October 31, 2018). I will happily read your feedback and comments on how it works for you, what’s missing and what we should fix to make it even more usable for you and your applications!

We call it experimental to reserve the right to modify it slightly going forward if necessary, and as soon as we remove that label the API will be fixed and stay like that for the foreseeable future.

See also

The URL API section in Everything curl.

DoH in curl

DNS-over-HTTPS (DoH) is being designed (it is not an RFC quite yet but very soon!) to allow internet clients to get increased privacy and security for their name resolves. I’ve previously explained the DNS-over-HTTPS functionality within Firefox that ships in Firefox 62 and I did a presentation about DoH and its future in curl at curl up 2018.

We are now introducing DoH support in curl. I hope this will not only allow users to start getting better privacy and security for their curl based internet transfers, but ideally this will also provide an additional debugging tool for DoH in other clients and servers.

Let’s take a look at how we plan to let applications enable this when using libcurl and how libcurl has to work with this internally to glue things together.

How do I make my libcurl transfer use DoH?

There’s a primary new option added, which is the “DoH URL”. An application sets the CURLOPT_DOH_URL for a transfer, and then libcurl will use that service for resolving host names. Easy peasy. There should be nothing else in the transfer that changes or appears differently. It’ll just resolve the host names over DoH instead of using the default resolver!

What about bootstrap, how does libcurl find the DoH server’s host name?

Since the DoH URL itself typically is given using a host name, that first host name will be resolved using the normal resolver – or if you so desire, you can provide the IP address for that host name with the CURLOPT_RESOLVE option just like you can for any host name.
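As a sketch, pre-providing the DoH host’s address with CURLOPT_RESOLVE could look like this (the host name and address here are made up for illustration):

/* entries use the "HOST:PORT:ADDRESS" format */
struct curl_slist *dns = NULL;
dns = curl_slist_append(dns, "doh.example.com:443:192.0.2.1");
curl_easy_setopt(curl, CURLOPT_RESOLVE, dns);
/* ... perform the transfer ... */
curl_slist_free_all(dns);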

If done using the resolver, the resolved address will then be kept in libcurl’s DNS cache for a short while and the DoH connection will be kept in the regular connection pool with the other connections, making subsequent DoH resolves on the same handle much faster.

How do I use this from the command line?

Tell curl which DoH URL to use with the new --doh-url command line option:

$ curl --doh-url https://dns-server.example.com https://www.example.com

How do I make my libcurl code use this?

CURL *curl = curl_easy_init();
if(curl) {
  CURLcode res;
  curl_easy_setopt(curl, CURLOPT_URL,
                   "https://curl.haxx.se/");
  curl_easy_setopt(curl, CURLOPT_DOH_URL,
                   "https://doh.example.com/");
  res = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}

Internals

Internally, libcurl itself creates two new easy handles that it adds to the existing multi handle, and they then perform two HTTP requests while the original transfer sits in the “waiting for name resolve” state. Once the DoH requests are completed, the original transfer’s state can progress and continue on.

libcurl already handles parallel transfers perfectly well, and by leveraging that existing support it was easy to add this new functionality and still operate non-blocking, and even correctly event-based, depending on which libcurl API is being used.

We had to add a new little special thing that makes libcurl handle the end of a transfer in a new way: since there are now easy handles that are created and added to the multi handle entirely without the user’s knowledge, the code also needs to remove and delete those handles when they’re done serving their purposes.

Was this hard to add to a 20 year old code base?

Actually, no. It was surprisingly easy. But then I’ve also worked on a few different client-side DoH implementations already, so I had a clear view of how I wanted the functionality to work, plus I’m very familiar with the libcurl internals.

Plus, everything inside libcurl is already using non-blocking code and the multi interface paradigms so the foundation for adding parallel transfers like this was already in place.

The entire DoH patch for curl, including documentation and test cases, was a mere 1500 lines.

Ship?

This is merged into the master branch in git and is planned to ship as part of the next release: 7.62.0 at the end of October 2018.

curl 7.61.1 comes with only bug-fixes

Already at the time when we shipped the previous release, 7.61.0, I had decided I wanted to do a patch release next. We had some pretty serious HTTP/2 bugs in the pipe to get fixed and there were a bunch of other unresolved issues also awaiting their treatments. Then I took off on vacation and the HTTP/2 fixes took longer than expected to get on top of, so I subsequently decided that this would become a bug-fix-only release cycle. No features and no changes would be merged into master. So this is what eight weeks of only bug-fixes can look like.

Numbers

the 176th release
0 changes
56 days (total: 7,419)
102 bug fixes (total: 4,640)
151 commits (total: 23,439)
0 new curl_easy_setopt() options (total: 258)
0 new curl command line options (total: 218)
46 contributors, 21 new (total: 1,787)
27 authors, 14 new (total: 612)
1 security fix (total: 81)

Notable bug-fixes this cycle

Among the many small fixes that went in, I feel the following ones deserve a little extra highlighting…

NTLM password overflow via integer overflow

This latest security fix (CVE-2018-14618) is almost identical to an earlier one we fixed back in 2017 called CVE-2017-8816, and is just as silly…

The internal function Curl_ntlm_core_mk_nt_hash() takes a password argument, the same password that is passed to libcurl from an application. It then gets the length of that password and allocates a memory area that is twice the length, since it needs to expand the password. Due to a lack of checks, this calculation will overflow and wrap on a 32 bit machine if a password that is longer than 2 gigabytes is passed to this function. It will then lead to a very small memory allocation, followed by an attempt to write a very long password to that small memory buffer. A heap memory overflow.
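In simplified form, the flaw pattern looks roughly like this (an illustration, not the actual libcurl code):

size_t len = strlen(password);

/* flawed: with a 32 bit size_t, len * 2 wraps around for a password
   longer than 2 gigabytes, so a tiny buffer gets allocated and the
   long password is then written far past its end */
unsigned char *buf = malloc(len * 2);

/* the fix amounts to checking before multiplying */
if(len > SIZE_MAX / 2)
  return CURLE_OUT_OF_MEMORY; /* refuse instead of overflowing */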

Some mitigating details: most architectures support 64 bit size_t these days. Most applications won’t allow passing in passwords that are two gigabytes.

This bug has been around since libcurl 7.15.4, released back in 2006!

Oh, and on the curl web site we now use the CVE number in the actual URL for all the security vulnerabilities to make them easier to find and refer to.

HTTP/2 issues

This was actually a whole set of small problems that together made the new crawler example not work very well until they were fixed. I think it is safe to say that HTTP/2 users of libcurl have previously used it in a pretty “tidy” fashion, because I believe I corrected four or five separate issues that made it misbehave. It was rather pure luck that made it work as well as it did for past users!

Another HTTP/2 bug we ran into recently involved us discovering a little quirk in the underlying nghttp2 library, which in some very special circumstances would refuse to blank out the stream id to struct pointer mapping, which could then lead to it delivering a pointer to a stale (already freed) struct at a later point. This is fixed in nghttp2 now, shipped in its recent 1.33.0 release.

Windows send-buffer tuning

Making uploads on Windows between two and seven times faster than before is certainly almost like a dream come true. This is what 7.61.1 offers!

Upload buffer size increased

In tests triggered by the fix above, it was noticed that curl did not meet our performance expectations when doing uploads on really high speed networks, notably on localhost or when using SFTP. We could easily double the speed by just increasing the upload buffer size. Starting now, curl allocates the upload buffer on demand (since many transfers don’t need it), and allocates a 64KB buffer instead of the previous 16KB. It had been using 16KB since 2001, and with the on-demand setup and the fact that computer memories have grown a bit over those 17 years, I think it is well motivated.

A future curl version will surely allow the application to set this upload buffer size. The receive buffer size can already be set.
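For reference, the receive buffer is set with CURLOPT_BUFFERSIZE; a sketch, with the 100KB value picked arbitrarily:

/* ask libcurl to use a roughly 100KB receive buffer */
curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 102400L);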

Darwinssl goes ALPN

While perhaps in the grey area of what a bugfix can be, this fix allows curl to negotiate ALPN using the darwinssl backend, which by extension means that curl built to use darwinssl can now – finally – do HTTP/2 over HTTPS! Darwinssl is also known under the name Secure Transport, the native TLS library on macOS.

Note however that macOS’ own curl builds that Apple ships are no longer built to use Secure Transport, they use libressl these days.

The Auth Bearer fix

When we added support for Auth Bearer tokens in 7.61.0, we accidentally caused a regression that now is history. This bug seems to in particular have hit git users for some reason.

-OJ regression

The introduction of bold headers in 7.61.0 caused a regression which made a command line like “curl -O -J http://example.com/” to fail, even if a Content-Disposition: header with a correct file name was passed on.

Cookie order

Old readers of this blog may remember my ramblings on cookie sort order from back in the days when we worked on what eventually became RFC 6265.

Anyway, we never did take all aspects of that spec into account when we sort cookies on the HTTP headers sent off to servers, and it has very rarely caused users any grief. Still, now Daniel Gustafsson did a glorious job and tweaked the code to also take creation order into account, exactly like the spec says we should! There are still some gotchas in this, but at least it should be much closer to what the spec says and what some sites might assume a cookie-using client should do…

Unbold properly

Yet another regression. Remember how curl 7.61.0 introduced the cool bold headers in the terminal? Turns out I of course had my escape sequences done wrong, so in a large number of terminal programs the end-of-bold sequence (“CSI 21 m”) that curl sent didn’t actually switch off the bold style. This would lead to the terminal either getting all bold all the time or on some terminals getting funny colors etc.

In 7.61.1, curl sends the “switch off all styles” code (“CSI 0 m”) that hopefully should work better for people!
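In raw bytes the difference is tiny. A minimal sketch (0x1b is the escape character that starts these CSI sequences):

/* bold on ("CSI 1 m"), then reset all styles ("CSI 0 m") */
printf("\x1b[1m" "Content-Type:" "\x1b[0m" " text/html\n");
/* "\x1b[21m" was the "CSI 21 m" sequence many terminals ignore */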

Next release!

We’ve held up a whole bunch of pull requests to ship this patch-only release. Once this is out the door, we’ll open the flood gates and accept the nearly 10 changes that are eagerly awaiting merge. Expect my next release blog post to mention several new things in curl!

Blessed curl builds for Windows

The curl project is happy to introduce official and blessed curl builds for Windows for download on the curl web site.

This means we have a set of recommended curl packages that we advise users on Windows to download.

On Linux, macOS, cygwin and pretty much all the other alternatives you have out there, you don’t need to go to random sites on the Internet and download a binary package provided by a (to you) unknown stranger to get curl for your system. Unfortunately that is basically what we have forced Windows users into doing for a few years since our previous maintainer of curl builds for Windows dropped off the project.

These new official curl builds for Windows are the same set of builds Viktor Szakats has been building and providing to the community for a long time already. Now just with the added twist that he feeds his builds and information about them to the main curl site so that users can get them from the same site and thus lean on the same trust they already have in the curl brand in general.

These builds are reproducible, provided with sha256 hashes and a link to the full build log. Everything is public and transparently done.

All the hard work to get these builds in this great shape was done by Viktor Szakats.

Go get it!

Project curl governance

Over time, we’ve slowly been adjusting the curl project and its documentation so that we might at some point actually qualify for the CII open source Best Practices at silver level.

We qualified at the base level a while ago as one of the first projects which did that.

Recently, one of those issues we fixed was documenting the governance of the curl project. How exactly the curl project is run, what the key roles are and how decisions are made. That document is now in our git repo.

curl

The curl project is what I would call a fairly typical smallish open source project with a quite active and present project leader (me). We have a small set of maintainers who independently are allowed to and will merge commits to git (via pull-requests).

Any decision or any code change that was done or is about to be done can be brought up for questioning or discussion on the mailing list. Nothing is ever really seriously written in stone (except our backwards compatible API). If we made the wrong decision in the past, we should reconsider now.

Oh right, we also don’t have any legal entity. There’s no company or organization behind this or holding any particular rights. We’re not part of any umbrella organization. We’re all just individuals distributed over the globe.

Contributors

No active contributor or maintainer (that I know of) gets paid to work on curl regularly. No company has any particular say or weight to decide where the project goes next.

Contributors fix bugs and add features as part of their daily jobs or in their spare time. We get code submissions from well over a hundred unique authors every year.

Dictator

As a founder of the project and author of more than half of all commits, I am what others call a Benevolent Dictator. I can veto things and I can merge things in spite of objections, although I avoid that as far as possible.

I feel that I generally have people’s trust and that the community expects me to be able to take decisions and drive this project in an appropriate direction, in a fashion that has worked out fine for the past twenty years.

I post all my patches (except occasional minuscule changes) as pull-requests on GitHub before merge, to allow comments, discussions, reviews and to make sure they don’t break any tests.

I announce and ask for feedback on changes or larger things that I want to do, on the mailing list for wider attention, to bring up discussions and fish for additional ideas or for people to point out obvious mistakes. Many times, my calls for opinions or objections are met with silence and I will then take that as “no objections” and move forward in a way I deem sensible.

Every now and then I blog about specific curl features or changes we work on, to highlight them and help out the user community “out there” to discover and learn what curl can do, or might be able to do soon.

I’m doing this primarily on my spare time. My employer also lets me spend some work hours on curl.

Long-term

One of the prime factors that has made curl and libcurl successful and end up as one of the world’s most widely used software components, I’m convinced, is that we don’t break stuff.

By this I mean that once we’ve introduced functionality, we struggle hard to maintain that functionality from that point on and into the future. When we accept code and features into the project, we do this knowing that the code will likely remain in our code for decades to come. Once we’ve accepted the code, it becomes our responsibility and now we’ll care for it dearly for a long time forward.

Since we’re so few developers and maintainers in the project, I can also add that I’m very much aware that in many cases adopting code and merging patches means that I will have to fix the remaining bugs and generally care for the code in the coming years.

Changing governance?

I’m dictator of the curl project for practical reasons, not because I consider it an ideal way to run projects. If there were more people involved who cared enough about what and how we’re doing things we could also change how we run the project.

But until I sense such an interest, I don’t think the current model is bad – and our conquering the world over the recent years could also be seen as a proof that the project at least sometimes also goes in a direction that users approve of. And we are after all best practices certified.

I realize I come off sounding like a real-world dictator when I say things like this, but I genuinely believe that our governance is based on necessity and what works, not because we have to do it this way.

I’ve run the project since its inception in 1998. One day I’ll get bored or get run over by a bus. Then, at the very least, the project will need another way to be run…

Silver level?

We’re only two requirements away from Best Practices Silver level compliance and we’ve been discussing a bit lately (or perhaps: I’ve asked the question) whether the last criteria are actually worth the trouble for us or not.

  1. We need to enforce “Signed-off-by” lines in commits to maintain the Developer Certificate of Origin. This is easy in itself and I’ve only held it off this long because we’ve had zero interest in or requirements for this from contributors and users. Added administration for little gain.
  2. We’re asked to provide an assurance case: “a description of the threat model, clear identification of trust boundaries, an argument that secure design principles have been applied, and an argument that common implementation security weaknesses have been countered.” – This is work we haven’t done and a document we don’t have. And again: nobody has actually ever asked for this outside of this certificate form.

Do you think we should put in the extra effort and check off the final two requirements as well? Do you think they actually make the project better?

A hundred million cars run curl

One of my hobbies is to collect information about where curl is used. The following car brands feature devices, infotainment and/or navigation systems that use curl – in one or more of their models.

These are all brands about which I’ve found information online (for example curl license information), received photos of or otherwise been handed information by what I consider reliable sources (like involved engineers).

Do you have curl in a device installed in another car brand?

List of car brands using curl

Baojun, BMW, Buick, Cadillac, Chevrolet, Ford, GMC, Holden, Hyundai, Mazda, Mercedes, Nissan, Opel, Renault, Seat, Skoda, Subaru, Suzuki, Tesla, Toyota, VW and Vauxhall.

Altogether, this is a pretty amazing number of installations. This list contains eight (8) of the top-10 car brands in the world in 2017! And all of the top-3 brands. By my rough estimate, something like 40 million cars sold in 2017 had curl in them. Presumably almost as many in 2016 and a little more in 2018 (based on car sales stats).

Not too shabby for a little spare time project.

How to find curl in your car

Sometimes the curl open source license is included in a manual (it includes my name and email, offering more keywords to search for). That’s how I’ve found out about many uses purely online.

Sometimes the curl license is included in the “open source license” screen within the actual infotainment system. Those tend to list hundreds of different components and without any search available, you often have to scroll for many minutes until you reach curl or libcurl. I occasionally receive photos of such devices.

Related: why is your email in my car and I have toyota corola.

Update: I added Tesla and Hyundai to the list after the initial post. The latter of those brands is a top-10 brand which bumped the counter of curl users to 8 out of the top-10 brands!

How to DoH-only with Firefox

Firefox supports DNS-over-HTTPS (aka DoH) since version 62.

You can instruct your Firefox to only use DoH and never fall back and try the native resolver; the mode we call trr-only. Without any other ability to resolve host names, this is a little tricky, so this guide is here to help you. (This situation might improve in the future.)

In trr-only mode, nobody on your local network nor on your ISP can snoop on your name resolves. The SNI part of HTTPS connections is still clear text though, so eavesdroppers on the path can still figure out which hosts you connect to.

There’s a name in my URI

A primary problem for trr-only is that we usually want to use a host name in the URI for the DoH server (we typically need it to be a name so that we can verify the server’s certificate against it), but we can’t resolve that host name until DoH is setup to work. A catch-22.

There are currently two ways around this problem:

  1. Tell Firefox the IP address of the name that you use in the URI. We call it the “bootstrapAddress”. See further below.
  2. Use a DoH server that is provided on an IP-number URI. This is rather unusual. There’s for example one at 1.1.1.1.

Setup and use trr-only

There are three prefs to focus on (they’re all explained elsewhere):

network.trr.mode – set this to the number 3.

network.trr.uri – set this to the URI of the DoH server you want to use. This should be a server you trust and want to hand over your name resolves to. The Cloudflare one we’ve previously used in DoH tests with Firefox is https://mozilla.cloudflare-dns.com/dns-query.

network.trr.bootstrapAddress – when you use a host name in the URI for the network.trr.uri pref, you must set this pref to an IP address that host name resolves to for you. It is important that you pick an IP address that the name you use actually would resolve to.

Example

Let’s pretend you want to go full trr-only and use a DoH server at https://example.com/dns (a pretend URI; it doesn’t work).

Figure out the bootstrapAddress with dig. Resolve the host name from the URI:

$ dig +short example.com
93.184.216.34

or, if you prefer to be classy, use the IPv6 address (only do this if IPv6 actually works for you):

$ dig -t AAAA +short example.com
2606:2800:220:1:248:1893:25c8:1946

dig might give you a whole list of addresses back, and then you can pick any one of them in the list. Only pick one address though.

Go to “about:config” and paste the copied IP address into the value field for network.trr.bootstrapAddress. Now TRR / DoH should be able to get going. When you can see web pages, you know it works!

DoH-only means only DoH

If you happen to start Firefox behind a captive portal while in trr-only mode, the connections to the DoH server will fail and no name resolves can be performed.

In those situations, Firefox’s captive portal detector would normally trigger and show you the login page etc, but when no names can be resolved and the captive portal can’t respond with a fake response to the name lookup and redirect you to the login, it won’t get anywhere. It gets stuck. And currently, there’s no good visual indication anywhere that this is what happens.

You simply can’t get out of a captive portal with trr-only. You’ll probably have to temporarily switch mode, log in to the portal and then switch the mode back to 3 again.

If you “unlock” the captive portal with another browser/system, Firefox’s regular retries while in trr-only will soon detect that and things should start working again.

much faster curl uploads on Windows with a single tiny commit

These days, operating system kernels provide TCP/IP stacks that can do really fast network transfers. It’s not even unusual for ordinary people to have gigabit connections at home and of course we want our applications to be able to take advantage of them.

I don’t think many readers here will be surprised when I say that fulfilling this desire turns out much easier said than done in the Windows world.

Autotuning?

Since Windows 7 / 2008R2, Windows implements send buffer autotuning. Simply put, the faster the transfer and the longer the RTT of the connection, the larger the buffer it uses (up to a max), so that more un-acked data can be outstanding and the system can thus saturate even really fast links.

Turns out this useful feature isn’t enabled when applications use non-blocking sockets. The send buffer isn’t increased at all then.

Internally, curl is using non-blocking sockets and most of the code is platform agnostic, so it wouldn’t be practical to switch that off for one particular system. The code is pretty much independent of the target that will run it, and with this latest find we have also started to understand why it doesn’t always perform as well on Windows as on other operating systems: the upload buffer (SO_SNDBUF) is a fixed size and simply too small to perform well in a lot of cases.

Applications can still enlarge the buffer, if they’re aware of this bottleneck, and get better performance without having to change libcurl, but I doubt a lot of them do. And really, libcurl should perform as well as it possibly can just by itself without any necessary tuning by the application authors.
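For the curious: an application that wanted to work around this on its own could enlarge SO_SNDBUF from libcurl’s socket option callback. A rough sketch, with the 1MB size picked arbitrarily:

/* enlarge the send buffer on each socket libcurl creates */
static int sockopt_cb(void *clientp, curl_socket_t fd,
                      curlsocktype purpose)
{
  int size = 1024 * 1024; /* 1MB, arbitrary */
  (void)clientp;
  (void)purpose;
  setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
             (const char *)&size, sizeof(size));
  return CURL_SOCKOPT_OK;
}

/* ... */
curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb);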

Users testing this out

Daniel Jelinski brought a fix for this that repeatedly polls Windows during uploads to ask for a suitable send buffer size, and then resizes it on the go if it deems a new size is better. In order to figure out whether this patch is indeed a good idea or if there’s a downside for some, we went wide and called out for users to help us.

The results were amazing. With speedups of up to almost 7 times, exactly those newer Windows versions that supposedly have autotuning obviously benefit substantially from this patch. The median test still saw uploads more than twice as fast with the patch. Pretty amazing really. And beyond weird that this crazy thing should be required to get ordinary sockets to perform properly on an updated operating system in 2018.

Windows XP isn’t affected at all by this fix, and we’ve seen tests running as VirtualBox guests in NAT-mode also not gain anything, but we believe that’s VirtualBox’s “fault” rather than Windows or the patch.

Landing

The commit is merged into curl’s master git branch and will be part of the pending curl 7.61.1 release, which is due to ship on September 5, 2018. I think it can serve as an interesting case study to see how long it takes until Windows 10 users get their versions updated to include this.

Table of test runs

The columns below show the Windows version, the test time with unmodified curl, the time with the patched one, the patched time as a percentage of the unpatched time, a comment, and finally the speedup multiple for that test.

Thank you everyone who helped us out by running these tests!

Version          | Time vanilla | Time patched | New time | Comment                       | Speedup
6.0.6002         | 15.234       | 2.234        | 14.66%   | Vista SP2                     | 6.82
6.1.7601         | 8.175        | 2.106        | 25.76%   | Windows 7 SP1 Enterprise      | 3.88
6.1.7601         | 10.109       | 2.621        | 25.93%   | Windows 7 Professional SP1    | 3.86
6.1.7601         | 8.125        | 2.203        | 27.11%   | 2008 R2 SP1                   | 3.69
6.1.7601         | 8.562        | 2.375        | 27.74%   |                               | 3.61
6.1.7601         | 9.657        | 2.684        | 27.79%   |                               | 3.60
6.1.7601         | 11.263       | 3.432        | 30.47%   | Windows 2008R2                | 3.28
6.1.7601         | 5.288        | 1.654        | 31.28%   |                               | 3.20
10.0.16299.309   | 4.281        | 1.484        | 34.66%   | Windows 10, 1709              | 2.88
10.0.17134.165   | 4.469        | 1.64         | 36.70%   |                               | 2.73
10.0.16299.547   | 4.844        | 1.797        | 37.10%   |                               | 2.70
10.0.14393       | 4.281        | 1.594        | 37.23%   | Windows 10, 1607              | 2.69
10.0.17134.165   | 4.547        | 1.703        | 37.45%   |                               | 2.67
10.0.17134.165   | 4.875        | 1.891        | 38.79%   |                               | 2.58
10.0.15063       | 4.578        | 1.907        | 41.66%   |                               | 2.40
6.3.9600         | 4.718        | 2.031        | 43.05%   | Windows 8 (original)          | 2.32
10.0.17134.191   | 3.735        | 1.625        | 43.51%   |                               | 2.30
10.0.17713.1002  | 6.062        | 2.656        | 43.81%   |                               | 2.28
6.3.9600         | 2.921        | 1.297        | 44.40%   | Windows 2012R2                | 2.25
10.0.17134.112   | 5.125        | 2.282        | 44.53%   |                               | 2.25
10.0.17134.191   | 5.593        | 2.719        | 48.61%   |                               | 2.06
10.0.17134.165   | 5.734        | 2.797        | 48.78%   | run 1                         | 2.05
10.0.14393       | 3.422        | 1.844        | 53.89%   |                               | 1.86
10.0.17134.165   | 4.156        | 2.469        | 59.41%   | had to use the HTTPS endpoint | 1.68
6.1.7601         | 7.082        | 4.945        | 69.82%   | over proxy                    | 1.43
10.0.17134.165   | 5.765        | 4.25         | 73.72%   | run 2                         | 1.36
5.1.2600         | 10.671       | 10.157       | 95.18%   | Windows XP Professional SP3   | 1.05
10.0.16299.547   | 1.469        | 1.422        | 96.80%   | in a VM running on Linux      | 1.03
5.1.2600         | 11.297       | 11.046       | 97.78%   | XP                            | 1.02
6.3.9600         | 5.312        | 5.219        | 98.25%   |                               | 1.02
5.2.3790         | 5.031        | 5            | 99.38%   | Windows 2003                  | 1.01
5.1.2600         | 7.703        | 7.656        | 99.39%   | XP SP3                        | 1.01
10.0.17134.191   | 1.219        | 1.531        | 125.59%  | FTP                           | 0.80
TOTAL            | 205.303      | 102.271      | 49.81%   |                               | 2.01
MEDIAN           |              |              | 43.51%   |                               | 2.30

curl 7.61.0

Yet again we say hello to a new curl release that has been uploaded to the servers and sent off into the world. Version 7.61.0 (full changelog). It has been exactly eight weeks since 7.60.0 shipped.

Numbers

the 175th release
7 changes
56 days (total: 7,419)
88 bug fixes (total: 4,538)
158 commits (total: 23,288)
3 new curl_easy_setopt() options (total: 258)
4 new curl command line options (total: 218)
55 contributors, 25 new (total: 1,766)
42 authors, 18 new (total: 596)
1 security fix (total: 81)

Security fixes

SMTP send heap buffer overflow (CVE-2018-0500)

A stupid heap buffer overflow that can be triggered when the application asks curl to use a smaller download buffer than the default and then sends a larger file – over SMTP. Details.

New features

The trailing dot zero in the version number reveals that we added some news this time around – again.

More microsecond timers

Over several recent releases we’ve introduced ways to extract timer information from libcurl that use integers to return time information with microsecond resolution, as a complement to the ones we already offer using doubles. This gives better precision and avoids forcing applications to use floating point math.
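For example, the total transfer time can be read as a curl_off_t counting microseconds. A sketch using the CURLINFO_TOTAL_TIME_T variant:

curl_off_t total_us;
if(curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME_T, &total_us) == CURLE_OK)
  printf("transfer took %" CURL_FORMAT_CURL_OFF_T " microseconds\n",
         total_us);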

Bold headers

The curl tool now outputs header names using a bold typeface!

Bearer tokens

The auth support now allows applications to set the specific bearer tokens to pass on.
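In libcurl terms that looks roughly like this (the token string is of course made up):

curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_BEARER);
curl_easy_setopt(curl, CURLOPT_XOAUTH2_BEARER, "my-secret-token");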

TLS 1.3 cipher suites

As TLS 1.3 has a different set of cipher suites, using different names than previous TLS versions, an application that doesn’t know whether the server supports TLS 1.2 or TLS 1.3 can’t set the ciphers via the single pre-existing option, since that would use names for 1.2 that don’t work for 1.3. The new option for libcurl is called CURLOPT_TLS13_CIPHERS.
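A sketch of using it; the suite names are the standard TLS 1.3 ones and backend support varies (OpenSSL 1.1.1, for example):

/* suites used for TLS 1.3 connections */
curl_easy_setopt(curl, CURLOPT_TLS13_CIPHERS,
                 "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256");
/* the pre-existing option still covers TLS 1.2 and earlier */
curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST,
                 "ECDHE-RSA-AES128-GCM-SHA256");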

Disallow user name in URL

There’s now a new option that can tell curl to not acknowledge and support user names in the URL. User names in URLs can bring some security issues, since they’re often sent or stored in plain text, plus if .netrc support is enabled, a script accepting externally set URLs could risk exposing the privately set password.
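The libcurl option is called CURLOPT_DISALLOW_USERNAME_IN_URL and the command line tool got a matching --disallow-username-in-url flag:

/* make the transfer fail if the URL contains a user name */
curl_easy_setopt(curl, CURLOPT_DISALLOW_USERNAME_IN_URL, 1L);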

Awesome bug-fixes this time

Some of my favorites include…

Resolve local host names faster

When curl is built to use the threaded resolver, which is the default choice, it will now resolve locally available host names faster. Locally as in present in /etc/hosts or in the OS cache, etc.

Use latest PSL and refresh it periodically

curl can now be built to use an external PSL (Public Suffix List) file so that it can get updated independently of the curl executable and thus better keep in sync with the list and the reality of the Internet.

Rumors say there are Linux distros that might start providing and updating the PSL file in a separate package, much like they provide CA certificates already.

fnmatch: use the system one if available

The somewhat rare FTP wildcard matching feature always had its own internal fnmatch implementation, but now we’ve finally ditched that in favour of the system fnmatch() function for platforms that have one. It shrinks the footprint and removes an attack surface – we’ve had a fair share of tiresome fuzzing issues in the custom fnmatch code.

axTLS: not considered fit for use

In an effort to slowly increase our requirement on third party code that we might tell users to build curl to use, we’ve made curl fail to build if asked to use the axTLS backend. This since we have serious doubts about the quality and commitment of the code and that project. This is just step one. If no one yells and fights for axTLS’ future in curl going forward, we will remove all traces of axTLS support from curl exactly six months after step one was merged. There are plenty of other and better TLS backends to use!

Detailed in our new DEPRECATE document.

TLS 1.3 used by default

When negotiating TLS version in the TLS handshake, curl will now allow TLS 1.3 by default. Previously you needed to explicitly allow that. TLS 1.3 support is not yet present everywhere so it will depend on the TLS library and its version that your curl is using.

Coming up?

We have several changes and new features lined up for next release. Stay tuned!

First, however, we will most probably schedule a patch release, as we have two rather nasty HTTP/2 bugs filed that we want fixed. Once we have them fixed in a way we like, I think we’d like to see those go out in a patch release before the next pending feature release.

curl survey 2018 analysis

This year, 670 individuals spent some of their valuable time on our survey and filled in answers that help us guide what to do next. What’s good, what’s bad, what to remove and where to emphasize efforts more.

It’s taken me a good while to write up this analysis but hopefully the results here can be used all through the year as a reminder what people actually think and how they use curl and libcurl.

A new question this year was which continent the respondent lives on, which ended up with an unexpectedly strong Euro focus.

What didn’t trigger any surprises though was the question of what protocols users are using, which basically identically mirrored previous years’ surveys. HTTP and HTTPS are the king duo by far.

Read the full 34 page analysis PDF.

Some other interesting take-aways:

  • One person claims to use curl to handle 19 protocols! (out of 23)
  • One person claims to use curl on 11 different platforms!
  • Over 5% of the users argue for a rewrite in Rust.
  • Windows is now the second most common platform to use curl on.