Tag Archives: release

curl 7.61.0

Yet again we say hello to a new curl release that has been uploaded to the servers and sent off into the world. Version 7.61.0 (full changelog). It has been exactly eight weeks since 7.60.0 shipped.

Numbers

the 175th release
7 changes
56 days (total: 7,419)

88 bug fixes (total: 4,538)
158 commits (total: 23,288)
3 new curl_easy_setopt() options (total: 258)

4 new curl command line options (total: 218)
55 contributors, 25 new (total: 1,766)
42 authors, 18 new (total: 596)
1 security fix (total: 81)

Security fixes

SMTP send heap buffer overflow (CVE-2018-0500)

A stupid heap buffer overflow that can be triggered when the application asks curl to use a smaller download buffer than default and then sends a larger file – over SMTP. Details.

New features

The trailing dot zero in the version number reveals that we added some news this time around – again.

More microsecond timers

Over several recent releases we’ve introduced ways to extract timer information from libcurl that use integers to return time information with microsecond resolution, as a complement to the ones we already offer using doubles. This gives better precision and avoids forcing applications to use floating point math.
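A minimal sketch of reading one of the new microsecond counters, assuming a libcurl new enough to offer CURLINFO_TOTAL_TIME_T (the URL is just a placeholder):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      if(curl_easy_perform(curl) == CURLE_OK) {
        /* the _T getinfo variants return curl_off_t microseconds
           instead of a double holding seconds */
        curl_off_t total_us = 0;
        curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME_T, &total_us);
        printf("total time: %lld microseconds\n", (long long)total_us);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }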

Bold headers

The curl tool now outputs header names using a bold typeface!

Bearer tokens

The auth support now allows applications to set the specific bearer tokens to pass on.
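A minimal libcurl sketch, assuming the CURLAUTH_BEARER / CURLOPT_XOAUTH2_BEARER pair is the way in (the token and URL are made up):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/protected");
      /* ask for Bearer authentication and provide the token itself */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_BEARER);
      curl_easy_setopt(curl, CURLOPT_XOAUTH2_BEARER, "hypothetical-token");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }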

TLS 1.3 cipher suites

As TLS 1.3 uses a different set of cipher suites, with different names, than previous TLS versions, an application that doesn’t know whether the server supports TLS 1.2 or TLS 1.3 can’t set the ciphers in the single existing option, since that would use 1.2 names that don’t work for 1.3. The new option for libcurl is called CURLOPT_TLS13_CIPHERS.
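A minimal sketch, assuming a TLS backend that understands the standard TLS 1.3 suite names (the exact cipher strings here are only examples):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* ciphers to use if the handshake ends up as TLS 1.2 or earlier */
      curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST,
                       "ECDHE-RSA-AES128-GCM-SHA256");
      /* ...and separately, suites to use if it ends up as TLS 1.3 */
      curl_easy_setopt(curl, CURLOPT_TLS13_CIPHERS,
                       "TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }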

Disallow user name in URL

There’s now a new option that can tell curl to not acknowledge and support user names in the URL. User names in URLs can bring some security issues since they’re often sent or stored in plain text, plus if .netrc support is enabled a script accepting externally set URLs could risk exposing the privately set password.
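A minimal libcurl sketch, assuming the option in question is the CURLOPT_DISALLOW_USERNAME_IN_URL boolean:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* refuse URLs that carry a user name, e.g. https://user@example.com/ */
      curl_easy_setopt(curl, CURLOPT_DISALLOW_USERNAME_IN_URL, 1L);
      curl_easy_setopt(curl, CURLOPT_URL, "https://user@example.com/");
      /* this transfer now fails instead of picking up "user" from the URL */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }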

Awesome bug-fixes this time

Some of my favorites include…

Resolve local host names faster

When curl is built to use the threaded resolver, which is the default choice, it will now resolve locally available host names faster. Locally available meaning names present in /etc/hosts, in the OS cache, etc.

Use latest PSL and refresh it periodically

curl can now be built to use an external PSL (Public Suffix List) file so that it can get updated independently of the curl executable and thus better keep in sync with the list and the reality of the Internet.

Rumors say there are Linux distros that might start providing and updating the PSL file in a separate package, much like they provide CA certificates already.

fnmatch: use the system one if available

The somewhat rare FTP wildcard matching feature always had its own internal fnmatch implementation, but now we’ve finally ditched that in favour of the system fnmatch() function for platforms that have one. It shrinks the footprint and removes an attack surface – we’ve had a fair share of tiresome fuzzing issues in the custom fnmatch code.

axTLS: not considered fit for use

In an effort to slowly increase our requirement on third party code that we might tell users to build curl to use, we’ve made curl fail to build if asked to use the axTLS backend. This since we have serious doubts about the quality and commitment of the code and that project. This is just step one. If no one yells and fights for axTLS’ future in curl going forward, we will remove all traces of axTLS support from curl exactly six months after step one was merged. There are plenty of other and better TLS backends to use!

Detailed in our new DEPRECATE document.

TLS 1.3 used by default

When negotiating TLS version in the TLS handshake, curl will now allow TLS 1.3 by default. Previously you needed to explicitly allow that. TLS 1.3 support is not yet present everywhere so it will depend on the TLS library and its version that your curl is using.

Coming up?

We have several changes and new features lined up for next release. Stay tuned!

First, we will however most probably schedule a patch release, as we have two rather nasty HTTP/2 bugs filed that we want fixed. Once we have them fixed in a way we like, I think we’d like to see those go out in a patch release before the next pending feature release.

The curl 7 series reaches 60

curl 7.60.0 is released. Remember 7.59.0? This latest release cycle was a week longer than normal since the last was one week shorter and we had this particular release date adapted to my traveling last week. It gave us 63 days to cram things in, instead of the regular 56 days.

7.60.0 is a crazy version number in many ways. We’ve been working on the version 7 series since virtually forever (the year 2000) and there’s no version 8 in sight any time soon. This is the 174th curl release ever.

I believe we shouldn’t allow the minor number to go above 99 (because I think it will cause serious confusion among users) so we should come up with a scheme to switch to version 8 before 7.99.0 gets old. If we keep doing a new minor version every eight weeks, which seems like the fastest route, math tells us that’s a mere 6 years away.

Numbers

In the 63 days since the previous release, we have done and had..

3 changes
111 bug fixes (total: 4,450)
166 commits (total: 23,119)
2 new curl_easy_setopt() options (total: 255)

1 new curl command line option (total: 214)
64 contributors, 36 new (total: 1,741)
42 authors (total: 577)
2 security fixes (total: 80)

What good does 7.60.0 bring?

Our tireless and fierce army of security researchers keep hammering away at every angle of our code and this has again unveiled vulnerabilities in previously released curl code:

  1. FTP shutdown response buffer overflow: CVE-2018-1000300

When you tell libcurl to use a larger buffer size, that larger buffer size is not used for the shutdown of an FTP connection, so if the server then sends back a huge response during that sequence, it would overflow a heap-based buffer.

  2. RTSP bad headers buffer over-read: CVE-2018-1000301

The header parser function would sometimes not restore a pointer back to the beginning of the buffer, which could lead to a subsequent function reading out of buffer and causing a crash or potential information leak.

There are also two new features introduced in this version:

HAProxy protocol support

HAProxy has pioneered this simple protocol for clients to pass on meta-data to the server about where the connection comes from; designed to allow systems to chain proxies / reverse-proxies without losing information about the original originating client. Now you can make your libcurl-using application switch this on with CURLOPT_HAPROXYPROTOCOL and from the command line with curl’s new --haproxy-protocol option.
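A minimal libcurl sketch (the backend host name is of course just a placeholder):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://backend.example.com/");
      /* send a HAProxy PROXY protocol header at the start of the connection */
      curl_easy_setopt(curl, CURLOPT_HAPROXYPROTOCOL, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }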

Shuffling DNS addresses

Over six years ago, I blogged on how round robin DNS doesn’t really work these days. Once upon a time the gethostbyname() family of functions actually returned addresses in a sort of random fashion, which made clients use them in an almost random fashion and therefore spread out over the different addresses. When getaddrinfo() took over as the name resolving function, it also introduced address sorting and prioritizing, in a way that effectively breaks the round robin approach.

Now, you can get this feature back with libcurl. Set CURLOPT_DNS_SHUFFLE_ADDRESSES to have the list of addresses shuffled after they have been resolved, before they’re used. If you’re connecting to a service that offers several IP addresses and you want to connect to one of those addresses in a semi-random fashion, this option is for you.

There’s no command line option to switch this on. Yet.
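From libcurl, a minimal sketch, assuming a host name that resolves to several addresses:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://multi-homed.example.com/");
      /* shuffle the resolved address list before it is used */
      curl_easy_setopt(curl, CURLOPT_DNS_SHUFFLE_ADDRESSES, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }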

Bug fixes

We did many bug fixes for this release as usual, but some of my favorite ones this time around are…

improved pending transfers for HTTP/2

libcurl-using applications that add more transfers than what can be sent over the wire immediately (usually because the application has set some limit on the parallelism libcurl will do) can be held “pending” by libcurl. They’re basically kept in a separate queue until there’s a chance to send them off. libcurl will then attempt to start them when the streams that are in progress end.

The algorithm for retrying the pending transfers was quite naive and “brute force”, which made it terribly slow and ineffective when many transfers are waiting in the pending queue. This slowed down the transfers unnecessarily.

With the fixes we’ve landed in 7.60.0, the algorithm is less stupid, which leads to much less overhead and, for this setup, much faster transfers.

curl_multi_timeout values with threaded resolver

When using a libcurl version that is built to use a threaded resolver, there’s no socket to wait for during the name resolving phase so we’ve often recommended users to just wait “a short while” during this interval. That has always been a weakness and an unfortunate situation.

Starting now, curl_multi_timeout() will return suitable timeout values during this period so that users will no longer have to re-implement that logic themselves. The timeouts will be slowly increasing to make sure fast resolves are detected quickly but slow resolves don’t consume too much CPU.
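A minimal sketch of a multi-interface driving loop that respects the suggested timeout (error checking left out; the URL is a placeholder):

  #include <curl/curl.h>

  int main(void)
  {
    CURLM *multi = curl_multi_init();
    CURL *easy = curl_easy_init();
    int running = 1;

    curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
    curl_multi_add_handle(multi, easy);

    while(running) {
      long timeout_ms = -1;
      int numfds = 0;

      curl_multi_perform(multi, &running);

      /* ask libcurl how long we may wait; with the threaded resolver this
         now returns sensible values during name resolution too */
      curl_multi_timeout(multi, &timeout_ms);
      if(timeout_ms < 0)
        timeout_ms = 100; /* no suggestion from libcurl, pick a default */

      curl_multi_wait(multi, NULL, 0, (int)timeout_ms, &numfds);
    }

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    return 0;
  }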

much faster cookies

The cookie code in libcurl was keeping them all in a linear linked list. That’s fine for small amounts of cookies or perhaps if you don’t manipulate them much.

Users with several hundred cookies, or even thousands, will in 7.60.0 notice a speed increase that in some situations is of several orders of magnitude, now that the internal representation has changed to use hash tables and some good cleanups were made.

HTTP/2 GOAWAY-handling

We fixed some problems in libcurl’s handling of GOAWAY, like when an application wants to do a bunch of transfers over a connection that suddenly gets a GOAWAY, so that libcurl needs to create a new connection to do the rest of the pending transfers over.

Turns out nginx ships with a config option named http2_max_requests that sets the maximum number of requests it allows over the same connection before it sends GOAWAY over it (and it defaults to 1000). This option isn’t very well explained in their docs and it seems users won’t really know what good values to set it to, so this is probably the primary reason clients see GOAWAYs where there’s no apparent good reason for them.

Setting the value to a ridiculously low value at least helped me debug this problem and improve how libcurl deals with it!

Repair non-ASCII support

We’ve supported transfers with libcurl on non-ASCII platforms since early 2007. Non-ASCII here basically means EBCDIC, but the code hasn’t been limited to those.

However, due to this being used by only a small number of users and our test infrastructure not testing this feature well enough, we slipped recently and broke libcurl for the non-ASCII users. Work was put in and changes were landed to make sure that libcurl works again on these systems!

Enjoy 7.60.0! In 56 days there should be another release to play with…

Here’s curl 7.59.0

We ship curl 7.59.0 exactly 49 days since the previous release (a week shorter than planned because of reasons). Download it from here. Full changelog is here.

In these 49 days, we have done and had..

6 changes(*)
78 bug fixes (total: 4,337)
149 commits (total: 22,952)
45 contributors, 20 new (total: 1,702)
29 authors (total: 552)
3 security fixes (total: 78)

This time we’ve fixed no less than three separate security vulnerabilities:

  1. FTP path trickery security issue
  2. LDAP NULL dereference
  3. RTSP RTP buffer over-read

(*) = changes are things that don’t fix existing functionality but actually add something new to curl/libcurl. New features mostly.

The new things this time probably won’t be considered earth shattering, but they’re still a bunch of useful stuff:

--proxy-pinnedpubkey

The ability to specify a pinned public key has been around for a while for regular servers, and libcurl has had the ability to pin proxies’ keys as well. This change makes sure that users of the command line tool also get that ability. Make sure your HTTPS proxy isn’t MITMed!

CURLOPT_TIMEVALUE_LARGE

As part of our effort to clean up our internal use of ‘long’ variables and make sure we don’t have year-2038 problems, this new option was added.
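A minimal sketch of a conditional request with a timestamp past 2038, which is exactly what the curl_off_t-based option enables on 32 bit systems:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/file");
      /* only fetch if the remote is newer than the given time */
      curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                       (long)CURL_TIMECOND_IFMODSINCE);
      /* a timestamp in the year 2040, which doesn't fit a 32 bit 'long' */
      curl_easy_setopt(curl, CURLOPT_TIMEVALUE_LARGE, (curl_off_t)2208988800);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }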

CURLOPT_RESOLVE

This popular libcurl option that allows applications to populate curl’s DNS cache with custom IP addresses for host names was improved: now you can add multiple addresses per host name. This allows transfers using it to work even more like they would with normal name resolving.
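A minimal sketch, using documentation addresses to pre-populate the cache with two addresses for one name:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* "host:port:address[,address...]" - several addresses per entry */
      struct curl_slist *dns = NULL;
      dns = curl_slist_append(dns, "example.com:443:192.0.2.1,192.0.2.2");
      curl_easy_setopt(curl, CURLOPT_RESOLVE, dns);
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      curl_slist_free_all(dns);
    }
    return 0;
  }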

CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS

As a true HTTP swiss-army knife tool and library, you can toggle and tweak almost all aspects, timers and options that are used. This libcurl option has a new corresponding curl command line option, and allows the user to set how long after the initial (IPv6) connect call the second (IPv4) connect is invoked in the happy eyeballs connect procedure. The default is 200 milliseconds.
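A minimal sketch shortening that delay to 100 milliseconds:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://dual-stack.example.com/");
      /* wait only 100 ms after the IPv6 attempt before also trying IPv4 */
      curl_easy_setopt(curl, CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, 100L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }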

Bug fixes!

As usual we fixed things all over. Big and small. Some of the ones that I think stuck out a little were the fix for building with OpenSSL 0.9.7 (because you’d think that portion of users should be extinct by now) and the fix to make configure correctly detect OpenSSL 1.1.1 (there are beta releases out there).

Some application authors will appreciate that libcurl now for the most part detects if it gets called from within one of its own callbacks and returns an error about it. This is mostly to save these users from themselves as doing this would already previously risk damaging things. There are some functions that are still allowed to get called from within callbacks.

Cheers for curl 7.58.0

Here’s to another curl release!

curl 7.58.0 is the 172nd curl release and it contains, among other things, 82 bug fixes thanks to 54 contributors (22 new). All this done with 131 commits in 56 days.

The bug fix rate is slightly lower than in the last few releases, which I attribute mostly to me having been away on vacation for a month during this release cycle. I retain my position as “committer of the Month” and January 2018 is my 29th consecutive month where I’ve done most commits in the curl source code repository. In total, almost 58% of the commits have been done by me (if we limit the count to all commits done since 2014, I’m at 43%). We now count a total of 545 unique commit authors and 1,685 contributors.

So what’s new this time? (full changelog here)

libssh backend

Introducing the pluggable SSH backend: libssh is now the new alternative SSH backend to libssh2, which has been supported since late 2006. This change alone brought thousands of new lines of code.

Tell configure to use it with --with-libssh and you’re all set!

The libssh backend work was done by Nikos Mavrogiannopoulos, Tomas Mraz, Stanislav Zidek, Robert Kolcun and Andreas Schneider.

Security

Yet again we announce security issues that we’ve found and fixed. Two of them to be exact:

  1. We found a problem with how HTTP/2 trailers were handled, which could lead to crashes or even information leakage.
  2. We addressed a problem for users sending custom Authorization: headers to HTTP servers and who are then redirected to another host that shouldn’t receive those Authorization headers.

Progress bar refresh

A minor thing, but we refreshed the progress bar layout for when no total size is known.

Next?

March 21 is the date set for next release. Unless of course we find an urgent reason to fix and release something before then…

curl 7.57.0 happiness

The never-ending series of curl releases continued today when we released version 7.57.0. The 171st release since the beginning, and the release that follows 37 days after 7.56.1. Remember that 7.56.1 was an extra release that fixed a few of the most annoying regressions.

We bump the minor number to 57 and clear the patch number in this release due to the changes introduced. None of them very ground breaking, but fun and useful and detailed below.

41 contributors helped fix 69 bugs in these 37 days since the previous release, using 115 separate commits. 23 of those contributors were new, making the total list of contributors now contain 1,649 individuals! 25 individuals authored commits since the previous release, making the total number of authors 540 persons.

The curl web site currently sends out 8GB data per hour to over 2 million HTTP requests per day.

Support RFC7616 – HTTP Digest

This allows HTTP Digest authentication to use the much better SHA-256 algorithm instead of the old, and deemed unsuitable, MD5. This should be a transparent improvement, so curl should just be able to use this without any particular new option having to be set, but the server-side support for this version seems to still be a bit lacking.

(Side-note: I’m credited in RFC 7616 for having contributed my thoughts!)

Sharing the connection cache

In this modern age with multi core processors and applications using multi-threaded designs, we of course want libcurl to enable applications to be able to get the best performance out of libcurl.

libcurl is already thread-safe so you can run parallel transfers multi-threaded perfectly fine if you want to, but it doesn’t allow the application to share handles between threads. Before this specific change, this limitation forced multi-threaded applications to be satisfied with letting libcurl have a separate “connection cache” in each thread.

The connection cache, sometimes also referred to as the connection pool, is where libcurl keeps live connections that were previously used for a transfer and still haven’t been closed, so that a subsequent request might be able to re-use one of them. Getting a re-used connection for a request is much faster than having to create a new one. Having one connection cache per thread is ineffective.

Starting now, libcurl’s “share concept” allows an application to specify a single connection cache to be used cross-thread and cross-handles, so that connection re-use will be much improved when libcurl is used multi-threaded. This will significantly benefit the most demanding libcurl applications, but it will also allow more flexible designs as now the connection pool can be designed to survive individual handles in a way that wasn’t previously possible.
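A minimal single-threaded sketch, assuming the CURL_LOCK_DATA_CONNECT share type is what switches this on; a real multi-threaded application would also install CURLSHOPT_LOCKFUNC / CURLSHOPT_UNLOCKFUNC callbacks around a mutex:

  #include <curl/curl.h>

  int main(void)
  {
    CURLSH *share = curl_share_init();
    CURL *a = curl_easy_init();
    CURL *b = curl_easy_init();

    /* let all handles using this share object reuse each other's connections */
    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_CONNECT);

    curl_easy_setopt(a, CURLOPT_SHARE, share);
    curl_easy_setopt(a, CURLOPT_URL, "https://example.com/first");
    curl_easy_perform(a);

    curl_easy_setopt(b, CURLOPT_SHARE, share);
    curl_easy_setopt(b, CURLOPT_URL, "https://example.com/second");
    curl_easy_perform(b); /* may reuse the connection 'a' left in the pool */

    curl_easy_cleanup(a);
    curl_easy_cleanup(b);
    curl_share_cleanup(share);
    return 0;
  }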

Brotli compression

The popular browsers have supported the brotli compression method for a while and it has already become widely supported by servers.

Now, curl supports it too and the command line tool’s --compressed option will ask for brotli as well as gzip, if your build supports it. Similarly, libcurl supports it with its CURLOPT_ACCEPT_ENCODING option. The server can then opt to respond using either compression format, depending on what it knows.
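On the libcurl side, a minimal sketch; passing an empty string asks for every encoding the build supports, brotli included:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* "" = offer all built-in encodings, e.g. "Accept-Encoding: br, gzip";
         a compressed response then gets decompressed automatically */
      curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }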

According to CertSimple, who ran tests on the top-1000 sites of the Internet, brotli gets contents 14-21% smaller than gzip.

As with other compression algorithms, libcurl uses a 3rd party library for brotli compression and you may find that Linux distributions and others are a bit behind in shipping packages for a brotli decompression library. Please join in and help this happen. At the moment of this writing, the Debian package is only available in experimental.

(Readers may remember my libbrotli project, but that effort isn’t really needed anymore since the brotli project itself builds a library these days.)

Three security issues

In spite of our hard work and best efforts, security issues keep getting reported and we fix them accordingly. This release has three new ones and I’ll describe them below. None of them are alarmingly serious and they will probably not hurt anyone badly.

Two things can be said about the security issues this time:

1. You’ll note that we’ve changed naming convention for the advisory URLs, so that they now have a random component. This is to reduce potential information leaks based on the name when we pass these around before releases.

2. Two of the flaws happen only on 32 bit systems, which reveals a weakness in our testing. Most of our CI tests, torture tests and fuzzing are made on 64 bit architectures. We have no immediate and good fix for this, but this is something we must work harder on.

1. NTLM buffer overflow via integer overflow

(CVE-2017-8816) Limited to 32 bit systems, this is a flaw where curl takes the combined length of the user name and password, doubles it, and allocates a memory area that big. If that doubling ends up larger than 4GB, an integer overflow makes a very small buffer be allocated instead and then curl will overwrite that.

Yes, having user name plus password be longer than two gigabytes is rather excessive and I hope very few applications would allow this.

2. FTP wildcard out of bounds read

(CVE-2017-8817) curl’s wildcard functionality for FTP transfers is not a very widely used feature, but it was discovered that the default pattern matching function could erroneously read beyond the URL buffer if the match pattern ends with an open bracket ‘[‘!

This problem was detected by the OSS-Fuzz project! This flaw has existed in the code since this feature was added, over seven years ago.

3. SSL out of buffer access

(CVE-2017-8818) In July this year we introduced multissl support in libcurl. This allows an application to select which TLS backend libcurl should use, if it was built to support more than one. It was a fairly large overhaul to the TLS code in curl and unfortunately it also brought this bug.

Also, only happening on 32 bit systems, libcurl would allocate a buffer that was 4 bytes too small for the TLS backend’s data which would lead to the TLS library accessing and using data outside of the heap allocated buffer.

Next?

The next release will ship no later than January 24th 2018. I think that one will as well add changes and warrant the minor number to bump. We have fun pending stuff such as: a new SSH backend, modifiable happy eyeballs timeout and more. Get involved and help us do even more good!

Firefox Quantum

Next week, Mozilla will release Firefox 57. Also referred to as Firefox Quantum, from the project name we’ve used for all the work that has been put into making this the most awesome Firefox release ever. This is underscored by the fact that I’ve gotten mailed release-swag for the first time during my four years so far as a Mozilla employee.

Firefox 57 is the major milestone hundreds of engineers have worked really hard toward during the last year or so, and most of the efforts have been focused on performance. Or perhaps perceived end user snappiness. Early comments I’ve read and heard also hint that it is quite notable. I think every single Mozilla engineer (and most non-engineers as well) has contributed to at least some parts of this, and of course many have done a lot. My personal contributions to 57 are not much to write home about, but are mostly a stream of minor things that combined at least move the notch forward.

[edited out some secrets I accidentally leaked here.] I’m a proud Mozillian and being part of a crowd that has put together something as grand as Firefox 57 is an honor and a privilege.

Releasing a product to hundreds of millions of end users across the world is interesting. People get accustomed to things, get emotional and don’t particularly like change very much. I’m sure Firefox 57 will also get a fair share of sour feedback and comments written in uppercase. That’s inevitable. But sometimes, in order to move forward and do good stuff, we have to make some tough decisions for the greater good that not everyone will agree with.

This is however not the end of anything. It is rather the beginning of a new Firefox. The work on future releases goes on, we will continue to improve the web experience for users all over the world. Firefox 58 will have even more goodies, and I know there is much more good stuff planned for the releases coming in 2018 too…

Onwards and upwards!

(Update: as I feared in this text, I got a lot of negativism, vitriol and criticism in the comments to this post. So much that I decided to close down comments for this entry and delete the worst entries.)

Say hi to curl 7.56.0

Another curl version has been released into the world. curl 7.56.0 is available for download from the usual place. Here are some news I think are worthy to mention this time…

An FTP security issue

A mistake in the code that parses responses to the PWD command could make curl read beyond the end of a buffer. Max Dymond figured it out, and we’ve released a security advisory about it. Our 69th security vulnerability counted from the beginning and the 8th reported in 2017.

Multiple SSL backends

Since basically forever you’ve been able to build curl with a selected SSL backend to make it get a different feature set or behave slightly differently – or use a different license or get a different footprint. curl supports eleven different TLS libraries!

Starting now, libcurl can be built to support more than one SSL backend! You specify all the SSL backends at build-time and then you can tell libcurl at run-time exactly which of the backends it should use.

The selection can only happen once per invocation so there’s no switching back and forth among them, but still. It also of course requires that you actually build curl with more than one TLS library, which you do by telling configure all the libs to use.

The first user of this feature that I’m aware of is git for windows that can select between using the schannel and OpenSSL backends.

curl_global_sslset() is the new libcurl call to do this with.

This feature was brought by Johannes Schindelin.
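A minimal sketch of picking a backend at run-time; this has to happen before curl_global_init and can only be done once per program invocation:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    /* ask for the OpenSSL backend, assuming it was built in */
    if(curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL) != CURLSSLSET_OK) {
      fprintf(stderr, "this libcurl was not built with OpenSSL support\n");
      return 1;
    }
    curl_global_init(CURL_GLOBAL_DEFAULT);
    /* ... transfers as usual ... */
    curl_global_cleanup();
    return 0;
  }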

New MIME API

The currently provided API for creating multipart formposts, curl_formadd, has always been considered a bit quirky and complicated to work with. Its extensive use of varargs is to blame for a significant part of that.

Now, we finally introduce a replacement API that accomplishes basically the same things, plus a few additional ones, and that is supposed to be easier to use and easier to wrap for bindings etc.

Introducing the mime API: curl_mime_init, curl_mime_addpart, curl_mime_name and more. See postit2.c and multi-post.c for some easy-to-grasp examples.

This work was done by Patrick Monnerat.
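A minimal sketch of a two-part form post with the new API (field names and the file are made up):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_mime *mime = curl_mime_init(curl);
      curl_mimepart *part;

      /* a plain text field */
      part = curl_mime_addpart(mime);
      curl_mime_name(part, "greeting");
      curl_mime_data(part, "hello", CURL_ZERO_TERMINATED);

      /* a file upload field */
      part = curl_mime_addpart(mime);
      curl_mime_name(part, "upload");
      curl_mime_filedata(part, "picture.png");

      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/submit");
      curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
      curl_easy_perform(curl);

      curl_mime_free(mime);
      curl_easy_cleanup(curl);
    }
    return 0;
  }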

SSH compression

The SSH protocol allows clients and servers to negotiate the use of compression when communicating, and now curl can too. curl has the new --compressed-ssh option and libcurl has a new setopt called CURLOPT_SSH_COMPRESSION using the familiar style.

Feature worked on by Viktor Szakats.
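A minimal libcurl sketch for an SFTP download with compression requested (whether it is actually used depends on the server):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/file.txt");
      /* ask to use compression on the SSH connection, if the server agrees */
      curl_easy_setopt(curl, CURLOPT_SSH_COMPRESSION, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }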

SSLKEYLOGFILE

Peter Wu and Jay Satiro have worked on this feature that allows curl to store SSL session secrets in a file if this environment variable is set. This is normally the way you tell Chrome and Firefox to do this, and is extremely helpful when you want to wireshark and analyze a TLS stream.

This is still disabled by default due to its early days. Enable it by defining ENABLE_SSLKEYLOGFILE when building libcurl and set environment variable SSLKEYLOGFILE to a pathname that will receive the keys.

Numbers

This, the 169th curl release, contains 89 bug fixes done during the 51 days since the previous release.

47 contributors helped making this release, out of whom 18 are new.

254 commits were done since the previous release, by 26 authors.

The top-5 commit authors this release are:

  1. Daniel Stenberg (116)
  2. Johannes Schindelin (37)
  3. Patrick Monnerat (28)
  4. Jay Satiro (12)
  5. Dan Fandrich (10)

Thanks a lot everyone!

(picture from pixabay)

Some things to enjoy in curl 7.55.0

In this endless stream of frequent releases, the next release isn’t terribly different from the previous.

curl’s 167th release is called 7.55.0 and while the name or number isn’t standing out in any particular way, I believe this release has a few extra bells and whistles that makes it stand out a little from the regular curl releases, feature wise. Hopefully this will turn out to be a release that becomes the new “you should at least upgrade to this version” in the coming months and years.

Here are six things in this release I consider worthy of some special attention. (The full changelog.)

1. Headers from file

The command line options that allow users to pass on custom headers can now read a set of headers from a given file.

2. Binary output prevention

Invoke curl on the command line, give it a URL to a binary file and see it destroy your terminal by sending all that gunk to the terminal? No more.

3. Target independent headers

You want to build applications that use libcurl and build for different architectures, such as 32 bit and 64 bit builds, using the same installed set of libcurl headers? Didn’t use to be possible. Now it is.

4. OPTIONS * support!

Among HTTP requests, this is a rare beast. Starting now, you can tell curl to send such requests.

5. HTTP proxy use cleanup

Asking curl to use an HTTP proxy while doing a non-HTTP protocol would often behave in unpredictable ways since it wouldn’t do CONNECT requests unless you added an extra instruction. Now libcurl will assume CONNECT operations for all protocols over an HTTP proxy unless you use HTTP or FTP.

6. Coverage counter

The configure script now supports the option --enable-code-coverage. We now build all commits done on github with it enabled, run a bunch of tests and measure the test coverage data it produces – that is, how large a share of our source code is exercised by our tests. We push all coverage data to coveralls.io.

That’s a blunt tool, but it could help us identify parts of the project that we don’t test well enough. Right now it says we have a 75% coverage. While not totally bad, it’s not very impressive either.

Stats

This release ships 56 days since the previous one. Exactly 8 weeks, right on schedule. 207 commits.

This release contains 114 listed bug-fixes, including three security advisories. We list 7 “changes” done (new features basically).

41 individual contributors helped make this single release. Out of this bunch, 20 persons were new contributors and 24 authored patches.

283 files in the git repository were modified for this release. 51 files in the documentation tree were updated, and in the library 78 files were changed: 1032 lines inserted and 1007 lines deleted. 24 test cases were added or modified.

The top 5 commit authors in this release are:

  1. Daniel Stenberg
  2. Marcel Raad
  3. Jay Satiro
  4. Max Dymond
  5. Kamil Dudka

c-ares 1.13.0

The c-ares project may not be very fancy or make a lot of noise, but it steadily moves forward and boasts an amazing 95% code coverage in the automated tests.

Today we release c-ares 1.13.0.

This time there are basically three notable things to take home from this, apart from the 20-something bug-fixes.

CVE-2017-1000381

Due to an oversight there was an API function that we didn’t fuzz and yes, it was found out to have a security flaw. If you ask a server for a NAPTR DNS field and that response comes back crafted carefully, it could cause c-ares to access memory out of bounds.

All details for CVE-2017-1000381 on the c-ares site.

(Side-note: this is the first CVE I’ve received with a 7(!)-digit number to the right of the year.)

cmake

Now c-ares can optionally be built using cmake, in addition to the existing autotools setup.

Virtual socket IO

If you have a special setup or custom needs, c-ares now allows you to fully replace all the socket IO functions with your own custom set with ares_set_socket_functions.

What’s new in curl

We just shipped our 150th public release of curl, on December 2, 2015.

curl 7.46.0

One hundred and fifty public releases done during almost 18 years makes a little more than 8 releases per year on average. In mid November 2015 we also surpassed 20,000 commits in the git source code repository.

With the constant and never-ending release train concept of just another release every 8 weeks that we’re using, no release is ever the grand big next release with lots of bells and whistles. Instead we just add a bunch of things, fix a bunch of bugs, release and then loop. With no fanfare and without any press-stopping marketing events.

So, instead of just looking at what was made in this last release, because you can check that out yourself in our changelog, I wanted to take a look at the last two years and have a moment to show you what we have done in this period. curl and libcurl are the sort of tool and library that people use for a long time and a large number of users have versions installed that are far older than two years and hey, now I’d like to tease you and tell you what can be yours if you take the step straight into the modern day curl or libcurl.

Thanks

Before we dive into the real contents, let’s not fool ourselves and think that we managed these years and all these changes without the tireless efforts and contributions from hundreds of awesome hackers. Thank you everyone! I keep calling myself lead developer of curl but it truly would not exist without all the help I get.

We keep getting a steady stream of new contributors and quality patches. Our problem is rather to review and receive the contributions in a timely manner. In a personal view, I would also like to just add that during these two last years I’ve had support from my awesome employer Mozilla that allows me to spend a part of my work hours on curl.

What happened the last 2 years in curl?

We released curl and libcurl 7.34.0 on December 17th 2013 (12 releases ago). What did we do since then that could be worth mentioning? Well, a lot, and then I’m going to mostly skip the almost 900 bug fixes we did in this time.

Many security fixes

Almost half (18 out of 37) of the security vulnerabilities reported for our project were reported during the last two years. It may suggest a greater focus and more attention put on those details by users and developers. Security reports are a good thing, it means that we address and find problems. Yes it unfortunately also shows that we introduce security issues at times, but I consider that secondary, even if we of course also work on ways to make sure we’ll do this less in the future.

URL specific options: --next

A pretty major feature that was added to the command line tool without much bang or whistles. You can now add --next as a separator on the command line to “group” options for specific URLs. This allows you to run multiple different requests on URLs that still can re-use the same connection and so on. It opens up lots more fun and creative uses of curl and has in fact been requested on and off for the project’s entire life time!

HTTP/2

There’s a new protocol version in town and during the last two years it was finalized and its RFC went public. curl and libcurl supports HTTP/2, although you need to explicitly ask for it to be used still.

HTTP/2 is binary, multiplexed, uses compressed headers and offers server push. Since the command line tool is still serially sending and receiving data, the multiplexing and server push features can right now only get fully utilized by applications that use libcurl directly.

HTTP/2 in curl is powered by the nghttp2 library and it requires a fairly new TLS library that supports the ALPN extension to be fully usable for HTTPS. Since the browsers only support HTTP/2 over HTTPS, most HTTP/2 in the wild so far is done over HTTPS.

We’ve gradually implemented and provided more and more HTTP/2 features.
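For libcurl users, a minimal sketch of explicitly asking for HTTP/2, which as mentioned you still need to do; the version constant here is the one I believe applies to this era of libcurl:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* attempt HTTP/2, falling back to HTTP/1.1 if the server can't */
      curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_2_0);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }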

Separate proxy headers

For a very long time, there was no way to tell curl which custom headers to use when talking to a proxy and which to use when talking to the server. You’d just add a custom header to the request. This was never good and we eventually made it possible to specify them separately and then after the security alert on the same thing, we made it the default behavior.

Option man pages

We’ve had two user surveys as we now try to make it an annual spring tradition for the project. To learn what people use, what people think, what people miss etc. Both surveys have told us users think our documentation needs improvement and there has since been an extra push towards improving the documentation to make it more accessible and more readable.

One way to do that has been to introduce separate, stand-alone, versions of man pages for each and every libcurl option. For the functions curl_easy_setopt, curl_multi_setopt and curl_easy_getinfo. Right now, that means 278 new man pages that are easier to link directly to, easier to search for with Google etc and they are now written with more text and more details for each individual option. In total, we now host and maintain 351 individual man pages.

The boringssl / libressl saga

The Heartbleed incident of April 2014 was a direct reason for libressl being created as a new fork of OpenSSL and I believe it also helped BoringSSL to find even more motivation for its existence.

Subsequently, libcurl can be built to use any one of these three forks based on the same origin. This is however not accomplished without some amount of agony.

SSLv3 is also disabled by default

The continued number of problems detected in SSLv3 finally made it get disabled by default in curl too (together with SSLv2, which has been disabled by default for a while already). Now users need to explicitly ask for it in case they need it, and in some cases the TLS libraries do not even support them anymore. You may need to build your own binary to get the support back.

Everyone should move up to TLS 1.2 as soon as possible. HTTP/2 also requires TLS 1.2 or later when used over HTTPS.

support for the SMB/CIFS protocol

For the first time in many years we’ve introduced support for a new protocol, using the SMB:// and SMBS:// schemes. Maybe not the most requested feature out there, but it is another network protocol for transfers…

code of conduct

Triggered by several bad examples in other projects, we merged a code of conduct document into our source tree without much of a discussion, because this is the way this project always worked. This just makes it clear to newbies and outsiders in case there would ever be any doubt. Plus it offers a clear text saying what’s acceptable or not in case we’d ever come to a point where that’s needed. We’ve never needed it so far in the project’s very long history.

--data-raw

Just a tiny change but more a symbol of the many small changes and advances we continue doing. The --data option that is used to specify what to POST to a server can take a leading ‘@’ symbol followed by a file name, but that also makes it tricky to actually send a literal ‘@’, plus it forces scripts etc to make sure no stray ‘@’ slips in.

--data-raw was introduced to only accept a string to send, without any ability to read from a file and not using ‘@’ for anything. If you include a ‘@’ in that string, it will be sent verbatim.

attempting VTLS as a lib

We support eleven different TLS libraries in the curl project – that is probably more than all other transfer libraries in existence do. The way we do this is by providing an internal API for TLS backends, and we call that ‘vtls’.

In 2015 we made an effort to turn that into its own sub-project to allow other open source projects and tools to use it. We managed to find a few hackers from the wget project also interested and willing to participate. Unfortunately I didn’t feel I could put enough effort or time into it to drive it forward much, and while there was some initial work done by others it soon was obvious it wouldn’t go anywhere and we pulled the plug.

The internal vtls glue remains fine though!

pull-requests on github

Not really a change in the code itself but still a change within the project. In March 2015 we changed our policy regarding pull-requests done on github. The effect has been a huge increase in number of pull-requests and a slight shift in activity away from the mailing list over to github more. I think it has made it easier for casual contributors to send enhancements to the project but I don’t have any hard facts backing this up (and I wouldn’t know how to measure this).

… as mentioned in the beginning, there have also been hundreds of smaller changes and bug fixes. What fun will you help us make reality in the next two years?