curl 7.61.1 comes with only bug-fixes

Already at the time when we shipped the previous release, 7.61.0, I had decided I wanted to do a patch release next. We had some pretty serious HTTP/2 bugs in the pipe to get fixed and there were a bunch of other unresolved issues also awaiting their treatments. Then I took off on vacation and the HTTP/2 fixes took a longer time than expected to get on top of, so I subsequently decided that this would become a bug-fix-only release cycle. No features and no changes would be merged into master. So this is what eight weeks of only bug-fixes can look like.

Numbers

the 176th release
0 changes
56 days (total: 7,419)
102 bug fixes (total: 4,640)
151 commits (total: 23,439)
0 new curl_easy_setopt() options (total: 258)
0 new curl command line options (total: 218)
46 contributors, 21 new (total: 1,787)
27 authors, 14 new (total: 612)
1 security fix (total: 81)

Notable bug-fixes this cycle

Among the many small fixes that went in, I feel the following ones deserve a little extra highlighting...

NTLM password overflow via integer overflow

This latest security fix (CVE-2018-14618) is almost identical to an earlier one we fixed back in 2017 called CVE-2017-8816, and is just as silly...

The internal function Curl_ntlm_core_mk_nt_hash() takes a password argument, the same password that is passed to libcurl from an application. It then gets the length of that password and allocates a memory area that is twice the length, since it needs to expand the password. Due to a lack of checks, this calculation will overflow and wrap on a 32 bit machine if a password that is longer than 2 gigabytes is passed to this function. It will then lead to a very small memory allocation, followed by an attempt to write a very long password to that small memory buffer. A heap memory overflow.
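To illustrate the class of bug, here is a minimal sketch in the spirit of that code (this is not the actual Curl_ntlm_core_mk_nt_hash() implementation, just the dangerous pattern with the essence of the fix added):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* The password is expanded to two bytes per input character. Without a
   sanity check, the size calculation wraps on a 32 bit size_t when the
   password is longer than 2 gigabytes. */
unsigned char *expand_password(const char *password)
{
  size_t len = strlen(password);

  /* the essence of the fix: refuse absurd lengths before doing the math */
  if(len > SIZE_MAX / 2)
    return NULL;

  unsigned char *buf = malloc(len * 2); /* would wrap to a tiny allocation
                                           without the check above */
  if(!buf)
    return NULL;

  for(size_t i = 0; i < len; i++) {
    buf[2 * i] = (unsigned char)password[i]; /* would then write far past
                                                the tiny buffer */
    buf[2 * i + 1] = 0;
  }
  return buf;
}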

Some mitigating details: most architectures support 64 bit size_t these days. Most applications won't allow passing in passwords that are two gigabytes long.

This bug has been around since libcurl 7.15.4, released back in 2006!

Oh, and on the curl web site we now use the CVE number in the actual URL for all the security vulnerabilities to make them easier to find and refer to.

HTTP/2 issues

This was actually a whole set of small problems that together made the new crawler example not work very well - until fixed. I think it is safe to say that HTTP/2 users of libcurl have previously used it in a pretty "tidy" fashion, because I believe I corrected four or five separate issues that made it misbehave. It was rather pure luck that it has kept working as well as it has for past users!

Another HTTP/2 bug we ran into recently involved us discovering a little quirk in the underlying nghttp2 library, which in some very special circumstances would refuse to blank out the stream id to struct pointer mapping, so that it would later deliver a pointer to a stale (already freed) struct. This is now fixed in nghttp2, shipped in its recent 1.33.0 release.

Windows send-buffer tuning

Making uploads on Windows between two and seven times faster than before is certainly almost like a dream come true. This is what 7.61.1 offers!

Upload buffer size increased

In tests triggered by the fix above, it was noticed that curl did not meet our performance expectations when doing uploads on really high speed networks, notably on localhost or when using SFTP. We could easily double the speed by just increasing the upload buffer size. Starting now, curl allocates the upload buffer on demand (since many transfers don't need it), and allocates a 64KB buffer instead of the previous 16KB. It has been using 16KB since 2001, and with the on-demand setup and the fact that computer memories have grown a bit over those 17 years, I think the change is well motivated.

A future curl version will surely allow the application to set this upload buffer size. The receive buffer size can already be set.
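For reference, the receive buffer size is set with CURLOPT_BUFFERSIZE. A minimal sketch (the URL is just a placeholder):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* ask libcurl for a 100KB receive buffer instead of the default */
    curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 102400L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}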

Darwinssl goes ALPN

While perhaps in the grey area of what a bugfix can be, this fix allows curl to negotiate ALPN using the darwinssl backend, which by extension means that curl built to use darwinssl can now - finally - do HTTP/2 over HTTPS! Darwinssl is also known under the name Secure Transport, the native TLS library on macOS.

Note however that macOS' own curl builds that Apple ships are no longer built to use Secure Transport; they use libressl these days.

The Auth Bearer fix

When we added support for Auth Bearer tokens in 7.61.0, we accidentally caused a regression that now is history. This bug seems to have hit git users in particular, for some reason.

-OJ regression

The introduction of bold headers in 7.61.0 caused a regression which made a command line like "curl -O -J http://example.com/" fail, even if a Content-Disposition: header with a correct file name was passed on.

Cookie order

Old readers of this blog may remember my ramblings on cookie sort order from back in the days when we worked on what eventually became RFC 6265.

Anyway, we never did take all aspects of that spec into account when we sort cookies on the HTTP headers sent off to servers, and it has very rarely caused users any grief. Still, now Daniel Gustafsson did a glorious job and tweaked the code to also take creation order into account, exactly like the spec says we should! There are still some gotchas in this, but at least it should be much closer to what the spec says and what some sites might assume a cookie-using client should do...
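To spell out the ordering the spec asks for: cookies with longer paths go first, and among cookies with equally long paths, the one created first goes first. A small illustrative comparison function (a sketch of the idea, not the actual lib/cookie.c change):

#include <stddef.h>
#include <string.h>

struct cookie {
  char *name;
  char *value;
  char *path;
  unsigned int creationtime; /* lower value means created earlier */
};

/* comparison function implementing the RFC 6265 send order */
static int cookie_cmp(const struct cookie *a, const struct cookie *b)
{
  size_t la = strlen(a->path);
  size_t lb = strlen(b->path);

  if(la != lb)
    return (la < lb) ? 1 : -1;  /* longer path sorts first */

  if(a->creationtime != b->creationtime)
    return (a->creationtime < b->creationtime) ? -1 : 1; /* earlier creation first */

  return 0;
}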

Unbold properly

Yet another regression. Remember how curl 7.61.0 introduced the cool bold headers in the terminal? Turns out I of course had my escape sequences done wrong, so in a large number of terminal programs the end-of-bold sequence ("CSI 21 m") that curl sent didn't actually switch off the bold style. This would lead to the terminal either staying bold all the time or, on some terminals, showing funny colors etc.

In 7.61.1, curl sends the "switch off all styles" code ("CSI 0 m") that hopefully should work better for people!
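For the curious, the difference is just the final sequence ("CSI" being the two bytes ESC and '['). A tiny illustration, assuming a VT100-style terminal:

#include <stdio.h>

int main(void)
{
  /* what 7.61.0 sent: CSI 1 m turns bold on, CSI 21 m is supposed to end
     it but plenty of terminals don't honor that code */
  printf("\x1b[1mbold header\x1b[21m often still bold here\n");

  /* what 7.61.1 sends: CSI 0 m resets all styles, which works everywhere */
  printf("\x1b[1mbold header\x1b[0m back to normal here\n");
  return 0;
}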

Next release!

We've held up a whole bunch of pull requests to ship this patch-only release. Once this is out the door, we'll open the flood gates and accept the nearly 10 changes that are eagerly awaiting merge. Expect my next release blog post to mention several new things in curl!

More HTTP framing attempts

Previously, in my exciting series "improving the HTTP framing checks in Firefox" we learned that I landed a patch, got it backed out, struggled to improve the checks and finally landed the fixed version only to eventually get that one backed out as well.

And now I've landed my third version. The amendment I did this time:

When receiving HTTP content that is content-encoded and compressed, I learned that with deflate compression there is basically no good way for us to know if the content gets prematurely cut off. Deflate streams lack the footer too often for it to make any sense to check for one. gzip streams however end with a footer, so it is easier to reliably detect when they are incomplete. (As was discovered before, the Content-Length: is far too often not updated by the server, so it instead wrongly shows the uncompressed size.)
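As a rough sketch of the idea (using zlib directly and heavily simplified compared to the actual Firefox patch): a gzip body can be considered complete only if the decompressor actually reaches end-of-stream, which it only does after seeing the footer.

#include <stddef.h>
#include <string.h>
#include <zlib.h>

/* returns 1 if buf/len holds a complete gzip stream, 0 if it looks
   prematurely cut off (or fails to decompress at all) */
static int gzip_is_complete(const unsigned char *buf, size_t len)
{
  unsigned char out[16384];
  z_stream z;
  int complete = 0;

  memset(&z, 0, sizeof(z));
  /* windowBits 15 + 32 asks zlib to auto-detect gzip/zlib headers */
  if(inflateInit2(&z, 15 + 32) != Z_OK)
    return 0;

  z.next_in = (Bytef *)buf;
  z.avail_in = (uInt)len;

  for(;;) {
    int rc;
    z.next_out = out;
    z.avail_out = sizeof(out);
    rc = inflate(&z, Z_NO_FLUSH);
    if(rc == Z_STREAM_END) {
      complete = 1;  /* the footer was there: the stream is whole */
      break;
    }
    if(rc != Z_OK || z.avail_out != 0)
      break;         /* error, or the input ran out before the footer */
  }

  inflateEnd(&z);
  return complete;
}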

This (deflate vs gzip) knowledge is now used by the patch, meaning that deflate compressed downloads can be cut off without the browser noticing...

Will this version of the fix actually stick? I don't know. There's lots of bad voodoo out there in the HTTP world and I'm putting my finger right in the middle of some of it with this change. I'm pretty sure I've not written my last blog post on this topic just yet... If it sticks this time, it should show up in Firefox 39.

Apple’s modified CA cert handling and curl

I tweeted about finding a change in Apple's version of curl that I haven't seen any public patch for. Apple otherwise hosts a whole slew of curl patches which they never discuss with us, but they do make them public so we can see what they did.

I was trying to help out a fellow curl user on IRC (we're in #curl on freenode, come see us) who was trying to understand some funny effects of running curl against an HTTPS site, and he showed me the output from a "curl -v" log. The verbose log was curiously different from mine (same curl version, built by myself on Linux). My conclusion was that something was different in the Apple version.

The user's log said:

* About to connect() to host.example.com port 443 (#0)
*   Trying 1.2.3.4... connected
* Connected to host.example.com (1.2.3.4) port 443 (#0)
* SSLv3, TLS handshake, Client hello (1):

... while my command against the same site said:

* About to connect() to host.example.com port 443 (#0)
*   Trying 1.2.3.4... connected
* Connected to host.example.com (1.2.3.4) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
* SSLv3, TLS handshake, Client hello (1):

(I've bolded the part of my output that wasn't in the mac version; the real host name and IP have been changed.)

It seems I was wrong however.

The output above is only shown if libcurl itself sets the CA cert path in OpenSSL, and it seems the Mac version doesn't. Somehow they get the CA certs loaded into libcurl differently.

So ok, maybe they didn't modify curl, but they certainly changed how curl uses CA certs, and they did this by modifying OpenSSL: their version of OpenSSL now defaults to using their CA cert bundle. The end result for me is still the same though: I have no idea how CA certs work with curl on Mac, so it leaves me with the unfortunate situation where I can't help fellow curl users when they have CA cert problems on a Mac.

It also leaves me very curious about what --cacert does exactly on the mac version of curl.

OpenSSL is patched. Apparently it now works so that if the "normal" x509 validation fails, and TrustEvaluationAgent (TEA) is enabled, it will attempt to use the TEA to validate the certificate. The Apple source code to read through for this is x509_vfy_apple.c in their patched OpenSSL tree. It is also possible to skip the TEA verification in OpenSSL by setting an environment variable, so that we can still have curl on mac act "as default" with a command line like:

$ env OPENSSL_X509_TEA_DISABLE=1 curl https://www.example.com/

Finally: yes, curl is released under an MIT license. They're perfectly allowed to do whichever of these actions they want. I know this, and I chose the MIT license fully aware that any company can take the code, modify it and never return any changes. I'm not arguing against anyone's rights to do this with curl.

Thank you, friendly anonymous helper for helping me straighten out my findings!

Distros Going Their Own Way

Lemme take the opportunity to express my serious dislike of a particular habit in the open source world, frequently seen performed by various distros (and by distro I then mean in the wider sense, not limited to only Linux distros):

They fix problems by patching code in projects they ship/offer, but they don't discuss the problem upstream and they don't ship their patch upstream. In fact, in one particular case in a project near to me (make a guess!) I've even tried to contact the patch author(s) over the years, but they've never responded, so even though I know of their patch, I can't get anyone to explain to me why they think they need it...

So hello hey you packagers working on distros! When you get a bug report that clearly is a problem with the particular tool/project, and that isn't really a problem with your particular distro's way of doing things, please please please forward it upstream or at least involve the actual project team behind the tool in the discussions around the bug and possible solutions. And if you don't do that, the very least you should do is to make sure the patches you make and apply are forwarded upstream to the project team.

How else are we gonna be able to improve the project if you absorb the bug reports and you keep fixes hidden? That's not a very open source'ish attitude, methinks.

Recent example that triggered this post.