Tag Archives: Security

More curl bug bounty

Together with Bountygraph, the curl project now offers money to security researchers for reporting security vulnerabilities to us.

https://bountygraph.com/programs/curl

The idea is that sponsors donate money to the bounty fund, and we will use that fund to hand out rewards for reported issues. It is a way for the curl project to help compensate researchers for the time and effort they spend helping us improve our security.

Right now the bounty fund is very small as we just started this project, but hopefully we can get a few sponsors interested and soon offer “proper” rewards at decent levels in case serious flaws are detected and reported here.

If you’re a company using curl or libcurl and value security, you know what you can do…

Even before this, people who reported security problems could ask for money from Hackerone’s IBB program, and this new program comes in addition to that – though you won’t be able to receive money from both bounties for the same issue.

After I announced this program on Twitter yesterday, I did an interview with Arif Khan for latesthackingnews.com. Here’s what I had to say:

A few questions

Q: You have launched a self-managed bug bounty program for the first time. Earlier, IBB used to pay out for most security issues in libcurl. How do you think the idea of self-managing a bug bounty program, which has some obvious challenges such as active funding, might eventually succeed?

First, this bounty program is run on bountygraph.com so I wouldn’t call it “self-managed”, since we’re standing on a lot of infrastructure set up and handled by others.

To me, this is an attempt to make a bounty program that is more clearly visible as a curl bounty program. I love Hackerone and the IBB program for what they offer, but it is A) very generic, so the fact that you can get money for curl flaws there is not easy to figure out and there’s no obvious way for companies to sponsor curl security research, and B) they are very picky about which flaws they pay money for (“only critical flaws”) and I hope this program can be a little more accommodating – assuming we get sponsors of course.

Will it work and make any differences compared to IBB? I don’t know. We will just have to see how it plays out.

Q: How do you think the crowdsourcing model is going to help this bug bounty program?

It’s crucial. If nobody sponsors this program, there will be no money to do payouts with, and without payouts there are no bounties. Then I’d call the curl bounty program a failure. But we’re also not in a hurry. We can give this some time to see how it works out.

My hope is though that because curl is such a widely used component, we will get sponsors interested in helping out.

Q: What would be the maximum reward for the most critical, a.k.a. P0, security vulnerabilities in this program?

Right now we have a total of 500 USD to hand out. If you report a P0 bug now, I suppose you’ll get that. If we do get sponsors, I’m hoping we should be able to raise that reward level significantly. I might be very naive, but I think we won’t have to pay for very many critical flaws.

It goes back to the previous question: this model will only work if we get sponsors.

Q: Do you feel there’s a risk that bounty hunters could turn malicious?

I don’t think this bounty program particularly increases or reduces that risk to any significant degree. Malicious hunters probably already exist and I would assume that blackhat researchers might be able to extract more money on the less righteous markets if they’re so inclined. I don’t think we can “outbid” such buyers with this program.

Q: How will this new program mutually benefit security researchers as well as the open source community around curl as a whole?

Again, assuming that this works out…

Researchers can get compensated for the time and effort they spend helping the curl project produce and provide a more secure product to the world.

curl is used by virtually every connected device in the world in one way or another, affecting every human in the connected world on a daily basis. By making sure curl is secure we keep users safe; users of countless devices, applications and networked infrastructure.

Update: just hours after this blog post, Dropbox chipped in 32,768 USD to the curl bounty fund…

curl 7.61.0

Yet again we say hello to a new curl release that has been uploaded to the servers and sent off into the world. Version 7.61.0 (full changelog). It has been exactly eight weeks since 7.60.0 shipped.

Numbers

the 175th release
7 changes
56 days (total: 7,419)
88 bug fixes (total: 4,538)
158 commits (total: 23,288)
3 new curl_easy_setopt() options (total: 258)
4 new curl command line options (total: 218)
55 contributors, 25 new (total: 1,766)
42 authors, 18 new (total: 596)
1 security fix (total: 81)

Security fixes

SMTP send heap buffer overflow (CVE-2018-0500)

A stupid heap buffer overflow that can be triggered when the application asks curl to use a smaller download buffer than the default and the server then sends a larger file – over SMTP. Details.

New features

The trailing dot zero in the version number reveals that we added some news this time around – again.

More microsecond timers

Over several recent releases we’ve introduced ways to extract timer information from libcurl that use integers to return time information with microsecond resolution, as a complement to the ones we already offer using doubles. This gives better precision and avoids forcing applications to use floating point math.
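
For illustration, here is a minimal sketch of reading one of the integer timers after a completed transfer (the URL is just a placeholder):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        if(curl_easy_perform(curl) == CURLE_OK) {
          curl_off_t total_us;
          /* integer counterpart of CURLINFO_TOTAL_TIME, in microseconds */
          curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME_T, &total_us);
          printf("Total: %" CURL_FORMAT_CURL_OFF_T " microseconds\n",
                 total_us);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }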

Bold headers

The curl tool now outputs header names using a bold typeface!

Bearer tokens

The auth support now allows applications to set the specific bearer tokens to pass on.
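
A sketch of how an application could use this, given an already created easy handle called curl (the token string is of course made up):

    /* ask for OAuth 2.0 Bearer authentication and set the token */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_BEARER);
    curl_easy_setopt(curl, CURLOPT_XOAUTH2_BEARER, "my-secret-token");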

TLS 1.3 cipher suites

TLS 1.3 uses a different set of cipher suites, with different names, than previous TLS versions, so an application that doesn’t know whether the server supports TLS 1.2 or TLS 1.3 can’t set the 1.3 ciphers with the single existing option, since that one uses 1.2 names that won’t work for 1.3. The new option for libcurl is called CURLOPT_TLS13_CIPHERS.
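
A sketch of setting cipher suites for both protocol generations on the same easy handle; the suite names here are only examples:

    /* suites to use if TLS 1.2 or earlier gets negotiated */
    curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST,
                     "ECDHE-RSA-AES128-GCM-SHA256");
    /* suites to use if TLS 1.3 gets negotiated */
    curl_easy_setopt(curl, CURLOPT_TLS13_CIPHERS,
                     "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256");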

Disallow user name in URL

There’s now a new option that can tell curl to not acknowledge and support user names in the URL. User names in URLs can bring some security issues since they’re often sent or stored in plain text, plus if .netrc support is enabled, a script accepting externally set URLs could risk exposing the privately set password.
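
A sketch of using it on URLs an application receives from elsewhere; url_from_external_source is a hypothetical variable:

    /* make the transfer fail instead of obeying a user name in the URL */
    curl_easy_setopt(curl, CURLOPT_DISALLOW_USERNAME_IN_URL, 1L);
    curl_easy_setopt(curl, CURLOPT_URL, url_from_external_source);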

Awesome bug-fixes this time

Some of my favorites include…

Resolve local host names faster

When curl is built to use the threaded resolver, which is the default choice, it will now resolve locally available host names faster. “Locally” as in present in /etc/hosts or in the OS cache, etc.

Use latest PSL and refresh it periodically

curl can now be built to use an external PSL (Public Suffix List) file so that it can get updated independently of the curl executable and thus better keep in sync with the list and the reality of the Internet.

Rumors say there are Linux distros that might start providing and updating the PSL file in a separate package, much like they provide CA certificates already.

fnmatch: use the system one if available

The somewhat rare FTP wildcard matching feature always had its own internal fnmatch implementation, but now we’ve finally ditched that in favour of the system fnmatch() function for platforms that have one. It shrinks the footprint and removes an attack surface – we’ve had a fair share of tiresome fuzzing issues in the custom fnmatch code.

axTLS: not considered fit for use

In an effort to slowly increase our requirements on the third party code that we might tell users to build curl with, we’ve made curl fail to build if asked to use the axTLS backend. This is because we have serious doubts about the quality and commitment of that code and project. This is just step one. If no one yells and fights for axTLS’ future in curl going forward, we will remove all traces of axTLS support from curl exactly six months after step one was merged. There are plenty of other and better TLS backends to use!

Detailed in our new DEPRECATE document.

TLS 1.3 used by default

When negotiating TLS version in the TLS handshake, curl will now allow TLS 1.3 by default. Previously you needed to explicitly allow that. TLS 1.3 support is not yet present everywhere so it will depend on the TLS library and its version that your curl is using.

Coming up?

We have several changes and new features lined up for next release. Stay tuned!

First, we will however most probably schedule a patch release, as we have two rather nasty HTTP/2 bugs filed that we want fixed. Once we have them fixed in a way we like, I think we’d like to see those go out in a patch release before the next pending feature release.

The curl 7 series reaches 60

curl 7.60.0 is released. Remember 7.59.0? This latest release cycle was a week longer than normal since the last was one week shorter and we had this particular release date adapted to my traveling last week. It gave us 63 days to cram things in, instead of the regular 56 days.

7.60.0 is a crazy version number in many ways. We’ve been working on the version 7 series since virtually forever (the year 2000) and there’s no version 8 in sight any time soon. This is the 174th curl release ever.

I believe we shouldn’t allow the minor number to go above 99 (because I think it will cause serious confusion among users) so we should come up with a scheme to switch to version 8 before 7.99.0 gets old. If we keep doing a new minor version every eight weeks, which seems like the fastest route, math tells us that’s a mere 6 years away.

Numbers

In the 63 days since the previous release, we have done and had..

3 changes
111 bug fixes (total: 4,450)
166 commits (total: 23,119)
2 new curl_easy_setopt() options (total: 255)
1 new curl command line option (total: 214)
64 contributors, 36 new (total: 1,741)
42 authors (total: 577)
2 security fixes (total: 80)

What good does 7.60.0 bring?

Our tireless and fierce army of security researchers keeps hammering away at every angle of our code and this has again unveiled vulnerabilities in previously released curl code:

  1. FTP shutdown response buffer overflow: CVE-2018-1000300

When you tell libcurl to use a larger buffer size, that larger buffer size is not used for the shutdown of an FTP connection, so if the server then sends back a huge response during that sequence, it would overflow a heap-based buffer.

  2. RTSP bad headers buffer over-read: CVE-2018-1000301

The header parser function would sometimes not restore a pointer back to the beginning of the buffer, which could lead to a subsequent function reading out of buffer and causing a crash or potential information leak.

There are also two new features introduced in this version:

HAProxy protocol support

HAProxy has pioneered this simple protocol for clients to pass on meta-data to the server about where they come from; designed to allow systems to chain proxies / reverse-proxies without losing information about the originating client. Now you can make your libcurl-using application switch this on with CURLOPT_HAPROXYPROTOCOL and from the command line with curl’s new --haproxy-protocol option.
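
In a libcurl application it is a single boolean option, roughly like this sketch (the host name is made up):

    /* send a HAProxy PROXY protocol header before the regular request */
    curl_easy_setopt(curl, CURLOPT_HAPROXYPROTOCOL, 1L);
    curl_easy_setopt(curl, CURLOPT_URL, "https://service.example.com/");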

Shuffling DNS addresses

Over six years ago, I blogged on how round robin DNS doesn’t really work these days. Once upon a time the gethostbyname() family of functions actually returned addresses in a sort of random fashion, which made clients use them in an almost random fashion and therefore spread out over the different addresses. Since getaddrinfo() took over as the name resolving function, it also introduced address sorting and prioritizing, in a way that effectively breaks the round robin approach.

Now, you can get this feature back with libcurl. Set CURLOPT_DNS_SHUFFLE_ADDRESSES to have the list of addresses shuffled once resolved, before they’re used. If you’re connecting to a service that offers several IP addresses and you want to connect to one of those addresses in a semi-random fashion, this option is for you.

There’s no command line option to switch this on. Yet.
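
In code it is a plain on/off toggle; a sketch with a made-up host name:

    /* shuffle the resolved address list before connecting */
    curl_easy_setopt(curl, CURLOPT_DNS_SHUFFLE_ADDRESSES, 1L);
    curl_easy_setopt(curl, CURLOPT_URL, "https://multi-homed.example.com/");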

Bug fixes

We did many bug fixes for this release as usual, but some of my favorite ones this time around are…

improved pending transfers for HTTP/2

libcurl-using applications that add more transfers than what can be sent over the wire immediately (usually because the application has set some limitation on the parallelism libcurl will do) can have transfers held “pending” by libcurl. They’re basically kept in a separate queue until there’s a chance to send them off. libcurl will then attempt to start them when the streams that are in progress end.

The algorithm for retrying the pending transfers was quite naive and “brute-force”, which made it terribly slow and ineffective when there were many transfers waiting in the pending queue. This slowed down the transfers unnecessarily.

With the fixes we’ve landed in 7.60.0, the algorithm is less stupid, which leads to much less overhead and, for this setup, much faster transfers.
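
For context, one way an application ends up with pending transfers is by capping the parallelism on the multi handle, like in this sketch (multi is an assumed CURLM handle):

    /* allow at most 4 simultaneous connections; easy handles added
       beyond that are kept in the pending queue until a slot frees up */
    curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 4L);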

curl_multi_timeout values with threaded resolver

When using a libcurl version that is built to use a threaded resolver, there’s no socket to wait for during the name resolving phase, so we’ve often recommended users to just wait “a short while” during this interval. That has always been a weakness and an unfortunate situation.

Starting now, curl_multi_timeout() will return suitable timeout values during this period so that users will no longer have to re-implement that logic themselves. The timeouts will be slowly increasing to make sure fast resolves are detected quickly but slow resolves don’t consume too much CPU.
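
A sketch of how an event loop can use this (multi is an assumed CURLM handle, error checking omitted):

    long timeout_ms;
    /* ask libcurl how long we may wait; with the threaded resolver this
       now returns sensible values during the name resolve phase too */
    curl_multi_timeout(multi, &timeout_ms);
    if(timeout_ms < 0)
      timeout_ms = 1000; /* libcurl has no opinion; use our own default */
    /* wait for socket activity at most timeout_ms milliseconds, then
       call curl_multi_perform() again */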

much faster cookies

The cookie code in libcurl was keeping them all in a linear linked list. That’s fine for small amounts of cookies or perhaps if you don’t manipulate them much.

Users with several hundred cookies, or even thousands, will in 7.60.0 notice a speed increase that in some situations is of several orders of magnitude, now that the internal representation has changed to use hash tables and some good cleanups were made.

HTTP/2 GOAWAY-handling

We figured out some problems in libcurl’s handling of GOAWAY, like when an application wants to do a bunch of transfers over a connection that suddenly gets a GOAWAY, so that libcurl needs to create a new connection to do the rest of the pending transfers over.

Turns out nginx ships with a config option named http2_max_requests that sets the maximum number of requests it allows over the same connection before it sends GOAWAY over it (and it defaults to 1000). This option isn’t very well explained in their docs and it seems users don’t really know what values are good to set it to, so this is probably the primary reason clients see GOAWAYs where there’s no apparent good reason for them.

Setting the value to a ridiculously low value at least helped me debug this problem and improve how libcurl deals with it!

Repair non-ASCII support

We’ve supported transfers with libcurl on non-ASCII platforms since early 2007. Non-ASCII here basically means EBCDIC, but the code hasn’t been limited to those.

However, since this is used by only a small number of users and our test infrastructure doesn’t exercise this feature well enough, we slipped recently and broke libcurl for the non-ASCII users. Work was put in and changes were landed to make sure that libcurl works again on these systems!

Enjoy 7.60.0! In 56 days there should be another release to play with…

GAAAAAH

That’s the thought that ran through my head when I read the email I had just received.

GAAAAAAAAAAAAH

You know the feeling when the realization hits you that you did something really stupid? And you did it hours ago and people already noticed, so it’s too late to pretend it didn’t happen or try to cover it up and whistle innocently. Nope, none of those options were available anymore. The truth was out there.

I had messed up royally.

What triggered this sudden journey of emotions and sharp sense of pain in my soul, was an email I received at 10:18, Friday March 9 2018. The encrypted email pointed out to me in clear terms that there was information available publicly on the curl web site about the security vulnerabilities that we intended to announce in association with the next curl release, on March 21. (The person who emailed me is a member of a group that was informed by me about these issues ahead of time.)

In the curl project, we never reveal nor show any information about known security flaws until we ship fixes for them and publish their corresponding security advisories that explain the flaws, the risks, the fixes and work-arounds in detail. This of course in the name of keeping users safe. We don’t want bad guys to learn about problems and flaws until we also offer fixes for them. That is, unless you screw up like me.

It took me a few minutes until I paused the work I was doing at the moment and actually read the email, but once I did I acted immediately and at 10:24 I had reverted the change on the web site and purged the URL from the CDN so the information was no longer publicly visible there.

The entire curl web site is however kept in a public git repository, so while the sensitive information was no longer immediately visible on the site, it was still out of the bag and there was just no taking it back. Not to mention that we don’t know how many people had already updated their git clones etc.

I pushed the particular file containing the “extra information” to the web site’s git repository at 01:26 CET the same early morning and since the web site updates itself in a cronjob every 20 minutes we know the information became available just after 01:40. At which time I had already gone to bed.

The sensitive information was displayed on the site for 8 hours and 44 minutes. The security page table showed these lines at the top:

# | Vulnerability | Date | First | Last | CVE | CWE
78 | RTSP RTP buffer over-read | February 20, 2018 | 7.20.0 | 7.58.0 | CVE-2018-1000122 | CWE-126: Buffer Over-read
77 | LDAP NULL pointer dereference | March 06, 2018 | 7.21.0 | 7.58.0 | CVE-2018-1000121 | CWE-476: NULL Pointer Dereference
76 | FTP path trickery leads to NIL byte out of bounds write | March 21, 2018 | 7.12.3 | 7.58.0 | CVE-2018-1000120 | CWE-122: Heap-based Buffer Overflow

I only revealed the names of the flaws and their corresponding CWE (Common Weakness Enumeration) numbers; the full advisories were thankfully not exposed, as the links to them were broken. (Oh, and the date column shows the dates we got the reports, not the dates of the fixed releases as is the intention.) We still fear that the names alone plus the CWE descriptions might be enough for intelligent attackers to figure out the rest.

As a direct result of me having revealed information about these three security vulnerabilities, we decided to move the release date of the pending curl 7.59.0 release one week sooner than previously planned, to reduce the time bad actors would be able to abuse this information for malicious purposes.

How exactly did it happen?

When approaching a release day, I always create local git branches called next-release in both the source and the web site git repositories. In the web site’s next-release branch I add the security advisories we’re working on and I add/update meta-data about these vulnerabilities etc. I prepare things in that branch that should go public at the release moment.

We’ve added CWE numbers to our vulnerabilities for the first time (we are now required to provide them when we ask for CVEs). Figuring out these numbers for the new issues made me think that I should also go back and add relevant CWE numbers to our old vulnerabilities as well and I started to go back to old issues and one by one dig up which numbers to use.

After having worked on that for a while (for some of the issues it is really tricky to figure out which CWE to use), I realized the time was rather late.

– I better get to bed and get some sleep so that I can get some work done tomorrow as well.

Then I realized I had been editing the old advisory documents while still being in the checked out next-release branch. Oops, that was a mistake. I thus wanted to check out the master branch again to push the update from there. git then pointed out that the vuln.pm file couldn’t get moved over because of reasons. (I forget the exact message but it happened because I had already committed changes to the file in the new branch that weren’t present in the master branch.)

So, as I wanted to get to bed and not fight my tools, I saved the current (edited) file in a different name, checked out the old file version from git again, changed branch and moved the renamed file back to vuln.pm again (without a single thought that this file now contained three lines too many that should only be present in the next-release branch), committed all the edited files and pushed them all to the remote git repository… boom.

You’d think I would…

  1. know how to use git correctly
  2. know how to push what to public repos
  3. not try to do things like this at 01:26 in the morning

curl 7.59.0 and these mentioned security vulnerabilities were made public this morning.

Here’s curl 7.59.0

We ship curl 7.59.0 exactly 49 days since the previous release (a week shorter than planned because of reasons). Download it from here. Full changelog is here.

In these 49 days, we have done and had..

6 changes(*)
78 bug fixes (total: 4,337)
149 commits (total: 22,952)
45 contributors, 20 new (total: 1,702)
29 authors (total: 552)
3 security fixes (total: 78)

This time we’ve fixed no less than three separate security vulnerabilities:

  1. FTP path trickery security issue
  2. LDAP NULL dereference
  3. RTSP RTP buffer over-read

(*) = changes are things that don’t fix existing functionality but actually add something new to curl/libcurl. New features mostly.

The new things this time probably won’t be considered earth shattering, but they’re still a bunch of useful stuff:

--proxy-pinnedpubkey

The ability to specify a pinned public key has been around for a while for regular servers, and libcurl has had the ability to pin proxies’ keys as well. This change makes sure that users of the command line tool also get that ability. Make sure your HTTPS proxy isn’t MITMed!
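
The libcurl side of the same feature looks roughly like this sketch; the proxy host and the base64 pin are placeholders:

    /* run the traffic through an HTTPS proxy */
    curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example.com:443");
    /* only accept the proxy if its public key matches this pin:
       "sha256//" followed by the base64 of a sha256 hash of the
       DER-encoded public key */
    curl_easy_setopt(curl, CURLOPT_PROXY_PINNEDPUBLICKEY,
                     "sha256//placeholderplaceholderplaceholderplac=");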

CURLOPT_TIMEVALUE_LARGE

This new option was added as part of our effort to clean up our use of ‘long’ variables internally, to make sure we don’t have year-2038 problems.
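
It is the 64 bit curl_off_t sibling of CURLOPT_TIMEVALUE, used together with time conditions; a sketch:

    /* only fetch the document if it changed after the given time stamp;
       a curl_off_t value survives the year-2038 limit that a 32 bit
       'long' would hit */
    curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                     (long)CURL_TIMECOND_IFMODSINCE);
    curl_easy_setopt(curl, CURLOPT_TIMEVALUE_LARGE,
                     (curl_off_t)2210112000); /* a time in the year 2040 */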

CURLOPT_RESOLVE

This popular libcurl option that allows applications to populate curl’s DNS cache with custom IP addresses for host names was improved: now you can add multiple addresses for a host name. This allows transfers using this feature to work even more like ones using normal name resolution.
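
A sketch with made-up addresses:

    struct curl_slist *dns = NULL;
    /* pre-populate the DNS cache; several addresses per name is new */
    dns = curl_slist_append(dns, "example.com:443:192.0.2.1,192.0.2.2");
    curl_easy_setopt(curl, CURLOPT_RESOLVE, dns);
    /* ... run the transfer(s) ... */
    curl_slist_free_all(dns);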

CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS

As a true HTTP swiss-army knife tool and library, you can toggle and tweak almost all aspects, timers and options that are used. This libcurl option has a new corresponding curl command line option, and lets the user set how long after the initial (IPv6) connect call the second (IPv4) connect is invoked in the happy eyeballs connect procedure. The default is 200 milliseconds.
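
For instance, shortening the wait for a faster IPv4 fallback could look like this sketch:

    /* start the IPv4 attempt already 100 ms after the IPv6 attempt */
    curl_easy_setopt(curl, CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, 100L);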

Bug fixes!

As usual we fixed things all over. Big and small. Some of the ones that I think stuck out a little were the fix for building with OpenSSL 0.9.7 (because you’d think that portion of users should be extinct by now) and the fix to make configure correctly detect OpenSSL 1.1.1 (there are beta releases out there).

Some application authors will appreciate that libcurl now for the most part detects if it gets called from within one of its own callbacks and returns an error about it. This is mostly to save these users from themselves, as doing this already previously risked damaging things. There are some functions that are still allowed to get called from within callbacks.

Cheers for curl 7.58.0

Here’s to another curl release!

curl 7.58.0 is the 172nd curl release and it contains, among other things, 82 bug fixes thanks to 54 contributors (22 new). All this done with 131 commits in 56 days.

The bug fix rate is slightly lower than in the last few releases, which I attribute mostly to my having been away on vacation for a month during this release cycle. I retain my position as “committer of the Month” and January 2018 is my 29th consecutive month in which I’ve done the most commits in the curl source code repository. In total, almost 58% of the commits have been done by me (if we limit the count to all commits done since 2014, I’m at 43%). We now count a total of 545 unique commit authors and 1,685 contributors.

So what’s new this time? (full changelog here)

libssh backend

Introducing the pluggable SSH backend: libssh is now the new alternative SSH backend to libssh2, which has been supported since late 2006. This change alone brought thousands of new lines of code.

Tell configure to use it with --with-libssh and you’re all set!

The libssh backend work was done by Nikos Mavrogiannopoulos, Tomas Mraz, Stanislav Zidek, Robert Kolcun and Andreas Schneider.

Security

Yet again we announce security issues that we’ve found and fixed. Two of them to be exact:

  1. We found a problem with how HTTP/2 trailers were handled, which could lead to crashes or even information leakage.
  2. We addressed a problem for users sending custom Authorization: headers to HTTP servers and who are then redirected to another host that shouldn’t receive those Authorization headers.

Progress bar refresh

A minor thing, but we refreshed the progress bar layout for when no total size is known.

Next?

March 21 is the date set for next release. Unless of course we find an urgent reason to fix and release something before then…

curl 7.57.0 happiness

The never-ending series of curl releases continued today when we released version 7.57.0. The 171st release since the beginning, and the release that follows 37 days after 7.56.1. Remember that 7.56.1 was an extra release that fixed a few particularly annoying regressions.

We bump the minor number to 57 and clear the patch number in this release due to the changes introduced. None of them is very groundbreaking, but they are fun and useful, and detailed below.

41 contributors helped fix 69 bugs in these 37 days since the previous release, using 115 separate commits. 23 of those contributors were new, making the total list of contributors now contain 1,649 individuals! 25 individuals authored commits since the previous release, making the total number of authors 540 persons.

The curl web site currently sends out 8GB data per hour to over 2 million HTTP requests per day.

Support RFC7616 – HTTP Digest

This allows HTTP Digest authentication to use the much better SHA-256 algorithm instead of the old, and deemed unsuitable, MD5. This should be a transparent improvement, so curl should just be able to use this without any particular new option having to be set, but the server-side support for this version seems to still be a bit lacking.

(Side-note: I’m credited in RFC 7616 for having contributed my thoughts!)
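
From the application’s point of view nothing new is needed; a sketch with made-up credentials:

    /* enable HTTP Digest authentication; libcurl can now answer a
       SHA-256 challenge (RFC 7616) as well as the old MD5 variant */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");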

Sharing the connection cache

In this modern age with multi core processors and applications using multi-threaded designs, we of course want libcurl to enable applications to be able to get the best performance out of libcurl.

libcurl is already thread-safe so you can run parallel transfers multi-threaded perfectly fine if you want to, but it doesn’t allow the application to share handles between threads. Before this specific change, this limitation has forced multi-threaded applications to be satisfied with letting libcurl have a separate “connection cache” in each thread.

The connection cache, sometimes also referred to as the connection pool, is where libcurl keeps live connections that were previously used for a transfer and still haven’t been closed, so that a subsequent request might be able to re-use one of them. Getting a re-used connection for a request is much faster than having to create a new one. Having one connection cache per thread is inefficient.

Starting now, libcurl’s “share concept” allows an application to specify a single connection cache to be used cross-thread and cross-handles, so that connection re-use will be much improved when libcurl is used multi-threaded. This will significantly benefit the most demanding libcurl applications, but it will also allow more flexible designs as now the connection pool can be designed to survive individual handles in a way that wasn’t previously possible.
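
A sketch of the share API used this way; my_lock_cb and my_unlock_cb are hypothetical callbacks the application provides to serialize access between its threads:

    CURLSH *share = curl_share_init();
    /* share the connection cache between all easy handles that use
       this share object, regardless of which thread runs them */
    curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_CONNECT);
    curl_share_setopt(share, CURLSHOPT_LOCKFUNC, my_lock_cb);
    curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, my_unlock_cb);

    /* in each thread, attach the share to that thread's easy handle */
    curl_easy_setopt(curl, CURLOPT_SHARE, share);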

Brotli compression

The popular browsers have supported the brotli compression method for a while and it has already become widely supported by servers.

Now, curl supports it too and the command line tool’s --compressed option will ask for brotli as well as gzip, if your build supports it. Similarly, libcurl supports it with its CURLOPT_ACCEPT_ENCODING option. The server can then opt to respond using either compression format, depending on what it knows.
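
In libcurl terms, a sketch:

    /* an empty string asks for all encodings this build supports,
       which now can include brotli */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
    /* ... or name the acceptable encodings explicitly */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "br, gzip");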

According to CertSimple, who ran tests on the top-1000 sites of the Internet, brotli gets content 14-21% smaller than gzip.

As with other compression algorithms, libcurl uses a 3rd party library for brotli compression and you may find that Linux distributions and others are a bit behind in shipping packages for a brotli decompression library. Please join in and help this happen. At the moment of this writing, the Debian package is only available in experimental.

(Readers may remember my libbrotli project, but that effort isn’t really needed anymore since the brotli project itself builds a library these days.)

Three security issues

In spite of our hard work and best efforts, security issues keep getting reported and we fix them accordingly. This release has three new ones and I’ll describe them below. None of them are alarmingly serious and they will probably not hurt anyone badly.

Two things can be said about the security issues this time:

1. You’ll note that we’ve changed naming convention for the advisory URLs, so that they now have a random component. This is to reduce potential information leaks based on the name when we pass these around before releases.

2. Two of the flaws happen only on 32 bit systems, which reveals a weakness in our testing. Most of our CI tests, torture tests and fuzzing are done on 64 bit architectures. We have no immediate and good fix for this, but it is something we must work harder on.

1. NTLM buffer overflow via integer overflow

(CVE-2017-8816) Limited to 32 bit systems, this is a flaw where curl takes the combined length of the user name and password, doubles it, and allocates a memory area that big. If that doubling ends up larger than 4GB, an integer overflow makes a very small buffer be allocated instead, and curl will then overwrite it.

Yes, having user name plus password be longer than two gigabytes is rather excessive and I hope very few applications would allow this.
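
A simplified illustration of the bug class (not the actual curl code; userlen and passlen are hypothetical length variables):

    /* on a 32 bit system size_t is 32 bits wide, so this arithmetic
       wraps around when the true result exceeds 4GB */
    size_t len = 2 * (userlen + passlen); /* may wrap to a tiny value */
    unsigned char *buf = malloc(len);     /* tiny allocation */
    /* writing the expanded user name + password then overflows buf */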

2. FTP wildcard out of bounds read

(CVE-2017-8817) curl’s wildcard functionality for FTP transfers is not a very widely used feature, but it was discovered that the default pattern matching function could erroneously read beyond the URL buffer if the match pattern ends with an open bracket ‘[’!

This problem was detected by the OSS-Fuzz project! This flaw has existed in the code since this feature was added, over seven years ago.

3. SSL out of buffer access

(CVE-2017-8818) In July this year we introduced multissl support in libcurl. This allows an application to select which TLS backend libcurl should use, if it was built to support more than one. It was a fairly large overhaul to the TLS code in curl and unfortunately it also brought this bug.

Also happening only on 32 bit systems, libcurl would allocate a buffer that was 4 bytes too small for the TLS backend’s data, which would lead to the TLS library accessing and using data outside of the heap allocated buffer.

Next?

The next release will ship no later than January 24th 2018. I think that one will also add changes and warrant a minor number bump. We have fun pending stuff such as: a new SSH backend, a modifiable happy eyeballs timeout and more. Get involved and help us do even more good!

The life of a curl security bug

The report

Usually, security problems in the curl project come to us out of the blue. Someone has found a bug they suspect may have a security impact and they tell us about it on the curl-security@haxx.se email address. Mails sent to this address reach a private mailing list with the curl security team members as the only subscribers.

An important first step is that we respond to the sender, acknowledging the report. Often we also include a few follow-up questions at once. It is important to us to keep the original reporter in the loop and included in all subsequent discussions about this issue – unless they prefer to opt out.

If we find the issue ourselves, we act pretty much the same way.

In the most obvious and well-reported cases there is no room for doubt or hesitation about what the bug is and what its impact is, but very often the reports lead to discussions.

The assessment

Is it a bug in the first place, is it perhaps even documented or just plain bad use?

If it is a bug, is this a security problem that can be abused or somehow put users in some sort of risk?

Most issues we get reported as security issues are also in the end treated as such, as we tend to err on the safe side.

The time plan

Unless the issue is critical, we prefer to schedule a fix and announcement of the issue in association with the pending next release, and as we do releases every 8 weeks like clockwork, that’s never very far away.

We communicate the suggested schedule with the reporter to make sure we agree. If a sooner release is preferred, we work out a schedule for an extra release. In the past we’ve done occasional faster security releases when the issue had already been made public, so as to shorten the time window during which users could be harmed by the problem.

We really really do not want a problem to persist longer than until the next release.

The fix

The curl security team and the reporter work on fixing the issue. Ideally in part by the reporter making sure that they can’t reproduce it anymore and we add a test case or two.

We keep the fix undisclosed for the time being. It is not committed to the public git repository but kept in a private branch. We usually put it on a private URL so that we can link to it when we ask for a CVE, see below.

All security issues should make us ask ourselves – what did we do wrong that made us not discover this sooner? And ideally we should introduce processes, tests and checks to make sure we detect other similar mistakes now and in the future.

Typically we only generate a single patch from git master and offer that as the final solution. In the curl project we don’t maintain multiple branches. Distros and vendors who ship older or even multiple curl versions backport the patch to their systems by themselves. Sometimes we get backported patches back to offer users as well, but those are exceptions to the rule.

The advisory

In parallel to working on the fix, we write up a “security advisory” about the problem. It is a detailed description about the problem, what impact it may have if triggered or abused and if we know of any exploits of it.

What conditions need to be met for the bug to trigger, what version range is affected, what remedies can be used as a work-around if the patch is not applied, etc.

We work out the advisory in cooperation with the reporter so that we get the description and the credits right.

The advisory also always contains a timeline that clearly describes when we got to know about the problem etc.

The CVE

Once we have an advisory and a patch, none of which needs to be their final versions, we can proceed and ask for a CVE. A CVE is a unique “ID” that is issued for security problems to make them easy to reference. CVE stands for Common Vulnerabilities and Exposures.

Depending on where in the release cycle we are, we might have to hold off at this point. For all bugs that aren’t proprietary-operating-system specific, we pre-notify and ask for a CVE on the distros@openwall mailing list. This mailing list prohibits an embargo longer than 14 days, so we cannot ask them for a CVE more than 2 weeks ahead of our release.

The idea here is that the embargo time gives the distributions time and opportunity to prepare updates of their packages so they can be pretty much in sync with our release and reduce the time window their users are at risk. Of course, not all operating system vendors manage to actually ship a curl update on two weeks notice, and at least one major commercial vendor regularly informs me that this is too short a time frame for them.

For flaws that don’t affect the free operating systems at all, we ask MITRE directly for CVEs.

The last 48 hours

When there is roughly 48 hours left until the coming release and security announcement, we merge the private security fix branch into master and push it. That immediately makes the fix public and those who are alert can then take advantage of this knowledge – potentially for malicious purposes. The security advisory itself is however not made public until release day.

We use these 48 hours to get the fix tested on more systems to verify that it is not causing any major breakage. The weakest part of our security procedure is that the fix has been worked out in secret, so it has not had the chance to get widely built and tested; that is performed now.

The release

We upload the new release. We send out the release announcement email, update the web site and make the advisory for the issue public. We send out the security advisory alert on the proper email lists.

Bug Bounty?

Unfortunately we don’t have any bug bounties on our own in the curl project. We simply have no money for that. We actually don’t have money at all for anything.

Hackerone offers bounties for curl related issues. If you have reported a critical issue you can request one from them after it has been fixed in curl.

The backdoor threat

— “Have you ever detected anyone trying to add a backdoor to curl?”

— “Have you ever been pressured by an organization or a person to add suspicious code to curl that you wouldn’t otherwise accept?”

— “If a crime syndicate would kidnap your family to force you to comply, what backdoor would you be able to insert into curl that is the least likely to get detected?” (The less grim version of this question would instead offer huge amounts of money.)

I’ve been asked these questions and variations of them when I’ve stood up in front of audiences around the world and talked about curl and how it is one of the most widely used software components in the world, counting way over three billion instances.

Back door (noun)
— a feature or defect of a computer system that allows surreptitious unauthorized access to data.

So how is it?

No. I’ve never seen a deliberate attempt to add a flaw, a vulnerability or a backdoor into curl. I’ve seen bad patches and I’ve seen patches that brought bugs that years later were reported as security problems, but I did not spot any deliberate attempt to do bad in any of them. But if done with skill, I certainly wouldn’t have noticed them being deliberate, would I?

If I had cooperated in adding a backdoor or been threatened to, then I wouldn’t tell you anyway and I’d thus say no to questions about it.

How to be sure

There is only one way to be sure: review the code you download and intend to use. Or get it from a trusted source that did the review for you.

If you have a version you trust, you really only have to review the changes done since then.

Possibly there’s some degree of safety in numbers, and as thousands of applications and systems use curl and libcurl and at least some of them do reviews and extensive testing, one of those could discover mischievous activities if there are any and report them publicly.

Infected machines or owned users

The servers that host the curl releases could be targeted by attackers and the tarballs for download could be replaced by something that carries evil code. There’s no such thing as a fail-safe machine, especially not if someone really wants to and tries to target us. The safeguard there is the GPG signature with which I sign all official releases. No malicious user can (re-)produce them. They have to be made by me (since I package the curl releases). That comes back to trusting me again. There’s of course no safe-guard against me being forced to sign evil code with a knife to my throat…

If one of the curl project members with git push rights would get her account hacked and her SSH key password brute-forced, a very skilled hacker could possibly sneak in something, short-term. Although my hopes are that as we review and comment each others’ code to a very high degree, that would be really hard. And the hacked person herself would most likely react.

Downloading from somewhere

I think the highest risk scenario is when users download pre-built curl or libcurl binaries from various places on the internet that aren’t the official curl web site. How can you know for sure what you’re getting then, when you can’t review the code or the changes done? You just put your trust in a remote person or organization to do what’s right for you.

Trusting other organizations can be totally fine, such as when you download using Linux distro package management systems, since then you can expect a certain level of checks and vouching to have happened, and there will be digital signatures and more involved to minimize the risk of external malicious interference.

Pledging there’s no backdoor

Some people argue that projects could or should pledge for every release that there’s no deliberate backdoor planted so that if the day comes in the future when a three-letter secret organization forces us to insert a backdoor, the lack of such a pledge for the subsequent release would function as an alarm signal to people that something is wrong.

That takes us back to trusting a single person again. A truly evil adversary can of course force such a pledge to be uttered no matter what, even if that then probably is more mafia level evilness and not mere three-letter organization shadiness anymore.

I would be a bit stressed out having to do that pledge every single release, as if I ever forgot or messed it up, it would lead to a lot of people getting up in arms, and how would such a mistake be fixed? It’s a little too irrevocable for me. And we do quite frequent releases, so the risk of mistakes is not insignificant.

Also, if I would pledge that, is that then a promise regarding all my code only, or is that meant to be a pledge for the entire code base as done by all committers? It doesn’t scale very well…

Additionally, I’m a Swede living in Sweden. The American organizations cannot legally force me to backdoor anything, and the Swedish versions of those secret organizations don’t have the legal rights to do so either (caveat: I’m not a lawyer). So, the real threat is not by legal means.

What backdoor would be likely?

It would be very hard to add code, unnoticed, that sends off data to somewhere else. Too much code that would be too obvious.

A backdoor similarly couldn’t really be made to split off data from the transfer pipe and store it locally for other systems to read, as that too would probably require too much code that is too different from the current code and would be detected instantly.

No, I’m convinced the most likely backdoor code in curl is a deliberate but hard-to-detect security vulnerability that lets the attacker exploit the program using libcurl/curl via some specific usage pattern. When triggered, it could trick the program into sending off memory contents or perhaps overwriting the local stack or the heap. Quite possibly only one step out of several necessary for a successful attack, much like how a single-byte overwrite can lead to root access.

Any past security problems on purpose?

We’ve had almost 70 security vulnerabilities reported through the project’s almost twenty years of existence. Since most of them were triggered by mistakes in code I wrote myself, I can be certain that none of those problems were introduced on purpose. I can’t completely rule out that someone else’s patch that modified curl along the way, and by extension maybe made a vulnerability worse or easier to trigger, could have been made on purpose. None of the security problems that were introduced by others have shown any sign of “deliberateness”. (Or were written cleverly enough to not make me see that!)

Maybe backdoors have been planted that we just haven’t discovered yet?

Discussion

Follow-up discussion/comments on hacker news.

keep finding old security problems

I decided to look closer at security problems and the age of the reported issues in the curl project.

One theory I had when I started to collect this data was that we actually get security problems reported earlier and earlier over time; that bugs would be around in public releases for shorter periods of time nowadays than they were in the past.

My thinking would go like this: Logically, bugs that have been around for a long time have had a long time to get caught. The more eyes we’ve had on the code, the fewer old bugs should be left and going forward we should more often catch more recently added bugs.

The time from a bug’s introduction into the code until the day we get a security report about it, should logically decrease over time.

What if it doesn’t?

First, let’s take a look at the data at hand. In the curl project we have so far reported in total 68 security problems over the project’s life time. The first 4 were not recorded correctly so I’ll discard them from my data here, leaving 64 issues to check out.

The graph below shows the time distribution. The all time leader so far is the issue reported to us on March 10 this year (2017), which was present in the code since the version 6.5 release done on March 13 2000. 6,206 days, just three days away from 17 whole years.

There are no less than twelve additional issues that lingered for more than 5,000 days until reported. Only 20 (31%) of the reported issues had been public for less than 1,000 days. The fastest report came on the release day itself: 0 days.

The median time from release to report is a whopping 2,541 days.

When we receive a report about a security problem, we want the issue fixed and responsibly announced to the world, and to ship a new release where the problem is gone. The median time to go through this procedure is 26.5 days, and the distribution looks like this:

What stands out here is the TLS session resumption bypass, which happened because we struggled with understanding it and how to address it properly. Otherwise the numbers look all reasonable to me as we typically do releases at least once every 8 weeks. We rarely ship a release with a known security issue outstanding.

Why are very old issues still found?

I think partly because the tools that help people find things are gradually improving, uncovering things that simply weren’t found very often before. With new tools we can find problems that have been around for a long time.

Every year, the oldest parts of the code get one year older. So the older the project gets, the older the bugs that can be found, while in the early days there was a smaller share of the code that was really old (if any at all).

What if we instead count age as a percentage of the project’s life time? Using this formula, a bug found at day 100 that was added at day 50 would be 50% but if it was added at day 80 it would be 20%. Maybe this would show a graph where the bars are shrinking over time?

But no. In fact it shows 17 (27%) of them having been present during 80% or more of the project’s life time! The median issue had been in there during 49% of the project’s life time!

It does however make another issue the worst offender, as one of the issues had been around during 91% of the project’s life time.

This counts on March 20 1998 being the birth day. Of course we got no reports the first few years since we basically had no users then!

Specific or generic?

Is this pattern something that is specific to the curl project or can we find it in other projects too? I don’t know. I have not seen this kind of data presented by others and I don’t have the same insight into such details of other projects with a large enough number of issues to be interesting.

What can we do to make the bars shrink?

Well, if there are old bugs left to find they won’t shrink, because for every such old security issue that’s still left there will be a tall bar. Hopefully though, by doing more tests, using more tools regularly (fuzzers, analyzers etc) and with more eyeballs on the code, we should iron out our security issues over time. Logically that should lead to a project where newly added security problems are detected sooner rather than later. We just don’t seem to be at that point yet…

Caveat

One fact that skews the numbers is that we are much more likely to record issues as security related these days. A decade ago when we got a report about a segfault or something we would often just consider it bad code and fix it, and neither us maintainers nor the reporter would think much about the potential security impact.

These days we’re at the other end of the spectrum, where people are much quicker to jump to a security suspicion or conclusion. Today people report bugs as security issues to a much higher degree than they did in the past. This is basically a good thing though, even if it makes it harder to draw conclusions over time.

Data sources

When you want to repeat the above graphs and verify my numbers:

  • vuln.pm – from the curl web site repository holds security issue meta data
  • releaselog – on the curl web site offers release meta data, even as a CSV download on the bottom of the page
  • report2release.pl – the perl script I used to calculate the report-until-release periods.