Category Archives: Security

How to DoH-only with Firefox

Firefox has supported DNS-over-HTTPS (aka DoH) since version 62.

You can instruct your Firefox to only use DoH and never fall back to the native resolver; the mode we call trr-only. Without any other ability to resolve host names, this is a little tricky, so this guide is here to help you. (This situation might improve in the future.)

In trr-only mode, nobody on your local network or at your ISP can snoop on your name resolves. The SNI part of HTTPS connections is still clear text though, so eavesdroppers on the path can still figure out which hosts you connect to.

There’s a name in my URI

A primary problem for trr-only is that we usually want to use a host name in the URI for the DoH server (we typically need it to be a name so that we can verify the server’s certificate against it), but we can’t resolve that host name until DoH is set up to work. A catch-22.

There are currently two ways around this problem:

  1. Tell Firefox the IP address of the name that you use in the URI. We call it the “bootstrapAddress”. See further below.
  2. Use a DoH server that is provided on an IP-number URI. This is rather unusual. There’s for example one at 1.1.1.1.

Setup and use trr-only

There are three prefs to focus on (they’re all explained elsewhere):

network.trr.mode – set this to the number 3.

network.trr.uri – set this to the URI of the DoH server you want to use. This should be a server you trust and want to hand over your name resolves to. The Cloudflare one we’ve previously used in DoH tests with Firefox is https://mozilla.cloudflare-dns.com/dns-query.

network.trr.bootstrapAddress – when you use a host name in the URI for the network.trr.uri pref, you must set this pref to an IP address that host name resolves to for you. It is important that you pick an IP address that the name you use actually would resolve to.

Example

Let’s pretend you want to go full trr-only and use a DoH server at https://example.com/dns (it’s a pretend URI; it doesn’t work).

Figure out the bootstrapAddress with dig. Resolve the host name from the URI:

$ dig +short example.com
93.184.216.34

or, if you prefer to be classy, use the IPv6 address (only do this if IPv6 is actually working for you):

$ dig -t AAAA +short example.com
2606:2800:220:1:248:1893:25c8:1946

dig might give you a whole list of addresses back, and then you can pick any one of them in the list. Only pick one address though.

Go to “about:config” and paste the IP address into the value field for network.trr.bootstrapAddress. Now TRR / DoH should be able to get going. When you can see web pages, you know it works!
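If you prefer, the same three prefs can also be set from a user.js file in your Firefox profile directory. A minimal sketch, using the pretend server and the address from the example above:

user_pref("network.trr.mode", 3);
user_pref("network.trr.uri", "https://example.com/dns");
user_pref("network.trr.bootstrapAddress", "93.184.216.34");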

DoH-only means only DoH

If you happen to start Firefox behind a captive portal while in trr-only mode, the connections to the DoH server will fail and no name resolves can be performed.

In those situations, Firefox’s captive portal detector would normally trigger and show you the login page etc, but when no names can be resolved, the captive portal can’t respond with a fake answer to the name lookup and redirect you to the login, so it won’t get anywhere. It gets stuck. And currently, there’s no good visual indication anywhere that this is what happens.

You simply can’t get out of a captive portal with trr-only. You probably then have to temporarily switch modes, log in to the portal and switch the mode back to 3 again.

If you “unlock” the captive portal with another browser/system, Firefox’s regular retries while in trr-only will soon detect that and things should start working again.

quic wg interim Kista

The IETF QUIC working group had its fifth interim meeting the other day, this time in Kista, Sweden hosted by Ericsson. For me as a Stockholm resident, this was ridiculously convenient. Not entirely coincidentally, this was also the first quic interim I attended in person.

We were thirty-something people gathered in a room without windows, with another dozen or so participants joining remotely. This being a meeting in a series, most people already knew each other from before, so the atmosphere was relaxed and friendly. Many of the participants have also been involved in other protocol development and standards work before. Many familiar faces.

Schedule

As QUIC is supposed to be done “soon”, the emphasis is now largely on closing issues, postponing some stuff to “QUICv2” and making sure to get decisions on outstanding question marks.

Kazuho did a quick run-through with some info from the interop days prior to the meeting.

After MT’s initial explanation of where we’re at for the upcoming draft-13, Ian took us on a deep dive into the Stream 0 Design Team report. This is a pretty radical change to the wire format of the quic protocol and to how TLS is handled.

(Two slides here compared the existing draft-12 approach with what it is suggested to instead become.)

What’s perhaps the most interesting takeaway here is that the new format doesn’t use TLS records anymore – but it simplifies a lot of other things. Not using TLS records but still doing TLS means that a QUIC implementation needs to get data from the TLS layer using APIs that existing TLS libraries don’t typically provide. PicoTLS, Minq, BoringSSL and NSS already have or will soon provide the necessary APIs. Slightly behind, OpenSSL should offer it in a nightly build soon, but the impression is that an actual OpenSSL release with it is still a bit away.

EKR continued the theme. He talked about the quic handshake flow and among other things explained how 0-RTT and early data work. Taken out of that context, I consider one of his slides fairly funny because it makes the flow look far from simple to me. But it shows communication in the different layers, how the acks go, etc.

HTTP

Mike then presented the state of HTTP over quic. The frames are no longer that similar to the HTTP/2 versions. Work is being done to ensure that the HTTP layer doesn’t need to refer to or “grab” stream IDs from the transport layer.

There was a rather lengthy discussion around how to handle “placeholder streams” like the ones Firefox uses over HTTP/2 to create “anchors” on which to make dependencies, but that are never actually used over the wire. The nature of the quic transport makes those impractical, and we talked about what alternatives there are that could still offer similar functionality.

The subject of priorities and dependencies and if the relative complexity of the h2 model should be replaced by something simpler came up (again) but was ultimately pushed aside.

QPACK

Alan presented the state of QPACK, the HTTP header compression algorithm for hq (HTTP over QUIC). It is not wire compatible with HPACK anymore and there have been some recent improvements and clarifications done.

Alan also did a great step-by-step walk-through how QPACK works with adding headers to the dynamic table and how it works with its indices etc. It was very clarifying I thought.

The discussion about the static table for the compression basically ended with us agreeing that we should just agree on a fairly small fixed table without a way to negotiate the table. Mark said he’d try to get some updated header data from some server deployments to get another data set than just the one from WPT (which is from a single browser).

Interop testing of QPACK implementations can be done by encoding + shuffling + decoding a HAR file and comparing the results with the source data. Just do it – and talk to Alan!

And the first day was over. A fully packed day.

ECN

Magnus started off with some heavy stuff, talking about Explicit Congestion Notification in QUIC, how it is intended to work and some remaining issues.

He also got into the subject of ACK frequency and how the current model isn’t ideal in every situation, as illustrated with an image from his slide set.

Interestingly, it turned out that several of the implementers already basically had implemented Magnus’ proposal of changing the max delay to min(RTT/4, 25 ms) independently of each other!

mvfst deployment

Subodh took us on a journey with some great insights from Facebook’s internal deployment of mvfst, their QUIC implementation. Getting some real-life feedback is useful, and with over 100 billion requests/day, it seems they gave this a good run.

Since their usage and stack for this is a bit use-case specific, I’m not sure how relevant or universal their performance numbers are. They showed roughly the same CPU and memory use, with a 70% RPS rate compared to h2 over TLS 1.2.

He also entertained us with some “fun issues” from bugs and debugging sessions they’ve done and learned from. Awesome.

The story highlights the need for more tooling around QUIC to help developers and deployers.

Load balancers

Martin talked about load balancers and servers, and how they could or should communicate to work correctly with routing and connection IDs.

The room didn’t seem overly thrilled about this work and mostly offered other ways to achieve the same results.

Implicit Open

During the last session of the day and the entire meeting, mt went through a few things that still needed discussion or closure: stateless reset and the rather big bikeshed issue, implicit open. The latter being the question of whether opening a stream with ID N + 1 implicitly also opens the stream with ID N. I believe we ended with a slight preference for the implicit approach and this will be taken to the list for a consensus call.

Frame type extensibility

How should the QUIC protocol allow extensibility? The oldest still-open issue in the project can be solved or satisfied in numerous different ways, and the discussion went back and forth for a while, debating the merits and downsides of various approaches, until the group more or less agreed on a fairly simple and straightforward approach where extensions announce support for a feature, which then may or may not involve one or more new frame types (to be kept in a registry).

We proceeded to discuss other issues all until “closing time”, which was set to 16:00 today. This was just two days of pushing forward, but it still felt quite intense and my personal impression is that a lot of good progress was made here that took the protocol a good step forward.

The facilities were lovely and Ericsson was a great host for us. The Thursday afternoon cakes were great! Thank you!

Coming up

There’s an IETF meeting in Montreal in July and there’s a planned next QUIC interim probably in New York in September.

Play TLS 1.3 with curl

The IESG recently approved the TLS 1.3 draft-28 for proposed standard and we can expect the real RFC for this protocol version to appear soon (within a few months probably).

TLS 1.3 has been in development for quite some time now, and a lot of TLS libraries already support it to some extent, at varying draft levels.

curl and libcurl have supported an explicit option to select TLS 1.3 since curl 7.52.0 (December 2016), and assuming you build curl to use a TLS library with support, you’ve been able to use TLS 1.3 with curl since at least then. The support has gradually been expanded to cover more and more libraries since.

Today, curl and libcurl support speaking TLS 1.3 if you build it to use one of these fine TLS libraries of a recent enough version:

  • OpenSSL
  • BoringSSL
  • libressl
  • NSS
  • WolfSSL
  • Secure Transport (on iOS 11 or later, and macOS 10.13 or later)

GnuTLS seems to be well on their way too. TLS 1.3 support exists in the GnuTLS master branch on gitlab.

curl’s TLS 1.3 support makes it possible to select TLS 1.3 as the preferred minimum version.
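For example, with the command line tool (the URL is of course just a placeholder):

$ curl --tlsv1.3 https://example.com/

or from an application, with libcurl’s familiar setopt style:

curl_easy_setopt(curl, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_3);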

GAAAAAH

That’s the thought that ran through my head when I read the email I had just received.

GAAAAAAAAAAAAH

You know the feeling when the realization hits you that you did something really stupid? And you did it hours ago and people already noticed, so it’s too late to pretend it didn’t happen or to try to cover it up and whistle innocently. Nope, none of those options were available anymore. The truth was out there.

I had messed up royally.

What triggered this sudden journey of emotions and sharp sense of pain in my soul was an email I received at 10:18 on Friday, March 9, 2018. The encrypted email pointed out to me in clear terms that there was information publicly available on the curl web site about the security vulnerabilities we intended to announce in association with the next curl release, on March 21. (The person who emailed me is a member of a group I had informed about these issues ahead of time.)

In the curl project, we never reveal nor show any information about known security flaws until we ship fixes for them and publish their corresponding security advisories that explain the flaws, the risks, the fixes and work-arounds in detail. This of course in the name of keeping users safe. We don’t want bad guys to learn about problems and flaws until we also offer fixes for them. That is, unless you screw up like me.

It took me a few minutes until I paused the work I was doing at the moment and actually read the email, but once I did I acted immediately, and at 10:24 I had reverted the change on the web site and purged the URL from the CDN so the information was no longer publicly visible there.

The entire curl web site is however kept in a public git repository, so while the sensitive information was no longer immediately visible on the site, it was still out of the bag and there was just no taking it back. Not to mention that we don’t know how many people had already updated their git clones etc.

I had pushed the particular file containing the “extra information” to the web site’s git repository at 01:26 CET that same early morning, and since the web site updates itself in a cronjob every 20 minutes, we know the information became available just after 01:40. By which time I had already gone to bed.

The sensitive information was displayed on the site for 8 hours and 44 minutes. The security page table showed these lines at the top:

#  | Vulnerability                                           | Date              | First  | Last   | CVE               | CWE
78 | RTSP RTP buffer over-read                               | February 20, 2018 | 7.20.0 | 7.58.0 | CVE-2018-1000122  | CWE-126: Buffer Over-read
77 | LDAP NULL pointer dereference                           | March 06, 2018    | 7.21.0 | 7.58.0 | CVE-2018-1000121  | CWE-476: NULL Pointer Dereference
76 | FTP path trickery leads to NIL byte out of bounds write | March 21, 2018    | 7.12.3 | 7.58.0 | CVE-2018-1000120  | CWE-122: Heap-based Buffer Overflow

I only revealed the names of the flaws and their corresponding CWE (Common Weakness Enumeration) numbers; the full advisories were thankfully not exposed and the links to them were broken. (Oh, and the date column shows the dates we got the reports, not the dates of the fixed releases as intended.) We still fear that the names alone plus the CWE descriptions might be enough for intelligent attackers to figure out the rest.

As a direct result of me having revealed information about these three security vulnerabilities, we decided to move the release date of the pending curl 7.59.0 one week sooner than previously planned, to reduce the time bad actors would be able to abuse this information for malicious purposes.

How exactly did it happen?

When approaching a release day, I always create local git branches called next-release in both the source and the web site git repositories. In the web site’s next-release branch I add the security advisories we’re working on and add/update meta-data about these vulnerabilities etc. I prepare things in that branch that should go public at the release moment.

We’ve added CWE numbers to our vulnerabilities for the first time (we are now required to provide them when we ask for CVEs). Figuring out these numbers for the new issues made me think that I should also go back and add relevant CWE numbers to our old vulnerabilities, so I started going back through the old issues to dig up, one by one, which numbers to use.

After having worked on that for a while (for some of the issues it is really tricky to figure out which CWE to use), I realized the time was rather late.

– I better get to bed and get some sleep so that I can get some work done tomorrow as well.

Then I realized I had been editing the old advisory documents while still in the checked-out next-release branch. Oops, that was a mistake. I thus wanted to check out the master branch again and push the update from there. git then pointed out that the vuln.pm file couldn’t get moved over because of reasons. (I forget the exact message, but it happened because I had already committed changes to the file in the new branch that weren’t present in the master branch.)

So, as I wanted to get to bed and not fight my tools, I saved the current (edited) file under a different name, checked out the old file version from git again, changed branch and moved the renamed file back to vuln.pm again (without a single thought that this file now contained three lines too many that should only be present in the next-release branch), committed all the edited files and pushed them all to the remote git repository… boom.
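Roughly reconstructed as shell commands (my best approximation of the sequence, not an exact log, and the temporary file name is made up):

$ git checkout next-release
  (edit the old advisories and vuln.pm, adding CWE numbers)
$ git checkout master        # refused: local changes to vuln.pm in the way
$ mv vuln.pm vuln.pm.saved   # park the edited file under another name
$ git checkout vuln.pm       # restore the committed version
$ git checkout master
$ mv vuln.pm.saved vuln.pm   # still contains the three extra lines!
$ git commit -a
$ git push                   # boom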

You’d think I would…

  1. know how to use git correctly
  2. know how to push what to public repos
  3. not try to do things like this at 01:26 in the morning

curl 7.59.0 and these mentioned security vulnerabilities were made public this morning.

Cheers for curl 7.58.0

Here’s to another curl release!

curl 7.58.0 is the 172nd curl release and it contains, among other things, 82 bug fixes thanks to 54 contributors (22 new). All this done with 131 commits in 56 days.

The bug fix rate is slightly lower than in the last few releases, which I attribute mostly to me having been away on vacation for a month during this release cycle. I retain my position as “committer of the Month” and January 2018 is my 29th consecutive month where I’ve done most commits in the curl source code repository. In total, almost 58% of the commits have been done by me (if we limit the count to all commits done since 2014, I’m at 43%). We now count a total of 545 unique commit authors and 1,685 contributors.

So what’s new this time? (full changelog here)

libssh backend

Introducing the pluggable SSH backend: libssh is now the new alternative SSH backend to libssh2, which has been supported since late 2006. This change alone brought thousands of new lines of code.

Tell configure to use it with --with-libssh and you’re all set!

The libssh backend work was done by Nikos Mavrogiannopoulos, Tomas Mraz, Stanislav Zidek, Robert Kolcun and Andreas Schneider.

Security

Yet again we announce security issues that we’ve found and fixed. Two of them to be exact:

  1. We found a problem with how HTTP/2 trailers were handled, which could lead to crashes or even information leakage.
  2. We addressed a problem for users who send custom Authorization: headers to HTTP servers and are then redirected to another host that shouldn’t receive those Authorization headers.

Progress bar refresh

A minor thing, but we refreshed the progress bar layout for when no total size is known.

Next?

March 21 is the date set for next release. Unless of course we find an urgent reason to fix and release something before then…

Inspect curl’s TLS traffic

For a long time now, the venerable network analyzer tool Wireshark has provided a way to decrypt and inspect TLS traffic as it is sent and received by Firefox and Chrome.

You do this by making the browser tell Wireshark the SSL secrets:

  1. Set the environment variable named SSLKEYLOGFILE to a file name of your choice before you start the browser.
  2. Set the same file name path in the Master-Secret field in Wireshark: go to Preferences->Protocols->SSL and edit the path there.

Having done this simple operation, you can now inspect your browser’s HTTPS traffic in Wireshark. Just super handy and awesome.
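For example, in a shell on Linux or macOS (the file name is just an example):

$ export SSLKEYLOGFILE=$HOME/tls-secrets.txt
$ firefox &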

Just remember that if you record TLS traffic and want to save it for analyzing later, you need to also save the file with the secrets so that you can decrypt that traffic capture at a later time as well.

curl

Adding curl to the mix: curl can be built with a dozen different TLS libraries and not just a single one as the browsers do. That complicates matters a bit.

The NSS library, for example, which is the TLS library curl is typically built with on Red Hat and CentOS, handles the SSLKEYLOGFILE magic all by itself, so by extension you have been able to do this trick with curl for a long time – as long as you use a curl built with NSS. A pretty good argument for using that build really.

Since curl version 7.57.0, the SSLKEYLOGFILE feature can also be enabled when built with GnuTLS, BoringSSL or OpenSSL. In the latter two libs, the feature is powered by new APIs in those libraries, while in GnuTLS it uses the library’s own logic, similar to how NSS does it. Since OpenSSL is by far the most popular TLS backend for curl, this feature is now brought to users much more widely.

In curl 7.58.0 (due to ship on January 24, 2018), this feature is built by default also for curl with OpenSSL; in 7.57.0 you need to define ENABLE_SSLKEYLOGFILE to enable it for OpenSSL and BoringSSL.

And what’s even cooler? This feature is at the same time also brought to every single application out there that is built against this or later versions of libcurl. In one single blow, a whole world suddenly opens up to make it easier for you to debug, diagnose and analyze your applications’ TLS traffic when powered by libcurl!

Like the description above for browsers, you:

  1. set the environment variable SSLKEYLOGFILE to a file name to store the secrets in
  2. tell Wireshark to use that same file to find the TLS secrets (Preferences->Protocols->SSL), as the screenshot showed above
  3. run the libcurl-using application (such as curl) and Wireshark will be able to inspect TLS-based protocols just fine!
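In shell terms, something like this (the file name and URL are just placeholders):

$ export SSLKEYLOGFILE=$HOME/tls-secrets.txt
$ curl https://example.com/

…and the capture of that transfer becomes decryptable in Wireshark.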

trace options

Of course, as a lightweight alternative: you may opt to use the --trace or --trace-ascii options with the curl tool and be fully satisfied with that. Using those command line options, curl will log everything sent and received in the protocol layer without the TLS applied. With HTTPS you’ll see all the HTTP traffic for example.
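For example (placeholder file name and URL):

$ curl --trace-ascii dump.txt https://example.com/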

Credits

Most of the curl work to enable this feature was done by Peter Wu and Ray Satiro.

curl 7.57.0 happiness

The never-ending series of curl releases continued today when we released version 7.57.0. The 171st release since the beginning, and the release that follows 37 days after 7.56.1. Remember that 7.56.1 was an extra release that fixed a few of the most annoying regressions.

We bump the minor number to 57 and clear the patch number in this release due to the changes introduced. None of them very groundbreaking, but fun and useful, and detailed below.

41 contributors helped fix 69 bugs in these 37 days since the previous release, using 115 separate commits. 23 of those contributors were new, making the total list of contributors now contain 1649 individuals! 25 individuals authored commits since the previous release, making the total number of authors 540 persons.

The curl web site currently sends out 8GB data per hour to over 2 million HTTP requests per day.

Support RFC7616 – HTTP Digest

This allows HTTP Digest authentication to use the much better SHA256 algorithm instead of the old, and deemed unsuitable, MD5. This should be a transparent improvement, so curl should just be able to use this without any particular new option having to be set, but the server-side support for this version still seems to be a bit lacking.
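From the command line it is the usual Digest invocation (credentials and URL are placeholders):

$ curl --digest -u alice:secret https://example.com/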

(Side-note: I’m credited in RFC 7616 for having contributed my thoughts!)

Sharing the connection cache

In this modern age of multi-core processors and applications with multi-threaded designs, we of course want libcurl to enable applications to get the best performance out of it.

libcurl is already thread-safe so you can run parallel transfers multi-threaded perfectly fine if you want to, but it doesn’t allow the application to share handles between threads. Before this specific change, that limitation forced multi-threaded applications to be satisfied with letting libcurl have a separate “connection cache” in each thread.

The connection cache, sometimes also referred to as the connection pool, is where libcurl keeps live connections that were previously used for a transfer and still haven’t been closed, so that a subsequent request might be able to re-use one of them. Getting a re-used connection for a request is much faster than having to create a new one. Having one connection cache per thread is inefficient.

Starting now, libcurl’s “share concept” allows an application to specify a single connection cache to be used cross-thread and cross-handles, so that connection re-use will be much improved when libcurl is used multi-threaded. This will significantly benefit the most demanding libcurl applications, but it will also allow more flexible designs as now the connection pool can be designed to survive individual handles in a way that wasn’t previously possible.
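A minimal sketch (error checking omitted, URL a placeholder) of how an application opts in to the shared connection cache with the share interface:

#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);

  /* create a share object and ask it to share the connection cache */
  CURLSH *share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_CONNECT);
  /* a threaded application would also set the CURLSHOPT_LOCKFUNC and
     CURLSHOPT_UNLOCKFUNC callbacks to serialize access to shared data */

  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(easy, CURLOPT_SHARE, share);
  curl_easy_perform(easy); /* the connection lands in the shared pool */
  curl_easy_cleanup(easy);

  curl_share_cleanup(share);
  curl_global_cleanup();
  return 0;
}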

Brotli compression

The popular browsers have supported the brotli compression method for a while and it has already become widely supported by servers.

Now curl supports it too, and the command line tool’s --compressed option will ask for brotli as well as gzip, if your build supports it. Similarly, libcurl supports it with its CURLOPT_ACCEPT_ENCODING option. The server can then opt to respond using either compression format, depending on what it knows.
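For example, asking for compressed content from the command line (placeholder URL):

$ curl --compressed https://example.com/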

According to CertSimple, who ran tests on the top-1000 sites of the Internet, brotli gets contents 14-21% smaller than gzip.

As with other compression algorithms, libcurl uses a 3rd party library for brotli compression and you may find that Linux distributions and others are a bit behind in shipping packages for a brotli decompression library. Please join in and help this happen. At the moment of this writing, the Debian package is only available in experimental.

(Readers may remember my libbrotli project, but that effort isn’t really needed anymore since the brotli project itself builds a library these days.)

Three security issues

In spite of our hard work and best efforts, security issues keep getting reported and we fix them accordingly. This release has three new ones and I’ll describe them below. None of them are alarmingly serious and they will probably not hurt anyone badly.

Two things can be said about the security issues this time:

1. You’ll note that we’ve changed naming convention for the advisory URLs, so that they now have a random component. This is to reduce potential information leaks based on the name when we pass these around before releases.

2. Two of the flaws happen only on 32-bit systems, which reveals a weakness in our testing. Most of our CI tests, torture tests and fuzzing are done on 64-bit architectures. We have no immediate and good fix for this, but it is something we must work harder on.

1. NTLM buffer overflow via integer overflow

(CVE-2017-8816) Limited to 32-bit systems, this is a flaw where curl takes the combined length of the user name and password, doubles it, and allocates a memory area that big. If that doubling ends up larger than 4GB, an integer overflow makes a very small buffer be allocated instead, which curl will then overwrite.

Yes, having user name plus password be longer than two gigabytes is rather excessive and I hope very few applications would allow this.
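In spirit, the bug pattern looks something like this simplified sketch; this is an illustration of the flaw class, not curl’s actual code:

#include <stdlib.h>
#include <string.h>

/* With a 32-bit size_t, the multiplication below can wrap around,
   making the allocation far smaller than the data written later. */
static char *ntlm_alloc(const char *user, const char *passwd)
{
  size_t len = strlen(user) + strlen(passwd); /* attacker-influenced */
  size_t alloc = len * 2;  /* wraps when the result exceeds 4GB */
  return malloc(alloc);    /* tiny buffer; later writes overflow it */
}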

2. FTP wildcard out of bounds read

(CVE-2017-8817) curl’s wildcard functionality for FTP transfers is not a very widely used feature, but it was discovered that the default pattern matching function could erroneously read beyond the URL buffer if the match pattern ends with an open bracket ‘[’!

This problem was detected by the OSS-Fuzz project! This flaw has existed in the code since this feature was added, over seven years ago.

3. SSL out of buffer access

(CVE-2017-8818) In July this year we introduced multissl support in libcurl. This allows an application to select which TLS backend libcurl should use, if it was built to support more than one. It was a fairly large overhaul to the TLS code in curl and unfortunately it also brought this bug.

Also happening only on 32-bit systems, libcurl would allocate a buffer that was 4 bytes too small for the TLS backend’s data, which would lead to the TLS library accessing and using data outside of the heap-allocated buffer.

Next?

The next release will ship no later than January 24th 2018. I think that one will add changes as well and warrant a bump of the minor number. We have fun pending stuff such as a new SSH backend, a modifiable happy eyeballs timeout and more. Get involved and help us do even more good!

HTTPS-only curl mirrors

We’ve had volunteers donating bandwidth to the curl project basically since its inception. They mirror our download archives so that you can download them directly from their server farms instead of hitting the main curl site.

On the main site we check the mirrors daily and offer convenient download links on the download page. It has historically been especially useful for the rare occasions when our site has been down for administrative purposes or other reasons.

Since May 2017 the curl site is fronted by Fastly, which has reduced both the bandwidth issue and the downtime problem. The mirrors are still there though.

Starting now, we will only link to download mirrors that offer the curl downloads over HTTPS in our continued efforts to help our users to stay secure and avoid malicious manipulation of data. I’ve contacted the mirror admins and asked if they can offer HTTPS instead.

The curl download page still contains links to HTTP-only packages and pages, and we would really like to fix them as well. But at the same time we’ve reasoned that it is better to still help users to find packages than not, so for the packages where there are no HTTPS linkable alternatives we still link to HTTP-only pages. For now.

If you host curl packages anywhere, for anyone, please consider hosting them over HTTPS for all the users’ sake.

The life of a curl security bug

The report

Usually, security problems in the curl project come to us out of the blue. Someone has found a bug they suspect may have a security impact and they tell us about it on the curl-security@haxx.se email address. Mails sent to this address reach a private mailing list with the curl security team members as the only subscribers.

An important first step is that we respond to the sender, acknowledging the report. Often we also include a few follow-up questions at once. It is important to us to keep the original reporter in the loop and included in all subsequent discussions about this issue – unless they prefer to opt out.

If we find the issue ourselves, we act pretty much the same way.

In the most obvious and well-reported cases there is no room for doubt or hesitation about what the bug and its impact are, but very often the reports lead to discussions.

The assessment

Is it a bug in the first place? Is it perhaps even documented behavior, or just plain bad use?

If it is a bug, is this a security problem that can be abused or somehow put users in some sort of risk?

Most issues we get reported as security issues are also in the end treated as such, as we tend to err on the safe side.

The time plan

Unless the issue is critical, we prefer to schedule a fix and announcement of the issue in association with the pending next release, and as we do releases every 8 weeks like clockwork, that’s never very far away.

We communicate the suggested schedule with the reporter to make sure we agree. If a sooner release is preferred, we work out a schedule for an extra release. In the past we’ve done occasional faster security releases, also when the issue had already been made public, because we wanted to shorten the time window during which users could be harmed by the problem.

We really really do not want a problem to persist longer than until the next release.

The fix

The curl security team and the reporter work on fixing the issue, ideally in part with the reporter verifying that they can’t reproduce it anymore, and we add a test case or two.

We keep the fix undisclosed for the time being. It is not committed to the public git repository but kept in a private branch. We usually put it on a private URL so that we can link to it when we ask for a CVE, see below.

All security issues should make us ask ourselves – what did we do wrong that made us not discover this sooner? And ideally we should introduce processes, tests and checks to make sure we detect other similar mistakes now and in the future.

Typically we only generate a single patch from git master and offer that as the final solution. In the curl project we don’t maintain multiple branches. Distros and vendors who ship older or even multiple curl versions backport the patch to their systems by themselves. Sometimes we get backported patches back to offer users as well, but those are exceptions to the rule.

The advisory

In parallel with working on the fix, we write up a “security advisory” about the problem. It is a detailed description of the problem, the impact it may have if triggered or abused, and whether we know of any exploits of it.

It covers what conditions need to be met for the bug to trigger, what version range is affected, what remedies can be applied as work-arounds if the patch is not applied, etc.

We work out the advisory in cooperation with the reporter so that we get the description and the credits right.

The advisory also always contains a timeline that clearly describes when we got to know about the problem etc.

The CVE

Once we have an advisory and a patch, none of which needs to be their final versions, we can proceed and ask for a CVE. A CVE is a unique “ID” that is issued for security problems to make them easy to reference. CVE stands for Common Vulnerabilities and Exposures.

Depending on where in the release cycle we are, we might have to hold off at this point. For all bugs that aren’t proprietary-operating-system specific, we pre-notify and ask for a CVE on the distros@openwall mailing list. This mailing list prohibits an embargo longer than 14 days, so we cannot ask for a CVE from them longer than 2 weeks in advance before our release.

The idea here is that the embargo time gives the distributions time and opportunity to prepare updates of their packages so they can be pretty much in sync with our release and reduce the time window their users are at risk. Of course, not all operating system vendors manage to actually ship a curl update on two weeks’ notice, and at least one major commercial vendor regularly informs me that this is too short a time frame for them.

For flaws that don’t affect the free operating systems at all, we ask MITRE directly for CVEs.

The last 48 hours

When there is roughly 48 hours left until the coming release and security announcement, we merge the private security fix branch into master and push it. That immediately makes the fix public and those who are alert can then take advantage of this knowledge – potentially for malicious purposes. The security advisory itself is however not made public until release day.

We use these 48 hours to get the fix tested on more systems to verify that it is not doing any major breakage. The weakest part of our security procedure is that the fix has been worked out in secret so it has not had the chance to get widely built and tested, so that is performed now.

The release

We upload the new release. We send out the release announcement email, update the web site and make the advisory for the issue public. We send out the security advisory alert on the proper email lists.

Bug Bounty?

Unfortunately we don’t have any bug bounties on our own in the curl project. We simply have no money for that. We actually don’t have money at all for anything.

HackerOne offers bounties for curl related issues. If you have reported a critical issue, you can request one from them after it has been fixed in curl.

 

Say hi to curl 7.56.0

Another curl version has been released into the world. curl 7.56.0 is available for download from the usual place. Here are some news I think are worthy to mention this time…

An FTP security issue

A mistake in the code that parses responses to the PWD command could make curl read beyond the end of a buffer. Max Dymond figured it out, and we’ve released a security advisory about it. This is our 69th security vulnerability counted from the beginning and the 8th reported in 2017.

Multiple SSL backends

Since basically forever you’ve been able to build curl with a selected SSL backend to give it a different feature set, slightly different behavior, a different license or a different footprint. curl supports eleven different TLS libraries!

Starting now, libcurl can be built to support more than one SSL backend! You specify all the SSL backends at build-time and then you can tell libcurl at run-time exactly which of the backends it should use.

The selection can only happen once per invocation so there’s no switching back and forth among them, but still. It also of course requires that you actually build curl with more than one TLS library, which you do by telling configure all the libs to use.

The first user of this feature that I’m aware of is Git for Windows, which can select between the Schannel and OpenSSL backends.

curl_global_sslset() is the new libcurl call to do this with.
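A minimal sketch of how an application would use it, assuming a libcurl built with the OpenSSL backend among others:

#include <curl/curl.h>

int main(void)
{
  /* select the TLS backend; must be done before curl_global_init() */
  if(curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL) != CURLSSLSET_OK)
    return 1; /* this build doesn't offer the OpenSSL backend */

  curl_global_init(CURL_GLOBAL_DEFAULT);
  /* ... regular libcurl transfers from here on ... */
  curl_global_cleanup();
  return 0;
}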

This feature was brought by Johannes Schindelin.

New MIME API

The currently provided API for creating multipart formposts, curl_formadd, has always been considered a bit quirky and complicated to work with. Its extensive use of varargs is to blame for a significant part of that.

Now we finally introduce a replacement API that accomplishes basically the same thing, plus a few additional features, and that is supposed to be easier to use and easier to wrap for bindings etc.

Introducing the mime API: curl_mime_init, curl_mime_addpart, curl_mime_name and more. See the postit2.c and multi-post.c examples for some easy to grasp examples.
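A minimal sketch of a formpost with the new API, loosely in the style of those examples (error checking omitted; URL and field contents are placeholders):

#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *easy = curl_easy_init();

  curl_mime *mime = curl_mime_init(easy);
  curl_mimepart *part = curl_mime_addpart(mime);
  curl_mime_name(part, "greeting");                    /* field name */
  curl_mime_data(part, "hello", CURL_ZERO_TERMINATED); /* field data */

  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/upload");
  curl_easy_setopt(easy, CURLOPT_MIMEPOST, mime);
  curl_easy_perform(easy);

  curl_mime_free(mime);
  curl_easy_cleanup(easy);
  curl_global_cleanup();
  return 0;
}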

This work was done by Patrick Monnerat.

SSH compression

The SSH protocol allows clients and servers to negotiate the use of compression when communicating, and now curl can too. curl has the new --compressed-ssh option and libcurl has a new setopt called CURLOPT_SSH_COMPRESSION, using the familiar style.
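For example (the host name is just a placeholder):

$ curl --compressed-ssh -O sftp://example.com/file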

Feature worked on by Viktor Szakats.

SSLKEYLOGFILE

Peter Wu and Jay Satiro have worked on this feature that allows curl to store SSL session secrets in a file if this environment variable is set. This is normally the way you tell Chrome and Firefox to do it, and it is extremely helpful when you want to wireshark and analyze a TLS stream.

This is still disabled by default due to its early days. Enable it by defining ENABLE_SSLKEYLOGFILE when building libcurl and setting the environment variable SSLKEYLOGFILE to a path name that will receive the keys.
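One way to do that build (a sketch; the exact invocation depends on your setup):

$ ./configure CPPFLAGS=-DENABLE_SSLKEYLOGFILE
$ make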

Numbers

This, the 169th curl release, contains 89 bug fixes done during the 51 days since the previous release.

47 contributors helped making this release, out of whom 18 are new.

254 commits were done since the previous release, by 26 authors.

The top-5 commit authors this release are:

  1. Daniel Stenberg (116)
  2. Johannes Schindelin (37)
  3. Patrick Monnerat (28)
  4. Jay Satiro (12)
  5. Dan Fandrich (10)

Thanks a lot everyone!
