Tag Archives: nss

Play TLS 1.3 with curl

The IESG recently approved the TLS 1.3 draft-28 as a proposed standard and we can expect the actual RFC for this protocol version to appear soon (probably within a few months).

TLS 1.3 has been in development for quite some time now, and a lot of TLS libraries already support it to some extent, at varying draft levels.

curl and libcurl have supported an explicit option to select TLS 1.3 since curl 7.52.0 (December 2016), and assuming you build curl to use a TLS library with support, you’ve been able to use TLS 1.3 with curl since then. The support has gradually been expanded to cover more and more libraries.

Today, curl and libcurl support speaking TLS 1.3 if you build it to use one of these fine TLS libraries of a recent enough version:

  • OpenSSL
  • BoringSSL
  • libressl
  • NSS
  • WolfSSL
  • Secure Transport (on iOS 11 or later, and macOS 10.13 or later)

GnuTLS seems to be well on its way too. TLS 1.3 support exists in the GnuTLS master branch on gitlab.

curl’s TLS 1.3 support makes it possible to select TLS 1.3 as the preferred minimum version.
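Below is a minimal sketch of what that looks like through the libcurl API (the command line tool exposes the same thing as the --tlsv1.3 option), assuming libcurl 7.52.0 or later built with a TLS backend that supports 1.3; error handling is trimmed for brevity and the URL is just an example:

    #include <curl/curl.h>

    int main(void)
    {
      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* require TLS 1.3 or later; needs a TLS backend that supports it */
        curl_easy_setopt(curl, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_3);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }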

curl and TLS 1.3

Draft 18 of the TLS version 1.3 spec was published at the end of October 2016.

Both Firefox and Chrome already have test versions out with TLS 1.3 enabled. Firefox 52 will have it by default, and while Chrome will ship it, I couldn’t figure out exactly when we can expect it to be enabled by default.

Over the last few days we’ve merged TLS 1.3 support to curl, primarily in this commit by Kamil Dudka. Both the command line tool and libcurl will negotiate TLS 1.3 in the next version (7.52.0 – planned release date at the end of December 2016) if built with a TLS library that supports it and told to do it by the user.

The two TLS libraries that will speak TLS 1.3 when built with curl right now are NSS and BoringSSL. The plan is to gradually adjust curl over time as the other libraries start to support 1.3 as well. As always, we will appreciate your help in making this happen!

Right now there’s also some minor flux in that servers and clients may end up running implementations of different draft versions of the TLS spec, which contributes a layer of extra fun!

Three current TLS 1.3 test servers to play with: https://enabled.tls13.com/ , https://www.allizom.org/ and https://tls13.crypto.mozilla.org/. I doubt any of these will give you any guarantees of functionality.

TLS 1.3 offers a few new features that allow clients such as curl to do subsequent TLS connections much faster, with only 1 or even 0 RTTs, but curl has no code for any of those features yet.

http2 in curl

While the first traces of http2 support in curl were added already back in September 2013, it wasn’t until recently that it actually became useful. There’s been a lot of http2 related activity in the curl team recently, and in late January 2014 we could run our first command line interop tests against public http2 (draft-09) servers on the Internet.

There’s a lot to be said about http2 for those not into its nitty gritty details, but I’ll focus on the curl side of this universe in this blog post. I’ll do separate posts and presentations on http2 “internals” later.

A quick http2 overview

http2 (without a minor version, as per what the IETF working group has decided) is a binary protocol that allows many logical streams multiplexed over the same physical TCP connection; it features compressed headers in both directions, stream priorities and more. It is designed to maintain the user concepts and paradigms from HTTP 1.1, so web sites don’t have to change contents and web authors won’t need to relearn a lot. The web will not break because of http2, it will just magically work a little better, a little smoother and a little faster.

In libcurl we build http2 support with the help of the excellent library called nghttp2, which takes care of all the binary protocol details for us. You’ll also have to build it with a new enough version of the SSL library of your choice, as http2 over TLS requires some fairly recent TLS extensions that not many older releases have and that several TLS libraries still completely lack!

The extension is needed because when speaking TLS over port 443, which HTTPS implies, the current and former web infrastructure assumes that we will speak HTTP 1.1 over that connection, while we now want to be able to say that we want to talk http2 instead. When Google introduced SPDY, they also pushed for a new extension called NPN to do this; taken through standardization in the IETF, it has been forked, changed and renamed to ALPN with roughly the same characteristics (I don’t know the specific internals so I’ll stick to how they appear from the outside).

So, NPN and especially ALPN are fairly recent TLS extensions and you need a modern enough SSL library to get that support. OpenSSL and NSS both support NPN and ALPN in recent enough versions, while GnuTLS only supports ALPN. You can build libcurl to use any of these three libraries to get it to talk http2 over TLS.

http2 using libcurl

(This still describes what’s in curl’s git repository; the first release to have this level of http2 support is the upcoming 7.36.0 release.)

Users of libcurl who want to enable http2 support will only have to set CURLOPT_HTTP_VERSION to CURL_HTTP_VERSION_2_0 and that’s it. It will make libcurl try to use http2 for the HTTP requests you do with that handle.
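As a minimal sketch, assuming a libcurl new enough to know this option and built with nghttp2 (error handling again trimmed, URL just an example):

    #include <curl/curl.h>

    int main(void)
    {
      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* ask libcurl to attempt http2; it falls back to HTTP 1.1 when
           http2 cannot be negotiated */
        curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }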

For HTTP URLs, this will make libcurl send a normal HTTP 1.1 request with an offer to the server to upgrade the connection to version 2. If the server accepts, libcurl continues speaking http2 in the clear on the connection; if it doesn’t, libcurl continues using HTTP 1.1 on it. This is the mode that Firefox and Chrome will not support.

For HTTPS URLs, libcurl will use NPN and ALPN as explained above and offer to speak http2, and if the server supports it, there will be http2 sweetness from that point onwards. Or the server selects HTTP 1.1 and then that’s what will be used. The latter is also what will be picked if the server doesn’t support ALPN or NPN.

Alt-Svc and ALTSVC are new things planned to show up in time for http2 draft-11 so we haven’t really thought through how to best support them and provide their features in the libcurl API. Suggestions (and patches!) are of course welcome!

http2 with curl

Hardly surprising, the curl command line tool also has this power. You use the --http2 command line option to switch on the libcurl behavior as described above.

Translated into old-style

To reduce transition pains and problems, and to work with the rest of the world to the highest possible degree, libcurl will (decompress and) translate received http2 headers into HTTP 1.1 style headers, so that applications and users get a stream of headers that look very much the way you’re used to. It will also produce an initial response line that says HTTP 2.0 blabla.

Building (lib)curl to support http2

See the README.http2 file in the lib/ directory.

This is still a draft version of http2!

I just want to make this perfectly clear: http2 is not out “for real” yet. We have tried our http2 support somewhat at the draft-09 level and Tatsuhiro has worked on the draft-10 support in nghttp2. I expect there to be at least one more draft, but perhaps even more, before http2 becomes an official RFC. We hope to be able to stay on the frontier of http2 and deliver support for the most recent draft going forward.

PS. If you try any of this and experience any sort of problem, please speak to us on the curl-library mailing list and help us smooth out whatever problem you hit!


Is there a case for a unified SSL front?

There are many, sorry, very many, different SSL libraries today that various programs may want to use. In the open source world at least, it is more and more common that programs using SSL (and other crypto) offer options to build with at least OpenSSL or GnuTLS, and very often they also offer optional builds with NSS and possibly a few other SSL libraries.

In the curl project we just added support for SSL library number nine. In the libcurl source code we have an internal API that each SSL library backend must provide, and all the libcurl source code internally uses only that single, fixed API to do SSL and crypto operations, without even knowing which backend library is actually providing the functionality. I talked about libcurl’s internal SSL API before, and I asked about this on the libcurl list back in Feb 2011.
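To illustrate the idea (with invented names; this is not curl’s actual internal API, just a sketch of the pattern): each backend fills in a single table of function pointers, and the rest of the code only ever calls through that table:

    #include <sys/types.h>

    /* one of these is implemented per SSL backend; names are hypothetical */
    struct ssl_ops {
      const char *name;                       /* e.g. "OpenSSL", "NSS" */
      int (*init)(void);                      /* global library init */
      int (*connect)(void *ctx, int sockfd);  /* do the TLS handshake */
      ssize_t (*send)(void *ctx, const void *buf, size_t len);
      ssize_t (*recv)(void *ctx, void *buf, size_t len);
      void (*close)(void *ctx);               /* shut down one connection */
      void (*cleanup)(void);                  /* global library teardown */
    };

    /* the transfer code picks a backend once at build- or run-time and
       then never again mentions a specific library by name */
    extern const struct ssl_ops *ssl_backend;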

So, a common problem should be able to find a common solution. What if we fixed this in a way that many projects could re-use? What if one project’s ability to select from nine different provider libraries could be leveraged by others? A single, simplified SSL API that still provides the functionality most “simple” SSL-using applications need?

Marc Hörsken and I have discussed this a bit, and pidgin/libpurple came up as a possible contender that could use such a single SSL API. I’ve also talked about it with Claes Jakobsson and Magnus Hagander for postgresql, and I know from before that wget certainly could use it. When I’ve given my talks on the seven SSL libraries of libcurl, I’ve been approached by several people who have expressed a desire to see such an externalized API, and I remember the guys from cyassl among them. I’m sure there will be a few other interested parties as well if this takes off.

What remains to be answered is if it is possible to make it reality in a decent way.

Upsides:

  1. will reduce code in the libcurl code base – by 10K or more lines of C code – and it should mean less “own” code for all projects that decide to use this single SSL library
  2. will allow other projects to use one out of many SSL libraries, hopefully benefiting end users as a result
  3. should get more people involved in the code as more projects would use it, hopefully ending up in better tested and polished source code

There are several downsides with a unified library, from the angle of curl/libcurl and myself:

  1. it will undoubtedly lead to more code being added and implemented that curl/libcurl won’t need or use, thus grow the code base and the final binary
  2. the API of the unified library won’t be possible to be as tightly integrated with the libcurl internals as it is today, so there will be some added code and logic needed
  3. it will require that we produce a lot of documentation for all the functions and structs in the API which takes time and effort
  4. the above-mentioned points will deduct time from my other projects (hopefully to benefit others, but still)

Some problems that would have to be dealt with:

  1. how to deal with libraries that don’t provide functionality that the single SSL API does
  2. how to draw the line of what functionality to offer in the API, as the individual libraries will always provide richer and more complete APIs
  3. What is actually needed to get the work on this started for real? And if we do, what would we call the project…

PS, I know the term is more accurately “TLS” these days but somehow people (me included) seem to have gotten stuck with the word SSL to cover both SSL and TLS…

null-prefix domino

At the end of July 2009, Scott Cantor contacted us in the curl project and pointed out a security flaw in libcurl (in code that was using OpenSSL to verify server certificates). Having read his explanation, I recalled that I had witnessed the discussion on the NSS list about this problem just a few days earlier (which resulted in their August 1st security advisory). The problem is basically that the cert can at times contain a name with an embedded zero in the middle, while most source code assumes plain C-style strings that end with a zero. This turns out to be exploitable, and is explained in great detail in this document (PDF).
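To make the flaw concrete, here is a hedged sketch (the function names are illustrative, not from any real library) of the broken pattern next to a length-aware fix. A cert name like "www.bank.com\0.evil.com" carries an ASN.1 length that covers the whole buffer, but C string functions stop at the embedded zero:

    #include <string.h>   /* strlen */
    #include <strings.h>  /* strcasecmp, strncasecmp */

    /* vulnerable: treats the cert name as a C string, so everything
       after an embedded zero is silently ignored */
    int name_matches_broken(const char *cert_name, const char *host)
    {
      return strcasecmp(cert_name, host) == 0;
    }

    /* safer: compare using the length the certificate actually declares;
       an embedded zero then makes the lengths differ and the match fails */
    int name_matches_fixed(const char *cert_name, size_t cert_len,
                           const char *host)
    {
      if(cert_len != strlen(host))
        return 0;
      return strncasecmp(cert_name, host, cert_len) == 0;
    }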

I started to work on a patch, and in the meantime I talked to Simon Josefsson of the GnuTLS team to see if GnuTLS was fine or not, only to have him confirm that GnuTLS did indeed have the same problem.

So I contacted vendor-sec, and then on the morning of August 5 I thought I’d just make a quick check how the other HTTPS client implementations do their cert checks.

Wget: vulnerable

neon: vulnerable

serf: vulnerable

So, Internet Explorer and Firefox were vulnerable. NSS and GnuTLS were too. (OpenSSL wasn’t, but then it doesn’t provide this verifying feature by itself.) (lib)curl, wget, neon and serf were all vulnerable. If that isn’t a large share of the existing HTTPS clients, then what is? I also think this shows that it would be good for all of us if OpenSSL had this functionality, as even if it had been vulnerable we could’ve fixed a busload of different applications by repairing a single library. Now we instead need to hunt down all apps that use OpenSSL and verify certificate names.

Quite clearly we (as implementers) have all had the same silly assumptions, and quite likely we’ve influenced each other into writing this sloppy code. SSL and certificates are over and over again getting hit by these kinds of painful flaws and setbacks. Darn, getting things right really is very, very hard…

(Disclaimer: I immediately notified the neon and serf projects but to my knowledge they have not yet released any fixed versions.)

Linux distros consolidate crypto libs

For a while already, the Fedora distribution has fought battles, done lots of work and pushed for a consolidation of all packages that use crypto libs to completely go with Mozilla’s NSS.

Now it seems to be OpenSUSE’s turn. The discussion I link to here doesn’t reach any definite conclusions, but they seem to lean towards NSS as well, claiming it has the most features. I wonder what they base that statement on – is there a public doc anywhere that states exactly which library has what, to make any contender better than the others for them?

In the Fedora case it seems they’ve focused on the NSS FIPS license as the deciding factor but the license issue is also often brought up in this discussion.

I’ve personally been pondering on writing some kind of unified crypto layer that would expose a single API to an application and handle the different libs as backends, pretty much the same way we do it internally in libcurl at the moment. It hasn’t taken off (or even been started) since I’ve not had the time nor energy for it yet.

SSL certs crash without trust

Eddy Nigg found out and blogged about how he could buy SSL certificates for a domain he clearly doesn’t own or control. The cert is certified by Comodo, who apparently has outsourced (parts of) their cert business to a separate company that obviously does very little or perhaps no verification at all of the buyers.

As a result, buyers could buy certificates there for just about any domain/site name, and Comodo being a trusted CA in at least Firefox would thus make it a lot easier for phishers and other cyber-style criminals to set up fraudulent sites that even get the padlock in Firefox and look almost perfectly legitimate!

The question is now what Mozilla should do. What Firefox users should expect their browser to do when HTTPS sites use Comodo-verified certs and how Comodo and their resellers are going to deal with everything…

Read the scary thread on the mozilla dev-tech-crypto list.

Update: if you’re on the paranoid/safe side you can disable trusting their certificates by doing this:

Select Preferences -> Advanced -> View Certificates -> Authorities. Search for
AddTrust AB -> AddTrust External CA Root and click “Edit”. Remove all Flags.

Getting cacerts for your tools

As the primary curl author, I’m finding the comments here interesting. That blog entry “Teaching wget About Root Certificates” is about how you can get cacerts for wget by downloading them from curl’s web site, and people quickly point out how getting cacerts from an untrusted third party place of course is an ideal situation for an MITM “attack”.

Of course you can’t trust any files off an HTTP site, or off an HTTPS site without a “trusted” certificate, but thinking that the curl project would run one of those just to let random people load PEM files from our site seems a bit weird. Thus, we also provide the scripts we do all this with, so that you can run them yourself with whatever input data you need, preferably something you trust. The more paranoid you are, the harder that gets of course.

On Fedora, curl does come with CA certs (at least I’m told recent Fedoras do), and even if it doesn’t, you can actually point curl to use whatever cacert bundle you like. Since most default installs of curl use OpenSSL just like wget does, you could tell curl to use the same cacert your wget install uses.
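As a minimal sketch of that last trick (the bundle path below is purely illustrative; use wherever your wget install keeps its CA bundle, and the command line tool offers the same thing via its --cacert option):

    #include <curl/curl.h>

    int main(void)
    {
      curl_global_init(CURL_GLOBAL_DEFAULT);
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* point libcurl at a CA bundle of your own choosing */
        curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/certs/ca-bundle.crt");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }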

Sharing the bundle like that gets a little more complicated when one of the two tools is compiled with an SSL library that doesn’t easily support PEM (read: NSS), but in the case of curl in recent Fedora, they build it with NSS plus an additional patch that allows it to still read PEM files.

CA cert bundle from Firefox

It could be interesting to note that extracting all the CA certs from your local Firefox installation isn’t that tricky, if you just use some of the magic at hand in the NSS certutil tool.

Users of OpenSSL or GnuTLS based tools or libraries (such as libcurl) might be pleased to learn this.

curl users in general of course should be aware that we no longer ship any ca-cert bundle with curl (as of curl 7.18.1), as it seems some ports haven’t yet updated or discovered this.

Update: this script is now present as lib/firefox-db2pem.sh in the curl CVS repository.

libcurl built with yassl

yassl is a (comparatively) small SSL/TLS library that libcurl can be built to use instead of one of the other SSL/TLS libraries libcurl supports (OpenSSL, GnuTLS, NSS and QSSL). However, yassl differs somewhat from the others in that it provides an OpenSSL-emulating API layer, so that in libcurl we pretty much use exactly the same code for OpenSSL as we do for yassl.

yassl has claimed this libcurl support for several years, and indeed libcurl builds fine with it and you can even do some basic SSL operations with it, but the emulation is just not good enough to let the curl test suite pass, so lots of stuff breaks when libcurl is built with yassl.

I did this same test 15 months ago and it had roughly the same problems then.

As far as I know, the other three main libraries are much more stable (the QSSL/QsoSSL library is only available on the AS/400 platform, so I don’t personally know much about how it performs), and thus they remain the libraries I recommend users to select from if they want something proven to work.