Tag Archives: Security

Less plain-text is better. Right?

Every connection and every user on the Internet is being monitored and snooped on to at least some extent every now and then. It can be anyone from the casual Firesheep user in your coffee shop, an admin at your ISP, your parents or kids on your wifi network, your employer on the company network, your country’s intelligence service in a national network hub, to just a random rogue person somewhere in the middle of all this.

My involvement in HTTP makes me mostly view and participate in this discussion with that protocol primarily in mind, but the discussion goes well beyond HTTP and the concepts can (and will?) be applied to most Internet protocols in the future. You can follow some of these discussions in the httpbis group, the UTA group, the tcpcrypt list, on twitter and elsewhere.

IETF just published RFC 7258 which states:

Pervasive Monitoring Is an Attack

Passive monitoring

Most networking surveillance can be done entirely passively, by just running the correct software and listening in on the correct cable. This works because most Internet traffic is still plain-text, readable by anyone who wants to read it as the bytes come flying by, just like your postman can read your postcards.

Opportunistic?

Recently there’s been a fierce discussion going on both inside and outside of IETF and other protocol and standards groups about doing “opportunistic encryption” (OE) and its merits and drawbacks. The term, which is itself being debated and is often said to be better called “opportunistic keying” (OK) instead, is about having protocols transparently (invisibly to the user) upgrade plain-text versions to unauthenticated, TLS-encrypted versions of the protocols. I’m emphasizing the word unauthenticated there because that’s a key to the debate. Recently I’ve been told that the term “opportunistic security” is the term to use instead…

In the way of real security?

Basically, the argument against opportunistic approaches tends to go like this: by opportunistically upgrading plain-text to unauthenticated encrypted communication, sysadmins and users in the world will consider that good enough and they will then not switch to proper, strong and secure authenticated encryption technologies. The less good alternative will hamper the adoption of the secure alternative. Server admins might just as well buy a cert for 10 USD and use proper HTTPS. Also, listeners can still listen in on or man-in-the-middle unauthenticated connections if they capture everything from the start of the connection, including the initial key exchange. Or the passive listener will just switch to becoming an active party, and this unauthenticated way doesn’t detect that. OE doesn’t prevent snooping.

Isn’t it better than plain text?

The argument for opportunism here is not about what the user sees: nothing will show that the connection is “upgrading” to something less bad than plain text. Browsers will not show the padlock, clients will not treat the connection as “secure”. It will just silently and transparently make passive monitoring of networks much harder, and it will force actors who truly want to snoop on specific traffic to up their game and probably switch to active monitoring in more cases. Something that’s much more expensive for the listener. It isn’t about the cost of a cert. It is about setting up and keeping the cert up-to-date, about SNI not being widely enough adopted, and about the fact that we see only 30% of all sites on the Internet today using HTTPS – for these reasons and others.

HTTP:// over TLS

In the httpbis working group in IETF, the outcome of this debate is that a way is being defined for doing HTTP as specified with an HTTP:// URL – which we’ve learned means plain-text – over TLS, as part of the http2 work. Alt-Svc is the way. (The header can also be used to just load balance HTTP etc, but I’ll ignore that for now.)
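
As a rough sketch of the mechanism (the header values here are only an example), a server responding to a plain-text HTTP:// request can advertise a TLS-capable alternative with the Alt-Svc response header, and a client that implements this can then redo subsequent requests for that origin over TLS:

    HTTP/1.1 200 OK
    Alt-Svc: h2=":443"; ma=86400

The ma parameter tells the client for how many seconds it may keep considering the alternative valid.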

Mozilla and Firefox are basically the only browser team that initially stands behind the idea of implementing this. HTTP:// done over TLS will not be seen nor considered any more secure than ordinary HTTP is, and users will not be aware of whether it happens or not. Only true HTTPS connections will get the padlock, secure cookies and the other goodies true HTTPS sites are known and expected to get and show.

HTTP:// over TLS will just silently send everything through TLS (assuming that it can actually negotiate such a connection), thus making passive monitoring of the network less easy.

Ideally, future http2 capable servers will only require a single config entry to be set to TRUE to make it possible for clients to do OE against them.

HTTPS is the secure protocol

HTTP:// over TLS is not secure. If you want security and privacy, you should use HTTPS. This said, MITMing HTTPS transfers is still a widespread practice in certain network setups…

TCPcrypt

I find this initiative rather interesting. If implemented, it removes the need for all these application level protocols to do anything about opportunistic approaches, and it could instead be handled transparently at the TCP level! It still has a long way to go though before we will see anything like this fly in real life.

The future will tell

Is this just a fad that will get no adoption and go away or is it the beginning of something that will change how we do protocols in the future? Time will tell. Many harsh words are being exchanged over this topic in many a debate right now…

(I’m trying to stick to “HTTP:// over TLS” here when referring to doing HTTP OE/OK over TLS. This is partly because RFC2818 that describes how to do HTTPS uses the phrase “HTTP over TLS”…)

tailmatching them cookies

A brand new libcurl security advisory was announced on April 12th, which details how libcurl could leak cookies to the wrong domains due to a tailmatching flaw. Let me explain the details.

(Did I mention that security is hard?)

curl first implemented cookie support way back in the early days, in the late 90s. I participated in the IETF work that much later documented how cookies work in real life. I know how cookies work, and yet this flaw still existed in the curl cookie implementation for over 13 years. Until someone spotted it. And once again that sense of gaaaah, how come we never saw this before!! came over me.

A quick cookie 101

When cookies are used over HTTP, a cookie is (if we simplify things a little) only a name = value pair that is set to be valid for a certain domain and path. But the path only specifies a prefix, and the domain only specifies the tail part. This means that a site can set a cookie for the entire part of the site that is under the path /members, so that it will be sent by the browser for /members/names/ as well as for /members/profile/me etc. The cookie will then not be sent to the same domain for pages under a different path, such as /logout or similar.
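
As a concrete (made up) example, a response header like this sets such a cookie:

    Set-Cookie: session=abc123; Domain=example.org; Path=/members

A browser would then send it back in a Cookie: header for requests to paths under /members on that domain, but not for /logout.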

The domain for a cookie can be set to be valid for example.org and then it will be sent by the browser also for www.example.org and www.sub.example.org, but not at all for example.com or badexample.org.

Unless of course you have a bug in the cookie tailmatching function. The bug libcurl had until 7.30.0 was released made it send cookies set for the domain example.org also to sites that have the same tail but a different prefix, like badexample.org.
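
To illustrate the class of mistake with a simplified sketch (this is not libcurl’s actual code): a naive tailmatch only compares the last bytes of the host name against the cookie domain, while a correct one also has to verify that the match starts at a label boundary, i.e. that the character right before the matched tail is a dot, or that the two strings are identical.

    #include <string.h>
    #include <strings.h>

    /* BAD: "badexample.org" matches a cookie domain of "example.org" */
    static int naive_tailmatch(const char *cookie_domain, const char *host)
    {
      size_t cl = strlen(cookie_domain);
      size_t hl = strlen(host);
      if(hl < cl)
        return 0;
      return strcasecmp(cookie_domain, host + (hl - cl)) == 0;
    }

    /* better: require an exact match, or a '.' right before the matched tail */
    static int safer_tailmatch(const char *cookie_domain, const char *host)
    {
      size_t cl = strlen(cookie_domain);
      size_t hl = strlen(host);
      if(hl == cl)
        return strcasecmp(cookie_domain, host) == 0;
      if(hl < cl)
        return 0;
      return host[hl - cl - 1] == '.' &&
             strcasecmp(cookie_domain, host + (hl - cl)) == 0;
    }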

Let me try a story on you

It might not be obvious at first glance how terrible this can become to users. Let me take you through an imaginary story, backed up by some facts:

Imagine that there’s a known web site out there on the internet that provides an email service to users. Users log in on a form and read email. Or perhaps it is a social site. Preferably for our story, the site uses HTTP only, but this trick can be done against most HTTPS sites as well with only a mildly bigger effort.

This known and popular site runs its services on ‘site.com’. When you’re logged in to site.com, your session is a cookie that keeps getting sent to the server and the server sometimes updates the contents and sends it back to the browser. This is the way millions of sites work.

As an evil person, you now register a domain and setup an attack server. You register a domain that has the same ending as the legitimate site. You call your domain ‘fun-cat-and-food-pics-from-site.com’ (FCAFPFS among friends).

Mr evil person also knows that there are several web browsers, typically special purpose ones for different kinds of devices, that use libcurl as their base. (But it doesn’t have to be a browser, it could be other tools as well, but for this story a browser fits fine.) Let’s say you know a person or two who use one of those browsers on site.com.

You send a phishing email to these persons. Or post a funny picture on the social site. The idea is to have them click your link to follow through to your funny FCAFPFS site. A little social engineering, who on the internet can truly resist funny cats?

The visitor’s browser (which uses a vulnerable libcurl) does the wrong “tailmatch” on the domain for the session cookie and gladly hands it off to the attacker site. The attacker site could then use that cookie to access site.com and hijack the user’s session. Quite likely the attacker would immediately change password or something and logout/login so that the innocent user who’s off looking at cats will get a “you are logged-out” message when he/she returns to site.com…

The attacker could then use “password reminder” features on other sites to get emails sent to site.com to allow him to continue attacking the user’s other accounts on other services. Or if site.com was a social site, the attacker would post more cat links and harvest more accounts etc…

End of story.

Any process improvements?

Every security vulnerability a project gets should be a reason to scrutinize what went wrong. I don’t mean in the actual code necessarily, but more what processes we lack that made the bug sneak in and remain in there for so long without being detected.

What didn’t we do that made this bug survive this long?

Obviously we didn’t review the code properly. But this is a tricky beast that was added a very long time ago, back in the days when the project was young and not that many developers were involved. Before we even had a test suite. I do believe that we have slightly better reviews these days, but I will also claim that it is far from sure that we would detect this flaw by a sheer code review.

Test cases! We clearly lacked the necessary test case setup that tested the limitations of how cookies are supposed to work and get sent back and forth. We’ve added a few new ones now that detect this particular flaw fine, but I think we have reasons to continue to search for various kinds of negative tests we should do. Involving cookies of course, but also generally in other areas of the curl project.

Of course, we’re all just working voluntarily here on spare time so we can’t expect miracles.


Why the latest security vulnerability in curl happened

At the end of January 2013 we got a fresh security vulnerability pointed out to us in the curl project (it was publicly announced on Feb 6). Another buffer overflow, this time in the SASL DIGEST-MD5 handling for POP3, IMAP and SMTP. It is the 16th security flaw during curl’s life-time of almost 15 years, so it isn’t a disaster, but of course it is never fun when it happens. I put a lot of my own effort and pride into this project, so every time something like this floats to the surface my pride and self-esteem get damaged a bit.

Everyone who’s concerned about open source and security, and foremost about a reliable and secure libcurl, of course now wonders: how did this happen? How could this security problem get into libcurl and what are we doing to make sure it doesn’t happen again?

Let me tell you the story. It is not as interesting nor full of conspiracies as you’d like. It is instead rather dull and boring but nevertheless the truth.

I’m the lead developer and maintainer of curl and libcurl. I personally have done some 65% of all commits in the project and I do the majority of all code reviews on the mailing list. Our code might be used by some 500 million users, but the number of regulars that can be considered the “core team” can still basically be counted on a single hand. Also, we all do this primarily on our spare time.

During intense development periods we get flooded by bug reports and patch submissions and my backlog grows. It’s really not possible to foresee when these periods come, but occasionally it seems the planets align in this way and work piles up.

In order to proceed the best way in the project, I try to focus on the architectural and “deep” matters that need me and my particular knowledge the most. I then try to leave the “easy” problems to others, and I try to stay away from the issues that already seem to be under control by some of the existing regulars in the project. I also have to let other “elders” in the project push things with slightly less scrutiny, just to be able to plow through the work better. Unfortunately this leads to the occasional flaw getting through, and in this case it was even a security vulnerability which, when you look back at the code, you really cannot understand how we could miss.

We do take security seriously though and we make a big effort on handling all security reports swiftly and accurately. Even if this was the 16th time we let our guard down, I want to think that we at least react responsibly and in a good way when we realize our mistakes.

Please don’t judge us due to this. Please instead consider joining us and help us review code and help us find the next flaw before we merge it into mainline or at least before we do a public release with the code!


libcurl claimed to be dangerous

On October 24th, my twitter feed suddenly got more activity than usual when there was a mention of a newly(?) published paper:

The most dangerous code in the world: validating SSL certificates in non-browser software

Within the twelve page document they discuss flaws in various APIs and other certificate checking software, and for libcurl they say:

Internally, it uses OpenSSL to verify the chain of trust and verifies the hostname itself. This functionality is controlled by parameters CURLOPT_SSL_VERIFYPEER (default value: true) and CURLOPT_SSL_VERIFYHOST (default value: 2). This interface is almost perversely bad. The VERIFYPEER parameter is a boolean, while a similar-looking VERIFYHOST parameter is an integer.

(The fact that libcurl supports no less than nine(!) different SSL library backends seems to have been ignored but is irrelevant.)

The final part is their focus. VERIFYHOST is an integer option but it looks like it could be similar to the VERIFYPEER option, which could be considered a boolean option – but note that there are no boolean options at all in libcurl; they are all “long” values. They go on to explain:

Well-intentioned developers not only routinely misunderstand these parameters, but often set CURLOPT_SSL_VERIFYHOST to TRUE, thereby changing it to 1 and thus accidentally disabling hostname verification with disastrous consequences
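
For reference, this is what the documented, secure use of these two options looks like from C (and since these are the default values, a plain application that never touches them gets the same behavior):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* verify the server's certificate chain - this is the default (1) */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
        /* verify that the cert matches the host name - this is the default (2) */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }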

They back up their claim with some snippets from PHP programs showing wrong use in chapter 7.

What did the authors do to try to fix the problem before posting rude comments in a report? Nothing. At. All. They could’ve emailed, tweeted or posted a bug report or patch but none of that happened.

They also only post examples of bad use from PHP code. The PHP code uses the PHP/CURL binding, and a change could easily be done in the PHP binding. I don’t know the PHP internals, but perhaps the binding could be changed to not accept a boolean value for this option and require a proper number instead.

We’re now discussing this topic on the libcurl mailing list. If you have ideas or suggestions or just comments, feel free to join in!

Oh, and I feel that my recent blog post on the non-verifying users seems related and relevant.

I will also call the majority of these suddenly appearing complainers about this API mostly hypocrites, since the API has been established and working like this for over 10 (ten!) years and not a single person has objected to it before. Joining up on the “bandwagon” now and calling the API stupid or silly is… well, I’d call it “non-intelligent behavior”. In libcurl we take a stable and solid API and ABI very seriously. We simply do not break API nor ABI unless forced brutally into a corner we can’t escape otherwise. Therefore we have kept this API to keep existing applications functional.

Update: the discussion thread on the topic from the PHP-DEV list. Thanks to Jan Ehrhardt.

Second update: we shipped libcurl 7.28.1 on November 20 2012, and it no longer accepts the value 1 for VERIFYHOST. Instead it makes curl_easy_setopt() return an error and keeps the default value (which is 2). This will prevent applications from accidentally being insecure due to use of 1.

SSL verification still often disabled

Back in 2002 I realized that having libcurl not do SSL server verification by default basically meant that everyone writing libcurl apps would inherit that flaw, simply because most people just let the defaults remain unless they really have to read up on what something does and then modify it. If things work, things will just remain. So when we shipped libcurl 7.10 on the first of October that year, libcurl started verifying server certs by default.

Fast forward about ten years.

Surely SSL clients everywhere now do the right thing?

One day a couple of months ago, I was referred to this bug report for the pyssl module in Python, which identifies that it doesn’t verify server certs by default! The default SSL handler in Python doesn’t verify the certificate properly. This makes all Python programs that use it without special attention vulnerable to man-in-the-middle attacks.

So let’s look at the state of another popular language: PHP. A plain standard PHP program that opens an ssl:// or tls:// stream also runs without verifying server certs, unless the author of said program knows and understands these things. If a program instead decides to use the PHP/CURL binding for HTTPS or similar, it will use libcurl’s default, which verifies certs (as I explained above).

But not everything is gloomy. Some parts of our community have decided to do the right thing:

I was told (and shown proof) that Ruby now does the right thing, but I don’t know how recent that change is and thus how many older Ruby programs still suffer.

The same problem existed with perl’s major HTTPS using module, the LWP, for a very long time. The perl camp however already modified LWP to do verification by default with the release of libwww-perl 6.00, released in March 2011.

Side-note: in the curl project we make it easy for everyone on the Internet to use Firefox’s excellent CA cert bundle to verify server certs by providing the Firefox CA cert collection converted to PEM – the preferred format for OpenSSL, GnuTLS and others.
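
A quick sketch of how that bundle is typically used (the file path here is just an example): the curl command line tool takes it with --cacert, and a libcurl application points CURLOPT_CAINFO at the same file.

    # use the converted PEM bundle to verify the server cert
    curl --cacert /path/to/cacert.pem https://www.example.com/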

Conclusion:

Even today, lots and lots of applications and scripts remain insecure – even though they probably think they’re fairly safe when they switch to an HTTPS or SSL based protocol – and might be subject to man-in-the-middle attacks without even being able to spot it. I think that is pretty sad.

Sloppily using SSL_OP_ALL

This story begins with a security flaw in OpenSSL. OpenSSL is truly a fundamental piece of software these days and I would go so far as to say that lots of our critical infrastructure today uses it and needs it. Flaws in OpenSSL literally affect entire societies, or at least risk doing so if the flaws can be exploited.

SSL/TLS is a rather old and well used protocol with many different implementations, both client and server side. In order to enhance how OpenSSL works with older SSL implementations, or just those that have different views on how to implement things, OpenSSL provides an API call to tweak behaviors: the SSL_CTX_set_options function. In the curl project we’ve found good use of it for this purpose, and we use the generic define SSL_OP_ALL to switch on all “rather harmless” workarounds that OpenSSL offers. Rather harmless, that’s what the comment in the header file says.

Ok, enough background and dancing around the issue. The flaw that ignited my idea to write this blog post was a particular weakness introduced a long time ago in the SSL 3.0 and TLS 1.0 protocols themselves, which can be exploited by an attacker who is able to select parts of the plain-text being sent (see this explanation). The problem is a generic one with the protocol, so different SSL libraries would approach it differently. OpenSSL fixed the flaw back in the days of 0.9.6d (we’re talking May 9th 2002). As a user of a library such as OpenSSL it always feels good to see them being on top of security problems and releasing fixes. It makes you feel that you’re being looked after to some extent.

Shortly thereafter, the OpenSSL developers discovered that some broken server implementations didn’t work with the work-around they had done…

Alas, on July 30th 2002 the OpenSSL team released version 0.9.6e, which offered a way for programs to disable this particular work-around. Switching it off of course makes the protocol less secure again, but it inter-operates better with faulty servers. How do you switch off this security measure? By calling the SSL_CTX_set_options function with the bit SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS set.

Ok, so far so good. But the next step is what changed everything from fine to not so fine anymore: they then added that new bit to the SSL_OP_ALL define.

Yes. In one blow, every single application out there that uses SSL_OP_ALL suddenly started switching off this security measure as soon as it was recompiled against this version of OpenSSL. This change was made in 2002 and it is still like this today. It fixed the security problem from OpenSSL’s point of view, but by later adding the bit to the SSL_OP_ALL define, the problem was instead transferred to the many programs using it.
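
For an application that wants OpenSSL’s harmless workarounds but not this particular one, the fix is to mask the bit back out again. A minimal sketch of the idea:

    #include <openssl/ssl.h>

    /* enable the "rather harmless" workarounds, but keep the empty-fragment
       countermeasure for SSL 3.0/TLS 1.0 CBC ciphers switched on */
    static void set_workarounds(SSL_CTX *ctx)
    {
    #ifdef SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS
      SSL_CTX_set_options(ctx, SSL_OP_ALL & ~SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS);
    #else
      SSL_CTX_set_options(ctx, SSL_OP_ALL);
    #endif
    }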

In curl’s case, we were alerted about this flaw on January 19th 2012 and it resulted in a security advisory. I did a quick search for SSL_OP_ALL on koders.com and it is obvious that there are hundreds of programs out there still using this bitmask as-is. In the curl project we enabled the SSL_OP_ALL approach for the first time in the 7.10.6 release we did in July 2003. It was wrong already at the time we started using it. It turns out we’ve been enabling this flaw for almost nine years.

In the GnuTLS camp however, they simply stopped doing their work-around for this as soon as they started supporting TLS 1.1, due to the problems the work-around caused with some servers. This since TLS 1.1 isn’t vulnerable to the problem. OpenSSL 1.0.1 beta was released on January 3 2012 and is the first OpenSSL version ever released to support TLS newer than 1.0… The browsers/NSS seem to have mitigated this problem in a different way, and there’s a patch available for OpenSSL to implement the same work-around, but there’s been no feedback on how or if it will be used.

News in curl 7.24.0

We continue doing curl releases roughly bi-monthly. This time we strike back with a release holding a few interesting new things that I thought are worth highlighting a little extra!

The most important and most depressing news about this release is the two security problems that were fixed. Never before have we released two security advisories for the same release.

Security fixes

The “curl URL sanitization vulnerability” is about how curl trusts user-provided URL strings a little too much. By providing sneakily crafted URLs with embedded URL-encoded carriage returns and line feeds, users could trick curl into doing unintended actions when the POP3, SMTP or IMAP protocols were used.
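
Purely as a hypothetical illustration of the principle (not the exact case from the advisory), a URL like the one below contains a URL-encoded CRLF (%0d%0a); if that gets decoded and passed on unfiltered, the URL author effectively injects an extra command into the protocol conversation:

    pop3://user:secret@mail.example.com/1%0d%0aDELE%201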

The “curl SSL CBC IV vulnerability” is about how curl inadvertently disables a security measure in OpenSSL and thus weakens the security for some aspects of SSL 3.0 and TLS 1.0 connections.

Changes

We have a bunch of new changes added to curl and libcurl that some users might like:

  • curl has this ability to run a set of “extra commands” for a couple of protocols when doing a transfer – we call them “quote” operations. A while ago we introduced a way to mark commands within a series of quote commands as not being important if they fail and that the rest of the commands should be sent anyway. We mark such commands with a ‘*’-prefix. Starting now, we support that ‘*’-prefix for SFTP operations as well!
  • CURLOPT_DNS_SERVERS is a brand new option that allows programs to set which DNS server(s) libcurl should use to resolve host names (see the sketch after this list). This option only works if libcurl was built to use a resolver backend that allows it to change DNS servers. That currently means nothing else but c-ares.
  • libcurl now supports nettle for crypto functions. libcurl has long supported both OpenSSL and gcrypt backends for some of the crypto functions it offers. The gcrypt backend made perfect sense when libcurl was built to use a GnuTLS that was built to use gcrypt, but since GnuTLS recently changed to using nettle by default, the newly added nettle support removes the need for an extra crypto library being linked in for some users.
  • CURLOPT_INTERFACE was modified to allow “magic prefixes” that let the application tell libcurl that the given name is an interface and not a host name, or vice versa. The previous way would always test for both, which could lead to accidental (and slow) name resolves when the interface name isn’t currently present etc.
  • Active FTP sessions with the multi interface are now done in a much more non-blocking way than before. Previously the multi interface would block while waiting for the server to connect back, but it no longer does. A new option called CURLOPT_ACCEPTTIMEOUT_MS was added to allow programs to set how long libcurl should wait for the server to connect back before giving up.
  • Coming in from the Debian packaging guys, the configure script now features a new option called --enable-versioned-symbols that does exactly what it is called: it enables versioned symbols in the output libcurl.
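
A minimal sketch of the new DNS server option in use (the addresses are just examples, and this of course only works in a libcurl built with c-ares):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* comma-separated list of DNS servers to use instead of the system default */
        curl_easy_setopt(curl, CURLOPT_DNS_SERVERS, "192.168.1.1,10.0.0.1");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }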

HTTP security, websockets and more


Together with friends in OWASP, I’m happy to mention that we will do an event on January 31st on the topic “HTTP security, websockets and more” where I’ll talk. It starts at 17:30. The exact location is not decided yet and will depend a bit on popularity, but it will be in Stockholm, Sweden.

The two other speakers to appear at the event are, apart from myself, John Wilander and Martin Holst-Swende. My part of the session will be about the WebSockets protocol, about the upcoming cookie RFC and some bits about the ongoing HTTPbis work.

Sign up to attend, the opportunity is only open for one week.

Omegapoint will sponsor with something to eat and drink, and we do plan to go out and grab a beer afterwards and continue the discussion.

See you!

Apple – only 391 days behind

In the curl project, we take security seriously. We work hard to make sure we don’t open up for security problems of any kind, and once we fail, we work hard at analyzing the problem and coming up with a proper fix as swiftly as possible, to keep our “customers” vulnerable for as short a time as possible.

Recently I’ve been surprised and slightly shocked by the fact that a lot of open source operating systems didn’t release any security upgrades for our most recent security flaw until well over a month after we first publicized it. I’m not sure why they all reacted so slowly. Possibly it is because vendor-sec isn’t quite working as intended, since they were informed prior to the public notification, and of course I don’t really expect many security guys to be subscribed to the curl mailing lists. Slow distros include Debian and Mandriva, while Redhat did great.

Today however, I got a mail from Apple (and no, I don’t know why they send these mails to me but I guess they think I need them or something) with the subject “APPLE-SA-2010-03-29-1 Security Update 2010-002 / Mac OS X v10.6.3“. Aha! Did Apple now also finally update their curl version you might think?

They did. But they did not fix this problem. They fixed two previous problems universally known as CVE-2009-0037 and CVE-2009-2417. Look at the date of that first one. March 3, 2009. Yes, a whopping 391 days after the problem was first made public, Apple sends out the security update. Cool. At least they eventually fixed the problem…

a big curl forward

We’re proudly presenting a major new release of curl and libcurl and we call it 7.20.0.

The primary reason we decided to bump the minor number this time was that we introduce a range of new protocols, but we also did some other rather big work. This is the biggest update to curl and libcurl that has been made in recent years. Let me mention some of the other noteworthy changes and bugfixes:

We fixed a potential security issue that would occur if an application requested to download compressed HTTP content and told libcurl to automatically uncompress it (CURLOPT_ENCODING), as libcurl could then wrongly call the write callback (CURLOPT_WRITEFUNCTION) with a larger buffer than what is documented to be the maximum size.
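
For context, a typical libcurl write callback looks roughly like this sketch; the documentation promises that the buffer handed to it never exceeds CURL_MAX_WRITE_SIZE bytes, and that is the promise that could be broken here:

    #include <stdio.h>
    #include <curl/curl.h>

    /* write everything libcurl delivers to the given FILE pointer */
    static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
    {
      return fwrite(ptr, 1, size * nmemb, (FILE *)userdata);
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_ENCODING, "gzip"); /* ask for and auto-decompress content */
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, stdout);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }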

TFTP was finally converted to a “proper” protocol internally. By that I mean that it can now be used with the multi interface in an asynchronous way and it has far fewer special treatments. It is now “just another protocol” basically, and that is a good thing. Also, the BLKSIZE problem with TFTP that has haunted us for a while was fixed, so I really think this is the best version ever for TFTP in libcurl.

In several different places in the code, older versions of libcurl didn’t properly call the progress callback while waiting for some special event to happen. This made the curl tool’s progress meter less responsive, but perhaps more importantly it prevented apps that use libcurl from aborting the transfer during those phases. The affected periods included the FTP connection phase (including the initial FTP commands and responses), waiting for the TCP connect to complete, and resolving host names using c-ares.

The DNS cache was found to have at least two bugs: one that could make entries linger in the cache eternally and another that kept them around too long. For apps that use a lot of connections to a lot of hosts, these problems could result in some serious performance punishment as the DNS cache lookups got slower and slower over time.

Users of the funny FTP server drftpd will appreciate that (lib)curl now supports the PRET command, which is needed when getting data off such servers in passive mode. It’s a bit of a hack, but what can we do? We didn’t invent it nor can we help that it’s a popular thing to use! 😉
