Category Archives: Open Source

Open Source, Free Software, and similar

More HTTP framing attempts

Previously, in my exciting series “improving the HTTP framing checks in Firefox” we learned that I landed a patch, got it backed out, struggled to improve the checks and finally landed the fixed version only to eventually get that one backed out as well.

And now I’ve landed my third version. The amendment I did this time:

When receiving HTTP content that is content-encoded and compressed, I learned that with deflate compression there is basically no good way for us to know if the content got prematurely cut off: deflate streams lack the footer too often for it to make any sense to check for it. gzip streams, however, end with a footer, so it is easier to reliably detect when they are incomplete. (As was discovered before, the Content-Length: header is far too often not updated by the server, so it instead wrongly shows the uncompressed size.)

This (deflate vs gzip) knowledge is now used by the patch, meaning that deflate compressed downloads can be cut off without the browser noticing…
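
To illustrate why the gzip case is detectable at all, here is a minimal standalone sketch using zlib (not Firefox’s actual code, and the helper name is made up): inflate() only reports Z_STREAM_END once the gzip trailer has been consumed and verified, so a truncated stream never reaches that state.

#include <string.h>
#include <zlib.h>

/* Returns 1 if 'data' holds a complete gzip stream, 0 if it looks cut off.
   A sketch only: real code would decompress incrementally as data arrives. */
static int gzip_stream_is_complete(const unsigned char *data, size_t len)
{
  z_stream z;
  unsigned char out[16384];
  int rc;

  memset(&z, 0, sizeof(z));
  if(inflateInit2(&z, 15 + 16) != Z_OK) /* 15 + 16 means "expect gzip wrapping" */
    return 0;

  z.next_in = (unsigned char *)data;
  z.avail_in = (unsigned int)len;

  do {
    z.next_out = out;
    z.avail_out = sizeof(out);
    rc = inflate(&z, Z_NO_FLUSH);
  } while(rc == Z_OK);

  inflateEnd(&z);

  /* Z_STREAM_END means the footer (CRC-32 + size) was seen and checked;
     running out of input before that means the stream was cut short. */
  return rc == Z_STREAM_END;
}

A raw deflate stream carries no such trailer, which lines up with the observation above that deflate responses too often lack a usable footer.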

Will this version of the fix actually stick? I don’t know. There’s lots of bad voodoo out there in the HTTP world and I’m putting my finger right in the middle of some of it with this change. I’m pretty sure I’ve not written my last blog post on this topic just yet… If it sticks this time, it should show up in Firefox 39.


curl, smiley-URLs and libc

Some interesting Unicode URLs have recently been seen used in the wild – like in this billboard ad campaign from Coca-Cola – and a friend of mine asked me how curl deals with such URLs.


(Picture by stevencoleuk)

I ran some tests and decided to blog my observations since they are a bit curious. The exact URL I tried was ‘www.O.ws’ (not the same smiley as shown on this billboard – note that I’ve replaced the actual smiley with “O” in this entire post since wordpress craps on it) – it is really hard to enter by hand, so now is the time to appreciate your ability to cut and paste! It appears they registered several domains for a set of different smileys.

These smileys are not really allowed IDN (Internationalized Domain Name) symbols, which makes these domains a bit different. They should not (see below for details) be converted to punycode before getting resolved; instead I assume that the pure UTF-8 sequence should, or at least will, be fed into the name resolver function. Either way, what gets passed on is either the punycode form or the UTF-8 string.

If curl was built to use libidn, it still won’t convert this to punycode, and the verbose output says “Failed to convert www.O.ws to ACE; String preparation failed”.
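
For the curious, here is a tiny standalone program, assuming GNU libidn and its idna_to_ascii_8z() function (this is not curl’s own code and curl’s exact code path differs), that attempts the same ToASCII conversion; in my understanding this is where the “String preparation failed” style rejection comes from, although the result may depend on libidn version and flags:

#include <stdio.h>
#include <stdlib.h>
#include <idna.h>

int main(void)
{
  /* U+263A (the white smiling face) is 0xE2 0x98 0xBA in UTF-8 */
  const char *name = "www.\xe2\x98\xba.ws";
  char *ace = NULL;

  int rc = idna_to_ascii_8z(name, &ace, 0);
  if(rc == IDNA_SUCCESS) {
    printf("ACE form: %s\n", ace); /* the xn-- punycode name */
    free(ace);
  }
  else
    printf("conversion to ACE failed, libidn error %d\n", rc);
  return 0;
}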

curl (exact version doesn’t matter) using the stock threaded resolver:

  • Debian Linux (glibc 2.19) – FAIL
  • Windows 7 – FAIL
  • Mac OS X 10.9 – SUCCESS

But then also perhaps to no surprise, the exact same results are shown if I try to ping those host names on these systems. It works on the mac, it fails on Linux and Windows. Wget 1.16 also fails on my Debian systems (just as a reference and I didn’t try it on any of the other platforms).

My curl build on Linux that uses c-ares for name resolving instead of glibc succeeds perfectly. host, nslookup and dig all work fine with it on Linux too (as well as nslookup on Windows):

$ host www.O.ws
www.O.ws has address 64.70.19.202
$ ping www.O.ws
ping: unknown host www.O.ws

While the same command sequence on the mac shows:

$ host www.O.ws
www.O.ws has address 64.70.19.202
$ ping www.O.ws
PING www.O.ws (64.70.19.202): 56 data bytes
64 bytes from 64.70.19.202: icmp_seq=0 ttl=44 time=191.689 ms
64 bytes from 64.70.19.202: icmp_seq=1 ttl=44 time=191.124 ms

Slightly interesting additional tidbit: if I rebuild curl to use gethostbyname_r() instead of getaddrinfo() it works just like on the mac, so clearly this is glibc having an opinion on how this should work when given this UTF-8 hostname.
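
If you want to reproduce the difference yourself, a small test program that feeds the raw UTF-8 name straight to getaddrinfo() is enough; there is nothing curl-specific in it. Based on the results above it should fail with glibc and on Windows but succeed on the Mac:

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <arpa/inet.h>

int main(void)
{
  /* the raw UTF-8 bytes for www.<U+263A>.ws */
  const char *host = "www.\xe2\x98\xba.ws";
  struct addrinfo hints, *res, *ai;

  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;
  hints.ai_socktype = SOCK_STREAM;

  int rc = getaddrinfo(host, NULL, &hints, &res);
  if(rc) {
    printf("getaddrinfo failed: %s\n", gai_strerror(rc));
    return 1;
  }
  for(ai = res; ai; ai = ai->ai_next) {
    char buf[INET6_ADDRSTRLEN];
    void *addr = (ai->ai_family == AF_INET)
      ? (void *)&((struct sockaddr_in *)ai->ai_addr)->sin_addr
      : (void *)&((struct sockaddr_in6 *)ai->ai_addr)->sin6_addr;
    printf("resolved to %s\n", inet_ntop(ai->ai_family, addr, buf, sizeof(buf)));
  }
  freeaddrinfo(res);
  return 0;
}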

Pasting in the URL into Firefox and Chrome works just fine. They both convert the name to punycode and use “www.xn--h28h.ws” which then resolves to the same IPv4 address.

Update: as was pointed out in a comment below, the “64.70.19.202” IP address is not the correct IP for the site. It is just the registrar’s landing page so it sends back that response to any host or domain name in the .ws domain that doesn’t exist!

What do the IDN specs say?

This is not my area of expertise. I had to consult Patrik Fältström here to get this straightened out (but please, if I got something wrong here, the mistake is still all mine). Apparently this smiley is allowed in RFC 3490 (IDNA2003), but that has been replaced by RFC 5890-5892 (IDNA2008) where it is DISALLOWED. If you read the spec, this is U+263A.

So, depending on which spec you follow, it either is a valid IDN character or it isn’t anymore.

What do the libc docs say?

The POSIX docs for getaddrinfo don’t contain enough info to tell who’s right, but they don’t forbid UTF-8 encoded strings. The regular glibc docs for getaddrinfo also don’t say anything and, interestingly, the Apple Mac OS X version of the docs says just as little.

With this complete lack of guidance, it is hardly any additional surprise that the glibc gethostbyname docs also don’t mention what it does in this case – but clearly it doesn’t do the same thing as getaddrinfo, at least not in glibc.

What’s on the actual site?

A redirect to www.emoticoke.com which shows a rather boring page.


Who’s right?

I don’t know. What do you think?

Bug finding is slow in spite of many eyeballs

“given enough eyeballs, all bugs are shallow”

The saying (also known as Linus’ law) doesn’t say that the bugs are found fast and neither does it say who finds them. My version of the law would be much more cynical, something like: “eventually, bugs are found“, emphasizing the ‘eventually’ part.

(Jim Zemlin apparently said the other day that it can work the Linus way, if we just fund the eyeballs to watch. I don’t think that’s the way the saying originally intended.)

Because in reality, many many bugs are never really found by all those given “eyeballs” in the first place. They are found when someone trips over a problem and is annoyed enough to go searching for the culprit, the reason for the malfunction. Even if the code is open and has been around for years it doesn’t necessarily mean that any of all the people who casually read the code or single-stepped over it will actually ever discover the flaws in the logic. The last few years several world-shaking bugs turned out to have existed for decades until discovered. In code that had been read by lots of people – over and over.

So sure, in the end the bugs were found and fixed. I would argue though that it wasn’t because the projects or problems were given enough eyeballs. Some of those problems were found in extremely popular and widely used projects. They were found because eventually someone accidentally ran into a problem and started digging for the reason.

Time until discovery in the curl project

I decided to see how it looks in the curl project. A project near and dear to me. To take it up a notch, we’ll look only at security flaws. Not only because they are probably the most important bugs we’ve had, but also because those are the ones we have the most carefully noted meta-data for: when they were reported, when they were introduced and when they were fixed.

We have no less than 30 logged vulnerabilities for curl and libcurl so far, spread out over the past 16 years. I’ve spent some time going through them to see if there’s a pattern or something that sticks out that we should put some extra attention to in order to improve our processes and code. While doing this I gathered some random info about what we’ve found so far.

On average, each security problem had been present in the code for 2100 days when fixed – that’s more than five and a half years. On average! That means they each survived about 30 releases. If bugs truly are shallow, finding them is still certainly not a fast process.

Perhaps you think these 30 bugs are really tricky, deeply hidden and complicated logic monsters, which would explain the time they took to get found? Nope, I would say that every single one of them is pretty obvious once you spot it, and none of them takes a very long time for a reviewer to understand.

Vulnerability ages

This first graph (click it for the large version) shows the period each problem remained in the code for the 30 different problems, in number of days. The leftmost bar is the most recent flaw and the bar on the right the oldest vulnerability. The red line shows the trend and the green is the average.

The trend is clearly that the bugs stay around longer before they are found, but since the project is also growing older all the time this comes somewhat naturally and isn’t necessarily a sign of us getting worse at finding them. The average age of the flaws is growing more slowly than the project itself.

Reports per year

How have the reports been distributed over the years? We have a fairly linear increase in number of lines of code, yet the reports were submitted like this (now it goes from oldest on the left to most recent on the right – click for the large version):

vuln-trend

Compare that to this chart below over lines of code added in the project (chart from openhub and shows blanks in green, comments in grey and code in blue, click it for the large version):

curl source code growth

We received twice as many security reports in 2014 as in 2013, and we got half of all our reports during the last two years. Clearly we have gotten more eyes on the code, or perhaps users pay more attention to problems or are generally more likely to see the security angle of them? It is hard to say, but clearly the frequency of security reports has increased a lot lately. (Note that I count the report year here, not the year we announced the particular problems, as announcements were sometimes made the following year if a report came in late in the year.)

On average, we publish information about a found flaw 19 days after it was reported to us. We seem to have become slightly worse at this over time; during the last two years the average has been 25 days.

Did people find the problems by reading code?

In general, no. Sure people read code but the typical pattern seems to be that people run into some sort of problem first, then dive in to investigate the root of it and then eventually they spot or learn about the security problem.

(This conclusion is based on my understanding from how people have reported the problems, I have not explicitly asked them about these details.)

Common patterns among the problems?

I went over the bugs and marked each flaw with a bunch of descriptive keywords, and then I wrote up a script to see how frequently the keywords were used. This turned out to describe the flaws more than how they ended up in the code. Out of the 30 flaws, the 10 most used keywords ended up like this, showing number of flaws and the keyword:

9 TLS
9 HTTP
8 cert-check
8 buffer-overflow
6 info-leak
3 URL-parsing
3 openssl
3 NTLM
3 http-headers
3 cookie

I don’t think it is surprising that TLS, HTTP or certificate checking are common areas of security problems. TLS and certs are complicated, and HTTP is huge and not easy to get right. curl is mostly C, so buffer overflows are a kind of mistake that sneaks in, and I don’t think 27% of the problems tells us that this is a problem we need to handle better. Also, only 2 of the last 15 flaws (13%) were buffer overflows.

The discussion following this blog post is on hacker news.

Tightening Firefox’s HTTP framing – again

Call me crazy, but I’m at it again. First, a little recap of our previous episodes in this exciting saga:

Chapter 1: I closed the 10+ year old bug that made the Firefox download manager not detect failed downloads, simply because Firefox didn’t care if the HTTP 1.1 Content-Length was larger than what was actually saved – after the connection potentially was cut off for example. There were additional details, but that was the bigger part.

Chapter 2: After having been included all the way to public release, we got a whole slew of bug reports immediately when Firefox 33 shipped and we had to revert parts of the fix I did.

Chapter 3.

Will it land before it turns 11 years old? The bug was originally submitted 2004-03-16.

Since chapter two of this drama brought back the original bugs, we still have to do something about them. I fully understand if not that many readers can even keep up with all this back and forth and juggling of HTTP protocol details, but this time we’re putting back the stricter frame checks with a few extra conditions that allow a few violations to remain but detect and react on others!

Here’s how I addressed this issue. I wanted to make the checks stricter but still allow some common protocol violations.

In particular I needed to allow two particular flaws that have proven to be somewhat common in the wild and were the reasons for the previous fix being backed out again:

A – HTTP chunk-encoded responses that lack the final 0-sized chunk.

B – HTTP gzipped responses where the Content-Length is not the same as the actual contents.

So, in order to allow A + B and yet be able to detect prematurely cut off transfers I decided to:

  1. Detect incomplete chunks when the transfer has ended. So, if a chunk-encoded transfer ends exactly on a chunk boundary we consider that fine. Good: this will allow case (A) to be considered fine. Bad: it will make us not detect a certain amount of cut-offs.
  2. When receiving a gzipped response, we consider a gzip stream that doesn’t end fine according to the gzip decompressing state machine to be a partial transfer. IOW: if a gzipped transfer ends fine according to the decompressor, we do not check for size misalignment. This allows case (B) as long as the content could be decoded.
  3. When receiving HTTP that isn’t content-encoded/compressed (like in case 2) and not chunked (like in case 1), perform the size comparison between Content-Length: and the actual size received and consider a mismatch to mean a NS_ERROR_NET_PARTIAL_TRANSFER error.
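
Expressed as code, the combined rule set looks roughly like this. This is a sketch only, with made-up names, not the actual Gecko implementation:

/* Decide whether a finished response should be flagged as
   NS_ERROR_NET_PARTIAL_TRANSFER, following rules 1-3 above. */
static int looks_like_partial_transfer(int chunked, int ended_on_chunk_boundary,
                                       int compressed, int decoder_saw_stream_end,
                                       long long content_length,
                                       long long bytes_received)
{
  if(chunked)
    /* rule 1: a missing final 0-sized chunk is tolerated as long as the
       transfer stopped exactly on a chunk boundary */
    return !ended_on_chunk_boundary;

  if(compressed)
    /* rule 2: trust the decompressor and ignore Content-Length mismatches */
    return !decoder_saw_stream_end;

  /* rule 3: plain responses must deliver exactly Content-Length bytes */
  return content_length >= 0 && bytes_received != content_length;
}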

Prefs

When my first fix was backed out, it was actually not removed but just put behind a config string (a pref, as we call it) named “network.http.enforce-framing.http1”. If you set that to true, Firefox will behave as it did with my original fix applied: it makes the HTTP/1.1 framing fairly strict and standards compliant. In order not to mess with that setting, which has now been around for a while (I’ve also had it set to true in my browser for a while and have not seen any problems with it), I decided to introduce my new changes behind a separate pref.

“network.http.enforce-framing.soft” is the new pref and it is set to true by default with my patch. It makes Firefox do the detections outlined in 1 – 3, and setting it to false disables those checks again.

Now I only hope there won’t ever be any chapter 4 in this story… If things go well, this will appear in Firefox 38.

Chromium

But how do they solve these problems in the Chromium project? They have slightly different heuristics (with the small disclaimer that I haven’t read their code for this in a while so details may have changed). First of all, they do not allow a missing final 0-chunk. Then, they basically allow any sort of misaligned size when the content is gzipped.

Update: this patch was subsequently backed out again due to several bug reports about it. I have yet to analyze exactly what went wrong.

Changing networks with Linux

A rather long time ago I blogged about my work to better deal with changing networks while Firefox is running, and the change was then pushed for Android and I subsequently pushed the same functionality for Firefox on Mac.

Today I’ve landed yet another change, which detects network changes on Firefox OS and Linux.

As Firefox OS uses a Linux kernel, I ended up doing the same fix for both the Firefox OS devices and for Firefox on Linux desktop: I open a socket in the AF_NETLINK family and listen on the stream of messages the kernel sends when there are network updates. This way we’re told when the routing tables update or when we get a new IP address, etc. I consider this way better than the NotifyIpInterfaceChange() API Windows provides, as this allows us to filter what we’re interested in. The Windows API makes that rather complicated, and in fact a lot of the time when we get the notification on Windows it isn’t clear to me why!

The Mac API way is what I would consider even more obscure, but then I’m not at all used to their way of doing things and how you add things to the event handlers etc.
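
For reference, the Linux mechanism itself is small enough to show in full. Here is a minimal standalone listener, not the Firefox code, that subscribes to link, address and route notifications over an AF_NETLINK socket:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
  int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
  if(fd < 0)
    return 1;

  /* subscribe only to the multicast groups we care about */
  struct sockaddr_nl sa;
  memset(&sa, 0, sizeof(sa));
  sa.nl_family = AF_NETLINK;
  sa.nl_groups = RTMGRP_LINK | RTMGRP_IPV4_IFADDR | RTMGRP_IPV4_ROUTE;
  if(bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
    return 1;

  char buf[8192];
  for(;;) {
    int len = (int)recv(fd, buf, sizeof(buf), 0);
    if(len <= 0)
      break;
    /* one read may carry several netlink messages */
    struct nlmsghdr *nh;
    for(nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len); nh = NLMSG_NEXT(nh, len)) {
      if(nh->nlmsg_type == RTM_NEWADDR || nh->nlmsg_type == RTM_DELADDR ||
         nh->nlmsg_type == RTM_NEWLINK || nh->nlmsg_type == RTM_DELLINK ||
         nh->nlmsg_type == RTM_NEWROUTE || nh->nlmsg_type == RTM_DELROUTE)
        printf("network change (netlink message type %d)\n", nh->nlmsg_type);
    }
  }
  close(fd);
  return 0;
}

The filtering ability mentioned above is the nl_groups mask: the kernel only delivers the categories of events the socket subscribed to.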

The journey to the landing of this particular patch was once again long and bumpy and full of sweat, in the tradition that seems to be my destiny, and this time I ran into problems with the Firefox OS emulator, which seems to have some interesting bugs that caused my code to not work properly. As a result, our automated tests failed: occasionally data sent over a pipe or socketpair doesn’t end up in the receiving end. In my case this means that my signal to the child thread to die would sometimes not be noticed, and thus the thread wouldn’t exit and die as intended.

I ended up implementing a work-around that makes it work even if the emulator eats the data by also checking a shared should-I-shutdown-now flag every once in a while. For more specific details on that, see the bug.

My talks at FOSDEM 2015


Sunday 13:00, embedded room (Lameere)

Title: Internet all the things – using curl in your device

Embedded devices are very often network connected these days. Network connected embedded devices often need to transfer data to and from them as clients, using one or more of the popular internet protocols.

libcurl is the world’s most used and most popular internet transfer library, already used in every imaginable sort of embedded device out there. How did this happen and how do you use libcurl to transfer data to or from your device?

Note that this talk was originally scheduled to be at a different time!

Sunday, 09:00 Mozilla room (UD2.218A)

Title: HTTP/2 right now

HTTP/2 is the new version of the web’s most important and most used protocol. Version 2 is due to be out very soon after FOSDEM and I want to inform the audience about what’s going on with the protocol, why it matters to most web developers and users, and not least what its status is at the time of FOSDEM.

My first year at Mozilla

January 13th 2014 I started my first day at Mozilla. One year ago exactly today.

It still feels like it was just a very short while ago and I keep having this sense of being a beginner at the company, in the source tree and all over.

One year of networking code work that, at least during periods, has not progressed as quickly as I would’ve wished, and I’ve had some really hair-tearing problems and challenges that have taken sweat and tears to get through. But I am getting through and I’m enjoying every (oh well, let’s say almost every) moment.

During the year I’ve had the chance to meet up with my team mates twice (in Paris and in Portland) and I’ve managed to attend one IETF meeting (in London) and two special HTTP/2 design meetings (in London and NYC).

openhub.net counts 47 commits by me in Firefox and that feels like counting high. bugzilla has tracked activity by me in 107 bug reports through the year.

I’ve barely started. I’ll spend the next year as well improving Firefox networking, hopefully with a higher turnout this year. (I don’t mean to make this sound as if Firefox networking is just me, I’m just speaking for my particular part of the networking team and effort and I let the others speak for themselves!)

Onwards and upwards!

curl 7.40.0: unix domain sockets and smb

curl and libcurl 7.40.0 was just released this morning. Here’s a closer look at some of the perhaps more noteworthy changes. As usual, you can find the entire changelog on the curl web site.

HTTP over unix domain sockets

So just before the feature window closed for the pending 7.40.0 release of curl, Peter Wu’s patch series was merged, bringing curl and libcurl the ability to do HTTP over unix domain sockets. This is a feature that’s been mentioned many times through the history of curl but never previously truly implemented. Peter also very nicely adjusted the test server and made two test cases that verify the functionality.

To use this with the curl command line, you specify the socket path with the new –unix-socket option and, assuming your local HTTP server listens on that socket, you’ll get the response back just as with an ordinary TCP connection.

Doing the operation from libcurl means using the new CURLOPT_UNIX_SOCKET_PATH option.
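
A minimal libcurl sketch of what that looks like; the socket path and URL here are just placeholders:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* talk to the server over the unix domain socket instead of TCP;
       the host name in the URL is still used for the Host: header */
    curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/tmp/web.sock");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/");

    CURLcode res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
  }
  return 0;
}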

This feature is actually not limited to HTTP: you can do all the TCP-based protocols except FTP over the unix domain socket, but to my knowledge only HTTP is regularly used this way. The reason FTP isn’t supported is of course its use of two connections, which would be even weirder to do like this.

SMB

SMB is also known as CIFS and is an old network protocol from the Microsoft world for accessing files. curl and libcurl now support this protocol with SMB:// URLs thanks to work by Bill Nagel and Steve Holme.

Security Advisories

Last year we had a large number of security advisories published (eight to be precise), and this year we start out with two fresh ones already on the 8th day… The ones this time were of course discovered and researched already last year.

CVE-2014-8151 is a way we accidentally allowed an application to bypass the TLS server certificate check if a TLS Session-ID was already cached for a non-checked session – when using the Mac OS SecureTransport SSL backend.

CVE-2014-8150 is a URL request injection. When letting curl or libcurl speak over an HTTP proxy, it would copy the URL verbatim into the HTTP request going to the proxy, which means that if you craft the URL and insert CRLFs (carriage return and line feed characters) you can insert your own second request, or even custom headers, into the request that goes to the proxy.
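
To make the mechanism concrete, here is a hypothetical illustration (not curl’s actual request-building code): if the URL is pasted verbatim into the request line sent to the proxy, embedded CR/LF bytes in it simply become additional lines of that request.

#include <stdio.h>

int main(void)
{
  /* a malicious URL with CRLF sequences smuggled into it */
  const char *url =
    "http://example.com/ HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"
    "GET http://evil.example/ HTTP/1.1";

  /* naive request construction: the URL goes in unfiltered */
  char request[512];
  snprintf(request, sizeof(request),
           "GET %s HTTP/1.1\r\nHost: example.com\r\n\r\n", url);

  /* what reaches the proxy now contains a second, attacker-chosen request */
  fputs(request, stdout);
  return 0;
}

The lesson is that CR and LF bytes in a URL must never be passed through untouched.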

You may enjoy taking a look at the curl vulnerabilities table.

Bugs bugs bugs

The release notes mention no less than 120 specific bug fixes, which in comparison to other releases is more than average.

Enjoy!

Can curl avoid being in a future funnily named exploit that shakes the world?

During this year we’ve seen heartbleed and shellshock strike (and a few more big flaws that I’ll skip for now). Two really eye-opening recent vulnerabilities in projects with many similarities:

  1. Popular cornerstones of open source stacks and internet servers
  2. Mostly run and maintained by volunteers
  3. Mature projects that have been around since “forever”
  4. Projects believed to be fairly stable and relatively trustworthy by now
  5. A myriad of features, switches and code that build on many platforms, with some parts of code only running on a rare few
  6. Written in C in a portable style

Does it sound like the curl project to you too? It does to me. Sure, this description also matches a slew of other projects but I lead the curl development so let me stay here and focus on this project.


Are we in jeopardy? I honestly don’t know, but I want to explain what we do in our project in order to minimize the risk and maximize our ability to find problems on our own before they become serious attack vectors somewhere!

previous flaws

It’s no secret that we have let security problems slip through at times. We’re right now working toward our 143rd release in our roughly 16 years of existence. We have found and announced 28 security problems over the years. Looking at these found problems, it is clear that very few security problems are discovered quickly after introduction. Most of them linger around for several years until found and fixed. So, realistically speaking based on history: there are security bugs still in the code, and they have probably been present for a while already.

code reviews and code standards

We try to review all patches from people without push rights in the project. It would probably be a good idea to review all patches before they go in for real, but that just wouldn’t work with the (lack of) man power we have in the project while we at the same time want to develop curl, move it forward and introduce new things and features.

We maintain code standards and formatting to keep code easy to understand and follow. We keep individual commits smallish for easier review now or in the future.

test cases

As simple as it is, we test that the basic stuff works. We don’t and can’t test everything, but having test cases for most things gives us the confidence to change code when we see problems, as we then remain fairly sure things keep working the same way as long as the tests go through. In projects with much less test coverage, you become much more conservative with what you dare to change, and that also makes you more vulnerable.

We always want more test cases. We also want to get better at always adding test cases when we add new features, and ideally we should add new test cases when we fix bugs too, so that we know we don’t reintroduce such a bug in the future.

static code analysis

We regularly scan our code base using static code analyzers. Both clang-analyzer and Coverity are good tools, and they help us by pointing out code that looks wrong or suspicious. By making sure we have very few or no such flaws left in the code, we minimize the risk. A static code analyzer is better than run-time tools for checking code flows that are hard to repeat in my local environment.

valgrind


Valgrind is an awesome tool to detect memory problems in run-time. Leaks or just stupid uses of memory or related functions. We have our test suite automatically use valgrind when it runs tests in case it is present and it helps us make sure that all situations we test for are also error-free from valgrind’s point of view.

autobuilds

Building and testing curl on a plethora of platforms non-stop is also useful to make sure we don’t depend on behaviors of particular library implementations or non-standard features and more. Testing it all is basically the only way to make sure everything keeps working over the years while we continue to develop and fix bugs. We would of course be even better off with more platforms that test automatically and with more developers keeping an eye on problems that show up there…

code complexity

Arguably, one of the best ways to avoid security flaws, and bugs in general, is to keep the source code as simple as possible. Complex functions need to be broken down into smaller functions that are possible to read and understand. A good tool for identifying functions that deserve this treatment is pmccabe.

essential third parties

curl and libcurl are usually built to use a whole bunch of third party libraries in order to perform all the functionality. In order to not have any of those uses turn into a source for trouble we must of course also participate in those projects and help them stay strong and make sure that we use them the proper way that doesn’t lead to any bad side-effects.

You can help!

All this takes time, energy and system resources. Your contributions and help will be appreciated wherever among these tasks you can pitch in. We could do more of all this, more often and more thoroughly, if only more people were involved!