Going full-time Haxx

I realize not a lot of you who read my site or blog are aware of my actual real-world day-job situation (nor should you have to care), but I still want to let you guys know that I’m ending my employment at CAG Contactor, and my intention is to find my way forward with my own company, Haxx AB, as employee number 1.

Haxx has existed for over ten years already, but so far we’ve only used it for stuff on the side that was neither full-time nor competing with our day-jobs. Starting in October, I’ll instead work only for and with Haxx.

I don’t expect my actual day-to-day business to change much, as I intend to continue as a contract developer / consultant / hacker doing embedded, Linux, open source and network development as an expert and senior engineer.

So if you want my help, you can continue to contact me the same way as before, and I can offer my services like before! 😉 The only difference is on my end, where I get more freedom and control.

This move on my part will affect some of you indirectly: I will move a lot of web and other internet-based services from servers owned and run by Contactor to servers owned by Haxx. So, expect a lot of my sites and content to suffer uptime glitches in the upcoming month as I struggle to get things up at the new place(s).

50 hours offline

Several sites in the haxx.se domain and other stuff related to me and my fellows were completely offline for almost 50 hours between August 24th 19:00 UTC and August 26th 20:30 UTC.

The sites affected included the main web sites for the following projects: curl, c-ares, trio, libssh2 and Rockbox. It also affected mailing lists and CVS repositories etc for some of those.

The reason for the outage has been explained by the ISP (Black Internet) as some kind of sabotage. Their explanation given so far, originally in Swedish and translated to English by me:

Soon after 8 pm on Monday, Black Internet and Black Internet’s customers were struck by a very serious act of sabotage. The sabotage was directed against several of our core switches, our network hubs. This resulted in a more or less total disruption of service for us and our customers. We have reported the incident to the police and we have a good cooperation with them.

Do note that you could keep track of the situation by following me on Twitter.

It’s good to be back. Let’s hope it’ll take ages until we go away like that again!

Update: according to my sources, someone erased/cleared Black Internet’s core routers, and they then learned that they had no working backups, so they had to restore everything by hand.

kernel hacker foodfights

The concept of flame wars and public pie-throwing is not new in the open source world, and the open nature of the projects lets us – the audience – see everything: read every upset word, and point back to the mails in retrospect.

I don’t think people in the open source community are particularly more trigger-happy to start flame wars than people outside of the openness, but open it is, and so we all get to see it.

I’ve always disliked the harsh attitude and language that seems to have become popular in some circles, and I believe Linus Torvalds himself is part of that movement, as he’s often rude, foul-mouthed and very aggressive in his (leadership) style. I think that easily grows into a hostile and unfriendly atmosphere, where little room is left for fun, for jest and for helping out among friends.

So even if that is not the reason for the recent developments, here are two episodes from August 2009:

A short while ago we got to see well-known kernel hacker Alan Cox step down as tty maintainer after an emotional argument on the lkml. The argument there was basically Linus telling Alan he should’ve admitted his error and acted on it earlier than he did.

Nearby, on the linux-arm-kernel mailing list, a long-running argument about the management of the list itself sparked up again. The argument in this case has been about whether the mailing list that Russell King (the main ARM Linux maintainer) runs should be open to allow non-subscribers to post without moderation. It ended today with Russell shutting down his lists.

Right now, it seems the linux-arm-kernel list is being transferred over to infradead.org by David Woodhouse to continue its life there, but I don’t think we’ve seen the end of this yet so things may settle differently. There’s also this patch pending which suggests using the linux-arm list on vger.kernel.org.

(Readers should note that I myself don’t take side in any of these arguments.)

fully respect your rights

This is [name removed] writing at Toshiba Corporation.

We are considering using your program curl (http://curl.haxx.se/) in our products. Before going any further, however, we would like to confirm the following so that we are sure to fully respect your rights.

I am so impressed. Thank you Toshiba for being this upfront and courteous when incorporating an open source product. The license is perfectly free and open for you to use curl for this purpose, but the sheer act of this “making sure” gets my 10 points for great business conduct.

Slacka-fun!

A bunch of the local OpenBSD fans here in Stockholm run this one-day event every year, called Slackathon. I missed it last year, but in 2007 I was there (and I did a little talk about open source management) and this year I was eager to participate again.

This year, the event was scheduled to take place immediately after a bunch of core OpenBSD developers had had their “hackathon f2k9” in Stockholm, so the organizers could boast a series of very well-known and very knowledgeable OpenBSD kernel hackers. As I am really no more than a distant observer of the OpenBSD project, this of course shone light on a lot of dusty corners I had no previous idea about. I’m not really a stranger to kernels and kernel hacking in general, and I must confess I had a great time; the people who spoke on various very detailed kernel topics were charismatic and put on a great show.

So I learned about the terrors of the VFS layer and hacking it (and how they’re working on making all the involved caches dynamically sized). I learned how to do active-active syncing of pf-based firewalls (basically using two independent firewalls in front of something), or at least how the guys made it work fairly well, and how the pf firewall was optimized to double its forwarding performance. I got to hear a few wise words from Theo de Raadt and learned not only about their six-month release schedule but also about their plans and ideas for solving problems with livelocking and more. Not to mention the talk about managing physical memory, or the work to get OpenBSD ported to sparc64 machines with hardware-based virtualization support.

Taking all the hardcore kernel talks into account, I think my own talk on libssh2 (just before dinner) felt like a very light snack to chew, and possibly a tiny bit out of the general topic… Anyway, I gave a quick overview of the project: how it started, why it was started, what it is, and a bit about how it works.

The slides from my Slackathon talk are available. I expect to re-use a fair bunch of that material, with some improvements and additions, in my libssh2 talk at FSCONS later this year.

Looking forward to Slackathon 2010!

Pictures from Slackathon 2009 by Vladimir Bogodist.

Me in front of the projector screen, doing the libssh2 talk.

Snaxx 21

Yes!

It is now time to once again leave the dark and dusty corners of your office or closet, bring yourself up to speed on what currency we’re using in this country, and then unite with fellow hackers and technologists in Stockholm City during a fine September evening. The entire Haxx team is delighted to inform you that Snaxx-21 is about to happen…

When: Monday, September 28th 2009

Time: around 18:30

Where: see the snaxx site!

As usual we’re informal, and as our friends you’re of course allowed and encouraged to bring other friends who are similar in spirit and who you think would appreciate an event such as this.

When you’ve decided to show up, please email me and say so.

There might even be free t-shirts involved this time!

Oh, and if you are a Stockholmer and didn’t get this invite by mail already, let me know and I’ll add you to the list of people who get this notice by the old trusty RFC822 way.

curl fooled by null-prefix

We’ve just now released a security advisory on curl and libcurl, regarding how a forger can trick libcurl into verifying a forged site as having a fine certificate, if they just have a CA create one for them with a carefully crafted embedded zero…

I think this flaw shines a great light on the problems we deal with in keeping code safe and secure. When writing code, and as in this case using C, we might believe we’re mostly vulnerable to buffer overflows, pointer mess-ups, memory leaks or similar. Then we see this fascinatingly imaginative “attack” creep up…

The theory in short and somewhat simplified:

A server certificate is always presented by a server when a client connects to it using SSL. The certificate contains the server’s name. The client verifies that A) the cert is signed by the correct authority and B) the cert has the correct name inside.

The A) part works because servers buy their cert from a CA whose public signature is shipped in all browsers, and thus we can be “cryptographically safe” when we see a match.

This latest flaw was in the naming part (B). Apparently someone managed to trick a CA into handing out a cert with an embedded zero byte in the name. If haxx.se were to buy the cert, we could get it with an embedded zero like:

“example.com\0.haxx.se”

Now, this works fine in certificates, since they store the string and its length separately. In C, however, we’re used to strings being terminated with a trailing zero… So, if we were to take over the “example.com” HTTPS server, we could put our legitimately purchased certificate on that server, and clients would use strcmp() or the equivalent to check the name in the certificate against the host name they try to connect to.

The embedded zero makes strcmp(host, certname) return MATCH and the client was successfully fooled.
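
To make this concrete, here is a minimal C sketch of the comparison going wrong, using the certificate name from the example above (the length-aware check at the end is just one way of doing it correctly):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
      /* the certificate stores the name and its length separately, so
         the embedded zero is a legitimate part of this 20-byte name */
      const char certname[] = "example.com\0.haxx.se";
      size_t certlen = sizeof(certname) - 1; /* 20, embedded zero included */
      const char *host = "example.com";

      /* naive check: strcmp() stops at the embedded zero and reports a
         match - this is how clients were fooled */
      if(strcmp(host, certname) == 0)
        printf("strcmp: MATCH (fooled!)\n");

      /* length-aware check: the embedded zero no longer truncates the
         comparison, so the forged name is rejected */
      if(strlen(host) == certlen && !memcmp(host, certname, certlen))
        printf("length-aware: MATCH\n");
      else
        printf("length-aware: MISMATCH (correct)\n");
      return 0;
    }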

curl is no longer vulnerable to this trick since 7.19.6, and we have released a boatload of patches for older versions in case upgrading is not an option.

curl 7.19.6 is here!

Yet again we strike back with an update to the popular download tool curl and the transfer library libcurl.

Noticeable changes this time include:

  • A security related fix, for the flaw named CVE-2009-2417.
  • CURLOPT_FTPPORT (and curl’s -P/--ftpport) now supports port ranges.
  • Added CURLOPT_SSH_KNOWNHOSTS, CURLOPT_SSH_KEYFUNCTION and CURLOPT_SSH_KEYDATA, so that both the library and the curl tool now understand and work with OpenSSH style known_hosts files (if built with libssh2 1.2 or later).
  • CURLOPT_QUOTE, CURLOPT_POSTQUOTE and CURLOPT_PREQUOTE can be told to ignore error responses when used with FTP. Handy if you want to run custom commands that may fail, but still enjoy persistent connections properly. (See the sketch below.)
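
To illustrate that last item, here is a minimal libcurl sketch. I’m assuming the asterisk prefix is what marks a quote command as allowed to fail, and the host name and SITE command are made up for illustration:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl;
      struct curl_slist *cmds = NULL;

      curl_global_init(CURL_GLOBAL_ALL);
      curl = curl_easy_init();
      if(curl) {
        /* the leading '*' (an assumption of mine) tells libcurl to carry
           on even if this command draws an error response */
        cmds = curl_slist_append(cmds, "*SITE CHMOD 0644 uploaded.txt");

        curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/uploaded.txt");
        curl_easy_setopt(curl, CURLOPT_POSTQUOTE, cmds); /* run after transfer */

        if(curl_easy_perform(curl) != CURLE_OK)
          fprintf(stderr, "transfer failed\n");

        curl_slist_free_all(cmds);
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }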

Let me just mention that the known_host support will make the SCP and SFTP transfers done with curl one step more secure. My work on this feature (both in libssh2 and in libcurl) was sponsored by a well-known company that shall remain unidentified at their request.
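
For the curious, this is roughly how the new known_hosts option is used from libcurl; a minimal sketch where the URL and the file path are made up, and error handling is kept short:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl;
      CURLcode res;

      curl_global_init(CURL_GLOBAL_ALL);
      curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "sftp://user@example.com/file.txt");

        /* point libcurl at an OpenSSH style known_hosts file; this needs
           a libcurl built against libssh2 1.2 or later */
        curl_easy_setopt(curl, CURLOPT_SSH_KNOWNHOSTS,
                         "/home/user/.ssh/known_hosts");

        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
          fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }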


libcurl in package management

A few days ago I noticed that the “urlgrabber” project has now switched to using pycurl (the Python libcurl binding) in their bleeding-edge development. It means that projects using it, such as the well-known apps yum and anaconda, then use libcurl. The Suse installer named YaST has already been using libcurl for ages, and a few months ago I learned that the OpenSolaris package management (pkg) is also switching to become pycurl-based.

According to the lead man on the urlgrabber project, Seth Vidal, there are several reasons to switch from Python’s native urllib for (mostly) HTTP transport, and he was friendly enough to mention a few to me. Clearly the two primary reasons are FIPS certification and urllib’s lacking HTTP proxy support. The FIPS certification is something the Fedora project has been pushing hard for recently, and thus they’ve worked hard on making libcurl support NSS for SSL/TLS; the lack of HTTP proxy support is supposedly hard to push into urllib itself due to its stagnant development etc.

In Debian-esque worlds, libcurl and curl are already used by the package system in forms of apt-transport-https and apt-file.

It seems that when you run an open source operating system tomorrow, chances are that libcurl is in the back-end of the package system.