Category Archives: Technology

Really everything related to technology

Mini 2440 Lyre

On ebay there’s a fancy S3C2440-based board named Mini 2440, with a 3.5″ touch LCD attached, on sale for 85 USD. 64MB RAM, a 400MHz CPU, NAND flash and more. Lots of stuff for the money.

mini2440

The guys in the Lyre project seem to have adopted this as yet another hardware platform to attempt to run Rockbox on. After their Atmel AT91SAM target was ditched, they went the ARMopendous route, and now this board seems to have entered the picture. This third hardware platform is called the Lyre prototype 2.

You should note that this Mini 2440 board has no battery or anything of the sort, and is thus not really meant to be used as a portable device in this shape.

“Bob” seems to have initial Rockbox code running on this device, and well-established Rockbox hackers JdGordon and domonoky have both ordered their own kits so the future looks bright.

Sandisk: “our sound fidelity isn’t perfect”

Some owners of the Sansa Clip player from SanDisk noticed that it plays back all songs with a minor pitch error. Due to a flawed hardware setup it doesn’t do a proper 44,100 Hz but instead 43,791 Hz (0.993 times the target value) or something like that. According to some sources this problem might be fixed in the Sansa Fuze players now.
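For the curious, here’s a quick back-of-the-envelope calculation (mine, not SanDisk’s) of how big that pitch error actually is, expressed in musical cents (hundredths of a semitone):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
      double target = 44100.0;             /* the proper sample rate */
      double actual = 43791.0;             /* what the Clip reportedly does */
      double ratio = actual / target;      /* the ~0.993 mentioned above */
      double cents = 1200.0 * log2(ratio); /* convert the ratio to cents */
      printf("ratio %.4f => %.1f cents\n", ratio, cents);
      return 0;
    }

Compiled with -lm, this prints roughly -12 cents: about an eighth of a semitone flat. Most casual listeners won’t notice that, but people with good ears certainly can.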

Bugs aren’t really that surprising; perhaps what is surprising is that this bug has been around for almost two years now. To make matters worse, SanDisk has now decided that since they make cheap players, people shouldn’t expect them to be very good sound-quality-wise, and that they therefore simply won’t fix the problem:

due to trade-off decisions that were made in engineering these products to deliver superior consumer value at what we believe are extremely attractive price points, our sound fidelity isn’t perfect. We have re-evaluated the possibility of reducing the pitch variation and due to the engineering trade-offs the decision was made to stay with the current design. Very few listeners, however, have noticed or complained about it as an issue in actual practice.  For those who can detect sound differences with their naked ears during actual use and not via frequency analysis, our products may not be the best choice for them.

Clip owners out there now put their hope even more on Rockbox for Clip.


Why top-posting annoys me

This is hardly news to anyone who cares, and those who should care the most either don’t understand what top-posting is in the first place, or aren’t aware that people like me think top-posting is an evil disease we need to extinguish.

My primary reason to hate top-posting is that it is fast and easy for the single user who writes the reply, but it creates more work for the large number of people who read it. When someone posts to a mailing list, one should rather expect the single writer to be the one to put in a little extra effort to make the result readable for the masses who will read it.

Top-posting also most often comes with the habit of including the entire previous conversation quoted below.

A sensible posting and quoting ethic is to only quote as much as you need from the previous conversation to make your point clear, and to respond in a way that makes it obvious which parts of the quote you are referring to. That more or less implies doing “interlaced” or “inlined” posting, where you show a few lines of quotes and then a few lines of comments, over and over until the end of the mail.
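To illustrate with an entirely made-up exchange, an inlined reply could look like this:

    > Should we enable the new cache code by default in the next release?

    No, it still breaks the two oldest platforms we claim to support.

    > And is anyone against moving the weekly meeting to Thursdays?

    Thursdays work fine for me.

Everything not needed to understand the answers has been cut away, and each answer sits right below the part it addresses.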

The act of doing bottom-posting but keeping the entire thing quoted above the new text you add is almost as bad as top-posting. You remove the focus of what you write by providing far too much irrelevant text. Remove the irrelevant parts!

These days large portions of the modern world use broadband connections, so the actual size of the mail is not a concern for bandwidth or speed reasons, but you probably still want the receivers to focus on your actual point. Also, a lot of mails these days end up in web archives or similar, where they become searchable by internet search engines and browsable by future readers, and then you want the mail to stay on topic even more, so that it is relevant and not misleading to searches.

In case it isn’t obvious: this of course primarily concerns mails sent to (largish) mailing lists.

Making better advisories

A while ago yet another security flaw was discovered in curl (actually the tenth flaw in more than eleven years) by Scott Cantor. He reported it privately to us. We worked on the issue and in the end I posted an official cURL project security advisory about it. It wasn’t anything out of the ordinary really. Scott did great and we fixed the problem rather promptly, in coordination with vendor-sec etc.

After a security advisory and the accompanying release, something particular always happens. It’s the same every time I’ve done this and there’s really no surprise: one by one the different Linux distros and similar parties start to ship their security advisories and alerts about the same problem and they offer their upgrade paths for their users to get a corrected version of the package.

But I’ll tell you why those advisories tend to make me really sad. It’s not because of the flaws they fix or how fast or slow they are to appear. It’s entirely due to their contents, or perhaps, in many cases, the lack of contents.

When the first distro-based advisory comes out, they often tend not to use the phrasing of the original advisory (which we’ve crafted over weeks and coordinated with vendor-sec) but instead offer their own interpretation. This isn’t necessarily a bad thing, but when they simplify matters they tend to blur the actual cause and make the real issue hard to understand. Not to mention that when the first guy has made a mistake, most others just repeat it without thinking.

Credit to the doers

The craft of hunting down security problems in software, and the art of then creating a fix for them, is very time consuming and takes a fair amount of skill and patience. Yet some people do this. Some of them even track down problems in open source code bases and tell the projects about the issues, to give them a chance to fix them before they’re made public.

Those people are good people that we need.

In the open source world, and in fact in a lot of other places too, just about the only reward we can offer guys who do outstanding work like this is attribution. Give credit where credit is due. Mention the guy who did the job!

Distro advisories are not good

Very often the subsequent advisories go the lazy route and borrow their explanation from another distro’s advisory, still not using the original explanation. They like short and not too detailed explanations. Factual errors don’t seem to matter much.

Very few distro advisories give any credit to the original guy who found the error. The one thing we can offer as payment is thus neglected, and this is more of an established practice than a mistake. All distros do this. At best they refer to a CVE number for the flaw, but CVE numbers have the great disadvantage that they very rarely reveal any particular details about the flaw until a long time after the advisory is made.

Not only do they often not credit the originator, they also rarely link back to the original advisory or even to the advisory the originator sent out (sometimes they’re sent out independently), so getting the full description from the actual upstream project is harder than it has to be. They do however generally link to their own site, using their own issue number for the security problem. If things are good, you can find references to the original in the web page they link to. I’ve also seen several distro advisories that simply don’t mention at all what patches they’ve applied or what particular upstream changeset they’ve backported.

In this latest advisory from curl, the commonly repeated mistake was the claim that the certificate flaw concerned the Common Name field (implying that it was only about that field), which is wrong, and which is why the original advisory didn’t explicitly mention that field. It also affects the subjectAltName field, and for this particular flaw that one is at least as important to address, if not more so. The flaw also only concerned curl built to use OpenSSL, a fact that was often not mentioned at all.

What I suggest!

That every vendor and Linux distro that ships security advisories do this:

  1. credit the original problem finder/researcher. This way the glory and fame go to the person(s) who often did a lot of research and hard work.
  2. link to the original advisory, so that everyone who wants to can get the info and details from the upstream project: their view of what the problem is and what the best fixes or work-arounds might be.
  3. fact-check your error/solution description better, and don’t just repeat what someone else wrote unless you know it’s an accurate description.
  4. don’t repeat others’ simplifications and errors. The act of duplicating someone else’s description is pretty low in general, and often only signals that you haven’t understood the issue in the first place. If you have a rule not to copy others, you won’t risk copying their mistakes.

Slacka-fun!

A bunch of the local OpenBSD fans here in Stockholm run this one-day event every year, called Slackathon. I missed it last year, but in 2007 I was there (and I did a little talk about open source management) and this year I was eager to participate again.

This year, the event was scheduled to take place immediately after a bunch of core OpenBSD developers had had their “hackathon f2k9” in Stockholm, so the organizers could now boast a series of very well known and very knowledgeable OpenBSD kernel hackers. As I am really not more than a distant observer of the OpenBSD project, this of course shone a light on a lot of dusty corners I had no previous idea about. I’m not really a stranger to kernels and kernel hacking in general, and I must confess I had a great time; the guys who spoke on various very detailed kernel topics are charismatic and put on a great show.

So I learned about the terrors of the VFS layer and hacking it (and how they’re working on making all the involved caches dynamically sized). I learned how to do active-active syncing of pf-based firewalls (basically using two independent firewalls in front of something), or at least how the guys made it work fairly well. Or how the pf firewall was optimized to double its forwarding performance. And I got to hear a few wise words from Theo de Raadt and learn not only about their 6-month release schedules but also their plans and ideas for solving problems with livelocking and more. Not to mention the talk about managing physical memory, or the work to get OpenBSD ported to sparc64s with hardware-based virtualization support.

Taking all the hardcore kernel talks into account, I think my own talk on libssh2 (just before dinner) felt like a very light snack to chew on, and possibly a tiny bit off the general topic… Anyway, I gave a quick overview of the project: how it started, why it was started, what it is, a bit about how it works, etc.

The slides from my Slackathon talk are available. I expect to re-use a fair bunch of them, with some improvements and additions, in my libssh2 talk at FSCONS later this year.

Looking forward to Slackathon 2010!

Pictures from Slackathon 2009 by Vladimir Bogodist.

Me in front of the projector screen, doing the libssh2 talk

Snaxx 21

Yes!

It is now time to once again leave the dark and dusty corners of your office or closet, bring yourself up to speed on what currency we’re using in this country, and then unite with fellow hackers and technologists in Stockholm City during a fine September evening. The entire Haxx team is delighted to announce that Snaxx 21 is about to happen…

Monday, September 28th 2009

Time: around 18:30

Where: see the snaxx site!

As usual we’re informal, and as our friends you’re of course allowed and encouraged to bring other friends who are similar in spirit and who you think would appreciate an event such as this.

When you’ve decided to show up, please email me and say so.

There might even be free t-shirts involved this time!

Oh, and if you are a Stockholmer and didn’t get this invite by mail already, let me know and I’ll add you to the list of people who get this notice the trusty old RFC822 way.

curl fooled by null-prefix

We’ve just now released a security advisory on curl and libcurl, regarding how a forger can trick libcurl into verifying a forged site as having a fine certificate, if the forger just had a CA create one with a carefully crafted embedded zero…

I think this flaw shines a bright light on the problems we deal with when maintaining code to be safe and secure. When writing code, in this case in C, we might believe we’re mostly vulnerable to buffer overflows, pointer mess-ups, memory leaks or similar. Then we see this fascinatingly imaginative “attack” creep up…

The theory in short and somewhat simplified:

A server certificate is always presented by a server when a client connects to it using SSL. The certificate contains the server’s name. The client verifies that A) the cert is signed by the correct authority and B) the cert has the correct name inside.

The A) part works because servers buy their cert from a CA whose root certificate is shipped in all browsers, and thus we can be “cryptographically safe” when we see a match.

This latest flaw was in the naming part (B). Apparently someone managed to trick a CA into handing out a cert with an embedded zero byte. Like, if we at haxx.se would buy such a cert, we could get it with an embedded zero like:

“example.com\0.haxx.se”

Now, this works fine in certificates, since they store the string and its length separately. In the C language we’re used to strings that are terminated with a trailing zero… so, if we would take over the “example.com” HTTPS server, we could put our legitimately purchased certificate on that server, and clients would use strcmp() or the equivalent to check the name in the certificate against the host name they try to connect to.

The embedded zero makes strcmp(host, certname) report a match, and the client is successfully fooled.
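A tiny stand-alone C program (my own illustration, not the actual libcurl code) demonstrates the effect:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
      /* the host name the client wants to reach */
      const char *host = "example.com";

      /* the name from the forged certificate; certificates store the
         data with an explicit length, so the embedded zero is legal */
      const char certname[] = "example.com\0.haxx.se";
      size_t certlen = sizeof(certname) - 1; /* 20 bytes, zero included */

      /* the vulnerable check: strcmp() stops at the embedded zero */
      if(strcmp(host, certname) == 0)
        printf("strcmp: MATCH - client fooled!\n");

      /* a length-aware comparison is not fooled */
      if(strlen(host) == certlen && memcmp(host, certname, certlen) == 0)
        printf("length-aware: match\n");
      else
        printf("length-aware: no match\n");
      return 0;
    }

The essence of the fix is the length-aware version: when the data format carries its own length, you must never let a stray zero byte terminate the comparison early.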

curl is no longer vulnerable to this trick since 7.19.6, and we have released a boatload of patches for older versions in case upgrading is not an option.

A view of a popular post

So I post frequently on this blog, but I’m not a particularly interesting person myself, I’m not really a master at writing and phrasing articles to make them thrilling and irresistible, and I basically only deal with really geeky and technical subjects. That means an average of perhaps 200 views per day.

The other day I wrote my multipath tcp post, and someone submitted it to reddit. It turned out to become the most read posting on my blog ever. By far. I think the “views per day” graph looks pretty cool:

visitor graph from daniel.haxx.se/blog

My IETF 75 Story

A while ago, like a couple of years ago, I joined the mailing list for the HTTPbis effort. That’s the IETF working group which is working on producing an update to the HTTP 1.1 spec RFC2616 that clarifies it and removes things that nobody does or that don’t work. That spec is huge and the number of conflicting statements or just generally muddy expressions is large. This work is not near completion yet.

So when I learned the IETF 75 meeting was to be held here in Stockholm, I of course cheered at the opportunity to join in the talks and meet the people here.

The IETF is basically just an informal bunch of people who like internet protocols, “above the wire and below the application”. The goal of the IETF is to make the Internet work better. This is the organization behind the RFCs. They wrote the specs for TCP, FTP, SMTP, POP3, HTTP and a wide range of other protocols we all use non-stop these days. I’m actually quite surprised the IETF is so little known. When I mention the organization’s name, most people just give me a blank face back that reveals it just isn’t known to very many “civilians”.

The IETF has meetings around the world 2 or 3 times per year, and every time some 1K-2K people join up for a week filled with working group meetings, talks and a lot of socializing. The attitude is generally relaxed all over and very welcoming to newcomers (like me). Everyone can join any talk or discussion. There’s simply no dress code; everyone just wears whatever they think is comfortable.

The whole week was packed with scheduled talks and sessions, but I only spent roughly 1.5 days at the conference center as I had to get some “real work” done as well and quite honestly I didn’t find sessions that matched my interest every single day anyway.

After the scheduled days, and between sessions and sometimes instead of the scheduled stuff, a lot of social events took place. People meet in informal gatherings, talks, planning sessions and dinners. The organizers of this particular event also arranged two separate off-topic social events. As one of the old-timers I talked to put it: “this year was unusually productive, but just about all of that was done outside of the schedule”…

This time, the 75th IETF meeting was hosted by .se in Stockholm, Sweden, at the end of July 2009. 1230 persons had signed up to come. The entrance fee was 650 USD, unless you wanted to pay very late or at the door, in which case it was 130 or so extra. Oh, and we got a t-shirt!

One amusing little detail: on the t-shirt we got there was a “free invite code” to the Spotify streaming music service. But when people at the IETF meeting tried to use it at the conference center, Spotify refused to accept them, claiming the service doesn’t work in the US! Clearly they’re not using the most up-to-date IP geography database in the world! 😉

So let me quickly mention a few of the topics I caught and found interesting, some of which I’ve blogged about separately.

HTTP over SCTP

The guys in the SCTP team work on this draft on how to do HTTP the best possible way over SCTP. They cite tests that claim “web browsing” becomes a better experience when done over SCTP. I’m personally quite interested in this work and in SCTP in general, and I hope to be able to play with making libcurl support this in a not too distant future.

How to select TCP or SCTP

Assuming that SCTP gets widespread adoption and even browsers and browser-like apps start supporting it, how should the clients figure out which transport mechanism to use? The problem is similar to the selection between IPv4 and IPv6, but for the IP choice we can at least use A and AAAA lookups. The suggestions that were presented basically argued for trying all combinations in parallel and going with the fastest to respond, but a few bright minds questioned the smartness of that, as it scales very badly if more transport options are added and it potentially introduces a lot more (start-up) traffic.
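To show the flavor of that race-them-all approach, here’s a stripped-down sketch of mine that races two non-blocking TCP connects and picks whichever completes first. A real client would race TCP against SCTP; the address and ports here are made up purely for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* start a non-blocking connect and return the socket */
    static int start_connect(const char *ip, int port)
    {
      struct sockaddr_in sa;
      int s = socket(AF_INET, SOCK_STREAM, 0);
      fcntl(s, F_SETFL, fcntl(s, F_GETFL) | O_NONBLOCK);
      memset(&sa, 0, sizeof(sa));
      sa.sin_family = AF_INET;
      sa.sin_port = htons(port);
      inet_pton(AF_INET, ip, &sa.sin_addr);
      connect(s, (struct sockaddr *)&sa, sizeof(sa)); /* EINPROGRESS */
      return s;
    }

    int main(void)
    {
      /* two made-up candidates; imagine one TCP and one SCTP attempt */
      int a = start_connect("127.0.0.1", 80);
      int b = start_connect("127.0.0.1", 443);
      fd_set wfds;
      struct timeval tv = { 5, 0 };

      FD_ZERO(&wfds);
      FD_SET(a, &wfds);
      FD_SET(b, &wfds);

      /* the first socket to turn writable won the race (a real program
         would also check SO_ERROR to rule out failed connects) */
      if(select((a > b ? a : b) + 1, NULL, &wfds, NULL, &tv) > 0)
        printf("winner: candidate %c\n", FD_ISSET(a, &wfds) ? 'a' : 'b');

      close(a);
      close(b);
      return 0;
    }

The scaling complaint is easy to see in this shape: every additional transport option means yet another parallel connect attempt, and all but one of them are pure overhead.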

Getting HTTP from multiple mirrors

What the authors call Multiserver HTTP is a pretty small suggestion of a few additional HTTP headers that a server would be able to return to a client, hinting about other URIs where the exact same file/resource can be downloaded from. A client would then get that resource in parallel from multiple HTTP servers using range requests. Quite inspired by other technologies such as bittorrent and metalink. This first draft faced some criticism for not using existing HTTP as well as it could, but the general spirit has been welcoming.
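As a rough sketch of what the client side could then do, here’s a little libcurl program of mine that fetches two halves of a resource from two different servers with standard HTTP range requests. The mirror URLs are made up, and a real client would run the transfers in parallel (say, with the libcurl multi interface) rather than in sequence:

    #include <stdio.h>
    #include <curl/curl.h>

    /* fetch one byte range from one server, appending to 'out' */
    static void fetch_range(const char *url, const char *range, FILE *out)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_RANGE, range);   /* byte range */
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out); /* default callback writes here */
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
    }

    int main(void)
    {
      FILE *out = fopen("resource.bin", "wb");
      if(!out)
        return 1;
      curl_global_init(CURL_GLOBAL_DEFAULT);
      /* first half from one mirror, second half from another */
      fetch_range("http://mirror-a.example.com/resource.bin", "0-499999", out);
      fetch_range("http://mirror-b.example.com/resource.bin", "500000-999999", out);
      curl_global_cleanup();
      fclose(out);
      return 0;
    }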

The presenter of the idea, Mike Hanley, also mentioned the ability to let the server response include a wildcard, so that a server for example could hint that “other images from this dir path can also be found under this dir path on this other server”. Thus a client would be able to download such images from multiple servers. This idea is not in that draft and I’m not personally sure it fits as nicely.

HTTP-state wg

During the IETF week, it was announced that the HTTP-state working group is being formed. It didn’t actually happen at the meeting itself, but still… See my separate blog post on http-state.

Multipath TCP

The idea and concept behind MPTCP was new to me, but I quickly came to like the thought of getting this into the network stacks around me. I hope this will grow up to become something fine! See my separate blog post on multipath tcp for all the details.

IRI

I visited the IRI BOF and got some fine insights into the troubles of creating the IRI spec. Without revealing too much, it’s quite clear that politics can sometimes be hard even in these surroundings…

tng – Transport Next Generation

What felt pretty “researchy” and still not really ready for adoption (or am I wrong?) is this effort they call Transport Next Generation. Their ideas include inserting a whole bunch more layers into the typical network stack, to for example move congestion handling into its own layer, making it work on a per-network-segment basis instead of only end-to-end like today. Apparently they have tests and studies that suggest the per-segment approach can improve traffic a lot. These days the first and last parts of network accesses are often done over wireless networks, while the core tends to still be physical cables.

DCCP

I found it interesting and amusing when they presented DCCP with all its bells and whistles, and then toward the end of the presentation it surfaced that they don’t really know what DCCP would be used for; at the moment the working group is pretty much done, but there’s just nobody using the protocol…

HTTPbis

HTTPbis is more or less my “home” in the IETF. We had a meeting in which things were discussed, some decisions were made and some new topics were raised. RFC2616 is a monster of a spec. It contains so much detail, so many potentially conflicting statements, and quite clearly so many sections that implementers have interpreted differently, that doing these clarifications is next to endless work. I figure the work will simply be deemed “done” one day, and the remaining confusions will then just be left. The good part is that the new document should at least be heaps better than the former. It will certainly benefit future and existing HTTP implementers nonetheless.

And a bunch of the HTTPbis guys also got together a bit outside of the meeting, so we got to talk quite a bit and topped off one evening with a dinner…

OpenDNSSec

There was quite a lot of DNSSEC talk during the week, and it annoys me that I double-booked the evening of their OpenDNSSec release (or was it tech preview?) party, so I couldn’t go there and take advantage of my two free beers!

Observations

Apple laptops. A crushing majority of the people seemed to have Apple branded laptops, and nearly all presentations I saw were done with Apples.

Not too surprisingly, the male to female ratio was very, very high. I would guess 20:1 to 30:1, at least in the surroundings where I spent my time.

Upcoming Meetings

This week was lots of fun. More fun than I have had at a conference in a very long time. I’ll definitely consider going to some upcoming meetings, although the next one in Japan in November doesn’t fit my schedule. Possibly Anaheim in March 2010, and even more likely the 78th meeting in Maastricht in July 2010.

Thanks everyone who was there. Thanks to the hosts for a great event. It was a blast!

Multipath TCP

During the IETF 75 meeting in Stockholm, there was this multipath tcp BOF (a “start-up meeting” of sorts) on Thursday morning that I visited.

Multipath TCP (shortened to MPTCP at times) is basically an idea to make everything look like TCP to both end points, while allowing additional TCP paths to be added and packets to be routed over any of the added flows, to overcome congestion and to select the routes where traffic flows “best”. The socket API would remain unmodified at both ends. The individual TCP paths would all look and work like regular TCP streams to the rest of the network. It is basically a way to introduce these new fancy features without breaking compatibility. Of course, a big point of that is to keep functionality over NATs and other middle-boxes. (See the full description.)
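To make the unmodified-API point concrete, an ordinary TCP client like this sketch of mine (host name picked for illustration) is exactly the kind of code that would keep working unchanged, with the kernel adding and dropping extra paths beneath it:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
      struct addrinfo hints, *res;
      int sock = -1;

      memset(&hints, 0, sizeof(hints));
      hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
      hints.ai_socktype = SOCK_STREAM;  /* plain TCP, as far as we know */

      if(getaddrinfo("example.com", "80", &hints, &res) != 0)
        return 1;

      sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
      if(sock >= 0 && connect(sock, res->ai_addr, res->ai_addrlen) == 0)
        puts("connected - possibly over several paths, without knowing it");

      if(sock >= 0)
        close(sock);
      freeaddrinfo(res);
      return 0;
    }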

The guys holding the BOF had already presented a fairly detailed draft of how it could be designed, both one-ended and with multiple addresses, but could also boast an already written implementation that was even demoed live in front of the audience.

The term ‘path’ is basically used for a pair of address+port sets. I would personally rather call it “flow” or “stream” or something, as we cannot really guarantee that the paths are separate from each other; that is entirely in the hands of those who route the IP packets to the destination.

They stressed that their goals here included:

  • perform no worse than TCP would on the best of the single TCP paths
  • be no harder on the network than a single TCP flow would be, not even for single bottlenecks (network and bottleneck fairness)
  • allow resource pooling over multiple TCP paths

A perfect use-case for this is hosts with multiple interfaces. Like a mobile phone with 3G and wifi: it could have a single TCP connection using paths over both interfaces, and it could even change paths along the way, when you move and hand over to new wifi access-points or when you plug in your Ethernet cable or whatever. Kind of like a solution to the mobile IP roaming concept that never was made to actually work in the past.

The multipath tcp mailing list is already quite active, and it didn’t take long until possible flaws in the backwards compatibility were discovered and discussed. Like, if you use TCP to verify that a particular link is alive, MPTCP may in fact break that, as the proposal is currently written.

What struck me as an interesting side-effect of this concept is that, if implemented, it will separate packets from the same original stream further from each other, and possibly make snooping on plain-text TCP traffic harder. Like in the case where you monitor traffic going through a router or similar.