
So THAT is the point of releases!

In the Rockbox project we’ve been using a rather sophisticated build system for many years that provides updated binary packages to the public after every single commit. We also provide daily-built zips, manuals, fonts and other extras straight off the Subversion server, fully automatically, every day.

I used to be in the camp that thought this system is so good that it makes ordinary version-numbered releases somewhat unnecessary, since everyone can easily get recent downloads whenever they want anyway. We also had a general problem getting a release done at all.

But as you all know by now, we shipped Rockbox 3.0 the other day. And man did it hit the news! Coverage on a whole range of news sites helped bring our web server to a crawl. In the four days following the release, we got roughly 160,000 more visits on our site than usual, five times the normal amount (200,000 visits compared to the “normal” 40,000).

Of course, as a pure open source project with no company or money involved anywhere, we don’t exactly need new users, but we do want more developers, and hopefully we reach a few new potential contributors as we become known to a larger audience.

So I’m now officially convinced: doing this release was a good thing!

Shared Dictionary Compression over HTTP

Wei-Hsin Lee of Google posted about their effort to create a dictionary-based compression scheme for HTTP. I find the idea rather interesting, and it’ll be fun to see what the actual browser and server vendors will say about this.

The idea is basically to use “cookie rules” (domain, path, port number, max-age etc) to make sure a client gets a dictionary, after which the server can deliver responses that are diffs computed against the dictionary it previously gave that client. For repeated, similar content it should achieve far better compression ratios than any existing HTTP compression in use.

I figure it should be seen as a relative of the “Delta encoding in HTTP” idea, although the SDCH idea seems somewhat more generically applicable.

Since they seem to be using the VCDIFF algorithm for SDCH, the recent open-vcdiff announcement of course is interesting too.

popen() in pthreaded program confuses gdb

I just thought I’d share a lesson I learned today:

I’ve been struggling for a long time at work with a gdb problem. When I set a breakpoint and then single-step from that point, it sometimes (often) decides to act as if I had done ‘continue’ and not ‘next’. It is highly annoying and makes debugging nasty problems really awkward.

Today I searched around on the topic, and after some experiments I can now confirm: if I remove all uses of popen(), I no longer get the problem! I found posts indicating that forking can confuse gdb when debugging threaded programs, and since this program at work uses threads I could immediately see that it uses both popen() and system(), both of which use fork() internally. (And yes, my removal of popen() also removed the system() calls.)

And now I can finally debug my crappy code again to become less crappy!

My work PC runs glibc 2.6.1, gcc 4.1.3 and gdb 6.6. But I doubt the specific versions matter much.

Standardized cookies never took off

David M. Kristol is one of the authors of RFC2109 and RFC2965, “HTTP State Management Mechanism”. RFC2109 is also known as the first attempt to standardize how cookies should be sent and received. Prior to that document, the only cookie spec was the very brief document released by Netscape in the old days and it certainly left many loose ends.

Mr Kristol has published a great and long document, HTTP Cookies: Standards, Privacy, and Politics, about the slow and dwindling story of how the IETF work on a cookie standard came about and how it proceeded.

Still today, none of those documents are used very much. The original Netscape way is still how cookies are done, and even though a lot of good will and great effort went into doing things right in these RFCs, I honestly can’t see anything on the horizon that will push the web world toward making cookies compliant.
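For contrast, this is roughly what the two styles look like on the wire (illustrative values only). The first, Netscape-style header is what practically every server and browser still speaks; the second, RFC 2965 Set-Cookie2 form with its quoted attribute values never caught on:

```http
Set-Cookie: sessionid=abc123; domain=.example.com; path=/; expires=Wed, 01-Oct-2008 12:00:00 GMT

Set-Cookie2: sessionid=abc123; Version="1"; Domain=".example.com"; Path="/"; Max-Age="3600"
```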

HTTP implementations

I previously mentioned on the libcurl mailing list that Mark Nottingham of the IETF HTTP Working Group has initiated work on putting together an overview of all (interesting) existing HTTP implementations.

Of course curl is included in the bunch (or rather libcurl), but I would also urge you all to step forward and provide further details on other implementations you have worked on or know of!

Portably Yours

One day in late 2001 I had a talk with my brother Björn and our mutual friend Linus about how their portable MP3 player, an Archos Player, was very cool but the software/firmware on it was rather limited and lame in many ways. (Correct, I didn’t own any portable music player at that time.) As usual, we brought up the idea of hacking it yourself, and oh how good couldn’t you make it then? Not long afterwards, we had a mailing list set up for discussing how to reverse engineer and improve the Archos Player firmware. Personally, I had no device yet, but it sounded like good fun so I subscribed and participated from the start. After a few months I got myself an Archos Recorder to be able to get down on the metal too, and the Recorder was also slightly different and thus brought some more challenges to the team.

Archos Recorder

The Recorder was a step forward since it provided better sound and a (gasp!) graphical LCD. Some of the first work I did in the Rockbox project was on the code for the LCD: bringing text to it using fonts, providing line-drawing routines and so on. Keeping the entire screen in a separate “frame buffer” that gets pushed to the display with an lcd_update() call was an early design decision that has stuck ever since.

We took Rockbox a long way, supporting more and more of the early SH-based Archos targets, including the V2s, the FM and the Ondio series. But eventually the models became hard to find and went out of production. It was time to start looking at other targets, and other targets would more or less force us into a world of software audio codecs!

I got my original Recorder stolen, but I had it replaced, put in an 80GB disk, and there was much rejoicing.

iriver h140

We scanned the market for targets built from reasonably standard components for which we could get specs and docs, found the iriver h1x0 series, and the work began. As usual we got plenty of help from everywhere, and it didn’t take long to show the naysayers that we could indeed transition Rockbox into a future with software codecs. Not long after, we also took it to the h3x0 series with its color screen, and my golly, didn’t an entirely new world of opportunities open up? 40GB of disk was just about enough to hold most of my music collection.

iAudio X5

What’s the fun of Rockbox if I can’t follow along? I got the iAudio X5 early in the porting effort, joined in, and got my first color-screen target. Of course I didn’t like how the mere 20GB disk limited what music I could bring with me, but hey, some sacrifices had to be made for the greater good of advancing Rockbox! 😉

Rockbox was now booming and flourishing, coming to new targets all over and getting more and more developers involved.

Sansa E260

A guy from SanDisk contacted us asking about a Rockbox port to their Sansa E200 series, and even though they sent me a bunch of targets, they never provided any docs or actual help with the effort of porting Rockbox to these babies. Not even figuring out the firmware format; that was instead done by our own secret super-hero MrH.

Meizu M6

Time flies, and soon enough (like in late 2007) none of the targets Rockbox ran fine on were still being manufactured, and they started to get hard to find in shops all over the world. The eternal race to get Rockbox ported to a currently manufactured model of course just got more important.

It had been almost two years since I got the Sansa e200 series from SanDisk, and it was time to join the efforts of bringing Rockbox to some Meizus. The number of interested people and the existence of a (leaked) data sheet for the main SoC helped me settle on buying this one.

Cowon D2

As I’ve already explained, I bought a D2 at the same time I got the Meizu, so that I could do comparisons for people and just play around some extra.

Of course, my timing is dubious, as I got my two new targets at exactly the period in my life when I went back to work (almost) full-time after six months of paternity leave. Together with my extra “admin duties”, such as the euro devcon and GSoC 2008, I really haven’t had much time to dive into low-level fiddling with the ports yet. Hopefully I’ll soon get adjusted and find some time to really help out.

Coverity’s open source bug report

The great guys at Coverity published their Open Source Report 2008, in which they detail findings about source code they’ve monitored and how quality, bug density and so on have changed over time since they started scanning over 250 popular open source projects. curl is one of the projects included.

Some highlights from the report:

  • curl is mentioned as one of the (few) projects that fixed all defects identified by Coverity
  • since their first scans, the average defect frequency has gone down from one defect per 3333 lines of code to one per 4000 lines (that is, defect density dropped from roughly 0.30 to 0.25 defects per thousand lines)
  • they find no support for the old belief that there’s a correlation between function length and bug count
  • the average function length is 66 lines

And the top-5 most frequently detected defects are:

  1. NULL Pointer Dereference 28%
  2. Resource Leak 26%
  3. Unintentional Ignored Expressions 10%
  4. Use Before Test (NULL) 8%
  5. Buffer Overrun (statically allocated) 6%

For all details and more fun reading, see the full Open Source Report 2008 (1MB PDF).

Blaming Debian packaging

I happened to read the blog post called Open-Source Security Idiots, which has a real go at the poor Debian maintainer of OpenSSL for causing the recent, much-debated OpenSSL security problem in Debian and Debian-based distros.

While I think the author, Steven J. Vaughan-Nichols, is mostly correct in his criticism, he is far too specific in singling out Debian as the one bad distro (and his additional confused complaint about Firefox vs Iceweasel just made the article lose focus).

As someone who’s involved in a bunch of projects that are packaged by a range of Linux distros, I can’t help but disagree. This habit of changing packages without passing the changes upstream is widespread, and it isn’t limited to code changes by maintainers; it extends to sitting on mere bug reports too. It is something that just about every distro does to at least some extent. It varies from package to package and over time, but taking the whole picture into account I honestly can’t say that any single distro is worse than the others. It is a disease that follows the distros, and we must all help to exterminate it.

Of course, the upstream projects also need to be aware of this and help pushing packagers of their software to behave.