My Nordic Free Software Awards 2009 nominees

Hey, it’s really about time to nominate your favourite Free Software people and projects from the nordic region for the 2009 awards before time runs out.

This year, I decided to nominate the following “nordic” heroes:

Simon Josefsson

For his excellent work in GnuTLS, libssh2 and a bunch of other projects.

Henrik Nordström

For his work on the Squid project, and for his efforts within the IETF and its HTTP-related struggles and more.

Björn Stenberg

As the primary founder of the Rockbox project. He started something special back in 2001 that is now a huge, thriving and successful Free Software project.

As you might spot, I favor “doers”. I don’t believe in the concept of “nordic projects” when it comes to free or open software – the entire concept of open and free should mean that projects cross borders and regions.

In fact, it feels so out of the ordinary to think about open source people in a geographical context that I find it hard to come up with a lot of names. It would be cool if ohloh had some way to list people and projects based on where people live.

Then again, if a person from a nordic country moves somewhere else, is he or she still a nordic person? Does it depend on where the person lived while doing the actual work? Is Linus Torvalds a nordic person since he was born, lived many years and started his big project in Finland?

(yeah I already blogged about this subject but hey, it can’t hurt can it?)

Rockbox presentation video

Robert Menes gave a talk about the Rockbox project at NYLUG a while ago:

I started out the talk by giving a little background on Rockbox; basically how it started, how it was ported around to new targets, and how the community grew as interest peaked. I showed off features of Rockbox, as well as supported targets, nearly supported targets, and even in-progress targets. I then went into describing Rockbox’s features in greater detail, and then a run-down of the development process, as well as how to compile your own builds. Many people asked questions along the way, so I answered them as they were asked. I also think that people were probably shocked at the sheer amount of targets that were shown off (nearly 40 DAPs were there!)

The 76 minute (1.6 GB) video from that event is available from the download.rockbox.org mirrors. Set your video player to stream from there unless you really want to download that entire thing!

Rockbox at NYLUG presentation video

SanDisk: “our sound fidelity isn’t perfect”

Some owners of the Sansa Clip player from SanDisk noticed that it plays back all songs with a slight pitch error. Due to a flawed hardware setup it doesn’t run at a proper 44,100 Hz but instead at 43,791 Hz (0.993 times the target value) or something like that. According to some sources this problem might be fixed in the Sansa Fuze players now.
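
To put that number in perspective, here is my own quick back-of-the-envelope calculation (not from SanDisk or the Rockbox project) of how far off that clock is in musical terms, as a small C snippet:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
      const double target = 44100.0;  /* the sample rate the songs expect */
      const double actual = 43791.0;  /* the rate the flawed hardware reportedly runs at */

      /* ratio of actual to intended playback speed */
      double ratio = actual / target;

      /* pitch deviation in cents (100 cents = one semitone) */
      double cents = 1200.0 * log2(ratio);

      printf("speed ratio: %.4f, pitch deviation: %.1f cents\n", ratio, cents);
      /* prints roughly 0.9930 and -12.2 cents, about an eighth of a semitone flat */
      return 0;
    }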

Bugs aren’t really so surprising; perhaps what is surprising is that this bug has been around for almost two years now. To make matters worse, SanDisk has now decided that since they make cheap players, people shouldn’t expect them to be very good sound-quality-wise and therefore they simply won’t fix the problem:

due to trade-off decisions that were made in engineering these products to deliver superior consumer value at what we believe are extremely attractive price points, our sound fidelity isn’t perfect. We have re-evaluated the possibility of reducing the pitch variation and due to the engineering trade-offs the decision was made to stay with the current design. Very few listeners, however, have noticed or complained about it as an issue in actual practice.  For those who can detect sound differences with their naked ears during actual use and not via frequency analysis, our products may not be the best choice for them.

Clip owners out there now pin their hopes even more on Rockbox for the Clip.


Autotool alternatives

Lots of people whine and complain about the set of build tools we collectively refer to as ‘autotools’. That term tends to include autoconf, libtool and automake.

I think a certain amount of criticism is warranted against this family of aged tools: they are unix-centric, they have cryptic ways to control them (I think there’s a reason m4 macros are not widely used…) and they are several independent tools with a tricky mix of cross-breeding.

The upsides include them being well tested and fairly well known, there’s a wide range of existing tests written for them, they work fine when cross-compiling and they support out-of-source-tree builds just fine.

But what about the alternatives?

I spend time in projects where the discussion of ditching autoconf comes up every once in a while, as surely as the sun will rise tomorrow. The discussion is always that tool Z is much better and easier to deal with and that everything gets shiny if we just switch. That Z has been a number of different tools over time; today the candidates include CMake, scons, waf or cDetect.

The problem, as I always see it, and why I almost always argue against Z, is that autoconf is old, trusty and proven, and I know it. The Z tool is often much newer and less proven, and fewer people involved in the project know Z, use Z or know how to customize it (since new tests will be needed and some existing tests will need to be changed etc). So even though Z is sometimes accepted as a testing ground in my projects, a year or two after Z was accepted – unless I myself have embraced it and joined its efforts – Z has lagged behind to the point where it isn’t good anymore, since I don’t know it and most people are rather fixing the traditional autoconf stuff. So we remove the Z support again.

But if we would never accept new tools we would never evolve, and yes indeed autoconf and friends have their share of flaws.

The question is of course when to switch – for what kind of project, in what development state and so on – and which alternative is useful for a particular project. Me being a developer who primarily works with plain C, low-level code and libraries, I will no doubt have a different view than those who use other languages, do more “apps” or perhaps even GUI programming…

Can you help me find good build system comparisons and overviews? I’ve tried to find good comparisons but I failed. Just about all of them are written by the authors of one of these tools.

My ambition is to create some sort of comparison document myself. I think the comparison could include autotools, cmake, waf, scons, cdetect, qmake and ant. Any more?
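
To give a flavour of what such a comparison would cover, here is the same trivial check – does the system have unistd.h? – expressed for autoconf and for CMake. This is only a minimal sketch of the idiom in each tool, not a complete build setup:

    # configure.ac - autoconf version of the check
    AC_INIT([demo], [1.0])
    AC_PROG_CC
    AC_CHECK_HEADERS([unistd.h])      # defines HAVE_UNISTD_H
    AC_CONFIG_HEADERS([config.h])
    AC_OUTPUT

    # CMakeLists.txt - CMake version of the same check
    cmake_minimum_required(VERSION 2.6)
    project(demo C)
    include(CheckIncludeFile)
    check_include_file("unistd.h" HAVE_UNISTD_H)

The checks themselves look similar enough; the interesting differences show up in how you write custom tests, how cross-compiling is handled and how much of the existing body of tests you can reuse.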

(I got triggered to write this blog post after my post to the trio mailing list on this topic.)

libcurl in version management

I’ve mentioned before that libcurl is becoming popular within package management.

libcurl is a generic library for file transfers over a wide variety of protocols. Over the years, some of the recent distributed version management software projects have learned about libcurl’s powers and they now use it:

darcs – was born in 2003 and is written in Haskell. I’m under the impression these guys wrote their own binding layer to interface libcurl from Haskell.

git – possibly best known for being created by Linus Torvalds and being used by the Linux kernel project, is using libcurl for HTTP(S) accesses.

bazaar – is written in Python and accordingly uses the pycurl binding for libcurl.
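
To give a feel for what these tools get out of libcurl, here is a minimal sketch in C of a single download using the easy interface. The URL is made up purely for illustration; real version control clients set many more options (authentication, custom headers, data callbacks and so on):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURLcode res;
      CURL *curl;

      curl_global_init(CURL_GLOBAL_ALL);
      curl = curl_easy_init();
      if(curl) {
        /* hypothetical URL, only for illustration */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/repo/info/refs");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        /* without a write callback, the response body goes to stdout */
        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
          fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }

A binding such as pycurl exposes essentially the same options to Python, which is how bazaar (and Mercurial, see the update below) gets to libcurl.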

Anyone know of other version control systems using libcurl?

Ironies here include that libcurl itself is still kept in a CVS repository, and also quite possibly that the first version management project I myself participated in was Subversion, which not only has two different HTTP dependencies, but neither of them is libcurl (they are neon and serf)…

Update: it seems that Mercurial is also using pycurl as an optional dependency.

Firmware Hackers!

I fell over this job ad at LinkedIn that sounds like a perfect match for some of our existing Rockbox hackers:

Job Description

In this position, the individual will work on the common back-end architecture common for the SanDisk product line. The individual will develop the programming specifications, will explore alternate designs, and program, debug and deliver completed firmware. The individual should be fluent in programming in C language, familiar with assembly and Python scripting language programming and should have experience with hard drive or flash device firmware development. The individual will analyze, design, program, debug, modify software and troubleshoot code for firmware (IC embedded code) applications. Work often involves analog and digital hardware and software operating systems.

So send in your resume. And once you get the job, don’t be shy and let’s get Rockbox on some more SanDisk devices! 😉

Why top-posting annoys me

This is hardly news to anyone who cares, and those who should care the most either don’t understand what top-posting is in the first place or aren’t aware that people like me think top-posting is an evil disease we need to extinguish.

My primary reason to hate top-posting is that it is fast and easy for the single user who writes the mail reply, but it creates more work for the large number of people who read it. When someone posts to a mailing list, one should rather expect the single writer to put in a little extra effort to make the result readable for the masses who will read it.

Top-posting also most often comes with the habit of including the entire previous conversation, quoted, below the reply.

A sensible posting and quoting ethic is to only quote as much as you need from the previous conversation to make your point clear, and to respond in a way that makes it clear which parts of the quotes you are referring to. That more or less implies doing “interlaced” or “inlined” posting, where you show a few lines of quotes and then a few lines of comments, over and over until the end of the mail.
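
For the avoidance of doubt, this is roughly what I mean by inlined posting (the example conversation is made up):

    > Should we switch the project from CVS to git?

    I think so, yes. The history conversion is a one-time cost.

    > And who would handle the server migration?

    I can volunteer for that part.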

Doing bottom-posting but keeping the entire thing quoted above the new text you add is almost as bad as top-posting. You remove the focus from what you write by providing far too much irrelevant text. Remove the irrelevant parts!

These days large portions of the modern world use broadband connections, so the actual size of the mail is not a concern for bandwidth or speed reasons, but you probably still want the receivers to focus on your actual point. Also, a lot of mails these days end up in web archives or similar, where they are searchable by internet search engines and browsable by future readers, and then you want the mail to be on topic even more, to be relevant rather than misleading to searches.

In case it isn’t obvious: this of course primarily concerns mails sent to (largish) mailing lists.

How much for a bug?

Warning: blog post with no clear conclusion!

I offer support deals to companies that want to get help with Open Source programs I’ve contributed to. The deals I’ve made so far have primarily involved libcurl, c-ares or libssh2, but that’s basically because those are projects I participate a lot in (and maintain), so people find me easily in relation to those projects.

I wouldn’t mind accepting service and support deals for other projects or software products either, as long as they are products I know and am fairly familiar with already and I am not scared of digging in and fixing things under the hood when that is required.

In fact, I could very well consider offering to fix bugs in any Open Source software. As a general offer: if you have a bug in an open source project that you really want fixed and you can’t do it yourself, I might be your man. Of course this would be limited to certain kinds of projects and programs, but it could still include a wide range of software. A lot more than the ones I happen to be involved in at any particular point in time.

But while “a bug” is a fairly easily defined term to a user who can’t make something work in a given program, for a developer fixing it can be anything from dead simple to downright impossible. The fact that users often cannot determine whether a “bug” is hard or easy, or whether it’s a bug at all or a feature not working that way on purpose, makes such a business deal very hard to offer.

How to pay to get a bug fixed?

Fixed price per bug? Presumably only tricky bugs would be considered for this, so it would require a fairly high fixed price. But then it would never be used for simple bugs either, since the fixed price would scare away such use cases. I don’t think a fixed-price scheme works very well for this.

Then we only have a variable price approach left. A common way for a consultant like me is to charge for my time spent on a task: I set an hourly rate, I fix the issue in N hours, I charge hourly rate * N. For smallish projects, this is less attractive to customers. If we have no previous relationship, there’s a trust issue: the customer might not just blindly accept that I worked 10 hours on a task they think sounds easy, and would feel overcharged. Also, there’s the risk that I estimate the job to be 2 hours but end up spending 12. My conclusion is that per-hour pricing doesn’t work well for this either.

A variable price approach based on something other than the number of hours it took me to fix the problem is therefore needed.

A bug fix is of course worth whatever someone is willing to pay for it. But we don’t know what they are prepared to pay. On the other end, a bug fix can get done by someone for whatever price he or she is willing to accept to do the job. So where do those two unknown curves intersect?

I don’t have the answer here. I’m very interested in feedback and suggestions though. If you would pay for a bug fix, how would you like to get the price set?

Conversing through the Internet with cURL and libcurl

I fell over a really nice and friendly introductory article on curl and libcurl, written by M. Tim Jones, on IBM’s developerWorks site.

I must confess I greatly enjoyed his image showing the network layers and how curl/libcurl fits into the general picture:

curl layers

While one can of course argue that there is no ‘socket layer’ (as sockets are a pure API), I still think pictures like this serve a good purpose in helping to explain how things interconnect. I personally really suck at visualizing things with images, so I’m happy when I see nice work like this that I can borrow ideas from!

Making better advisories

A while ago yet another security flaw was discovered in curl (actually the tenth flaw in more than eleven years) by Scott Cantor. He reported it privately to us. We worked on the issue and in the end I posted an official cURL project security advisory about it. It wasn’t anything out of the ordinary really. Scott did great and we fixed the problem rather promptly and in coordination with vendor-sec etc.

After a security advisory and the accompanying release, a particular thing always happens. It’s been the same every time I’ve done this and there’s really no surprise: one by one, the different Linux distros and similar parties start to ship their own security advisories and alerts about the same problem, offering their upgrade paths for their users to get a corrected version of the package.

But I’ll tell you why those advisories tend to make me really sad. It’s not because of the flaws they fix or how fast or slow they are to appear. It’s entirely due to their contents, or perhaps in many cases the lack of contents.

When the first distro-based advisory comes out, it often doesn’t use the phrasing from the original advisory (which we’ve crafted for weeks and coordinated with vendor-sec) but instead offers its own interpretation. This isn’t necessarily a bad thing, but when people simplify matters they tend to blur out the actual cause and make the real issue hard to understand. Not to mention that when the first one makes a mistake, most others just repeat it without thinking.

Credit to the doers

The craft of hunting down security problems in software, and the art of then creating a fix for the problem, is very time consuming and takes a fair amount of skill and patience. Yet some people do this. Some of them even track down problems in open source code bases and tell the projects about the issues, to give them a chance to fix them before they’re made public.

Those people are good people that we need.

In the open source world, and in fact in a lot of other places too, just about the only reward we can offer people who do outstanding work like this is attribution. Give credit where credit is due. Mention the person who did the job!

Distro advisories are not good

Very often the subsequent advisories go the lazy route and borrow their explanation from another distro’s advisory, still not using the original explanation. They like short and not too detailed explanations. Factual errors don’t seem to be too important.

Very few distro advisories give any credit to the person who originally found the error. The one thing we can offer as payment is thus neglected, and this is more of an established practice than a mistake: all distros do this. At best they refer to a CVE number for the flaw, but CVE numbers have the great disadvantage that they very rarely reveal any particular details about the flaw until a long time after the advisory is made.

Not only do they often fail to credit the originator, they also rarely link back to the original advisory or even the advisory the originator sent out (sometimes they’re sent out independently), so getting the full description from the actual upstream project is harder than it has to be. They do, however, generally link to their own site, using their own issue number for the security problem. If things are good, you can find references to the original in the web page they link to. I’ve also seen several distro advisories that don’t mention at all what patches they’ve applied or what particular upstream changeset they’ve backported.

In this latest curl advisory, the commonly repeated mistake was saying that the certificate flaw concerned the Common Name field (implying that it was only about that field), which is wrong, and which is why the original advisory didn’t explicitly mention that field. The flaw also affects the subjectAltName field, and that is at least as important, if not more so, to address for this particular flaw. The flaw also only concerned curl built to use OpenSSL, a fact that often wasn’t mentioned at all.
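
To show why the Common Name alone isn’t the whole story: a certificate can carry any number of host names in its subjectAltName extension, and a proper name check must look at those too. Here is a rough sketch, not curl’s actual code, of how a program using OpenSSL can read the dNSName entries:

    #include <stdio.h>
    #include <openssl/x509v3.h>

    /* print every dNSName found in the cert's subjectAltName extension */
    static void print_dns_names(X509 *cert)
    {
      GENERAL_NAMES *sans = X509_get_ext_d2i(cert, NID_subject_alt_name, NULL, NULL);
      int i;
      if(!sans)
        return; /* no subjectAltName extension present */

      for(i = 0; i < sk_GENERAL_NAME_num(sans); i++) {
        const GENERAL_NAME *name = sk_GENERAL_NAME_value(sans, i);
        if(name->type == GEN_DNS) {
          /* never trust NUL termination in ASN.1 strings - use the length */
          int len = ASN1_STRING_length(name->d.dNSName);
          const unsigned char *data = ASN1_STRING_data(name->d.dNSName);
          printf("dNSName: %.*s\n", len, data);
        }
      }
      GENERAL_NAMES_free(sans);
    }

An actual verification would then compare each of these names, including wildcard handling, against the host being contacted, and only fall back to the Common Name when no dNSName entries are present.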

What I suggest!

That every vendor and Linux distro that ships security advisories do this:

  1. credit the original problem finder/researcher. This way the glory and fame go to the person(s) who often did a lot of research and hard work.
  2. link to the original advisory so that everyone who wants to can get the info and details from the upstream project, along with their view of what the problems are and what the best fixes or work-arounds might be
  3. fact-check your error/solution description better and don’t just repeat what someone else wrote unless you know it’s an accurate description
  4. don’t repeat others’ simplifications and errors. Duplicating someone else’s description is pretty lazy in general and often only signals that you haven’t understood the issue in the first place. If you make it a rule not to copy others, you won’t risk copying their mistakes.
