Category Archives: Development

Autotool alternatives

Lots of people whine and complain about the set of build tools we often refer to collectively as ‘autotools’. That term tends to include autoconf, libtool and automake.

I think a certain amount of criticism is warranted against this family of aged tools: they are unix-centric, they have cryptic ways to control them (I think there’s a reason m4 macros are not widely used…) and they are several independent tools with a tricky mix of cross-breeding.

The upsides include them being well tested and fairly well known, there’s a wide range of existing tests written for them, they work fine when cross-compiling and they support building out of the source tree just fine.

But what about the alternatives?

I spend time in projects where the discussion of ditching autoconf comes up every once in a while, as surely as the sun will rise tomorrow. The discussion is always that tool Z is much better and easier to deal with, and that everything gets shiny if we just switch. That Z can be any of a number of tools available today, including CMake, scons, waf or cDetect.

The problem as I always see it, and why I almost always argue against Z, is that autoconf is old, trusty, proven and I know it. The Z tool is often much newer and less proven, and fewer people involved in the project know Z, use Z or know how to customize it (since new tests will be needed and some tests will need to be changed etc). So even though Z is sometimes accepted as a testing ground in my projects, a year or two after Z was accepted – unless I myself have embraced it and joined its efforts – Z has lagged behind to the point where it isn’t good anymore, since I don’t know it and most people are rather fixing the traditional autoconf stuff. So we rip out the Z support again.

But if we would never accept new tools we would never evolve, and yes indeed autoconf and friends have their share of flaws.

The question is of course when to switch – what kind of project, in what development state etc – and which alternative is useful for a particular project. Me being a developer primarily working with plain C, mostly on low-level code and libraries, I will no doubt have a different view than those who use other languages, do more “apps” or perhaps even GUI programming…

Can you help me point out good build system comparisons and overviews? I’ve tried to find good comparisons but I failed. Just about all of them are written by the authors of one of these tools.

My ambition is to create some sort of comparison document myself. I think the comparison could include autotools, cmake, waf, scons, cdetect, qmake and ant. Any more?

(I got triggered to write this blog post after my post to the trio mailing list on this topic.)

How much for a bug?

Warning: blog post with no clear conclusion!

I offer support deals to companies that want help with Open Source programs I’ve contributed to. The deals I’ve made so far have primarily involved libcurl, c-ares or libssh2, but that’s basically because those are projects in which I participate a lot (and maintain), so people find me easily in relation to those projects.

I wouldn’t mind accepting service and support deals for other projects or software products either, as long as they are products I know and am fairly familiar with already and I am not scared of digging in and fixing things under the hood when that is required.

In fact, I could very well consider offering to fix bugs in any Open Source software. Like a general offer: if you have a bug in an open source project that you really want fixed and you can’t do it yourself, I might be your man. Of course this would be limited to certain kinds of projects and programs, but it could still include a wide range of software – a lot more than the ones I happen to be involved in at any particular point in time.

But while “a bug” is a fairly easily defined term to a user who can’t make something work in a given program, it can be anything from dead simple to downright impossible for a developer to fix. The fact that users often cannot determine whether a “bug” is hard or easy, or whether it’s a bug at all rather than something not working on purpose, makes such a business deal very hard to provide.

How to pay to get a bug fixed?

Fixed price per bug? Presumably only tricky bugs would be considered for this, so it would require a fairly high fixed price. But then it would never be used for simple bugs either, since the fixed price would scare away such use cases. I don’t think a fixed-price scheme works very well for this.

Then we only have a variable price approach left. A common way for a consultant like me is to charge for my time spent on a project: I set an hourly rate, I fix the issue in N hours. I charge hourly rate * N. For smallish projects, this is less attractive to customers. If we have no previous relationship, there’s a trust issue where the customer might not just blindly accept that I worked 10 hours on a task they think sounds easy so they feel overcharged. Also, there’s the risk that I estimate the job to be 2 hours but end up spending 12. My conclusion is that per-hour pricing doesn’t work for this either.

A variable price approach based on something other than the number of hours it took me to fix the problem is therefore needed.

A bug fix is of course worth whatever someone is willing to pay for it. But we don’t know what they are prepared to pay. On the other end, a bug fix can get done by someone for the price he/she is willing to accept to get the job done. So where do those two unknown curves intersect?

I don’t have the answer here. I’m very interested in feedback and suggestions though. If you would pay for a bug fix, how would you like to get the price set?

Going full-time Haxx

I realize not a lot of you who read my site or blog are aware of my actual real-world day-job situation (nor should you have to care), but I still want to let you guys know that I’m ending my employment at CAG Contactor, and my intention is to find my way forward with my own company, Haxx AB, as employee number 1.

Haxx has existed for over ten years already, but we’ve so far only used it for stuff on the side that was neither full-time nor competing with our day-jobs. Starting in October, I’ll instead work only for and with Haxx.

I don’t expect my actual day-to-day business to change much, as I intend to continue as a contract developer / consultant / hacker doing embedded, Linux, open source and network development as an expert and senior engineer.

So if you want my help, you can continue to contact me the same way as before, and I can offer my services like before! 😉 The only difference is in my end where I get more freedom and control.

This move on my part will affect some of you indirectly: I will move a lot of web and other internet-based services from servers owned and run by Contactor to servers owned by Haxx. So expect a lot of my sites and content to suffer some uptime glitches in the upcoming month, in my struggle to get things up in the new place(s).

My IETF 75 Story

A while ago, like a couple of years ago, I joined the mailing list for the HTTPbis effort. That’s the IETF working group producing an update to the HTTP 1.1 spec RFC2616 that clarifies it and removes things that nobody does or that don’t work. That spec is huge, and the number of conflicting statements or just generally muddy expressions is large. This work is not near completion yet.

So when I learned the IETF 75 meeting was to be held here in Stockholm, I of course cheered at the opportunity to join in the talks and meet the people here.

The IETF is basically just an informal bunch of people who like internet protocols, “above the wire and below the application”. The goal of the IETF is to make the Internet work better. This is the organization behind the RFCs: they wrote the specs for TCP, FTP, SMTP, POP3, HTTP and a wide range of other protocols we all use non-stop these days. I’m actually quite surprised the IETF is so little known. When I mention the organization name, most people just give me a blank face that reveals it just isn’t known to very many “civilians”.

The IETF has meetings around the world two or three times per year, and every time some 1K-2K people join up for a week filled with working group meetings, talks and a lot of socializing and so on. The attitude is generally relaxed all over and very welcoming for newcomers (like me). Everyone can join any talk or discussion. There’s simply no dress code; everyone just wears whatever they think is comfortable.

The whole week was packed with scheduled talks and sessions, but I only spent roughly 1.5 days at the conference center, as I had to get some “real work” done as well, and quite honestly I didn’t find sessions that matched my interests every single day anyway.

After the scheduled days, and between sessions and sometimes instead of the scheduled stuff, a lot of social events took place. People meet in informal gatherings, talks, planning sessions and dinners. The organizers of this particular event also arranged two separate off-topic social events. As one of the old-timers I talked to put it: “this year was unusually productive, but just about all of that was done outside of the schedule”…

This time, the 75th IETF meeting was hosted by .se in Stockholm, Sweden at the end of July 2009. 1230 persons had signed up to come. The entrance fee was 650 USD unless you wanted to pay very late or at the door, in which case it was some 130 USD extra. Oh, and we got a t-shirt!

One amusing little detail: on the t-shirt we got there was a “free invite code” to the Spotify streaming music service. But when people at the IETF meeting tried to use it at the conference center, Spotify refused to accept the users claiming it doesn’t work in the US! Clearly they’re not using the most updated ip-geography database in the world! 😉

So let me quickly mention a few of the topics I caught and found interesting, some of which I’ve blogged about separately.

HTTP over SCTP

The guys in the SCTP team work on this draft on how to do HTTP the best possible way over SCTP. They cite tests that claim “web browsing” becomes a better experience when done over SCTP. I’m personally quite interested in this work and in SCTP in general, and I hope to be able to play with making libcurl support this in a not too distant future.

How to select TCP or SCTP

Assuming that SCTP gets widespread adoption and even browsers and browser-like apps start supporting it, how should clients figure out which transport mechanism to use? The problem is similar to the selection between IPv4 and IPv6, but for the IP choice we can at least use A and AAAA lookups. The suggestions that were presented basically argued for trying all combinations in parallel and going with the fastest to respond, but a few bright minds questioned the smartness of that, as it scales very badly if more transport options are added and it potentially introduces a lot more (start-up) traffic.
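
To make the debated approach concrete, here is a minimal sketch of the “try them all in parallel, keep the fastest” idea, assuming plain non-blocking TCP connects over whatever addresses getaddrinfo() returns. A real client would add SCTP candidates (IPPROTO_SCTP) where the platform offers them; the function name, timeout and 16-candidate cap are mine, purely for illustration.

    /* hedged sketch: start a non-blocking connect() on every candidate
       address and keep the first one that completes; close the rest */
    #include <sys/socket.h>
    #include <netdb.h>
    #include <poll.h>
    #include <fcntl.h>
    #include <unistd.h>

    int connect_fastest(const char *host, const char *port)
    {
      struct addrinfo hints = {0}, *res, *ai;
      struct pollfd pfd[16];
      int nfds = 0, winner = -1, i;

      hints.ai_socktype = SOCK_STREAM;
      if(getaddrinfo(host, port, &hints, &res))
        return -1;

      for(ai = res; ai && nfds < 16; ai = ai->ai_next) {
        int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if(fd < 0)
          continue;
        fcntl(fd, F_SETFL, O_NONBLOCK);
        connect(fd, ai->ai_addr, ai->ai_addrlen); /* EINPROGRESS expected */
        pfd[nfds].fd = fd;
        pfd[nfds].events = POLLOUT;
        nfds++;
      }
      freeaddrinfo(res);

      if(poll(pfd, nfds, 5000) <= 0) {  /* timeout or error: give up */
        for(i = 0; i < nfds; i++)
          close(pfd[i].fd);
        return -1;
      }
      for(i = 0; i < nfds; i++) {
        int err = 0;
        socklen_t len = sizeof(err);
        if(winner < 0 && (pfd[i].revents & POLLOUT) &&
           !getsockopt(pfd[i].fd, SOL_SOCKET, SO_ERROR, &err, &len) && !err)
          winner = pfd[i].fd;  /* fastest responder wins */
        else
          close(pfd[i].fd);    /* every other attempt is abandoned */
      }
      return winner;
    }

Note how every extra transport option adds another socket and another start-up packet exchange per candidate – which is exactly the scaling objection raised in the session.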

Getting HTTP from multiple mirrors

What the authors call Multiserver HTTP is a pretty small suggestion of a few additional HTTP headers that a server would be able to return to a client, hinting at other URIs where the exact same file/resource can be downloaded from. A client would then get that resource in parallel from multiple HTTP servers using range requests. Quite inspired by other technologies such as bittorrent and metalink. This first draft faced some criticism for not using existing HTTP as well as it could, but the general spirit has been welcoming.
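
Just to make the idea concrete, here is a hedged sketch of what the client end could do with libcurl’s multi interface once it has learned about a second mirror: fetch the two halves of one resource in parallel using range requests. The mirror URLs, the file size and the fixed 50/50 split are all made up, and error handling and easy-handle cleanup are omitted.

    #include <stdio.h>
    #include <sys/select.h>
    #include <curl/curl.h>

    /* add one ranged download of 'url' to the multi handle */
    static void add_part(CURLM *multi, const char *url, const char *range,
                         FILE *out)
    {
      CURL *h = curl_easy_init();
      curl_easy_setopt(h, CURLOPT_URL, url);
      curl_easy_setopt(h, CURLOPT_RANGE, range);   /* "from-to" in bytes */
      curl_easy_setopt(h, CURLOPT_WRITEDATA, out); /* default write callback */
      curl_multi_add_handle(multi, h);
    }

    int main(void)
    {
      CURLM *multi;
      int running = 1;
      FILE *a = fopen("part1", "wb");
      FILE *b = fopen("part2", "wb");

      curl_global_init(CURL_GLOBAL_ALL);
      multi = curl_multi_init();

      /* two hypothetical mirrors of the same 1000000 byte resource */
      add_part(multi, "http://mirror1.example.com/file", "0-499999", a);
      add_part(multi, "http://mirror2.example.com/file", "500000-999999", b);

      while(running) {
        fd_set r, w, e;
        int maxfd = -1;
        struct timeval tv = {0, 100000}; /* wake up at least every 100 ms */
        curl_multi_perform(multi, &running);
        FD_ZERO(&r); FD_ZERO(&w); FD_ZERO(&e);
        curl_multi_fdset(multi, &r, &w, &e, &maxfd);
        select(maxfd + 1, &r, &w, &e, &tv);
      }

      curl_multi_cleanup(multi);
      curl_global_cleanup();
      fclose(a);
      fclose(b);
      /* concatenating part1 and part2 yields the complete file */
      return 0;
    }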

The presenter of the idea, Mike Hanley, also mentioned the ability to let the server response include a wildcard hint, so that a server could for example say that “other images from this dir path can also be found under this dir path on this other server”. Thus a client would be able to download such images from multiple servers. This idea is not in that draft, and I’m not personally sure it fits as nicely.

HTTP-state wg

During the IETF week, it was announced that the HTTP-state working group is being formed. It didn’t actually happen at the meeting itself, but still… See my separate blog post on http-state.

Multipath TCP

The idea and concept behind MPTCP was new to me, but I quickly came to like the thought of getting this into network stacks around me. I hope this will grow up to become something fine! See my separate blog post on multipath tcp for all the details.

IRI

I visited the IRI BOF and got some fine insights on the troubles of creating the IRI spec. Without revealing too much, it’s quite clear sometimes that politics can be hard even in these surroundings…

tng – Transport Next Generation

What felt pretty “researchy” and still not really ready for adoption (or am I wrong?) is this effort they call Transport Next Generation. Their ideas include inserting a whole bunch of additional layers into the typical network stack, for example to move congestion handling into its own layer so it can be done on a per-network-segment basis instead of only end-to-end like today. Apparently they have tests and studies suggesting that the per-network-segment approach can improve traffic a lot. These days the first and last legs of network accesses are often wireless, while the core tends to still be physical cables.

DCCP

I found it interesting and amusing when they presented DCCP with all its bells and whistles, and then toward the end of the presentation it surfaced that they don’t really know what DCCP would be used for. At the moment the working group is pretty much done, but there’s just nobody using the protocol…

HTTPbis

HTTPbis is more or less my “home” in the IETF. We had a meeting in which things were discussed, some decisions were made and some new topics were raised. RFC2616 is a monster of a spec: it contains so many details, so many potentially conflicting statements, and so many sections that very many implementers have clearly interpreted differently, that doing these clarifications is next to endless work. I figure the work will simply be deemed “done” one day, and the remaining confusions will then just be left. The good part is that the new document should at least be heaps better than the former. It will certainly benefit future and existing HTTP implementers nonetheless.

And a bunch of the HTTPbis guys got together a bit outside of the meeting as well, so we got to talk quite a bit and topped off the evening with a dinner…

OpenDNSSec

There was quite a lot of DNSSEC talk during the week, and it annoys me that I double-booked the evening they had their OpenDNSSEC release (or was it tech preview?) party, so I couldn’t go there and take advantage of my two free beers!

Observations

Apple laptops. A crushing majority of the people seemed to have Apple branded laptops, and nearly all presentations I saw were done with Apples.

Not too surprisingly, the male to female ratio was very, very high. I would guess 20:1 to 30:1, at least in those surroundings where I spent my time.

Upcoming Meetings

This week was lots of fun – more fun than I have had at a conference in a very long time. I’ll definitely consider going to some upcoming meetings, although the next one in Japan in November doesn’t fit my schedule. Possibly Anaheim in March 2010, and even more likely the 78th meeting in Maastricht in July 2010.

Thanks everyone who was there. Thanks to the hosts for a great event. It was a blast!

Multipath TCP

During the IETF 75 meeting in Stockholm, there was this multipath tcp BOF (a “start-up meeting” of sorts) on Thursday morning that I visited.

Multipath TCP (shortened to MPTCP at times) is basically an idea to make everything look like TCP for both end points, but allow for additional TCP paths to get added and allow packets to get routed over any of the added flows to overcome congestion and to select the routes where it flows “best”. The socket API would remain unmodified in both ends. The individual TCP paths would all look and work like regular TCP streams for the rest of the network. It is basically a way to introduce these new fancy features without breaking compatibility. Of course a big point of that is to keep functionality over NATs or other middle-boxes. (See full description.)

The guys holding the BOF had already presented a fairly detailed draft of how it could be designed, both one-ended and with multiple addresses, and could also boast an already written implementation that was even demoed live in front of the audience.

The term ‘path’ is basically used for a pair of address+port sets. I would personally rather call it “flow” or “stream” or something, as we cannot really control that the paths stay separate from each other; that is entirely in the hands of those who route the IP packets to the destination.
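
As a tiny hedged illustration of why I say that (the struct name is mine): a “path” is identified purely by its endpoints, while the actual route between them is out of our hands.

    #include <sys/socket.h>

    /* one MPTCP "path" boils down to a local and a remote address+port
       pair; the route the packets actually take between the two is
       whatever the network decides, which is why "flow" feels like the
       more honest term to me */
    struct subflow {
      struct sockaddr_storage local;   /* local address and port */
      struct sockaddr_storage remote;  /* remote address and port */
    };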

They stressed that their goals here included:

  • perform no worse than TCP would on the best of the single TCP paths
  • be no harder on the network than a single TCP flow would be, not even for single bottlenecks (network and bottleneck fairness)
  • allow resource pooling over multiple TCP paths

A perfect use-case for this is hosts with multiple interfaces. Like a mobile phone with 3G and wifi: it could have a single TCP connection using paths over both interfaces, and it could even change paths along the way, when you move and hand over to new wifi access-points or when you plug in your Ethernet cable or whatever. Kind of like a solution to the mobile IP roaming concept that was never made to actually work in the past.

The multipath tcp mailing list is already quite active, and it didn’t take long until possible flaws in the backwards compatibility were discovered and discussed. For example: if you use TCP to verify that a particular link is alive, MPTCP may in fact break that, as the proposal is currently written.

What struck me as an interesting side-effect of this concept is that, if implemented, it will spread packets from the same original stream further from each other and possibly make snooping on plain-text TCP traffic harder. Like in the case where you monitor traffic going through a router or similar.

My HTC Magic Review

This is the first “smartphone” I’ve owned myself, so of course I have nothing else this fancy to compare against. I’ve played around with others’ a few times, but that doesn’t really count. I’ve owned perhaps 8 mobile phones since I got my first one in 1996, and they have all been Nokias and Sony Ericssons.

I was never really interested in the iPhone, for many reasons. It is not open. It has a (very) restricted app distribution mechanism. It forbids apps from running simultaneously, etc. And it is pretty strongly tied to itunes, with no proper mass-storage syncing supported. But I admit that it has a slick UI and many cool apps.

My plan is to get some Android hacking going eventually, and this is basically the first Android phone that has reached Swedish soil. I mean without requiring me to bend over backwards to get it, as I’m sure I could’ve bought previous Android phones from abroad if I really wanted to.

Random good things:

  • it’s fast: most things run faster than on my previous Sony Ericsson thing, and yet this is way more advanced, with much bigger screen real estate and a fancier UI
  • it has a nice GUI that you can mostly guess your way around
  • I love being able to use a qwerty-style keyboard when messaging instead of relying on T9 etc
  • wifi is fun, but with a decent data plan it basically only brings me slightly improved speed and I often can’t even tell the difference!
  • the integration with the Google services is nice, gmail and maps most noticeably
  • there really are a bunch of existing cool apps (I know iphone has lots more, but there are still thousands)
  • it has a much better approach to messaging, similar to what I’ve seen in the iphone, than I’ve ever experienced in a Nokia or Sony Ericsson. It focuses on conversations and keeps the “thread”.
  • I really really like the feeling of it being a networked thing that also can make phone calls. I can browse, use maps, use gmail just as easily as I can message or call people. With my previous phones all the internet-related services always felt tacked on like a very late afterthought.
  • The notification system is nice, and the three-screen wide “home” with its widget-system is really neat.

Bad stuff:

  • I’ve had some apps crash on me on occasion. But it’s rarely a problem as they’re restarted automatically for me.
  • Toggling wifi on/off a lot can sometimes lead to me not getting any data network at all, and I’ve had to reboot the phone to get back to phone-based (Edge/3G) data.

On-screen keyboard

Of course any and all geek friends I have ask me how I deal with the on-screen keyboard. I must admit I’m still quite fond of it. Mostly because a physical keyboard makes the phone clunky and adds physical constraints and wear-points that I don’t like. The keyboard is a bit small, especially when the phone is in portrait mode, but the suggested completions are fine and I believe I’m already typing pretty quickly on the thing. When I ssh’ed from the phone to one of my servers I did notice the obvious lack of cursor keys (to, for example, navigate an ordinary ncurses-based app or the command line history of a bash prompt), but other than that I really can’t complain.

Background Applications

One obvious advantage compared to iphones is of course the ability to run applications exactly the way I’d like. I can actually run the irc client and then have it in the background while I go browse the web or answer a call or whatever, and then at my choice go back to the still connected irc client. In fact, when playing with this, the iphone restriction feels really ridiculous.

Comparing to my SE w550i

My previous phone weighs 94 grams compared to the Magic’s 116. The Magic has a much bigger screen. The Magic is roughly 11 mm wider and 14 mm taller, which makes it use 30% more volume (85 cm³), but it still fits fine in the front pocket of any pants I use. The Magic claims a much longer battery life, but given that it has so much functionality that I can’t help playing with all the time, I doubt I’ll notice. It’ll more likely run down fast simply because I’ll use it more.

I’m also pleased that there’s no problem just plugging the Magic into my Linux desktop and copying/syncing the photos and videos etc.

Google Integration

I realize some people will feel that the very tight integration with Google and Google’s services is a downside as it adds just another item that Google “owns” in your life. Still, it makes the experience very slick and as a user I get a lot of stuff “for free” as it just connects to lots of things that I already used and had accounts on. So gmail, sharing photos on picasaweb etc “just works”.

Decrypting ipods

Recently we’ve seen progress by the linux4nano guys in their quest to get custom code to run on an Ipod Nano 2nd generation. They’ve apparently managed to extract the bootrom off a 2nd gen ipod nano (my copy of their extracted data is here – a reminder on objdump usage: “arm-elf-objdump -D --target binary -marm [file]”). I believe their intent is to port Linux to the newer ipods, possibly ipodlinux. They do mention providing the necessary info to Rockbox, and yes, we will welcome it.

A large crowd of Rockbox hackers have joined their IRC channel and have been hanging out with them, helping out, discussing ideas and pushing them towards publishing news and info on how this all is accomplished. Their SVN repo hosts some (most?) of the tools made so far.

The Rockbox wiki page for nano2g has been updated and hopefully it will keep track of what happens.

There have been speculations, though I don’t yet know based on what facts, that these recent news and hacks will be usable on other recent (encrypted) ipod models.

Summary: very interesting progress has been made. Lots of it is still left to figure out. There seems to be a bunch of skilled people around, and now we’re seeing information and documentation for this getting published, so I can’t help but hope for a bright future!

Concepts of a new distributed build

It was time to make an overhaul of our distributed build system for Rockbox. The one currently in place is quite fancy and it does 106 builds in around 7-8 minutes, but during the years it has served us we have found a few areas where we want to improve.

The goals for the new system were primarily:

  • do all the builds faster
  • reverse the connection so that people can contribute clients easier
  • make a system that better allows slower machines to contribute

The biggest weaknesses of the existing system:

  • The master uses ssh to reach the distributed clients, which forces them to have an accessible ssh server and port etc. It also makes it awkward for people behind NATs who want to run more clients.
  • It only hands out a particular build to one client, so if a large build happens to get handed to a slow client towards the end of a build round, all the other clients will sit idle waiting for the last client to finish.
  • The build and the subsequent upload of results to the master are synchronous, so a client with a very slow uplink may spend a significant time on the upload before it can start the next build.

The new system is currently in development. It consists of a server that runs on one of our main servers, and a client script that each volunteer contributor runs on their system.

The clients connect to the master on a dedicated TCP port, specifying user name, password, the name of the particular client instance, which architectures the client can build and how many bogomips the client boasts. While bogomips is a bogus way to measure anything, we’ve started out using it as a rough way to sort the build clients by speed.

The clients stay connected to the server all the time. There’s a ping message from the master after every N seconds of idleness to make sure the connection is kept alive. As soon as the master wants the client to do a build, it sends a message detailing exactly how to build it and from which SVN revision. The client then does the build at once, uploads the results to a dedicated place using HTTP and tells the server the build is complete.
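
For the curious, here is a rough sketch of what the client end of that exchange could look like. The line-based HELLO/PING/BUILD/CANCEL wire format, the host name and the port are all invented for illustration; the real client script speaks its own protocol.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void)
    {
      struct addrinfo hints = {0}, *res;
      char buf[512];
      int fd;

      hints.ai_socktype = SOCK_STREAM;
      if(getaddrinfo("build.example.org", "19999", &hints, &res))
        return 1;
      fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
      if(fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen))
        return 1;
      freeaddrinfo(res);

      /* handshake: user, password, client name, archs, bogomips, version */
      const char *hello = "HELLO user pass mybox arm,sh,m68k 4242 17\n";
      write(fd, hello, strlen(hello));

      for(;;) {
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if(n <= 0)
          break;                        /* server went away */
        buf[n] = 0;
        if(!strncmp(buf, "PING", 4))    /* keepalive from the master */
          write(fd, "PONG\n", 5);
        else if(!strncmp(buf, "BUILD", 5)) {
          /* build the given target at the given SVN revision, upload
             the result over HTTP, then report back (all omitted here) */
          write(fd, "COMPLETED\n", 10);
        }
        else if(!strncmp(buf, "CANCEL", 6))
          ; /* another client finished this build first: drop it */
      }
      close(fd);
      return 0;
    }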

The server knows about all builds to do at a commit, what we call a build round. It has a rough “score” or “weight” for each build that grades them in slow-to-fast order. When a build round starts, the server first sorts all builds on the number of times they’ve been handed out, with the “weight” as the secondary sort key. It then loops over the currently connected build clients and hands out builds from the sorted build table, and continues doing so until all clients have three builds each. As soon as a client reports a build as completed, that client gets the next build from the sorted build list.
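
The hand-out ordering itself is simple enough to show as code. A sketch of the comparison just described, with invented struct fields:

    #include <stdlib.h>

    struct build {
      const char *name;
      int handed_out; /* times handed to a client this round */
      int weight;     /* rough score, higher = slower build */
    };

    /* least-handed-out first; among equals, heaviest build first so the
       slow ones don't end up last in the round */
    static int by_urgency(const void *a, const void *b)
    {
      const struct build *x = a, *y = b;
      if(x->handed_out != y->handed_out)
        return x->handed_out - y->handed_out;
      return y->weight - x->weight;
    }

    void sort_builds(struct build *table, size_t num)
    {
      qsort(table, num, sizeof(*table), by_urgency);
    }

Sorting on the hand-out count first is what makes duplicate assignments kick in only after every build has been given out at least once.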

If a client connects to the server and the server deems the client too old (it specifies its version in the handshake message), it will be told to update to a specific version and come back. This way the server can update all build clients when important things are fixed.

The clients will soon start to get assigned builds that have already been assigned to another client. This is not a problem; in fact it is our intention. The client that completes the build first simply tells the server, and the server then tells all the other clients working on that same build to cancel it.

A client that joins the server in the middle of a build round simply gets a bunch of builds immediately and joins in. A client that disconnects during a build round simply won’t complete its builds, and other clients will do them instead. The system is also tolerant of the facts that bogomips is a lame way to compare computers, that the build “score” may not be very accurate, and that some clients will have very slow or very fast upload speeds at unpredictable times.

The build master itself does not know when to start a new build round. It simply knows about the concept and how to tell clients to complete a round. To make the master start a new round, you connect to the server’s listening port, issue a special command, provide a password and then tell the server to start a build of a specific SVN revision – or to queue up a build to be performed after the current one, if there happens to be one in progress already.

When a full build round is complete – a hundred or so builds have been done, and the full packages and log files are in a directory on the build server – the server simply triggers an external script that takes care of updating our build table etc. In fact, every single completed build can optionally trigger an external script, to allow web pages or stats pages to get updated as we go.

This build system is currently pretty Rockbox-specific, as this is the project and development system we’re writing it for, but there’s really nothing in it that must be this way. I’m sure that if someone (you?) wants to adapt this for another project, I’d be more than happy to assist and to help ensure that this becomes a more generic distributed build system. Just raise your hand and step forward!

At the time of this writing, (primarily) me and Björn are still ironing out quirks in this new system to hopefully get it going live real soon…


libssh2 vs libssh

There are only two open source libraries for SSH that I am aware of – at least ones that work at the fundamental layer and are written in C.

I researched the SSH library market years ago, when I settled on libssh2 as the one I thought was most promising, and since then I and others have taken it much further. The lib that I didn’t go with at that time, confusingly enough named libssh, recently came out with a new release.

Since there are now clearly two active open source SSH libraries, it feels like we should help our users and potential newcomers by explaining how our projects and libraries differ. As a little teaser: one of the libraries turned out more than twice as fast as the other in my test…

While I admit to not having actually used libssh for real, I’ve read the docs and I’ve tried it a little bit. My take on a comparison is now online at:

http://www.libssh2.org/libssh2-vs-libssh.html

I will highly appreciate your feedback and additional things that differ between the two! The list isn’t really much to boast about as it currently stands!

Dear Apple Inc

Dear Apple Inc,

As one of the primary authors of libcurl and curl, two components that have been included in every Mac OS X release for years, I was only wondering if you would consider sponsoring me with a Mac, to make it easier for me to do (lib)curl development, tuning and bug-fixing on/for the Mac?

I really don’t have any particular income from Macs, so I don’t see how I can personally justify spending some 2000 USD on a Mac only for curl. And to be honest, I can’t think of any other reason to get a Mac either!

I did look around Apple’s web site to find an email address of someone to send my plea to, but I failed. So I’ll just put it here. I have exactly no hope of actually accomplishing anything with this, other than putting some attention on how things are.

This post was triggered by recent libcurl bugs that seem to show up only on Mac!