
libbrotli is brotli in lib form

Brotli is this new cool compression algorithm that Firefox now supports in Content-Encoding, that Chrome soon will support too, and that Eric Lawrence wrote up a nice summary about.
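You can actually already poke at brotli Content-Encoding with curl today by setting the request header manually; you just get the compressed bytes back, since curl can’t decode them yet. A quick sketch, with example.com standing in for a brotli-capable server:

# ask for brotli; the response body ends up undecoded in page.br
curl -H 'Accept-Encoding: br' -o page.br https://example.com/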

So I’d love to see brotli supported as a Content-Encoding in curl too. Then we basically just have to write some conditional code to detect the brotli library, add the adaptation code for it, and we should be in a good position. But…

There is (was) no brotli library!

It turns out the brotli team just writes their code to be linked with their own tools, without making a library or making it easy to install and use for third party applications.

We can’t have it like that! I rolled up my imaginary sleeves (imaginary since my swag t-shirt doesn’t really have sleeves) and I now offer libbrotli to the world. It is just a bunch of files and a build system that sucks in the brotli upstream repo as a submodule and then builds a decoder library (brotlidec) and an encoder library (brotlienc) out of it. So there’s no code of our own here. Just building on top of the great stuff done by others.

It’s not complicated. It’s nothing fancy. But you can configure, make and make install two libraries, and I can now go on and write a curl adaptation for this library so that we can get brotli support done in curl. Ideally, making a library is something the brotli project will do on their own at some point, but until they do I don’t mind handling this.
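If you want to try it, the build is the usual autotools dance. Something along these lines should do it (the clone URL and the autogen step are from memory, so treat this as a sketch):

# grab libbrotli and the brotli upstream code it pulls in as a submodule
git clone --recursive https://github.com/bagder/libbrotli.git
cd libbrotli

# generate configure, then build and install brotlidec and brotlienc
./autogen.sh
./configure
make
sudo make install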

As always, dive in and try it out, file any issues you find and send us your pull-requests for everything you can help us out with!

A day in the curl project

I maintain curl and lead the development there. This is how I spend my time on an ordinary day in the project. Maybe I don’t do all of these things every single day; sometimes I do, and sometimes I just do a subset of them. I just want to give you a look into what I do and why I don’t add new stuff more often or faster… I spend about one to three hours on the project every day. Let me also stress that curl is a tiny little project in comparison with many other open source projects; I’m certainly not saying otherwise.

the new bug

Someone submits a new bug in the bug tracker or on one of the mailing lists. Most initial bug reports lack sufficient details, so the first thing I do is ask for more info, and possibly ask the submitter to try a recent version, as we very often get bugs reported on very old versions. Many bug reports take several requests for more info before the necessary details have been provided. I don’t really start to investigate a problem until I feel I have a sufficient amount of details. We’re a very small core team that acts on other people’s bugs.

the question by a newbie in the project

A new person shows up with a question. The question is usually similar to a FAQ entry or an example, but not exactly the same. It deserves a proper response. This kind of question can often be answered by anyone, but most people involved in the project don’t feel the need or “familiarity” to respond to such questions and therefore remain quiet.

the old mail I haven’t responded to yet

I want every serious email that reaches the mailing lists to get a response, so all mails that neither I nor anyone else responds to I keep around in my inbox, and when I have some idle time I go back and catch up on old mails. Some of them can then of course result in a new bug or patch or whatever. Occasionally I have to resort to simply filing away an old mail without responding, just to cut the list of outstanding things to do a little.

the TODO list for my own sake, things I’d like to get working on

There are always things I really want to see done in the project, and I work on them far too little really. But every once in a while I ignore everything else in my life for a couple of hours and spend them on adding a new feature or fixing something I’ve been missing. Actual development of new features is a very small fraction of all time I spend on this project.

the list of open bug reports

I regularly revisit this list to see what I can do to push the open ones forward. Follow-up questions, deep dives into source code and specifications, or just the sad realization that a particular issue won’t be fixed within the foreseeable future (a year?), in which case I close it as “future” and add the problem to our KNOWN_BUGS document. I strive to keep the bug list clean and only keep relevant bugs open. Issues that are not reproducible, are left without proper attention from the reporter, or otherwise stall get closed. In general I feel quite lonely as a responder in the bug tracker…

the mailing list threads that are sort of dying but I do want some progress or feedback on

In my primary email inbox I usually keep ongoing threads around. Lots of discussions just silently stop getting more posts and thus slowly wither away further up the list, to become forgotten and ignored. At some interval I go back to see if the posters are still around and if there’s any more feedback, in order to figure out how to proceed with the subject. Very often I get nothing at all back, so I just file away the entire conversation thread, forget about it and move on.

the blog post I want to write about a recent change or fix I’d like to highlight

I try to explain some changes to the world in blog posts. Not all changes, but the ones that are somehow noteworthy, as they perhaps change the way things have been or introduce new fun features that are not that easily spotted. Of course all features are always documented, but sometimes I feel I need to put some extra focus on things in a more free-form style. Or I just write about meta stuff, like this very posting.

the reviewing and merging of patches

One of the most important tasks I have is to review patches. I’m basically the only person in the project who volunteers to review patches touching any angle or corner of the project. When people have spent time and effort and gallantly send the results of their labor our way in the best possible format (a patch!), the submitter deserves a good review and proper feedback. Also, paving the road for more patches is one of the best ways to scale the project. Helping newcomers become productive is important.

Patches are preferably posted on the mailing lists, but some also come in via pull requests on github, and while I strongly discourage that (since they don’t get the same attention and possible scrutiny on the list as the others do), I sometimes let them through anyway just to be smooth.

When the patch looks good (or sometimes good enough and I just edit some minor detail), I merge it.

the non-disclosed discussions about a potential security problem

We’re a small project with a wide reach, and security problems can potentially have grave impact on users. We take security seriously, and we very often have at least one non-public discussion going on about a problem in curl that may have security implications. We then often work on phrasing security advisories, working out exactly which versions are vulnerable, producing patches for at least the most recent of the affected versions, and so on.

tame stackoverflow

stackoverflow.com has become almost like a wikipedia for source code and programming related issues (although it isn’t a wiki), and that site is one of the primary referrers to curl’s web site these days. I tend to glance over the curl and libcurl related questions and offer my answers at times. If nothing else, it is good to help keep the amount of disinformation at a low level.

I strongly disapprove of people filing bug reports in such places, or asking very detailed (lib)curl core questions there that should’ve been asked on the curl-library list.

there are idle times too

Yeah. Not very often, but sometimes I actually just need a day off from all this. Sometimes I just don’t find enough motivation or energy to dig into that terrible, seldom-occurring bug on a platform I’ve never seen personally. A project like this never ends. The same day we ship a new release, we just reset our clocks and we’re back to improving curl, fixing bugs and cleaning up things for the next release. Forever and ever, until the end of time.

[image: keep calm and improve curl]

The “right” keyboard layout

I’ve never considered myself very picky about the particular keyboard I use for my machines. Sure, I spend both my work hours and my spare time in front of the same computer, and thus easily spend 2500-3000 hours a year in front of it, but I haven’t thought much about it. I wish I had some actual stats on how many key-presses I do on my keyboard on an average day or year.

Then, one of the hot days this summer, I left the roof window above my workplace open a little too much when a very intense rain storm hit our neighborhood while I was away for a brief moment. To put it shortly: the huge amounts of water that poured in luckily only destroyed one piece of electronics for me: my trusty old keyboard. The keyboard I had just randomly picked from some old computer, without any consideration, a bunch of years ago.

So with the old one dead, I just picked another keyboard I had lying around.

But man, very soft rubber-style keys are very annoying to work with. Then I picked another one with a weird layout and a control key that required a little too much pressure to be comfortable. So, my race for a good enough keyboard had begun. Obviously I couldn’t just pick a random cheap new one and be happy with it.

Nordic key layout

That’s what they call it. It is even a Swedish layout, which among a few other details means it features å, ä and ö keys at a rather prominent place. See the illustration below. Those letters are used fairly frequently in our language. We have a few peculiarities in the Swedish layout that are downright impractical for programming, like how the { [ ] } symbols all require AltGr to be pressed, while slash, asterisk and underscore require Shift. Still, I’ve learned to program on such a layout, so I’m quite used to those odd choices by now…

[image: Nordic keyboard layout]

Cursor keys

I want the cursor keys to be of “standard size” and have the correct location and relative positions, like below. Also, the page up and page down keys should not be located close to the cursor keys (as they are on many laptop keyboards).

[image: keyboard with the cursor keys marked]

Page up and down

The page up and page down keys should instead be located in the group of six keys above the cursor keys. The group should have a little gap between it and the three keys (print screen, scroll lock and pause/break) above them so that finding the upper row is easy and quick without looking.

[image: page up and down keys]

Backspace

I’m not really a good keyboard typist. I make a lot of mistakes, so I need to use the backspace key quite a lot. Thus I’m a huge fan of the slightly enlarged backspace key layout, so that I can find and hit that key easily. Also, the return key is a fairly important one, so I like the enlarged and strangely shaped version of that as well. Pretty standard.

[image: enlarged backspace key]

Further details

The Escape key should have a little gap below it so that I can find it easily without looking.

The Caps Lock key is completely useless, as locking caps is not something a normal person does, but it can be reprogrammed for other purposes. I’ve still refrained from doing so, mostly to not get accustomed to “weird” setups that make it (even) harder for me to move between different keyboards at different places. Just recently I’ve configured it to work as ctrl – let’s see how that works out.
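For the record, on an X11 setup that remapping is a one-liner, something like this (your desktop environment may also offer its own setting for it):

# make Caps Lock act as an extra Ctrl key in the current X session;
# put it in ~/.xinitrc or similar to make it stick
setxkbmap -option ctrl:nocaps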

The F-keys are pretty useless. I use F5 sometimes to refresh web pages but as ctrl-r works just as well I don’t see a strong need for them in my life.

Numpad – a completely useless piece of the keyboard that I would love to get rid of – I never use any of those keys. Never. Unfortunately I haven’t found any otherwise decent keyboards without the numpad.

Func KB-460

The Func KB-460 is the keyboard I ended up with this time in my search. It has some fun extra cruft such as two USB ports and a red backlight (that can be made to pulse). The backlight gave me extra points from my kids.

[image: Func KB-460 keyboard]

It is “mechanical” which obviously is some sort of thing among keyboards that has followers and is supposed to be very good. I remain optimistic about this particular model, even if there are a few minor things with it I haven’t yet gotten used to. I hope I’ll just get used to them.

This keyboard has Cherry MX Red linear switches.

How it could look

Based on my preferences and what keys I think I use, I figure an ideal keyboard layout for me could very well look like this:

[image: my ideal keyboard layout]

Keyfreq

I have decided to go further and “scientifically” measure how I use my keyboard: which keys I use the most, and similar data and metrics. It turns out the most common keylogging program on Linux doesn’t log enough details, so I forked it and created keyfreq for this purpose. I’ll report details about this separately – soon.

See also: fixing the Func KB-460 key

Parallel Spaghetti – decoded

Here’s the decoding procedure for the Parallel Spaghetti Decode challenge.

Step 1, the answers to all the questions. You will notice that I did have some fun in D6 and E2, but since they were boxes that weren’t on the right track anyway I thought you’d still enjoy them.

Step 2, let me illustrate how the above answers take you through the maze. The correct path is made up of yellow boxes, and the correct answers are shown with red arrows leading forward.

[image: the correct track through the parallel spaghetti maze]

Step 3, those different colors in the “Word” column give you the words used for the two questions. If you rearrange them, the two questions become:

which tr command line option specifies delete characters

and

what curl command line option specifies POST requests

So, it took about 14 minutes at our event for Oscar Andersson to bring the correct answer to me:

-d

source code survival rate

The curl project has its roots in late 1996, but we haven’t kept track of all of the early code history. We imported our code to Sourceforge in late 1999, and that’s how far back we can see in our current git repository. The exact date is “Wed Dec 29 14:20:26 UTC 1999”. So, almost 14 years of development.

Warning: this blog post contains more useless info and graphs than many mortals can handle. Be aware!

How much old code remains in the current source tree? Or, put differently: what is the refresh rate of the code? We fix bugs, we change things, we add features. Surely we slowly rewrite the old code over time and replace it with new, more shiny and better working code? I decided to check this. Here’s what I found!

The tools

We have all code in git. ‘git blame’ is the primary tool I used, as it lists all lines of the source code and tells us when each one was added. I did some additional perl scripting around it.

The code

I decided to check all code in the src/ and lib/ directories in the curl and libcurl source tree. The source code is used to create both the curl tool and the libcurl library, and back in 1999 there was no libcurl like today, so we get a slightly better coverage of history this way.

In total this sums up to some 112000 lines in the current .c and .h files.

To count the total number of commits done to those specific files through history, I ran:

git log --oneline src/*.[ch] lib/*.[ch] | wc -l

6047 commits in total. (If I don’t specify the files and instead count all commits in the repo, it ends up at 16954.)

git stats

We run gitstats on the curl repo every day, so you can go there for more and current stats. Right now it tells us that the average number of commits is 4.7 per active day (days when something was actually committed), or 3.4 per day over the entire time. There has been git activity on 3576 days in total, by 224 authors.

Surviving commits

How much of the code that was already present on that December day in 1999 would you think still remains?

How much of the code in the current code base would you think was written in the last few years?

Commit vs Author vs Date

I wanted to see how much old code still exists, or perhaps how the age of the code is represented in the current code base. I decided to base my logic on the author time that git tracks. That is basically the time when the author of a change commits it to his/her local tree; the change can be applied later on by a committer who may be someone else, but the author time remains the same. Sometimes a committer commits multiple patches at once, possibly at a much later time, so I figured the author time would be a better time stamp. I also decided to track the date instead of just the commit hash, so that I can sort the changes properly and make interesting graphs based on time. I use the time with one second precision, so changes done a second apart are recorded as two separate changes, while two commits with the same author time stamp are counted as the same change.

I had my script run ‘git blame --line-porcelain’ for all files, and had it sum up all changes done at the same time.
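The script isn’t anything fancy; the core of the idea can be approximated with a plain shell pipeline like this (a rough sketch of the principle, not my actual script):

# count how many surviving lines share each author time stamp
for f in src/*.[ch] lib/*.[ch]; do
  git blame --line-porcelain "$f"
done | grep '^author-time ' | cut -d' ' -f2 | sort | uniq -c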

Some totals

The code base contains changes written at 4147 different times. Converted to UTC, they happened on 2076 unique days, in 167 unique months. That’s every month since the beginning.

We’re talking about 312 files.

Number of lines changed over time

A graph of changes over time. The Y axis shows the number of lines that were changed at that particular time.

[chart: lines changed over time]

Ok you object, that doesn’t look very appealing. So here’s the same data but with all the changes accumulated over time.

[chart: lines changed, accumulated over time]

Do you think the same as I do? Isn’t it strangely linear? It seems that the number of added lines that remain in the code today is virtually constant over time! But fair enough, the changes on the X axis are not distributed according to the time/date they represent, so we shouldn’t be fooled by the time scale, but we can certainly see that changes in general only bring in a certain amount of surviving modified lines.

Another way to count the changes is to take all the ~4000 change times present in the code and see how many days apart they are:

[chart: days between surviving changes]

Ah, now we’re finally seeing something. Older code that is still present was clearly made with longer periods between the changes that have lasted. That makes perfect sense to me, since the many years of development have probably overwritten a lot of the code that was written in between.

Also, it is clear that the more recent changes that have survived were often done on the same day as, or just a few days away from, another lasting change.

Grouped on date ranges

The number of modified lines, split up by the year the change came in.

[chart: surviving lines per year]

Interesting! The general trend is clear and not surprising. Two years stand out from the trend: 2004 and 2011. I have not yet investigated which particular larger changes made during those years have survived. The bump for 1999 is simply the original import, and most of those lines are preprocessor lines like #ifdef and #include, or just opening and closing braces { and }.

Splitting up the number of surviving lines by the specific year+month they were added:

[chart: surviving lines per month]

This helps us analyze the previous chart. As we can see, the rather tall bars from 2004 and 2011 are actually spread over several months, which explains the bumps in the year chart. Clearly we made some larger efforts during those periods that were good enough to still remain in the code.

Correlate to added or removed lines?

So, can we perhaps explain the number of surviving source lines by the amount of activity, in number of added or removed source lines, in any given year? I ran “git diff [hash1]..[hash2] --stat -- lib/*.[ch] src/*.[ch]” for all years to get a summary of the number of source code lines added and removed each year. I added those numbers to the table with surviving lines and then made another graph, shown below.
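(For the curious: those per-range numbers are perhaps easier to sum up with --numstat than with --stat; a sketch, with placeholder tag names:)

# sum lines added/removed between two points in history (tag names are placeholders)
git diff curl-7_15_0..curl-7_16_0 --numstat -- lib/*.[ch] src/*.[ch] |
  awk '{ added += $1; removed += $2 } END { print added, removed }'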

[chart: surviving vs added vs removed lines per year]

Funnily enough, we see an almost exact correlation there for the first eight years, and then the pattern breaks. From the year 2009 the number of removed lines went down, but the amount of surviving lines still went up quite a bit, and then the graphs jump around a bit.

My interpretation of this graph is rather boring: the amount of surviving code, in absolute numbers, clearly correlates with the amount of added code. And we removed more code yearly in the 2000-2003 period than what has survived from it.

But notice how the blue line closes the gap to the orange/red one over time, which means that, percentage-wise, more of the recent code survives! How much?

Here’s the ratio of surviving lines to added lines, and a second graph looking at surviving lines/(added + removed), to see if the mere source code activity would be a more suitable factor to compare against…

[chart: survival rate relative to added and removed lines]

Code committed within the last 5 years is basically 75% still present, but then it goes downhill, down to the 18% survival rate of the 1999 code import.

If you can think of other good info to dig out, let me know!

For reference, the raw data: year, followed by the number of lines from that year that survive in today’s code:

1999,1699
2000,1115
2001,3061
2002,2432
2003,2578
2004,7644
2005,4016
2006,5101
2007,7665
2008,7292
2009,9460
2010,11762
2011,19642
2012,11842
2013,16844

Say hello to Moo

I decided it was about time to upgrade my main development machine to something modern and snappy. It has been 5.5 years since I bought my current work horse, a thing equipped with a dual-core AMD Athlon 64 X2 5600+ (2.8GHz). I’m using my machine primarily for development. I never game. I decided to go for the higher end of what’s available, to get me something to live with for several years to come.

Motherboard: Asus P8Z77-M. Micro-ATX. Intel Z77 chipset.

CPU: Intel Core i7 3770K 3.5GHz, socket 1155. This is a 22nm monster featuring 8 MB of L3 cache.

Memory: TridentX DDR3 PC19200/2400MHz CL10 2x8GB. 16GB of RAM.

HDD: Seagate Barracuda ST3000DM001 64MB 3TB.

Chassis: Fractal Design Define R3 USB3. See picture. Rather big, and fits a lot more drives and stuff than what I have now…

SSD: OCZ Vertex 4 256GB

CPU cooler: Cooler Master Hyper 412S

Graphics: ASUS Radeon HD5450 512MB (a very simple and cheap thing, but it supports 2560×1600, which the motherboard’s own graphics don’t).

PSU: Plexgear PS-500 500W

(a prisjakt list with the full setup)

All in all, this has two 120mm chassis fans, one 135mm fan on the big CPU cooler, and one fan in the PSU. I hope they won’t cause too much noise or other problems for me. The rather low-end graphics card should keep the total power consumption (and thus heat production) at a decent level.

I purchased all the individual parts separately, as I dislike how I can’t get a machine this optimized prebuilt from anywhere – I’d basically have to pay around 50% more, and then I still wouldn’t get the exact set of pieces I’d like. This way I also avoid the highly disturbing Microsoft tax that prebuilt systems come with.

Unfortunately I got some bad luck included too: when I first put everything together and pressed the power button, nothing happened. Well, a single LED was turned on, but nothing else happened. It took me a while and some sweat to figure out where the problem lay, and once I had replaced the broken motherboard it started properly and I could proceed and install it.

Once my new machine (which now goes under the name Moo) gets settled, my old box will become my daughter’s new machine, as her tired old PIII machine isn’t really fun to do a lot with.

Three static code analyzers compared

I’m a fan of static code analysis. With the use of fancy scanner tools we can get detailed reports about source code mishaps, and quite decently pinpoint which source code is suspicious and may contain bugs. In the old days we used different lint versions, but they were all annoying and very often just puked out far too many warnings and errors to be really useful.

By coincidence, I ended up getting analyses done (by helpful volunteers) of the curl 7.26.0 source base with three different tools. An excellent opportunity for me to compare them all, and to share the outcome and my insights with you, my friends. Perhaps I should add that the analyzed code base is 100% pure C89 compatible C code.

Some general observations

First out, each of the three tools detected several issues the other two didn’t spot. I would say this indicates that these tools still have a lot to improve, and also that it actually is worth running multiple tools against the same source code for extra precaution.

Secondly, the libcurl source code has some known peculiarities that admittedly are hard for static analyzers to figure out without raising false positives. For example, we have several macros that look like functions, and on several platforms and build combinations they evaluate to nothing, which causes dead code to be generated. Another example is that we have several cases of vararg-style functions that are documented to work in ways the analyzers don’t always figure out (both clang-analyzer and Coverity report problems with these).

Thirdly, the same lesson we learned in the lint days is still true: tools that generate too many false positives are really hard to work with, since going through hundreds of issues that turn out to be nothing makes your eyes sore and your head hurt.

Fortify

The first report I got was done with Fortify. I had heard about this commercial tool before, but I had never seen any results from a run. Now I have. The report I got was a PDF containing 629 pages, listing 1924 possible issues among the 130,000 lines of code in the project.

[image: Fortify report on curl]

Fortify claimed 843 possible buffer overflows. I quickly got bored trying to find even one that could lead to an actual problem. It turns out Fortify has a very short attention span and warns very easily about lots of places where a quick glance by a human tells us there’s nothing to be worried about. Having hundreds and hundreds of these is really tedious and hard to work with.

If we’re kind, we call them all false positives. But sometimes it is more than that; some of the alerts are plain bugs, like when it warns about a buffer overflow on this line, claiming that it may write beyond the buffer. All variables are ‘int’, and as we know, sscanf() writes exactly one integer to the passed-in variable for each %d instance:

sscanf(ptr, "%d.%d.%d.%d", &int1, &int2, &int3, &int4);

I ended up finding and correcting two genuine flaws detected with Fortify; both were cases where memory allocation failures weren’t handled properly.

LLVM, clang-analyzer

Given the exact same code base, clang-analyzer reported 62 potential issues. clang is an awesome and free tool. It really stands out in the way it clearly and very descriptively explains exactly how the code is executed and which code paths are selected when it reaches the passage it thinks might be problematic.

[image: clang-analyzer report on curl]

The reports from clang-analyzer are in HTML, with a single file for each issue, and it generates a nice looking source code listing with embedded comments about the flow that was followed all the way down to the problem. A little snippet from a genuine issue in the curl code is shown in the screenshot above.
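If you want to reproduce such a report yourself, clang-analyzer is normally run by wrapping the ordinary build commands with scan-build, roughly like this:

# let the analyzer intercept the compiler during a normal build
scan-build ./configure
scan-build -o /tmp/curl-analysis make
# scan-build prints the location of the generated HTML report when done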

Coverity

Given the exact same code base, Coverity detected and reported 118 issues. In this case I got the report from a friend as a text file, which I’m sure is just one of the output formats. Like Fortify, this is a proprietary tool.

[image: Coverity report on curl]

As you can see in the example screenshot, it provides a rather fancy and descriptive analysis of the exact code flow that leads to the problem it suggests exists in the code. The function referenced in this shot is a very large function with a state machine featuring many states.

Out of the 118 issues, many were actually the same error reached via different code paths. The report made me fix at least 4 genuine problems, but those fixes will probably silence over 20 warnings.

Coverity runs scans on open source code regularly, as I’ve mentioned before, including curl so I’ve appreciated their tool before as well.

Conclusion

From this test of a single source base, I rank them in this order:

  1. Coverity – very accurate reports and few false positives
  2. clang-analyzer – awesome reports, missed slightly too many issues and reported slightly too many false positives
  3. Fortify – the good parts drown in all those hundreds of false positives

Join the SPDY library development

Back in October I posted about my intention to base curl’s SPDY support on libspdy. I also got in touch with Thomas, the primary author of libspdy and owner of libspdy.org.

Unfortunately, he was already ill back then, and he was still ill when I communicated what I wanted to see happen and posted a patch and so on to him. He mentioned to me (in a private email) a lot of work they had done on the code in a private branch, and he invited me to get access to that code to speed up development and allow me to use their code.

I never got any response to my eager “yes, please let me in!” mail, and I’ve since mailed him twice over the last few months. As there have been no responses, I’ve decided to slowly ramp up my activities on my side while hoping he will soon get back.

I’ve started today by setting up the spdy-library mailing list. I hope to attract fellow interested hackers to join me in this. The goal is quite simply to make a libspdy that works for us. It is to be portable C89 code with an API that “makes sense”. I don’t know yet whether we will work on libspdy as it currently looks, whether Thomas’ team will push their updated work soon, or whether my current spindly fork on github is the way to go. I hope to get help to decide this!

Join the effort by simply adding yourself to the mailing list and participating in the discussions: http://cool.haxx.se/cgi-bin/mailman/listinfo/spdy-library.

And a wiki on github.

Update: I’ve created a hub collecting all related info and pointers over at spindly.haxx.se.

Welcome!

curl meetup at FOSDEM 2012

The FOSDEM 2012 dates were recently revealed (4-5 February 2012).

[image: a pint of Guinness]

I’d be happy to arrange a get-together for libcurl hackers at FOSDEM this year. To me, Brussels, Belgium seems mid-Europe enough to be able to attract a bunch of us:

  • libcurl application users/authors
  • libcurl binding hackers
  • libcurl contributors
  • … and everyone else who’s doing related activities or who just is interested

Potential subjects to discuss at such a meeting:

  • what’s the most important stuff libcurl still lacks?
  • what’s the least documented/understood parts of libcurl?
  • are there shared problems several/many libcurl bindings have to solve?
  • can we improve how we work/develop libcurl and bindings?
  • what kind of beer is best at a curl meetup?
  • [fill in your own curl related subject]

I would like at least 4-5 people to voice interest for it to be worthwhile for me to actually try to arrange anything. Please speak up on the libcurl mailing list, tweet me or mail me privately! The more people that are interested, the more planning and preparation we’ll do for it.

Haxx – the first year

Last year I left my former employment and focused on Haxx full-time. My brother joined me a few months afterward (January 2010). Today, October 1 2010, we celebrate the official one year anniversary of Haxx AB as an employer.

The history of Haxx goes much further back than that. Linus Nielsen Feltzing and I first registered the company Haxx back in October 1997, and we used it then primarily as a way to market ourselves and do business on the side of our “real” jobs. It gave us a way to charge for the things we wanted to do that didn’t conflict with our day jobs. And of course we also bought the domain and could set up our “permanent” email addresses, which turned out great since I’ve used the same email address ever since, and I hope I never need to change it again!

The first year of Haxx has been nothing but great fun and a major success.

As we’re contract developers and consultants, we of course need to make sure that our employees are sold to customers to a high degree, with as few gaps as possible. Our projects typically run from a few months up to a year or two. During this year, both Björn and I have worked with several end customers, and we’ve both managed to change assignments several times without a single change causing any gap – at all. Our services seem to be in high demand.

Being only two employees brings challenges in how to deal with sales, financial accounting and so on, as we’re just a few guys and our expertise is development! We have found a few great partners that “sell” us (we pay them a certain percentage, but that’s a price we need to accept, and it is nothing but fair anyway, since it lets us keep doing what we’re good at and what we love), and we buy the bookkeeping etc from another company that specializes in doing it for companies like ours.

We’re looking forward to many more years of great fun. We also hope to grow the company slightly over time, so if you’re a kick-ass embedded open source guy with networking experience, some 10+ years in the business, and you live in the Stockholm, Sweden area, do get in touch! As I’ve mentioned before, we’re going to start our second year with Linus onboard.

I’ll get back with an update next year! 🙂