Tag Archives: Open Source

More Means Less

Less is more, it is said, and I can certainly subscribe to the reverse: more means less. The two primary open source projects I spend time on have been growing over the last few years, in source code contributions but also in the number of users and contributors. I see similar effects on myself and my own role in both Rockbox and curl: I do more and more coordination, planning, admin work, talking (chatting on IRC, responding to mails etc) and “guidance”, and less actual coding work. My code/non-code work ratio has decreased massively.

This is not a complaint, just an observation!

It makes sense to me that early on in a project, until there’s enough momentum for the project to more or less drive itself, it is important to have a driving core that pushes the project forward. That core makes sure every little piece fits together and gets the proper attention to make it a good product and project. As time goes by, more and more people gain that knowledge and ability, and the number of people driving the project forward increases.

So being an “elderly” in both these projects, I’m now more of an advisor, talker, tinkerer and admin than a lead programmer. This is most notable in Rockbox, since we have 80 committers now and I think at least 50 of them are active.

I probably spend roughly the same amount of time: somewhere around 2-3 hours/day on my open source projects.

Of course, in my particular case right now, I’ve also just recently ramped up my working hours and find myself getting accustomed to life with a full-time job, a two-kids-and-wife family and several time-consuming spare time projects. It takes a great deal of juggling and less sleeping.

Nothing is forever so I’m certain my situation will change over time. I’m determined to continue hacking in both projects. And my juggling skills will improve…

playogg without Rockbox?

I find it noteworthy that the FSF runs a campaign called playogg, in which they detail why people should avoid non-free formats and instead use Ogg Vorbis in preference to, for example, mp3.

Yet, while they document a number of alternatives for Mac users, Windows users etc on the front page, there’s not a single word of advice for people with portable music players. It is very hard for people to find free software alternatives for their portable music players, and with the FSF being so very anti-closed-source, it makes me wonder why there’s no mention of Rockbox, ipodlinux or even sansalinux to be found.

The only place with this info that I could find when following links from their site was about three clicks away, on xiph.org’s PortablePlayers wiki page, and the majority of the players mentioned there are non-free…!

TI and Neuros but is it open?

Neuros put out a press release yesterday saying that “Neuros and Texas Instruments create new bounty program for next-gen Open Internet Television Platform”, and Joe Born of Neuros said on their mailing list that “it will be a complete open platform that will allow developers of all levels to contribute and port applications”. You can also read some additional thoughts and ideas in the Ars Technica article called “TI and Neuros team up to build open source media platform”. It is basically a hardware platform based on TI’s TMS320DM644x DSP system-on-a-chip line, also called DaVinci. It is of course no coincidence that the Neuros OSD 2.0 will feature that chipset.

Personally, I’m not convinced when I see TI speak of open source, since I’m fully aware of their history, and I believe that this brand new “open” platform still requires TI’s restricted-but-free compiler for the DSP. Of course it is more open than many other platforms, but I dislike it when someone tries to sound all fine and dandy while at the same time hiding some of their better cards behind their back.

A truly open platform would not give TI an advantage; it would offer anyone wanting to do anything with it the same chance. This platform does not. After all, having it built around one of their SoC flagships should be advantage enough for them, and should be a motivator to make this as successful (and thus as open) as possible.

I think it is sad that Neuros repeatedly makes these kinds of statements. Their original “open source” player was never open source (to any degree). Their OSD player is largely open source, but huge chunks of it are not. Now they announce even more openness for an entire platform, and yet again they fail to actually deliver a truly open product. Neuros shall forever be known as the company that seems to want to do right, but always fails to in the end nonetheless.

Update: Joe replied on the list to my question about the DSP tool(s), and it certainly sounds as if TI may in fact release a more open tool and/or even a gcc port!? If that turns out to be true, it will of course squash most of my complaints here!

curl ten years today

On March 20th 1998, curl 4 was released. It was the first curl release ever, even though it was already at version 4, since we kept the version numbering from the previous incarnations of the project under other names. It all started with the tool named httpget (an existing small tool written by Rafael Sagula), which soon changed name to urlget and ended up as curl – all renames happening due to shifting features and focus.

Like many other projects, this one started because of an itch. I wanted to get currency rates off the internet to allow an IRC bot to provide an “exchange service” for users, with accurate and up-to-date rates. I thought the existing projects I found all did too much or did the wrong thing. That bot and service are long gone now.

curl has been a truly portable project from day 1; the first Windows build appeared already with urlget 2.1 (pre-curl), and autoconf support for the build process was added in October 1998.

Unfortunately I don’t have the original release 4 tarball anymore; the closest one I have is curl 4.8 (dated August 31 1998). curl 4.8 is about 3400 lines of code. Today we total well over 100K source lines, so it has grown more than 30 times!

I had no big plans for curl, nor did I think very much about the future of the project. I just added the features that I and my fellow contributors wanted at the moment. That’s actually pretty much how the project has continued to work. We don’t have many long-term plans for it; we mostly look just inches ahead of our noses and act accordingly.

During the version 6 period (Sep 1999 – Mar 2000) we learned that curl was getting popular, was useful and worked rather well, so work on providing a libcurl started. We wanted to offer other applications the ability to use curl’s file transfer powers. Version 7.1 was released in August 2000, and thus libcurl was officially born.
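
To illustrate what that means for applications: this is roughly the smallest possible libcurl program using the easy interface (a minimal sketch, with a stand-in URL and the bare minimum of error handling):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);

  CURL *curl = curl_easy_init();
  if(curl) {
    /* fetch the URL; by default the response body goes to stdout */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    CURLcode res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }

  curl_global_cleanup();
  return 0;
}
```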

curl and libcurl have remained a rather low-key project. I work on it in my spare time and there are no full-time developers paid to work on this project – apart from some occasional sub-projects now and then that have been sponsored by companies and organizations. (See below for an example.)

Slowly but surely, more and more people started using libcurl and contributed bug reports and patches. When the project turned 5 years old in 2003, I collected the names of all contributors so far and reached the number 270. I found that number very high, and I was mostly kidding when I said I hoped we would double it by the time we celebrate our tenth anniversary. Of course we’ve more than doubled it: today we have more than 620 named contributors – and we continuously add new ones with every release.

During this decade-long journey I’ve remained the lead developer and project leader, but we’re now some 10 developers with commit access (who also use it), and I try to be open and responsive in order to attract more developers to come aboard, to listen to their advice and ideas, and to be sensitive to what our users want from us.

In 2005 I was lucky enough to get a grant from the Swedish IIS organization for the purpose of developing a new event-based API for libcurl, to better deal with very large numbers of connections – the problem so nicely called c10k.
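
That work is what eventually became the curl_multi_socket_action style of API. Here is a rough sketch of its shape, assuming the application brings its own event loop (the callback bodies are intentionally left as comments; a real program would register the socket with epoll/kqueue and arm an actual timer):

```c
#include <curl/curl.h>

/* libcurl calls this to tell us which socket to watch, and for what */
static int socket_cb(CURL *easy, curl_socket_t s, int what,
                     void *userp, void *socketp)
{
  /* add/modify/remove 's' in the event loop depending on 'what'
     (CURL_POLL_IN, CURL_POLL_OUT, CURL_POLL_REMOVE, ...) */
  return 0;
}

/* libcurl calls this to tell us when it next needs to be called */
static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
{
  /* arm a timer; when it expires, call
     curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running) */
  return 0;
}

int main(void)
{
  int running = 0;
  CURLM *multi = curl_multi_init();

  curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
  curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);

  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");
  curl_multi_add_handle(multi, easy);

  /* kick off; from here on, socket and timer events reported by the
     application's own event loop drive further
     curl_multi_socket_action() calls until 'running' reaches zero */
  curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  return 0;
}
```

The point of the design is that libcurl never owns the event loop: the application tells libcurl about socket activity and timeouts, and libcurl tells the application which sockets to watch. That is what makes very large numbers of parallel transfers feasible.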

In these days when our humble project turns 10, I spend about two hours of spare time per day on the project and it is my primary hobby. We make 5-6 releases per year, we get about 7000 unique visitors on the web site on a normal day, and about one million curl packages are downloaded per year – from our servers alone.

Today, libcurl is feature-rich, portable, very widely used, very fast and well supported, and there are no signs of stagnation in release or development pace. In fact, looking at the source code growth over the last couple of years, we can see a pretty stable and continuous increase:

curl source code growth

Just as I never looked ahead and planned much for the future before, I don’t do it now either, so I really don’t know and can’t tell what the future will hold for us. We’ll just continue to develop the world’s best client-side file transfer library, to make it even more solid for the foreseeable future, and to make it do the things users and developers out there think it should do. Possibly that involves adding support for more protocols, removing some of the less popular ones, or simply enhancing how we support the existing ones.

Join the mailing lists and join us for the next ten years to come!

Neuros OSD 2.0

For those of you who are into things like open source hardware for your videos, it can be interesting to note Neuros’ recent posting of the planned specs for their upcoming OSD 2.0 player, which I guess will replace the current Neuros OSD model.

In hard techy terms: they plan to upgrade to the Texas Instruments DaVinci 6446 chipset, which is a 300MHz ARM9 with a C64x DSP core embedded. Pretty much like the existing DM320 one, but seemingly with a great deal more horsepower under the hood. Given their specs paper, it will support a lot of formats, at least partially up to HD resolutions. It’ll also support an internal harddrive and offer 256MB RAM and 256MB internal NAND flash.

Personally I don’t care that much, as I don’t even have analogue TV, I don’t download or have many movies to watch, my existing DVB-T box has fine recording abilities, and my DVD player is good enough for my kids to repeatedly watch the same animated films over and over and over…

Oh btw, if this sounds like your kind of backyard and other things combine well, Neuros is hiring Linux developers for what I believe is this hardware.

(sorry for the crappy quality of the pic but I nicked it from the PDF)

curl feature freeze March 20 2008

It is yet again time to pause the add-new-features craze in order to settle down and fix a few more remaining bugs before we ship another curl and libcurl release in the beginning of April.

So on March 20 we hold back and only fix bugs for about two weeks, until we release curl and libcurl 7.18.1.

The only flaw currently mentioned in TODO-RELEASE to fix before this release is the claimed race condition in the win32 gethostbyname_thread code, but since the reporter doesn’t respond anymore and we can’t repeat the problem, it is deemed to just be buried and forgotten.

Other problems currently mentioned on the mailing list are a POST problem with digest and read callbacks, and mysteriously bad progress callbacks for uploads, but none of them seem very serious and thus not terribly important to get fixed, should they turn out hard to fix.

Yes, I picked the date on purpose as that is the magic date in this project. Especially this year.

Open Source Accessibility

SRF (Synskadades Riksförbund – the Swedish Association of the Visually Impaired) is a Swedish organization that recently expressed concerns about open source (in Swedish), since as they say “open source in itself is no guarantee for accessibility for disabled persons” (my translation).

The argument came up because Mats Odell, a minister in the Swedish government, expressed a positive attitude towards open source within governments (link in Swedish).

I find it disturbing that these visually impaired guys immediately bounce back and seem to imply that open source automatically somehow is less useful, of less quality, less fitting or less accessible. Sure, open source is no guarantee for better accessibility, but nobody claimed it was, and I don’t see how any software can be guaranteed to be better. A very weird statement it was, I must say.

One perfect example showing how open source adds accessibility is how Rockbox works. By providing innovative functionality, it suddenly makes devices a whole lot more usable to blind or visually impaired persons. There are simply no commercial alternatives coming close.

Another fine example of how open source makes software more accessible than any closed-source competitor is how translations can be done even for very small languages spoken by economically less wealthy population groups – such as how closed-source programs fail to deliver software translated into the 11 official languages of South Africa, and lots of other languages besides.

To round off, the Orca project makes OpenOffice, Firefox, GNOME apps and Java-based apps accessible. I’m not saying I know all about being visually impaired and how such users work with open source, but I do know that open source is accessible to a great extent in some places, while in others there’s room left for improvement. Above all, open source gives everyone the ability to join in and make it happen.

Make Them Pick Us

Given that there is an endless series of open source and free software projects around, what makes companies and projects likely to choose to depend on and use one of the existing ones, rather than write it themselves or possibly buy a closed-source solution instead? I’ll try to cover a few of the things that might matter, and describe how curl and libcurl relate to them.

Proven Track Record

The project needs to have been around for a while, so that external people can see that development continues and that there is sustained interest in the project from developers and users; that bug reports are acknowledged and fixed; that it has been scrutinized for the most obvious security problems, and so on. The curl project started almost ten years ago, has done more than one hundred releases, and there is now more developer activity in the project than ever before.

Certified Goodness

Through companies and associations that “certify” others, you can get an independent view of the quality of a project.

The company named OpenLogic offers “certification” of open source software, for companies to feel safer. I must admit I like seeing that they’ve certified curl and libcurl. You can read their sales-pitch style description of their certification process here.

Of course I would also like to see curl reach rung 2 on the scan.coverity.com list, as it would mean that a second source (independent of the first) also claims that there’s a reasonable level of quality in the product.

If they did it so can we

With a vast list of existing companies and products that already use the project, newcomers can see that this and that company and project already depend on it, and that fact alone makes the project an even more likely solid and trustworthy choice.

Being the answer when the question comes

Being known is important. When someone asks for help and guidance about possible solutions to a particular problem, you want a large portion of your target audience to know about your project and to say “oh, for doing X you could try project Y”. I want people to think of libcurl when asked about doing internet-related transfers, like HTTP or FTP.

This is of course a matter of marketing, and getting known to lots of people is a hard thing for an open source project with nothing but volunteers and no particular company backing.

Being a fine project

Of course, the prerequisite for all the points above is that the project is well maintained, that the source is written in a nice manner and that there’s an open and prosperous community…

My Antispam Measures

I get a fair share of spam. I have something like 10 working private email addresses, I’m listed as a recipient in numerous email aliases, and they all end up in the same physical mailbox where I read them. I’ve also had my addresses for many years and I’ve shown and used them publicly on the internet all the time. I’m a major spam target now. On a good day I get just 2000 spams, but on bad days I’ve been well over 13000 spam emails.

My biggest friends in this combat are spamassassin and procmail.

I’ll describe how I have things set up, not so much to inspire others but more to be able to get feedback from you on how I can, or perhaps should, improve my setup to get an even better email life. A rough sketch in procmail form follows the list below.

  • I consider all mails with a spam score >= 3 to be spam. I’ve also tweaked my spamassassin user_prefs to be harsher on (pure) HTML mail and a few other rules, and I’ve added a couple of rules of my own to catch spams that previously slipped through a little too easily.
  • First, I filter out mail from trusted mailing lists that have their own antispam measures.
  • I catch what appears to be bounces (with a huge regex), and if it looks like a bounce to an address I don’t send email from, I nuke it immediately (those that could be true bounces are saved in a dedicated mbox).
  • I have a white-list system that marks all incoming mails from previously marked friends as coming from a friend.
  • Mails from non-friends are passed through spamassassin. Those with a spam score higher than N are put in the ‘hispam’ folder – of course with the intention that these are very, very unlikely to ever include any false positives and can almost surely be deleted without checking. N is currently 10, but I ponder lowering it somewhat. Spams with scores lower than N are put in the ‘spam’ folder, and I need to check that one before I kill it, because occasional false positives do end up there.
  • So, mails that aren’t from friends (or from a trusted mailing list) and aren’t marked as spam are then stored in the ‘suspicious’ mailbox
  • Mails from friends or from trusted lists go directly into my mailbox, or into a dedicated mailbox (for lists with somewhat high traffic volumes).
  • Oh, a little additional detail: I “mark” my own outgoing mails with an additional custom header, with no point whatsoever other than being able to detect when someone/something sends me mail using my own address…
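
To make the above more concrete, here is a rough procmailrc sketch of the kind of rules described. It is only an illustration: the folder names match the list above, but the size limit, the marker header name and the address are made-up stand-ins, not my real setup.

```procmail
# run mail that is not already white-listed through spamassassin
# (skipping very large mails to keep things fast)
:0fw
* < 256000
| spamassassin

# score >= 10: 'hispam', deletable more or less unseen
# (X-Spam-Level holds one '*' per point)
:0:
* ^X-Spam-Level: \*\*\*\*\*\*\*\*\*\*
hispam

# score >= 3: 'spam', but worth an occasional manual check
:0:
* ^X-Spam-Status: Yes
spam

# mail claiming to come from my own address but lacking my private
# marker header ('X-My-Marker' and the address are made up) is forged
:0:
* ^From:.*me@example\.com
* !^X-My-Marker:
suspicious
```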

My weakest point in all this right now is the fact that I don’t spam-check white-listed mails at all, so spams sent to me using my friends’ email addresses go straight through and annoy me.

BTW, I did use bogofilter in the past, and for a while I actually ran both in parallel (both trained with roughly the same spam/ham boxes for the Bayes stuff), but quite heavy testing that I performed at the time (a few years ago) showed that spamassassin caught a lot more spams than bogofilter, while bogofilter only caught a few extra, so I dropped it.