Tag Archives: Linux

My Debian Black-out – the price of bleeding edge

Ok, I admit it. I run Debian Unstable, so I know I deserve to get hit badly at times when things turn really ugly. It is called unstable for a reason.

The other day I decided it was about time I did a dist-upgrade. When I did, I got a remark that I had better restart my gnome session as otherwise apps would crash. So I logged out and… I couldn’t log in again. In fact, neither my keyboard nor my mouse (both on USB) worked anymore! I sighed and rebooted (for the first time in many months) only to find out that 1) it didn’t fix the problem, both input devices were still non-functional, and perhaps even more importantly 2) the wifi network didn’t work either, so I couldn’t log in to the machine from one of my other computers either!

Related to this story is the fact that I had been running an older kernel, 2.6.26, since that was the last version that built my madwifi drivers correctly. With kernels after that I was supposed to use ath5k for my Atheros card, but I’ve not been very successful with ath5k and thus remained on the latest kernel I had a working madwifi for.

I rebooted again and tried a more recent kernel (2.6.30). Yeah, then the keyboard and mouse worked again, but ath5k didn’t get the wifi up properly. I think I was basically just lacking the proper tools to scan for wifi networks, set the desired SSID and so on, but without a network connection that’s a bit of a pain to sort out. Also, when I logged in to my normal gnome setup, it complained about some panel component being broken and logged me out again! 🙁
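
For the record, the kind of thing I was missing tools for boils down to something like this – a minimal sketch, assuming the Linux Wireless Extensions API of that era (the same one iwconfig uses). The interface name and SSID are of course just made-up examples:

    /* Set the ESSID of a wireless interface via the (old) Linux
       Wireless Extensions ioctl API - roughly what
       "iwconfig wlan0 essid home-network" does. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/wireless.h>

    int main(void)
    {
      char essid[IW_ESSID_MAX_SIZE + 1] = "home-network"; /* example SSID */
      struct iwreq wrq;
      int sock = socket(AF_INET, SOCK_DGRAM, 0); /* any socket will do */
      if(sock < 0) {
        perror("socket");
        return 1;
      }
      memset(&wrq, 0, sizeof(wrq));
      strncpy(wrq.ifr_ifrn.ifrn_name, "wlan0", IFNAMSIZ - 1);
      wrq.u.essid.pointer = essid;
      wrq.u.essid.length = strlen(essid);
      wrq.u.essid.flags = 1;           /* non-zero: set this ESSID */
      if(ioctl(sock, SIOCSIWESSID, &wrq) < 0)
        perror("SIOCSIWESSID");
      close(sock);
      return 0;
    }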

Grrr. Of course I could switch to my backup – my laptop – but it was still highly annoying to end up locked out of my own computer.

Today I bought myself a 20 meter cat5e cable and wired my desktop so I can reach the network with the existing setup. I dist-upgraded again (now at kernel 2.6.31) and when I tried to log in it just worked. Phew. Back in business. I think I’ll leave the cable connected now that I’ve done the job of pulling it anyway.

The lesson? Eeeh… when things break, fix them!

Open Android Alliance

Background: cyanogenmod is one of the most popular 3rd party Android ROMs for HTC devices. Personally I haven’t yet tried it on my Magic, but friends tell me it’s the ROM to use.

On September 24th 2009, Google set their legal team on the ROM creator, asking him to stop distributing the parts of Android that aren’t open source but in fact are good old traditional closed source apps – made by Google. Cyanogen himself (Steve Kondik) responded in the spirit that since the ROM only runs on hardware that already ships with those apps, its users already have a license to use them. Google responded, saying they are protecting the Google Phone Experience.

This cease-and-desist triggered a huge reaction in the Android communities, as people suddenly became aware of the fact that A) parts of the Android core OS aren’t at all open (source) and B) Google is not the cuddly teddy bear we all want it to be.

On the xda-developers.com front, where a lot of the custom ROMs are discussed and where their users hang out, they created the Open Android Alliance with the intent of creating a completely open source Android.

At another end, and independently of xda-developers it seems, lots of participants in the google group android-platform pretty much decided the same thing, but they instead started out discussing exactly what would need to be done, what code there is, and so on.

Currently, both camps have been made aware of each other and intents have been expressed of joining into a single effort. I don’t think such subtleties matter much, but we just might be seeing the beginning of a more open, more free Android project here. I’ll certainly be interested in seeing where this is going…

Updated: they now have their own domain. Link in article updated.

libcurl in version management

I’ve mentioned before that libcurl is becoming popular within package management.

libcurl is a generic library for file transfers over a wide variety of protocols. Over the years, some of the recent distributed version control systems have learned about libcurl’s powers and they now use it:

darcs – was born in 2003 and is written in Haskell. I’m under the impression these guys wrote their own binding layer to interface libcurl from Haskell.

git – possibly best known for being created by Linus Torvalds and for being used by the Linux kernel project, uses libcurl for its HTTP(S) accesses (see the sketch below for what a minimal such transfer looks like).

bazaar – is written in Python and accordingly uses the pycurl binding for libcurl.
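
As a side note, the libcurl end of such an HTTP access doesn’t have to be many lines. Here’s a minimal sketch of the kind of transfer a version control tool might perform (the URL is just a made-up example):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl;
      CURLcode res = CURLE_FAILED_INIT;

      curl_global_init(CURL_GLOBAL_ALL);
      curl = curl_easy_init();
      if(curl) {
        /* fetch some repository metadata over HTTP(S) */
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://example.com/repo/info/refs");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        /* with no write callback set, the body goes to stdout */
        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
          fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return (int)res;
    }

The real bindings and tools obviously add authentication, callbacks and error handling on top, but the core transfer really is this small.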

Anyone know of other version control systems using libcurl?

Ironies here include that libcurl itself is still kept within a CVS repository, and also quite possibly that the first version management project I myself participated in was Subversion, which not only has two different HTTP dependencies, but neither of them is libcurl (they are neon and serf)…

Update: it seems that Mercurial is also using pycurl as an optional dependency.

Making better advisories

A while ago, Scott Cantor discovered yet another security flaw in curl (actually only the tenth flaw in more than eleven years). He reported it privately to us. We worked on the issue and in the end I posted an official project cURL security advisory about it. It wasn’t anything out of the ordinary really. Scott did great and we fixed the problem rather promptly, in coordination with vendor-sec etc.

After a security advisory and the accompanying release, a particular thing always happens. It’s been the same every time I’ve done this and there’s really no surprise: one by one, the different Linux distros and similar parties ship their own security advisories and alerts about the same problem and offer their upgrade paths for their users to get a corrected version of the package.

But I’ll tell you why those advisories tend to make me really sad. It’s not because of the flaws they fix or how fast or slow they are to appear. It’s entirely due to their contents, or perhaps in many cases the lack of contents.

When the first distro-based advisory comes out, they often tend not to use the phrasing from the original advisory (which we’ve worked on for weeks and coordinated with vendor-sec) but instead offer their own interpretation. This isn’t necessarily a bad thing, but when they simplify matters they tend to blur the actual cause and make the real issue hard to understand. Not to mention that when the first one makes a mistake, most others just repeat it without thinking.

Credit to the doers

The craft of hunting down security problems in software, and the art of then creating a fix for them, is very time consuming and takes a fair amount of skill and patience. Yet some people do this. Some of them even track down problems in open source code bases and tell the projects about the issues, to give them a chance to fix them before they’re made public.

Those people are good people that we need.

In the open source world, and in fact in a lot of other places too, just about the only reward we can offer people who do outstanding work like this is attribution. Give credit where credit is due. Mention the person who did the job!

Distro advisories are not good

Very often the subsequent advisories go the lazy route and borrow their explanation from another distro’s advisory – still not using the original one. They like short and not too detailed explanations. Factual errors seem not to be too important.

Very few distro advisories give any credit to the person who originally found the error. The one thing we can offer as payment is thus neglected, and this is more of an established practice than a mistake – all distros do it. At best they refer to a CVE number for the flaw, but CVE numbers have the great disadvantage that they very rarely reveal any particular details about the flaw until long after the advisory is made.

Not only do they often fail to credit the originator, they also rarely link back to the original advisory, or even to the advisory the originator sent out (sometimes these are sent out independently) – so getting the full description from the actual upstream project is harder than it has to be. They do however generally link to their own site, using their own issue number for the security problem. If things are good, you can find references to the original in the web page they link to. I’ve also seen several distro advisories that don’t mention at all what patches they’ve applied or what particular upstream changeset they’ve backported.

In this latest advisory from curl, the commonly repeated mistake was that the certificate flaw concerned the Common Name field (implying it was only about that field), which is wrong, and which is why the original advisory didn’t explicitly mention that field. The flaw also affects the subjectAltName field, and that one is at least as important – if not more so – to address for this particular flaw. The flaw also only concerned curl built to use OpenSSL, a fact that was often not mentioned at all.
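
To illustrate the class of problem involved – this is a simplified sketch of the general technique, not the actual curl patch: a name field in a certificate is a length-prefixed ASN.1 string that may legally contain an embedded NUL byte, while C string functions stop at the first NUL, so a comparison done the C way can be tricked into matching a shorter, attacker-chosen prefix of the name. Assuming OpenSSL’s API of the time:

    #include <string.h>
    #include <openssl/asn1.h>

    /* Return 1 if the certificate name is safe to compare as a C
       string: its ASN.1 length must not hide an embedded NUL byte. */
    static int name_is_sane(ASN1_STRING *name)
    {
      const unsigned char *data = ASN1_STRING_data(name);
      int len = ASN1_STRING_length(name);
      /* an embedded NUL would make strcmp()-style matching stop
         early, so refuse to match such a name at all */
      return len >= 0 && memchr(data, '\0', (size_t)len) == NULL;
    }

A check along these lines has to cover every field that gets compared against the host name, which is exactly why stopping at the Common Name field isn’t enough.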

What I suggest!

That every vendor and Linux distro that ships security advisories should do the following:

  1. credit the original problem finder/researcher. This way the glory and fame go to the person(s) who often did a lot of research and hard work.
  2. link to the original advisory, so that everyone who wants to can get the info and details from the upstream project: their take on what the problems are and what the best fixes or work-arounds might be.
  3. fact-check your error/solution description better and don’t just repeat what someone else wrote unless you know it’s an accurate description.
  4. don’t repeat others’ simplifications and errors. Duplicating someone else’s description is pretty low in general, and often only signals that you haven’t understood the issue in the first place. If you have a rule to not copy others, you won’t risk copying their mistakes.

kernel hacker foodfights

The concept of flame wars and public pie throwing is not new in the open source world, and the open nature of the projects lets us – the audience – see everything: read every upset word and point back to the mails in retrospect.

I don’t think people in the open source community are any more trigger-happy to start flame wars than people outside of it, but open it is, and so we get to see it.

I’ve always disliked the harsh attitude and language that seem to have become popular in some circles, and I believe Linus Torvalds himself is part of that movement as he’s often rude, foul-mouthed and very aggressive in his (leadership) style. I think that easily grows into a hostile and unfriendly atmosphere where little room is left for fun, for jest and for helping out among friends.

So even if that is not the reason for the recent developments, here are two episodes from August 2009:

A short while ago we got to see well-known kernel hacker Alan Cox step down as tty maintainer after an emotional argument on the lkml. The argument there was basically Linus telling Alan he should’ve admitted his error and acted on it earlier than he did.

Nearby, on the linux-arm-kernel mailing list, a long-running argument about the management of the list itself sparked up again. The argument in this case has been whether the mailing list Russell King (the main ARM Linux maintainer) runs should be open, allowing non-subscribers to post without moderation, or not. It ended today with Russell shutting down his lists.

Right now, it seems the linux-arm-kernel list is being transferred over to infradead.org by David Woodhouse to continue its life there, but I don’t think we’ve seen the end of this yet so things may settle differently. There’s also this patch pending which suggests using the linux-arm list on vger.kernel.org.

(Readers should note that I myself don’t take sides in any of these arguments.)

Kernels on those phones

So Google says there could be 18 phones running Android by the end of this year. In Sweden we got the HTC Magic just days ago, the first Android phone ever to show up here (tied to a ridiculous operator deal that makes me and lots of my friends not go that route). Then Palm shipped their Palm Pre, also based on Linux, just days ago.

This has brought up an interesting question: what is the state of these kernel ports with regard to the mainline Linux tree? They’re both using ARM cores (of course).

The ARM kernel maintainer Russell King himself is not impressed. Apparently Google hasn’t even tried to push their work upstream to the kernel in a long while. The tone in that discussion did make it sound as if they might be starting to work on this again now.

The Palm guys apparently haven’t shown any code at all yet, but are said to be releasing their code within two weeks to opensource.palm.com. They haven’t tried to push their work upstream either, so I figure they’re either not going to bother or they are facing a rather steep uphill battle in the future.

Eeepc with Linux and Swedish 3g

This is a follow-up on my “getting the new toy” from a week or so ago. An Eee PC S101.

I didn’t like easypeasy on it. That distro seems to be more or less Ubuntu Netbook Remix (UNR) with a little Eee flavor applied. What’s not to like about it? They seem to think that because this is a netbook, normal UI guidelines no longer apply, so they’ve scrapped the ordinary main desktop (and its menu) concept and instead have a new full-screen “app launcher”. That’s not too shabby, but it comes with another idea that I can’t accept: they run all applications in full-screen mode by default. And I couldn’t figure out how to alter that default.

Full-screen might be fine for some apps at some times, but then I’d like to explicitly ask for it instead of having to learn how to “unmaximize” each app (they’ve also removed/altered the window decorations so there are no standard three buttons in the upper right corner of maximized windows). To top it off, it seemed that the latest easypeasy isn’t built with the latest ubuntu and thus it failed to connect with my 3g modem…

Instead I took the base version of eeebuntu for a spin and that is so much closer to what I want in a Linux. It’s ‘base’ so it only comes with the bare minimum. It has no fancy alternative UI but relies on the traditional, well-proven X11 (gnome) desktop that I like.

I inserted my Sony Ericsson MD400 USB 3g modem that I got from Telenor/Bredbandsbolaget and within a few seconds I was online. It couldn’t have been a much smoother ride.

I know people have expressed the opinion that it’s better to use laptops/netbooks with an internal 3g modem, so that you don’t have to use any external devices and everything is more slick. I think I was of that opinion as well until I got this usb thing in my hand. It’s basically just a tad larger than an ordinary USB memory stick (70 x 28 x 15 mm) so it’s really not much “in the way” or disturbing when inserted in a laptop, and it comes with windows drivers on it (as it dual-serves as a usb mass-storage device as well). That makes it a perfect little device to move between different laptops. We have three laptops in our household so far, and now I can get any of them onto 3g if I want to.

A little side-note on my eeebuntu install on the SD card: when I ran unetbootin I first chose to put the “live/install” version on the hard drive (which of course is an SSD, but anyway) in order to then install onto my SDHC card, but it simply wouldn’t work. I tried three times and every time it froze somewhere in the middle of the install. When I then re-ran unetbootin, made a bootable usb stick and ran the install from there instead, it worked perfectly…

Linux on eee s101

I got myself a new toy the other day: an Eee PC S101 with 16GB SSD, an extra 32GB SDHC and 2GB ram.

There’s already a bazillion instructions on how to install and run Linux on your Eee PC out there, but they all seemed to miss one (for me) crucial little detail:

In order to boot from the SD card, you need to press Escape when the bios start-up screen shows.

But now: to get the bios start-up screen to show at all, you need some extra magic: press F2 immediately at start-up to enter the bios setup screen, then disable “boot booster” as otherwise it’ll skip the escape check entirely!

Using this trick, I’ve now installed easypeasy on it and I’ll dual-boot with XP for a while since it came factory installed with that.

I’ve fallen for the commercials and also subscribed to 3g broadband now (you know, the blatantly lying “up to 7.2mbit” which in reality never comes even close, even if I were sitting alone on top of a base station) and I warmed up my toy and connection the other day (still running XP then) by working on curl code and making a few commits etc, while sitting on a wooden bench next to the field where my daughter was having her soccer practice.

In fact, SSHing to my primary servers and editing code with emacs or reading email with alpine turned out to be a much better experience than I anticipated, given what I’ve read about how terrible the round-trip times can be over 3g. It actually didn’t feel much different from my regular SSHing from home over wifi.

10G and Direct Cache Access

As some of you might know, I currently work with a client doing 10G network stuff. 10G as in 10 gigabit/second Ethernet. That’s a lot of data. It’s actually so much data that it’s hard to even generate network loads of this magnitude for proper testing, as a typical server using SATA harddrives hardly fills a one gigabit pipe due to “slow” I/O: ordinary SATA drives don’t even reach 100MB/sec, and 10 gigabit/second equals 1.25 GB/sec, so you’d need more than a dozen such drives in parallel. You need RAID solutions or to put the entire thing in RAM first. Generating 10 gigabit network loads thus requires some extraordinary solutions.

Having a server that tries to “eat” a line speed 10G stream is a big challenge, and in fact we can’t do it, as 1.25 GB/sec is just too much, even though we run a quad-core 3.00GHz Xeon here which is at least near the best “off-the-shelf” CPU/server you can get at the moment. Of course our software also does a little bit more with the data than just receive it.

Anyway, recently I’ve been experimenting with 10G cards from Myricom and when trying to maximize our performance with these beauties, I stumbled over the three-letter acronym DCA: Direct Cache Access. A terribly overloaded acronym consisting of often-used words, which makes it hard to research and learn about! But here’s a great document describing some of the gory details:

Direct Cache Access for High Bandwidth Network I/O

Summary: it is an Intel technology for delivering network data directly into the CPU’s cache, to reduce the bandwidth requirement to memory (note: it only decreases the bandwidth requirement at that moment, not the total requirement, as the data still needs to be read from memory into the cache, as noted in a comment below). Using this technique it should be possible to drastically reduce the time for getting at the traffic. Support for this tech was also added to the Linux kernel a while back.

It seems DCA is (only?) implemented in Intel’s 7300 chipset family, which seems to exist only for the Xeon 7300 and 7400. Too bad we don’t have one of these monsters, so I haven’t been able to try this out for real yet…

Currently we can generate 10G network loads using two different approaches. One is uploading a specially crafted binary blob, embedded with the FPGA image, to a Xilinx-equipped board with a 10G MAC that can then do some fiddling with the packets (like increasing a counter) so that they aren’t all 100% identical. It makes a pretty good load test, even if the traffic isn’t at all shaped like the “real” traffic our product will receive. Our other approach has been even less good: uploading a custom firmware to the network card and having it send the same Ethernet frame over and over… This latter approach didn’t turn out as well because it was a bit too complicated, and poorly documented on how to make a really good generator out of it. Even if I liked being able to upload custom code to my network card! 😉

Allow me to also mention that the problem with generating 10G is with small packet sizes, like 100 bytes or so, as the main bottleneck in the hardware seems to be the number of packets, not the payload size. Thus it is easier to do full line speed with 9000-byte packets (jumbo frames) than with the tiny ones we are likely to get when this product is in use by customers in the wild.
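
To put rough numbers on that difference, here’s a back-of-the-envelope sketch. On the wire every Ethernet frame also costs about 20 extra bytes (preamble, start-of-frame delimiter and inter-frame gap):

    #include <stdio.h>

    /* theoretical packet rates at 10 Gbit/s line rate for a few
       frame sizes, counting ~20 bytes of per-frame wire overhead */
    int main(void)
    {
      const double line_rate = 10e9;  /* bits per second */
      const double overhead = 20.0;   /* preamble + SFD + gap, in bytes */
      const int sizes[] = { 100, 1518, 9000 };
      int i;

      for(i = 0; i < 3; i++) {
        double pps = line_rate / ((sizes[i] + overhead) * 8.0);
        printf("%5d byte frames: %9.0f packets/second\n", sizes[i], pps);
      }
      return 0;
    }

That works out to over ten million packets per second with 100-byte frames versus fewer than 140 thousand with jumbo frames – roughly 75 times more packets for the hardware and driver to cope with at the same line rate.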

Update: this article was written in 2008. Please note that many things may have changed since then.