Category Archives: Technology

Really everything related to technology

Linux distros consolidate crypto libs

For a while already, the Fedora distribution has been fighting battles, doing lots of work and pushing for a consolidation of all packages that use crypto libs, so that they go exclusively with Mozilla’s NSS.

Now it seems to be OpenSUSE’s turn. The discussion I link to here doesn’t reach any definite conclusions, but they seem to lean towards NSS as well, claiming it has the most features. I wonder what they base that statement on. Is there a public doc anywhere that states exactly which library has which features, and what makes one contender better than the others for them?

In the Fedora case it seems they’ve focused on the NSS FIPS license as the deciding factor but the license issue is also often brought up in this discussion.

I’ve personally been pondering writing some kind of unified crypto layer that would expose a single API to an application and handle the different libs as backends, pretty much the same way we do it internally in libcurl at the moment. It hasn’t taken off (or even been started) since I’ve not had the time nor energy for it yet.
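To give an idea of what I mean, here’s a minimal sketch of such a layer, loosely modeled on how libcurl selects between TLS backends internally: a single front-end API with each crypto lib hidden behind a small backend vtable. All names here are made up, and the backend is just a stub standing in for NSS, OpenSSL, GnuTLS or whatever.

    #include <stdio.h>
    #include <string.h>
    #include <stddef.h>

    /* one of these per supported crypto library */
    struct crypto_backend {
      const char *name;
      int (*init)(void);
      int (*sha256)(const void *data, size_t len, unsigned char out[32]);
      void (*cleanup)(void);
    };

    /* stub backend; a real one would call into NSS/OpenSSL/GnuTLS here */
    static int stub_init(void) { return 0; }
    static int stub_sha256(const void *data, size_t len, unsigned char out[32])
    {
      (void)data; (void)len;
      memset(out, 0, 32);   /* placeholder instead of a real digest */
      return 0;
    }
    static void stub_cleanup(void) { }

    static const struct crypto_backend stub_backend = {
      "stub", stub_init, stub_sha256, stub_cleanup
    };

    /* the application only ever sees this front-end */
    static const struct crypto_backend *active;

    int crypto_select(const struct crypto_backend *b)
    {
      active = b;
      return active->init();
    }

    int crypto_sha256(const void *data, size_t len, unsigned char out[32])
    {
      return active->sha256(data, len, out);
    }

    int main(void)
    {
      unsigned char digest[32];
      crypto_select(&stub_backend);
      crypto_sha256("hello", 5, digest);
      printf("backend '%s' gave us a %u byte digest\n", active->name,
             (unsigned int)sizeof(digest));
      active->cleanup();
      return 0;
    }

The point being that the application only ever calls the crypto_* front-end, and the choice of library becomes a build-time or run-time detail.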

BYO rockbox player partly alive

Jorge “casainho” Pinto is known in Rockbox circles as the main guy behind the “Rockbox Player” project, which strives to build its own portable music player to run Rockbox.

They’ve made some progress lately, and they’ve now run Rockbox far enough to display stuff on their screen:

Click the image for the full photo. Courtesy of Casainho himself: “I hope to take no more than 1 month to finish the port.”

The target is using an Atmel AT91SAM9260 at 200MHz and the screen is a 12-bit color 128×128 one.

IETF http-state group created

Over at the IETF another group was just created named http-state (with an associated mailing list) with the specific goal:

Ultimately, the purpose of this group is to create an updated HTTP State Management Mechanism RFC (aka cookies) that will supersede the Netscape spec, RFCs 2109, 2964, 2965 then add in real-world usage (e.g. HTTPOnly), and possibly add in additional features and possibly merge in draft-broyer-http-cookie-auth-00.txt and draft-pettersen-cookie-v2-03.txt.

I’ve joined the list and I hope to follow and participate in this, as I believe the current state of HTTP cookies is a rather sorry mess and the Netscape spec is still what most closely describes how cookies work in the wild. Of course I’ll do it with my libcurl experience in my luggage.
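libcurl’s cookie engine, for example, stores its cookies in the old Netscape cookie file format, and letting an application use it is only a matter of a few lines. Here’s a rough sketch (example.com is of course just a placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* an empty file name still switches the cookie engine on */
        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
        /* write received cookies to a Netscape-format cookie jar at cleanup */
        curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies.txt");
        curl_easy_perform(curl);  /* first request: receives and stores cookies */
        curl_easy_perform(curl);  /* second request: sends back matching cookies */
        curl_easy_cleanup(curl);
      }
      return 0;
    }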

While it would perhaps be cool to join the group in a more formal way, there’s no way for me to participate in that IETF meeting in San Francisco in March.

Fun with executable extensions in viewvc

A few years ago I wrote up a silly little perl script (let’s call it script.pl) that would fetch a page from a site that returns a “random URL off the internet”. I needed a range of URLs for a test program of mine, and just making up a thousand or so URLs is tricky. Thus I wrote this script that I would run to grab a batch of URLs on each invocation, and then run again later to append more to the log file. It wasn’t a fancy script, but it solved my task.

The script was part of a project I got funded to work on (improving libcurl back in 2005/2006), so adding and committing the script to CVS felt only natural and served a good purpose: to allow others to repeat what I did.

Fast forward to late 2008. The script is now browsable via viewvc on a site that… eh, doesn’t have “.pl” disabled as a cgi extension in its config! The result of course is that each time someone tries to view the script using the web interface, the web server invokes the script locally!

All of a sudden I got a mail from someone, apparently an admin or something of the site this old script was using, who mentioned that a machine on our network was hammering his site with many requests per second (38 requests/second apparently) and asked me to stop it. It turned out a search engine crawler had indexed the viewvc output several times, and now some 8 processes or so were running this script.pl, all looping around getting a page, outputting the URL, getting another page…

While I think 38 requests per second is a bit low to even be considered a DoS, it certainly was neither intended nor friendly, and I was greatly surprised when I slowly realized how it had all come to end up like this! Man I suck! It reminds me of my other extension mess from just a few months ago…
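For completeness: I don’t know whether the culprit was viewvc’s own config or the web server’s, but in an Apache-style setup the cure is roughly to make sure nothing under the tree the repository browser exposes gets treated as CGI, along these lines (a sketch only, the real paths and directives will differ):

    # hypothetical httpd.conf snippet: stop executing *.pl as CGI under the
    # document tree that viewvc exposes
    <Directory "/srv/www/viewvc-exposed">
        RemoveHandler .pl
        Options -ExecCGI
    </Directory>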

Maybe I’ll learn how to do things right in the future when I grow up!

Rockbox 3.1

After three months of work since the last release, we managed to keep the schedule and ship Rockbox 3.1. The list of news since 3.0 includes the following:

  • A bitmap scaler was added to Rockbox, which means that album art no longer has to be pre-scaled to the correct dimensions on your computer. See AlbumArt for more information.
  • The calendar plugin which has existed for the Archos units for a long time is now available on all devices equipped with a clock.
  • The spacerocks plugin which was removed from version 3.0 due to a major bug has been brought back.
  • Optimised MP3 decoder on dual-core targets, giving several more hours of battery life in most situations.
  • Optimizations for AAC and APE decoding.
  • Backlight fading is now available on most targets.
  • When recording in mono, you can now choose between recording the left or right channel, or a mix of both.
  • It is now possible to configure which items are shown in the Quick Screen.
  • Several new features were added to the WPS syntax.
  • The build system received a major overhaul. This only matters for people who compile their own builds.

Of course you can find a more detailed list in the MajorChanges wiki page, and the full release notes for 3.1.

My personal contribution has been very tiny this time around and I’ve basically just built the release builds and admined the distributed build system somewhat.

SSL certs crash without trust

Eddy Nigg found out and blogged about how he could buy SSL certificates for a domain he clearly neither owns nor controls. The cert is certified by Comodo, who apparently have outsourced (parts of) their cert business to a separate company that obviously does very little or perhaps no verification at all of the buyers.

As a result, buyers could buy certificates from there for just about any domain/site name, and with Comodo being a trusted CA in at least Firefox, this would make it a lot easier for phishers and other cyber-style criminals to set up fraudulent sites that even get the padlock in Firefox and look almost perfectly legitimate!

The question now is what Mozilla should do, what Firefox users should expect their browser to do when HTTPS sites use Comodo-verified certs, and how Comodo and their resellers are going to deal with everything…

Read the scary thread on the mozilla dev-tech-crypto list.

Update: if you’re on the paranoid/safe side you can disable trusting their certificates by doing this:

Select Preferences -> Advanced -> View Certificates -> Authorities. Search for
AddTrust AB -> AddTrust External CA Root and click “Edit”. Remove all Flags.

10G and Direct Cache Access

As some of you might know, I currently work with a client doing 10G network stuff. 10G as in 10 gigabit/second Ethernet. That’s a lot of data. It’s actually so much data that it’s hard to even generate network loads of this magnitude for good testing, as a typical server using SATA hard drives hardly fills a one gigabit pipe due to “slow” I/O: ordinary SATA drives don’t even reach 100MB/sec. You need RAID solutions or to put the entire thing in RAM first. So generating 10 gigabit network loads requires some extraordinary solutions.

Having a server that tries to “eat” a line-speed 10G stream is a big challenge, and in fact we can’t do it: 1.25 GB/sec is just too much, even though we run a quad-core 3.00GHz Xeon here, which is at least near the best “off-the-shelf” CPU/server you can get at the moment. Of course, our software also does a little bit more with the data than just receiving it.

Anyway, recently I’ve been experimenting with 10G cards from Myricom, and when trying to maximize our performance with these beauties I stumbled over the three-letter acronym DCA: Direct Cache Access. A terribly overused acronym made up of often-used words, which makes it hard to research and learn about! But here’s a great document describing some of the gory details:

Direct Cache Access for High Bandwidth Network I/O

Summary: it is an Intel technology for delivering data directly into the CPU’s cache, to reduce the bandwidth requirement to memory (note: it only decreases the bandwidth requirement at that moment, not the total requirement, as the data still needs to be read from memory into the cache, as noted in a comment below). Using this technique it should be possible to drastically reduce the time needed to get at the traffic. Support for this tech was also added to the Linux kernel a while back.

It seems DCA is (only?) implemented in Intel’s 7300 chipset family, which seems to exist only for the Xeon 7300 and 7400. Too bad we don’t have one of these monsters, so I haven’t been able to try this out for real yet…

Currently we can generate 10G network loads using two different approaches. One is uploading a specially crafted binary blob, embedded with the FPGA image, to a Xilinx-equipped board with a 10G MAC that then can do some fiddling with the packets (like incrementing a counter) so that they aren’t all 100% identical. It makes a pretty good load test, even if the traffic isn’t at all shaped like the “real” traffic our product will receive. Our other approach has been even less good: upload custom firmware to the network card and have that send the same Ethernet frame over and over… This latter approach never got very far, as it was a bit too complicated and poorly documented how to make a really good generator out of it. Even if I liked being able to upload custom code to my network card! 😉

Allow me to also mention that the problem with generating 10G is with small packet sizes, like 100 bytes or so, since the main limitation in the hardware seems to be the number of packets rather than the payload. Thus it is easier to do full line speed with 9000-byte packets (jumbo frames) than with the tiny ones we are likely to get when this product is in use by customers in the wild.
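A quick back-of-the-envelope calculation shows why the small packets are the nasty case; this little program assumes the usual 20 bytes of preamble and inter-frame gap that every Ethernet frame costs on the wire:

    #include <stdio.h>

    /* rough maximum packet rates at 10 gigabit/second line speed */
    int main(void)
    {
      const double line_bps = 10e9;   /* 10 gigabit/second */
      const int overhead = 20;        /* preamble + SFD (8) and inter-frame gap (12) */
      const int frame_sizes[] = { 100, 1518, 9018 };  /* tiny, max standard, jumbo */
      unsigned int i;

      for(i = 0; i < sizeof(frame_sizes)/sizeof(frame_sizes[0]); i++) {
        double pps = line_bps / ((frame_sizes[i] + overhead) * 8.0);
        printf("%5d byte frames => about %.2f million packets/second\n",
               frame_sizes[i], pps / 1e6);
      }
      return 0;
    }

That works out to roughly 10 million packets per second for 100-byte frames, compared to around 140 thousand for jumbo frames, so it really is the per-packet cost that hurts.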

Update: this article was written in 2008. Please note that many things may have changed since then.

Filling our pipes

At around 13:43 GMT on Friday the 5th of December 2008, the network that hosts a lot of services like this site, the curl site, the rockbox site, the c-ares site, CVS repositories, mailing lists, my own email and a set of other open source related stuff, became the target of a vicious and intense DDoS attack. The attack was in progress until about 17:00 GMT on Sunday the 7th. The target network is owned and run by CAG Contactor.

Tens of thousands of machines on the internet suddenly started trying to access a single host within the network. The IP they targeted has in fact never been publicly used as long as we’ve owned it (which is just a bit under two years) and it has never had any public services.

We have no clue whatsoever why someone would do this against us. We don’t have any particular services that anyone would gain anything by killing. We’re just very puzzled.

Our “ISP”, the guys we buy bandwidth and related services from, said the attack used up about 1 gigabit/sec worth of bandwidth, and with our “mere” 10 megabit/sec connection it was of course impossible to offer any services while this was going on.

It turns out our ISP made the biggest blunder, and is the main cause of the length of this outage: we could immediately spot that the target was a single IP in our class C network. We asked them to block all traffic to this IP as far out as possible, to stop such packets from entering their network. And they did. For a short while there was silence and sense again.

For some reason that block “fell off”, our network got swamped again, and it then remained unusable for another 48 hours or so. We know this since our sysadmin guy investigated our firewall logs at midday Sunday and they all revealed that same target IP as destination. Since we only have a during-office-hours support deal with our network guys (as we’re just a consulting company with no services that really need 24-hour support), they simply didn’t care much about our problem but said they would deal with it Monday morning. So our sysadmin shut down our firewall to save our own network from logging overload and whatnot.

Given the explanations I’ve got over the phone (I have yet to see and analyze logs from this), it does sound like some sort of SYN flood, with attempts to connect to many different TCP ports.

4-5 hours after the firewall was shut down, the machines outside of our firewall (but still on our network) suddenly became accessible again. The attack had stopped. We have not seen any traces of it since then. The firewall is still shut down though, as the first guy coming to the office Monday morning will switch it on again and then, hopefully, all services should be back to normal.

Fujifilm FinePix F100fd

Ok, I bought myself a Fujifilm FinePix F100fd camera the other day, as it fulfilled my requirements pretty well:

1. It’s compact, noticeably smaller than my previous Sony one.

2. While it’s not a 3″ LCD, it features a 2.7″ one, which is a tiny bit larger than my previous camera’s 2.5″.

3. Image Stabilizer. And in my test shots it seems to make a difference. I’ll admit I haven’t yet played a lot with it on and off, but especially when zooming it seems to do some good.

4. Good low-light images. Yes it does. I’ve so far seen it go as high as ISO 1600 on auto, and while those aren’t the best pictures, using flash is certainly not a good way to achieve great pics either (in general).

5. It accepts SDHC cards. I put a 4GB one in to start with, as it costs virtually nothing. My previous camera had 512MB, so it’s still 8 times the size. Of course my Sony was 5 megapixels and this one does 12, so it will produce larger image files.

Possibly I’ll try to make some comparison pictures with my old and my new cameras later on.

Snooping on government HTTPS

As was reported by some Swedish bloggers, and as I found out thanks to kryptoblog, it seems the members of the Swedish parliament all access the internet via an HTTP proxy. And not only that: they seem to access HTTPS sites using the same proxy, and while a lot of the netizens of the world do this, the members of the Swedish parliament have an IT department that is more big-brotherish than most. They decided they “needed” to snoop on the network traffic even for HTTPS connections. And how do you accomplish this, you may ask?

Simple! The proxy terminates the SSL connection, fetches the remote HTTPS document, generates a “faked” SSL cert for the peer at run-time, signed by a CA that the client trusts, and then delivers that to the client. This does require that the client has a CA cert installed locally that makes it trust certificates signed by the “faked” CA, but I figure the parliament’s IT department “helps” its users get that installed.

Not only does this let every IT admin there snoop on user names and passwords etc, it also allows big-time for Man-In-The-Middle attacks, as I assume the users will be allowed to go to HTTPS sites that use self-signed certificates, but they probably won’t even know it!
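One way for a user (or at least a tool-wielding one) to spot this kind of interception is to check who actually issued the certificate that arrives, since behind such a proxy every HTTPS site suddenly appears to be signed by the same in-house CA. Here’s a rough sketch using libcurl’s certinfo feature (it requires libcurl 7.19.1 or later built against OpenSSL, and example.com is of course just a placeholder):

    #include <stdio.h>
    #include <string.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        struct curl_certinfo *ci = NULL;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);          /* a HEAD is enough */
        curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L);        /* collect cert details */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);  /* only inspecting, not trusting */
        if(curl_easy_perform(curl) == CURLE_OK &&
           curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
          int i;
          for(i = 0; i < ci->num_of_certs; i++) {
            struct curl_slist *field;
            for(field = ci->certinfo[i]; field; field = field->next)
              if(!strncmp(field->data, "Issuer:", 7))
                printf("cert %d - %s\n", i, field->data);
          }
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }

If the Issuer lines show the parliament’s own CA instead of the CA the site really uses, you know the connection is being opened up along the way.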

The motivation for this weird and intrusive idea seems to be that they want to scan the traffic for viruses and other malware.

If I were a member of the Swedish parliament I would be really upset. I would uninstall the custom CA and seriously consider accessing the internet through an ssh tunnel or similar. But somehow I doubt that many of them care, and the rest of them won’t be capable of taking counter-measures against this.