Category Archives: Linux

Everything related to Linux really

Linux distros consolidate crypto libs

For a while now, the Fedora distribution has fought battles, done a lot of work and pushed to consolidate all packages that use crypto libs onto Mozilla’s NSS.

Now it seems to be OpenSUSE’s turn. The discussion I link to here doesn’t reach any definite conclusion, but they seem to lean towards NSS as well, claiming it has the most features. I wonder what they base that statement on – is there a public doc anywhere that states exactly which library has what, and what makes any contender better than the others for them?

In the Fedora case they seem to have focused on the NSS FIPS license as the deciding factor, but the license issue is also often brought up in this discussion.

I’ve personally been pondering writing some kind of unified crypto layer that would expose a single API to an application and handle the different libs as backends, pretty much the same way we do it internally in libcurl at the moment. It hasn’t taken off (or even been started) since I haven’t had the time or energy for it yet.
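
To give an idea of what I have in mind, here is a minimal sketch of such a layer. All the names are invented for this example, and the backend functions are just printing stubs where real NSS or OpenSSL calls (NSS_NoDB_Init(), PK11_HashBuf() and friends) would go:

/* a single front-end API; each crypto lib gets its own backend vtable */
#include <stdio.h>
#include <stddef.h>

struct crypto_vtbl {
  const char *name;
  int  (*init)(void);
  int  (*sha256)(const void *data, size_t len, unsigned char out[32]);
  void (*cleanup)(void);
};

/* stub "NSS" backend - a real one would call into libnss here */
static int nss_init(void) { puts("init NSS"); return 0; }
static int nss_sha256(const void *d, size_t l, unsigned char o[32])
{ (void)d; (void)l; (void)o; puts("SHA-256 with NSS"); return 0; }
static void nss_cleanup(void) { puts("shut down NSS"); }

static const struct crypto_vtbl nss_backend = {
  "nss", nss_init, nss_sha256, nss_cleanup
};

/* the backend could be picked at build time or at run time */
static const struct crypto_vtbl *crypto = &nss_backend;

int main(void)
{
  unsigned char digest[32];
  crypto->init();
  crypto->sha256("hello", 5, digest);
  crypto->cleanup();
  return 0;
}

An OpenSSL or GnuTLS backend would then just be another struct filled in with its own functions, and the application code wouldn’t change at all.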

IETF http-state group created

Over at the IETF, another group named http-state was just created (with an associated mailing list), with this specific goal:

Ultimately, the purpose of this group is to create an updated HTTP State Management Mechanism RFC (aka cookies) that will supersede the Netscape spec, RFCs 2109, 2964, 2965 then add in real-world usage (e.g. HTTPOnly), and possibly add in additional features and possibly merge in draft-broyer-http-cookie-auth-00.txt and draft-pettersen-cookie-v2-03.txt.

I’ve joined the list and I hope to follow and participate in this, as I believe the current state of HTTP cookies is a rather sorry mess and the Netscape spec is still what most closely describes how cookies work in the wild. Of course I’ll do it with my libcurl experience in my luggage.

While it would perhaps be cool to join the group in a more formal way, there’s no way for me to participate in the IETF meeting in San Francisco in March.

10G and Direct Cache Access

As some of you might know, I currently work with a client doing 10G network stuff. 10G as in 10 gigabit/second Ethernet. That’s a lot of data. It’s actually so much data that it’s hard to even generate network loads of this magnitude to do good tests, as a typical server using SATA hard drives hardly fills a one gigabit pipe due to “slow” I/O: ordinary SATA drives don’t even reach 100MB/sec. You need RAID solutions or to put the entire thing in RAM first. Generating 10 gigabit network loads thus requires some extraordinary solutions.

Having a server that tries to “eat” line-speed 10G is a big challenge, and in fact we can’t do it: 1.25 GB/sec is just too much, even though we run a quad-core 3.00GHz Xeon here, which is at least near the best “off-the-shelf” CPU/server you can get at the moment. Of course our software also does a bit more with the data than just receiving it.

Anyway, recently I’ve been experimenting with 10G cards from Myricom and when trying to maximize our performance with these beauties, I fell over the three-letter acronym DCA: Direct Cache Access. A terribly overused acronym consisting of often-used words makes it hard to research and learn about! But here’s a great document describing some of the gory details:

Direct Cache Access for High Bandwidth Network I/O

Summary: it is an Intel technology for delivering data directly into the CPU’s cache, to reduce the bandwidth requirement to memory (note: it only decreases the bandwidth requirement at that moment, not the total requirement, as the data still needs to be read from memory into the cache, as noted in a comment below). Using this technique it should be possible to drastically reduce the time spent receiving the traffic. Support for this tech has also been in the Linux kernel for a while.

It seems DCA is (only?) implemented in Intel’s 7300 chipset family, which apparently only exists for the Xeon 7300 and 7400. Too bad we don’t have one of these monsters, so I haven’t been able to try this out for real yet…

Currently we can generate 10G network loads using two different approaches: one is uploading a specially crafted binary blob, embedded with the FPGA image, to a Xilinx-equipped board with a 10G MAC that can then do some fiddling with the packets (like incrementing a counter) so that they aren’t all 100% identical. It makes a pretty good load test, even if the traffic isn’t at all shaped like the “real” traffic our product will receive. Our other approach has been even less good: upload custom firmware to the network card and have that send the same Ethernet frame… This latter approach never got any better because it was a bit too complicated and it was badly documented how to make a really good generator out of it. Even if I did like being able to upload custom code to my network card! 😉

Allow me to also mention that the problem with generating 10G is with small packet sizes, like 100 bytes or so, as the main limit in the hardware seems to be the number of packets, not the payload part. Thus it is easier to do full line speed with 9000-byte packets (jumbo frames) than with the tiny ones we are likely to get when this product is in use by customers in the wild. The rough calculation below illustrates the difference.
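
Here’s a back-of-the-envelope calculation (assuming the sizes refer to whole Ethernet frames, and counting the 20 bytes of preamble and inter-frame gap that every frame costs on the wire):

/* packets per second needed to fill a 10 gigabit/second link */
#include <stdio.h>

static double pps(double frame_bytes)
{
  const double link_bps = 10e9;   /* 10 gigabit/second */
  const double overhead = 20.0;   /* preamble + inter-frame gap */
  return link_bps / ((frame_bytes + overhead) * 8.0);
}

int main(void)
{
  printf("100-byte frames:  %.1f million packets/sec\n", pps(100) / 1e6);
  printf("9000-byte frames: %.3f million packets/sec\n", pps(9000) / 1e6);
  return 0;
}

That’s roughly 10.4 million packets per second with 100-byte frames versus about 139 thousand with jumbo frames, some 75 times fewer packets for the hardware to deal with.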

Update: this article was written in 2008. Please note that many things may have changed since then.

Solaris 10 ships libcurl

I fell over this document named “What’s New in the Solaris 10 10/08 Release” and it includes this funny little quote towards the end:

C-URL – The C-URL Wrappers Library

C-URL is a utility library that provides programmatic access to the most common Internet protocols such as, HTTP, FTP, TFTP, SFTP, and TELNET. C-URL is also extensively used in various applications.

The project is cURL, the tool is curl and the library is libcurl. There’s nothing named C-URL and it isn’t any “wrappers library”… And the list of protocols is also funny, since it includes 6 protocols while a modern libcurl supports 13 different ones, and also, if you build libcurl to support SFTP you also get SCP (which the list doesn’t include), etc.

It just looks so very sloppy to me. But hey, what do I know?

Nvidia chipset audio now works

I’ve mentioned some of my audio problems on my Linux desktop before, and just the other day a friend suggested I should remove ‘esd’ (“apt-get remove esound”) as a means to fix one of my complaints and frequent annoyances (to get the sound working I had to kill esd first, then reload some drivers, etc).

Recently my standard “trick” to bring the sound to life had started to fail, so I needed a new angle on this. And boy, when I rebooted without esound installed, my on-board sound works! And this without me doing any manual fiddling at all.

My motherboard’s sound info is displayed like this with lspci -v:

00:10.1 Audio device: nVidia Corporation MCP51 High Definition Audio (rev a2)
Subsystem: ASUSTeK Computer Inc. Device 81cb
Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22
Memory at fe024000 (32-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: HDA Intel
Kernel modules: snd-hda-intel

The hack will still be useful

Okay, in my recent blog entry about Flash 10 using native libcurl I got a bit side-tracked and mentioned something about distros confusing libcurl’s soname 3 and 4. This caused some comments in that post and some further activities behind the curtains, so let me spell out exactly what I mean:

The ABI for libcurl did change between soname 3 and 4, but the change was in a rather subtle area (FTP third party transfers, sometimes known as FXP) which is rarely used. It certainly will not hurt the Adobe Flash system.

I’m not against “the hack” (or perhaps “a hack”, as there are several ways an ordinary system could provide work-arounds or fixes for this problem) per se; I am mainly trying to fight the belief or misconception that the ABI break doesn’t exist.

Since Adobe doesn’t want to provide an updated package that links against a modern libcurl and refuses to provide multiple packages, distros of course need to address this dilemma.

I just want all to know that 3 != 4, even if the risk that it’ll cause problems is very slim.

Update: it seems Adobe will change this behavior in their next release and then try to load either 3 or 4.
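
I don’t know how Adobe plans to implement that, but loading “either 3 or 4” at run-time could be done with a dlopen() fallback along these lines (just an illustrative sketch, not their actual code; build it with -ldl):

/* try the new SONAME first, fall back to the old one */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
  void *lib = dlopen("libcurl.so.4", RTLD_NOW);
  if (!lib)
    lib = dlopen("libcurl.so.3", RTLD_NOW);
  if (!lib) {
    fprintf(stderr, "no libcurl found: %s\n", dlerror());
    return 1;
  }

  /* look up symbols with dlsym() instead of linking at build time */
  char *(*version)(void) = (char *(*)(void))dlsym(lib, "curl_version");
  if (version)
    printf("loaded: %s\n", version());

  dlclose(lib);
  return 0;
}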

Flash 10 uses native libcurl

In Adobe’s Penguin.SWF blog, we can learn some details about the upcoming version 10 of the Adobe Flash player for Linux:

They’ll rely on more libraries to be present in the system rather than provide them all by themselves in their own install. This apparently includes libcurl.

So if you get the RPM of the pre-release player, you’ll notice that it requires “libcurl.so.3”, which is the old SONAME for libcurl (libcurl 7.15.5 was the last release that used the number 3) and which no up-to-date distribution should provide anymore. Since October 2006 we’ve shipped libcurl.so.4.

Apparently, this made the Fedora people first implement a work-around that re-introduces SONAME 3, built from the same source the SONAME 4 is made from, only to revert that decision a short while afterwards.

An interesting side-note is how the Fedora people repeat over and over in those threads that libcurl with SONAME 3 and SONAME 4 use the same ABI, although that is not true (at least not by my definition of what an ABI is). The bump was not made by accident.

Update: it seems some blame this 3 == 4 thing on Debian.

Two fellow curl hackers

For many years I was really and truly the primary and almost sole developer of curl and libcurl. Sure, we’ve always had a steady stream of quality patches from contributors, but I was the one guy who cared for the whole picture and who took on greater work to advance the project.

This is no longer the case. These days there are more people around who bite the really big bullets and who show that they know a lot about the internals and the protocols, and have a feel and understanding for the general ideas and concepts of the project. I think they get too little attention, so I thought I’d shine a light on two of our bright hackers who really are true rocks in the community:

Daniel Fandrich first appeared on the curl-library list in April 2003. More than 1500 email posts later, he’s a knowledgeable, friendly and skilled contributor in just about all areas of curl and libcurl.

Yang Tse appeared on the curl-users list in September 2005 and has somewhat specialized in cleaning up dusty corners of the code: redoing things The Right Way, fixing compiler warnings and fixing up configure checks so that the code runs everywhere as it is supposed to.

These are two of our valuable committers. Ohloh.net counts 10 committers during the last 12 months, which puts us within the top 10% of all project teams on Ohloh!

But as I mentioned above, curl development is largely built upon patches provided by people who send in one or two patches and never appear in the project again. We have over 650 named contributors and the list keeps growing at a steady pace.

You can be our next contributor or even committer. Just join us and help out!

HTTP implementations

I previously mentioned on the libcurl mailing list that Mark Nottingham in the IETF HTTP Working Group has initiated work on putting together an overview of all (interesting) existing HTTP implementations.

Of course curl is included in the bunch, or rather libcurl, but I would also urge you all to step forward and provide further details on other implementations you have worked on or know of!