As some of you might know, I currently work with a client doing 10G network stuff. 10G as in 10 gigabit/second Ethernet. That’s a lot of data. It’s actually so much data that it’s hard to even generate network loads of this magnitude for good testing: a typical server using SATA hard drives hardly fills a one gigabit pipe due to “slow” I/O, as ordinary SATA drives don’t even reach 100MB/sec. You need RAID solutions or to put the entire thing in RAM first. Generating 10 gigabit network loads thus requires some extraordinary solutions.
Having a server that tries to “eat” 10G at line speed is a big challenge, and in fact we can’t do it: 1.25 GB/sec is just too much, even though we run a quad-core 3.00GHz Xeon here, which is at least near the best “off-the-shelf” CPU/server you can get at the moment. Of course, our software also does a bit more with the data than just receive it.
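To get a feeling for what 1.25 GB/sec means to the memory system, a quick and dirty test like the one below is telling: every incoming byte is written to RAM by the NIC and then read back (and usually copied at least once more) by the CPU, so the achievable memcpy rate puts a rough ceiling on things. This is just a sketch; the 256MB buffer size is an arbitrary pick, simply something much larger than the caches.

```c
/* Rough sketch: time big memcpy rounds to see how much headroom the memory
 * system has above the 1.25 GB/sec the network wants to push in.
 * Build with something like: gcc -O2 membw.c -lrt */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    size_t len = 256 * 1024 * 1024;   /* arbitrary, just much bigger than the caches */
    char *src = malloc(len), *dst = malloc(len);
    struct timespec t0, t1;
    int rounds = 10;

    memset(src, 1, len);   /* touch the pages so they are actually allocated */
    memset(dst, 0, len);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < rounds; i++)
        memcpy(dst, src, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("memcpy: %.2f GB/sec\n", (double)rounds * len / secs / 1e9);
    return 0;
}
```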
Anyway, recently I’ve been experimenting with 10G cards from Myricom, and when trying to maximize our performance with these beauties I stumbled over the three-letter acronym DCA: Direct Cache Access. A terribly overused acronym made up of common words, which makes it hard to research and learn about! But here’s a great document describing some of the gory details:
Direct Cache Access for High Bandwidth Network I/O
Summary: it is an Intel technology for delivering data directly into the CPU’s cache, to reduce the bandwidth requirement to memory (note: it only decreases the bandwidth requirement at that moment, not the total requirement, as the data still needs to be read from memory into the cache, as noted in a comment below). Using this technique it should be possible to drastically reduce the time it takes to get at the traffic. Support for this tech was also added to the Linux kernel a while back.
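On the kernel side this shows up as the dca subsystem (include/linux/dca.h), which the Intel-oriented drivers hook into. Very roughly, a NIC driver registers itself as a DCA requester and then asks for a per-CPU “tag” that it programs into the card, so that incoming DMA gets steered towards the right cache. The sketch below only illustrates that flow; the register write at the end is made up, since that part is entirely device-specific.

```c
#include <linux/dca.h>
#include <linux/io.h>
#include <linux/pci.h>

/* Sketch of how a NIC driver might hook into the kernel's DCA subsystem.
 * dca_add_requester() and dca3_get_tag() come from include/linux/dca.h;
 * the register offset and the writel() are hypothetical, as programming
 * the tag into the hardware is completely device-specific. */
static void example_enable_dca(struct pci_dev *pdev,
                               void __iomem *ring_regs, int cpu)
{
	u8 tag;

	if (dca_add_requester(&pdev->dev) < 0)
		return;                          /* no DCA provider available */

	tag = dca3_get_tag(&pdev->dev, cpu);     /* tag for this CPU's cache */
	writel(tag, ring_regs + 0x00);           /* hypothetical DCA control register */
}
```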
It seems DCA is (only?) implemented in Intel’s 7300 chipset family, which apparently only exists for the Xeon 7300 and 7400 series. Too bad we don’t have one of these monsters, so I haven’t been able to try this out for real yet…
Currently we can generate 10G network loads using two different approaches. One is uploading a specially crafted binary blob, embedded in the FPGA image, to a Xilinx-equipped board with a 10G MAC that can then do some fiddling with the packets (like increasing a counter) so that they aren’t all 100% identical. It makes a pretty good load test, even if the traffic isn’t at all shaped like the “real” traffic our product will receive. Our other approach has been less successful: uploading custom firmware to the network card and having it send the same Ethernet frame over and over… This latter approach never got very far because it was a bit too complicated, and how to make a really good generator out of it was badly documented. Even if I did like being able to upload custom code to my network card! 😉
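For illustration only, the counter trick is simple enough to show in plain userspace code: blast the same small UDP payload in a loop and bump a counter so the packets aren’t identical. A normal socket like this won’t get anywhere near 10G line rate (that’s the whole point of the FPGA and firmware tricks above), and the address, port and payload size are just made up for the example.

```c
/* Sketch of a software-only load generator in the same spirit: send the
 * same small UDP payload as fast as possible, varying only a counter so
 * the packets are not 100% identical. Target address, port and payload
 * size are arbitrary examples. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {0};
    unsigned char payload[100];                       /* "small packet" sized payload */
    uint32_t counter = 0;

    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                       /* made-up port */
    inet_pton(AF_INET, "192.168.0.2", &dst.sin_addr); /* made-up target */

    memset(payload, 0xab, sizeof(payload));
    for (;;) {
        uint32_t c = htonl(counter++);
        memcpy(payload, &c, sizeof(c));               /* vary the first four bytes */
        sendto(fd, payload, sizeof(payload), 0,
               (struct sockaddr *)&dst, sizeof(dst));
    }
    close(fd);                                        /* never reached */
    return 0;
}
```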
Allow me to also mention that the problem with generating 10G is with small packet sizes, around 100 bytes or so, since the main limit in the hardware seems to be the number of packets, not the payload part. Thus it is easier to do full line speed with 9000-byte packets (jumbo frames) than with the tiny ones we are likely to get when this product is in use by customers in the wild.
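To put rough numbers on that, here’s a back-of-the-envelope calculation of the packet rates involved and the cycle budget per packet on one 3.00GHz core. It assumes the standard Ethernet overhead of 20 bytes per frame on the wire (preamble/SFD plus the inter-frame gap) on top of the frame size: with 100-byte frames that works out to roughly ten million packets per second and only a few hundred cycles per packet, while jumbo frames are a couple of orders of magnitude more forgiving.

```c
/* Back-of-the-envelope numbers for why small packets hurt: the packet rate
 * needed to fill a 10 Gbit/s link, and the per-packet cycle budget on a
 * 3.00GHz core. Adds 20 bytes per frame for preamble/SFD + inter-frame gap. */
#include <stdio.h>

int main(void)
{
    const double link_bps = 10e9;                 /* 10 gigabit/second */
    const double cpu_hz   = 3.00e9;               /* one 3.00GHz core */
    const int    sizes[]  = { 100, 1500, 9000 };  /* frame sizes in bytes */

    for (int i = 0; i < 3; i++) {
        double wire_bits = (sizes[i] + 20) * 8.0;
        double pps = link_bps / wire_bits;
        printf("%5d byte frames: %10.0f packets/s, %8.0f cycles/packet\n",
               sizes[i], pps, cpu_hz / pps);
    }
    return 0;
}
```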
Update: this article was written in 2008. Please note that many things may have changed since then.