Tag Archives: Network

Scalable application layer transfers

At FSCONS 2010 I had the pleasure of doing a talk about how to make your client-side networking applications really scale when you increase the number of simultaneous connections, including some details on how libcurl will support you all the way!

My talk was named “Scalable application layer transfers” and the slides from it are available online. See below. Hopefully the video recording of it will appear later and I’ll post a follow-up then. A little extra bonus material as background would be my poll vs select vs event-based article.
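To give a rough idea of the kind of approach the talk was about, here is a minimal sketch of libcurl’s multi interface driving several transfers from a single thread, waiting on all of them with curl_multi_fdset() and select(). The URLs and the count of three handles are just made up for the example, and most cleanup and error handling is skipped.

```c
#include <stdio.h>
#include <sys/select.h>
#include <curl/curl.h>

int main(void)
{
  const char *urls[] = {       /* made-up example URLs */
    "http://example.com/1",
    "http://example.com/2",
    "http://example.com/3"
  };
  CURLM *multi;
  int i, still_running;

  curl_global_init(CURL_GLOBAL_ALL);
  multi = curl_multi_init();

  /* one easy handle per simultaneous transfer */
  for(i = 0; i < 3; i++) {
    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, urls[i]);
    curl_multi_add_handle(multi, easy);
  }

  /* drive all transfers from this single thread */
  curl_multi_perform(multi, &still_running);
  while(still_running) {
    fd_set rd, wr, exc;
    int maxfd = -1;
    long timeout_ms = 1000;
    struct timeval tv;

    FD_ZERO(&rd);
    FD_ZERO(&wr);
    FD_ZERO(&exc);
    curl_multi_fdset(multi, &rd, &wr, &exc, &maxfd);
    curl_multi_timeout(multi, &timeout_ms);
    if(timeout_ms < 0)
      timeout_ms = 1000;
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    /* wait for activity on any transfer, or just sleep for the
       timeout if libcurl has no descriptors for us right now */
    select(maxfd + 1, &rd, &wr, &exc, &tv);

    curl_multi_perform(multi, &still_running);
  }

  /* removal and cleanup of the easy handles omitted for brevity */
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}
```

This select()-based way works fine for a modest number of connections; once you go up to many thousands, the curl_multi_socket_action() API together with an event library is the scalable way to go, which ties right into the poll vs select discussion further down.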

As I mentioned in a previous post, the room was chock-full when I started preparing my equipment for the talk since the session before me was a keynote, but by the time I actually started presenting only the limited set of hardcore geeks was left.

In the FSCONS program there were several talks over the weekend about women in FOSS and so on, while I on the other hand certainly only contributed to enforcing the stereotypes by being white, male, middle-aged and very techy, and I delivered my two speeches to audiences in which I believe not a single woman was present. Whether I am part of the problem or the solution we can discuss in a separate post later on… 🙂

pNFS is my kind of toy

Ok, so NFS has never really been my cup of tea. The complexity and the problems with the root user and locking and whatnot have always made me get all itchy when thinking about or using NFS.

Enter pNFS, the NFS 4.1 invention that suddenly makes the NFS technology so much more interesting for high performance solutions. I also think it is a bit unknown, so I thought I’d help share the knowledge with you, my dear readers. The p in pNFS stands for parallel. The whole idea is that the single NFS server just provides metadata back to the client, with enough information to allow the client to read the actual payload data directly from the storage server(s), which are then supposedly different machines than the main metadata server.

[Figure: the standard pNFS architecture (picture from www.nexenta.org)]

As you can see in this fancy picture, it allows each client to speak directly to the storage devices to get or send the data. This lets them avoid having a single NFS server become a bottleneck.

NFS 4.1 and pNFS are IETF standards, RFC5661 to RFC5664. The first one is 616 pages long and one of the largest RFCs I’m aware of.

poll vs select

I’m a person who works a lot with networking and development around it. I mostly do this on Linux, often involving drivers or otherwise very close to the operating system, in C and with the core libraries.

The other day I once again stumbled over some random inaccuracy about poll compared to select, and instead of whining on some IRC channel or complaining on their mailing list, I decided I would instead strike back by writing up and presenting a web page of my own. It details as much as possible about poll vs select and related event-based functions. I want it to become the go-to place for everything that is relevant to say about poll and select in a comparison aspect, and when comparing them to event-based alternatives like libevent and libev.
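As a small taste of the differences involved, here is what waiting for readability on a single, already-created socket looks like with select() compared to poll():

```c
#include <sys/time.h>
#include <sys/select.h>
#include <poll.h>

/* wait up to 'ms' milliseconds for 'sockfd' to become readable,
   using select(). Note that select() cannot handle descriptors
   at or above FD_SETSIZE (typically 1024). */
int wait_readable_select(int sockfd, int ms)
{
  fd_set rd;
  struct timeval tv;
  FD_ZERO(&rd);
  FD_SET(sockfd, &rd);
  tv.tv_sec = ms / 1000;
  tv.tv_usec = (ms % 1000) * 1000;
  return select(sockfd + 1, &rd, NULL, NULL, &tv);
}

/* the same thing using poll(), which has no descriptor value limit
   and scales with the number of pollfd entries you pass in */
int wait_readable_poll(int sockfd, int ms)
{
  struct pollfd pfd;
  pfd.fd = sockfd;
  pfd.events = POLLIN;
  return poll(&pfd, 1, ms);
}
```

The select() variant having to track the highest descriptor number, and falling over entirely for descriptors beyond FD_SETSIZE, is exactly the kind of detail the page goes through.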

So the next time I face someone not quite understanding this whole situation or perhaps when someone reiterates something that isn’t quite true, I have a resource to point to.

Not to mention that I think this new poll vs select page fits in nicely with my other “X vs Y” articles and docs pages I’ve written over the last few years.

If you find flaws, miss details or have questions about this page, please do not hesitate to comment here, mail me about it, tweet me on twitter or use whatever method you prefer. I appreciate your feedback!

poll vs select

My Debian Black-out – the price of bleeding edge

Ok, I admit it. I run Debian Unstable so I know I deserve to get hit really badly at times when things turn ugly. It is called unstable for a reason.

The other day I decided it was about time I did a dist-upgrade. When I did, I got a remark that I’d better restart my gnome session as otherwise apps would crash. So I logged out and… I couldn’t log in again. In fact, neither my keyboard nor mouse (both on USB) worked anymore! I sighed and rebooted (for the first time in many months) only to find out that 1) it didn’t fix the problem, both input devices were still non-functional, and perhaps even more importantly 2) the wifi network didn’t work either, so I couldn’t log in to it from one of my other computers either!

Related to this story is the fact that I’ve been running an older kernel, 2.6.26, since that was the last version that built my madwifi drivers correctly. With later kernels I was supposed to use ath5k for my Atheros card, but I’ve not been very successful with ath5k and thus remained on the latest kernel for which I had a working madwifi.

I rebooted again and tried a more recent kernel (2.6.30). Yeah, then the keyboard and mouse worked again, but ath5k didn’t get the wifi up properly. I think I basically was just lacking the proper tools to check the wifi network and set the desired ssid etc, but without a network that’s a bit of a pain. Also, when I logged in on my normal gnome setup, it mentioned something about a broken panel and logged me out again! 🙁

Grrr. Of course I could switch to my backup – my laptop – but it was still highly annoying to end up locked out of your own computer.

Today I bought myself a 20 meter cat5e cable and wired my desktop so I can reach the network with the existing setup. I dist-upgraded again (now at kernel 2.6.31) and when I tried to log in it just worked. Phew. Back in business. I think I’ll leave the cable connected now that I’ve already done the job.

The lesson? Eeeh… when things break, fix them!

libssh2 upped a notch

There has been some well-founded criticism against libssh2 for a long time for its bad transfer performance when doing SCP and SFTP based transfers. Tests have proved it to be significantly slower than the openssh based alternatives in comparisons done under similar conditions. We’re talking down to a tenth(!) of the speed for SFTP.

Luckily I have an unnamed (by agreement) sponsor who pays me for improving this.

Giving it some love

I basically started out reading the SFTP code and cleaned it up as I went over it, and I added some clarifying comments etc. I found some irregularities that I fixed. Soon I could spot an obvious performance boost, like perhaps 3-4 times the previous speed. But since SFTP was painfully slow originally, this was still very crappy compared to openssh.

I then switched over to plain SCP tests. SCP is basically just an “scp” command sent over SSH with the data then streamed over a plain SSH “channel”, while SFTP is a whole additional protocol layer on top. Thus SCP is more low-level, on the actual SSH channel level, which is the foundation SFTP runs on anyway, so getting SCP faster was fundamental.
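For reference, this is roughly what an SCP download looks like with libssh2: the library sends the scp command and the file content then just arrives as a stream of channel data. This is only a sketch, assuming an already connected and authenticated session, and it skips most error handling.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <libssh2.h>

/* minimal SCP download sketch: 'session' is assumed to be an already
   connected and authenticated LIBSSH2_SESSION */
static int scp_download(LIBSSH2_SESSION *session,
                        const char *remote_path, FILE *out)
{
  struct stat sb;
  char buf[16384];
  off_t got = 0;
  LIBSSH2_CHANNEL *channel = libssh2_scp_recv(session, remote_path, &sb);
  if(!channel)
    return -1;

  /* the file content arrives as plain channel data */
  while(got < sb.st_size) {
    ssize_t n = libssh2_channel_read(channel, buf, sizeof(buf));
    if(n <= 0)
      break;
    if(got + n > sb.st_size)          /* don't write past the file size */
      n = (ssize_t)(sb.st_size - got);
    fwrite(buf, 1, (size_t)n, out);
    got += n;
  }

  libssh2_channel_free(channel);
  return (got == sb.st_size) ? 0 : -1;
}
```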

Make it speedier

My initial tests with libssh2 1.0 showed libssh2 to download data at roughly 25% of the speed of openssh when SCPing a 1GB file from an openssh server running on localhost. The openssh client shows roughly 40MB/sec on my test box.

Also, just checking my CPU load meter while doing the libssh2 transfers showed that it certainly wasn’t hitting the roof or anything. It was barely even noticeable! Of course something was really wrong but what was it?

SSH has a lower protocol layer that does the entire encryption thing, the transport layer, but on top of that is the “channel layer” that is packet based for sending data back and forth over the transport layer. This channel thing has a receive window concept, much like TCP itself has, which tells the remote side how much data it is allowed to send until it gets further notice.

libssh2 1.0 had a very conservative windowing logic. It started with a default window size of 64KB and it increased the window at every read by the same amount that was just read (which could be anything from 1KB to 16KB or so, depending on the app).

My remake of this was to simplify the logic, read data from the network more evenly distributed over time, update the window size much less frequently and increase the window size by orders of magnitude! I found that when using a window size of 38MB (600 times the previous default size!!) things started flying.
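On the API level, an application that opens its own channel can already ask for a bigger receive window up front by using libssh2_channel_open_ex() instead of the convenience macro; for the SCP and SFTP functions the library opens the channel itself, which is exactly why its internal defaults matter so much. A rough sketch, with the 38MB figure taken straight from my test runs rather than as a recommendation:

```c
#include <libssh2.h>

/* open a plain "session" channel with a much larger receive window
   than the 64KB-ish default; 'session' is assumed to be established
   and authenticated already */
static LIBSSH2_CHANNEL *open_big_window_channel(LIBSSH2_SESSION *session)
{
  const unsigned int window_size = 38 * 1024 * 1024; /* 38MB, from my tests */

  return libssh2_channel_open_ex(session,
                                 "session", sizeof("session") - 1,
                                 window_size,
                                 LIBSSH2_CHANNEL_PACKET_DEFAULT,
                                 NULL, 0);
}
```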

Improved

With these modifications, libssh2 transfers SCP at close to 40MB/sec! SFTP is still left behind at a “mere” 14MB/sec on the same test setup, but it has its own set of problems and solutions. The discussion on the libssh2 list is now more about how to sensibly size the window so it works well in different situations.

SFTP is a protocol that works more on file operations. The client sends OPEN, READ and CLOSE requests and the server replies with status and data. The READ request asks for N bytes starting at offset Z, so a simple implementation like libssh2 asks for chunk after chunk in a serial manner, increasing the offset as it loops over the range. This causes a back-and-forth effect that certainly does not make optimal use of the network bandwidth.
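In code, that serial behavior is simply a read loop; every libssh2_sftp_read() call here translates into a READ request going out and the client sitting idle until the corresponding reply has arrived. A sketch, assuming an already set-up LIBSSH2_SFTP session:

```c
#include <stdio.h>
#include <sys/types.h>
#include <libssh2.h>
#include <libssh2_sftp.h>

/* download a remote file by looping over libssh2_sftp_read() -
   each chunk is requested, then waited for, then the next one is
   requested: the "ping pong" pattern described above */
static int sftp_serial_download(LIBSSH2_SFTP *sftp,
                                const char *remote_path, FILE *out)
{
  char buf[16384];
  ssize_t n;
  LIBSSH2_SFTP_HANDLE *handle =
    libssh2_sftp_open(sftp, remote_path, LIBSSH2_FXF_READ, 0);
  if(!handle)
    return -1;

  while((n = libssh2_sftp_read(handle, buf, sizeof(buf))) > 0)
    fwrite(buf, 1, (size_t)n, out);

  libssh2_sftp_close(handle);
  return (n < 0) ? -1 : 0;
}
```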

SFTP ping pong

openssh has a nifty approach to enhance throughput for SFTP: it sends off and handles multiple outstanding READ requests in parallel so that it can better keep things busy (and the reverse when doing uploads). That concept is slightly harder to do with an API like the one libssh2 offers, but it is of course still quite doable. I suspect that we might achieve results somewhat faster by simply using multiple connections, as then we can keep this simplistic approach and still use the full bandwidth. (Yes, I realize multiple connections may not be feasible for all applications.)

Previous tests we’ve done with SFTP uploads using multiple connections have proven libssh2 to be on par with or even better than competitors on both Windows and Mac.

Please test

I’ll leave it like this for now. I’ll be very happy if people could test this version and report findings so that we can make sure this is working and stable enough to release soonish. We’ll need to do something that offers window size control to apps, but we’ll discuss that further on the mailing list. Join in!


A stream of streamings

I’m a last.fm fan. I love that it not only streams music without needing a dedicated client installed (yes, I think a flash application suits a purpose here), but its ability to suggest music I might also like is amazingly nice. I’m a “random it all” kind of guy when I listen to my local music collection in most situations as well. It is not specifically well suited for listening to exactly the songs you want; if you select a specific song it won’t even play the full-length version of it.

Lately there’s been a lot of buzz in Swedish tech media about spotify, which is a similar idea (at the moment still an invitation-only thing in Sweden). They stream music, but only to a proprietary Windows or Mac client, and currently they offer free listening with ads (embedded in the audio and visible in the client) or 99 SEK (== 9 Euros == 11 USD) per month. The client is highly focused on specific songs or artists and it has nothing in the way of “random artists I generally like and similar ones”. I’m not too thrilled.

Spotify offers its service in several places, and I hear in the UK it’s not even invitation-only (which of course is useful for the more forward-thinking hacking kind of guys who thus use a UK based proxy to reach them). There’s however no sign of a Linux client. We’re forced to run their Windows client with Wine.

I’ve gotten the impression that Pandora is a similar concept to play with if you happen to be based in the US. I’m in Sweden and Pandora just shows me a “We are deeply, deeply sorry to say that due to licensing constraints, we can no longer allow access to Pandora for listeners located outside of the U.S.”

The other day despotify.se showed up. A bunch of clever hackers reverse engineered the Spotify protocol and stream, and offer a full unofficial open sourced ncurses/libvorbis/pulse-audio/gstreamer/expat/zlib/openssl-based player! Reading the code shows that these guys certainly had to crack some hard nuts, but the activity in their IRC channel seems fierce and the code is rather clean, so I expect it to eventually become a fine player if Spotify just doesn’t decide to play hardball with them. Unfortunately, despotify hasn’t yet been able to produce a single sound for me, since it has just died on assert()s on basically every attempt I’ve made. The interface is also a bit… strange and not the easiest to figure out. (It should be noted that the despotify client still requires you to have an actual spotify account.)

It’ll be interesting to see how Spotify, or perhaps the big media companies owning all the music rights, will act on this initiative. This client does open up possibilities for new fancy features. How about ripping the stream? How about re-distributing the stream, like a proxy? And of course, it being open, it allows adding the features I want to add.

Update: just hours after I posted this, Spotify closed access to their service using the despotify client as long as you’re not a “premium” (paying) user…

Windows localhost slowness

A client of mine and I ran a bunch of tests doing FTP and SFTP transfers against localhost to measure how fast our custom solution is compared to a set of existing solutions.

The specific results from this aren’t what caught my eye, mostly because they’re currently still only used for comparisons and to measure relative improvements; what did catch it was the relative speed differences between the tests run on Mac 10.5.5, on Windows XP SP3 and on Linux 2.6.26.

Some of the Windows transfers took an order of magnitude more time than the others. Ten times longer. Since we could see this across multiple tests, each being run multiple times, and it was also visible with third party tools, the only conclusion I can draw is that Windows for some reason has a much slower localhost.

Does any reader of this have any further knowledge or details to share on this topic? Does anyone know if more recent Windows versions do this any better?

It should be noted that on Windows the ssh server used was running in cygwin, which may account for some of the slowness as cygwin isn’t really known for being blazingly fast…

Update:

Three friends responded to this question:

The first mentioned that he’d had problems on Windows in the past where 127.0.0.1 worked but ‘localhost’ didn’t, which might indicate that localhost for some reason is treated differently.

The second said that Windows Vista is said to have significant TCP improvements compared to older versions, since the TCP/IP stack was rewritten completely for that release.

Pierre (at Microsoft) pointed out that on Vista localhost resolves first to ::1 (ipv6) only, which may explain why some people experience quirks on Vista at least. This test was however done on XP…
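For anyone curious about what their own machine does, a little resolver check like the one below prints the addresses “localhost” maps to, in the order the resolver returns them. The version here uses the POSIX headers; on Windows you would include winsock2.h/ws2tcpip.h and call WSAStartup() first.

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
  struct addrinfo hints, *res, *p;
  char host[256];

  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;      /* ask for both IPv4 and IPv6 */
  hints.ai_socktype = SOCK_STREAM;

  if(getaddrinfo("localhost", NULL, &hints, &res) != 0) {
    fprintf(stderr, "resolve failed\n");
    return 1;
  }

  /* print every address, in the order the resolver returned them */
  for(p = res; p; p = p->ai_next) {
    if(getnameinfo(p->ai_addr, p->ai_addrlen, host, sizeof(host),
                   NULL, 0, NI_NUMERICHOST) == 0)
      printf("%s\n", host);
  }

  freeaddrinfo(res);
  return 0;
}
```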

Fun with executable extensions in viewvc

A few years ago I wrote a silly little perl script (let’s call it script.pl) that would fetch a page from a site that returns a “random URL off the internet”. I needed a range of URLs for a test program of mine and just making up a thousand or so URLs is tricky. Thus I wrote this script that I would run to get a range of URLs on each invocation, and then run again later to append to the log file. It wasn’t a fancy script, but it solved my task.

The script was part of a project I got funded to work on, improving libcurl back in 2005/2006, so adding and committing the script to CVS felt only natural and served a good purpose: to allow others to repeat what I did.

Fast forward to late 2008. The script is now browsable via viewvc on a site that… eh, doesn’t have “.pl” disabled as a cgi extension in its config! The result of course is that each time someone tries to view the script using the web interface, the web server invokes the script locally!

All of a sudden I got a mail from someone, apparently the admin or something of the site this old script was using, who mentioned that a machine on our network was hammering his site with many requests per second (38 requests/second apparently) and asked me to stop. It turned out a search engine crawler had indexed the viewvc output several times, and now some 8 processes or so were running this script.pl, all looping around getting a page, outputting the URL, getting another page…

While I think 38 requests per second is a bit low to even be considered a DOS, it certainly wasn’t intended nor friendly, and I was greatly surprised when I slowly realized how it all came to end up like this! Man I suck! It reminds me of my other extension mess from just a few months ago…

Maybe I’ll learn how to do things right in the future when I grow up!

10G and Direct Cache Access

As some of you might know, I currently work with a client doing 10G network stuff. 10G as in 10 gigabit/second Ethernet. That’s a lot of data. It’s actually so much data that it’s hard to even generate network loads of this magnitude to be able to do good tests, as a typical server using SATA harddrives hardly fills a one gigabit pipe due to “slow” I/O: ordinary SATA drives don’t even reach 100MB/sec. You need RAID solutions or to put the entire thing in RAM first. So generating 10 gigabit network loads requires some extraordinary solutions.

Having a server that tries to “eat” a line speed 10G stream is a big challenge, and in fact we can’t do it: 1.25 GB/sec is just too much, even though we run a quad-core 3.00GHz Xeon here, which is at least near the best “off-the-shelf” CPU/server you can get at the moment. Of course our software also does a little bit more with the data than just receiving it.

Anyway, recently I’ve been experimenting with 10G cards from Myricom, and when trying to maximize our performance with these beauties I fell over the three-letter acronym DCA. Direct Cache Access. A terribly overused acronym consisting of often-used words makes it hard to research and learn about! But here’s a great document describing some of the gory details:

Direct Cache Access for High Bandwidth Network I/O

Summary: it is an Intel technology for delivering data directly into the CPU’s cache, to reduce the bandwidth requirement to memory (note: it only decreases the bandwidth requirement at that moment, not the total requirement as it still needs to be read from memory into the cache, as noted in a comment below). Using this technique it should be possible to drastically reduce the time for getting the traffic. Support for this tech was also added to the Linux kernel a while back.

It seems DCA is (only?) implemented in Intel’s 7300 chipset family which seems to only exist for Xeon 7300 and 7400. Too bad we don’t have one of these monsters so I haven’t been able to try this out for real yet…

Currently we can generate 10G network loads using two different approaches: one is uploading a specially crafted binary blob embedded with the FPGA image to a Xilinx-equipped board with a 10G MAC that can then do some fiddling with the packets (like increasing a counter) so that they aren’t all 100% identical. It makes a pretty good load test, even if the traffic isn’t at all shaped like the “real” traffic our product will receive. Our other approach has been even less good: upload a custom firmware to the network card and have that send the same Ethernet frame… This latter approach didn’t go anywhere because it was a bit too complicated and too badly documented to make a really good generator out of it. Even if I liked being able to upload custom code to my network card! 😉

Allow me to also mention that the problem with generating 10G is with small packet sizes, like 100 bytes or so, as the main limit in the hardware seems to be the number of packets, not the payload part. Thus it is easier to do full line speed with 9000 byte packets (jumbo frames) than with the tiny ones we are likely to get when this product is in use by customers in the wild.
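To put rough numbers on that: at 10 gigabit/sec, and counting roughly 20 extra bytes per frame on the wire for preamble, start-of-frame delimiter and the minimum inter-frame gap, 100 byte frames mean on the order of ten million frames per second while 9000 byte jumbo frames mean only around 140 thousand. A little back-of-the-envelope calculation under exactly those assumptions:

```c
#include <stdio.h>

/* back-of-the-envelope packet rates at 10 gigabit/sec line speed.
   'frame_bytes' is taken as the full Ethernet frame (headers +
   payload + FCS); 20 bytes per frame are added for preamble,
   start-of-frame delimiter and the minimum inter-frame gap. */
static double frames_per_sec(double line_bps, double frame_bytes)
{
  const double overhead_bytes = 20.0; /* preamble + SFD + IFG */
  return line_bps / ((frame_bytes + overhead_bytes) * 8.0);
}

int main(void)
{
  const double line = 10e9; /* 10 gigabit/sec */
  printf("100 byte frames:  %.1f million frames/sec\n",
         frames_per_sec(line, 100.0) / 1e6);
  printf("9000 byte frames: %.0f thousand frames/sec\n",
         frames_per_sec(line, 9000.0) / 1e3);
  return 0;
}
```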

Update: this article was written in 2008. Please note that many things may have changed since then.

Filling our pipes

At around 13:43 GMT on Friday the 5th of December 2008, the network that hosts a lot of services like this site, the curl site, the rockbox site, the c-ares site, CVS repositories, mailing lists, my own email and a set of other open source related stuff, became the target of a vicious and intense DDoS attack. The attack was in progress until about 17:00 GMT on Sunday the 7th. The target network is owned and run by CAG Contactor.

Tens of thousands of machines on the internet suddenly started trying to access a single host within the network. The IP they targeted has in fact never been publicly used as long as we’ve owned it (which is just a bit under two years) and it has never had any public services.

We have no clue whatsoever why someone would do this against us. We don’t have any particular services that anyone would gain anything by killing. We’re just very puzzled.

Our “ISP”, the guys we buy bandwidth and related services from, said the attack used up about 1 gigabit/sec worth of bandwidth, and with our “mere” 10 megabit/sec connection it was of course impossible to offer any services while this was going on.

It turns out our ISP made the biggest blunder and is the main cause of the length of this outage: we could immediately spot that the target was a single IP in our class C network. We asked them to block all traffic to this IP as far out as possible to stop such packets from entering their network. And they did. For a short while there was silence and sense again.

For some reason that block “fell off” and our network got swamped again, and it then remained unusable for another 48 hours or so. We know this since our sysadmin guy investigated our firewall logs on midday Sunday and they all revealed that same target IP as destination. Since we only have a during-office-hours support deal with our network guys (as we’re just a consultant company with no services that really need 24 hour support), they simply didn’t care much about our problem but said they would deal with it Monday morning. So our sysadmin shut down our firewall to save our own network from logging overload and whatnot.

Given the explanations I’ve gotten over the phone (I have yet to see and analyze logs from this), it does sound like some sort of SYN flood, and they attempted to connect to many different TCP ports.

4-5 hours after the firewall was shut down, the machines outside of our firewall (but still on our network) suddenly became accessible again. The attack had stopped. We have not seen any traces of it since. The firewall is still shut down though; the first guy coming to the office Monday morning will switch it on again and then – hopefully – all services should be back to normal.