Category Archives: Technology

Really everything related to technology

libssh2 upped a notch

There has been some well-founded criticism of libssh2 for a long time over its poor transfer performance for SCP and SFTP based transfers. Tests have shown it to be significantly slower than the openssh based alternatives under similar conditions. We’re talking down to a tenth(!) of the speed for SFTP.

Luckily I have an unnamed (by agreement) sponsor who pays me to improve this.

Giving it some love

I basically started out by reading the SFTP code and cleaning it up as I went, adding some clarifying comments and fixing the irregularities I found along the way. Soon I could see an obvious performance boost, perhaps 3-4 times the previous speed. But since SFTP was painfully slow to begin with, this was still very crappy compared to openssh.

I then switched over to plain SCP tests. SCP is basically just an “scp” command sent over SSH, after which the data is streamed over a plain SSH “channel”, while SFTP is a whole additional protocol layer on top. Thus SCP works at a lower level, on the actual SSH channel that SFTP also runs on, so getting SCP faster was fundamental.
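To make that layering concrete, here is a minimal sketch of what an SCP download looks like through libssh2’s public API (error handling, socket setup and authentication omitted; the helper name and buffer size are made up for illustration). libssh2_scp_recv() issues the remote “scp” command for us, and the file content then arrives as a plain stream of channel data:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <libssh2.h>

/* Minimal SCP download sketch (illustration only): libssh2_scp_recv() sends
   the remote "scp" command and the file then arrives as plain channel data
   that we read until we have fileinfo.st_size bytes. */
static int scp_download(LIBSSH2_SESSION *session, const char *remote_path,
                        FILE *out)
{
    struct stat fileinfo;
    LIBSSH2_CHANNEL *channel = libssh2_scp_recv(session, remote_path, &fileinfo);
    if(!channel)
        return -1;

    char buf[16384];
    off_t got = 0;
    while(got < fileinfo.st_size) {
        size_t want = sizeof(buf);
        if((off_t)want > fileinfo.st_size - got)
            want = (size_t)(fileinfo.st_size - got);

        ssize_t n = libssh2_channel_read(channel, buf, want);
        if(n <= 0)
            break;                      /* error or premature EOF */
        fwrite(buf, 1, (size_t)n, out);
        got += n;
    }

    libssh2_channel_free(channel);
    return (got == fileinfo.st_size) ? 0 : -1;
}
```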

Make it speedier

My initial tests with libssh2 1.0 showed it downloading data at roughly 25% of openssh’s speed when SCPing a 1GB file from an openssh server running on localhost. The openssh client reaches roughly 40MB/sec on my test box.

Also, just checking my CPU load meter while doing the libssh2 transfers showed that it certainly wasn’t hitting the roof or anything. It was barely even noticeable! Clearly something was really wrong, but what?

SSH has a lower protocol layer, the transport layer, which handles the entire encryption side, and on top of that sits the “channel layer”, which sends packets of data back and forth over the transport layer. Channels have a receive window concept, much like TCP itself, which tells the remote side how much data it is allowed to send until further notice.

libssh2 1.0 had very conservative windowing logic. It started with a default window size of 64KB and enlarged it on every read by the same amount that was just read (which could be anything from 1KB to 16KB depending on the application).

My remake of this simplified the logic: read data from the network more evenly over time, update the window size much less frequently and increase the window size by orders of magnitude! I found that with a window size of 38MB (roughly 600 times the previous default!) things started flying.
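To illustrate the idea (this is a conceptual sketch with made-up names and constants, not the actual libssh2 internals): instead of telling the peer after every read that it may send that little bit more, you stay quiet until a good chunk of the window has been consumed and then grow it back to a large target in one go.

```c
/* Conceptual sketch of the windowing heuristic described above; the constant
   and the helper are made up for illustration and are not libssh2 code. */
#define TARGET_WINDOW (38 * 1024 * 1024)   /* the "things started flying" size */

struct channel_state {
    unsigned long recv_window;   /* what the peer is still allowed to send */
};

/* Called after we have consumed 'eaten' bytes from the channel.
   Returns the amount to put in a CHANNEL_WINDOW_ADJUST packet, or 0. */
static unsigned long window_adjustment(struct channel_state *ch,
                                       unsigned long eaten)
{
    ch->recv_window -= eaten;

    /* Old logic: send an adjustment of 'eaten' bytes on every read.
       New logic: stay quiet until the window has shrunk below half of the
       target, then grow it back to the full target in one go. */
    if(ch->recv_window < TARGET_WINDOW / 2) {
        unsigned long grow = TARGET_WINDOW - ch->recv_window;
        ch->recv_window = TARGET_WINDOW;
        return grow;
    }
    return 0;   /* no adjustment packet needed right now */
}
```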

Improved

With these modifications, libssh2 transfers SCP at close to 40MB/sec! SFTP is still left behind at a “mere” 14MB/sec on the same test setup, but it has its own set of problems and solutions. The discussion on the libssh2 list is now more about how to size the window sensibly so that it works well in different situations.

SFTP is a protocol built around file operations. The client sends OPEN, READ and CLOSE requests and the server replies with status and data. A READ request asks for N bytes starting at offset Z, so a simple implementation like libssh2’s asks for chunk after chunk serially, increasing the offset as it loops over the range. This causes a back-and-forth effect that certainly does not make optimal use of the network bandwidth.
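For comparison, this is roughly what that simple serial approach looks like against libssh2’s SFTP API (a sketch only, with error handling and session setup left out; the helper name and buffer size are made up):

```c
#include <stdio.h>
#include <sys/types.h>
#include <libssh2.h>
#include <libssh2_sftp.h>

/* Serial SFTP download sketch (illustration only): each libssh2_sftp_read()
   waits for the reply to its READ request before the next one goes out,
   which is exactly the ping-pong pattern described above. */
static int sftp_download(LIBSSH2_SESSION *session, const char *remote_path,
                         FILE *out)
{
    LIBSSH2_SFTP *sftp = libssh2_sftp_init(session);
    if(!sftp)
        return -1;

    LIBSSH2_SFTP_HANDLE *handle =
        libssh2_sftp_open(sftp, remote_path, LIBSSH2_FXF_READ, 0);
    if(!handle) {
        libssh2_sftp_shutdown(sftp);
        return -1;
    }

    char buf[16384];
    ssize_t n;
    while((n = libssh2_sftp_read(handle, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, out);   /* the read offset advances implicitly */

    libssh2_sftp_close(handle);
    libssh2_sftp_shutdown(sftp);
    return (n < 0) ? -1 : 0;
}
```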

SFTP ping pong

openssh has a nifty approach to boost SFTP throughput: it sends off and handles multiple outstanding READ requests in parallel so that it can better keep the pipe busy (and the reverse when doing uploads). That concept is slightly harder to do with an API like the one libssh2 offers, but it is of course still quite doable. I suspect we might get faster results sooner by simply using multiple connections, as then we can keep this simplistic approach and still use the full bandwidth. (Yes, I realize multiple connections may not be feasible for all applications.)

Previous tests we’ve done with SFTP uploads using multiple connections have shown libssh2 to be on par with or even better than the competition on both Windows and Mac.

Please test

I’ll leave it like this for now. I’d be very happy if people could test this version and report their findings so that we can make sure it is working and stable enough to release soonish. We’ll also need to offer applications some way to control the window size, but we’ll discuss that further on the mailing list. Join in!


Snaxx 20


In order to boost the amount of bullshit and beer drinking within a technological atmosphere, we give you Snaxx #20 on April the 15th.

The Haxx team invites friends to a cozy little crowded place in central Stockholm where we’ll eat, chat and drink while talking hard-core tech, open source, repeated digits among the pi decimals, kernel scheduling, how threading works on the latest Intel CPU cores and similar worldly topics.

We only ask that you tell us first if you intend to come so that we can make sure we all fit!

murl for extended curlness

I’m a firm believer in the old unix mantra of letting each tool do its job and do it well, and pass on the rest of the work to the next tool. I’ve always said that curl should stay that way: remain within its defined walls and not try to do everything.

But time passes and more and more ideas are thrown up in the air, or in some cases directly at me, and the list of things we could do but don’t, due to this philosophical limit of staying focused, has grown. It currently includes at least:

  • metalink support
  • recursive HTML downloads
  • recursive/wildcard FTP transfers
  • bittorrent support
  • automatic proxy configuration
  • simultaneous/parallel download support

Educated readers will of course immediately detect that this list (if implemented) would make a tool that basically does what wget already does (and a lot more), and I’ve explicitly said for a decade that curl is not a wget clone. Maybe it is time for us (me?) to reevaluate that sentiment – at least in some sense.

I don’t want to sacrifice the concepts that have worked so fine for curl under so many years, so I’m still firmly against stuffing all this into curl (or libcurl). That simply will not happen with me at the wheel.

A much more interesting alternative would be to instead start working on a second tool within the curl project: murl. A tool that does basically everything that curl already does, but also opens the doors for adding just about everything else we can cram in and that is still related to data transfers. That would include, but not be restricted to, all the fancy stuff mentioned in the list above!

No, the name murl is not set in stone, nor is this whole idea anything more than early thoughts thrown out at this point, so it may or may not actually take off. It will probably depend on whether I get support and help from fellow hackers to get it started and moving along.


curl 7.19.4

curl and libcurl 7.19.4 have just been released! This time the perhaps most notable fix is for the CVE-2009-0037 security vulnerability that this release addresses. A little over 600 days have passed since the previous vulnerability was announced.

Other than that major event, there are a bunch of interesting changes in this release (a couple of the new options are shown in a small example further down):

  • Added CURLOPT_NOPROXY and the corresponding --noproxy option
  • The OpenSSL-specific code disables TICKET (RFC 5077), which is enabled by default in OpenSSL 0.9.8j
  • Added CURLOPT_TFTP_BLKSIZE
  • Added CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC, with the corresponding curl options --socks5-gssapi-service and --socks5-gssapi-nec
  • Improved IPv6 support when built with c-ares >= 1.6.1
  • Added CURLPROXY_HTTP_1_0 and --proxy1.0
  • Added docs/libcurl/symbols-in-versions
  • Added CURLINFO_CONDITION_UNMET
  • Added support for Digest and NTLM authentication using GnuTLS
  • CURLOPT_FTP_CREATE_MISSING_DIRS can now be set to 2 to retry the CWD even when MKD fails
  • GnuTLS initialization moved to curl_global_init()
  • Added CURLOPT_REDIR_PROTOCOLS and CURLOPT_PROTOCOLS

We also did at least 15 documented bugfixes in this release, and 25 people are credited for helping to make it happen.
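As a small example of how a couple of the new options are meant to be used from libcurl (a sketch only; the URL and the host list are just placeholders):

```c
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

        /* new in 7.19.4: skip the configured proxy for these hosts */
        curl_easy_setopt(curl, CURLOPT_NOPROXY, "localhost,example.com");

        /* new in 7.19.4: restrict which protocols this handle may use */
        curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                         (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));

        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }

    curl_global_cleanup();
    return 0;
}
```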

A113

I have two kids, aged two and five. In our home I get to see a fair amount of animated movies, and yes, most of them are run over and over again as the kids for some reason like to see the same movie an endless number of times.

Anyway, what does a man like me do when he sees the same movies many times? He spots inconsistencies and patterns. My wife can get annoyed at times when I, for example, remark on how, in Finding Nemo, Nemo can get back into the main tank when the only way back is a pipe stuffed with a plant.

Or the fact that Dinoco is both the name of a gas station in Toy Story I and the name of the racing team in Cars.

More recently I detected a bigger pattern, one that hits a bit closer to home:

A113 is on the license plate of Andy’s mother’s car as visible towards the end of Toy Story I.

In The Incredibles, A113 is the conference room number where our main hero Bob is supposed to meet someone on that special island, only to end up in a fight with the big spider robot thing.

A113 shows up on a screen in Wall-e as some kind of instruction from the huge Axiom ship’s computer.

A113 is marked on the “electricity cabinet” outside my house! (see picture below)

Yeah, and once I had tracked all this down and it felt a bit strange, I typed A113 into that search engine thing and of course got to learn everything about A113.

A stream of streamings

I’m a last.fm fan. I love that it streams music without needing a dedicated client installed (yes, I think a flash application suits that purpose), and I think its ability to suggest music I might also like is amazingly nice. I’m a “random it all” kind of guy when I listen to my local music collection in most situations as well. It is not particularly well suited for listening to exactly the songs you want, though: if you select a specific song it won’t even play the full-length version of it.

Lately there’s been a lot of buzz in Swedish tech media about Spotify, which is a similar idea (at the moment still invitation-only in Sweden). They stream music, but only to a proprietary Windows or Mac client, and currently they offer free listening with ads (embedded in the audio and visible in the client) or 99 SEK (== 9 Euros == 11 USD) per month. The client is highly focused on specific songs or artists and has nothing in the way of “random artists I generally like and similar ones”. I’m not too thrilled.

Spotify offers its service in several places, and I hear in the UK it’s not even invitation-only (which of course is useful for the more forward-thinking hacking kind of guys who thus use a UK based proxy to reach them). There’s however no sign of a Linux client, so we’re forced to run their Windows client under Wine.

I’ve gotten the impression that Pandora is a similar concept to play with if you happen to be based in the US. I’m in Sweden and Pandora just shows me a “We are deeply, deeply sorry to say that due to licensing constraints, we can no longer allow access to Pandora for listeners located outside of the U.S.”

The other day despotify.se showed up. A bunch of clever hackers reverse engineered the Spotify protocol and stream, and offer a full unofficial open-sourced ncurses/libvorbis/pulse-audio/gstreamer/expat/zlib/openssl-based player! Reading the code shows that these guys certainly had to crack some hard nuts, but the activity in their IRC channel seems fierce and the code is rather clean, so I expect it to eventually become a fine player if Spotify just doesn’t decide to play hardball with them. Unfortunately, despotify hasn’t yet been able to produce a single sound for me, since it has just died on assert()s on basically every attempt I’ve made. The interface is also a bit… strange and not the easiest to figure out. (It should be noted that the despotify client still requires you to have an actual Spotify account.)

It’ll be interesting to see how Spotify, or perhaps the big media companies owning all the music rights, will act on this initiative. This client does open up possibilities for fancy new features. How about ripping the stream? How about re-distributing the stream through a proxy? And of course, it being open, it makes it possible to add the features I want to add.

Update: just hours after I posted this, Spotify closed access to their service for the despotify client unless you’re a “premium” (paying) user…

Windows localhost slowness

A client of mine and I ran a bunch of tests doing FTP and SFTP transfers against localhost to measure how fast our custom solution is compared to a set of existing solutions.

The specific results aren’t what caught my eye, mostly because they’re currently only used for comparisons and to measure relative improvements. What did was the relative speed differences between the tests run on Mac OS X 10.5.5, Windows XP SP3 and Linux 2.6.26.

Some of the Windows transfers took an order of magnitude more time than the others. Ten times longer. Since we could see this across multiple tests, each run multiple times, and it was also visible with third-party tools, the only conclusion I can draw is that Windows for some reason has a much slower localhost.

Does any reader of this have any further knowledge or details to share on this topic? Anyone knows if more recent Windows versions do this any better?

It should be noted that on Windows the ssh server used was running in cygwin, which may account for some of the slowness as cygwin isn’t really known for being blazingly fast…

Update:

Three friends responded to this question:

The first mentioned that he’d had problems on Windows in the past where 127.0.0.1 worked but ‘localhost’ didn’t, which might indicate that localhost is for some reason treated differently.

The second said that Windows Vista reportedly has significant TCP improvements compared to older versions, since the TCP/IP stack was rewritten completely for that release.

Pierre (at Microsoft) pointed out that on Vista, localhost resolves first to ::1 (IPv6) only, which may explain why some people experience quirks at least on Vista. This test was however done on XP…

Explanation for hjsdhjerrddf.com domains

If you’ve checked some of your spam mails recently you might’ve discovered that a large share of them include links to sites with seemingly random domain names, like hjsdhjerrddf.com or qwetyqfweyqt.com and so on. Hammering-the-keyboard looking names.

The explanation behind these is quite simple and sad: ICANN allows a “tasting period” before you pay for a domain. Spammers therefore register all sorts of random names, spam the world with mails pointing users to these domains, return the domain names before they’ve paid anything, and move on to the next batch of names.

With a large enough set of people and programs doing this, a large number of names are constantly in use but never paid for, and constantly changing owners.

Conclusion: wherever there’s a loophole in the system, someone is there to exploit it for the purpose of sending spam.

More suggested HTTP fun

I’ve previously expressed my deepest dislike of where the HTML5 work is going, and just yesterday two new internet-drafts appeared on ietf.org that stirred up discussions all around. They’re claimed to be “part of our effort to remove from HTML5 sections that are more appropriate elsewhere”, but I’m thinking they’re rather inappropriate everywhere…

The first one, named Content-Type Processing Model, hits a subject I’ve been over before, namely the stupidity of having web browsers guess the content type based on what the content looks like. IE introduced the “I really mean it” property; the HTML5 team wants to standardize the guessing. Personally, I think the web would become a better place if the browsers instead became stricter and followed more closely what the servers actually say the content is, and then users would complain to the site admins when things are wrong and things would get fixed.

Guessing content types allows for sloppy behavior, makes it harder to write browsers for the web, and still carries a significant risk of guessing wrong.

The second draft advocates the new HTTP header “Origin”, which according to the authors would help guard servers against CSRF (“Cross-Site Request Forgery”). The main author says 3% of users on the Internet get their Referer header stripped while virtually none get Origin stripped. I claim this is a bogus argument, since Referer gets stripped because it is a known and established header and Origin is not. I also completely fail to see the goodness of this, and judging by several of the other responses on the ietf-http-wg mailing list I am not alone…