“Hacking me”

If you ever wondered how clever it was of me to once make an FTP tool that used the default anonymous password curl_by_daniel@…, and why I changed that to ftp@example.com instead, here’s a golden snippet to just absorb and enjoy:

Date: Thu, 23 Dec 2010 22:56:00
From: iHack3r <hidden>
To: info@[my company]
Subject: Hacking me

To the idiot named Daniel, Please stop brute force attacking my FTP client. I do not appreciate it, i have an anonymous account set up for the general public to access my files that i want them to access, QUIT trying to hack the admin because 1. DISABLED unless i am leaving to go somewhere without my computer 2: THE PASSWORD is random letters and numbers.

-iHack3r

The password was changed on Feb 13 2007 in curl version 7.16.2, but there is a surprisingly large number of older curls still around out there…

Update: as the person responded again after having read this blog post and still didn’t get it, I felt the urge to speak up in even more clear terms:

I didn’t have anything to do with any “hacker attack” on any site. Not yours, and not anyone else’s. The fact that almost-my-email address appeared in your logs is because I wrote the FTP client. It is a general FTP client that is being used by a very very large amount of people all over the world. If I ever would attack a site, why on earth would I send along my real name or email address?

Byte ranges for FTP

In the IETF ftpext2 working group there have been discussions about clients’ and servers’ ability to support “ranged” file transfers, that is, transferring only a piece of a given file. FTP has supported the REST command since the dawn of man (RFC 765 – June 1980); using that command, a client can set the starting point for a transfer, but there is no way to set the end point. HTTP has supported the Range: header since the first HTTP 1.1 spec back in January 1997, and it supports both a start and an end point. The HTTP header in fact supports multiple ranges within the same header, but let’s not overdo it here!
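To make the start-plus-end-point idea concrete, here is a small self-contained Python sketch (my own illustration, not part of any FTP or curl work): it spins up a throwaway local HTTP server that honors a single bytes=start-end range, then fetches just a 100-byte slice of a 1024-byte “file”.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

PAYLOAD = bytes(range(256)) * 4  # a 1024-byte "file"

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse a single "bytes=start-end" range; serve 206 Partial Content.
        rng = self.headers.get("Range", "")
        start, end = (int(x) for x in rng.split("=")[1].split("-"))
        chunk = PAYLOAD[start:end + 1]
        self.send_response(206)
        self.send_header("Content-Range", f"bytes {start}-{end}/{len(PAYLOAD)}")
        self.send_header("Content-Length", str(len(chunk)))
        self.end_headers()
        self.wfile.write(chunk)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Ask for bytes 100..199 only -- both a start and an end point.
req = Request(f"http://127.0.0.1:{server.server_port}/file",
              headers={"Range": "bytes=100-199"})
piece = urlopen(req).read()
print(len(piece))  # 100 bytes, not the whole 1024
server.shutdown()
```

FTP’s REST can only express the `start` half of that request; ending early is up to the client.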

Currently, to avoid getting an entire file, a client simply closes the data connection when it has received all the data it wants. The unfortunate reality is that some servers don’t notice clients doing this, so for this to work reliably a client also has to send ABOR. Once that command has been sent, there is no way for the client to reliably figure out the state of the control connection, so it has to be closed as well (which is crap in case more files are to be transferred to or from the same host). It primarily becomes unreliable because when ABOR is sent, the client gets one or two responses back due to a race between the closing and the actual end of the transfer, and it isn’t possible to tell exactly how to continue.

A solution for the future is being worked on. I’ve joined the effort to write a spec that will suggest a new FTP command that sets the end point for a transfer, in the same vein as REST sets the start point. For the moment, we’ve named our suggested command RANG (short for range). “We” in this context means Tatsuhiro Tsujikawa, Anthony Bryan and myself, but we of course hope to get further valuable feedback from the great ftpext2 people.

There already are use cases that want range requests for FTP. The people behind metalinks, for example, want to download the same file from many servers, and then it makes sense to be able to download little pieces from different sources.

The people who found the libcurl bugs I linked to above use libcurl as part of the Fedora/Red Hat installer Anaconda, and if I understand things right they use this feature to get just the beginning of some files, to check them out and avoid having to download a full file before knowing they truly want it. Thus it saves lots of bandwidth.

In short, the use-cases for ranged FTP retrievals are quite likely pretty much the same ones as they are for HTTP!

The first RANG draft is now available.

Add latency to localhost

Pádraig Brady taught me a great trick in a comment to a previous blog post and it was so neat I feel a need to highlight it further as it also makes it easier for me to find it again later!

To simulate a far away server, add latency to the localhost device. For example, here we add 100 milliseconds of delay (which makes for a 200ms ping time to localhost):

$ tc qdisc add dev lo root handle 1:0 netem delay 100msec

Restore it back to normal again with:

$ tc qdisc del dev lo root

In addition, you can add random packet loss. For example, 500ms latency with 2.5% packet loss:

$ tc qdisc add dev lo root handle 1:0 netem delay 250msec loss 2.5%
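If you want to convince yourself the delay actually took effect, timing a round trip over a local socket works too. Here is a minimal Python sketch (the throwaway echo server and timing code are my own, just for illustration); with the 100ms netem delay in place it should report roughly 200 ms, and near zero otherwise:

```python
import socket
import threading
import time

# Tiny echo server on localhost so we can time a full round trip.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def echo():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(16))  # echo back whatever arrives
    conn.close()

threading.Thread(target=echo, daemon=True).start()

cli = socket.create_connection(srv.getsockname())
t0 = time.perf_counter()
cli.sendall(b"ping")
cli.recv(16)
rtt_ms = (time.perf_counter() - t0) * 1000
print(f"RTT to localhost: {rtt_ms:.1f} ms")
cli.close()
srv.close()
```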


From Magic to Desire HD

I got into the world of Android for real when I got my HTC Magic in July last year, as my first smartphone. It has served me well for almost 18 months and now I’ve taken the next step: I got myself an HTC Desire HD to replace it. For your and my own pleasure and amusement, I’m presenting my comparison of the two phones here.

The bump up from a 3.2″ 480×320 screen to a 4.3″ 800×480 one is quite big. The bigger screen also feels crisper and brighter, though I’m not sure if the size helps give that impression. Even though the 4.3″ screen has the same resolution as several 3.7″ phones, the pixel density is still higher than my old phone’s and, if I may say so, it is quite OK.

The Desire HD phone is huge: 68 mm wide, 11.8 mm thick, 123 mm tall and a massive 164 grams, which makes it a monster next to the Magic. The Magic is 55.5 mm wide, 13.6 mm thick and 113 mm tall at 116 grams.

So when put on top of the HD with two sides aligned, the HD is 10 mm larger in two directions. Taken together, the bigger size is not a problem. The big screen is lovely to use when browsing the web, reading emails and using the on-screen keyboard. I have no problem sliding the phone into my pocket, and the weight actually makes the phone feel solid and reliable in my hand. Also, the HD has much less “margin” outside of the screen than the Magic, so a higher percentage of the front is now screen.

The big screen makes the keyboard much easier to type on. The Android 2.2 (Sense?) keyboard is also better than the old 1.5 one that shipped on the Magic. The ability to switch language quickly is going to make my life soooo much better. And again, the big screen makes the buttons larger and more separated, and that is good.

The HD has soft buttons on the bottom of the phone, where the Magic has physical ones. I actually like physical ones a bit better, but I’ve found these to work really nicely and I’ve not had much reason to long for the old ones. I also appreciate that the HD has the four buttons in the same order as the Magic, so I don’t have to retrain my spine. The fact that Android phones can have the buttons in other orders is a bit confusing to me, and I think it is entirely pointless for manufacturers not to go with a single unified order!

I never upgraded the Magic. Yes, I know it’s a bit of a tragic reality when a hacker-minded person like myself doesn’t even get around to upgrading the firmware of his phone, and I haven’t experienced cyanogenmod other than through hearsay yet. Thus, the Android 2.2 on the HD feels like a solid upgrade from the old and crufty Magic’s Android 1.5. The availability of a long range of applications that didn’t work on the older Android is also nice.

Desire HD is a fast phone. It is clocked at twice the speed of the Magic, I believe the Android version is faster in general, it has more RAM and it has better graphics performance. Everything feels snappy and happens faster than before: getting a web page to render, installing apps from the market, starting things. Everything.

The HTC Magic was the first Android phone to appear in Sweden. It shipped with standard Android, before HTC started to populate everything with their HTC Sense customization. This is therefore also my introduction to HTC Sense, and as I’ve not really used 2.2 before either, I’m not 100% sure exactly what is Sense and what’s just a better and newer Android. I don’t mind that very much. I think HTC Sense is a pretty polished thing, and it doesn’t stray far enough from regular Android to annoy me much.

I’ve not yet used the HD enough, in a similar way to how I used the Magic, to be able to judge how the battery time compares. The Magic’s 1340 mAh battery spec against the HD’s 1230 mAh doesn’t really say much. The HD battery is also physically smaller.

USB micro vs mini. The USB micro plug was designed to handle more insert/unplug cycles and “every” phone these days uses it. The Magic was of the former generation and came with a mini plug. There’s not much to say about that, other than that the GPS in my car uses a mini plug, so the cable in the car conveniently charged both my phone and the GPS; now I have to track down a converter so that I don’t have to swap between two cables just for that reason.

The upgrade to a proper earphone plug is a huge gain. The Magic was one of the early and few phones that only had a USB plug for charging, earphones and data exchange. The most annoying part of that was that I couldn’t listen with my earphones while charging.

The comparison image on the right side here is a digital mock-up that I’ve created using the correct scale, so it shows the devices true relative sizes. I just so failed at making a decent proper photograph…

Making SFTP transfers fast

SFTP, the SSH File Transfer Protocol, is a misleading name. It gives you the impression that it might be something like a secure version of FTP, perhaps something like FTPS but modeled over SSH instead of SSL. But it isn’t!

I think a more suitable name would’ve been SNFS or FSSSH, that is: networked file system operations over SSH, as that is in fact what SFTP is. The SFTP protocol is closer in nature to NFS than to FTP. It is a protocol for sending and receiving binary packets over a (secure) SSH channel to read files, write files and so on. But it doesn’t work on the basis of entire files, like FTP; instead it sends commands like “OPEN file as FILEHANDLE”, “WRITE this piece of data at OFFSET using FILEHANDLE” etc.

SFTP was being defined by a working group within the IETF, but the effort died before any specification was finalized. I wasn’t around then so I don’t know how that happened. During the course of the work, several drafts of the protocol were released using different protocol versions. Versions 3, 4, 5 and 6 are the ones most used these days. Lots of SFTP implementations today still only implement the version 3 draft (like libssh2, for example).
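As a tiny illustration of the wire format, SFTP packets in the version 3 draft are length-prefixed binary records. The following Python sketch (my own, not taken from libssh2) builds the initial SSH_FXP_INIT packet a client sends to announce that it speaks protocol version 3:

```python
import struct

# SFTP (version 3 draft) packets are length-prefixed binary records:
#   uint32 length | byte type | type-specific payload
# SSH_FXP_INIT has type 1 and carries the client's protocol version.
SSH_FXP_INIT = 1

def sftp_packet(ptype: int, payload: bytes) -> bytes:
    body = struct.pack(">B", ptype) + payload
    return struct.pack(">I", len(body)) + body

# The very first packet a client sends: "I speak version 3".
init = sftp_packet(SSH_FXP_INIT, struct.pack(">I", 3))
print(init.hex())  # 000000050100000003
```

Everything else in the protocol (OPEN, READ, WRITE, CLOSE…) is framed the same way, just with richer payloads.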

Each packet in the SFTP protocol gets a response from the server to acknowledge it was received. It also includes an error code etc. So, the basic concept to write a file over SFTP is:

[client] OPEN <filehandle>
[server] OPEN OK
[client] WRITE <data> <filehandle> <offset 0> <size N>
[server] WRITE OK
[client] WRITE <data> <filehandle> <offset N> <size N>
[server] WRITE OK
[client] WRITE <data> <filehandle> <offset N*2> <size N>
[server] WRITE OK
[client] CLOSE <filehandle>
[server] CLOSE OK

This example obviously assumes the whole file was written in three WRITE packets. A single SFTP packet cannot be larger than 32768 bytes, so even if your client could read the entire file into memory, it can only send it away in very many small chunks. I don’t know the rationale for selecting such a small maximum packet size, especially since the SSH channel layer over which SFTP packets are transferred doesn’t have the same limitation but allows much larger ones! Interestingly, if you request a READ of N bytes from the server, you apparently imply that you can deal with packets of that size, as the server can then send packets back that are N bytes (plus header)…

Enter network latency.

More traditional transfer protocols like FTP, HTTP and even SCP work on entire files. Roughly like “send me that file and keep sending until the entire thing is sent”. The use of windowing in the transfer layer (TCP for FTP and HTTP and within the SSH channels for SCP) allows flow control to work without having to ACK every single little packet. This is a great concept to keep the flow going at high speed and still allow the receiver to not get drowned. Even if there’s a high network latency involved.

The nature of SFTP and its ACK for every small data chunk it sends, makes an initial naive SFTP implementation suffer badly when sending data over high latency networks. If you have to wait a few hundred milliseconds for each 32KB of data then there will never be fast SFTP transfers. This sort of naive implementation is what libssh2 has offered up until and including libssh2 1.2.7.

To achieve speedy transfers with SFTP, we need to “pipeline” the packets. We need to send out several packets before we expect the answers to previous ones, to make the sending of an SFTP packet and the checking of the corresponding ACKs asynchronous. Like in the above example, we would send all WRITE commands before we wait for/expect the ACKs to come back from the server. Then the round-trip time essentially becomes a non-factor (or at least a very small one).
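A back-of-the-envelope model shows why pipelining matters. The sketch below is my own illustration (round numbers, not a benchmark) and ignores bandwidth entirely: it compares a stop-and-wait client that pays one RTT per 32KB packet against one that keeps a window of packets in flight.

```python
import math

# Back-of-the-envelope model of SFTP upload time, dominated by round trips.
# Illustrative only; real transfers are also bounded by bandwidth.

PACKET = 32768  # maximum SFTP packet payload

def naive_time(file_bytes, rtt_s, packet=PACKET):
    """Stop-and-wait: one full RTT paid per WRITE packet."""
    return math.ceil(file_bytes / packet) * rtt_s

def pipelined_time(file_bytes, rtt_s, window, packet=PACKET):
    """Keep `window` WRITEs in flight: roughly one RTT per window."""
    packets = math.ceil(file_bytes / packet)
    return math.ceil(packets / window) * rtt_s

size = 10 * 1024 * 1024                  # a 10 MB file = 320 packets
print(naive_time(size, 0.1))             # 320 RTTs at 100 ms -> 32.0 s
print(pipelined_time(size, 0.1, 64))     # 5 window-loads     -> 0.5 s
```

With a 64-packet window the round-trip cost shrinks by the window factor, which is exactly the kind of difference we measured.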

libssh2

We’ve worked on implementing this kind of pipelining for SFTP uploads in libssh2 and it seems to have paid off. In some measurements libssh2 is now one of the faster SFTP clients.

In tests I did over a high-latency connection, I could boost libssh2’s SFTP upload performance 8 (eight) times compared to the former behavior. In fact, that’s compared to earlier git behavior, comparing to the latest libssh2 release version (1.2.7) would most likely show an even greater difference.

My plan is now to implement this same concept for SFTP downloads in libssh2, and then look into offering a slightly modified API to let applications use pipelined transfers better and more easily.

What is Android anyway

Android, the software environment, has gotten a lot of press, popularity and interest from all over lately. People on the streets know there’s something called Android, companies know people know and so on. Everyone (well apart from a few competitors perhaps) likes Android it seems.

Being an embedded guy I like keeping an eye on the embedded world, and since Android is pretty embedded, this at least touches my universe. What is Android anyway? android.com says Android is “an open-source software stack for mobile devices, and a corresponding open-source project led by Google”. Not very specific, is it?

You can already find Android on mobile phones, media players, tablets, TVs and more. Very soon we’ll see it in car infotainment equipment, GPSes and all sorts of things that have displays. Clearly Android is not only for mobile phones and not even necessarily for mobile things. TVs often aren’t that mobile… And not touchscreen either.

Android with tweaks

The fact that there hardly exist two completely identical Android installs is frequently debated. Lots of manufacturers patch and change the look and feel of Android to differentiate. Quite clearly, Android is not associated with any particular look or feel.

Android with binary drivers

Almost all Android installations you get on Android phones and devices today include a fair amount of closed, proprietary drivers. It means that even if the companies provide the source code for all the free parts in time (which they sometimes seem to have a hard time doing), there are still parts you don’t get to see the code for. So getting a complete Android installation from source to run on your newly purchased Android device can be a challenge. It also shows that Android can contain an unspecified amount of extra proprietary pieces without that disqualifying it from being Android.

Android without apps

I have friends who work on devices where the customer has requested them to run Android, although the devices don’t offer any ability to run 3rd party apps. Android is then only there for the original developers writing specific code for that device. Potential buyers of such a device won’t get any of the particular Android benefits they might be used to from their mobile phones running Android, as the device is completely closed in all practical aspects.

Android without market

Devices that don’t meet Google’s demands and don’t get to be “Google certified” don’t get to install the Google market app etc, but a company that wants to can in fact install its own market app or offer another way for customers to get new apps. The concept of getting and installing apps isn’t bound to the market app being there. In fact, I’ve always been expecting some other company or party to come along and provide an alternative app that would offer apps even to non-Google-branded devices, but obviously nobody has yet stepped up to provide that in any significant way.

Android without Java

I listened to a talk at an embedded conference recently where the speaker spent 40 minutes on why we should use Android on our embedded systems. He argued that Android was (in this context) primarily good for companies because it avoids GPL and LGPL to a large extent. He talked about using “Android” in embedded devices and cutting out everything that is Java, basically leaving only the Linux kernel and the BSD-licensed bionic libc implementation. Of course, bionic may also provide features to the rest of the system that glibc and uClibc do not, they being designed as more generic libcs.

Personally, I would never call anything shipped without the Java goo layers to be Android. But since it was suggested, I decided to play with the idea that a platform can be “Android” even without Java…

That particular license-avoiding argument was of course based on what I consider a misunderstanding. Yes, lots of companies have problems with or are downright scared of the GPL and LGPL licenses, but I’ve yet to meet a company that has any particular concerns about the licensing of the libc. I regularly meet and discuss with companies that have thoughts and worries about GPL in the kernel, and they certainly often don’t like *GPL in regular libraries that they link with in their applications. But I have yet to find a customer who is worried about the glibc or uClibc licenses.

In fact, most embedded Linux customers also happily run busybox, which is GPL, although we know from history that many companies do so in violation of the license rules, only to get the lawyers running after them.

Android is what?

As far as I know Android is a trademark of some sort, and so is Linux. If you can run an “Android” that is just a kernel and libc (and I’m not saying this is true beyond doubt, because I’ve not heard anyone authoritative say it), isn’t that then basically a very, very small difference from any normal vanilla embedded Linux?

The latter examples above are even without any kind of graphical UI or user-visible interface, meaning that particular form of “Android” can just as well run your microwave or your wifi router.

Without the cruft can we change the kernel?

The Android team decided that a bunch of changes to the Linux kernel are necessary to make Android. The changes have been debated back and forth; some of them were merged into mainline Linux only to later get backed out again, while the greater part of them never even got that far. You cannot run a full-fledged Android system on a vanilla kernel: you need the features the patches introduce.

If we’re not running all the java stuff do we still need those kernel patches? Is bionic made to assume one or more of them? That brings me to my next stepping stone along this path:

Without the patches can we change libc?

If we don’t run the java layers, do we really have to run the bionic libc? Surely the Android kernel allows another libc and if we use another libc we don’t need the Android kernel patches – unless we think they provide functionality and features that really improve our device.

Android is the new Linux?

Android as a name to describe something is already just as drained as Linux. All these Android devices are just as much Linux devices, and just as “Linux device” doesn’t really tell you anything about what a device actually is beyond what kernel it runs, neither will “Android device” in the long run, or perhaps it already doesn’t.

Android, however, has already reached some brand recognition among mortals. I think Android is perceived as something more positive in the minds of consumer electronics buyers than Linux is. Linux is that OS that nobody uses on their desktops; Android is that cool phone thing.

I will not be the slightest surprised if we start to see more traditional Linux systems call themselves Android in the future. Some of them possibly without changing a single line of code. Linux one day, Android the next. Who can tell the difference anyway? Is there a difference?

Re-evaluating the criticism

libssh2

A long while ago I posted my first version of the comparison of libssh vs libssh2. I have since then kept it updated and modified it over time. (Reminder: I am the libssh2 maintainer)

In that page, I included the performance differences I had measured which at the time showed libssh2 to be significantly faster when doing SCP operations.

The libssh guys always claimed I was wrong:

Please don’t be ridiculous. No competent network developer will take you seriously when you tell that libssh2 is 2.3 times faster that libssh.

and have even used rather harsh words when saying so.

you read this FUD page on the libssh2 website. I don’t want to start arguing here, the page is complete crap

(These two quotes are from the two leading libssh developers.)

Due to their complaints I withdrew the mention of the speed differences from the comparison page. Maybe I had done something wrong after all, and since I didn’t properly go back, verify my methods and redo everything, I decided to just take it off until I had more backing or more accurate tests.

Fast forward to current time and Mark Riordan does his extensive performance tests of various SSH/SFTP implementations. He mailed the libssh mailing list about it, and his test results are interesting. I’m including them below for easier reading, and just in case Mark’s original isn’t around as long as this.

It repeats very similar numbers to mine and shows the same speed difference that I was told cannot happen. Isn’t that funny? Am I still ridiculous?

SSH file transfer performance

The following table summarizes the performance of SSH clients.

LAN: 1 Gbit/sec. WAN: 6 Mbit down, 0.9 Mbit up.

Solaris x86 server:

Client             Client OS  Comp enable  File cmp  LAN UL MB/s  LAN DL MB/s  WAN UL MB/s  WAN DL MB/s
libssh2            Win        No           No        0.147        12.2
libssh2            Win        Yes          No
libssh2            Linux      No           No        0.82         11.8
libssh2            Linux      Yes          No
libssh 0.4.6       Win        No           No
Bitvise Tunnelier  Win        No           No        13.50        3.95
Bitvise Tunnelier  Win        Yes          No        8.541        10.2
psftp              Win        No           No        9.4          5.06 or 0.46
WS_FTP 12.3        Win        No           No        8.07         7.65
Ubuntu sftp        Linux      ?            No        29.6         11.5

Linux server:

Client             Client OS  Comp enable  File cmp  LAN UL MB/s  LAN DL MB/s  WAN UL MB/s  WAN DL MB/s
libssh2            Win        No           No        9.5          8.1          0.059        0.26
libssh2            Win        Yes          No
libssh2            Linux      No           No        7.4          7.4          0.083        0.267
libssh 0.4.6       Win        No           No        15.4         2.8          0.10         0.13
libssh 0.4.6       Linux      No           No        8.97         2.8          0.099        0.189
libssh 0.4.6       Linux      Yes          Yes       19.7         3.3
libssh latest      Win        No           No        14.1         1.38
psftp              Win        No           No        4.59         6.58         0.070        0.10
WS_FTP 12.3        Win        No           No        23.0         8.5          0.113        0.361
Bitvise Tunnelier  Win        No           No
Ubuntu sftp        Linux      No           No        16.2         6.6          0.11         0.51

What about SFTP?

It should be noted that in my original claim, and in the test above, we’re talking raw SSH speeds (as with SCP), not SFTP. SFTP has its own slew of problems, and libssh2 is in fact not very good at doing SFTP speedily yet. We have work in progress to improve this situation, but we’re not there yet. I’ll post a follow-up on SFTP speeds soonish, as things have been developing nicely there recently.

What about speeds compared to other clients?

libssh2 is not fully on par with for example openssh when it comes to raw SCP speed, but it is in the same “neighborhood”.

Getting a new look

Haxx logo

Recently we refreshed our logo design, and subsequently we’ve now also refreshed our web site to use this new look, which has already influenced how our presentations, business cards and more look and will look in the future.

When I say “we” did it, there should be little surprise that we engaged someone else to do this for us, since all of us at Haxx are quite incapable of doing designs that look tasteful.

We’re quite happy with the new look. We like the cool blue colors. More machine, less human, less colorful. This logo should also work slightly better in grayscale than before and getting rid of the border will also make it easier to use on various merchandise.

my first embedded Linux course

I’m happy to announce that I did my first ever full-day training course for eleven embedded developers on Monday, November 15th 2010. I had the pleasure of writing all the materials myself, coming up with three exercises for them and then actually standing in front of the team and delivering a complete session from 9 to 17.

I did my day as part of a three-day course, and I got to do the easy part: user-space development. My day covered the topics of: embedded Linux development introduction, how to build, autobuilding, how to run, git basics, debugging, profiling and finally some brief words on testing.

Doing stuff outside of your ordinary schedule and “comfort zone” is certainly a bit scary but also encouraging, and that’s the sort of thing that makes you grow as a person and as a professional. I mean, I know the topics by heart and pretty much without even thinking (I’ve been working with embedded systems for over 17 years!), but going from that to a decent training course is not just a coffee break’s worth of work.

I was quite happy and satisfied that I pretty much kept to the program: I managed to go through all the topics I had set out to cover, I think we had a really nice conversation going during the day, and the audience gave me really good feedback and high “grades” in the evaluation forms they filled in before they left. Of course there were flaws in the presentation, and I got some valuable ideas from my audience on how to improve it.

Now I feel like doing it again!
