Haxx gets Linus over to the good side

Linus Nielsen Feltzing and I founded Haxx a long time ago, so it is extra fun to welcome him to join me and Björn in working full-time for our small but already skill-packed company. Starting this December, Linus will do all his consultancy and contract work wearing his Haxx hat and no other. Employee number three.

He joins us from the same consultancy company I was employed by before (and that Björn worked for too a while ago). With this addition, Haxx now has three full-time consultants with more than 20 years of experience each in software development and embedded systems. We have long and thorough experience with Linux and networks, in embedded as well as in larger systems.

Björn and I originally got to know Linus back in 1988 when we visited a “copy-party” in Alvesta, Sweden. There we (the C64 demo group named Horizon) competed against the other teams in the demo competition. We won the competition with our demo “Love This Now” while the fellows in Microsystems Digital Technology (MDT) came in second place with their “Bonanza”.

MDT consisted of two persons, and one of them was Linus.

After Alvesta, Linus and Jörgen (the other MDT member) joined Horizon and we’ve known each other since. We’ve worked at the same companies since sometime in the late 90s, up until the day last year when I started working full-time for Haxx.

Linus is a hardcore embedded developer, working close to hardware and the OS, writing primarily C and assembler code. He has worked a lot with various RTOSes and Linux.

Linus is also known as one of the founders of the Rockbox project together with me and Björn.

My projects will never be GNU

I maintain a bunch of projects and at times I think about joining some kind of umbrella organization to find a foster family for them: an organization that’s bigger than any single project and that could possibly be helpful in a lot of ways.

One large and famous such umbrella project for free and open software is GNU, GNU’s Not Unix. To submit your software to GNU, there is a set of rules you need to obey, and here are my reasons why the projects I maintain most likely will never become GNU projects:

  • GNU programs should come with documentation in Texinfo format – Oh man, so we need to provide our software’s docs in an inferior documentation format just to be GNU? It doesn’t make sense. And of course, info also sucks.
  • A GNU program should use the latest version of the license that the GNU Project recommends—not just any free software license. For most packages, this means using the GNU GPL – for many existing projects the selected license was carefully chosen, and if the project has existed for a while, changing the license is not an easy task. I would also consider it out of the question for many projects. A true stopping requirement for most of my projects.
  • the documentation files and comments in the program should speak of GNU/Linux systems, rather than calling the whole system “Linux”, and should use the term “free software” rather than “open source”. – blah. I often speak of “open source” and I like the term “Linux” because of its simplicity and it being easier to pronounce than GNU/Linux.

All in all, this just proves that I don’t share the religious and strong philosophical views on life and everything that the GNU people possess.

I’m quite simply not a GNU person. I sympathize with their general goals and I know and support a lot of GNU hackers and projects. I just can’t make my projects join theirs.

Linuxträff 2010

I’ll be brief:

On Software Freedom Day 2010 (September 18th), the guys in “The Swedish Linux Association” (Svenska Linuxföreningen) are organizing a day with talks and presentations about Linux and FOSS-related subjects, which they call Linuxträff 2010. It takes place in Stockholm city, Sweden.

At that event, in the 11:00 – 12:00 time slot, you will be able to see and hear me do a little talk about Rockbox and reverse engineering to get free software on consumer electronics.

See you there!


What do you see in a future curl?

During the first few years of the cURL project I used to sum up, at every anniversary, something about the past year and what I expected of the year to come. I stopped that at some point as I felt I didn’t have much new to contribute; it mostly got repetitive over the years. I’ve never been the guy with his eyes fixed on the horizon and a grand plan of what to do in the future or work towards on a long-term basis. I’m more the guy with my feet firmly on the ground, solving today’s problems right now as well as possible so that things stay stable and fine tomorrow.

So here follow some thoughts and some reasoning around stuff that may or may not be the future of curl and libcurl. I’m as always putting my trust in you, my friends, to help me out with the details…

Protocols

curl’s wide protocol support has surprised even me. While I think the number of protocols that can be supported without bending over backwards far too much is shrinking rapidly, we’ve also learned that there are always more protocols that can be supported and that people would like to use with curl and libcurl. I expect more to come.

Specific protocols that are on the watch list include:

  • Gopher – I just had to mention it because curl doesn’t currently support it. It was there once but got yanked out when we found out it didn’t work and hadn’t done so for a very long time without anyone noticing! The support is about to come back though, thanks to a patch being worked on right now.
  • SPDY is Google’s experimental HTTP replacement protocol which, if it turns out successful, seems like a very important piece for curl to grok.
  • Websockets will come to stay in your browser, and it will happen soon. With more and more sites switching to Websockets for parts of their functionality, I believe at least some level of support might be required for curl to remain a really powerful web scraping tool in the future.
  • SCTP. A long shot. This transport layer protocol was standardized in 2007 but has not really taken off in any significant way, even though it features a lot of benefits compared to TCP in many aspects. The now dead HTTP over SCTP internet-draft shows that HTTP can indeed be moved over to use it, and it would solve a bunch of the same problems that SPDY does (and MPTCP). SCTP also has its own share of problems that hamper its adoption, primarily the lack of support in middle-boxes like NATs and routers.

Do you think there’s any other particular protocol we should support in the future?

Stability

I’m not sure curl is particularly unstable. I don’t think it suffers from any unusual amount of bugs or anything, but I figure a decent thing to consider for the future is whether we do things within the project in a good way. Do we use the proper procedures so that we reach stability and produce a good product? Should we fix, change or replace certain ways or practices so that our output gets fewer flaws?

I suspect these questions are hard to answer for someone as involved and deep inside the project as myself. Possibly it is also hard for someone coming from the outside to convince us old-timers about anything like that…

Interface

The current libcurl API was basically introduced when libcurl was first made a reality back in the year 2000. While the multi interface and the subsequent multi_socket API were added later on, they are both heavily influenced and affected by the previous easy interface.
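For the curious, here is a minimal sketch of what that classic easy interface looks like in use (the URL is just a placeholder):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();             /* one handle, one transfer context */
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    CURLcode res = curl_easy_perform(curl);  /* blocks until the transfer is done */
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }
  return 0;
}
```

The multi and multi_socket interfaces drive a set of such easy handles, which is part of why the old design shines through everywhere.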

Maybe there’s a much better API we should have, one that would make it easier to make a better library and easier to write better applications using it? It would require some serious brainstorming to come up with something, or perhaps there’s someone out there who has already thought something out?

Contributors

We’re but a few people who push commits to the master branch. How should we proceed to attract more? Should we work differently, perhaps have sub-maintainers for parts of the code etc? Can we work differently or make anything better to encourage more people to send bug reports and/or patches?

We do in fact have a very low barrier to entry already, so I’m not aware of many additional things we can carve out to streamline this further.

Sponsors

As my friends know, I always feel a little bit envious of projects and developers who manage to get corporate sponsors or otherwise enough paid support contracts to be able to work on their pet projects full-time or at least part-time. I certainly have never reached that level for more than just a few months at a time. I’m not sure this is very important, as lack of funding doesn’t stop us, it just slows us down, and really, in an open source project like ours there aren’t many hard deadlines. If it doesn’t get done now, it’ll get done later if enough people want it to happen.

Getting funding isn’t always easy either, since a company may very well appear that wants to pay for a feature to get added, while we don’t agree that the feature is suitable for the project…

Organization

At times I think of the long-term future of our little project. Like what’s gonna happen the day I want to give up my chieftainship, or if a company would like to step up and sponsor the project but won’t be able to because there’s no actual legal entity behind the project for them to sponsor or pay.

Would it make sense to have a curl organization? We could (I presume) easily join an umbrella organization like the ASF, or perhaps, even more suitably, the Software Freedom Conservancy. I will certainly not advocate setting up our own non-profit or anything like that.

I don’t think the FSF or GNU will ever be a serious alternative, since a pretty solid design goal of mine has been to avoid the *GPL licenses for curl to keep it attractive for commercial interests to a higher degree than (L)GPL-licensed projects are. (This is not a license debate, so please don’t lecture me about the existence of GPL’ed commercial stuff.) I will not change license.

Feedback

One thing I’m sure of, though, is that we will continue to listen to our users and the general curl and libcurl community about what to work on, what isn’t good, and what we should do next. Please tell me your opinions and views on these matters!

libssh2 release again

We’ve mostly been fixing bugs and making things look better internally in the libssh2 source code during recent months, so the new release I just uploaded, version 1.2.7, isn’t really all that exciting for outsiders. Existing users should however be fairly happy, as we’ve addressed a fair bunch of bugs, some of which have been annoying us in the project for a long time.

I’m convinced this is the best libssh2 release we’ve ever made.

The list of bug-fixes include these:

  • Better handling of invalid key files
  • inputchecks: make lots of API functions check for NULL pointers
  • libssh2_session_callback_set: extended the man page
  • SFTP: limit write() to not produce overly large packets
  • agent: make libssh2_agent_userauth() work blocking properly
  • _libssh2_userauth_publickey: reject method names longer than the data
  • channel_free: ignore problems with channel_close()
  • typedef: make ssize_t get typedef without LIBSSH2_WIN32
  • _libssh2_wait_socket: poll needs milliseconds
  • libssh2_wait_socket: reset error code to “leak” EAGAIN less
  • Added include for sys/select.h to get fd_set on some platforms
  • session_free: free more data to avoid memory leaks
  • openssl: make use of the EVP interface
  • Fix underscore typo for 64-bit printf format specifiers on Windows
  • Make libssh2_debug() create a correctly terminated string
  • userauth_hostbased_fromfile: packet length too short
  • handshake: Compression enabled at the wrong time
  • Don’t overflow MD5 server hostkey

If you find other bugs or have patches, just bring them all to us!

Websockets right now

Lots of sites today have JavaScript running that connects to the site and keeps the connection open for a long time (or just does very frequent checks for updated info to get). This happens to such a wide extent that people have started working on a better way: a way for scripts in browsers to connect to a site in a TCP-like manner to exchange messages back and forth, something that doesn’t suffer from the problems HTTP presents when (ab)used for this purpose.

Websockets is the name of the technology that has been lifted out of the HTML5 spec, taken to the IETF and is being worked on there to produce a network protocol that is basically a message-based, low-level protocol over TCP, designed to allow browsers to do long-lived connections to servers instead of using “long-polling HTTP”, Ajax polling or other more or less ugly tricks.

Unfortunately, the name is used for both the on-the-wire protocol as well as the JavaScript API so there’s room for confusion when you read or hear this term, but I’m completely clueless about JavaScript so you can rest assured that I will only refer to the actual network protocol when I speak of Websockets!

I’m tracking the Hybi mailing list closely, and in this post I’m summarizing some of the issues that have been debated lately. It is not an attempt to cover it all. There is a lot more to be said, both now and in the future.

The first version of Websockets implemented by a browser (Chrome) was the -75 version, written by Ian Hickson as editor and the main man behind it, as it came directly from the WHATWG team (the group of browser people that work on the HTML5 spec).

He also published a -76 version (that was implemented by Firefox) before it was taken over by the IETF’s Hybi working group, who published that same document as its -00 draft.

There are representatives of a lot of server software and of browsers like Opera, Firefox, Safari and Chrome on the list (no Internet Explorer people seem to be around).

Some Problems

The 76/00 version broke compatibility with the previous version, and it also isn’t properly HTTP compatible, which is something wanted and needed by many.

The way Ian and the WHATWG have previously worked on this (and the HTML5 spec?) is mostly by Ian leading and being the benevolent dictator of what’s good, what’s right and what goes into the spec. The collision with the IETF’s way of working, which includes “everyone can have a say” and “rough consensus”, has been harsh and a reason for a lot of arguments and long threads on the hybi mailing list, and I will throw out a guess that it will continue to be a source of “flames” in the future.

Ian maintains – for example – that a hard requirement on the protocol is that it needs to be “trivial to implement for amateurs”, while basically everyone else has come to the conclusion that this requirement is mostly silly and should be reworded to “be simple” or similar, something that avoids the word “amateur” completely as well as the implication that amateurs wouldn’t be able to follow specs.

Right now

(This is early August 2010, things and facts in this protocol are likely to change, potentially a lot, over time so if you read this later on you need to take the date of this writing into account.)

There was a Hybi meeting at the recent IETF 78 meeting in Maastricht where a range of issues got discussed and some of the controversial subjects did actually get quite convincing consensus – and some of those didn’t at all match the previous requirements or the existing -00 spec.

Almost in parallel, Ian suggested a schedule for the work ahead that most people expressed a liking for: provide a version of the protocol that can be done in 4 weeks for browsers to adjust to right now, and then work on an updated version to be shipped in 6 months with more of the unknowns detailed.

There’s a very lively debate going on about the need for chunking and the need for multiplexing multiple streams over a single TCP connection, and there was a very, very long debate going on regarding whether the protocol should send data length-prefixed or whether data should use a sentinel that marks the end of it. The length-prefix approach (favored by yours truly) seems to have finally won. Exactly how the framing is to be done, which parts are going to be core protocol and what to leave for extensions, and how extensions should work are not yet settled. I think everyone wants the protocol to be “as simple as possible”, but everyone has their own mind set up on what amount of features “the simplest” possible form of the protocol has.
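To illustrate what that debate is about, here is a purely conceptual sketch of the two framing styles; neither byte layout is the actual Websockets wire format, which was still unsettled at the time of writing:

```c
/* Length-prefixed framing: the receiver knows up front how many bytes to
   read, so arbitrary binary payloads need no escaping. */
static const unsigned char length_prefixed[] = {
  0x00, 0x05,                  /* payload length: 5 bytes */
  'h', 'e', 'l', 'l', 'o'      /* payload */
};

/* Sentinel-delimited framing: a reserved byte marks the end of the frame,
   so the payload must never contain that byte unescaped. */
static const unsigned char sentinel_delimited[] = {
  0x00,                        /* frame start */
  'h', 'e', 'l', 'l', 'o',     /* payload */
  0xff                         /* sentinel: end of frame */
};
```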

The subject of exactly how the handshake is going to be done isn’t quite agreed on either, by my reading, even if there’s a way defined in the -00 spec. The primary connection method will be using an HTTP Upgrade: header, thus connecting to the server’s port 80 and then upgrading the protocol from HTTP over to Websockets, a procedure that is documented and supported by HTTP. There has however been a discussion on how exactly that should be done so that cross-protocol requests can be avoided to the furthest possible extent.
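Stripped of the security and key fields (which are exactly the parts still being argued about), the Upgrade dance looks roughly like this illustrative sketch:

```
GET /demo HTTP/1.1
Host: example.com
Upgrade: WebSocket
Connection: Upgrade

HTTP/1.1 101 Switching Protocols
Upgrade: WebSocket
Connection: Upgrade
```

After the 101 response, the TCP connection stops speaking HTTP and the Websockets framing takes over.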

We’ve seen upwards of 50 mails per day on the hybi mailing list during some of the busiest days of the last couple of weeks. I don’t see any signs of it cooling down short-term, so I think the 4-week goal seems a bit too ambitious at this point, but I’m not ruling it out. Especially since the working procedure, with Ian at the wheel, is not “IETF-ish”, yet it’s not controlled entirely by Ian either.

Future

We are likely to see a future world with a lot of client-side applications connecting back to the server. The number of TCP connections for a single browser is likely to increase, a lot. In fact, even a single web application within a single tab is likely to make multiple websocket connections as it reuses components and widgets from several others.

It is also likely that libcurl will get a websocket implementation in the future to do raw websocket transfers with, as it will most likely be needed in the future to properly emulate a browser/client.

I hope to keep tracking the development, occasionally express my own opinion on the matter (on the list and here), but mostly stay on top of what’s going on so that I can feed that knowledge to my friends and make sure that the curl project keeps up with the times.

[Image: chinese-socket]

(The pictured socket might not be a websocket, but it is a pretty remarkably designed power socket that is commonly seen in China, and as you can see it accepts and works with many of the world’s different power plug designs.)

curl performance

Benchmarks, speed, comparisons, performance. I get a lot of questions on how curl and libcurl compare against other tools or libraries, and I rarely have any specific answers as I personally basically never use or test any other tools or libraries!

This text will instead elaborate on how we work on libcurl and why I believe libcurl will remain the fastest alternative.


libcurl is low-level

libcurl is written in C and uses the native function calls of the operating system to perform its network operations. It offers a lot of features, but when it comes to plain sending and receiving of data the code paths are very short and the loops can’t be shortened or sped up by any significant amount. Based on these facts, I am confident that for simple single-stream transfers you really cannot write a file transfer library that runs faster. (But yes, I believe other similarly low-level style libraries can reach the same speeds.)

When adding more complicated test cases, like doing SSL or perhaps many connections that need to be kept persistent between transfers, libraries can of course start to differ. libcurl uses SSL libraries natively, so if they are fast, libcurl’s SSL handling will be too, and vice versa. Of course we also strive to provide features such as connection pooling, SSL session-id reuse, DNS caching and more to make the normal and frequent use cases as fast as we possibly can. What takes time when using libcurl should be the underlying network operations, not the tricks libcurl adds to them.
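As a concrete example of the connection pooling mentioned above: simply reusing the same easy handle lets libcurl keep the connection alive between transfers. A minimal sketch, with placeholder URLs:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/first");
    curl_easy_perform(curl);   /* sets up the connection */

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/second");
    curl_easy_perform(curl);   /* same host, so the cached connection is reused */

    curl_easy_cleanup(curl);   /* closes the connections held in the pool */
  }
  return 0;
}
```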

event-based is the way to grow

If you plan on making an app that uses more than just a few connections, libcurl can of course still do the heavy lifting for you. You should consider taking precautions already at design time and make sure that you can use an event-based concept, avoiding reliance on select or poll for the socket handling. Using libcurl’s multi_socket API, you can go up to and beyond tens of thousands of connections and still reach maximum performance. And this with basically all of the protocols libcurl supports; it is not limited to a small subset. (There are unfortunate exceptions, like for example “file://” URLs, but there are purely technical reasons for those.)
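The skeleton of a multi_socket application looks roughly like this. It is only a sketch: watch_socket() and start_timer() stand in for whatever event library you plug in and are not libcurl functions.

```c
#include <curl/curl.h>

/* libcurl calls this to tell us which socket to watch, and for what
   (CURL_POLL_IN, CURL_POLL_OUT, CURL_POLL_INOUT or CURL_POLL_REMOVE). */
static int socket_cb(CURL *easy, curl_socket_t s, int what,
                     void *userp, void *socketp)
{
  /* watch_socket(s, what);  hand the socket over to the event library */
  return 0;
}

/* libcurl calls this to ask to be woken up after a timeout even if there
   is no socket activity. */
static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
{
  /* start_timer(timeout_ms);  on expiry, call curl_multi_socket_action()
     with CURL_SOCKET_TIMEOUT as the socket argument */
  return 0;
}

int main(void)
{
  int running = 0;
  CURLM *multi = curl_multi_init();
  curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
  curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);

  /* add any number of easy handles with curl_multi_add_handle(), then let
     the event loop drive: whenever a watched socket gets readable or
     writable, call curl_multi_socket_action() for that socket. */
  curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);

  curl_multi_cleanup(multi);
  return 0;
}
```

The point is that libcurl never scans all the connections; it only acts on the single socket the event library reports, which is what keeps the cost per event low even with tens of thousands of connections.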

Very few file transfer libraries have this direct support for event-based operations. I’ve read reports of apps that have gone up to and beyond 70,000 connections on the same host using libcurl like this. The fact that TCP only has a 16-bit field in the protocol header for the “source port” of course forces users who want to try this stunt to use more than one interface as the source address.

And before you ask: you cannot grow a client to that number of connections using any technique other than event-based handling of many connections in the same thread, as basically no other approach scales as well.

When handling very many connections, the mere “juggling” of the connections takes time and can be done in good or bad ways. It would be interesting to one day measure exactly how good libcurl is at this.

Binding Benchmarks and Comparisons

We are aware of something like 40 different bindings for libcurl, making it possible to use from just about any language you like. Most languages, if not every one, also tend to have their own native transfer or at least HTTP library. For many languages, the native version is the one that is most preferred, most used in books and articles, and most promoted on the internet. Rarely can the native versions compete with the libcurl-based ones in actual transfer performance, because of what I mentioned above: libcurl does little extra when transferring stuff beyond the actual raw transfer. Of course there still needs to be additional glue and logic to make libcurl work well with the different languages’ own unique environments, but it has often been proven that this doesn’t make the speed gain get lost or become invisible. I’ll illustrate below with some sample environments.

Ruby comparisons

Paul Dix is a Ruby guy and he’s done a lot of work with HTTP libraries and Ruby, and he’s also done some benchmarks on libcurl-based Ruby libraries. They show that the tools built on top of libcurl run significantly faster than the native versions.

Perl comparisons

“Ivan” wrote up a benchmarking script that performs a number of transfers using three different mechanisms available to Perl hackers, one of them being the official libcurl Perl binding (WWW::Curl) and another the Perl standard one called LWP. The results leave no room for doubt: the libcurl-based version is significantly faster than the “native” alternatives.

PHP comparisons

The PHP binding for libcurl, PHP/CURL, is a popular one. In PHP the situation is possibly a bit different, as they don’t have a native library that is nearly as feature-complete as the libcurl binding, but they do have native functionality for doing things like getting HTTP data etc. This functionality has been compared against PHP/CURL many times, for example in Ricky’s comparisons and Alix Axel’s comparisons. They all show that the libcurl-based alternative is faster. Exactly how much faster of course depends on a lot of factors, but I’m not going into such specific details here and now.

We miss more benchmarks!

I wish I knew about more benchmarks and comparisons of speed. If you know of others, or if you get inspired enough to write up and publish any after reading my rant here, please let me know! Not only is it fun and ego-boosting to see our project win, but I also want to learn from them and see where we’re lacking, and if anyone beats us in a test, it’d be great to see what we could do to improve.