All posts by Daniel Stenberg

Unexpected C64 reference

Yesterday at the OPTIMERA STHLM conference, Isac Lagerblad really surprised me when he brought up an image of me and my fellow hackers in the C64 demo group Horizon, from back in 1990. I’m the guy in the middle of the lower row.

You can see it happen at 02:35 into the part 4 clip, which is where you’ll end up if you click on the image (Isac’s talk is in Swedish).

The original picture is a very old scan and isn’t much better:

I was honored and flattered by this unexpected “tribute”. Thanks Isac, it was really fun to see!

And before anyone asks: my brother Björn and I got our first Commodore 64 in 1985, and that is what got me into computers. I haven’t stopped enjoying them since. We did a lot of demos and a few games, we had a great time, and we made like-minded friends all over the world.

My talk at Optimera Sthlm

30 minutes is a tricky period to fill with content when you do a talk, and yesterday I did my best at confusing/informing the audience at the OPTIMERA STHLM conference on transport layer performance: where time is spent or lost in TCP today, what to think about to make things behave faster, why RTT is not getting better even though bandwidth is growing really fast these days, and a little about some future technologies like WebSockets, SPDY, SCTP and MPTCP.
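To illustrate the RTT point with some rough back-of-the-envelope numbers (my own example here, not taken from the talk, and deliberately simplified):

    Fetching a 40 KB object over HTTP on a fresh connection, RTT 100 ms:

      TCP handshake:                 1 RTT  = 100 ms
      HTTP request/response:         1 RTT  = 100 ms
      slow-start ramp-up:           ~2 RTTs = 200 ms (small initial cwnd)
      actual transfer at 10 Mbit/s: 40 KB = 320 kbit ≈ 32 ms

    Doubling the bandwidth to 20 Mbit/s saves about 16 ms. Halving the
    RTT to 50 ms saves about 200 ms. The round-trips dominate.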

Note: this talk is entirely in Swedish.

My slides for this talk are also viewable on slideshare.net:

Bye bye Crystone

or, why we should give up on service providers that don’t treat us well enough.

We co-locate

We (Haxx) have a server (technically speaking we have more than one but this is about our main one that hosts most of our public stuff). This server is ours. We bought it, installed it, configured it and then we handed it over to a company that “co-locates” it for us. It means they put our hardware in their big server room and we pay them for it and for the bandwidth our server consumes.

It also means that we have less control over it and we need to call the company to get access to our machine and so on. Ok, so we’ve used Crystone for this for a long time. They’ve been cheap enough and they haven’t complained when we’ve greatly overrun our bandwidth “allowance” for many months in a row.

A bad track record

We did have concerns a while ago (August 2009 and then again in March 2010) when they had power problems in their facility and we suffered from outages and server down-times. Crystone was really bad at communicating about what had happened and what they were doing, and we started to look around for alternative providers since it was getting annoying and they didn’t seem to care for us properly. But we never really got around to actually moving, and time passed.

Maybe they had fixed their flaws and things were now fine?

A Saturday in May

Suddenly, in the early morning of Saturday May 22nd 2010, our machine stopped responding to network traffic. We didn’t find out until we woke up and tried to use our services, and after having tried a few things, we contacted Crystone to hear if the problem was theirs or ours – we’d had some troubles lately with the network interface card and we feared that the network might have stopped working due to this flaky hardware.

The customer service at Crystone immediately said that they were experiencing problems due to their move of the server park to new facilities (they moved from Liljeholmen to Hammarby, both locations within the general Stockholm area). They said they had network problems and that they were working on it. They did not give any estimate of when our machine would be back online.

They also said that they had mailed their customers about this move, and yeah, we felt a bit bad about not having noticed such a mail so that we could have been prepared.

The entire day passed. No network. Their web site mentioned problems due to this particular server move. We waited, we got no further info. We were unhappy.

Saturday becomes Sunday

How big can your problems be when the down-time for your customers exceeds 24 hours and you still haven’t fixed it, nor told them what the problems actually are? The Sunday passed and they updated their web site a few times. The last update mentioned the time 16:03 and said that “most customers” were now back online, and that if there were any remaining problems we should contact their customer service. I spotted that message a couple of hours later, when our machine still wasn’t available. And what did customer service have to say to us about it? Nothing, they were closed. Our server remained dead and inaccessible.

Monday, now beyond 50 hours

In the wee hours of Monday we passed 50 hours of offline time, and when the customer service “desk” opened in the morning and answered our phone call, they could get our machine back online. By rebooting it. No explanation on their part of why our machine was seemingly the only one that suffered this long.

A search in the mail logs also proved that Crystone never mailed us to tell us that our server would move. Isn’t that odd? (Not really, as we would find out later.)

We won’t stand it

Already during the weekend we had decided we were fed up with this complete ignorance and crappy treatment. Down-times and problems happen, but the complete lack of information and care about us – their customers – is what made it clear we were not meant to be their customers. We had to go elsewhere.

Crystone offered us a month’s fee worth of deduction on the hosting charges as compensation for the troubles we had. That was nice of them, but this service really isn’t expensive, so it’s not the cost that is burdensome. We just can’t stand having a service this unreliable and working with a company this uncommunicative.

This big server move was Crystone moving a lot of equipment over to a facility owned and run by Phonera, another ISP, and the one we happened to already have an offer from, from back when we were looking for alternatives. Handy – we thought – perhaps we could just go there and carry our server over from one shelf to another and we’d be fine. Phonera is slightly more expensive but hey, perhaps we’d get peace of mind!

“We don’t steal customers”

Phonera was at first glad to accept us as customers, but surprised us greatly when they turned around and declined us as new customers, since they claimed they don’t want to “steal” customers from Crystone (who are now themselves customers of Phonera). Baffled, we simply sent off another request to Portlane instead, and within minutes we had a decision made and a contract signed.

Later that afternoon, a Phonera guy got back to us, having changed position again, and said that perhaps we could become customers anyway. They had figured out that neither of them would gain by us going to a third company, but in any case it was now too late: we had already made up our minds about going with Portlane.

“Sir, your server is not here”

On Tuesday at 13:00, Björn (as co-admin of the server) had an appointment with Crystone to extract our server from their care and take it over to its new home. When he appeared at the new facility in Hammarby to get the server, he was in for (another) surprise. It wasn’t there. Now Crystone could inform us that our server was still left in the old facility in Liljeholmen. It had never been moved!

Glad that our business with these guys would soon be over, Björn handed over our 1U of server to Portlane, and within a short while it had found a new home, with a new IP address and a new caretaker.

We could once again take a deep breath of relief and carry on with whatever we were doing before.

Foss-sthlm on Internetdagarna

Yes, I’m very happy to say that our good friends at .SE (who run and administer the .se TLD and more) like FOSS a lot, and every October they organize what is perhaps Sweden’s biggest annual conference on internet-related matters: Internetdagarna. This year, they’ve reached out to cooperate with us – the foss-sthlm network – to arrange and hold a meeting of our own during the conference.

The foss-sthlm meeting will not be within the actual conference, but will be held just next door. We intend to keep the meeting admission-free just as before – the way we like it! I hope and think that we will be able to arrange another kick-ass meeting, and with .SE’s help we will get the arrangements done in style. I may very well end up doing a talk myself at that meeting. (We call it meeting #4 for now, but it’s by no means decided that it will actually end up being the fourth one this year.)

Let me again just mention that foss-sthlm is no formal organization; it has no leaders and no actual members. We’re all just individuals. However, I work to make things happen within the network and thus sometimes appear to “speak for” us, although in the end I of course only speak for myself, and I help arrange things that I hope others will appreciate as well.

OPTIMERA STHLM

Our friends at .SE are once again putting together an interesting conference-style day with talks, and this time the title of it is “OPTIMERA STHLM” (yes they use all caps) and it is all about optimizing web-related things.

I’ve been invited and I will do a 30 minute talk during that day about the transport layer and stuff on top of the transport layer. In other words, it’ll include things to consider for TCP, DNS, HTTP, handling sockets, libcurl and a quick look at things such as WebSockets, SPDY, MPTCP and SCTP.

The full day’s program is now available on the linked page. Enjoy!

foss-sthlm second meetup

Within the network of people we call foss-sthlm, time is closing in on our second meetup. Or “Möte #2” as we so imaginatively call it (it translates to “meeting #2”), to be held in the same great place as the first one, on May 19th 2010.

Our first get-together was an astounding success and I don’t expect us to be able to repeat that massive attention and number of attendees this time. Things still look good though: the number of attendees just reached 80 with exactly three weeks to go, so there’s still plenty of time for more people to decide.

This time we’ve lined up 5 speakers, and we’ve made a deliberate effort to not re-use any of the speakers from the first meeting, to make sure more people get the chance. Unfortunately, that means I won’t do any talk! 😉 Topics this time (for those too lazy to visit the link, or if you for any reason find Swedish hard to digest) include:

Smalltalk, FOSS legalities in Sweden, Cross platform Qt, Women in Open Source and FFmpeg.

I’m glad that we once again have sponsors that will let us use the room for free and provide sandwiches and coffee for the break without charge, so that we can keep this thing admission-free. We learned from the first meeting that we need a little break in the middle.

If you’re in the area and still haven’t signed up, please go ahead. We’ll get a full afternoon of hardcore FOSS talks, followed by the rest of the evening at a pub, talking FOSS with the Stockholm FOSS hackers that matter.

Logotype Competition

We’re also throwing out different logo alternatives on the mailing list as a kind of competition and I guess we’ll just vote for a winning contribution at some point. My personal favorite so far is probably this version created by Tommy Nevtelen:

[Image: Tommy Nevtelen’s FOSS-STHLM logo proposal]

professional libcurl hackers look this way!

In my company, Haxx, we work as consultants and we do contract development for customers who pay for our skill, time and dedication. We help them develop stuff.

We’re a small company, with basically two full-time employees. Most of our working days, we are involved with a single customer each who pays for our full-time involvement during a number of months. This is all good and fine. We love our jobs and we love our customers. We’re in it for the fun.

Now, these days we can see that the economy is slowly but surely gaining ground again and is getting up to speed. We hear more and more requests for help and potential assignments are starting to pour in. That’s great and all. Except that we’re only two guys and can’t accept very many projects…

Recently we’ve experienced a noticeable increase in the amount of requests for support and other development help involving curl and libcurl. I am the originator and maintainer of curl, so it’s really no surprise that these companies contact me and us about it. I’m always very happy to see that there are companies and persons willing to pay for support of open source, and in many cases pay for extending and bug-fixing libcurl and have those fixes go back into the mainline sources without complaints.

Since we have to turn down a lot of requests, I’m interested in finding those of you who would be interested in helping out with such work. Are you interested in helping customers with curl related problems? Customers often come to us when they’ve got stuck on something they can’t easily solve themselves, and they turn to us as experts in general, and experts on curl and libcurl in particular. And we are.

Before you decide this is a great idea and send me an email introducing yourself and your greatness in this area, please be aware that I will require proof of your qualifications. Most preferably, that proof is at least one good patch posted to the libcurl mailing list and accepted into the mainline libcurl code, but I’m open to accepting slightly less ideal proofs as well if you can just explain why you can’t provide the ideal ones. Of course you will also need to be able to communicate in English without problems. Your geographical location, gender, race, religion, skin color and shoe size are completely uninteresting.

I’m looking for someone interested in contract development, not full-time employment. We still do these kinds of jobs on a case by case basis, and there may be one every two days, one per week or sometimes even less frequently. I want to increase my network of people I know and trust to deliver quality code and services for these kinds of projects.

Can you help us?

curl and spec’ed cookie order

I just posted this on the curl-library list, but I feel it deserves a separate mention here.

As I’ve mentioned before, I’m involved in the IETF http-state working group which is working to document how cookies are used in the wild today. The idea is to create a spec that new implementations can follow and that existing implementations can use to become more interoperable.

(If you’re interested in these matters, I can only urge you to join the http-state mailing list and participate in the discussions.)

The subject of how to order cookies in outgoing HTTP Cookie: headers has been much debated over the recent months, and I’ve also blogged about it. Now the issue has been closed, and the decision went quite opposite to my standpoint: the spec will say that while servers SHOULD NOT rely on the order (yeah right, some obviously already do, and with this specified even more will soon do the same), it recommends that clients sort the cookies in a given way that is close to how current Firefox does it[*].

This has the unfortunate side-effect that to become fully compatible with how the browsers do cookies, we will need to sort our cookies a bit more than what we just recently introduced. That in itself really isn’t very hard, since once we introduced qsort() it became easy to sort on more or other keys.
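To show roughly what I mean, here’s a minimal sketch of such a qsort() comparator. The struct and its field names are made up for illustration and are not libcurl’s actual internals, and the ordering keys follow my reading of the draft (see the footnote below):

    #include <string.h>
    #include <stdlib.h>

    /* Hypothetical cookie record for illustration only; these are not
       libcurl's real internals. */
    struct cookie {
      char *domain;
      char *path;
      long creation; /* lower value = created earlier */
    };

    /* qsort() comparator: longest path first, then longest domain,
       then oldest creation time first. */
    static int cookie_cmp(const void *a, const void *b)
    {
      const struct cookie *c1 = a;
      const struct cookie *c2 = b;
      size_t pl1 = c1->path ? strlen(c1->path) : 0;
      size_t pl2 = c2->path ? strlen(c2->path) : 0;
      size_t dl1 = c1->domain ? strlen(c1->domain) : 0;
      size_t dl2 = c2->domain ? strlen(c2->domain) : 0;

      if(pl1 != pl2)
        return pl2 > pl1 ? 1 : -1; /* longer path sorts first */
      if(dl1 != dl2)
        return dl2 > dl1 ? 1 : -1; /* longer domain sorts first */
      if(c1->creation != c2->creation)
        return c1->creation > c2->creation ? 1 : -1; /* older first */
      return 0;
    }

    /* usage: qsort(cookies, num, sizeof(struct cookie), cookie_cmp); */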

The biggest problem we get with this is that the sorting uses the creation time of the cookies. libcurl and curl and others mostly use the Netscape cookie file format to store cookies and keep state between invocations, and that file format doesn’t include creation time info! It is a simple text-based file format with TAB-separated columns, where the last (7th) column is the cookie’s content.
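For reference, a line in that file format looks something like this (the seven TAB-separated columns being domain, subdomain flag, path, secure flag, expiry time, name and content):

    .example.com	TRUE	/	FALSE	2345678	name	value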

In order to support the correct sorting between sessions, we need to invent a way to pass on the creation time. My thinking is that we do this in a way that allows older libcurls to still understand the file and simply not see/understand the creation time, while newer versions will be able to get it. This would be possible by extending the expires field (the 6th), as it is a numerical value that the existing code parses as a number, stopping at the first non-digit character. We could easily add a separator character and store the creation time after it. Like:

Old expire time:

2345678

New expire+creation time:

2345678/1234567
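Here’s a small sketch of how a parser could read both variants, assuming we pick ‘/’ as the separator. The function name is made up; the point is that old strtol()-style parsing stops at the ‘/’ and simply never sees the new field:

    #include <stdlib.h>

    /* Parse the (possibly extended) expires column. Old readers that
       stop at the first non-digit still get the expiry; new readers
       continue past the '/' to pick up the creation time as well.
       Purely illustrative. */
    static void parse_expires(const char *field, long *expires,
                              long *creation)
    {
      char *end;
      *expires = strtol(field, &end, 10); /* stops at first non-digit */
      *creation = 0;                      /* unknown in old-style files */
      if(*end == '/')
        *creation = strtol(end + 1, NULL, 10);
    }

    /* parse_expires("2345678", ...)         => expires 2345678, creation 0
       parse_expires("2345678/1234567", ...) => expires 2345678, creation 1234567 */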

This format might even keep working with other readers of this file format if they make similar assumptions about the data, but the truth is that while we originally picked the format to be able to exchange cookies with a well known and well used browser, no current browser uses that format anymore. I assume a bunch of other tools still do, like wget and friends.

Update: as my friend Micah Cowan explains, we can in fact use the order of the cookies within the file as a “creation time” hint and use that as the basis for sorting. Then we don’t need to modify the file format at all. We just need to make sure to store the cookies in time order… Internally we will need to keep a line number or something per cookie so that we can use that for sorting.
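A sketch of that idea, with a made-up helper function; the point is just that the line number doubles as the creation-order key, so the file format itself stays untouched:

    #include <stdio.h>

    /* Minimal cookie record, matching the comparator sketch above. */
    struct cookie {
      long creation;
      /* ...domain, path, name, value... */
    };

    /* Hypothetical line parser; returns NULL for comments and blanks. */
    struct cookie *parse_cookie_line(const char *line);

    /* Tag each cookie with its line number while loading, so that file
       order can serve as creation order when we later sort. */
    static void load_cookies(FILE *fp)
    {
      char buf[4096];
      long order = 0;
      while(fgets(buf, sizeof(buf), fp)) {
        struct cookie *c = parse_cookie_line(buf); /* hypothetical */
        if(c)
          c->creation = order++; /* file order == creation order */
      }
    }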

[*] – I believe it sorts on path length, domain length and time of creation, but as soon as the -06 draft goes online it will be easy to read the exact phrasing. The existing -05 draft exists at: http://tools.ietf.org/html/draft-ietf-httpstate-cookie-05

Apple – only 391 days behind

In the curl project, we take security seriously. We work hard to make sure we don’t open up for security problems of any kind, and when we fail, we work hard at analyzing the problem and coming up with a proper fix as swiftly as possible, to leave our “customers” vulnerable for as short a time as possible.

Recently I’ve been surprised and slightly shocked by the fact that a lot of open source operating systems didn’t release any security upgrades for our most recent security flaw until well over a month after we first publicized it. I’m not sure why they all reacted so slowly. Possibly it is because vendor-sec isn’t quite working, as it was informed prior to the public notification, and of course I don’t really expect many security people to be subscribed to the curl mailing lists. Slow distros include Debian and Mandriva, while Redhat did great.

Today however, I got a mail from Apple (and no, I don’t know why they send these mails to me, but I guess they think I need them or something) with the subject “APPLE-SA-2010-03-29-1 Security Update 2010-002 / Mac OS X v10.6.3”. Aha, you might think, did Apple now finally update their curl version too?

They did. But they did not fix this problem. They fixed two previous problems, universally known as CVE-2009-0037 and CVE-2009-2417. Look at the date of that first one: March 3, 2009. Yes, a whopping 391 days after the problem was first made public, Apple sends out the security update. Cool. At least they eventually fixed the problem…

An FTP hash command

Anthony Bryan strikes again. This time his name is attached to a new standards draft for how to get a hash checksum of a given file when using the FTP protocol. draft-bryan-ftp-hash-00 was published just a few days ago.

The idea is basically to introduce a spec for a new command named ‘HASH’ that a client can issue to a server to get a hash checksum for a given file, in order to know that the file has the exact same contents you want before you even start downloading it or otherwise act on it.

The spec details how you can ask for different hash algorithms, how the server announces its support for this in its FEAT response etc.
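To give a feel for the concept, here is a rough sketch of what such an exchange could look like. Treat the details (reply codes, exact response syntax, the algorithm list) as my illustrative assumptions rather than quotes from the draft:

    C> FEAT
    S> 211-Extensions supported:
    S>  HASH SHA-256*;SHA-1;MD5
    S> 211 End
    C> OPTS HASH SHA-1
    S> 200 SHA-1
    C> HASH example.iso
    S> 213 SHA-1 3b00...e9a1 example.iso
    C> RETR example.iso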

I’ve already provided some initial feedback on this draft, and I’ll try to assist Anthony a bit more to get this draft pushed onwards.