Tag Archives: cURL and libcurl

I’ll talk at FSCONS 2010

Recently I was informed that I got two talks accepted to the FSCONS 2010 conference, to be held in the beginning of November 2010.

My talks will be about the future and current state of internet transport protocols (TCP, HTTP, SPDY, WebSockets, SCTP and more) and about high performance multi-protocol applications with libcurl, where I will show the audience how to use libcurl to write high performance clients with a potentially very large number of simultaneous transfers. A somewhat clueful reader will of course spot that these two talks have a lot in common, and yeah, they do reveal a lot of what I do, what I like and what I poke at these days. I hope I’ll be able to shed some light on things not everyone is already perfectly aware of.

The talks will be held in English, and if past FSCONS conferences are anything to go by, my talks will be filmed and made available online afterwards, for the world to see in case you have a funeral or something else to attend that prevents you from being there in person.

If you have thoughts, questions or anything on these topics that you would like to get answered in my talk, feel free to bring them up and I’ll see what I can do.

(If those fine guys and gals at FSCONS ever settled for a logo, or had one I could link to, I would’ve shown one of them right here.)

C-ares, now and ahead!

The project c-ares started many years ago (March 2004) when I decided to fork the existing ares project to get the changes done that I deemed necessary – and the original project owner didn’t want them.

I did my original work on c-ares back then primarily to get a good asynchronous name resolver for libcurl, so that we could get around the limitation of having to do name resolves completely synchronously as the libc interfaces mandate. Of course, c-ares was and is more than just name resolving, and not too surprisingly, other projects have since popped up that now use c-ares.

I maintain a bunch of open source projects, and c-ares was never one that I felt a lot of love for. It was mostly a project that I needed to get done, and when things worked the way I wanted I found that I had ended up as maintainer of yet another project. I’ve repeatedly mentioned on the c-ares mailing list that I don’t really have time to maintain it and that I’d rather step down and let someone else “take over”.

After having said this for over four years, I’ve come to accept that even though c-ares has many users out there, and even seems to be appreciated by companies and open source projects, there just isn’t any particularly big desire to help out with the project. I find it very hard to just “give up” a functional project, so I linger and do my best to give it the effort and love it needs. I very much need and want help to maintain and develop c-ares. I’m not doing a very good job with it right now.

Threaded name resolving competes

I once thought we would be able to make c-ares capable of becoming a true drop-in replacement for the native system name resolver functions, but over the years with c-ares I’ve learned that the dusty corners of name resolving in unix and Linux hold so many features and so much fancy stuff that c-ares is still a long way from that. It has also made me turn around somewhat, and I’ve come to think that perhaps using a threaded native resolver is the better way for libcurl to do asynchronous name resolves. That way we don’t need any half-baked reimplementation of the resolver. Of course it comes at the price of a new thread for each name resolve, which turns really nasty if you grow the number of connections just a tad bit, but then most libcurl-using applications today hardly use more than a few (say less than a hundred) simultaneous transfers anyway.
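
As an aside, an application can check at runtime whether the libcurl it links against was built with any asynchronous resolver backend at all (c-ares or the threaded one), using the regular curl_version_info() call. A minimal sketch:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

      /* CURL_VERSION_ASYNCHDNS is set when libcurl was built with an
         asynchronous resolver backend */
      if(info->features & CURL_VERSION_ASYNCHDNS)
        printf("this libcurl resolves names asynchronously\n");
      else
        printf("this libcurl resolves names synchronously (blocking)\n");
      return 0;
    }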

Future!

I don’t think the future holds any radical changes or drastically new stuff in the pipe for c-ares. I think we should keep polishing off bugs and adding the small functions and features that we’re missing. I believe we’re not yet parsing all the records we could into a convenient format.

As usual, a project is not about how much we can add but about how much we can avoid adding and how true we can remain to our core objectives. I hope the growing popularity will make more people join the project, and then not only throw a single patch at us but also hang around for a while and help us somewhat more.

Hopefully we will one day be able to use c-ares instead of a typical libc-based name resolver and yet resolve the same names.

Join us and help us give c-ares a better future!


Webscrapers unite!

The term web scraping has come to stay. It refers to getting stuff off web sites in an automated fashion. Other terms occasionally used for it include spidering, web harvesting and more. I’ve used the term HTTP scripting for it myself.

The art is in many cases to act more or less like a browser, with the single purpose of making the server not detect that there is a program at the other end.
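
Just as a rough illustration of what “acting like a browser” tends to boil down to in libcurl terms, here is a minimal sketch that sets a browser-looking User-Agent, a Referer and enables the cookie engine. The URL and the header values are of course only placeholders:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* placeholder URL and header values, adjust to the site you scrape */
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/page.html");
        curl_easy_setopt(curl, CURLOPT_USERAGENT,
                         "Mozilla/5.0 (Windows; U; Windows NT 5.1) Firefox/3.6");
        curl_easy_setopt(curl, CURLOPT_REFERER, "http://example.com/");
        /* read and write cookies like a browser would */
        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "cookies.txt");
        curl_easy_setopt(curl, CURLOPT_COOKIEJAR, "cookies.txt");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }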

This is something I’ve done a lot over the years while developing curl, and I’ve answered countless questions on the matter. There are also several good books on the subject (all of them covering curl at least in part).

Today we’ve set up a little web site over at webscrapers.haxx.se and an associated mailing list, in an attempt to gather people who work in this field. Whether you do it for fun or profit doesn’t matter, and while we can certainly discuss what tools you should or shouldn’t use for scraping, we don’t limit which ones may be discussed on the list as long as the subject is somewhat relevant.

So, if you’re into scraping or you want to learn more on how to automate your web bots, come on over and join the list and let’s see where this might take us!

curl vs libcurl

In my mini-series of articles A vs B, the time is up for curl vs libcurl.

To me, the differences are very clear and obvious, but I get a fair stream of questions from users and random people, so I thought it was about time to make an effort to once and for all put up a page with the facts stated. A fixed home for curl vs libcurl knowledge.

So I did. And now I’ve mentioned it to you. Enjoy! If you have additional content you think belongs there, or if you think anything is unclear or wrong, don’t hesitate to let me know!


Daniel’s currency exchange is no more

For quite a number of years I maintained a little web service that provided currency exchange rates in a handy format and in a way that was friendly for machines and other machine-exchangers. My personal favorite feature was the “easy conversion” helper that would provide an “easy to calculate in your head” formula for going back and forth between two currencies based on their current rates. Like “multiply by 5 and divide by 2”, etc.

This service goes all the way back to 1997, when I started to work on getting exchange rates downloaded as a service for the IRC bot I ran in #amiga on efnet (even before the split when ircnet was created). Back then I was primarily working on the IRC bot named Dancer. In 1997 I also started the work on a tool to fetch the rates. That tool would become curl, and the web site to access the rates was initially hosted by the company Frontec, for which I worked back then.

The URL changed a few more times but it has been available at http://daniel.haxx.se/currency for the last few years until a few weeks ago. Well, technically the URL still works but the service does not.

So a few weeks ago the primary site I’ve scraped for this info changed their format, and I decided to not play cat and mouse anymore. I was already bending the rules by not reading their terms of service, as I feared I wouldn’t be allowed to use their data like this. Also, I really don’t have any use for this service myself, so I decided to do myself a service and stop wasting spare time on one of those projects that don’t give me enough personal satisfaction. I’m sure that if there is a demand for the service I’ve now closed down, there will be someone else out there ready to fire one up and serve users.

So long, and thanks for all the currency exchange fun.

roffit lives!

Many moons ago I created a little tool I named roffit. It is just a tiny perl script that converts a man page written in nroff format into good-looking HTML. I should perhaps also add that I didn’t find any decent alternatives at the time, so I wrote up my own version. I’ve been using it ever since in projects such as curl, c-ares and libssh2 to produce web versions of the docs.

It has just done its job and I haven’t had any need to fiddle with it. The project page lists it as last modified in 2004, even though I actually moved it to a sourceforge CVS repo back in 2007.

Just a few days ago, I got an email notifying me that Debian has it included as a package in the distribution and that someone was annoyed by some particular flaws.

This resulted in a bunch of bugs getting submitted to the Debian bug tracker, I started up the brand new roffit-devel mailing list to make it easier to host roffit discussions, and I switched the CVS repo over to a git one on github.

If you like seeing man pages turned into web pages, consider joining up and help us improve this thing!

My talk at Optimera Sthlm

30 minutes is a tricky period to fill with content when you give a talk, and yesterday I did my best at confusing/informing the audience at the OPTIMERA STHLM conference on transport layer performance: where time is spent or lost today in TCP, what to think about to get things to behave faster, the fact that RTT is not getting better even though bandwidth is growing really fast these days, and a little about some future technologies like WebSockets, SPDY, SCTP and MPTCP.

Note: this talk is entirely in Swedish.

My slides for this talk are also available on slideshare.net.

OPTIMERA STHLM

Our friends at .SE are once again putting together an interesting conference-style day with talks, and this time the title of it is “OPTIMERA STHLM” (yes they use all caps) and it is all about optimizing web-related things.

I’ve been invited and I will do a 30 minute talk during that day about the transport layer and the stuff on top of the transport layer. In other words, it’ll cover things to consider for TCP, DNS, HTTP, socket handling and libcurl, plus a quick look at things such as WebSockets, SPDY, MPTCP and SCTP.

The full day’s program is now available on the linked page. Enjoy!

professional libcurl hackers look this way!

In my company, Haxx, we work as consultants and do contract development for customers who pay for our skill, time and dedication. We help them develop stuff.

We’re a small company, with basically two full-time employees. Most of our working days, we are involved with a single customer each who pays for our full-time involvement during a number of months. This is all good and fine. We love our jobs and we love our customers. We’re in it for the fun.

Now, these days we can see that the economy is slowly but surely gaining ground again and is getting up to speed. We hear more and more requests for help and potential assignments are starting to pour in. That’s great and all. Except that we’re only two guys and can’t accept very many projects…

Recently we’ve experienced a noticeable increase in the number of requests for support and other development help involving curl and libcurl. I am the originator and maintainer of curl, so there’s really no surprise or wonder that these companies contact me and us about it. I’m always very happy to see that there are companies and persons who are willing to pay for support of open source, and in many cases pay for extending and bug fixing libcurl and have those fixes go back to the mainline sources without complaints.

Since we have to decline a lot of requests, I’m interested in finding those of you who would be interested in helping out with such work. Are you interested in helping customers with curl related problems? Customers often come to us when they’ve got stuck on something they can’t easily solve themselves, and they turn to us as experts in general, and experts on curl and libcurl in particular. And we are.

Before you think this is a great idea and you send me an email introducing yourself and your greatness in this area, please be aware that I will require proof of your qualifications. Most preferably, that proof is at least one good patch posted to the libcurl mailing list and accepted into the mainline libcurl code, but I’m open to accepting slightly less ideal proofs as well if you can just motivate why you failed to provide the ideal ones. Of course you will also need to be able to communicate in English without problems. Your geographical location, gender, race, religion, skin-color and shoe size are completely uninteresting.

I’m looking for someone interested in contract development, not full-time employment. We still do these kinds of jobs on a case by case basis, and there may be one every two days, one per week or sometimes even less frequently. I want to increase my network of people that I know and trust can deliver quality code and services for these kinds of projects.

Can you help us?

curl and speced cookie order

I just posted this on the curl-library list, but I feel it suits to be mentioned here separately.

As I’ve mentioned before, I’m involved in the IETF http-state working group which is working to document how cookies are used in the wild today. The idea is to create a spec that new implementations can follow and that existing implementations can use to become more interoperable.

(If you’re interested in these matters, I can only urge you to join the http-state mailing list and participate in the discussions.)

The subject of how to order cookies in outgoing HTTP Cookie: headers has been much debated over the recent months, and I’ve also blogged about it. Now the issue has been closed, and the decision went quite opposite to my standpoint: the spec will say that while servers SHOULD NOT rely on the order (yeah right, some obviously already do, and with it specified like this even more will soon do the same), it will recommend that clients sort the cookies in a given way that is close to how current Firefox does it[*].

This has the unfortunate side-effect that to become fully compatible with how the browsers do cookies, we will need to sort our cookies a bit more than what we just recently introduced. That in itself really isn’t very hard, since once we introduced qsort() it is easy to sort on more or other keys.
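
To give an idea of what that extra sorting could look like, here is a rough sketch of a qsort() comparator. The struct and the exact ordering rules (longest path first, then oldest creation time first) are only my assumptions about what the draft will end up recommending, and this is not actual libcurl code:

    #include <stdlib.h>
    #include <string.h>

    /* hypothetical cookie struct, for illustration only */
    struct cookie {
      char *path;
      char *domain;
      long creationtime;
    };

    /* longer paths sort first, then older (earlier created) cookies first */
    static int cookie_cmp(const void *a, const void *b)
    {
      const struct cookie *ca = a, *cb = b;
      size_t pa = strlen(ca->path), pb = strlen(cb->path);
      if(pa != pb)
        return (pa < pb) ? 1 : -1;
      if(ca->creationtime != cb->creationtime)
        return (ca->creationtime < cb->creationtime) ? -1 : 1;
      return 0;
    }

    /* usage: qsort(cookies, num_cookies, sizeof(struct cookie), cookie_cmp); */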

The biggest problem we get with this is that the sorting uses the creation time of the cookies. libcurl and curl and others mostly use the Netscape cookie file format to store cookies and keep state between invokes, and that file format doesn’t include creation time info! It is a simple text-based file format with TAB-separated columns, and the last (7th) column is the cookie’s content.

In order to support the correct sorting between sessions, we need to invent a way to pass on the creation time. My thinking is that we do this in a way that allows older libcurls to still understand the file but just not see/understand the creation time, while newer versions will be able to get it. This would be possible by extending the expires field (the 6th), as it is a numerical value that the existing code will parse as a number, stopping at the first non-digit character. We could easily add a separator character and store the creation time after it. Like:

Old expire time:

2345678

New expire+creation time:

2345678/1234567
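
The nice property of this scheme is that an old parser that just reads the leading digits stops at the slash and still gets the expire time right, while a new parser can continue past it to pick up the creation time. A small standalone sketch of the idea (not actual libcurl code):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
      const char *field = "2345678/1234567"; /* extended expire+creation field */
      char *end;

      /* an old parser only reads the leading digits and stops at the slash,
         so it still gets the expire time unharmed */
      long expires = strtol(field, &end, 10);

      /* a new parser continues past the separator and picks up the
         creation time as well */
      long creation = (*end == '/') ? strtol(end + 1, NULL, 10) : 0;

      printf("expires=%ld creation=%ld\n", expires, creation);
      return 0;
    }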

This format might even work with other readers of this file format if they make similar assumptions about the data, but the truth is that while we picked the format in the first place to be able to exchange cookies with a well known and well used browser, no current browser uses that format anymore. I assume there are still a bunch of other tools that do, like wget and friends.

Update: as my friend Micah Cowan explains, we can in fact use the order within the cookie file as a “creation time” hint and use that as the basis for sorting. Then we don’t need to modify the file format at all. We just need to make sure to store the cookies in time-order… Internally we will need to keep a line number or something per cookie so that we can use that for sorting.

[*] – I believe it sorts on path length, domain length and time of creation, but as soon as the -06 draft goes online it will be easy to read the exact phrasing. The existing -05 draft is available at: http://tools.ietf.org/html/draft-ietf-httpstate-cookie-05