c-ares 1.5.3

I’m happy to announce the release of c-ares 1.5.3. c-ares is an asynchronous name resolver and somewhat generic DNS library with a liberal MIT-style license.

The news this time includes:

  • fix adig sample application compilation failure on some systems
  • fix pkg-config reporting of private libraries needed for static linking
  • fallback to gettimeofday when monotonic clock is unavailable at run-time
  • ares_gethostbyname() fallback from AAAA to A records with CNAME present (see the sketch after this list)
  • allow --enable-largefile and --disable-largefile configurations
  • configure process no longer needs nor checks size of curl_off_t
  • library will now be built with _REENTRANT symbol defined if needed
  • improved configure detection of number of arguments for getservbyport_r
  • improved query-ID randomness
  • validate that DNS response address matches the request address
  • fix acountry sample application compilation failure on some systems
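
For those curious about what that ares_gethostbyname() usage looks like, here's a minimal sketch of an AAAA lookup driven by a plain select() loop. The host name is just a placeholder and the code is my illustration, not something shipped with the release:

    #include <ares.h>
    #include <arpa/inet.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* called when the lookup completes (or fails) */
    static void host_cb(void *arg, int status, int timeouts, struct hostent *host)
    {
      char buf[INET6_ADDRSTRLEN];
      char **addr;
      (void)arg;
      (void)timeouts;

      if(status != ARES_SUCCESS) {
        printf("lookup failed: %s\n", ares_strerror(status));
        return;
      }
      for(addr = host->h_addr_list; *addr; addr++) {
        inet_ntop(host->h_addrtype, *addr, buf, sizeof(buf));
        printf("%s\n", buf);
      }
    }

    int main(void)
    {
      ares_channel channel;
      fd_set readers, writers;

      if(ares_init(&channel) != ARES_SUCCESS)
        return 1;

      /* AF_INET6 asks for AAAA records; as of this release the library
         falls back to A records when a CNAME is in the way */
      ares_gethostbyname(channel, "www.example.com", AF_INET6, host_cb, NULL);

      /* drive the resolver until all queries have completed */
      for(;;) {
        int nfds;
        struct timeval tv, *tvp;
        FD_ZERO(&readers);
        FD_ZERO(&writers);
        nfds = ares_fds(channel, &readers, &writers);
        if(nfds == 0)
          break;
        tvp = ares_timeout(channel, NULL, &tv);
        select(nfds, &readers, &writers, NULL, tvp);
        ares_process(channel, &readers, &writers);
      }

      ares_destroy(channel);
      return 0;
    }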

I’m also happy to see that the development version of Wireshark is currently using c-ares.

If you’re a graphics person, we’d appreciate some kind of logo/symbol thing for the project!

Good port day

Things happen in bursts. Development goes on and on for long periods without any noticeable big breakthroughs, and then all of a sudden a lot happens at once. And those days are the best days!

Rockbox now works somewhat on the iAudio 7; a new patch was posted today!

Rockbox now almost runs on the Creative Zen Vision:M, and at least the guys can now install the bootloader so it loads and starts without having to rip out the hard drive and put it into a PC first!

We can now install new firmware on the M6 players when running Linux, thanks to new tools being developed!

FTP vs HTTP, really!

Since I’m doing my share of both FTP and HTTP hacking in the curl project, I quite often see, and sometimes get, questions about what the actual differences between FTP and HTTP are: which one is the “best”, and isn’t it so that … is the faster one?

FTP vs HTTP is my attempt at a write-up covering most of the differences for users of the protocols, without going into overly technical detail. If you find flaws or have additional info you think should be included, please let me know!

The document includes comparisons between the protocols in these areas:

  • Age
  • Upload
  • ASCII/binary
  • Headers
  • Pipelining
  • FTP Command/Response
  • Two Connections
  • Active and Passive
  • Firewalls
  • Encrypted Control Connections
  • Authentications
  • Download
  • Ranges/resume
  • Persistent Connections
  • Chunked Encoding
  • Compression
  • FXP
  • IPv6
  • Name based virtual hosting
  • Proxy Support
  • Transfer Speed

With your help it could become a good resource to point curious minds to in the future…
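
To give a taste of one of those areas from the libcurl angle: ranges/resume is a single option in libcurl, which turns into a Range: request header on HTTP and a REST command on FTP. A minimal sketch (the URL and offset are made up):

    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
      CURL *curl;

      curl_global_init(CURL_GLOBAL_ALL);
      curl = curl_easy_init();
      if(curl) {
        /* works the same for an http:// URL: libcurl picks the
           protocol-specific resume mechanism behind the scenes */
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.bin");
        curl_easy_setopt(curl, CURLOPT_RESUME_FROM, 10000L); /* skip first 10000 bytes */
        if(curl_easy_perform(curl) != CURLE_OK)
          fprintf(stderr, "transfer failed\n");
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }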

The hack will still be useful

Okay, in my recent blog entry about Flash 10 using native libcurl I got a bit side-tracked and mentioned something about distros confusing libcurl’s soname 3 and 4. This caused some comments on that post and some further activity behind the curtains, so let me spell out exactly what I mean:

The ABI for libcurl did change between soname 3 and 4, but the change was in a rather subtle area (FTP third party transfers, sometimes known as FXP) which is rarely used. It certainly will not hurt the Adobe Flash system.

I’m not against “the hack” (or perhaps “a hack”, as there are several ways an ordinary system could provide work-arounds or fixes for this problem) per se; I am mainly trying to fight the belief or misconception that the ABI break doesn’t exist.

Since Adobe doesn’t want to provide an updated package that links against a modern libcurl and refuses to provide multiple packages, distros of course need to address this dilemma.

I just want all to know that 3 != 4, even if the risk that it’ll cause problems is very slim.

Update: it seems Adobe will change this behavior in their next release and then try to load either 3 or 4.
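
For the curious, such a load-either approach could look something like the dlopen() sketch below. This is my illustration of the idea, not Adobe’s actual code:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
      const char *(*version)(void);

      /* try the newer soname first, fall back to the older one */
      void *lib = dlopen("libcurl.so.4", RTLD_NOW | RTLD_GLOBAL);
      if(!lib)
        lib = dlopen("libcurl.so.3", RTLD_NOW | RTLD_GLOBAL);
      if(!lib) {
        fprintf(stderr, "no libcurl found: %s\n", dlerror());
        return 1;
      }

      /* curl_version() exists in both soname 3 and soname 4 */
      version = (const char *(*)(void))dlsym(lib, "curl_version");
      if(version)
        printf("loaded %s\n", version());

      dlclose(lib);
      return 0;
    }

Build with -ldl on glibc systems.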

CA cert bundle from Firefox

It could be interesting to note that extracting all the CA certs from your local Firefox installation isn’t that tricky, if you just use some of the magic that’s at hand with the NSS certutil tool.

Users of OpenSSL or GnuTLS based tools or libraries (such as libcurl) might be pleased to learn this.

curl users in general should of course be aware that we no longer ship any CA cert bundle with curl (as of curl 7.18.1), since it seems some ports haven’t yet discovered or adapted to this.

Update: this script is now present as lib/firefox-db2pem.sh in the curl CVS repository.
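
Once you have a PEM bundle extracted, with that script or by other means, pointing libcurl at it is a one-option job. A minimal sketch with a made-up path:

    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
      CURL *curl;
      CURLcode res;

      curl_global_init(CURL_GLOBAL_ALL);
      curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* use the bundle extracted from Firefox; the path is just an example */
        curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/firefox-cacerts.pem");
        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
          fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }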

A talk that won’t happen

I had already talked to the guys about going to FSCONS 2008 to do a talk about Rockbox and reverse engineering when I realized that we’re going to Rome, Italy with my company that very same weekend. If that isn’t enough, the Google Summer of Code mentor summit also takes place that weekend in late October, so if Italy somehow got canceled I would probably rather go to California… I quite enjoyed FSCONS last year, so I’m a bit annoyed about this situation.

Isn’t that just the irony of life? Here I have an entire autumn with nothing much planned and then all of a sudden three events all take place on the same weekend.

I’m still on for another talk in Stockholm in September, but since I haven’t seen any public details about it I’ll refrain from being specific just yet.

Site deadness

When I got to work this morning I immediately noticed that one of the servers that host a lot of services for open source projects I tend to play around with (curl, Rockbox and more) had died. It responded to pings but didn’t allow my usual login via ssh. It also hosts this blog.

I called our sysadmin guy, who works next to the server, and he reported that the screen mentioned inode problems on an ext3 filesystem on sda1. Power-cycling did nothing good; the machine simply didn’t even see the hard drive…

I did change our slave DNS for rockbox.org and made it point to a backup web server in the meantime, just to make people aware of the situation.

Some 12 hours after the discovery of the situation, Linus Nielsen Feltzing had the system back up again, and it’s looking more or less identical to how it was yesterday. The backup procedure proved itself to work flawlessly. Linus inserted a new disk, partitioned it similarly to the previous one, restored the whole backup, fixed the boot loader (lilo) and wham (ignoring some minor additional fiddling), the server was up and running again.

Thanks Linus!

popen() in pthreaded program confuses gdb

I just thought I’d share a lesson I learned today:

I’ve been struggling for a long time at work with a gdb problem. When I set a breakpoint and then single-step from that point, it sometimes (often) decides to act as if I had done ‘continue’ and not ‘next’. It is highly annoying and makes debugging nasty problems really awkward.

Today I searched around for the topic and after some experiments I can now confirm: if I remove all uses of popen() I no longer get the problems! I found posts that indicated that forking could confuse threaded programs, and since this program at work uses threads I could immediately identify that it uses both popen() and system() and both of them use fork() internally. (And yes, I believe my removal of popen() also removed the system() calls.)
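
To illustrate the kind of combination that was involved, here’s a stripped-down sketch, and not the actual work code, of a pthreaded program that also calls popen():

    #include <pthread.h>
    #include <stdio.h>

    /* worker thread: set a breakpoint in here and single-step */
    static void *worker(void *arg)
    {
      int i;
      (void)arg;
      for(i = 0; i < 10; i++)
        printf("working %d\n", i); /* 'next' here would sometimes run away */
      return NULL;
    }

    int main(void)
    {
      pthread_t tid;
      FILE *p;
      char line[128];

      pthread_create(&tid, NULL, worker, NULL);

      /* popen() forks internally, and this fork is what seemed to
         confuse gdb's single-stepping in the other thread */
      p = popen("date", "r");
      if(p) {
        if(fgets(line, sizeof(line), p))
          printf("popen said: %s", line);
        pclose(p);
      }

      pthread_join(tid, NULL);
      return 0;
    }

Build with -pthread; removing the popen() call made the stepping behave again for me.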

And now I can finally debug my crappy code again to become less crappy!

My work PC runs glibc 2.6.1, gcc 4.1.3 and gdb 6.6. But I doubt the specific versions matter much.

Standardized cookies never took off

David M. Kristol is one of the authors of RFC2109 and RFC2965, “HTTP State Management Mechanism”. RFC2109 is also known as the first attempt to standardize how cookies should be sent and received. Prior to that document, the only cookie spec was the very brief document released by Netscape in the old days and it certainly left many loose ends.

Mr Kristol has published a great and long document, HTTP Cookies: Standards, Privacy, and Politics, about the slow and dwindling story of how the cookie standardization work within the IETF took place and how it proceeded.

Still today, none of those documents are used very much. The original Netscape way is still how cookies are done, and even if a lot of good will and great effort were spent on doing things right in these RFCs, I can’t honestly say that I see anything on the horizon that will push the web world toward making cookies compliant.