Category Archives: Open Source

Open Source, Free Software, and similar

curl and libcurl 7.19.0

With almost 40 described bug fixes, curl and libcurl 7.19.0 come flying with a range of new things, including the following (a small usage sketch follows the list):

  • curl_off_t gets its size/typedef somewhat differently than before. This may cause an ABI change for you. See lib/README.curl_off_t for a full explanation.
  • Added CURLINFO_PRIMARY_IP
  • Added CURLOPT_CRLFILE and CURLE_SSL_CRL_BADFILE
  • Added CURLOPT_ISSUERCERT and CURLE_SSL_ISSUER_ERROR
  • curl’s option parser for boolean options reworked
  • Added --remote-name-all
  • Now builds for the INTEGRITY operating system
  • Added CURLINFO_APPCONNECT_TIME
  • Added test selection by key word in runtests.pl
  • the curl tool’s -w option supports the %{ssl_verify_result} variable
  • Added CURLOPT_ADDRESS_SCOPE and scope parsing of the URL according to RFC4007
  • Support --append on SFTP uploads (not with OpenSSH, though)
  • Added curlbuild.h and curlrules.h to the external library interface
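
To give a feel for a couple of the new info values, here’s a minimal, hedged sketch (not taken from the release itself) that asks libcurl for CURLINFO_PRIMARY_IP and CURLINFO_APPCONNECT_TIME after a transfer; the URL is just a placeholder:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl;
      curl_global_init(CURL_GLOBAL_DEFAULT);
      curl = curl_easy_init();
      if(curl) {
        char *ip = NULL;
        double appconnect = 0.0;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        if(curl_easy_perform(curl) == CURLE_OK) {
          /* IP address of the most recent connection */
          curl_easy_getinfo(curl, CURLINFO_PRIMARY_IP, &ip);
          /* time until the SSL/SSH handshake completed, in seconds */
          curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME, &appconnect);
          printf("talked to %s, handshake done after %.3f seconds\n",
                 ip ? ip : "?", appconnect);
        }
        curl_easy_cleanup(curl);
      }
      curl_global_cleanup();
      return 0;
    }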

We’ve worked really hard to make this a solid and fine release. I hope it shows.

Getting cacerts for your tools

As the primary curl author, I find the comments here interesting. That blog entry “Teaching wget About Root Certificates” is about how you can get cacerts for wget by downloading them from curl’s web site, and people quickly point out that getting cacerts from an untrusted third party is, of course, an ideal opportunity for an MITM “attack”.

Of course you can’t trust any files off an HTTP site, or an HTTPS site without a “trusted” certificate, but thinking that the curl project would run one of those just to let random people load PEM files from our site seems a bit weird. That’s why we also provide the scripts we do all this with, so that you can run them yourself with whatever input data you need, preferably something you trust. The more paranoid you are, the harder that gets, of course.

On Fedora, curl does come with ca certs (at least I’m told recent Fedoras do), and even if it doesn’t, you can point curl at whatever cacert you like. Since most default installs of curl use OpenSSL, just like wget does, you could tell curl to use the same cacert your wget install uses.

This last thing gets a little more complicated when one of the two is compiled with an SSL library that doesn’t easily support PEM (read: NSS), but in the case of curl in recent Fedora, they build it with NSS plus an additional patch that still allows it to read PEM files.
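
For the record, pointing curl at a particular ca cert bundle is a one-liner: on the command line it’s --cacert <file>, and with libcurl it’s CURLOPT_CAINFO. A minimal sketch, where the bundle path is of course just an example and should be whichever file your wget/OpenSSL setup already trusts:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* example path only; point this at the bundle you actually trust,
           for instance the one your wget installation already uses */
        curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/certs/ca-bundle.crt");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }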

c-ares 1.5.3

I’m happy to announce the release of c-ares 1.5.3. c-ares is an asynchronous name resolver and somewhat generic DNS library with a liberal MIT-style license.

The news this time includes:

  • fix adig sample application compilation failure on some systems
  • fix pkg-config reporting of private libraries needed for static linking
  • fallback to gettimeofday when monotonic clock is unavailable at run-time
  • ares_gethostbyname() fallback from AAAA to A records with CNAME present (see the sketch after this list)
  • allow --enable-largefile and --disable-largefile configurations
  • configure process no longer needs nor checks size of curl_off_t
  • library will now be built with _REENTRANT symbol defined if needed
  • Improved configure detection of number of arguments for getservbyport_r
  • Improved query-ID randomness
  • Validate that DNS response address matches the request address
  • fix acountry sample application compilation failure on some systems
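
For anyone who hasn’t tried c-ares, this is roughly what an asynchronous lookup looks like; a minimal, hedged sketch driven by a plain select() loop, with the host name just a placeholder. With family AF_INET6, ares_gethostbyname() is what does the AAAA-to-A fallback mentioned above (build with -lcares):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <netdb.h>
    #include <arpa/inet.h>
    #include <ares.h>

    static void host_cb(void *arg, int status, int timeouts, struct hostent *host)
    {
      char buf[INET6_ADDRSTRLEN];
      char **addr;
      (void)arg;
      (void)timeouts;
      if(status != ARES_SUCCESS || !host) {
        fprintf(stderr, "lookup failed, status %d\n", status);
        return;
      }
      for(addr = host->h_addr_list; *addr; addr++) {
        inet_ntop(host->h_addrtype, *addr, buf, sizeof(buf));
        printf("%s -> %s\n", host->h_name, buf);
      }
    }

    int main(void)
    {
      ares_channel channel;
      if(ares_init(&channel) != ARES_SUCCESS)
        return 1;

      /* AF_INET6 asks for AAAA records, falling back to A when there are none */
      ares_gethostbyname(channel, "example.com", AF_INET6, host_cb, NULL);

      /* drive the channel until all queries are done */
      for(;;) {
        fd_set readers, writers;
        struct timeval tv, *tvp;
        int nfds;

        FD_ZERO(&readers);
        FD_ZERO(&writers);
        nfds = ares_fds(channel, &readers, &writers);
        if(nfds == 0)
          break;
        tvp = ares_timeout(channel, NULL, &tv);
        select(nfds, &readers, &writers, NULL, tvp);
        ares_process(channel, &readers, &writers);
      }

      ares_destroy(channel);
      return 0;
    }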

I’m also happy to see that the development version of Wireshark is currently using c-ares.

If you’re a graphics person, we’d appreciate some kind of logo/symbol for the project!

Good port day

Things happen in bursts. Development goes on and on for long periods without any noticeable big breakthroughs, and then all of a sudden a lot happens at once. And those days are the best days!

Rockbox now works somewhat on the iAudio7; a new patch was posted today!

Rockbox almost runs on the Creative Zen Vision:m now, and at the very least the guys can install the bootloader and have it load and start without having to rip out the hard drive and put it into a PC first!

We can now install new firmware on the M6 players from Linux, thanks to new tools being developed!

FTP vs HTTP, really!

Since I’m doing my share of both FTP and HTTP hacking in the curl project, I quite often see, and sometimes get, questions about what the actual differences between FTP and HTTP are, which one is the “best”, and isn’t it so that … is the faster one?

FTP vs HTTP is my attempt at a write-up covering most of the differences that matter to users of the protocols, without going into overly technical detail. If you find flaws or have additional info you think should be included, please let me know!

The document includes comparisons between the protocols in these areas:

  • Age
  • Upload
  • ASCII/binary
  • Headers
  • Pipelining
  • FTP Command/Response
  • Two Connections
  • Active and Passive
  • Firewalls
  • Encrypted Control Connections
  • Authentications
  • Download
  • Ranges/resume
  • Persistent Connections
  • Chunked Encoding
  • Compression
  • FXP
  • IPv6
  • Name based virtual hosting
  • Proxy Support
  • Transfer Speed

With your help it could become a good resource to point curious minds to in the future…
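
As a taste of one of the areas in the list above, ranges/resume is something both protocols can do but implement very differently under the hood; with libcurl that difference is hidden, and the same hedged sketch below works for an ftp:// URL as well as an http:// one (the URL and byte offset are made up):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl;
      FILE *out;
      curl_global_init(CURL_GLOBAL_DEFAULT);
      curl = curl_easy_init();
      out = fopen("big.bin", "ab");   /* append to what we already have */
      if(curl && out) {
        /* works the same with "http://example.com/big.bin" */
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/big.bin");
        /* resume the download at this byte offset */
        curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, (curl_off_t)100000);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
        curl_easy_perform(curl);
      }
      if(out)
        fclose(out);
      if(curl)
        curl_easy_cleanup(curl);
      curl_global_cleanup();
      return 0;
    }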

The hack will still be useful

Okay, in my recent blog entry about Flash 10 using native libcurl I got a bit side-tracked and mentioned something about distros confusing libcurl’s soname 3 and 4. This caused some comments in that post and some further activities behind the scenes, so let me spell out exactly what I mean:

The ABI for libcurl did change between soname 3 and 4, but the change was in a rather subtle area (FTP third party transfers, sometimes known as FXP) which is rarely used. It certainly will not hurt the Adobe Flash system.

I’m not against “the hack” per se (or perhaps “a hack”, as there are several ways an ordinary system could provide work-arounds or fixes for this problem); I am mainly trying to fight the belief or misconception that the ABI break doesn’t exist.

Since Adobe doesn’t want to provide an updated package that links against a modern libcurl and refuses to provide multiple packages, distros of course need to address this dilemma.

I just want everyone to know that 3 != 4, even if the risk that it’ll cause problems is very slim.

Update: it seems Adobe will change this behavior in their next release and then try to load either 3 or 4.
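
To illustrate what “try to load either 3 or 4” can mean in practice, here’s a hedged sketch of the dlopen() fallback idea. This is not Adobe’s actual code, just the general technique (build with -ldl):

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
      /* prefer the newer soname, fall back to the older one */
      void *lib = dlopen("libcurl.so.4", RTLD_NOW);
      char *(*version)(void);

      if(!lib)
        lib = dlopen("libcurl.so.3", RTLD_NOW);
      if(!lib) {
        fprintf(stderr, "no libcurl found: %s\n", dlerror());
        return 1;
      }
      /* resolve the symbols you need through dlsym() as usual */
      version = (char *(*)(void))dlsym(lib, "curl_version");
      if(version)
        printf("loaded %s\n", version());
      dlclose(lib);
      return 0;
    }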

CA cert bundle from Firefox

It could be interesting to note that extracting all the cacerts from your local Firefox installation isn’t that tricky, if you just use some of the magic that is at hand with the NSS certutil tool.

Users of OpenSSL or GnuTLS based tools or libraries (such as libcurl) might be pleased to learn this.

curl users in general should of course be aware that we no longer ship any ca-cert bundle with curl (as of curl 7.18.1), since it seems some ports haven’t yet updated or discovered this.

Update: this script is now present as lib/firefox-db2pem.sh in the curl CVS repository.

A talk that won’t happen

I had already talked to the guys about going to FSCONS 2008 and doing a talk about Rockbox and reverse engineering, when I realized that the very same weekend we’re going to Rome, Italy with my company. If that isn’t enough, the Google Summer of Code mentor summit also takes place that weekend in late October, so if Italy somehow got canceled I would probably rather go to California… I quite enjoyed FSCONS last year, so I’m a bit annoyed about this situation.

Isn’t that just the irony of life? Here I have an entire autumn with nothing much planned and then all of a sudden three events all take place on the same weekend.

I’m still on for another talk in Stockholm in September, but since I haven’t seen any public details about it I’ll refrain from being specific just yet.

Site deadness

When I got to work this morning I immediately noticed that one of the servers that host a lot of services for open source projects I tend to play around with (curl, Rockbox and more) had died. It responded to pings but didn’t allow my usual login via ssh. It also hosts this blog.

I called our sysadmin guy, who works next to the server, and he reported that the screen mentioned inode problems on an ext3 filesystem on sda1. Power-cycling did no good; the machine simply didn’t even see the hard drive…

In the meantime, I changed our slave DNS for rockbox.org to point to a backup web server, just to make people aware of the situation.

Some 12 hours after the discovery of the situation, Linus Nielsen Feltzing had the system back up again, looking more or less identical to how it was yesterday. The backup procedure proved to work flawlessly. Linus inserted a new disk, partitioned it similarly to the previous one, restored the whole backup, fixed the boot loader (lilo) and, wham, (ignoring some minor additional fiddling) the server was up and running again.

Thanks Linus!

popen() in pthreaded program confuses gdb

I just thought I’d share a lesson I learned today:

I’ve been struggling with a gdb problem at work for a long time. When I set a breakpoint and then single-step from that point, it sometimes (often) decides to act as if I had done ‘continue’ and not ‘next’. It is highly annoying and makes debugging nasty problems really awkward.

Today I searched around for the topic and after some experiments I can now confirm: if I remove all uses of popen() I no longer get the problem! I found posts indicating that forking can confuse gdb in threaded programs, and since this program at work uses threads I could immediately identify that it uses both popen() and system(), both of which use fork() internally. (And yes, I believe my removal of popen() also got rid of the system() calls.)
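
For the curious, the kind of code involved boils down to something like this stripped-down sketch (not the actual program from work): a thread that runs a child command through popen(), which forks and execs a shell behind the scenes. Setting a breakpoint in worker() and single-stepping past the popen() call is where I saw ‘next’ turn into ‘continue’ (build with -lpthread):

    #include <stdio.h>
    #include <pthread.h>

    static void *worker(void *arg)
    {
      FILE *p;
      char line[128];
      (void)arg;
      p = popen("date", "r");      /* fork() + exec of /bin/sh under the hood */
      if(p) {
        if(fgets(line, sizeof(line), p))
          printf("child said: %s", line);
        pclose(p);
      }
      return NULL;
    }

    int main(void)
    {
      pthread_t tid;
      pthread_create(&tid, NULL, worker, NULL);
      pthread_join(tid, NULL);
      return 0;
    }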

And now I can finally debug my crappy code again so it can become less crappy!

My work PC runs glibc 2.6.1, gcc 4.1.3 and gdb 6.6, but I doubt the specific versions matter much.