Tag Archives: cURL and libcurl

Axis2/C going libcurl?

Apache’s Axis2/C project, said to be “the only complete SOAP engine”, is considering moving over to libcurl for HTTP transport by default. At least Axis2/C developer Dinesh Premalal thinks they should, and he lists multiple reasons in his blog. I can of course do nothing but agree.

One reason he failed to mention is that we all (Axis2/C users and libcurl users) benefit from them switching to libcurl: a larger combined potential developer base means more eyes on the code and more testing done, and thus in the end a better transport library for everyone.

I’m slightly puzzled by Dinesh’s blog entry since this bug tracker entry submitted to Axis2/C mentions their failure to include curl’s copyright/license text in the distribution, which seems to imply that they already use (parts of) curl. Or?

curl 7.18.0 feature freeze

I just mailed the curl-library list about us entering feature freeze for the upcoming 7.18.0 release. The plan is to have two weeks of bug fixing and time to allow people to find bugs, before we release it to the public. Please get a daily snapshot and give it a spin!

Here are the changes that’ll be coming:

… and there are 26 bug fixes mentioned in the RELEASE-NOTES in progress so far!

curl on scan.coverity.com

On scan.coverity.com, the nice guys at Coverity run scans on open source projects to check for flaws in their source code. Their list currently includes 265 projects, and curl is one of them. I have only good words to say about their scanning, as they found no less than 27 flaws in curl 7.16.1 and only one of them was a false positive. All the others were valid and true flaws that we could fix. I don’t think any of them posed a serious security risk, but still: 26 bugs detected in one go.

On January 8th 2008, Coverity announced their “rung 2” for eleven projects that had zero flaws left in rung 1, and the rung 2 projects get an upgraded analysis. curl was also at zero flaws left, but it isn’t clear to me what else we could do to reach rung 2, or even how we can get them to do a follow-up scan on a newer release, since 7.16.1 is quite old by now and with all the changes in the code over time there’s always the risk that new nasty bugs have crept in… So we’re still at rung 1, with no recent release scanned.

Aiming for 7.18.0 in January 2008

This info was also posted to the curl-library list today.

I previously thought of releasing 7.18.0 in December, but since there are still outstanding topics on the list and since there’s no pressure from any serious bug fixes or anything, I decided we can just as well wait until January. I want January 13th to be the feature freeze day, after which no new features will be committed until the release, which hopefully could then be done by January 28th or so.

The live updated TODO-RELEASE document will change over time, but it currently contains these items:

Is there anything we’ve forgotten that we should include in the next release? To get a feel for what the next release will look like, check out the RELEASE-NOTES in progress, or try out a daily snapshot!

Fresh CA Cert Bundle Anyone?

The popular CA extract service on the curl web site converts the Firefox CA certs into a PEM file suitable for use with curl, wget or anything else OpenSSL-based that likes PEM formatted CA cert bundles.

The main script was fixed yesterday. It previously fetched a whole nightly source code snapshot just to get the “magic” file to convert from, but I noticed that the nightly source snapshots stopped being updated a good while ago, so the updates had stopped!

The script now only gets the actually needed certdata file and converts it, so it downloads a lot less data in vain and thus also runs much faster. The PEM files offered on that page are again up-to-date with the most recent Firefox.
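For those who want to point libcurl at the extracted bundle from their own code, a minimal sketch could look like the one below. This is not part of the service itself: the file name cacert.pem and the example URL are just assumptions, and the command line tool equivalent is simply passing --cacert cacert.pem to curl.

  /* minimal sketch: use a downloaded PEM bundle as the CA store for a
     libcurl transfer; "cacert.pem" and the URL are made-up examples */
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl;
    CURLcode res;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      /* the PEM bundle converted from the Firefox certdata file */
      curl_easy_setopt(curl, CURLOPT_CAINFO, "cacert.pem");
      /* verify the server certificate against that bundle */
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);

      res = curl_easy_perform(curl);
      if(res != CURLE_OK)
        fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
  }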

wget going libcurl?

Micah Cowan is the current maintainer of GNU Wget, and he recently posted a long mail to the wget mailing list titled “Thoughts on Wget 1.x, 2.0”.

Two fun quotes for the curious who don’t feel like reading the whole post:

1. On the subject of making wget deal with multiple simultaneous connections/requests: The obvious solution to that is to use c-ares, which does exactly that: handle DNS queries asynchronously. Actually, I didn’t know this until just now, but c-ares was split off from ares to meet the needs of the curl developers.

2. Following the first reasoning, they can indeed get away with even less work if they base that work on an existing solution: While I’ve talked about not reinventing the wheel, using existing packages to save us the trouble of having to maintain portable async code, higher-level buffered-IO and network comm code, etc, I’ve been neglecting one more package choice. There is, after all, already a Free Software package that goes beyond handling asynchronous network operations, to specifically handle asynchronous _web_ operations; I’m speaking, of course, of libcurl. There would seem to be some obvious motivation for simply using libcurl to handle all asynchronous web traffic, and wrapping it with the logic we need to handle retries, recursion, timestamping, traversing, selecting which files to download, etc. Besides async web code, of course, we’d also automatically get support for a number of various protocols (SFTP, for example) that have been requested in Wget.

I am of course happy to see that the consideration exists – even if this won’t go further than being expressed in a mail. I did air this idea to the wget people back in 2001, and even though we’re now more than six years down the road since then, the situation is now even more clear: libcurl is a much more capable and proven transport layer solution and it supports many more protocols than wget does.
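To give an idea of what such a wrapping could start from, here is a rough sketch of driving a transfer with libcurl’s multi interface, the part of the API meant for running many transfers asynchronously in a single thread. This is only my illustration of the API shape, not anything from Micah’s mail, and the URL is made up.

  /* rough sketch: one transfer driven through libcurl's multi interface,
     the API a wget-like tool would use to run many transfers at once in
     a single thread; the URL is just an example */
  #include <sys/select.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURLM *multi;
    CURL *easy;
    int still_running = 0;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/index.html");

    multi = curl_multi_init();
    curl_multi_add_handle(multi, easy); /* add more handles for more parallel transfers */

    curl_multi_perform(multi, &still_running); /* kick off the transfer */

    while(still_running) {
      fd_set rd, wr, exc;
      int maxfd = -1;
      long timeout_ms = 1000;
      struct timeval tv;

      FD_ZERO(&rd);
      FD_ZERO(&wr);
      FD_ZERO(&exc);
      curl_multi_fdset(multi, &rd, &wr, &exc, &maxfd);
      curl_multi_timeout(multi, &timeout_ms);
      if(timeout_ms < 0)
        timeout_ms = 1000;

      tv.tv_sec = timeout_ms / 1000;
      tv.tv_usec = (timeout_ms % 1000) * 1000;

      if(maxfd >= 0)
        select(maxfd + 1, &rd, &wr, &exc, &tv);
      else {
        /* no sockets to wait for yet; pause briefly before trying again */
        struct timeval wait = { 0, 100 * 1000 };
        select(0, NULL, NULL, NULL, &wait);
      }

      curl_multi_perform(multi, &still_running);
    }

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
  }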

Me biased? naaah… 🙂

curl vs wget, really

Ok, since people truly and actually often ask me about what the differences are between curl and Wget, it might be suitable to throw in my garbage here and state the main differences as I see them. Please consider my bias towards curl since after all, curl is my baby – but I have contributed code to Wget as well.

curl

  • Features and is powered by libcurl, a cross-platform library with a stable API that can be used by anyone and everyone. This difference is major since it creates a completely different attitude on how to do things internally. It is also slightly harder to make a library than a “mere” command line tool.
  • Pipes. curl is more in the traditional unix style: it sends more stuff to stdout, and reads more from stdin, in an “everything is a pipe” manner.
  • Return codes. curl returns a range of defined and documented return codes for various (error) situations.
  • Single shot. curl is basically made to do single-shot transfers of data. It transfers just the URLs that the user specifies, and does not contain any recursive downloading logic or any sort of HTML parser.
  • More protocols. curl supports FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS and FILE at the time of this writing. Wget supports HTTP, HTTPS and FTP.
  • More portable. Ironically, curl builds and runs on lots more platforms than wget, in spite of their attempts to keep things conservative. For example: VMS, OS/400, TPF and other more “exotic” platforms that aren’t straight-forward unix clones.
  • More SSL libraries and SSL support. curl can be built with one out of four different SSL/TLS libraries, and it offers more control and wider support for protocol details.
  • curl (or rather libcurl) supports more HTTP authentication methods, especially when you go over HTTP proxies (see the sketch right after this list).
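To make that last point a bit more concrete, here is a small sketch of how an application asks libcurl to negotiate HTTP authentication both with the server and with a proxy. It is only an illustration; the host names and credentials are made up.

  /* sketch only: let libcurl pick the strongest HTTP auth method the
     server and the proxy offer; host names and credentials are made up */
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/protected");

      /* authenticate to the server: libcurl picks among the methods the
         server offers (Basic, Digest, NTLM, ...) */
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_ANY);
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");

      /* authenticate to an HTTP proxy as well */
      curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
      curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_ANY);
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "proxyuser:proxysecret");

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
  }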

wget

  • Wget is command line only. There’s no lib or anything. Personally, I’ve always disliked that the project doesn’t provide a man page, as they stand on the GNU side of this and consider “info” pages to be the superior way to document things like this. I strongly disagree.
  • Recursive! Wget’s major strong side compared to curl is its ability to download recursively, or even just download everything that is referred to from a remote resource, be it an HTML page or an FTP directory listing.
  • Older. Wget has traces back to 1995, while curl can be traced back no further than to 1997.
  • Less developer activity. While this can be debated, I consider three metrics here: mailing list activity, source code commit frequency and release frequency. Anyone following these two projects can see that the curl project has a much higher pace in all these areas, and it has indeed been so for several years.
  • HTTP 1.0. Wget still does its HTTP operations using HTTP 1.0, and while that still works remarkably well and hardly ever troubles end users, it is still a fact. curl has done HTTP 1.1 since March 2001 (while still offering optional 1.0 requests, as sketched right after this list).
  • GPL. Wget is 100% GPL v2, and I believe it will go v3 with their next release. curl is MIT licensed.
  • GNU. Wget is part of the GNU project and all copyrights are assigned to them etc. The curl project is entirely stand-alone and independent with no organization parenting at all.
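As a footnote to the HTTP 1.0 item above: with libcurl, asking for old-style 1.0 requests is a single option. A tiny sketch, with a made-up URL:

  /* sketch: libcurl defaults to HTTP 1.1 but can be told to send
     HTTP 1.0 requests instead; the URL is just an example */
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* ask for the old protocol version explicitly */
      curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_1_0);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }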

This turned out to be a long post and it might in fact be useful to save for the future, so I’m also posting this as a more permanent doc on my site at this URL: http://daniel.haxx.se/docs/curl-vs-wget.html. Possible updates will be done there. Do let me know if you have further evident differences or if you disagree with me on details here!

curl on Fedora uses NSS

I noticed that curl on Fedora suddenly started using NSS for TLS/SSL, which I believe makes it the first distro out there to do so.

I’ve been under the impression that Debian is the only distro shipping it built with GnuTLS.
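For the curious: which TLS library a given libcurl build was made with can be checked at run time. A small sketch (the version strings in the comment are only examples of what the different backends report):

  /* sketch: ask a libcurl build which SSL/TLS library it was built with,
     using the run-time version info API */
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
    /* ssl_version reads something like "OpenSSL/0.9.8g", "GnuTLS/2.0.4"
       or "NSS/3.11.7" depending on the build (example strings only) */
    printf("libcurl %s built with %s\n", info->version,
           info->ssl_version ? info->ssl_version : "no SSL");
    return 0;
  }

The command line tool shows the same thing in the first line of curl -V output.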

I must admit I enjoy seeing more use of curl’s wide support of various underlying technologies, and it also makes it more certain that they will remain working and even get improved as we go. When we add support for things that never really end up getting used, those features just risk serious bitrotting and slowly dying away as the code changes but nobody uses them.