Time again for a happy release event. Can you believe this is in fact the 113th release?
Run over to the curl download page to get it!
This time, we bring happiness with the best curl and libcurl release ever and it features four changes and a range of bug fixes. The changes to note this time include:
- -T. is now for non-blocking uploading from stdin
- SYST handling on FTP for OS/400 FTP server cases
- libcurl refuses to read a single HTTP header longer than 100K
- added the --crlfile option to curl (see the quick examples right below this list)
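As a quick illustration of the two new command line bits (the file names, host names and URLs here are just made-up placeholders), they can be used along these lines:

tail -f access.log | curl -T . ftp://upload.example.com/access.log
curl --crlfile revoked-certs.crl https://secure.example.com/

The first one streams whatever arrives on stdin up to the server without stalling on a quiet pipe; the second one makes curl check the server's certificate against the given PEM-formatted certificate revocation list.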
And a collection of bugs fixed since the previous release involves these issues:
- The Windows makefiles work again
- libcurl-NSS acknowledges verifyhost
- SIGSEGV when pipelined pipe unexpectedly breaks
- data corruption issue with re-connected transfers
- use after free if we’re completed but easy_conn not NULL (pipelined)
- missing strdup() return code check
- CURLOPT_PROXY_TRANSFER_MODE could pass along wrong syntax
- configure --with-gnutls=PATH fixed
- ftp response reader bug on failed control connections
- improved NSS error message on failed host name verifications
- ftp NOBODY on re-used connection hang
- configure uses pkg-config for cross-compiles as well
- improved NSS detection in configure
- cookie expiry date at 1970-jan-1 00:00:00
- libcurl-OpenSSL failed to verify some certs with Subject Alternative Name
- libcurl-OpenSSL can load CRL files with more than one certificate inside
- received cookies without explicit path got saved wrong if the URL had a query part
- don't shrink SO_SNDBUF on Windows for those who have it set large already
- connect next bug
- invalid file name characters handling on Windows
- double close() on the primary socket with libcurl-NSS
- GSS negotiate infinite loop on bad credentials
- memory leak in SCP/SFTP connections
- use pkg-config to find out libssh2 installation details in configure
- unparsable cookie expire dates make cookies get treated as session cookies
- POST with Digest authentication and “Transfer-Encoding: chunked”
- SCP connection re-use with wrong auth
- CURLINFO_CONTENT_LENGTH_DOWNLOAD for 0 bytes transfers
- CURLINFO_SIZE_DOWNLOAD for ldap transfers (-w size_download)
cURL is the greatest thing for web developers.
Thanks so much for that.
You are great!
Guys, why is it so difficult to include a .lib file in the Windows versions? Everybody has had problems with building, etc. People are leaving without even getting to try it.
N: that's a question for Microsoft, isn't it?
We provide both makefiles and Visual Studio project files that can build libcurl just fine on Windows. What else do people need?
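For the record, a minimal sketch of how a build with the shipped makefiles can go, run from a Visual Studio command prompt in the lib directory (the exact makefile name and CFG values depend on your curl and Visual Studio versions, so treat this as an assumption and check docs/INSTALL in the source package):

nmake /f Makefile.vc6 CFG=release

With a configuration like that you end up with a libcurl.lib to link your application against.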
I discovered curl just a week ago when I needed a command line download tool. My being a Windows user was perhaps the reason for such a late discovery. Actually, I needed to run some unattended automated non-interactive chain downloads (I’ve mostly used Flashget before but it does not respond well to AutoHotkey scripts to the extent I need).
I use AutoHotkey for all my scripting needs, and I now run curl from within my AutoHotkey script with all my preconfigured user-agent, referer, number of retries, URLs, etc., and I'm very impressed. Not a single error with curl so far! This program is great.
Naturally, I was interested if other tools like it existed. And I discovered wget as well.
I don't know if you might be interested, but these are the command lines I use from my AutoHotkey scripts for curl and wget. Both scripts are identical except for this one line. %userAgent%, %webpage% (the referer) and %url% are defined elsewhere in the script.
curl -C - -v -q -O --retry 10 --retry-delay 2 -A "%userAgent%" -e %webpage% %url%
wget -c -d -U "%userAgent%" --referer="%webpage%" %url%
Always starting a download with the resume option (-C - with curl and -c with wget) works fine with both curl and wget. It was easier for me to use the same command line both for starting a new download and for resuming an incomplete/partial one from my AutoHotkey script, and I've found that both programs handle that uniform command line for both situations perfectly well.
Although curl requires the -O and --retry options explicitly, unlike wget which behaves that way by default, this is NOT a problem for me (unlike some people who have mailed you about their wget bias for just this reason) as long as the features are supported at all.
So, basically, both curl and wget serve my personal automated chain downloading process. But the main reason I PREFER curl is that I need all three of these real-time updates: Total Time, Time Spent and Time Left. These three real-time statistics are very useful to me when I'm actually watching my downloads.
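For reference, this is roughly what the header of curl's default progress meter looks like (column spacing approximate); the Time Total, Time Spent and Time Left columns are the three statistics meant here:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed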
By comparison, wget shows only Time Left; it does not show Total Time or Time Spent, which every download tool should.
But the most important thing is the reliability of downloads, especially over slow networks. In this respect, the statement “Wget has been designed for robustness over slow network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved” really helps the end user's peace of mind. And although wget uses HTTP/1.0, it does support the significant features introduced with HTTP/1.1, such as the Range headers necessary for resuming partial/interrupted downloads, which it does perfectly.
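To illustrate, resuming an interrupted download essentially comes down to the client asking for just the missing tail with a Range header, conceptually something like this (the file name, host and byte offset are made-up placeholders):

GET /file.zip HTTP/1.1
Host: example.com
Range: bytes=1048576-

A server that honors the range replies with 206 Partial Content and sends only the remaining bytes, which is what both curl's -C - and wget's -c rely on.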
By comparison, I read that a “data corruption issue with re-connected transfers” was one of the bugs fixed in the latest version, curl 7.19.7. While I am extremely glad that this bug is now fixed, I cannot help being surprised that it remained until this version even though curl dates back to 1997!
I must add that in my limited experience of just one week, curl has also resumed each and every partial/interrupted download over my slow network perfectly well. I'm sure extensive tests have been made for such data reliability, but I want you to state on the curl front page, as well as at the top of every kind of curl documentation (the help and manual pages, for example), that curl can be relied on for “robustness over slow networks and re-connected transfers” just as much as any other download tool in the world.
I would also like to point out that it would be nice if curl showed the program name and the percentage downloaded in the title bar like wget does. That helps when I work on my computer with the download window minimized to the Windows taskbar; just a cursory glance shows the download's progress.
I don't know whether this is the proper place for such a lengthy comment, but I figured I'd face less competition for your attention here. Please forgive me if I made the wrong judgement.
This may be trivial, but why have the double dashes before the “retry” and “retry-delay” options been converted to a single dash after posting?
Regarding the bug you cite: all software has bugs and so does curl. I don't think you should criticize us for having had that particular bug unless you've researched the specific problem, its reasons or its solutions. It simply NEVER was a problem for any user until this one discovered and reported it. How would we fix bugs we don't know about and have never experienced?
The double-dash question just shows you're not a regular user of command line tools in the *nix world. They all use double dashes for the long option names; single dashes are for the single-letter options.
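For example, these two command lines mean exactly the same thing (the URL is just a placeholder):

curl -A "MyAgent/1.0" -e http://example.com/ -O http://example.com/file.zip
curl --user-agent "MyAgent/1.0" --referer http://example.com/ --remote-name http://example.com/file.zip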
And please, if you really want your feedback to reach the curl project and not just me personally, you should subscribe to the appropriate curl mailing list, post your feedback there, and be prepared for a discussion about it with other users and hackers.
Thanks nonetheless!