- fscons.org went live
- Dan F’s call for internationalization
- The curl vs wget document and the work on that
- The work-in-progress ABI document on what we “guarantee” in libcurl regarding binary interface compatibility between releases
- Preparing for the next release: feature freeze on October 14, with a likely release date for 7.17.1 somewhere around October 25-28
Since Henrik replied to my previous blog posts, I figured I’d better write a new one simply to state the fact:
…is now up and working. Go there and read all about it! And yeah, my curl talk is currently set for 15:00 on that Saturday.
My wife wants to keep some videos found on youtube, and I really can’t recommend just keeping bookmarks to a random web site like that. Not if you want the content to remain available a few years ahead, or even ten or twenty years. Downloading the files to keep them locally is the only sane way to make it somewhat more reliable.
You can download the files either with a browser or with a command line tool.
Browser Style
- Use Firefox
- Install Greasemonkey
- Within Greasemonkey there’s a concept of user scripts that customize pages, and we want a particular customization for youtube pages, so we install the YouTube to me v2 script.
- Now each youtube page gets a red stripe at the top that lets you download the FLV.
Command Line Style
There exist several command line tools “out there” that do the job. I tried youtube-dl and it worked splendidly, with only the main HTTP URL provided on the command line.
Its main shortcoming is that it names the output FLV after the ‘v’ variable in the URL, so the downloads end up with names like “f_8wuVEYMZ8.flv”…
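A minimal sketch of what a session can look like (the video ID is just the example from above, the exact youtube-dl options may have changed between versions, and the target filename is of course one you make up yourself):

```sh
# Fetch the FLV; the tool names the file after the URL's 'v' variable
youtube-dl 'http://www.youtube.com/watch?v=f_8wuVEYMZ8'

# Rename it to something a human recognizes in ten years
# ('my-video.flv' is a made-up example name)
mv f_8wuVEYMZ8.flv my-video.flv
```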
Play the local FLV movies
For this, I can only recommend the lovely VLC media player, available on all modern platforms.
Ok, since people truly and actually often ask me what the differences are between curl and Wget, it might be suitable to throw in my two cents here and state the main differences as I see them. Please consider my bias towards curl since, after all, curl is my baby – but I have contributed code to Wget as well.
- Library. curl features and is powered by libcurl, a cross-platform library with a stable API that can be used by each and everyone. This difference is major, since it creates a completely different attitude towards how to do things internally. It is also somewhat harder to make a library than a “mere” command line tool.
- Pipes. curl works more in the traditional unix style: it sends more stuff to stdout and reads more from stdin, in an “everything is a pipe” manner.
- Return codes. curl returns a range of defined and documented return codes for various (error) situations (see the sketch after this list).
- Single shot. curl is basically made to do single-shot transfers of data. It transfers just the URLs that the user specifies, and does not contain any recursive downloading logic or any sort of HTML parser.
- More protocols. curl supports FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS and FILE at the time of this writing. Wget supports HTTP, HTTPS and FTP.
- More portable. Ironically, curl builds and runs on lots more platforms than wget, in spite of wget’s attempts to keep things conservative. For example: VMS, OS/400, TPF and other more “exotic” platforms that aren’t straightforward unix clones.
- More SSL libraries and SSL support. curl can be built with one out of four different SSL/TLS libraries, and it offers more control and wider support for protocol details.
- More HTTP auth. curl (or rather libcurl) supports more HTTP authentication methods, especially when working over HTTP proxies.
- Wget is command line only. There’s no lib or anything. Personally, I’ve always disliked that the project doesn’t provide a man page, as they stand on the GNU side of this and consider “info” pages to be the superior way to document things like this. I strongly disagree.
- Recursive! Wget’s major strength compared to curl is its ability to download recursively, or even just download everything that is referred to from a remote resource, be it an HTML page or an FTP directory listing.
- Older. Wget traces back to 1995, while curl can be traced back no further than 1997.
- Less developer activity. While this can be debated, I consider three metrics here: mailing list activity, source code commit frequency and release frequency. Anyone following these two projects can see that the curl project has a much higher pace in all three areas, and it has been so for several years.
- HTTP 1.0. Wget still does its HTTP operations using HTTP 1.0, and while that still works remarkably well and is hardly ever troublesome for end users, it is still a fact. curl has done HTTP 1.1 since March 2001 (while still offering optional 1.0 requests).
- GPL. Wget is 100% GPL v2, which I believe will go v3 really soon, when they ship their next release. curl is MIT licensed.
- GNU. Wget is part of the GNU project and all copyrights are assigned to them, etc. The curl project is entirely stand-alone and independent, with no parent organization at all.
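To make a few of these points concrete, here is a small shell sketch. The URLs are just placeholders, and the exit code number is taken from curl’s documented return codes; consider it an illustration rather than a recipe:

```sh
# Pipes: curl writes to stdout by default, so it drops straight into a pipeline
curl http://example.com/archive.tar.gz | tar xzf -

# Return codes: curl exits with a defined, documented code per failure class
curl --fail -s http://example.com/no-such-page
echo $?   # 22 = CURLE_HTTP_RETURNED_ERROR (server returned >= 400 with --fail)

# Single shot vs recursive: curl fetches exactly the URLs it is given,
# while wget can follow links and download recursively
curl -O http://example.com/index.html
wget --recursive --level=2 http://example.com/
```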
This turned out to be a long post and it might in fact be useful to save for the future, so I’m also posting this as a more permanent document on my site at this URL: http://daniel.haxx.se/docs/curl-vs-wget.html. Any future updates will be done there. Do let me know if you see further evident differences or if you disagree with me on details here!
Right now there’s darkness outside my window. It is the middle of the night, and this is my prime hacking time, when the rest of the family is sound asleep.
The first three days of nursing Rex full time have been gentle, as he’s been his usual happy self and we’ve had some great weather allowing walks outdoors and visits to the nearby playgrounds. Also, Rex’s two naps per day (totaling around 3-4 hours) do allow for some personal time, so I manage to read my mail and even make the occasional commit during the days.
Things will get rougher when the days grow darker, colder and wetter, or when Rex gets crankier. But I’m optimistic.
Keeping up with IRC like I can when sitting in front of my computer all day at work isn’t really possible, and not commuting will keep me from the podcasts I used to listen to, but those are no biggies to overcome. I quite like not having to go anywhere in particular in the morning, and thus not having to “travel” home again in the late afternoon.
Instead, I’ve fixed bugs and merged several patches into curl and c-ares, and I’ve even been able to publish blog posts at a decent pace.
Microsoft hasn’t given up yet, it seems, as they announced their updated Zunes yesterday. They’re available as 4 and 8 GB flash versions and an 80 GB hard drive version; these ones are claimed to play more movie formats (like H.264 and MPEG-4) and they actually seem capable of using the wifi for things like syncing music.
The Zune music is also said to go DRM-free… All in all, I’d say they seem to be making a real effort to be a serious iPod alternative.
Anyway, there hasn’t of course been any serious dissection of these new Zunes yet, but given how their earlier models were made it seems unlikely that they will attract any larger crowds of eager hackers. They also seem to have applied a fair amount of cryptography, another Apple-like approach, so it is hard to put a replacement firmware on them.
The guys in the Zune Linux project really have no clue about what hacking these things requires, and their early chatter about deciding what logo to use and what “distro” to base their work on has just been a hilarious joke. I don’t expect this new set of models to change the situation in any significant way.
I’m not aware of any known skilled (Rockbox) hacker having a go at the Zune. The old Zune models are however quite similar (but not identical) hardware-wise to the Toshiba Gigabeat S models, for which there is a Rockbox port in the works (as I’ve mentioned before).
We’re slowly building a team and effort in the Rockbox project to make a port to the Cowon iAudio 7 player.
It’s a 60 gram 4/8/16 GB flash player with a 1.3″ 160×128 TFT LCD, FM tuner, Telechips TCC771 MCU and a bunch of chips familiar to us from other existing Rockbox ports.
TMM already bricked his first player…
There’s a bunch of eager hackers hanging out in the Rockbox forums, working on getting a Rockbox port for the Dell DJ going.
This player has a monochrome 160×104 LCD, features the dreaded TMS320-series MCU and comes with up to a 20 GB hard drive.
MrH mailed me a document describing his latest research on the PP5024 memory controller, and I figure we have reasons to believe that the other chips in the PP family might be similar. He did the work by running test programs and disassembling the Sansa firmware.
Of course, I keep the collected e200 details from MrH on my Sansa e200 page.
The other day, while browsing the endless stream of pointless articles about iPhone this and iPhone that, I stumbled over a slashdot article that mentioned the US Magnuson-Moss Warranty Act, which basically says that a company cannot void a warranty just because the user has tampered with its software, if the company cannot prove that the alternative software is to blame for the failure.
Of course I’m not a lawyer, nor even in the US, but it certainly seems to be something that should apply to quite a few Rockbox users who have feared returning broken units to manufacturers with the Rockbox installation left intact. (Both Archos and iriver are known to have refused to service such players – but I guess neither of those cases actually involved US customers.)
It does however require that there is an existing written warranty in the first place.
And then I figure that the struggle for a single human being to fight one of these companies, claiming that Rockbox isn’t to blame, could be more than just a little intimidating, and it probably just won’t happen…