fscons.org went live
Dan F’s call for internationalization
The curl vs wget document and work on that
The work-in-progress ABI document about what we “guarantee” in libcurl regarding binary interface compatibility between releases.
Preparing for next release, feature freeze on October 14, likely release date for 7.17.1 somewhere around October 25-28
Recently I’ve read two books (in Swedish) about China and the Chinese, and I’ll offer some quick reviews on them here.
Inga problem! : om livet i dagens Kina
(ISBN: 9789127356375, Author: Lilliehöök, Catarina)
This is a 320 page story about the author’s trip to China. She studies Chinese, lives in China, travels around and eventually gets a job there. We get to follow the cultural clashes when a blond Swedish woman faces the (traditional) Chinese. It certainly is interesting and educational, but the book gets a bit repetitive towards the end, as the main points have already been made by then. It is however still a light and fast read.
Vilda svanar – Tre döttrar av Kina
(ISBN: 9789151845678, Author: Jung Chang, English title: Wild Swans)
This international best-seller is a 500+ page book about three generations of Chinese women: the author’s grandmother, her mother and herself. It starts in the early 1900s and follows the major changes that the people of China went through over the years, all the way to modern times.
While a slightly harder read, I’d say this is much more interesting than the previous one, and it offers great insight into why many of the cultural differences mentioned in the first book exist in the first place. It shows a people tormented by their leaders in many different ways, a people that have learned the hard way to obey whatever they say and to stop thinking for themselves.
Since Henrik replied to my previous blog posts, I figured I better write a new one to simply state the fact:
…is now up and working. Go there and read all about it! And yeah, my curl talk is currently set for 15:00 on that Saturday.
My wife wants to keep some videos found on youtube, and I really can’t recommend just keeping bookmarks to a random web site like that. Not if you want the content to still be available a few years from now, or even ten or twenty years. Downloading the files to keep them locally is the only sane way to make it somewhat more reliable.
To download the files you can do it with a browser or with a command line tool:
- Use Firefox
- Install Greasemonkey
- Within Greasemonkey there’s a concept of user scripts that customize pages, and we want a certain customization for youtube pages. So we install the YouTube to me v2 script.
- Now, each youtube web page gets a red stripe on the top of the page that allows you to download the FLV.
Command Line Style
There are several command line tools “out there” that do the job. I tried youtube-dl and it did the job splendidly, with only the main HTTP URL provided on the command line.
The main missing feature is that it names the output FLV based on the ‘v’ variable in the URL, so the downloads end up being named things like “f_8wuVEYMZ8.flv”…
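A small workaround sketch for the naming issue (the video id and the target filename here are made up for illustration): simply rename the file once the download is done.

```shell
# youtube-dl names the download after the URL's 'v' value, roughly:
#   youtube-dl 'http://www.youtube.com/watch?v=f_8wuVEYMZ8'
touch f_8wuVEYMZ8.flv             # stand-in for the downloaded file
# give it a human-readable name instead:
mv f_8wuVEYMZ8.flv cute-video.flv
```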
Play the local FLV movies
For this, I can only recommend the lovely VLC media player, available on all modern platforms.
Ok, since people truly and actually often ask me about what the differences are between curl and Wget, it might be suitable to throw in my garbage here and state the main differences as I see them. Please consider my bias towards curl since after all, curl is my baby – but I have contributed code to Wget as well.
- Library. curl features and is powered by libcurl, a cross-platform library with a stable API that can be used by anyone. This difference is major since it creates a completely different attitude towards how to do things internally. It is also slightly harder to make a library than a “mere” command line tool.
- Pipes. curl works more in the traditional unix style: it sends more stuff to stdout and reads more from stdin, in an “everything is a pipe” manner.
- Return codes. curl returns a range of defined and documented return codes for various (error) situations.
- Single shot. curl is basically made to do single-shot transfers of data. It transfers just the URLs that the user specifies, and does not contain any recursive downloading logic or any sort of HTML parser.
- More protocols. curl supports FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS and FILE at the time of this writing. Wget supports HTTP, HTTPS and FTP.
- More portable. Ironically, curl builds and runs on many more platforms than wget, in spite of their attempts to keep things conservative. For example: VMS, OS/400, TPF and other more “exotic” platforms that aren’t straight-forward unix clones.
- More SSL libraries and SSL support. curl can be built with one out of four different SSL/TLS libraries, and it offers more control and wider support for protocol details.
- curl (or rather libcurl) supports more HTTP authentication methods, especially when you go through HTTP proxies.
- Wget is command line only. There’s no lib or anything. Personally I’ve always disliked that the project doesn’t provide a man page, as they stand on the GNU side of this and consider “info” pages to be the superior way to document things like this. I strongly disagree.
- Recursive! Wget’s major strong side compared to curl is its ability to download recursively, or even just download everything that is referred to from a remote resource, be it an HTML page or an FTP directory listing.
- Older. Wget has traces back to 1995, while curl can be traced back no further than 1997.
- Less developer activity. While this can be debated, I consider three metrics here: mailing list activity, source code commit frequency and release frequency. Anyone following these two projects can see that the curl project has a much higher pace in all these areas, and it has been so for several years.
- HTTP 1.0. Wget still does its HTTP operations using HTTP 1.0, and while that still works remarkably well and hardly ever troubles end-users, it is still a fact. curl has done HTTP 1.1 since March 2001 (while still offering optional 1.0 requests).
- GPL. Wget is 100% GPL v2, and I believe it will go v3 with their next release. curl is MIT licensed.
- GNU. Wget is part of the GNU project and all copyrights are assigned to it etc. The curl project is entirely stand-alone and independent, with no parent organization at all.
This turned out to be a long post and it might in fact be usable to save for the future, so I’m also posting this as a more permanent doc on my site on this URL: http://daniel.haxx.se/docs/curl-vs-wget.html. So possible updates will be done there. Do let me know if you have further evident differences or if you disagree with me on details here!
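To illustrate the pipe and return code points from the list above, a quick shell sketch (curl’s exit codes are documented in its man page; 6 means “couldn’t resolve host”, and the “.invalid” TLD is reserved so it should never resolve):

```shell
# curl writes the fetched data to stdout by default, so it pipes naturally:
curl -s http://example.com/ | wc -c

# curl returns defined exit codes on errors; a host that cannot resolve
# should make it exit with code 6 ("couldn't resolve host"):
curl -s http://nonexistent.invalid/
echo "curl exit code: $?"
```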
Right now there’s darkness outside my window. It is in the middle of the night and this is my prime hacking time, when the rest of the family are all sound asleep.
The first three days of nursing Rex full time have been gentle, as he’s been his usual happy self and we’ve had some great weather allowing walks outdoors and visits to the nearby playgrounds etc. Also, Rex’s two naps per day (totaling around 3-4 hours) do allow for some personal time, so I manage to read my mails and even do occasional commits during the days.
Things will get rougher when the days go darker, colder and wetter. Or when Rex is getting more cranky and similar. But I’m optimistic.
Keeping up with IRC like I can when I’m sitting in front of my computer all day at work isn’t really possible, and not commuting will prevent me from keeping up with the podcasts I used to listen to, but those are no biggies to overcome. I quite like not having to go anywhere particular in the morning and thus not having to “travel” home again in the late afternoon.
Instead, I’ve fixed bugs and worked in several patches to curl and c-ares and I’ve even been able to submit blog posts at a decent pace.
I’m quite impressed by what these Chinese cloners can produce.
As can be seen on this youtube video, there’s a nice typo on the boot-up screen (“tPhone”) and it does use the Windows startup sound(!), but it is an otherwise pretty decent-looking clone.
For someone like me who’s never seen a real iPhone, seeing them on shabby youtube videos certainly gives the impression that this fake is pretty similar to the original one, even functionality wise. Apparently this one even supports MMS and works with any SIM card…
Now, it’s less than two weeks till I’ll go to China… 🙂
I’m going to China for a week in October. I have a few things I feel I better figure out while I’m there, including:
- Exactly how evil the internet censorship and filtering is for ordinary tourists such as me. A laptop and some network tools should be enough to tell, as the hotel boasts free internet access…
- Time zone stuff. China has a single time zone for the entire country, which by itself is fascinating for such a huge area. And when looking at a time zone map, you can see that the country picked a time zone that is the equivalent of western Australia’s, which makes a lot of sense for the east coast of China. So, the western parts of China are terribly off. The question is: do the people ignore the time and follow the light, or do they obey the time and spend parts of their lives in more darkness than they otherwise would? If you go north/south at the eastern and western edges of China, you’ll end up in time zones that are five hours apart!
- Chinese is written with glyphs, some tens of thousands of different ones. In a Chinese dictionary, like a Chinese to English one, how do they sort the Chinese words? I mean, glyphs just have got to be very hard to sort in a logical manner?! Or how do they find the words?
Microsoft hasn’t given in yet it seems, as they announced their updated Zunes yesterday. They’re available as 4 or 8GB flash versions and an 80GB hdd version, and these ones are claimed to play more movie formats (like h.264 and MPEG-4) and they actually seem to be capable of using the wifi for things like syncing music etc.
The Zune music is also said to go DRM-free… All in all, I’d say they seem to be making a real effort to be a serious iPod alternative.
Anyway, there hasn’t of course been any serious dissection of these new Zunes yet, but given how their earlier models were made it seems unlikely that they will attract any larger crowds of eager hackers. They also seem to have applied a fair amount of cryptography, another Apple-like approach, so it is hard to put a replacement firmware on them.
The guys in the Zune Linux project really have no clue about what hacking these things requires, and their early chatter on deciding what logo to use and what “distro” to base their work on has just been a hilarious joke. I don’t expect this new set of models to change this situation in any significant way.
I’m not aware of any known skilled (Rockbox) hacker having a go at Zune. The old Zune models are however quite similar (but not identical) hardware wise to the Toshiba Gigabeat S models, for which there is a Rockbox port in the works (as I’ve mentioned before).
Yeah, you losers out there in the rest of the world: we no longer do analog terrestrial TV broadcasting in Sweden. On October 15 2007 the last parts of Sweden go DVB-T.
(Yes, some dinosaurs are said to still get analog TV over their cable networks but I’m sure they will be extinct soon enough..)
The upsides with DVB for a casual TV user such as myself are the built-in standard EPG and the fact that the time is broadcast, allowing “terminals” to sync and stay accurate easily. It would be even better if the Swedish broadcasters would a) send program data for more days ahead than they currently do (they only provide roughly 48 hours ahead) and b) fill in more details in the meta data about the programs – right now a film or a TV show can be described as “American movie” or “drama” with no further explanation.
BTW, a little side-note. In my household we switched to digital early this spring when the Stockholm area switched off analog, and not long after the switch I one day noticed how my EPG would show all the Swedish åäö letters in a funny way (and my box uses the name of the show for recorded files, so they ended up looking funny as well). It looked exactly like the text was suddenly UTF-8 encoded – but displayed assuming a more regular character set like iso8859-1 or similar. I filed a bug report to Teracom, and I don’t know if I had any impact at all, but in a day or two the bug was gone.
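For the record, that mojibake effect is easy to reproduce yourself: take the UTF-8 bytes for åäö and display them as if they were latin-1. A quick sketch using iconv (assuming a UTF-8 terminal; telling iconv that the UTF-8 bytes are latin-1 and re-encoding them makes each byte show up as its own character):

```shell
# 'åäö' below is UTF-8 encoded (bytes c3 a5 c3 a4 c3 b6); interpreting
# those bytes as latin-1 yields the classic two-characters-per-letter look:
printf 'åäö' | iconv -f LATIN1 -t UTF-8
# prints: Ã¥Ã¤Ã¶
```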