Wei-Hsin Lee of Google posted about their effort to create a dictionary-based compression scheme for HTTP, called SDCH (Shared Dictionary Compression over HTTP). I find the idea rather interesting, and it’ll be fun to see what the actual browser and server vendors will say about it.
The idea is basically to use “cookie rules” (domain, path, port number, max-age etc) to govern how a client obtains a dictionary and where it may be used, and then the server can deliver responses that are diffs computed against the dictionary it has previously delivered to that client. For repeated, similar content it should be able to achieve much better compression ratios than any existing HTTP compression in use.
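Just to illustrate the shared-dictionary principle (this is not SDCH’s actual wire format, and SDCH uses VCDIFF rather than deflate), here is a tiny Python sketch using zlib’s preset-dictionary support: the server compresses a response against a dictionary the client already holds, and the client decompresses it with that same dictionary.

    import zlib

    # The dictionary the server delivered to the client earlier, e.g. the
    # boilerplate HTML shared by many pages on the site.
    dictionary = b"<html><head><title>Example</title></head><body>old content</body></html>"

    # A later response that is mostly made up of bytes already in the dictionary.
    response = b"<html><head><title>Example</title></head><body>new content</body></html>"

    # Server side: compress the response against the shared dictionary.
    co = zlib.compressobj(zdict=dictionary)
    encoded = co.compress(response) + co.flush()

    # Client side: decompress with the same dictionary it received earlier.
    do = zlib.decompressobj(zdict=dictionary)
    assert do.decompress(encoded) == response

    print(len(response), "->", len(encoded), "bytes")

The encoded output ends up a fraction of the original size, since most of the response can be expressed as references into the dictionary rather than sent as literal bytes.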
I figure it should be seen as a relative of the “Delta encoding in HTTP” idea (RFC 3229), although the SDCH approach seems somewhat more generically applicable.
Since they seem to be using the VCDIFF format (RFC 3284) for the actual diffs in SDCH, the recent open-vcdiff announcement is of course interesting too.
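For those who haven’t looked at it, VCDIFF describes the target data as a sequence of ADD, COPY and RUN instructions relative to a source (the dictionary, in SDCH’s case). Here’s a toy Python decoder for that instruction model, skipping the compact binary encoding the real format uses:

    # Toy decoder for the VCDIFF instruction model (RFC 3284); the real format
    # is a compact binary encoding, this just shows the three instruction types.
    def apply_delta(source: bytes, instructions) -> bytes:
        target = bytearray()
        for op in instructions:
            if op[0] == "ADD":        # ("ADD", data): literal new bytes
                target += op[1]
            elif op[0] == "COPY":     # ("COPY", offset, length): reuse source bytes
                offset, length = op[1], op[2]
                target += source[offset:offset + length]
            elif op[0] == "RUN":      # ("RUN", byte, length): repeat a single byte
                target += bytes([op[1]]) * op[2]
        return bytes(target)

    dictionary = b"<html><body>Hello, world!</body></html>"
    delta = [("COPY", 0, 12), ("ADD", b"Hello, SDCH!"), ("COPY", 25, 14)]
    print(apply_delta(dictionary, delta))
    # -> b'<html><body>Hello, SDCH!</body></html>'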