Here’s an encouraging graph from our regular Coverity scans of the curl source code, showing that we’ve maintained a fairly low “defect density” over the last two years, staying well below the average density level. Click the image to view it slightly larger.
Defect density is simply the number of found problems per 1,000 lines of code. As a little (and probably unfair) comparison, right now when curl is flat on 0, Firefox is at 0.47, c-ares at 0.12 and libssh2 at 0.21.
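To make the metric concrete, here is a small sketch of the arithmetic (the figures below are invented for illustration and are not curl’s actual line counts):

```shell
#!/bin/sh
# Defect density = defects found per 1,000 lines of code.
# Example figures are made up for illustration only.
defects=12
lines=100000
awk "BEGIN { printf \"%.2f defects per 1000 lines\n\", $defects * 1000 / $lines }"
```

With 12 defects in 100,000 lines, this prints a density of 0.12, which happens to match the c-ares figure mentioned above.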
Coverity is still the leading static code analyzer for C code that I’m aware of. None of the flaws Coverity picked up in curl during the last two years were detected by clang-analyzer, for example.
The curl repository on github has now been forked 1,000 times. Or rather, there are 1,000 forks currently kept alive, as the counter decreases when people remove their forks again. curl has had its primary git repository on github since March 22, 2010. That’s a little more than two days between every newly created fork.
If you’re not used to the github model: a fork is typically made to get your own copy of someone’s source tree, so that you can make changes to it and publish your own version without having to get your changes merged into the original repository you forked from. But it is also the most common way to offer changes back to github-based projects: you send a request that a particular change in your version of the source tree should get merged into the mother project. A so-called Pull Request.
Trivia: The term “fork” when meaning “to divide in branches, go separate ways” has been used in the English language since the 14th century.
I’m only aware of one actual separate line of development that is a true fork of libcurl that I believe is still being maintained: libgnurl.
Another notch on the wall as we’ve reached the esteemed age of 18 years in the cURL project. 9 releases were shipped since our last birthday and we managed to fix no fewer than 457 bugs in that time.
On this single day in history…
20,000 people will be visiting the web site, transferring over 4GB of data.
1.3 bug fixes will get pushed to the git repository (out of the 3 commits made)
300 git clones are made of the curl source tree, by 100 unique users.
4000 curl source archives will be downloaded from the curl web site
8 mails get posted on the curl mailing lists (at least one of them will be posted by me).
I will spend roughly 2 hours on curl related work. Mostly answering mail, bug reports and debugging, but also maintaining infrastructure, poke on the web site and if lucky, actually spending a few minutes writing new code.
Every human in the connected world will use at least one service, tool or application that runs curl.
My phone just lit up. POWERMASTER 10 told me something. It said “POWERMASTR 10: KOM OK”.
Over the last few months, I’ve received almost 30 weird text messages from a “POWERMASTER 10”, originating from a Swedish phone number in a number range reserved for “devices”. Yep, I’m showing the actual number in the screenshot below because I think it doesn’t matter, and in the unlikely event that the owner of +467190005601245 sees this, he or she might want to change the alarm config.
Powermaster 10 is probably a house alarm control panel made by Visonic. It is also clearly localized and sends messages in Swedish.
As this habit has been going on for months already, one can only suspect that the user hasn’t found the SMS feedback to be a particularly valuable feature. It also makes me wonder what the messages it sends actually mean.
The upside of this story is that you seem to be a very happy person when you have one of these control panels, as this picture from their booklet shows. Alarm systems, control panels, text messages. Why wouldn’t you laugh?!
Edit: I contacted Telenor about this after my initial blog post but they simply refused to do anything, since I’m not the customer, and they just didn’t want to understand that I only wanted them to tell their customer that something was misconfigured. These messages kept coming to me at irregular intervals until July 2018.
Challenge: you have 90 pictures of various sizes, taken in different formats and shapes, using all sorts of strange file names. Make a movie out of all of them, with the images using the correct aspect ratio. And add music. Use only command line tools on Linux.
Solution: this is one solution; you can most likely solve this in 22 other ways as well. And by posting it here, I can find it myself if I ever want to do the same stunt again…
# convert options
pic="-resize 1920x1080 -background black -gravity center -extent 1920x1080"

# loop over the images, in random order
j=0
for i in `ls *jpg | sort -R`; do
  echo "Convert $i"
  convert $pic "$i" "pic-$j.jpg"
  j=`expr $j + 1`
done

# now generate the movie ($mp3 holds the name of the music file)
echo "make movie"
ffmpeg -framerate 3 -i pic-%d.jpg -i $mp3 -acodec copy -c:v libx264 -r 30 -pix_fmt yuv420p -s 1920x1080 -shortest out.mp4
This is a shell script.
The ‘pic’ variable holds command line options for the ImageMagick ‘convert’ tool. It resizes each picture to 1920×1080 while maintaining aspect ratio; if the resized picture comes out smaller, it is centered and padded with a black border.
The loop goes through all files matching *jpg, randomizes the order with ‘sort -R’ and then runs ‘convert’ on them one by one, naming the output files pic-[number].jpg where the number is increased by one for each image.
Once all images have the same, correct size, ‘ffmpeg’ is invoked. It is told to produce a movie with 3 photos per second, how to find all the images, to include an mp3 file in the output, and to stop encoding when one of the streams ends. This assumes the playing time of the mp3 file is longer than the total time the images are shown, so the movie stops when we run out of images to show.
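To sanity-check that assumption, here is a small sketch (not part of the original script) that estimates the slideshow length from the image count, using the same 3 photos per second rate:

```shell
#!/bin/sh
# At -framerate 3, N images play for N/3 seconds, so the music file
# should last at least this long for -shortest to cut on the images
# rather than on the audio.
count=90                     # e.g. the 90 pictures from the challenge
seconds=`expr $count / 3`
echo "$count images at 3 per second = $seconds seconds of video"
```

For the 90 pictures in the challenge, that gives a 30 second slideshow, so any normal-length song will comfortably outlast it.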
The ‘out.mp4’ file, uploaded to youtube, could then look like this:
In July 2015, 40-something HTTP implementers and experts of the world gathered in the city of Münster, Germany, to discuss nitty gritty details about the HTTP protocol during four intense days. Representatives for major browsers, other well used HTTP tools and the most popular HTTP servers were present. We discussed topics like how HTTP/2 had done so far, what we thought we should fix going forward and even some early blue sky talk about what people could potentially see being subjects to address in a future HTTP/3 protocol.
You can relive the 2015 version somewhat from my daily blog entries from then that include a bunch of details of what we discussed: day one, two, three and four.
The HTTP Workshop was much appreciated by the attendees and it is now about to be repeated. In the summer of 2016, the HTTP Workshop is again taking place in Europe, but this time as a three-day event slightly further up north: in the capital of Sweden and my home town: Stockholm. During 25-27 July 2016, we intend to again dig in deep.
If you feel this is something for you, then please head over to the workshop site and submit your proposal and show your willingness to attend. This year, I’m also joining the Program Committee and I’ve signed up for arranging some of the local stuff required for this to work out logistically.
The HTTP Workshop 2015 was one of my favorite events of last year. I’m now eagerly looking forward to this year’s version. It’ll be great to meet you here!