Tag Archives: Linux

Coming: a curl distros meeting

The curl project is arranging a two-hour video conference meeting on March 21, 2024, with the aim of getting people from curl together with people packaging curl for distributions: Linux distros and others. If you distribute curl to your users, consider yourself invited!

The main purpose is to improve curl and how curl is shipped to end users. The idea is to make it a multi-party conversation about how we can improve collaboration, information exchange, testing, experiences etc, so that we in the curl project can help the distros ship curl better, faster and more conveniently. To the benefit of everyone who uses curl and libcurl as packaged by distributions.

Of course, getting people who work on related matters together and talking could also help us get to know each other a little better, to attach some faces and voices to email addresses and thereby make future cooperation smoother.

The meeting is open to anyone interested in joining and participating.

The event starts at 16:00 UTC and might last up to two hours.

All details and specifics for this event are on this wiki page:

https://github.com/curl/curl/wiki/curl-distro-discussion-2024

Welcome!

Fedora and curl-minimal

(This blog post has been updated a few times after the initial publication.)

In the Fedora project there is/was a proposal to introduce a curl-minimal package (and its companion libcurl-minimal) as the default, as a way to provide default packages with a smaller security risk area. The full curl packages would then be offered next to the minimal ones and require users to opt in. (Related article on lwn.net)

The proposal is to make curl-minimal the default for “non-containerized installations of Fedora”. The curl-minimal packages have existed since 2017. Kamil Dudka had a talk about them at curl-up 2018.

curl-minimal would disable lesser-used protocols and features. The discussion around exactly which parts it should disable is ongoing. The proposal to make it the default was at least initially shut down by the Fedora Steering Committee on March 8, 2022, but I get the sense the curl-minimal idea has not died yet.

Balance

The balance is really tricky, yet it seems to be the key to whether this is going to be a worthwhile effort or not.

Disabling too many things in the name of security will make many more users install the full package, and then there is no security gain.

Enabling too many things in the minimal version makes it less of a security gain to begin with.

Security

The harsh truth about past security problems in curl and libcurl is that most of them were found in components and parts that this minimal package would still include.

The question is really how much a minimal package will actually reduce users' exposure to risk, and not just cause endless amounts of friction going forward.

Not to mention that since Fedora aims to provide the full package as well, they will not avoid the risk of security problems even in the parts that are disabled in the minimal version. They can only reduce the impact of such flaws.

Features

It is really hard for packagers to know which curl features are used and which are not. There simply is no way to find out, besides shipping a version and listening to the screams of users in pain when things break. It also forces them into line-drawing decisions such as “only N users seem to use feature Z, so let's keep that in the full package only”, and figuring out the N number is a fuzzy estimate at best.

Some curl features are generally assumed to be there by tools and environments. An example is how a lot of tools and services, such as web browsers, these days offer copy-as-curl functions. They put a generated curl command line in the clipboard so that users can paste that command in a shell prompt to reproduce an operation with curl.

If those generated command lines stop working because the newly installed curl package doesn't have feature Z enabled while the generated command lines use it, that's going to make users unhappy.

The worst part for us in the curl project is probably that this is ultimately going to lead to an increased number of bug reports, because people will not understand why or how things go wrong.
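For completeness: an application that wants to defend itself against a feature-stripped libcurl can at least detect what is available at runtime, using libcurl's curl_version_info() API. A minimal sketch (the FTP and zlib checks are just examples I picked for illustration):

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

int main(void)
{
  curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
  const char *const *p;

  /* 'protocols' is a NULL-terminated list of enabled protocol names */
  for(p = info->protocols; *p; p++)
    if(!strcmp(*p, "ftp"))
      printf("this libcurl supports FTP\n");

  /* 'features' is a bitmask; CURL_VERSION_LIBZ means zlib is available */
  if(info->features & CURL_VERSION_LIBZ)
    printf("this libcurl supports gzip/deflate\n");
  return 0;
}

Of course, that only helps applications that actually check; a pasted copy-as-curl command line gets no such opportunity.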

Nobody asked us

Neither the curl project nor I personally have been asked or prompted for our views or feedback on this. It seems the Fedora people have not even considered the few and uncertain numbers on curl usage that exist, namely the results from the annual curl user surveys.

The 2021 analysis is here.

Update: I have been informed they are using that data and results as input. I was wrong above.

Loadable modules are not the fix

In the lwn.net comments on this topic, several people brought up that the curl project could “fix this” by making the support for different protocols into separate loadable modules, as people could then choose to only install the modules for the particular protocols they want.

That wouldn't solve the issue at all. It would instead just push users into installing several different protocol modules instead of choosing between minimal and full. It would still be the same “this application suddenly broke because it needs YYY from libcurl”. Plus, the discussion around the curl-minimal package goes into more details and features than just protocols, and we can't make every single feature a loadable module.

I have no intention of working on loadable modules for libcurl, for anything. It is a lot of work for no obvious benefit, it would introduce lots of new error and problem surfaces for users, and it would not be possible to support on all platforms, so it would also have to be provided conditionally.

Will curl-minimal happen?

I don’t know.

isalnum() is not my friend

The other day we noticed some curl test case failures that only happened on macOS and not on Linux. Curious!

The failures were detected in our unit test 1307, when testing a particular internal pattern matching function (Curl_fnmatch). Both targets run almost identical code but somehow they ended up with different results! Test cases acting differently on different platforms isn’t an extremely rare situation, but in this case it is just a pattern matching function and there’s really nothing timing dependent or anything that I thought could explain different behaviors. It piqued my interest, so I dug in.

The isalnum() return value

Eventually I figured out that the libc function isalnum(), when given the 8-bit input value hexadecimal c3 (decimal 195), would return true on the macOS machine and false on the box running Linux with glibc!

int value = isalnum(0xc3); /* 0xc3 is the lead byte of many two-byte UTF-8 sequences */

Setting LANG=C before running the test on macOS made its isalnum() return false. The input became c3 because the test program has a UTF-8 encoded character in it and the function works on bytes, not “characters”.

Or in the words of the opengroup.org documentation:

The isalnum() function shall test whether c is a character of class alpha or digit in the program’s current locale.

It’s all documented – of course. It was just me not really considering the impact of this.
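To see the difference for yourself, here is a small standalone demo. (The exact locale name passed to setlocale() is an assumption; it varies between systems and may not be installed on yours.)

#include <stdio.h>
#include <ctype.h>
#include <locale.h>

int main(void)
{
  /* in the plain "C" locale, the byte 0xc3 is not alphanumeric */
  setlocale(LC_CTYPE, "C");
  printf("C locale: %d\n", isalnum(0xc3));

  /* in an ISO 8859-1 locale (if available), 0xc3 is the letter 'Ã'
     and isalnum() returns non-zero */
  if(setlocale(LC_CTYPE, "en_US.ISO8859-1"))
    printf("8859-1 locale: %d\n", isalnum(0xc3));
  return 0;
}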

Avoiding this

I don't like different behaviors on different platforms given the same input. I don't like having string functions in curl act differently depending on locale, mostly because curl and libcurl can very well be used with many different locales, and I prefer having a stable, fixed behavior that we can document and stand by. Also, the libcurl functionality has never been documented to vary due to locale, so it would be a surprise (a bug!) to users anyway.

We've now introduced private versions of isalnum() and the rest of the ctype family of functions for curl. Hopefully this will make the tests more stable. And make our functions behave the same way independently of locale.
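The replacement functions simply hardcode the ASCII rules. A minimal sketch of the idea (the function name here is illustrative, not curl's actual symbol):

static int ascii_isalnum(unsigned char c)
{
  /* locale-independent: only the ASCII letters and digits match */
  return (c >= '0' && c <= '9') ||
         (c >= 'a' && c <= 'z') ||
         (c >= 'A' && c <= 'Z');
}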

See also: strcasecmp in Turkish.

New screen and new fuses

I got myself a new 27″ 4K screen for my work setup, a Dell P2715Q, and replaced one of my old trusty twenty-four inch friends with it.

I now work with the Thinkpad 13 on the left as my video conference machine (it does nothing else and it runs Windows!), the two middle screens are the 24″ and the new 27″ and they are connected to my primary dev machine, while the rightmost thing is my laptop for when I need to move.

Did everything run smoothly? Heck no.

When I first inserted the 4K screen without modifying anything else in the setup, it was immediately obvious that I really needed to upgrade my graphics card: it didn't have enough muscle to drive the screen at 4K, so the screen would instead upscale a 1920×1200 image in a slightly blurry fashion. I couldn't have that!

New graphics card

So when I was out and about later that day I more or less accidentally passed a Webhallen store, and I got myself a new card. I wanted to play it easy so I stayed with an AMD GPU and went with an ASUS Dual-RX460-O2G. The key feature I wanted was the ability to drive one 4K screen and one at 1920×1200, and then I unfortunately had to give up on the models with only passive cooling and instead pick what sounds like a gaming card. (I hate shopping for graphics cards.) As I was about to do surgery on the machine anyway, I checked and noticed that I could add more memory to the motherboard, so I bought 16 more GB for a total of 32 GB.

Blow some fuses

Later that night, when the house was quiet and dark, I shut down my machine, inserted the new card and the new memory DIMMs, and powered it back up again.

At least that was the plan. When I fired it back on, it just went click, the lamps around me all went dark and the machine didn't light up at all. The fuse had blown! Man, wasn't that totally unexpected?

I did some further research into what exactly caused the fuse to blow, and blew a few more in the process, as the fuse still blew even after I had put the old card back and removed the new memory DIMMs. Puzzled and slightly disappointed, I went to bed when I had no more spare fuses.

I hate leaving the machine dead in parts on the floor with an uncertain future, but what could I do?

A new PSU

Tuesday morning I went to get myself a PSU replacement (Plexgear PS-600 Bronze), and once I had that installed no more fuses blew and I could start the machine again!

I put the new memory back in and I could get into the BIOS config with both screens working on the new card (and it detected 32GB of RAM just fine). But as soon as I tried to boot Linux, the boot process halted after just 3-4 seconds and froze. Hm. I tested a few different kernels, recovery mode, etc, but they all acted the same. Weird!

BIOS update

A little googling on the messages that appeared just before it froze gave me the idea to check whether there was an update available for my BIOS. After all, I had never upgraded it and it had been a while since I got my motherboard (more than 4 years).

I found a much newer BIOS image on the ASUS support site, put it on a FAT-formatted USB drive and upgraded.

Now it booted. Of course the error messages I had googled for were still present, and I suppose they were there before too; I just hadn't paid any attention to them when everything was working dandy!

Displayport vs HDMI

I had the wrong idea that I should use DisplayPort to get 4K working, but it just wouldn't work. DP + DVI just showed up on one screen, and I even went as far as trying to download an Ubuntu Linux driver package for the Radeon RX460 that I found, but of course it failed miserably due to my Debian Unstable running a totally different kernel and whatnot.

In a slightly desperate move (I had now wasted quite a few hours on this and my machine still wasn't working), I put back the old graphics card (with DVI + HDMI), only to note that it no longer worked like it did before: the DVI output didn't find the correct resolution anymore. Presumably the BIOS upgrade or something shook the balance?

Back on the new card, I booted with DVI + HDMI, leaving DP out entirely, and now suddenly both screens worked!

HiDPI + LoDPI

Once I had logged in, I could configure the 4K screen to show at its full 3840×2160 resolution glory. I was back.

Now I only had to start fiddling with getting the two screens to somehow co-exist next to each other, which is a challenge in its own right. The large difference in DPI makes it hard to have one config that works across both screens. For example, I usually have terminals on both screens: which font size should I use? And I put browser windows on both screens too...

So far I've settled for increasing the font DPI in KDE and I use two different terminal profiles depending on which screen I put the terminal on. Seems to work okayish. Some text on the 4K screen is still terribly small, so I guess it is good that I still have good eyesight!

24 + 27

So is it comfortable to combine a 24″ with a 27″? Sure, the size difference really isn't that notable. The 27″ one is really just a few centimeters taller and the difference in width isn't an inconvenience. The photo below shows how similar they look, size-wise:

Turn many pictures into a movie

Challenge: you have 90 pictures of various sizes, taken in different formats and shapes, using all sorts of strange file names. Make a movie out of all of them, with the images shown in their correct aspect ratio. And add music. Use only command line tools on Linux.

Solution: this is one solution; you can most likely solve this in 22 other ways as well. And by posting it here, I can find it myself if I ever want to do the same stunt again...

#!/bin/sh

j=0
# convert options
pic="-resize 1920x1080 -background black -gravity center -extent 1920x1080"

# loop over the images
ls *.jpg | sort -R | while IFS= read -r i; do
 echo "Convert $i"
 convert $pic "$i" "pic-$j.jpg"
 j=`expr $j + 1`
done

# now generate the movie
mp3="file.mp3"
echo "make movie"
ffmpeg -framerate 3 -i pic-%d.jpg -i $mp3 -acodec copy -c:v libx264 -r 30 -pix_fmt yuv420p -s 1920x1080 -shortest out.mp4

Explained

This is a shell script.

The ‘pic’ variable holds command line options for the ImageMagick ‘convert‘ tool. It resizes each picture to fit within 1920×1080 while maintaining the aspect ratio, and if the resized picture doesn't fill the full frame, it is centered on a black background.

The loop goes through all files matching *.jpg, randomizes the order with ‘sort -R’ and then runs ‘convert’ on them one by one, naming the output files pic-[number].jpg where the number is increased by one for each image. Reading the file names line by line also keeps names with spaces in them intact.

Once all images have the same, correct size, ‘ffmpeg‘ is invoked. It is told to produce a movie showing 3 photos per second, how to find all the images, to include an mp3 file in the output and to stop encoding when the shortest stream ends. This assumes the playing time of the mp3 file is longer than the total time the images are shown, so that the movie stops when we run out of images to show.

Result

The ‘out.mp4’ file, uploaded to YouTube, could then look like this:

(music by Bensound.com)

Fixing the Func KB-460 ‘-key

I use a Func KB-460 keyboard with Nordic layout. That basically means it is a qwerty design with the Nordic keys for “åäö” on the right side, as shown in the picture above. (Yeah yeah, Swedish has those letters fairly prominent in the language, don't mock me now.)

The most annoying part of this keyboard has been that key repeat on the apostrophe key has been sort of broken. If you pressed it and then another key, it would immediately generate another (or more than one) apostrophe. I've sort of learned to work around it with some muscle memory and by treating the key with care, but it hasn't been ideal.

Someone told me this problem apparently only happens on Linux (I've never used the keyboard on anything else), and what do you know? Here's how to fix it on a recent Debian machine that runs systemd; your mileage will vary if you have something else:

1. Edit the file “/lib/udev/hwdb.d/60-keyboard.hwdb”. It contains keyboard mappings of scan codes to key codes for various keyboards. We will add a special line for a single scan code, for this particular keyboard model only. The line includes the USB vendor and product IDs in uppercase; you can verify that they are correct with lsusb -v against your own keyboard.

So, add something like this at the end of the file:

# func KB-460
keyboard:usb:v195Dp2030*
KEYBOARD_KEY_70031=reserved

2. Now update the database:

$ udevadm hwdb --update

3. … and finally reload the tweaks:

$ udevadm trigger

4. Now you should have a better working key and life has improved!

With a slightly older Debian without systemd, here are the instructions I got. I have not tested them myself but include them here for the world:

1. Find the relevant input device with “cat /proc/bus/input/devices”

2. Make a very simple keymap: a file with only a single line, like this:

$ cat /lib/udev/keymaps/func
0x70031 reserved

3. Map the key with ‘keymap’:

$ sudo /lib/udev/keymap -i /dev/input/eventX /lib/udev/keymaps/func

where X is the event number you figured out in step 1.

The related kernel issue.

Changing networks with Linux

A rather long time ago I blogged about my work to better deal with changing networks while Firefox is running. That change was then pushed for Android, and I subsequently pushed the same functionality for Firefox on Mac.

Today I’ve landed yet another change, which detects network changes on Firefox OS and Linux.

As Firefox OS uses a Linux kernel, I ended up doing the same fix for both Firefox OS devices and Firefox on Linux desktop: I open a socket in the AF_NETLINK family and listen to the stream of messages the kernel sends when there are network updates. This way we're told when the routing tables update or when we get a new IP address, etc. I consider this way better than the NotifyIpInterfaceChange() API that Windows provides, as this allows us to filter for what we're interested in. The Windows API makes that rather complicated, and in fact a lot of the times we get the notification on Windows it isn't clear to me why!
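For illustration, here is a minimal standalone sketch of the technique (not the actual Firefox code): open a NETLINK_ROUTE socket and let the nl_groups bitmask do the filtering, so we only wake up for link, address and route changes.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
  char buf[4096];
  struct sockaddr_nl addr;
  int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
  if(fd < 0)
    return 1;

  memset(&addr, 0, sizeof(addr));
  addr.nl_family = AF_NETLINK;
  /* subscribe only to the notification groups we care about */
  addr.nl_groups = RTMGRP_LINK | RTMGRP_IPV4_IFADDR | RTMGRP_IPV4_ROUTE;
  if(bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
    return 1;

  for(;;) {
    /* each successful recv() delivers one or more netlink messages
       describing a link, address or route change */
    ssize_t len = recv(fd, buf, sizeof(buf), 0);
    if(len <= 0)
      break;
    printf("network change event, %zd bytes\n", len);
  }
  close(fd);
  return 0;
}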

The Mac API way is what I would consider even more obscure, but then I’m not at all used to their way of doing things and how you add things to the event handlers etc.

The journey to the landing of this particular patch was once again long and bumpy and full of sweat, in the tradition that seems to be my destiny. This time I ran into problems with the Firefox OS emulator, which seems to have some interesting bugs that caused my code to not work properly, and as a result our automated tests failed: occasionally data sent over a pipe or socketpair doesn't end up in the receiving end. In my case this means that my signal to the child thread to die would sometimes not be noticed, and thus the thread wouldn't exit and die as intended.

I ended up implementing a work-around that makes it work even if the emulator eats the data by also checking a shared should-I-shutdown-now flag every once in a while. For more specific details on that, see the bug.

Rpi night in GBG

So I flew down to Gothenburg, Sweden and participated in yet another embedded Linux hacking event, also co-organized by me, that took place yesterday (November 20th 2013).

The event was hosted by Pelagicore in their nice downtown facilities and it was fully signed up with some 28 attendees.

I held a talk about the current situation of real-time and low latency in the Linux kernel, a variation of a talk I've done before, and even though I have modified it since, you can still get the gist of it in this old slideshare upload. As you can see in the photo, I can do hand-wavy gestures while talking! When I finally shut up, we were fed tasty sandwiches and there was some time to socialize and actually hack on some stuff.

Embedded Linux hackers in GBG

I then continued my tradition and held a contest. This time I raised the complexity level a bit, as I wanted something with more challenges that feels less like a quiz and more like a game or a maze. See my separate post for full details and for your chance to test your skills.

This event was also nicely synced in time with the recent introduction of the foss-gbg mailing list, an effort to gather people in the area who have an interest in Free and Open Source Software, much in the same way foss-sthlm was created a couple of years ago.

Pelagicore also handed out 9 Raspberry Pis at the event to lucky attendees.

Embedded and Raspberry Pis in GBG

Kjell Ericson's blinking leds

On November 20, we’ll gather a bunch of interested people in the same room and talk embedded Linux, open source and related matters. I’ll do a talk about real-time in Linux and I’ll run a contest in the same spirit as I’ve done before several times.

Sign-up here!

Pelagicore is hosting and sponsoring everything. I’ll mostly just show up and do what I always do: talk a lot.

So if you live in the area and are into open source and possibly embedded, do show up and I can promise you a good time.

(The photo is actually taken during one of our previous embedded hacking events.)

Embedded hacking contest #2

I created another contest for the embedded hacking event we just pulled off again, organized with foss-sthlm and Enea. Remember that I made one previously at our former hacking day?

The lesson from that time was that the puzzle ingredient was slightly too difficult, so people had to work a bit too long. It made many people give up, and the ones who didn't had to spend a significant amount of time solving it.

This time, I decided to use the same basic principle: ask N questions that all provide hints for the (N+1)th question, so that the first one to give me the answer to that final question is the winner. It makes it very easy for me to judge and it is a rather neat competition-style game. I decided 10 questions should be enough.

To reduce some of the complexity from last time, I decided to provide the individual clues in the correct chronological order, but added another twist instead: they aren't in plain text! Since they're chronological, the participants can go back and quite “easily” try other alternatives if strange words appear in the output. I made sure that every wrong choice still decodes into fine English words, so that if you pick the wrong answer it might still sound or look like English for a while...

I was very happy to see over 30 persons in the room that decided to accept the challenge. I suspect the prize did its part in attracting people to give it a go.

The rules, in slightly longer terms, as I put them (click the image to see a higher resolution version):

the rules

And I clarified how the questions work:

The questions

I then started my timer, and I showed all the questions on the projector to everyone, giving them around 40 seconds per question. It thus took almost seven minutes to go through them, and then I left a final slide up showing all the questions:

The 10 questions

I leave the answers out of this post to allow readers to give the contest a go first. See the full answer and explanation.

A room full of competitive hackers