Category Archives: Technology

Really everything related to technology

Nvidia chipset audio now works

I’ve mentioned some of my audio problems on my Linux desktop before, and just the other day a friend suggested I should remove ‘esd’ (“apt-get remove esound”) as a means to fix a frequent annoyance of mine (to get the sound working I had to kill esd first, then reload some drivers, etc).

Recently my standard “trick” to bring the sound to life had started to fail, so I needed a new angle on this. And boy: after a reboot without esound installed, my on-board sound now works! And this without me doing any manual fiddling at all.

This is how lspci -v shows my motherboard’s sound device:

00:10.1 Audio device: nVidia Corporation MCP51 High Definition Audio (rev a2)
Subsystem: ASUSTeK Computer Inc. Device 81cb
Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 22
Memory at fe024000 (32-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: HDA Intel
Kernel modules: snd-hda-intel

Cure coming for Wrap Rage?

That phenomenon you thought you were alone in experiencing, the rage and anger you feel when you’ve bought some new toy and it arrives packaged in tight, nearly impenetrable plastic that demands a decent amount of violence and persistence to crack open, has a name: Wrap Rage.

I’ve been told the packages (called blister packs or clamshells) are designed this way to show off the merchandise while at the same time preventing theft: it is hard for a customer to just extract something out of those things in your typical physical store.

Amazon’s Frustration-Free Packaging initiative is indeed a refreshing take on this and apparently an attempt to reverse this development. Online stores really have no good reason to use this kind of armor around products, since there’s no risk of shoplifting. I hope others follow suit and make the manufacturers realize that there is a market for this. It needs to be done by the manufacturers themselves; the stores cannot be expected to repackage things, due to warranties and whatnot.

It wouldn’t surprise me if you could even find cheaper ways to package products once you let go of some of the requirements that no longer apply to online stores. Visibility of the product once packaged is another thing that is pointless for online stores but, I would expect, very important to sales in physical stores. I’ve always thought it pretty pointless and expensive that every single package is made to double as a display model in order to attract customers. When you buy the thing online, it’s no longer just pointless, it’s plain stupid.

Imagine a future when you can just open your new toy without getting bruises or scratch marks!

Times I listen

I listen to perhaps 4-5 podcast episodes per week. I figure they last a total of three hours or perhaps a little less. I don’t consider that to be much in any sense, but still I find that a lot of my friends ask me how I find the time to listen to them while at the same time running a “normal” real life with two kids and hacking on a zillion open source projects.

I honestly find the question a bit funny, since I know a lot of people who listen to radio or music far more than that per week.

I just happen to always put the latest episodes of my favorite podcasts on my mp3 player, and I carry the player with me. Whenever I’m about to do something on my own that doesn’t need my full brain present, like grocery shopping, doing the dishes, cleaning up the house, mowing the lawn, or in fact even watching cartoons or children’s television, I can just put an earbud in one ear and get quality shows, and thus enrich the situation I’m in! I can tell you doing the dishes is a lot better with a great podcast!

I currently don’t have a long commute or drive to and from work, which otherwise would be the perfect podcast moments.

Rockbox coming along on Sansa v2s

There has been fierce activity in the dusty corners of the Rockbox project known as the SanDisk Sansa v2 hackers guild (no, not really, but I thought it sounded amusing), and this has so far resulted in early code such as LCD drivers and NAND drivers for three new upcoming targets: the e200, the Fuze and the Clip.

There’s still work to do before the celebrations can start for real, but it’s still nice to see good progress.

Now run over and help out!

(picture by Bertrik Sikken)

Please hide my email

… I don’t want my employer/wife/friends to see that I’ve contributed something cool to an open source project, or perhaps that I said something stupid 10 years ago.

I host and co-host a bunch of different mailing list archives for projects on various web sites, and I never cease to be amazed by how many people try hard to avoid being seen on the internet. I can understand the cases where users accidentally leak information they intended to keep private (although removal from an archive is then not a fix, since it has already been leaked to the world), but I can never understand the large crowd that tries to hide previous contributions to open source projects because they think current or future employers may notice and form a (bad) opinion about it.

I don’t have the slightest sympathy for the claim that they get a lot of spam because their email address appears in my archives, since I only host very public lists and the person’s address was already posted publicly to hundreds of recipients, and in most cases also to several other mailing list archives.

People are weird!

Can IPv6 be made to succeed?

Patrik Fältström, one of the “big guys” in Sweden on issues such as this, apparently held a keynote at a recent internet-related conference (“Internetdagarna”) where he addressed this topic (in Swedish). The slides from his talk are available from his blog.

Indeed a good read. Again: in Swedish…

In summary: the current state is bad, little is being done to improve things, and all alternatives to IPv6 look like worse solutions.

Estimated-Content-Length

Greg Dean posted an interesting idea on the ietf-http-wg mailing list, suggesting that a new response header be added to HTTP (Estimated-Content-Length:) to allow servers to indicate a rough estimate of the content length in situations where they don’t actually know the exact size before starting to send data.

In the current world, HTTP servers can either report the exact size to the client or report no size at all, in which case the client just has to deal with a response that may turn out to be any size. It then has no way of knowing even roughly how large the data is or how long the transfer is going to take.
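To get a feel for how a client could make use of such a header, here is a minimal sketch using libcurl’s header callback. Keep in mind that Estimated-Content-Length is only a proposal at this point, and the URL in the example is just a placeholder:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <curl/curl.h>

/* Header callback: look for the proposed Estimated-Content-Length
   header among the response headers and remember its value. */
static size_t header_cb(char *buffer, size_t size, size_t nitems, void *userdata)
{
  size_t len = size * nitems;
  const char *name = "Estimated-Content-Length:";
  long *estimate = (long *)userdata;

  if(len > strlen(name) && !strncasecmp(buffer, name, strlen(name)))
    *estimate = strtol(buffer + strlen(name), NULL, 10);

  return len; /* tell libcurl the whole header line was handled */
}

int main(void)
{
  CURL *curl = curl_easy_init();
  long estimate = -1;

  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big-download");
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* headers only */
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, header_cb);
    curl_easy_setopt(curl, CURLOPT_HEADERDATA, &estimate);
    curl_easy_perform(curl);

    if(estimate >= 0)
      printf("Server estimates roughly %ld bytes\n", estimate);
    else
      printf("No size estimate given\n");

    curl_easy_cleanup(curl);
  }
  return 0;
}

A client could use the value to size a progress bar or pre-allocate a buffer, while still being prepared for the final size to turn out different.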

The discussion following Greg’s post has been mostly positive so far, with input from several people.

In the middle there is a man

The other day an interesting bug report was posted against the Firefox browser, and it caused some interesting discussions and blog posts on the subject of Man-In-The-Middle attacks and how current browsers make it (too?) easy to accept self-signed certificates, so that users are easily misled. (Peter Burkholder wrote a great piece on SSL MITMing already back in 2002 which goes into detail on how this can be done.)

The entire issue essentially boils down to this:

To be able to really know that you’re communicating with the true remote site (and not an impostor), you must have some kind of verification system.

In SSL land we have this system with CA certs for verifying certs, and it works pretty well in most cases (I think). However, a great many sites on the internet use HTTPS today without having the certificate signed by a party that is already known to the browser. Most of them are so-called self-signed, which means there’s nobody else guaranteeing that they are who they claim to be, just themselves.
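For a rough idea of what that verification amounts to outside a browser, here is a minimal libcurl sketch; the URL is just a placeholder. Verifying the peer against a CA bundle is the default, and turning it off is the programmatic equivalent of clicking through the self-signed-certificate warning:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/"); /* placeholder */

    /* The defaults: check that the certificate is signed by a CA we trust
       and that it matches the host name. A self-signed cert makes the
       transfer fail instead of silently proceeding. */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);

    /* The "just accept it" route, which is exactly what opens the door
       for a MITM, would be to set both options to 0 instead. */

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}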

All current modern browsers want to give users easy access to plain HTTP sites and to HTTPS sites with valid, properly-signed certs, but also to let users connect to and browse HTTPS sites with self-signed certs. And here comes the problem: how do you tell users that HTTPS with a self-signed cert is very insecure, but still let them proceed? How do we tell them that they may proceed, but that if this is a well-known popular site they really should expect a true and valid certificate, as otherwise it is quite possibly a MITM they’re seeing?

People are so used to just accepting exceptions and clicking away nagging pop-ups that the warnings and alerts, both those stated explicitly and those implied by the prompts you have to go through to accept a self-signed certificate, don’t seem to have much effect. As can be seen in this bug report, accepting an impostor’s certificate for a large, well-known site is far too easy.

In SSH land, however, we don’t have the CA cert system and top-down trust hierarchy that SSL/TLS imposes. But does this matter? I’d say no, as most if not all users still don’t reflect much on it when a server’s host key is reported as different from the one they saw before. Nor do they, when connecting to a host for the first time, try to verify the host key over a different channel before accepting it. Thus you’re subject to pretty much the same MITM risk. The difference is perhaps that fewer “mere end users” use SSH this way.

Let me just put emphasis on this: SSL and SSH are secure. The insecurity here is not due to how the protocols work; rather, these are flaws that appear when we mix in real-world users, UIs and so on.

I don’t have any sensible solutions to these problems myself. I’m crap at designing things for mere humans and UIs etc and I make no claims of understanding end users.

It seems there’s a nice tool called ettercap that’s supposedly a fine thing to use when you want to run your own MITM attacks on your LAN! And on the other side: an interesting take on improving the “accept this certificate” UI is offered by the Firefox Perspectives plugin, which basically also checks N other sources’ view of the certificate to help you decide whether to trust it.

I want to round off my rant with a little quote:

“I have little, and decreasing, desire to continue to invest in strong security for a product that discards that security for the masses” [*] / Nelson B Bolyard, prominent NSS hacker

Curl Cyclomatic Complexity

I was at the OWASP Sweden meeting last night and spoke about open source and security. One of the other speakers present was Simon Josefsson, who in his talk showed a nice table listing functions in his project sorted by “complexity”. Functions above a certain score are then considered “high risk”, as they are hard to read and follow and thus may be subject to security problems.

Kind man that he is, Simon already publishes a page with a Curl Cyclomatic Complexity Report, nicely identifying a bunch of functions we should really consider poking at to decrease the complexity of. The top-10 “bad” functions are:

Function            Score  Statements  Lines  File
ssh_statemach_act     254         880   1582  lib/ssh.c
Curl_http             204         395    886  lib/http.c
readwrite_headers     129         269    709  lib/transfer.c
Curl_cookie_add       118         247    502  lib/cookie.c
FormAdd               105         210    421  lib/formdata.c
dprintf_formatf        92         233    395  lib/mprintf.c
multi_runsingle        94         251    606  lib/multi.c
Curl_proxyCONNECT      74         212    443  lib/http.c
readwrite_data         73         127    319  lib/transfer.c
ftp_state_use_port     60         195    387  lib/ftp.c

I intend to use this as an indication of which functions within libcurl to work on. My plan is primarily to break each of these functions down into smaller ones to make them easier to read and follow. It would be cool to get every single function below 50, but I’m not sure that’s feasible or even really a good idea.
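To illustrate the kind of restructuring I have in mind, here is a small and purely hypothetical sketch (the state names and helpers are made up, not actual libcurl code). Instead of one giant function with a switch where every case carries its own nested logic, each state gets its own small static helper and the dispatcher stays trivial, which is what drives the complexity score of any single function down:

/* Hypothetical sketch, not actual libcurl code. */

typedef enum {
  STATE_CONNECT,
  STATE_AUTH,
  STATE_TRANSFER,
  STATE_DONE
} state_t;

/* One small helper per state, each easy to read and test on its own. */
static int handle_connect(void *conn)
{
  (void)conn; /* connect-specific logic would go here */
  return 0;
}

static int handle_auth(void *conn)
{
  (void)conn; /* authentication-specific logic would go here */
  return 0;
}

static int handle_transfer(void *conn)
{
  (void)conn; /* transfer-specific logic would go here */
  return 0;
}

/* The dispatcher remains a flat switch no matter how many states are
   added, instead of being one huge function containing all the logic. */
static int statemachine(state_t state, void *conn)
{
  switch(state) {
  case STATE_CONNECT:  return handle_connect(conn);
  case STATE_AUTH:     return handle_auth(conn);
  case STATE_TRANSFER: return handle_transfer(conn);
  case STATE_DONE:     return 0;
  }
  return -1;
}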