Tag Archives: Mozilla

My talks at FOSDEM 2015


Sunday 13:00, embedded room (Lameere)

Title: Internet all the things – using curl in your device

Embedded devices are very often network connected these days. Such devices frequently need to transfer data to and from them, acting as clients, using one or more of the popular internet protocols.

libcurl is the world’s most used and most popular internet transfer library, already used in every imaginable sort of embedded device out there. How did this happen and how do you use libcurl to transfer data to or from your device?

Note that this talk was originally scheduled to be at a different time!
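To give a flavor of what that looks like, here is a minimal sketch of a libcurl transfer in C – the URL is just a placeholder and error handling is kept deliberately short:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl;
    CURLcode res;

    /* global init once per program, one easy handle per transfer */
    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/data");
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); /* follow redirects */
      res = curl_easy_perform(curl);  /* blocking transfer, body to stdout */
      if(res != CURLE_OK)
        fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
  }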

Sunday, 09:00 Mozilla room (UD2.218A)

Title: HTTP/2 right now

HTTP/2 is the new version of the web’s most important and most used protocol. Version 2 is due to be out very soon after FOSDEM and I want to inform the audience about what’s going on with the protocol, why it matters to most web developers and users and, not least, what its status is at the time of FOSDEM.

My first year at Mozilla

January 13th 2014 I started my first day at Mozilla. One year ago exactly today.

It still feels like it was just a very short while ago and I keep having this sense of being a beginner at the company, in the source tree and all over.

It has been a year of networking code work that, at least during periods, has not progressed as quickly as I would have wished, and I’ve had some really hair-tearing problems and challenges that have taken sweat and tears to get through. But I am getting through and I’m enjoying every (oh well, let’s say almost every) moment.

During the year I’ve had the chance to meet up with my teammates twice (in Paris and in Portland) and I’ve managed to attend one IETF meeting (in London) and two special HTTP2 design meetings (in London and NYC).

openhub.net counts 47 commits by me in Firefox, which feels like counting on the high side. Bugzilla has tracked activity by me in 107 bug reports through the year.

I’ve barely started. I’ll spend the next year improving Firefox networking as well, hopefully with a higher output this time. (I don’t mean to make this sound as if Firefox networking is just me; I’m speaking only for my particular part of the networking team and effort and I let the others speak for themselves!)

Onwards and upwards!

FOSS them students

On October 16th, I visited DSV at Stockholm University where I had the pleasure of holding a talk and discussion with students (and a few teachers) under the topic Contribute to Open Source. Around 30 people attended.

Here are the slides I used; as usual they may not tell the whole story stand-alone without the talk, but no recording was made and I talked in Swedish anyway…

My home setup

I work in my home office, which is upstairs in my house, perhaps 20 steps from my kitchen and the coffee refill. I have a largish desk with room for a number of computers; the photo below shows the three-meter beauty. My two kids have their two machines on the left side, while I use the right side for my desktop and laptop.

Daniel's home office

Many computers

The kids use my old desktop computer with a 20″ Dell screen and my old 15.6″ dual-core Asus laptop. My wife has her laptop downstairs and we have a permanent computer installed underneath the TV for media (an Asus VivoPC).

My desktop computer

I’m primarily developing C and C++ code and I’m frequently compiling rather large projects – repeatedly. I use a desktop machine for my ordinary development, equipped with a fairly powerful 3.5GHz quad-core Core i7 CPU, and I have my OS, my home dir and all source code on an SSD. I have a larger HDD for larger and slower content. With ccache and friends, this baby can build Firefox really fast. I put my machine together from parts myself, as I couldn’t find a suitable one focused on horsepower yet with a “normal” 2D graphics card that works fine with Linux. I use a Radeon HD 5450 based ASUS card, which works fine with fully open source drivers.

I have two basic 24-inch LCD monitors (Benq and Dell), both at 1920×1200 resolution. I like having lots of windows up; nothing runs full-screen. I use KDE as my desktop environment and I edit everything in Emacs. Firefox is my primary browser. I don’t shut this machine down; it runs a few simple servers for private purposes.

My machines (and my kids’) all run Debian Linux, typically of the unstable flavor allowing me to get new code reasonably fast.

My desktop keyboard is a Func KB-460, a mechanical keyboard with some funky extra candy such as red backlight and two USB ports. Both my keyboard and my mouse are wired, not wireless, to avoid the need for batteries or recharging in this environment. My mouse is a basic and old Logitech MX 310.

I have a crufty old USB headset with a mic that works fine for hangouts and for listening to music when the rest of the family is home. I have a Logitech webcam sitting on the screen too, but I hardly ever use it for anything.

When on the move

I sometimes need to move around and work from other places, going to conferences or our regular Mozilla work weeks. Hence I also have a laptop that is powerful enough to build Firefox in a sane amount of time: a Lenovo ThinkPad W540 with a 2.7GHz quad-core Core i7, 16GB of RAM and 512GB of SSD. It has the most annoying touchpad. I don’t like that it lacks explicit buttons, so for example both-clicking (to simulate a middle-click), like when pasting text in X11, is virtually impossible.

On this machine I also run a VM with Windows 7 and an associated development environment installed, so I can build and debug Firefox for Windows on it.

I have a second portable: a small and lightweight 10.1″ netbook, an Eee PC S101, that I’ve used when I just go somewhere to do a presentation. Recently I’ve started to simply bring my primary laptop even for those occasions – primarily because the netbook is too slow to do anything else on.

I do video conferences a couple of times a week and we use Vidyo for that. Its Linux client is shaky to say the least, so I tend to use my Nexus 7 tablet for it since the Vidyo app at least works decently on that. It also allows me to quite easily change location when necessary, which happens since my meetings tend to occur in the evenings when there are also varying amounts of “family activities” going on!

Backup

For backup, I have a Synology DS211j NAS equipped with 2TB of disk in a RAID, stashed downstairs on the wired in-house gigabit Ethernet. I run an rsync job every night that syncs the important stuff to the NAS, and a second rsync that mirrors relevant data over to a friend’s house, just in case something terribly bad would go down. My NAS backup has already saved me at least once.

Printer

Next to the NAS downstairs is the house printer, an HP Officejet 8500A, also attached to the gigabit network even though it has a wifi interface of its own. I just like the increased reliability of having the “fixed services” in the house on the wired network.

The printer also has scanning capability, which has actually come in handy several times. The thing works nicely from my Linux machines as well as my wife’s Windows laptop.

Internet

I have fiber going directly into my house. It is still “just” a 100/100 connection at the other end of the fiber, since at the time I installed this they didn’t yet have equipment to deliver beyond 100 megabit in my area. I’m sure I’ll upgrade to something more impressive in the future, but this is a pretty snappy connection already. I also have just a few milliseconds of latency to my primary servers.

Having the fast uplink is perfect for doing good remote backups.

Router and wifi

I have a lowly D-Link DIR-635 router and wifi access point, providing wifi on the 2.4GHz and 5GHz bands and gigabit speed on the wired side. It was dead cheap and it just works. It NATs my traffic and forwards some ports through to my desktop machine.

The router can also update the dyndns info, which ultimately allows me to reach my home machine by a fixed name even without a fixed IP.

Frequent Wifi users in the household include my wife’s laptop, the TV computer and all our phones and tablets.

Telephony

When I installed the fiber I gave up the copper connection to my home, and since then I use IP telephony for the “land line”: basically a little box (a Ping Communication Voice Catcher 201E) that translates IP to old phone tech, so I keep using my old DECT phone. These days basically only our parents call this number, and it has been useful to have the kids use it for outgoing calls, up until they got their own mobile phones.

It doesn’t cost very much, but the usage is dropping over time so I guess we’ll just give it up one of these days.

Mobile phones and tablets

I have a Nexus 5 as my daily phone. I also have a Nexus 7 and Nexus 10 that tend to be used by the kids mostly.

I have two Firefox OS devices for development/work.

I’m with Firefox OS!

Tablet

I have received a Firefox OS tablet as part of a development program. My plan is to use this device to try out stuff I work on and see how it behaves on Firefox OS “for real”, instead of just in emulators or on other systems. While Firefox OS is a product of my employer Mozilla, I personally don’t work much with Firefox OS specifically. I work on networking in general for Firefox, and large chunks of the networking stack are used both in the ordinary desktop Firefox browser and in Firefox OS. I hope to polish and improve networking on Firefox OS too over time.

Firefox OS tablet

Phone

The primary development device for Firefox OS is right now apparently the Flame phone, and I now have one of these in my possession too. I took a few photos when I unpacked it and crammed them into the same image; click it for higher res:

Flame - Firefox OS phone

A brief explanation of Firefox OS

Firefox OS is an Android kernel (including drivers etc) and a bionic libc – simply the libc that Android uses. Linux-wise and slightly simplified, it runs a single application full-screen: Firefox, which then can run individual Firefox apps that appear as apps on the phone. This means that the underlying fundamentals are shared with Android, while the layers above that are Firefox and then a world of HTML and JavaScript. Thus most of the network stack I work with for Firefox – HTTP, FTP, DNS, cookies and so forth – is shared between Firefox for desktop, Firefox for Android and Firefox OS.

Firefox OS is made with a small footprint to allow cheaper smartphones than Android itself can run on. Hence it is targeted at developing nations and regions.

Both my devices came with Firefox OS version 1.3 pre-installed.

The phone

The specs: Qualcomm Snapdragon 1.2GHz dual-core processor, 4.5-inch 854×480 pixel screen, five-megapixel rear camera with auto-focus and flash, two-megapixel front-facing camera. Dual-SIM 3G, 8GB of onboard memory with a microSD slot, and an 1800 mAh battery.

The Flame phone should be snappy enough, although at times it seems to take a moment too long to populate a newly shown screen with icons etc. The screen surface is somehow not as smooth as on my Nexus devices (we have the Nexus 4, 5, 7 and 10 in the house), leaving me with a constant feeling that the screen isn’t clean.

Its dual-SIM support seems ideal for traveling: I can use my home SIM for incoming calls but a local SIM for data and outgoing calls… I’ve never had a phone featuring that before. I’ve purchased a prepaid SIM card to use with this phone as my secondary device.

Some Good

I like the feel of the tablet. It feels like a solid and sturdy 10″ tablet, just like it should. I think the design language of Firefox OS for a newbie such as myself is pleasing and good-looking. The quad-core 1GHz thing is certainly fast enough CPU-wise to eat most of what you can throw at it.

These are really good devices for web browsing, as the browser is highly capable and fast.

Mapping: while there’s of course no Google Maps app, using the openstreetmap map is great on the device, and Google Maps in the browser is also a perfectly decent way to view maps. Using openstreetmap of course has the added bonus that it feels great to see your own edits in your own neck of the woods!

I really appreciate that Mozilla pushes for new, more and better standardized APIs to enable all of this to get done in web applications. To me, this is one of the major benefits with Firefox OS. It benefits all of us who use the web.

Some Bad

Firefox OS feels highly US-centric (which greatly surprised me, seeing that the primary markets for Firefox OS are certainly not in the US). As a Swede, I of course want my calendar to show Monday as the first day of the week. No can do. I want my digital clock to show the time in 24-hour format (the am/pm scheme only confuses me). No can do. Teeny tiny details in the grand scheme of things, yes, but annoying. Possibly I’m just stupid and didn’t find how to switch these settings, but I did look for them on both my devices.

The actual Firefox OS system feels like a scaled-down Android where all apps are simpler and less fancy than on Android. There’s a Facebook “app” for it that shows Facebook looking much crappier than it usually does in a browser or in the Android app – although on the phone it looked much better than on the tablet, for some reason I don’t understand.

I managed to get the device to sync my contacts from Google (even with my Google 2-factor auth activated), but trying to sync my Facebook contacts just gave me a very strange error window in spite of repeated attempts – though again, that worked on my phone!

I really miss a proper back button! Without it, we end up in this handicapped iPhone-like world where each app has to provide a back button in its own UI, or I have to hit the home button – which doesn’t just go back one step.

The tablet supports a gesture – pull up from the bottom of the screen – to get to the home screen, while the phone doesn’t support that but instead has a dedicated home button which, if pressed a long time, shows cards with all currently running apps. I’m not even sure how to do that latter operation on the tablet, as it doesn’t have a home button.

The Gmail web interface and experience are not very good on either of the devices.

Building Firefox OS

I’ve only just started this venture and dipped my toes in that water. All the code is there in the open and you build it all with open tools. I might get back to this topic later if I get the urge to ventilate something about it… 🙂 I didn’t find any proper device-specific setup for the tablet, but maybe I just don’t know its proper code word and I’ve only given it a quick glance so far. I’ll do my first builds and installs for the phone. Any day now!

More

My seven-year-old son immediately found at least one game on my dev phone that he really likes (he actually found the marketplace and downloaded it all by himself the first time he tried the device) and now he wants to borrow it from time to time to play that game – in competition with the Android phones and tablets we already have here. A pretty good sign, I’d say.

Firefox OS is already a complete and competent phone operating system and app ecosystem. If you’re not coming from Android or iPhone, it is a step up from everything else. If you do come from Android or iPhone, I think you have to accept that this is meant for the lower-end spectrum of smartphones.

I think the smart-phone world can use more competition and Firefox OS brings exactly that.

Firefox OS boot screen

I’m eight months in on my Mozilla adventure

I started working for Mozilla in January 2014. Here are some reflections from my first stretch as a Mozilla employee.

Working from home

I’ve worked completely from home during some short periods before in my life so I had an idea what it would be like. So far, it has been even better than I had anticipated. It suits me so well it is almost scary! No commutes. No delays due to traffic. No problems ever with over-crowded trains or buses. No time wasted going to work and home again. And I’m around when my kids get home from school, and it’s easy to receive deliveries any day. I don’t think I ever want to work elsewhere again… 🙂

Another effect of my workplace is that I’ve probably become somewhat more active on social networks and IRC. If I didn’t use those means, I might spend whole days without talking to any other humans.

Also, I’m the only Mozilla developer in Sweden – although we have a few more employees in Sweden. (Update: apparently this is wrong and there’s also a Mats here!)

Daniel's home office

The freedom

I have freedom at work. I control and decide a lot of what I do and I get to do a lot of what I want. I can work during the hours I want; as long as I deliver, my employer doesn’t mind. The freedom isn’t just about working hours: I also have a lot of control and say in what I work on and what I think we as a team should work on going forward.

The not counting hours

For the last 16 years I’ve been a consultant where my customers almost always have paid for my time. Paid by the hour I spent working for them. For the last 16 years I’ve counted every single hour I’ve worked and made sure to keep detailed logs and tracking of whatever I do so that I can present that to the customer and use that to send invoices. Counting hours has been tightly integrated in my work life for 16 years. No more. I don’t count my work time. I start work in the morning, I stop work in the evening. Unless I work longer, and sometimes I start later. And sometimes I work on the weekend or late at night. And I do meetings after regular “office hours” many times. But I don’t keep track – because I don’t have to and it would serve no purpose!

The big code base

I work with Firefox, in the networking team. Firefox has about 10 million lines of C and C++ code alone. Add to that everything else that is other languages, glue logic, build files, tests and lots and lots of JavaScript.

It takes time to get acquainted with such a large and old code base, and much of the architecture, or traces of the original architecture, was designed almost 20 years ago in ways that not many people would still call good or preferable.

Mozilla uses Mercurial as the primary revision control tool, and I started out convinced I should too and really get to learn it. But darn it, it is really too similar to git, and yet lots of words are intermixed and used as commands that don’t do the same as they do in git, so it turned out really confusing and, yeah, I felt handicapped a little bit too often. I’ve switched over to using the git mirror and I’m now a much happier person. A couple of months in, I’ve not once been forced to switch away from git, mostly thanks to fancy scripts and helpers from fellow colleagues who made this jump before me and already paved the road.

C++ and code standards

I’m a C guy (note the absence of “++”). I’ve primarily developed in C for the whole of my professional developer life – which is approaching 25 years. Firefox is a C++ fortress. I know my way around most C++ stuff, but I’m not “at home” with C++ in any way just yet (I never was), so sometimes it takes me a little time and reading up to get all the C++-ishness correct. Templates, casting, different code styles, subtleties that aren’t in C, and more. I’m slowly adapting, but some things and habits are hard to “unlearn”…

The publicness and Bugzilla

I love working full time for an open source project. Everything I do during my work days is public knowledge. We work a lot with Bugzilla, where all bugs (well, except the security-sensitive ones) are open and public. My comments, my reviews, my flaws and my patches can all be reviewed, ridiculed or improved by anyone out there who feels like doing it.

Development speed

There are several hundred developers involved in basically the same project and products. The commit frequency and the speed at which changes are crammed into the source repository is mind-boggling. Several hundred commits daily. Many hundreds, and sometimes up to a thousand, new bug reports are filed – daily.

Yet, slowness in moving some bugs forward

Moving a particular bug forward into actually getting it landed and included in pending releases can be a lot of work and it can be tedious. It is a large project with lots of legacy, traditions and people with opinions on how things should be done. Getting something to change from an old behavior can take a whole lot of time, massaging and discussion before it gets through. Don’t get me wrong, this is a good thing; it just stands in direct contrast to my previous paragraph about development speed.

In the public eye

I knew about Mozilla before I started here. I knew Firefox. Just about every person I’ve ever mentioned those two brands to has known about at least Firefox. This is different from what I’m used to. Of course, hardly anyone fully grasps what I’m actually doing on a day-to-day basis, but I’ve long given up on even trying to explain that to family and friends. Unless they really insist.

Vitriol and expectations of high standards

I must say that being in the Mozilla camp when changes are made or announced has given me a less favorable view of the human race. Almost any change is received by a certain number of users who are very aggressively against it. All changes, really. “If you do that I’ll be forced to switch to Chrome” is a very common “threat” – as if that would A) work or B) be a browser that would care more about such “conservative loonies” (consider that my personal term for such people). I can only assume that the Chrome team gets a fair share of that sort of threat in the other direction…

Still, a lot of people out there, perhaps especially in the Free Software world, seem to hold Mozilla to very high standards. This is both good and bad. The expectation of being very good also comes from people who aren’t even Firefox users – we must remain the bright light in a world that grows darker. In my (biased) view, that tends to lead to unfair criticism: the other browsers can make some of these changes without anyone raising an eyebrow, but when Mozilla does something similar for Firefox, a shitstorm breaks out. Lots of the people criticizing us for doing change NN already use browser Y that has been doing NN for a good while already…

Or maybe I’m just not seeing these things with clear enough eyes.

How does Mozilla make money?

Yeps. This is by far the most common question I’ve gotten from friends when I mention who I work for. In fact, that’s just about the only question I get from a lot of people… (possibly because after that we get into complicated questions, such as what exactly it is that I do there?)

curl and IETF

I’m grateful that Mozilla allows me to spend part of my work time working on curl.

I’m also happy to now work for a company that lets me attend IETF/httpbis and related activities much better than I’ve ever had the opportunity to in the past. Previously I’ve pretty much had to spend spare time and my own money, which has limited my participation a great deal. The support from Mozilla has allowed me to attend two meetings so far this year, in London and in NYC, and I suspect there will be more chances in the future.

Future

I only just started. I hope to grab on to more and bigger challenges and tasks as I get warmer and more into everything. I want to make a difference. See you in Bugzilla.

Firefox and partial content

Update: parts of the change mentioned in this blog post have subsequently been reverted, since clearly I had a too positive view of the Internet.

One of the first bugs that fell into my lap when I started working for Mozilla not very long ago was bug 237623. Anyone involved in Mozilla knows a bug in that range is fairly old (we just recently passed one million filed bugs). This particular bug was filed in March 2004 and there are (right now) 26 other bugs marked as duplicates of it. Today, the fix for this problem has landed.

The core of the problem is that when an HTTP server sends content back to a client, it can send a header along indicating the size of the data in the response. The header is called “Content-Length:”. If the connection gets broken during transfer for whatever reason and the browser hasn’t received as much data as was initially claimed, that’s a very good hint that something is wrong and the transfer was incomplete.
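For illustration, a response carrying this header could look like the following (sizes and types made up), telling the client exactly how many body bytes to expect:

  HTTP/1.1 200 OK
  Content-Type: application/octet-stream
  Content-Length: 735421440

  [exactly 735421440 bytes of body should follow]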

Perhaps the most annoying way this shows up is when you download a huge DVD image or something and for some reason the connection gets cut off after only a short time, way before the entire file is downloaded, but Firefox just silently accepts that as the end of the transfer and thinks everything was fine and dandy.

What complicates the issue is the eternal problem: not everything abides by the protocol. That said, if there are frequent violators of the protocol we can’t strictly fail on every problem we detect; we must instead do our best to handle them anyway.

Is Content-Length a frequently violated HTTP response header?

Let’s see…

  1. Back in the HTTP 1.0 days, the Content-Length header was not very important as the connection was mostly shut down after each response anyway, so clients/browsers swiftly learned to just wait for the disconnect.
  2. Back in the old days, there were cases of problems with “large files” (files larger than 2 or 4GB) which every now and then caused the Content-Length: header to turn into negative or otherwise confused values when it wrapped. That’s not really happening these days anymore.
  3. With HTTP 1.1 and its pervasive use of persistent connections, it is important to get the size right, as otherwise the chain of requests gets messed up and we end up with tears and sad faces.
  4. In curl’s HTTP parser we’ve always strictly abided by this header and we’ve bailed out hard on mismatches. This is a very rare error for users to get, and based on this (admittedly unscientific) data I believe that sending bad Content-Length headers is not widespread among servers.
  5. It seems Chrome, at least in some aspects, is already much stricter about this header.

My fix for this problem takes a slightly careful approach and only enforces the strictness for HTTP 1.1 or later servers. But then as a bonus, it has grown to also signal failure if a chunked-encoded transfer ends without the ending trailer, or if a SPDY or http2 transfer gets prematurely stopped.

This is basically a 6-line patch at its core. The rest is fixing up old test cases, adding new tests, etc.
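As a rough illustration only – this is not the actual Firefox code and all names are made up – the core idea amounts to something like this:

  #include <stdint.h>
  #include <stdbool.h>

  #define HTTP_VERSION_1_1 11  /* made-up version constant */

  /* Hypothetical sketch of the check described above: a response that
     ends before the announced Content-Length is a partial transfer. */
  static bool is_partial_transfer(int http_version,
                                  int64_t content_length, /* -1 if absent */
                                  int64_t bytes_received)
  {
    return http_version >= HTTP_VERSION_1_1 &&
           content_length >= 0 &&
           bytes_received < content_length;
  }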

As a counter-point, Eric Lawrence apparently worked on adding stricter checks in IE9 three years ago as he wrote about in Content-Length in the Real World. They apparently subsequently added the check again in IE10 which seems to have caused some problems for them. It remains to be seen how this change affects Firefox users out in the real world. I believe it’ll be fine.

This patch also introduces an error code for a few other similar network situations where the connection is closed prematurely and we know there is outstanding data that never arrived, and I got the opportunity to improve how Firefox behaves when downloading an image that gets an error before the complete image has been transferred. Previously (when a partial transfer wasn’t an error), it would always throw away the image on an error and show the “image not found” picture instead. That really doesn’t make sense, I believe, as a partial image is better than that default one – especially when a large portion of the image has already been downloaded.

Follow-up effects

Other effects of this change that possibly might be discovered and cause some new fun reports: prematurely cut off transfers of javascript or CSS will discard the entire javascript/CSS file. Previously the partial file would be used.

Of course, I doubt these files are cut off as commonly as many other file types, but on a very slow and bad connection it may still happen, and the new behavior will make Firefox act as if the file wasn’t loaded at all, instead of, as before, happily using the portions of the file it had actually received. Partial CSS and partial javascript could of course lead to some “fun” effects of brokenness.

Crashed and recovered in no time

Working from home, even writing software from home, my computer setup is pretty crucial for a productive work day.

Yesterday morning after I had sat down with my coffee and started to work on my latest patch iteration I noticed that some disk operations seemed to be very slow. I looked around and then suddenly an ‘ls’ of a directory returned an error!

I checked the system logs and saw them filling up with error messages identifying problems with a hard drive. Very quickly I identified the drive as the bigger one (I have one SSD and one much larger HDD). Luckily, that’s the one I mostly store documents, pictures and videos on, and I back it up every night. The disk is not very old and I’ve never experienced this sort of disk crash before, not even with disks I’ve used for many more years than this one…

I ripped the thing out, booted up again and could still work, since my source code and OS are on the SSD. I ordered a new drive at once. Phew.

Tuesday morning I noticed that, for some unexplainable reason, I had had my /var partition on the dead drive (and not backed up). That turned out to be a bit inconvenient, since now my Debian Linux had no idea which packages I had installed, and apt-get and dpkg were crippled to death.

I did some googling and, as my laptop is also a Debian sid install, I managed to restore things pretty swiftly by copying over data from there. At least the /var contents are now mostly back to where they were before.

On Tuesday midday, some 26 hours after I ripped out the disk, my doorbell bing-bonged and the delivery guy handed me a box with a new and shiny 3 TB drive. A couple of hours ago I inserted it, partitioned it, read back a couple of hundred gigabytes of backup, put the backup job back in cron and… yeah, I think I’m basically back to where I was before it all went south.

All in all: saved by the backup. Not many tears. Phew this time.

Less plain-text is better. Right?

Every connection and every user on the Internet is being monitored and snooped on to at least some extent every now and then. Everything from the casual firesheep user in your coffee shop, an admin at your ISP, your parents/kids on your wifi network, your employer on the company network, your country’s intelligence service in a national network hub, to just a random rogue person somewhere in the middle of all this.

My involvement in HTTP makes me view and participate in this discussion primarily with that protocol in mind, but the discussion goes well beyond HTTP and the concepts can (and will?) be applied to most Internet protocols in the future. You can follow some of these discussions in the httpbis group, the UTA group, the tcpcrypt list, on twitter and elsewhere.

IETF just published RFC 7258 which states:

Pervasive Monitoring Is a Widespread Attack on Privacy

Passive monitoring

Most networking surveillance can be done entirely passively by just running the correct software and listening in on the correct cable. Because most internet traffic is still plain-text and readable by anyone who wants to read it when the bytes come flying by. Like your postman can read your postcards.

Opportunistic?

Recently there’s been a fierce discussion going on both inside and outside of IETF and other protocol and standards groups about doing “opportunistic encryption” (OE) and its merits and drawbacks. The term, which in itself is being debated and often said to be better called “opportunistic keying” (OK) instead, is about having protocols transparently (invisibly to the user) upgrade plain-text versions to TLS unauthenticated encrypted versions of the protocols. I’m emphasizing the unauthenticated word there because that’s a key to the debate. Recently I’ve been told that “opportunistic security” is the term to use instead…

In the way of real security?

Basically, the argument against opportunistic approaches tends to go like this: by opportunistically upgrading plain text to unauthenticated encrypted communication, sysadmins and users will consider that good enough and will then not switch to proper, strong and secure authenticated encryption technologies. The less good alternative will hamper adoption of the secure alternative. Server admins might just as well buy a cert for 10 USD and use proper HTTPS. Also, listeners can still listen in on or man-in-the-middle unauthenticated connections if they capture everything from the start of the connection, including the initial key exchange. Or the passive listener will just become an active party, and this unauthenticated way doesn’t detect that. OE doesn’t prevent snooping.

Isn’t it better than plain text?

The argument for opportunism here is that nothing will be shown to the user suggesting an “upgrade” to something less bad than plain text. Browsers will not show the padlock, clients will not treat the connection as “secure”. It will just silently and transparently make passive monitoring of networks much harder and force actors who truly want to snoop on specific traffic to up their game, probably switching to active monitoring for more cases – something that’s much more expensive for the listener. It isn’t about the cost of a cert. It is about setting up and keeping the cert up to date, about SNI not being widely enough adopted, and the fact that we can see only 30% of all sites on the Internet today use HTTPS – for these reasons and others.

HTTP:// over TLS

In the httpbis working group in IETF, the outcome of this debate is that a way is being defined for how to do HTTP as specified with an HTTP:// URL – which we’ve learned is plain text – over TLS, as part of the http2 work. Alt-Svc is the way. (The header can also be used to just load balance HTTP etc, but I’ll ignore that for now.)
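For illustration, such a response header could look something like this (the exact protocol tokens were still being settled in the drafts at the time of writing):

  Alt-Svc: h2=":443"; ma=86400

meaning that the same origin is also reachable using HTTP/2 on port 443, and that this information may be cached for 86400 seconds.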

Mozilla and the Firefox team are basically the only ones that initially stand behind the idea of implementing this in a browser. HTTP:// done over TLS will not be seen nor considered any more secure than ordinary HTTP, and users will not be aware whether it happens or not. Only true HTTPS connections will get the padlock, secure cookies and the other goodies true HTTPS sites are known and expected to get and show.

HTTP:// over TLS will just silently send everything through TLS (assuming that it can actually negotiate such a connection), thus making passive monitoring of the network less easy.

Ideally, future http2-capable servers will only require a config entry to be set to TRUE to make it possible for clients to do OE against them.

HTTPS is the secure protocol

HTTP:// over TLS is not secure. If you want security and privacy, you should use HTTPS. This said, MITMing HTTPS transfers is still a widespread practice in certain network setups…

TCPcrypt

I find this initiative rather interesting. If implemented, it removes the need for all these application-level protocols to do anything about opportunistic approaches; it could instead be handled transparently at the TCP level! It still has a long way to go before we see anything like this fly in real life, though.

The future will tell

Is this just a fad that will get no adoption and go away or is it the beginning of something that will change how we do protocols in the future? Time will tell. Many harsh words are being exchanged over this topic in many a debate right now…

(I’m trying to stick to “HTTP:// over TLS” here when referring to doing HTTP OE/OK over TLS. This is partly because RFC2818 that describes how to do HTTPS uses the phrase “HTTP over TLS”…)

My first Mozilla week

Working from home

I get up in the morning, shave, eat breakfast and make sure all family members get off as they should. Most days I walk my son to school (some 800 meters) and then back again. When they’re all gone, the house is quiet and then me and my cup of coffee go upstairs and my work day begins.

Systems and accounts

I have spent time this week setting up accounts and signing up for various lists and services. I’ve created profiles, uploaded pictures and confirmed passwords. I’ve submitted stuff and I’ve signed things. There are quite a lot of systems in use.

My colleagues

I’ve met a few. The Necko team isn’t very big but the entire company is huge and there are just so many people and names. I haven’t yet had any pressing reason to meet a lot of people nor learn a lot of names. I feel like I’m starting out this really slowly and gradually.

Code base

Firefox is a large chunk of code. It takes some 20 minutes to rebuild on my 3.5GHz quad-core Core i7 with SSD. I try to pull code and rebuild every morning now so that I can dogfood and live on the edge. I also have a bunch of local patches now, some of which I want to have stewing in my own browser for a while so that I know they at least don’t have any major negative impact!

Figuring out the threading, XPCOM, the JavaScript stuff and everything is a massive task. I really cannot claim to have done more than just scratched the surface so far, but at least I am scratching and I’ve “etagged” the whole lot and I’ve spent some time reading and reviewing code. Attaching a gdb to a running Firefox and checking out behavior and how it looks has also helped.

Netwerk code size

“Netwerk” is the directory name of the source tree where most of the network code is located. It is actually not so ridiculously large as one could fear. Counting only C++ and header files, it sums up to about 220K lines of code. Of course not everything interesting is in this tree, but still. Not mindbogglingly large.

Video conferencing

I’ll admit I’ve not participated in this sort of large-scale video conference before. With Vidyo and all the different people and offices signed in at once, it is quite an impressive setup actually. My only annoyance so far is that I couldn’t get Vidyo sound to work in Linux with my headphones: the other end could hear me but I couldn’t hear them! I had to resort to using Vidyo on a Windows laptop instead.

Doing the video conferencing on a laptop instead of on my desktop machine has its advantages when meetings happen in the evenings and the rest of the family is at home, since then I can move the machine and sit down somewhere where they won’t disturb me and I won’t disturb them.

Bugzilla

The bug tracker is really in the center for this project, or at least for how I view it and work with it right now. During my first week I’ve so far filed two bug reports and I’ve submitted a suggested patch for a third bug. One of my bugs (Bug 959100 – ParseChunkRemaining doesn’t detect chunk size overflow) has been reviewed fine and is now hopefully about to be committed.
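For the curious: that overflow issue is the classic one where parsing a large hexadecimal chunk-size silently wraps the counter. A hypothetical sketch in C of the kind of defensive check needed – not the actual Firefox code:

  #include <ctype.h>
  #include <stdint.h>

  /* Parse a hexadecimal chunk-size, rejecting values that would
     overflow instead of silently wrapping around. */
  static int parse_chunk_size(const char *p, int64_t *out)
  {
    int64_t size = 0;
    for(; isxdigit((unsigned char)*p); p++) {
      int digit = isdigit((unsigned char)*p) ? *p - '0' :
                  tolower((unsigned char)*p) - 'a' + 10;
      if(size > (INT64_MAX - digit) / 16)
        return -1;   /* would overflow: treat the chunk as malformed */
      size = size * 16 + digit;
    }
    *out = size;
    return 0;
  }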

I’ve requested commit access (#961018) as a “level 1” and I’ve signed the committer’s agreement. Level 1 is entry level and only lets me push to the Try server but still, I fully accept that there’s a process to follow and I’m in no hurry. I’ll get to level 3 soon enough I’m sure.

Mercurial

What can I say. After having used it a bit this week without any particularly fancy operations, I prefer git so much more. Of course I’m also much more used to git, but I find that for a lot of the stuff where both have similar concepts, I prefer the git way. Oh well, it’s just a tool. I’ll get by. Possibly I’ll try out the git mirror soon and see if that provides a more convenient environment for me.

curl

What impact did all this new protocol and network code stuff during my work days have on my curl activities?

I got inspired to fix both the chunked encoding parser and the cookie parser’s handling of max-age in libcurl.

What didn’t happen

I feel behind in the implementing-http2 department. I didn’t get my new work laptop yet.

Next week

Daniel's work place

More of the same: land more patches and figure out more code. Grab more smallish bugs others have filed and work on fixing them, for more practice.

Also, there’s an HTTPbis meeting in Zürich from Wednesday to Friday that I won’t go to (I’ll spare you the explanation why), but I’ll try to participate remotely.