
My talks at FOSDEM 2019

I’ll be celebrating my 10th FOSDEM when I travel down to Brussels again in early February 2019. That’s ten years in a row. It’ll also be the 6th year I present something there, as I’ve done these seven talks in the past:

My past FOSDEM appearances

2010. I talked Rockbox in the embedded room.

2011. libcurl, seven SSL libs and one SSH lib in the security room.

2015. Internet all the things – using curl in your device. In the embedded room.

2015. HTTP/2 right now. In the Mozilla room.

2016. an HTTP/2 update. In the Mozilla room.

2017. curl. On the main track.

2017. So that was HTTP/2, what’s next? In the Mozilla room.

DNS over HTTPS – the good, the bad and the ugly

On the main track, in Janson at 15:00 on Saturday 2nd of February.

DNS over HTTPS (aka “DoH”, RFC 8484) introduces a new transport protocol to do secure and private DNS messaging. Why it was made, how it works and how it sets users free (to resolve names).

The presentation will discuss reasons why DoH was deemed necessary and interesting to ship and deploy and how it compares to alternative technologies that offer similar properties. It will discuss how this protocol “liberates” users and offers stronger privacy (than the typical status quo).

How to enable and start using DoH today.

It will also discuss some downsides with DoH and what you should consider before you decide to use a random DoH server on the Internet.

HTTP/3

In the Mozilla room, at 11:30 on Saturday 2nd of February.

HTTP/3 is the coming new HTTP version.

This time TCP is replaced by the new transport protocol QUIC and things are different yet again! This is a presentation about HTTP/3 and QUIC, followed by a Q&A about everything HTTP.

HTTP/3 is the designated name for the next version of the protocol, currently under development within the QUIC working group in the IETF.

HTTP/3 is designed to improve in areas where HTTP/2 still has some shortcomings, primarily by changing the transport layer. HTTP/3 is the first major protocol to step away from TCP and instead use QUIC. I’ll talk about HTTP/3 and QUIC: why the new protocols are deemed necessary, how they work, how they change what gets sent over the network and what some of the coming deployment challenges will be.

DNS Privacy panel

In the DNS room, at 11:55 on Sunday 3rd of February.

This isn’t strictly a prepared talk or presentation but I’ll still be there and participate in the panel discussion on DNS privacy. I hope to get most of my finer points expressed in the DoH talk mentioned above, but I’m fully prepared to elaborate on some of them in this session.

HTTP/3 talk in Stockholm on January 22

HTTP/3 – the coming HTTP version

This time TCP is replaced by the new transport protocol QUIC and things are different yet again! This is a presentation by Daniel Stenberg about HTTP/3 and QUIC, followed by a Q&A about everything HTTP.

The presentation will be held in English. It will be recorded and possibly live-streamed. Organized by me, together with our friends at goto10. It is free of charge, but you need to register.

When

17:30 – 19:00
January 22, 2019

Goto 10: Hörsalen, Hammarby Kaj 10D plan 5

Register here!

Fancy map to goto 10


HTTP/3 Explained

I’m happy to tell you that the booklet HTTP/3 Explained is now ready for the world. It is entirely free and open and is available in several different formats to fit your reading habits. (It is not available on dead trees.)

The book describes what HTTP/3 and its underlying transport protocol QUIC are, why they exist, what features they have and how they work. The book is meant to be readable and understandable for most people with a rudimentary level of network knowledge or better.

These protocols are not done yet, and there aren’t even any implementations of them in the main browsers yet! The book will be updated and extended along the way as things change, implementations mature and the protocols settle.

If you find bugs, mistakes, something that needs to be explained better/deeper or otherwise want to help out with the contents, file a bug!

It was just a short while ago I mentioned the decision to change the name of the protocol to HTTP/3. That triggered me to refresh my document in progress and there are now over 8,000 words there to help.

The entire HTTP/3 Explained contents are available on github.

If you haven’t caught up with HTTP/2 quite yet, don’t worry. We have you covered for that as well, with the http2 explained book.

I’m leaving Mozilla

It’s been five great years, but now it is time for me to move on and try something else.

During these five years I’ve met and interacted with a large number of awesome people at Mozilla, lots of new friends! I got the chance to work from home and yet work with a global team on a widely used product, all done with open source. I have worked on internet protocols during work-hours (in addition to my regular spare-time work with them) and it’s been great! Heck, lots of the HTTP/2 development and the publication of that spec happened while I was employed by Mozilla, and I fondly participated in that. I shall forever have this time ingrained in my memory as a very good period of my life.

Already before I joined the Firefox development I understood some of the challenges of making a browser in the modern era, but that understanding has now been properly enriched with lots of hands-on code-digging in sometimes decades-old messy C++, a spaghetti armada of threads and the wild wild west of users on the Internet.

A very big thank you and a warm bye bye go to every one of my friends at Mozilla. I won’t be far off and I’m sure I will have reasons to see many of you again.

My last day as officially employed by Mozilla is December 11, 2018, but I plan to spend some of my remaining saved-up vacation days before then, so I’ll hand over most of my responsibilities well before that.

The future is bright but unknown!

I don’t yet know what to do next.

I have some ideas and communications with friends and companies, but nothing is firmly decided yet. I will certainly entertain you with a totally separate post on this blog once I have that figured out! Don’t worry.

Will it affect curl or other open source I do?

I had worked on curl for a very long time before joining Mozilla and I expect to keep doing curl and other open source things going forward. I don’t think my choice of future employer should have to affect that negatively too much, except of course during certain periods.

With me leaving Mozilla, the curl project also loses Mozilla as a primary sponsor, since that sponsorship consisted of them allowing me to spend some of my work days on curl, and that’s now over.

Short-term at least, this move might increase my curl activities since I don’t have any new job yet and I need to fill my days with something…

What about toying with HTTP?

I was involved in the IETF HTTPbis working group for many years before I joined Mozilla (it’s been over ten years now!) and I hope to be involved for many years still. I still have a lot of things I want to do with curl, and to keep curl the champion of its class I need to stay on top of the game.

I will continue to follow and work with HTTP and other internet protocols very closely. After all curl remains the world’s most widely used HTTP client.

Can I enter the US now?

No. That’s unfortunately not related. I’m not leaving Mozilla because of this problem and I don’t expect my visa situation to change because of this move. My visa counter is now showing more than 214 days since I applied.

HTTP/3

The protocol that’s been called HTTP-over-QUIC for quite some time has now changed its name and will officially become HTTP/3. This was triggered by this original suggestion by Mark Nottingham.

The QUIC working group in the IETF works on creating the QUIC transport protocol. QUIC is a TCP replacement done over UDP. QUIC was originally started as an effort by Google and was then more of an “HTTP/2-encrypted-over-UDP” protocol.

When the work took off in the IETF to standardize the protocol, it was split up into two layers: the transport and the HTTP parts. The idea being that this transport protocol can be used to transfer other data too and it’s not just done explicitly for HTTP or HTTP-like protocols. But the name was still QUIC.

People in the community have referred to these different versions of the protocol using informal names such as iQUIC and gQUIC to tell the IETF and Google QUIC protocols apart (since they differed quite a lot in the details). The protocol that sends HTTP over “iQUIC” was called “hq” (HTTP-over-QUIC) for a long time.

Mike Bishop scared the room at the QUIC working group meeting at IETF 103 when he presented this slide with what could be thought of as almost a logo…

On November 7, 2018 Dmitri of LiteSpeed announced that they and Facebook had successfully done the first-ever interop between two HTTP/3 implementations. Mike Bishop’s follow-up presentation in the HTTPbis session on the topic can be seen here. The consensus at the end of that meeting was that the new name is HTTP/3!

No more confusion. HTTP/3 is the coming new HTTP version that uses QUIC for transport!

Get the CA cert for curl

When you use curl to communicate with an HTTPS site (or any other protocol that uses TLS), it will by default verify that the server’s certificate is signed by a trusted Certificate Authority (CA). It does this by checking the CA bundle it was built to use, or the one it was instructed to use with the --cacert command line option.

Sometimes you end up in a situation where you don’t have the necessary CA cert in your bundle. It could then look something like this:

$ curl https://example.com/
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.se/docs/sslcerts.html

Do not disable!

A first gut reaction could be to disable the certificate check. Don’t do that. It will just end up in production or get copied by someone else, and then you’ll spread the insecure use to other places and eventually cause a security problem.

Get the CA cert

I’ll show you four different ways to fix this.

1. Update your OS CA store

Operating systems come with a CA bundle of their own and on most of them, curl is set up to use the system CA store. A system update often makes curl work again.
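
On a Debian-style system, for example, that update could be as simple as refreshing the ca-certificates package (a sketch; the exact command and package name vary between distributions):

$ sudo apt-get update && sudo apt-get install --reinstall ca-certificates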

This of course doesn’t help you if you have a self-signed certificate or otherwise use a CA that your operating system doesn’t have in its trust store.

2. Get an updated CA bundle from us

curl can be told to use a separate stand-alone file as its CA store, and conveniently enough the curl project provides an updated one on the curl web site. That one is automatically converted from the one Mozilla provides for Firefox and is updated daily. There is also a little backlog, so the ten most recent CA stores are available.

If you agree to trust the same CAs that Firefox trusts, this is a good choice.
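
For example, you could fetch the bundle and point curl to it like this (assuming the bundle’s current location on the curl web site):

$ curl -o cacert.pem https://curl.se/ca/cacert.pem
$ curl --cacert cacert.pem https://example.com/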

3. Get it with openssl

Now we’re approaching the less good options. It’s way better to get the CA certificates via other means than from the actual site you’re trying to connect to!

This method uses the openssl command line tool. The -servername option used below is there to set the SNI field, which is often necessary to tell the server which actual site’s certificate you want.

$ echo quit | openssl s_client -showcerts -servername server -connect server:443 > cacert.pem

A real world example, getting the certs for daniel.haxx.se and then getting the main page with curl using them:

$ echo quit | openssl s_client -showcerts -servername daniel.haxx.se -connect daniel.haxx.se:443 > cacert.pem

$ curl --cacert cacert.pem https://daniel.haxx.se

4. Get it with Firefox

Suppose you’re already browsing the site fine with Firefox. Then you can inspect the certificate using the browser and export it for use with curl.

Step 1 – click the i in the circle on the left of the URL in the address bar of your browser.

Step 2 – click the right arrow on the right side in the drop-down window that appeared.

Step 3 – new contents appear; now click the “More Information” at the bottom, which pops up a new separate window…

Step 4 – Here you get security information from Firefox about the site you’re visiting. Click the “View Certificate” button on the right. It pops up yet another separate window.

Step 5 – in this window full of certificate information, select the “Details” tab…

Step 6 – when switched to the Details tab, the certificate hierarchy is shown at the top; select the top choice there. This list will of course look different for different sites.

Step 7 – now click the “Export” button at the bottom left and save the file (which uses a .crt extension) somewhere suitable.

If you for example saved the exported certificate in /tmp, you could then use curl with that saved certificate something like this:

$ curl --cacert /tmp/GlobalSignRootCA-R3.crt https://curl.se

But I’m not using openssl!

This description assumes you’re using a curl that uses a CA bundle in the PEM format, which not all curl builds do – in particular, the ones built with NSS, Schannel (native Windows) or Secure Transport (native macOS and iOS) don’t.

If you use one of those, you then need to run additional commands to import the PEM formatted cert into your particular CA store.
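
For an NSS-based curl, such an import could look something like this sketch, using NSS’s certutil tool (the database path and the nickname here are assumptions you would adapt):

$ certutil -A -d sql:$HOME/.pki/nssdb -t "C,," -n "example CA" -i cacert.pem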

A CA store is many PEM files concatenated

Just concatenate many different PEM files into a single file to create a CA store with multiple certificates.
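
For example, with two hypothetical PEM files:

$ cat RootCA-1.pem RootCA-2.pem > ca-store.pem
$ curl --cacert ca-store.pem https://example.com/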

curl up 2019 will happen in Prague

The curl project is happy to invite you to the city of Prague, the Czech Republic, where curl up 2019 will take place.

curl up is our annual curl developers conference where we gather and talk Internet protocols, curl’s past, current situation and how to design its future. A weekend of curl.

In previous years we’ve gathered twenty-something people for an intimate meetup in a very friendly atmosphere. The way we like it!

In the spirit of moving the meeting around to make travel easier for different people, we have settled on the city of Prague for 2019, and we’ll be there March 29-31.

Sign up now!

Symposium on the Future of HTTP

This year, we’re starting off the Friday afternoon with a Symposium dedicated to “the future of HTTP”, which aims to be less about curl and more about where HTTP is and where it will go next. It is suitable for a slightly wider audience than just curl fans.

That’s Friday the 29th of March, 2019.

Program and talks

We are open for registrations and we would love to hear what you would like to come and present for us – on the topics of HTTP, of curl or related matters. I’m sure I will present something too, but it becomes a much better and more fun event if we distribute the talking as much as possible.

The final program for these days is not likely to get set until much later and rather close in time to the actual event.

The curl up 2019 wiki page is where more specific details will appear over time. Just go back there and check.

Helping out and planning?

If you want to follow the planning, help out, offer improvements or ask questions about any of this, join the curl-meet mailing list, which is dedicated to this!

Free of charge thanks to sponsors

We’re happy to call our event free, or “almost free” of charge and we can do this only due to the greatness and generosity of our awesome sponsors. This year we say thanks to Mullvad, Sticker Mule, Apiary and Charles University.

There’s still a chance for your company to help out too! Just get in touch.

curl up 2019 with logos

DNS-over-HTTPS is RFC 8484

The protocol we fondly know as DoH, DNS-over-HTTPS, is now officially RFC 8484 with the official title “DNS Queries over HTTPS (DoH)”. It documents the protocol that is already in production and used by several client-side implementations, including Firefox, Chrome and curl. Put simply, DoH sends a regular RFC 1035 DNS packet over HTTPS instead of over plain UDP.
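
As an illustration, a DoH query is just an HTTP request for the application/dns-message media type. Using the example query from RFC 8484 (asking for the A record of www.example.com, base64url-encoded) against a hypothetical server name, you can poke at it with curl itself:

$ curl -s -H 'accept: application/dns-message' \
    'https://doh.example.com/dns-query?dns=AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB' | xxd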

I’m happy to have contributed my little bits to this standard effort and I’m credited in the Acknowledgements section. I’ve also implemented DoH client-side several times now.

Firefox has done studies and tests in cooperation with a CDN provider (which has sometimes made people conflate Firefox’s DoH support with those studies and that operator). These studies have shown and proven that DoH is a working way for many users to do secure name resolves at a reasonable penalty cost. At least when using a fallback to the native resolver for the tricky situations. In general DoH resolves are slower than the native ones but in the tail end, the absolutely slowest name resolves got a lot better with the DoH option.

To me, DoH is partly necessary because the “DNS world” has failed to ship and deploy secure and safe name lookups to the masses and this is the one way applications “one layer up” can still secure our users.

More curl bug bounty

Together with Bountygraph, the curl project now offers money to security researchers for reporting security vulnerabilities to us.

https://bountygraph.com/programs/curl

The idea is that sponsors donate money to the bounty fund, and we will use that fund to hand out rewards for reported issues. It is a way for the curl project to help compensate researchers for the time and effort they spend helping us improving our security.

Right now the bounty fund is very small as we just started this project, but hopefully we can get a few sponsors interested and soon offer “proper” rewards at decent levels in case serious flaws are detected and reported here.

If you’re a company using curl or libcurl and value security, you know what you can do…

Even before this, people who reported security problems could ask for money from Hackerone’s IBB program, and this new program is in addition to that – even though you won’t be able to receive money from both bounties for the same issue.

After I announced this program on twitter yesterday, I did an interview with Arif Khan for latesthackingnews.com. Here’s what I had to say:

A few questions

Q: You have launched a self-managed bug bounty program for the first time. Earlier, IBB used to pay out for most security issues in libcurl. How do you think the idea of self-management of a bug bounty program, which has some obvious problems such as active funding, might eventually succeed?

First, this bounty program is run on bountygraph.com so I wouldn’t call it “self-managed” since we’re standing on a lot of infra set up and handled by others.

To me, this is an attempt to make a bounty program that is more visible as clearly a curl bounty program. I love Hackerone and the IBB program for what they offer, but A) it is very generic, so the fact that you can get money for curl flaws there is not easy to figure out and there’s no obvious way for companies to sponsor curl security research, and B) they are very picky about which flaws they pay money for (“only critical flaws”) and I hope this program can be a little more accommodating – assuming we get sponsors of course.

Will it work and make any difference compared to IBB? I don’t know. We will just have to see how it plays out.

Q: How do you think the crowdsourcing model is going to help this bug bounty program?

It’s crucial. If nobody sponsors this program, there will be no money to do payouts with and without payouts there are no bounties. Then I’d call the curl bounty program a failure. But we’re also not in a hurry. We can give this some time to see how it works out.

My hope is though that because curl is such a widely used component, we will get sponsors interested in helping out.

Q: What would be the maximum reward for most critical a.k.a. P0 security vulnerabilities for this program?

Right now we have a total of 500 USD to hand out. If you report a p0 bug now, I suppose you’ll get that. If we just get sponsors, I’m hoping we should be able to raise that reward level significantly. I might be very naive, but I think we won’t have to pay for very many critical flaws.

It goes back to the previous question: this model will only work if we get sponsors.

Q: Do you feel there’s a risk that bounty hunters could turn malicious?

I don’t think this bounty program particularly increases or reduces that risk to any significant degree. Malicious hunters probably already exist and I would assume that blackhat researchers might be able to extract more money on the less righteous markets if they’re so inclined. I don’t think we can “outbid” such buyers with this program.

Q: How will this new program mutually benefit security researchers as well as the open source community around curl as a whole?

Again, assuming that this works out…

Researchers can get compensated for the time and efforts they spend helping the curl project to produce and provide a more secure product to the world.

curl is used by virtually every connected device in the world in one way or another, affecting every human in the connected world on a daily basis. By making sure curl is secure we keep users safe; users of countless devices, applications and networked infrastructure.

Update: just hours after this blog post, Dropbox chipped in 32,768 USD to the curl bounty fund…

DoH in curl

DNS-over-HTTPS (DoH) is being designed (it is not quite an RFC yet, but very soon!) to allow internet clients to get increased privacy and security for their name resolves. I’ve previously explained the DNS-over-HTTPS functionality within Firefox that ships in Firefox 62 and I did a presentation about DoH and its future in curl at curl up 2018.

We are now introducing DoH support in curl. I hope this will not only allow users to start getting better privacy and security for their curl based internet transfers, but ideally this will also provide an additional debugging tool for DoH in other clients and servers.

Let’s take a look at how we plan to let applications enable this when using libcurl and how libcurl has to work with this internally to glue things together.

How do I make my libcurl transfer use DoH?

There’s one primary new option added: the “DoH URL”. An application sets CURLOPT_DOH_URL for a transfer, and then libcurl will use that service for resolving host names. Easy peasy. There should be nothing else in the transfer that changes or appears different. It’ll just resolve the host names over DoH instead of using the default resolver!

What about bootstrap, how does libcurl find the DoH server’s host name?

Since the DoH URL itself typically is given using a host name, that first host name will be resolved using the normal resolver – or if you so desire, you can provide the IP address for that host name with the CURLOPT_RESOLVE option just like you can for any host name.
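
A minimal sketch of that bootstrap in libcurl, with a made-up DoH host name and address:

/* pin the DoH server's address so its host name never needs the
   system resolver (CURLOPT_RESOLVE format: HOST:PORT:ADDRESS) */
struct curl_slist *host = curl_slist_append(NULL,
                                            "doh.example.com:443:192.0.2.1");
curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
curl_easy_setopt(curl, CURLOPT_DOH_URL, "https://doh.example.com/");
/* ... set CURLOPT_URL and perform the transfer as usual ... */
curl_slist_free_all(host);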

If done using the resolver, the resolved address will then be kept in libcurl’s DNS cache for a short while and the DoH connection will be kept in the regular connection pool with the other connections, making subsequent DoH resolves on the same handle much faster.

How do I use this from the command line?

Tell curl which DoH URL to use with the new --doh-url command line option:

$ curl --doh-url https://dns-server.example.com https://www.example.com

How do I make my libcurl code use this?

CURL *curl = curl_easy_init();
if(curl) {
  curl_easy_setopt(curl, CURLOPT_URL,
                   "https://curl.haxx.se/");
  /* resolve all host names for this transfer over DoH */
  curl_easy_setopt(curl, CURLOPT_DOH_URL,
                   "https://doh.example.com/");
  CURLcode res = curl_easy_perform(curl);
  curl_easy_cleanup(curl);
}

Internals

Internally, libcurl creates two new easy handles that it adds to the existing multi handle, and these then perform two HTTP requests (one asking for IPv4 addresses and one for IPv6 addresses) while the original transfer sits in the “waiting for name resolve” state. Once the DoH requests are completed, the original transfer’s state can progress and continue on.

libcurl already handles parallel transfers perfectly well, and by leveraging that existing support it was easy to add this new functionality and still have it work correctly non-blocking, and even event-based, depending on which libcurl API is being used.

We had to add a little special handling for the end of a transfer, since there are now easy handles that are created and added to the multi handle entirely without the user’s knowledge, and the code also needs to remove and delete those handles when they’re done serving their purpose.

Was this hard to add to a 20 year old code base?

Actually, no. It was surprisingly easy, but then I’ve also worked on a few different client-side DoH implementations already, so I had a clear view of how I wanted the functionality to work, and I’m very familiar with the libcurl internals.

Plus, everything inside libcurl is already using non-blocking code and the multi interface paradigms so the foundation for adding parallel transfers like this was already in place.

The entire DoH patch for curl, including documentation and test cases, was a mere 1500 lines.

Ship?

This is merged into the master branch in git and is planned to ship as part of the next release: 7.62.0 at the end of October 2018.