alt-svc in curl

RFC 7838 was published already back in April 2016. It describes the new HTTP header Alt-Svc or, as the title of the document says, HTTP Alternative Services.

HTTP Alternative Services

An alternative service in HTTP lingo is quite simply another server instance that can provide the same service and act as the same origin as the original one. The alternative service can run on another port, on another host name, on another IP address, or over another HTTP version.

An HTTP server can inform a client about the existence of such alternatives by returning this Alt-Svc header. The header, which has an expiry time, tells the client that there’s an optional alternative to this service, hosted on a given host name and port number and using a given protocol. If that client is a browser, it can connect to the alternative in the background and, if that works out fine, continue to use that host for as long as the alternative is said to be valid.

In reality, this header is a little similar to the DNS SRV or URI records: it points out a different route to the server than what its A/AAAA records say.

The Alt-Svc header came into life as an attempt to help out with HTTP/2 load balancing. With the introduction of HTTP/2, clients would suddenly use much more persistent and long-living connections instead of the very short ones used for traditional HTTP/1 web browsing, which changed the nature of how connections are made. This way, a system that is about to go down can hint clients on how to continue using the service elsewhere.

Alt-Svc: h2="backup.example.com:443"; ma=2592000;
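In this example, “h2” is the ALPN protocol identifier for HTTP/2, the quoted value is the alternative host name and port, and ma= (max-age) is the number of seconds the alternative may be considered valid – 2592000 seconds is 30 days.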

HTTP upgrades

Once that header was published, the by then already existing and deployed Google QUIC protocol switched to using the Alt-Svc header to hint clients (read “Chrome users”) that “hey, this service is also available over gQUIC”. (Prior to that, they used their own custom alternative header that basically had the same meaning.)

This is important because QUIC is not TCP. Resources on the web that are pointed out using traditional HTTPS:// URLs still imply that you connect to them using TCP on port 443 and negotiate TLS over that connection. Upgrading from HTTP/1 to HTTP/2 on the same connection was “easy” since they both still used TCP and TLS. All we needed then was the ALPN extension and voila: a nice and clean version negotiation.

To upgrade a client and server communication into a post-TCP protocol, the only official way to do it is to first connect using the lowest common denominator that the HTTPS URL implies: TLS over TCP. Only once the server tells the client what more there is to try can the client go on and try out the new toys.

For HTTP/3, this is the official way for HTTP servers to tell users about the availability of an HTTP/3 upgrade option.
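As an illustration, a server announcing an HTTP/3 alternative on the same host could respond with something like this (the exact ALPN token, like “h3-20” here, varies with the QUIC draft version):

    Alt-Svc: h3-20=":443"; ma=86400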

curl

I want curl to support HTTP/3 as soon as possible, and as I’ve mentioned above, understanding Alt-Svc is a key prerequisite for a working “bootstrap”. curl needs to support Alt-Svc. And when we implement support for it, we might as well support the whole concept and other protocol versions, and not limit it to HTTP/3 purposes.

curl will only consider received Alt-Svc headers when talking HTTPS, since only then can it know that it actually speaks with the right host, one with enough authority to point to other places.

Experimental

This is the first feature and code that we merge into curl under a new concept we use for “experimental” code. It is a way for us to mark this code as: we’re not quite sure exactly how everything should work, so we allow users in to test and help us smooth out the quirks, but as a consequence we might actually change how it works, both behavior- and API-wise, before we make the support official.

We strongly discourage anyone from shipping code marked experimental in production. You need to explicitly enable this in the build to get the feature. (./configure --enable-alt-svc)

But at the same time we urge and encourage interested users to test it out, try how it works and bring back feedback, criticism, praise and bug reports, and help us make it work the way we’d like it to work, so that we can land it as a “normal” feature as soon as possible.

Ship

The experimental alt-svc code has been merged into curl as of commit 98441f3586 (merged March 3rd 2019) and will be present in the curl code starting in the public release 7.64.1 that is planned to ship on March 27, 2019. I don’t have any time schedule for when to remove the experimental tag but ideally it should happen within just a few release cycles.

alt-svc cache

The curl implementation of alt-svc has an in-memory cache of known alternatives. It can also save that cache to a text file and load such a file back into memory. Saving the alt-svc cache to disk allows it to survive curl invocations and to truly work the way it was intended. The cache file stores the expiry timestamp per entry, so it doesn’t matter if you use a stale file.

curl --alt-svc

Caveat: I’m now describing how a feature works that, as I said above, might change before it ships. With the curl tool you ask for alt-svc support by pointing out the alt-svc cache file to use, or you pass “” (an empty name) to make it neither load nor save any file. curl loads an existing cache from that file at the start and, at the end, saves the cache back to it.
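At the time of writing, an invocation can look like this (the cache file name is of course just an example):

    curl --alt-svc altsvc-cache.txt https://example.com/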

curl has also long featured fancy connection options such as --resolve and --connect-to, which both let a user control where curl connects; in many cases they work a little like a static poor man’s alt-svc. Learn more about those in my curl another host post.
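For reference, they are used like this, mapping a name to a fixed address or redirecting a connection to a different host (the host names here are made up):

    curl --resolve example.com:443:192.0.2.1 https://example.com/
    curl --connect-to example.com:443:backup.example.com:443 https://example.com/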

libcurl options for alt-svc

We start out the alt-svc support for libcurl with two separate options. One sets the file name of the alt-svc cache on disk (CURLOPT_ALTSVC), and the other controls various aspects of how libcurl should behave in regards to alt-svc specifics (CURLOPT_ALTSVC_CTRL).
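A minimal sketch of how this can look in code, assuming the two option names above and the CURLALTSVC_* bitmask constants from the experimental header (remember: details may change before this becomes official):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* load and save the alt-svc cache in this file */
        curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc-cache.txt");

        /* allow switching to HTTP/1 and HTTP/2 alternatives */
        curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL,
                         (long)(CURLALTSVC_H1 | CURLALTSVC_H2));

        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }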

I’m quite sure that we will have reason to slightly adjust these when the HTTP/3 support comes closer to actually merging.

commercial curl support!

If you want commercial support, ports of curl to other operating systems or just instant help to fix your curl related problems, we’re here to help. Get in touch now! This is a premiere: this has not been offered by me or anyone else before.

I’m not sure I need to say it, but I personally have authored almost 60% of all commits in the curl source code during my more than twenty years in the project. I started the project and I’ve designed its architecture, etc. There is simply no one around with my curl experience and knowledge of curl internals. You can’t buy better curl expertise.

curl has become one of the world’s most widely used software components and is the transfer engine doing a large chunk of all non-browser Internet transfers in the world today. curl has reached this level of success entirely without anyone offering commercial services around it. Still, not every company and product out there has a team of curl experts, and in this demanding day and age we know there are times when you’d rather hire the right team to help you out.

We are the curl experts that can help you and your team. Contact us for all and any support questions at support@wolfssl.com.

What about the curl project?

I’m heading into this new chapter of my life and the curl project with the full knowledge that this blurs the lines between my job and my spare time even more than before. But fear not!

The curl project is free and open and will remain independent of any commercial enterprise helping out customers. I realize that me offering to deal with curl problems and solve curl issues for companies and organizations for compensation creates new challenges and questions about where the boundaries go, if for nothing else then for me personally. I still think this is worth pursuing, and I’m sure we can figure out and handle whatever minor issues this leads to.

My friends, the community, the users and the harsh critics on twitter will all help me stay true and honest. I know this. This should end up a plus for the curl project in general, as well as for me personally. More focus, more work and more money involved in curl related activities should improve the project.

It is with great joy and excitement I take on this new step.

curl 7.64.0 – like there’s no tomorrow

I know, has it been eight weeks since the previous release already? But yes it has – I double-checked! And so, as the laws of nature dictate, yet another fresh curl version has been released out into the wild.

Numbers

the 179th release
5 changes
56 days (total: 7,628)
76 bug fixes (total: 4,913)
128 commits (total: 23,927)
0 new public libcurl functions (total: 80)
3 new curl_easy_setopt() options (total: 265)
1 new curl command line option (total: 220)
56 contributors, 29 new (total: 1,904)
32 authors, 13 new (total: 658)
3 security fixes (total: 87)

Security fixes

This release we have no less than three different security related fixes. I’ll describe them briefly here, but for the finer details I advise you to read the dedicated pages and documentation we’ve written for each one of them.

CVE-2018-16890 is a bug where the existing range check in the NTLM code is wrong, which allows a malicious or broken NTLM server to send a header to curl that will make it read outside a buffer and possibly crash or otherwise misbehave.

CVE-2019-3822 is related to the previous one but with much worse potential effects. Another bad range check actually allows a sneaky NTLMv2 server to send back crafted contents that can overflow a local stack based buffer. In the worst case this is potentially a remote code execution risk. I think this might be the worst security issue found in curl in a long time. A small comfort is that by disabling NTLM, you avoid it until patched.

CVE-2019-3823 is a potential read out of bounds of a heap based buffer in the SMTP code. It is fairly hard to trigger and it will mostly cause a crash when it does.

Changes

  1. curl now supports Mike West’s cookie update known as draft-ietf-httpbis-cookie-alone. It basically means that cookies that are marked “secure” have to be set over HTTPS to be allowed to override a previous secure cookie. Safer cookies.
  2. The --resolve option as well as CURLOPT_RESOLVE now support specifying a wildcard as port number.
  3. libcurl can now send trailing headers in chunked uploads using the new CURLOPT_TRAILERFUNCTION and CURLOPT_TRAILERDATA options (see the sketch after this list).
  4. curl now offers options to allow HTTP/0.9 responses. The default is still to allow them, but the plan is to deprecate that and switch the default to off in six months’ time.
  5. curl now uses higher resolution timer accuracy on Windows.
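For the trailing headers change above, here’s a quick sketch of how I understand the new API is meant to be used (check the documentation for the final details):

    #include <curl/curl.h>

    /* called by libcurl when the chunked upload body is complete, to
       collect the trailing header(s) to send */
    static int trailer_cb(struct curl_slist **trailers, void *userdata)
    {
      (void)userdata;
      *trailers = curl_slist_append(*trailers, "My-Checksum: 1a2b3c");
      return CURL_TRAILERFUNC_OK;
    }

    /* ... then, on an easy handle set up for a chunked upload: */
    /* curl_easy_setopt(curl, CURLOPT_TRAILERFUNCTION, trailer_cb); */
    /* curl_easy_setopt(curl, CURLOPT_TRAILERDATA, NULL);           */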

Bug-fixes

Check out the full change log to see the whole list. Here are some of the bug fixes I consider to be most noteworthy:

  • We re-implemented the code coverage support for autotools builds due to a license problem. It turned out the previously used macro was GPLv2 licensed in an unusual way for autoconf macros.
  • We make sure --xattr never stores URLs with credentials, following the security problem reported on a related tool. Not considered a security problem since this is actually what the user asked for, but still done like this for added safety.
  • With -J, curl should not be allowed to append to the file. It could lead to curl appending to a file that was in the download directory since before.
  • --tls-max didn’t work correctly on macOS when built to use Secure Transport.
  • A couple of improvements in the libssh-powered SSH backend.
  • Adjusted the build for OpenSSL 3.0.0 (the coming future version).
  • We no longer refer to Schannel as “winssl” anywhere. winssl is dead. Long live Schannel!
  • When built with mbedTLS, ignore SIGPIPE accordingly!
  • Test cases were adjusted and verified to work fine up until February 2037.
  • We fixed several parsing errors in the URL parser, mostly related to IPv6 addresses. Regressions introduced in 7.62.0.

Next

The next release cycle will be one week shorter and we expect to ship next release on March 27 – just immediately after curl turns 22 years old. There are already several changes in the pipe so we expect that to become 7.65.0.

We love your help and support! File bugs you experience or see, submit pull requests for the features or corrections you work on!

My 10th FOSDEM

I didn’t present anything during last year’s conference, so I submitted my DNS-over-HTTPS presentation proposal early on for this year’s FOSDEM. Someone suggested it was generic enough that I should rather ask for the main track instead of the DNS room, and so I did. Then time passed and in November 2018 “HTTP/3” was officially coined as a real term, and then, after the Mozilla devroom’s deadline had been extended for a week, I filed my second proposal. I might possibly even have been an hour or two after the deadline. I hoped at least one of them would be accepted.

Not only were both my proposed talks accepted, I was also approached and couldn’t decline the honor of participating in the DNS privacy panel. OK, three slots in the same FOSDEM is a new record for me, but hey, surely that’s no problem for a grown-up…

HTTP/3

I of course hoped there would be interest in what I had to say.

I spent the time immediately before my talk with a coffee in the awesome newly opened cafeteria, to have a moment of calm before I started. I then headed over to the U2.208 room maybe half an hour before the start time.

It was packed. Quite literally there were hundreds of persons waiting in the area outside the U2 rooms and there was this totally massive line of waiting visitors queuing to get into the Mozilla room once it would open.

The “Sorry, this room is FULL” sign is commonly seen at FOSDEM.

People don’t know who I am by appearance, so I certainly didn’t get any special treatment while waiting for my talk to start. I waited in line with the rest, and when the time for my presentation got closer I just had to excuse myself, leave my friends behind and push through the crowd. A conference admin managed to tell me “sorry, it’s full” before one of the room organizers recognized me as the speaker of the next talk, and I could walk past a very long line of humans who would eventually not be able to get in. The room could fit 170 souls, and every single seat was occupied when I started my presentation just a few minutes late.

This presentation could have filled a much larger room. Two years ago my HTTP/2 talk filled up the 300 seat room Mozilla had that year.

Video

Video from my HTTP/3 talk. Duration 1 hour.

The slides from my HTTP/3 presentation.

DNS over HTTPS

I tend to need a little “landing time” after having done a presentation, to cool off and come back to normal senses and adrenaline levels again. I got myself a lunch, a beer and chatted with friends in the cafeteria (again). During this conversation it struck me that I had forgotten something in my coming presentation, so I added a slide that I felt would improve it (the screenshot showing the “about:networking#dns” output with DoH enabled). In what felt like no time, it was again time to move. I walked over to Janson, the giant hall that fits 1,470 persons, entered a few minutes ahead of my scheduled time and began setting up my machine.

I started off with a little technical glitch: the projector was correctly detected and set up as a second screen on my laptop, but it used too high a resolution. After just a short moment of panic I lowered the resolution on that screen manually and the image appeared fine. Phew! With a slightly raised pulse, I watched the room fill up. Almost full – I estimate over 90% of the seats were occupied.

The DNS over HTTPS talk seen from far back. Photo by Steve Holme.

This was a brand new talk with all new material and I performed it for the largest audience I think I’ve ever talked in front of.

Video

Video of my DNS over HTTPS presentation. Duration 50 minutes.

To no surprise, my talk triggered questions and objections. I spent a while in the corridor behind Janson afterward, discussing DoH details, the future of secure DNS and other subtle points of the different protocols involved. In the end I think I managed pretty well, and I had expected more arguments and tougher questions. This is, after all, the single topic I’ve received more abuse and name-calling for than anything else I’ve worked on in my 20+ years in Internet protocols. (After all, I now often refer to myself and what I do as webshit.)

My DNS over HTTPS slides.

DNS Privacy panel

I never really intended to involve myself in DNS privacy discussions, but due to the constant misunderstandings and mischaracterizations (both on purpose and by ignorance) sometimes spread about DoH, I’ve felt a need to stand up for it a few times. I think that was a contributing factor to me getting invited to be part of the DNS privacy panel that the organizers of the DNS devroom setup.

There are several problems and challenges left to solve before we’re in a world with correct and mostly secure DNS. DoH is one attempt to raise the bar. I was content to have had the opportunity to really spell out my view of things before the DNS privacy panel.

Sitting next to these giants from the DNS world, Stéphane Bortzmeyer and Bert Hubert, I discussed DoT, DoH, DNS centralization, user choice, quad-dns-hosters and more. The discussion didn’t get very heated; instead I think it showed that we’re all largely in agreement that we need more secure DNS, and that there are obstacles in the way forward that we need to keep working on to overcome. Moderator Jan-Piet Mens did an excellent job I think, handing over the word, juggling the questions and taking in questions from the audience.

Video

Video from the DNS Privacy panel. Duration 30 minutes.

Ten years, ten slots

Appearing in three scheduled slots during the same FOSDEM was a bit much, and it effectively kept me from attending many other talks. They were all great fun to do though, and I appreciate people giving me the chance to share my knowledge and views with the world. As usual, it was all very nicely organized and handled. The videos of each presentation are linked to above.

I met many people, old and new friends. I handed out a lot of curl stickers and I enjoyed talking to people about my recently announced new job at wolfSSL.

After ten consecutive annual visits to FOSDEM, I have appeared in ten program slots!

I fully intend to go back to FOSDEM again next year. For all the friends, the waffles, the chats, the beers, the presentations and then for the waffles again. Maybe I will even present something…

I’m on team wolfSSL

Let me start by saying thank you to all and everyone who sent me job offers or otherwise reached out with suggestions and interesting career moves. I received more than twenty different offers, and almost every one of them was a truly good option that I could have said yes to and still pulled home a good job. What a luxury challenge to have to select something from that! Publicly announcing that I was leaving Mozilla turned out to be a great ego-boost.

I took some time off to really reflect and contemplate on what I wanted from my next career step. What would the right next move be?

I love working on open source. Internet protocols, transfers and writing libraries in C are pure fun for me. Could I get all that and yet keep working from home, not sacrifice my wage, and perhaps integrate working on curl better into my day to day job?

I talked to different companies. Very interesting companies too, where I have friends and people who like me and who really wanted to get me working for them, but in the end there was one offer with a setup that stood out: one offer that checked basically every box on my wish-list.

wolfSSL

On February 5, 2019 I’m starting my new job at wolfSSL. My short and sweet period as unemployed is over and now it’s full steam ahead again! (Some members of my family have expressed that they haven’t really noticed any difference between me having a job and me not having a job as I spend all work days the same way nevertheless: in front of my computer.)

Starting now, we offer commercial curl support and various services for and around curl that companies and organizations previously really haven’t been able to get. Time I do not spend on curl related activities for paying customers I will spend on other networking libraries in the wolfSSL “portfolio”. I’m sure I will be able to keep busy.

I’ve met Larry at wolfSSL in person many times over the years, and every year at FOSDEM I’ve made certain to say hello to my wolfSSL friends in the booth they’ve had there for years. They’re truly old-time friends.

wolfSSL is mostly a US-based company – I’m the only Swede on the team and the only one based in Sweden. My new colleagues all of course know just as well as you that I’m prevented from traveling to the US. All coming physical meetings with my work mates will happen in other countries.

commercial curl support!

We offer all sorts of commercial support for curl. I’ll post separately with more details around this.

HTTP/3 talk on video

Yesterday, I attracted enough of an audience to fill up the largest presentation room GOTO 10 has, which means about one hundred interested souls.

The subject of the day was HTTP/3. The event was filmed with a Mevo camera and I captured the presentation directly from my laptop as well; I then stitched the two sources together into this final version late last night. As you’ll notice, the sound isn’t awesome and the rest of the “production” isn’t exactly top notch either, but hey, I don’t think it matters too much.

I’ll talk about HTTP/3 (Photo by Jon Åslund)
I’m Daniel Stenberg. I was handed a medal from the Swedish king in 2017 for my work on… (Photo by OpenTokix)
HTTP/2 vs HTTP/3 (Photo by OpenTokix)
Some of the challenges to deploy HTTP/3 are… (Photo by Jonathan Sulo)

The slide set can also be viewed on slideshare.

QUIC and missing APIs

I trust you’ve heard by now that HTTP/3 is coming. It is the designated next HTTP version, targeted to be published as an RFC in July 2019. Not very far off.

HTTP/3 will not be done over TCP. It will only be performed over QUIC, a transport protocol replacement for TCP that is always encrypted. There’s no clear-text version of QUIC.

TLS 1.3

The encryption in QUIC is based on TLS 1.3 technologies which I believe everyone thinks is a good idea and generally the correct decision. We need to successively raise the bar as we move forward with protocols.

However, QUIC is not only a transport protocol that does encryption by itself – where TLS is typically (and was designed as) a protocol run on top of TCP. It was also designed by a team of engineers who came up with a design that requires APIs from the TLS layer that the traditional TLS-over-TCP use case doesn’t need!

New TLS APIs

A QUIC implementation needs to extract traffic secrets from the TLS connection and it needs to be able to read/write TLS messages directly – not using the TLS record layer. TLS records are what’s used when we send TLS over TCP. (This was discussed and decided back around the time for the QUIC interim in Kista.)
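To make this concrete, here is roughly the shape of such an API, sketched after BoringSSL’s QUIC callback interface (callback names as in BoringSSL’s ssl.h; treat the exact signatures as approximate):

    #include <openssl/ssl.h>  /* BoringSSL */

    /* invoked when the TLS stack derives new traffic secrets for an
       encryption level - QUIC uses these for its packet protection */
    static int set_encryption_secrets(SSL *ssl,
                                      enum ssl_encryption_level_t level,
                                      const uint8_t *read_secret,
                                      const uint8_t *write_secret,
                                      size_t secret_len)
    {
      return 1;
    }

    /* invoked when TLS has handshake bytes to send - QUIC puts them in
       CRYPTO frames itself, no TLS record layer involved */
    static int add_handshake_data(SSL *ssl,
                                  enum ssl_encryption_level_t level,
                                  const uint8_t *data, size_t len)
    {
      return 1;
    }

    static int flush_flight(SSL *ssl) { return 1; }

    static int send_alert(SSL *ssl, enum ssl_encryption_level_t level,
                          uint8_t alert) { return 1; }

    static const SSL_QUIC_METHOD quic_method = {
      set_encryption_secrets, add_handshake_data, flush_flight, send_alert,
    };

    /* a QUIC stack registers this with SSL_set_quic_method() and feeds
       received handshake bytes in with SSL_provide_quic_data() */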

These operations need APIs that are still missing in, for example, the very popular OpenSSL library, but also in other commonly used ones like GnuTLS and libressl. And of course in schannel and Secure Transport.

Libraries known to already have done the job and expose the necessary mechanisms include BoringSSL, NSS, quicly, PicoTLS and Minq. All of those are, incidentally, less mainstream TLS libraries with a more limited number of application users. They’re also more or less developed by people who are actively engaged in the QUIC protocol development.

The QUIC libraries in progress now are typically using either one of the TLS libraries that already are adapted or do what ngtcp2 does: it hosts a custom-patched version of OpenSSL that brings the needed functionality.

Matt Caswell of the OpenSSL development team acknowledged this situation already back in September 2017, but so far we haven’t seen this result in updated code shipped in a released version.

curl and QUIC

curl is TLS library agnostic and can get built with around 12 different TLS libraries – or several at once actually, as you can build it to allow users to select the TLS backend at run-time!
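That run-time selection is done with the curl_global_sslset() API. A tiny sketch (which backends are available of course depends on how libcurl was built):

    #include <curl/curl.h>

    int main(void)
    {
      /* must be called before curl_global_init() */
      if(curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL) !=
         CURLSSLSET_OK)
        return 1; /* that backend is not available in this build */

      curl_global_init(CURL_GLOBAL_DEFAULT);
      /* ... transfers now use the selected TLS backend ... */
      curl_global_cleanup();
      return 0;
    }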

OpenSSL is without competition the most popular library to build curl with, outside of proprietary operating systems like macOS and Windows 10. But even the vendor-built and provided macOS and Windows versions are built with libraries that lack the APIs for this.

With our current keen interest in QUIC and HTTP/3 support for curl, we’re about to run into an interesting TLS situation. How exactly is someone going to build curl to simultaneously support both traditional TLS based protocols as well as QUIC going forward?

I don’t have a good answer to this yet. Right now (assuming we would have the code ready in our end, which we don’t), we can’t ship QUIC or HTTP/3 support enabled for curl built to use the most popular TLS libraries! Hopefully by the time we get our code in order, the situation has improved somewhat.

This will slow down QUIC deployment

I’m personally convinced that this little API problem will be friction enough when going forward that it will slow down and hinder QUIC deployment at least initially.

When the HTTP/2 spec shipped in May 2015, it introduced a dependency on the fairly new TLS extension called ALPN, which for a long time caused headaches for server admins, since ALPN wasn’t supported in the OpenSSL versions that were typically installed and used at the time; you had to upgrade OpenSSL to version 1.0.2 to get it.

At that time, almost four years ago, OpenSSL 1.0.2 was already released and the problem was big enough that you could just upgrade to it. This time, the API we’re discussing here isn’t even in a beta version of OpenSSL and thus hasn’t shipped in any release yet. That’s far worse than the HTTP/2 situation, and that one took a few years to ride out.

Will we get these APIs into an OpenSSL release to test before the QUIC specification is done? If the schedule sticks, there’s about six months left…

My talks at FOSDEM 2019

I’ll be celebrating my 10th FOSDEM when I travel down to Brussels again in early February 2019. That’s ten years in a row. It’ll also be the 6th year I present something there, as I’ve done these seven talks in the past:

My past FOSDEM appearances

2010. I talked Rockbox in the embedded room.

2011. libcurl, seven SSL libs and one SSH lib in the security room.

2015. Internet all the things – using curl in your device. In the embedded room.

2015. HTTP/2 right now. In the Mozilla room.

2016. an HTTP/2 update. In the Mozilla room.

2017. curl. On the main track.

2017. So that was HTTP/2, what’s next? In the Mozilla room.

DNS over HTTPS – the good, the bad and the ugly

On the main track, in Janson at 15:00 on Saturday 2nd of February.

DNS over HTTPS (aka “DoH”, RFC 8484) introduces a new transport protocol to do secure and private DNS messaging. Why was it made, how does it work and how are users set free (to resolve names)?

The presentation will discuss reasons why DoH was deemed necessary and interesting to ship and deploy and how it compares to alternative technologies that offer similar properties. It will discuss how this protocol “liberates” users and offers stronger privacy (than the typical status quo).

How to enable and start using DoH today.

It will also discuss some downsides with DoH and what you should consider before you decide to use a random DoH server on the Internet.

HTTP/3

In the Mozilla room, at 11:30 on Saturday 2nd of February.

HTTP/3 is the next coming HTTP version.

This time TCP is replaced by the new transport protocol QUIC and things are different yet again! This is a presentation about HTTP/3 and QUIC with a following Q&A about everything HTTP.

HTTP/3 is the designated name for the coming next version of the protocol that is currently under development within the QUIC working group in the IETF.

HTTP/3 is designed to improve in areas where HTTP/2 still has some shortcomings, primarily by changing the transport layer. HTTP/3 is the first major protocol to step away from TCP and instead it uses QUIC. I’ll talk about HTTP/3 and QUIC. Why the new protocols are deemed necessary, how they work, how they change how things are sent over the network and what some of the coming deployment challenges will be.

DNS Privacy panel

In the DNS room, at 11:55 on Sunday 3rd of February.

This isn’t strictly a prepared talk or presentation but I’ll still be there and participate in the panel discussion on DNS privacy. I hope to get most of my finer points expressed in the DoH talk mentioned above, but I’m fully prepared to elaborate on some of them in this session.

A curl 2018 retrospective

Another year reaches its calendar end and a new year awaits around the corner. In the curl project we’ve had another busy and eventful year. Here’s a look back at some of the fun we had during 2018.

Releases

We started out the year with the 7.58.0 release in January, and we managed to squeeze in another six releases during the year. In total we count 658 documented bug-fixes and 31 changes. The total number of bug-fixes was actually slightly lower this year compared to last year’s 683. An average of 1.8 bug-fixes per day is still not too shabby.

Authors

I’m very happy to say that we again managed to break our previous record, as 155 unique authors contributed code. 111 of them did so for the first time in the project, and 126 did fewer than three commits during the year. Basically this means we merged code from a brand new author every three days throughout the year!

The list of “contributors”, where we also include helpers, bug reporters, security researchers etc, increased by another 169 new names this year, to a total of 1,829 in the last release of the year. Of course we also got a lot of help from people who were already mentioned in there!

Will we be able to reach 2000 names before the end of 2019?

Commits

At the time of this writing, almost two weeks before the end of the year, we’re still behind the last few years with 1,051 commits done this year. 1,381 commits were done in 2017.

Daniel’s commit share

I personally authored 535 (50.9%) of all commits during 2018. Marcel Raad did 65 and Daniel Gustafsson 61. In general I maintain my share of the changes done in the project over time; possibly I’ve even increased it slightly the last few years. This graph shows my share of the commits layered on top of the number of commits done.

Vulnerabilities

This year we got exactly the same number of security problems reported as we did last year: 12. Some of the problems were one-offs due to curl being added to the OSS-Fuzz project in 2018; it took a while for it to really hit some of our soft spots, and since we’ve seen a slow-down in reports from there, it’ll be interesting to see if 2019 will be a brighter year in this department. (In total, OSS-Fuzz is credited with having found six security vulnerabilities in curl to date.)

During the year we managed to both introduce a new bug bounty program and retract that very same program again when it shut down almost at once! 🙁

Lines of code

Counting all lines in the git repo in the src, lib and include directories, they grew nearly 6,000 lines (3.7%) during the year to 155,912. The primary code growing activities this year were:

  1. DNS-over-HTTPS support
  2. The new URL API and using that internally as well

Deprecating legacy

In July we created the DEPRECATE.md document to keep order of some of the things we’re stowing away in the cyberspace attic. During the year we cut off axTLS support as a first example of this deprecation procedure. HTTP pipelining, the global DNS cache and HTTP/0.9-accepted-by-default are the features next in line marked for removal, and the first two are already disabled in code.

curl up

We had our second curl conference this year, in Stockholm. It was a blast again and I’m already looking forward to curl up 2019 in Prague.

Sponsor updates

Yours truly quit Mozilla, and with that we lost them as a sponsor of the curl project. We have however gotten several new backers and sponsors over the year since we joined opencollective, and we can receive donations from there.

Governance

Together with a bunch of core team members I put together a two-step proposal that I posted back in October:

  1. we join an umbrella organization
  2. we create a “board” to decide over money

As the first step has turned out to be a very slow operation (i.e. we’ve applied, but the process has not gone very far yet), we haven’t made step 2 happen yet either.

2019

Things that didn’t happen in 2018 but very well might happen in 2019 include:

  1. Some first HTTP/3 and QUIC code attempts in curl
  2. HSTS support? A pull request for this has been lingering for a while already.

Note: the numbers for 2018 in this post were extracted and graphs were prepared a few weeks before the actual end of year, so some of the data quite possibly changed a little bit since.
