If you want commercial support, ports of curl to other operating systems or just instant help to fix your curl related problems, we're here to help. Get in touch now! This is a premiere: this has not been offered by me or anyone else before.
I'm not sure I need to say it, but I have personally authored almost 60% of all commits in the curl source code during my more than twenty years in the project. I started the project and I designed its architecture, among other things. There is simply no one around with my curl experience and knowledge of curl internals. You can't buy better curl expertise.
curl has become one of the world's most widely used software components and is the transfer engine doing a large chunk of all non-browser Internet transfers in the world today. curl has reached this level of success entirely without anyone offering commercial services around it. Still, not every company and product out there has a team of curl experts, and in this demanding day and age we know there are times when you would rather hire the right team to help you out.
We are the curl experts that can help you and your team. Contact us for all and any support questions at firstname.lastname@example.org.
What about the curl project?
I’m heading into this new chapter of my life and the curl project with the full knowledge that this blurs the lines between my job and my spare time even more than before. But fear not!
The curl project is free and open and will remain independent of any commercial enterprise helping out customers. I realize that me offering to deal with curl problems and solve curl issues for companies and organizations, for compensation, creates new challenges and questions about where the boundaries go, if for nothing else then for me personally. I still think this is worth pursuing, and I'm sure we can figure out and handle whatever minor issues this leads to.
My friends, the community, the users and the harsh critics on twitter will all help me stay true and honest. I know this. This should end up being a plus for the curl project in general as well as for me personally. More focus, more work and more money involved in curl related activities should improve the project.
It is with great joy and excitement I take on this new step.
I know, has it really been eight weeks since the previous release? But yes it has – I double-checked! And so, as the laws of nature dictate, yet another fresh curl version has been released out into the wild.
the 179th release
5 changes
56 days (total: 7,628)
76 bug fixes (total: 4,913)
128 commits (total: 23,927)
0 new public libcurl functions (total: 80)
3 new curl_easy_setopt() options (total: 265)
1 new curl command line option (total: 220)
56 contributors, 29 new (total: 1,904)
32 authors, 13 new (total: 658)
3 security fixes (total: 87)
This release we have no fewer than three different security related fixes. I'll describe them briefly here, but for the finer details I advise you to read the dedicated pages and documentation we've written for each one of them.
CVE-2018-16890 is a bug where an existing range check in the NTLM code is wrong, which allows a malicious or broken NTLM server to send a header to curl that makes it read outside a buffer and possibly crash or otherwise misbehave.
CVE-2019-3822 is related to the previous one but with much worse potential effects. Another bad range check allows a sneaky NTLMv2 server to send back crafted contents that can overflow a local stack based buffer. In the worst case this is potentially a remote code execution risk. I think this might be the worst security issue found in curl in a long time. A small comfort is that by disabling NTLM, you avoid it until patched.
CVE-2019-3823 is a potential read out of bounds of a heap based buffer in the SMTP code. It is fairly hard to trigger and it will mostly cause a crash when it does.
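To illustrate the class of mistake behind these range-check bugs (in particular CVE-2019-3822), here is a minimal, hypothetical C sketch – not the actual curl code – of how a range check done with unsigned subtraction can underflow and let an oversized, attacker-controlled length through:

```c
#include <string.h>

#define BUFSIZE 1024

/* broken: if 'offset' is larger than BUFSIZE, the unsigned subtraction
   wraps around to a huge value, the check passes and memcpy() writes
   outside 'dst' */
static void bad_copy(char *dst, const unsigned char *payload,
                     size_t len, size_t offset)
{
  if(len < BUFSIZE - offset)
    memcpy(&dst[offset], payload, len);
}

/* fixed: check 'offset' first so the subtraction can never underflow */
static void good_copy(char *dst, const unsigned char *payload,
                      size_t len, size_t offset)
{
  if(offset < BUFSIZE && len <= BUFSIZE - offset)
    memcpy(&dst[offset], payload, len);
}
```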
Check out the full change log to see the whole list. Here are some of the bug fixes I consider to be most noteworthy:
We re-implemented the code coverage support for autotools builds due to a license problem. It turned out the previously used macro was GPLv2 licensed in an unusual way for autoconf macros.
We make sure --xattr never stores URLs with credentials, following the security problem reported against a related tool. It's not considered a security problem in curl since this is what the user asked for, but it is still done like this for added safety.
With -J, curl is no longer allowed to append to the file. Appending could make curl add data to a file that already existed in the download directory since before (see the sketch after this list).
--tls-max didn't work correctly on macOS when built to use Secure Transport.
A couple of improvements in the libssh-powered SSH backend.
Adjusted the build for OpenSSL 3.0.0 (the upcoming version).
We no longer refer to Schannel as “winssl” anywhere. winssl is dead. Long live Schannel!
When built with mbedTLS, ignore SIGPIPE accordingly!
Test cases were adjusted and verified to work fine up until February 2037.
We fixed several parsing errors in the URL parser, mostly related to IPv6 addresses. Regressions introduced in 7.62.0.
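As a small aside on the -J fix mentioned above: the standard way to guarantee you never append to a pre-existing file is to create it with O_CREAT|O_EXCL. A minimal sketch of that pattern – a simplified illustration, not the actual curl code:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
  /* O_EXCL makes open() fail with EEXIST if the file already exists,
     so we can never end up appending to an old file by accident */
  int fd = open("download.bin", O_WRONLY | O_CREAT | O_EXCL, 0644);
  if(fd == -1) {
    perror("open");
    return 1;
  }
  /* ... write the downloaded contents to fd ... */
  close(fd);
  return 0;
}
```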
The next release cycle will be one week shorter and we expect to ship the next release on March 27 – just after curl turns 21 years old. There are already several changes in the pipe, so we expect that to become 7.65.0.
We love your help and support! File bugs you experience or see, submit pull requests for the features or corrections you work on!
I didn't present anything during last year's conference, so I submitted my DNS-over-HTTPS presentation proposal early on for this year's FOSDEM. Someone suggested it was generic enough that I should ask for a main track slot instead of the DNS room, and so I did. Then time passed, and in November 2018 "HTTP/3" was officially coined as a real term. After the Mozilla devroom's deadline had been extended for a week, I filed my second proposal. I might possibly even have been an hour or two past the deadline. I hoped at least one of them would be accepted.
Not only were both my proposed talks accepted, I was also approached and couldn't decline the honor of participating in the DNS privacy panel. Ok, three slots in the same FOSDEM is a new record for me, but hey, surely that's no problem for a grown-up…
I of course hoped there would be interest in what I had to say.
I spent the time immediately before my talk with a coffee in the awesome newly opened cafeteria, to get a moment of calm before I started. I then headed over to the U2.208 room maybe half an hour before the start time.
It was packed. Quite literally there were hundreds of persons waiting in the area outside the U2 rooms and there was this totally massive line of waiting visitors queuing to get into the Mozilla room once it would open.
People don’t know who I am by my appearance so I certainly didn’t get any special treatment, waiting for my talk to start. I waited in line with the rest and when the time for my presentation started to get closer I just had to excuse myself, leave my friends behind and push through the crowd. I managed to get a “sorry, it’s full” told to me by a conference admin before one of the room organizers recognized me as the speaker of the next talk and I could walk by a very long line of humans that eventually would end up not being able to get in. The room could fit 170 souls, and every single seat was occupied when I started my presentation just a few minutes late.
This presentation could have filled a much larger room. Two years ago my HTTP/2 talk filled up the 300 seat room Mozilla had that year.
I tend to need a little "landing time" after having done a presentation, to cool off and come back to normal senses and adrenaline levels again. I got myself a lunch, a beer and chatted with friends in the cafeteria (again). During this conversation, it struck me that I had forgotten something in my coming presentation, so I added a slide that I felt would improve it (the screenshot showing "about:networking#dns" output with DoH enabled). In what felt like no time, it was time to move again. I walked over to Janson, the giant hall that fits 1,470 persons, entered it a few minutes ahead of my scheduled time and began setting up my machine.
I started off with a little technical glitch: the projector was correctly detected and set up as a second screen on my laptop, but it was given a resolution that was too high for it. After a short moment of panic I lowered the resolution on that screen manually and the image appeared fine. Phew! With a slightly raised pulse, I witnessed the room fill up. Almost full – I estimate over 90% of the seats were occupied.
This was a brand new talk with all new material and I performed it for the largest audience I think I’ve ever talked in front of.
To no surprise, my talk triggered questions and objections. I spent a while in the corridor behind Janson afterward, discussing DoH details, the future of secure DNS and other subtle points of the different protocols involved. In the end I think I managed pretty well, and I had expected more arguments and more tough questions. This is after all the single topic I've received more abuse and name-calling for than anything else I've ever worked on in my 20+ years in Internet protocols. (After all, I now often refer to myself and what I do as webshit.)
I never really intended to involve myself in DNS privacy discussions, but due to the constant misunderstandings and mischaracterizations (both on purpose and out of ignorance) sometimes spread about DoH, I've felt a need to stand up for it a few times. I think that was a contributing factor to me getting invited to the DNS privacy panel that the organizers of the DNS devroom set up.
There are several problems and challenges left to solve before we're in a world where DNS is mostly done correctly and securely. DoH is one attempt to raise the bar. I was content to have had the opportunity to really spell out my view of things before the DNS privacy panel.
Sitting next to these giants from the DNS world, Stéphane Bortzmeyer and Bert Hubert, I discussed DoT, DoH, DNS centralization, user choice, quad-dns-hosters and more. The discussion didn't get very heated; instead I think it showed that we're all largely in agreement that we need more secure DNS, and that there are obstacles in the way forward that we need to keep working on to overcome. Moderator Jan-Piet Mens did an excellent job I think, handing over the word, juggling the questions and taking in questions from the audience.
Ten years, ten slots
Appearing in three scheduled slots during the same FOSDEM was a bit much, and it effectively kept me from attending many other talks. They were all great fun to do though, and I appreciate people giving me the chance to share my knowledge and views with the world. As usual, everything was very nicely organized and handled. The videos of each presentation are linked to above.
I met many people, old and new friends. I handed out a lot of curl stickers and I enjoyed talking to people about my recently announced new job at wolfSSL.
After ten consecutive annual visits to FOSDEM, I have appeared in ten program slots!
I fully intend to go back to FOSDEM again next year. For all the friends, the waffles, the chats, the beers, the presentations and then for the waffles again. Maybe I will even present something…
Let me start by saying thank you to all and everyone who sent me job offers or otherwise reached out with suggestions and interesting career moves. I received more than twenty different offers and almost every one of them was a truly good option that I could have said yes to and still landed a good job. What a luxury challenge, having to select something from all that! Publicly announcing that I was leaving Mozilla turned out to be a great ego-boost.
I took some time off to really reflect and contemplate on what I wanted from my next career step. What would the right next move be?
I love working on open source. Internet protocols, transfers and writing libraries in C are things I consider pure fun. Could I get all that, keep working from home, not sacrifice my wage and perhaps integrate working on curl better into my day to day job?
I talked to different companies. Very interesting companies too, where I have friends and people who like me and who really wanted to get me working for them, but in the end there was one offer with a setup that stood out: one offer that checked basically every box on my wish-list.
On February 5, 2019 I’m starting my new job at wolfSSL. My short and sweet period as unemployed is over and now it’s full steam ahead again! (Some members of my family have expressed that they haven’t really noticed any difference between me having a job and me not having a job as I spend all work days the same way nevertheless: in front of my computer.)
Starting now, we offer commercial curl support and various services for and around curl that companies and organizations previously really haven’t been able to get. Time I do not spend on curl related activities for paying customers I will spend on other networking libraries in the wolfSSL “portfolio”. I’m sure I will be able to keep busy.
I've met Larry at wolfSSL in person many times over the years, and every year at FOSDEM I've made certain to say hello to my wolfSSL friends at the booth they've had there for years. They're truly old-time friends.
wolfSSL is mostly a US-based company – I’m the only Swede on the team and the only one based in Sweden. My new colleagues all of course know just as well as you that I’m prevented from traveling to the US. All coming physical meetings with my work mates will happen in other countries.
commercial curl support!
We offer all sorts of commercial support for curl. I’ll post separately with more details around this.
Yesterday, I attracted enough of an audience to fill up the largest presentation room GOTO 10 has, which means about one hundred interested souls.
The subject of the day was HTTP/3. The event was filmed with a mevo camera and I captured the presentation directly from my laptop as well, and I then stitched together the two sources into this final version late last night. As you’ll notice, the sound isn’t awesome and the rest of the “production” isn’t exactly top notch either, but hey, I don’t think it matters too much.
I trust you've heard by now that HTTP/3 is coming. It is the designated next HTTP version, targeted to be published as an RFC in July 2019. Not very far off.
HTTP/3 will not be done over TCP. It will only be performed over QUIC, a transport protocol replacement for TCP that is always encrypted. There's no clear-text version of QUIC.
The encryption in QUIC is based on TLS 1.3 technologies which I believe everyone thinks is a good idea and generally the correct decision. We need to successively raise the bar as we move forward with protocols.
However, QUIC is not only a transport protocol that does encryption by itself (where TLS is typically, and was designed to be, run on top of TCP). It was also designed by a team of engineers who came up with a design that requires APIs from the TLS layer that the traditional TLS-over-TCP use case doesn't need!
New TLS APIs
A QUIC implementation needs to extract traffic secrets from the TLS connection and it needs to be able to read/write TLS messages directly – not using the TLS record layer. TLS records are what’s used when we send TLS over TCP. (This was discussed and decided back around the time for the QUIC interim in Kista.)
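To make this concrete, here is a hypothetical C sketch of the two operations a QUIC stack needs from its TLS library. None of these names exist in any real library; they only illustrate the shape of the API (BoringSSL exposes something along these lines):

```c
#include <stddef.h>

/* QUIC encrypts at several distinct levels during the handshake */
typedef enum {
  LEVEL_INITIAL, LEVEL_HANDSHAKE, LEVEL_APPLICATION
} quic_crypto_level;

/* hypothetical callbacks a TLS library would invoke for QUIC */
struct quic_tls_callbacks {
  /* export each level's traffic secrets as the handshake derives them */
  void (*set_secrets)(void *ctx, quic_crypto_level level,
                      const unsigned char *read_secret,
                      const unsigned char *write_secret, size_t len);
  /* hand over raw handshake bytes for QUIC to put in CRYPTO frames,
     without any TLS record layer framing */
  void (*send_handshake_data)(void *ctx, quic_crypto_level level,
                              const unsigned char *data, size_t len);
};

/* feed handshake bytes received in CRYPTO frames back into TLS */
int tls_provide_handshake_data(void *tls_conn, quic_crypto_level level,
                               const unsigned char *data, size_t len);
```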
These operations need APIs that are still missing in, for example, the very popular OpenSSL library, but also in other commonly used ones like GnuTLS and libressl. And of course in schannel and Secure Transport.
Libraries known to already have done the job and expose the necessary mechanisms include BoringSSL, NSS, quicly, PicoTLS and Minq. All of those are incidentally TLS libraries with a more limited number of application users and less mainstream. They’re also more or less developed by people who are also actively engaged in the QUIC protocol development.
curl is TLS library agnostic and can get built with around 12 different TLS libraries – one or many actually, as you can build it to allow users to select the TLS backend at run-time!
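That run-time selection uses libcurl's real curl_global_sslset() API (available since 7.56.0), which must be called before libcurl is otherwise initialized. A minimal sketch, assuming a curl build that includes the OpenSSL backend:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  /* must be called before curl_global_init() */
  CURLsslset rc = curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL);
  if(rc == CURLSSLSET_UNKNOWN_BACKEND)
    fprintf(stderr, "this curl build does not include OpenSSL\n");
  else if(rc == CURLSSLSET_TOO_LATE)
    fprintf(stderr, "libcurl was already initialized\n");

  curl_global_init(CURL_GLOBAL_DEFAULT);
  /* ... transfers now use the selected backend ... */
  curl_global_cleanup();
  return 0;
}
```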
OpenSSL is without competition the most popular choice to build curl with, outside of proprietary operating systems like macOS and Windows 10. But even the vendor-built and provided macOS and Windows versions are built with libraries that lack APIs for this.
With our current keen interest in QUIC and HTTP/3 support for curl, we’re about to run into an interesting TLS situation. How exactly is someone going to build curl to simultaneously support both traditional TLS based protocols as well as QUIC going forward?
I don’t have a good answer to this yet. Right now (assuming we would have the code ready in our end, which we don’t), we can’t ship QUIC or HTTP/3 support enabled for curl built to use the most popular TLS libraries! Hopefully by the time we get our code in order, the situation has improved somewhat.
This will slow down QUIC deployment
I’m personally convinced that this little API problem will be friction enough when going forward that it will slow down and hinder QUIC deployment at least initially.
When the HTTP/2 spec shipped in May 2015, it introduced a dependency on the fairly new TLS extension called ALPN. That caused headaches for server admins for a long time, since ALPN wasn't supported by the OpenSSL versions that were typically installed and used at the time; you had to upgrade OpenSSL to version 1.0.2 to get it.
At that time, almost four years ago, OpenSSL 1.0.2 had already been released, so the problem was solved by simply upgrading to it. This time, the API we're discussing here is not even in a beta version of OpenSSL and thus hasn't shipped in any release yet. That's far worse than the HTTP/2 situation, and that one took a few years to ride out.
Will we get these APIs into an OpenSSL release to test before the QUIC specification is done? If the schedule sticks, there’s about six months left…
I’ll be celebrating my 10th FOSDEM when I travel down to Brussels again in early February 2019. That’s ten years in a row. It’ll also be the 6th year I present something there, as I’ve done these seven talks in the past:
DNS over HTTPS (aka "DoH", RFC 8484) introduces a new transport protocol to do secure and private DNS messaging. Why it was made, how it works and how users are free (to resolve names).
The presentation will discuss reasons why DoH was deemed necessary and interesting to ship and deploy and how it compares to alternative technologies that offer similar properties. It will discuss how this protocol “liberates” users and offers stronger privacy (than the typical status quo).
How to enable and start using DoH today.
It will also discuss some downsides with DoH and what you should consider before you decide to use a random DoH server on the Internet.
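As a taste of the "start using DoH today" part: libcurl has the real CURLOPT_DOH_URL option (since 7.62.0) that makes a transfer resolve host names over DoH instead of the system resolver. A minimal sketch – the DoH server URL here is just an example, pick whichever you trust:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* resolve names via this DoH server instead of the system resolver */
    curl_easy_setopt(curl, CURLOPT_DOH_URL,
                     "https://cloudflare-dns.com/dns-query");
    CURLcode res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      return 1;
    curl_easy_cleanup(curl);
  }
  return 0;
}
```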
This time TCP is replaced by the new transport protocol QUIC and things are different yet again! This is a presentation about HTTP/3 and QUIC with a following Q&A about everything HTTP. Join us at Goto 10.
HTTP/3 is the designated name for the coming next version of the protocol that is currently under development within the QUIC working group in the IETF.
HTTP/3 is designed to improve in areas where HTTP/2 still has some shortcomings, primarily by changing the transport layer. HTTP/3 is the first major protocol to step away from TCP and instead it uses QUIC. I’ll talk about HTTP/3 and QUIC. Why the new protocols are deemed necessary, how they work, how they change how things are sent over the network and what some of the coming deployment challenges will be.
This isn’t strictly a prepared talk or presentation but I’ll still be there and participate in the panel discussion on DNS privacy. I hope to get most of my finer points expressed in the DoH talk mentioned above, but I’m fully prepared to elaborate on some of them in this session.
Another year reaches its calendar end and a new year awaits around the corner. In the curl project we've had another busy and eventful year. Here's a look back at some of the fun we've done during 2018.
We started out the year with the 7.58.0 release in January, and we managed to squeeze in another six releases during the year. In total we count 658 documented bug-fixes and 31 changes. The total number of bug-fixes was actually slightly lower this year compared to last year’s 683. An average of 1.8 bug-fixes per day is still not too shabby.
I'm very happy to say that we again managed to break our previous record as 155 unique authors contributed code. 111 of them for the first time in the project, and 126 did fewer than three commits during the year. Basically this means we merged code from a brand new author every three days throughout the year!
The list of "contributors", where we also include helpers, bug reporters, security researchers and so on, increased by another 169 new names this year, to a total of 1,829 in the last release of the year. Of course we also got a lot of help from people who were already mentioned in there!
Will we be able to reach 2000 names before the end of 2019?
At the time of this writing, almost two weeks before the end of the year, we’re still behind the last few years with 1051 commits done this year. 1381 commits were done in 2017.
Daniel’s commit share
I personally authored 535 (50.9%) of all commits during 2018. Marcel Raad did 65 and Daniel Gustafsson 61. In general I maintain my share of the changes done in the project over time. Possibly I've even increased it slightly the last few years. This graph shows my share of the commits layered on top of the number of commits done.
This year we got exactly the same number of security problems reported as we did last year: 12. Some of the problems were one-offs due to curl being added to the OSS-Fuzz project in 2018; it took a while for the fuzzers to really hit some of our soft spots, and as we've seen a slow-down in reports from there, it'll be interesting to see if 2019 will be a brighter year in this department. (In total, OSS-Fuzz is credited with having found six security vulnerabilities in curl to date.)
In July we created the DEPRECATE.md document to keep order of the things we're stowing away in the cyberspace attic. During the year we cut off axTLS support as a first example of this deprecation procedure. HTTP pipelining, the global DNS cache and HTTP/0.9 accepted by default are the features next in line marked for removal, and the first two are already disabled in code.
This time TCP is replaced by the new transport protocol QUIC and things are different yet again! This is a presentation by Daniel Stenberg about HTTP/3 and QUIC with a following Q&A about everything HTTP.
The presentation will be done in English. It will be recorded and possibly live-streamed. Organized by me, together with our friends at goto10. It is free of charge, but you need to register.
17:30 – 19:00, January 22, 2019. Goto 10: Hörsalen, Hammarby Kaj 10D, plan 5.
At a talk I did a while ago, someone from the back of the audience raised this question. I found it to be such a great question that I decided to spend a few minutes and explain how this happens and why.
In this blog post I’ll stick to discussing the curl command line tool. “curl” is often also used as a shortcut for the library but let’s focus on the tool here.
When you use a particular curl version installed in a system near you, chances are that it differs slightly from the curl your neighbor runs or even the one that you use in the machines at work.
Why is this?
We release a new curl version every eight weeks. On average we ship over thirty releases in a five-year period.
A lot of people use curl versions that are a few years old, some even many years old. There are easily more than 30 different curl versions in active use at any given moment.
Not every curl release introduces changes and new features, but it is very common, and every release always fixes a lot of bugs from previous versions. New features and fixed bugs make curl differ between releases.
Linux/OS distributions tend to also patch their curl versions at times, and then they all of course have different criteria and work flows, so the exact same curl version built and shipped from two different vendors can still differ!
curl builds on almost every platform you can imagine. When you build curl for your platform, it is designed to use the features, native APIs and functions available there, and those indeed differ between systems.
curl also relies on a number of different third party libraries. The set of libraries a particular curl build is set to use varies by platform, but even more so due to the decisions of the persons or group that built this particular curl executable. The exact set, and the exact versions of each of those third party libraries, will change curl’s feature set, from subtle and small changes up to large really noticeable differences.
As a special third party library, I want to highlight the importance of the TLS library that curl is built to use. It determines not only which SSL and TLS versions curl supports, but also how CA certificates are handled, and it provides the crypto support for authentication schemes such as NTLM, and more. Not to mention that TLS libraries also develop over time, so if curl is built to use an older release, it probably has less support for later features and protocol versions.
When building curl, you can switch features on and off to a very large extent, making it possible to quite literally build it in several million different combinations. The organizations, people and companies that build curl to ship with their operating systems or their package distribution systems decide what feature set they want or don't want for their users. One builder's decision and thought process certainly does not have to match the others'. With the same curl version, the same TLS library and the same operating system, two curl builds might thus still end up different!
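This is also why a good first step is to find out exactly what the curl near you supports. The command line tool shows it with curl -V, and an application can query the same information with the real curl_version_info() API. A minimal sketch:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  /* curl_version_info() reports what this particular build supports */
  curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

  printf("curl version: %s\n", info->version);
  printf("TLS backend:  %s\n",
         info->ssl_version ? info->ssl_version : "none");
  printf("HTTP/2:       %s\n",
         (info->features & CURL_VERSION_HTTP2) ? "yes" : "no");

  /* the NULL-terminated list of protocols this build can speak */
  printf("protocols:");
  for(const char *const *p = info->protocols; *p; p++)
    printf(" %s", *p);
  printf("\n");
  return 0;
}
```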
Build your own!
If you aren’t satisfied with the version or feature-set of your own locally installed curl – build your own!