Not a lot changed this year compared to last year. Perhaps the three biggest changes this year were that
1. HTTP/3, Unix domain sockets and DNS-over-HTTPS increased significantly among “used features”
2. NSS and GnuTLS both had their usage shares among used TLS libraries fall significantly.
3. My twitter account and this blog are now the two top-voted channels that people follow for participation in curl related topics.
The most used protocols are of course still HTTPS and HTTP, and the newest supported protocol (GOPHERS) checks in as the least used protocol this time around.
Many more details can be found in the linked PDF. Enjoy.
As suspected already from the start, I ran out of stickers really fast. I ordered more from my trusted sticker guy on the corner, and he could even deliver the stickers put into pre-printed envelopes, envelopes that even got a curl logo on them. Around two hundred recipients got stickers that way.
It took me a while to complete this task. Getting all the addresses organized took time, getting all the materials restocked took time, packaging stickers of different sorts for almost a thousand people took time and then I also of course had to do occasional work in the meantime, so I didn’t finish the delivery from my end until near the end of June.
When I write this, I’ve just sent off the last few parcels. 978 recipients are now close(r) to getting curl stickers.
Redistributors
The bulk of the stickers would still get sent in larger batches to individuals who would then send them on.
One redistributor single-handedly took 174(!) addresses, but otherwise they typically were in the range of 20-40 recipients each. Each redistributor got 4 * [number of recipients] + 10 stickers in their envelopes.
Types
To spice up this game a little bit, most packages have gotten stickers of more than one kind. Hardly any package got the exact same setup (apart from the first batch). So whatever you might end up getting in the end, please cherish what you got and enjoy them. They are the result of quite a lot of work, sweat and devotion from a lot of people.
I’ve reimbursed most of my expenses for this from the curl fund, which means that this effort was paid for at least in part by our sponsors and Open Collective donors. Thanks for that!
Ending words
All the stickers have not yet arrived at their final destinations, so I’m writing this up a little prematurely. There will be disappointments involved because this whole process is very human-centric and error prone. Some of the addresses we got probably don’t even work or will be misinterpreted down the line. I’ve done this to the best of my ability.
The plans for future sticker offers as discussed in part 1 are still just plans and have not materialized into anything further yet.
On May 17, I joined Kathy and Brian, the hosts of the GitHub ReadMe podcast, on a video meeting from my home and we had a chat. Mostly about my work on curl. Today the episode “aired”.
curl is one of the most widely used software components in the world. It is over twenty years old, I am the founder and I still work as lead developer and head honcho. It works!
We talked about how I got into computers and open source in general. How curl started and about how it works to drive such a project, do releases and how to work on it as a full-time job. I am far from alone in this project – I’m just the captain of this ship with a large number of contributors onboard!
Photographs
As a part of the promotion for this episode, I was photographed by a professional outside of my house and nearby on a very lovely summer’s evening. In a southern suburb of Stockholm, Sweden. So, not only does the GitHub material feature not previously seen images of me, since I’ve been given the photos I can now use them for various things going forward. Like for when I do presentations and organizers ask for photos etc.
The photos I’ve used most commonly up until this point are the ones a professional photographer took of me when I spoke at the Velocity conference in New York in 2015. Of course I’m eternally young, but for some reason those past six years are visible on me…
Podcasts
I’ve participated in some podcasts before. If my count is correct, this is the 19th time. See the whole list.
Credits
The new set of photos of me were shot by Evia Photos. One of them is used on the top of this page.
curl is a command line tool and library for doing Internet data transfers. It has been around for a loooong time (over 23 years) but there is still a flood of new things being added to it and development being made, to take it further and to keep it relevant today and in the future.
I’m the lead developer and head maintainer of the curl project.
How do we decide what goes into curl? And perhaps more importantly, what does not get accepted into curl?
Let’s look at how this works in the curl factory.
Stick to our principles
curl has come this far by being reliable, trusted and familiar. We don’t rock the boat: curl does Internet transfers specified as URLs and it doesn’t parse or understand the content it transfers. That goes for libcurl too.
Whatever we add should stick to these constraints and core principles, at least. Then of course there are more things to consider.
A shortlist of things I personally want to see
I usually have a shortlist of a few features I personally want to work on in the coming months and maybe half year. Items I can grab when other things are slow or if I need a change or a fun thing to work on on a rainy day. These items are laid out in the ROADMAP document – which also tends to be updated a little too infrequently…
There’s also the TODO document that lists things we consider could be good to do and KNOWN_BUGS that lists known shortcomings we want to address.
Sponsored works have priority
I’m the lead developer of the curl project but I also offer commercial support and curl services to allow me to work on curl full-time. This means that paying customers can get a “priority lane” into landing new features or bug-fixes in future releases of curl. They still need to suit the project though, we don’t abandon our principles even for money. (Contact me to learn how I can help you get your proposed changes done!)
Keep up with where the world goes
All changes and improvements that help curl keep up with and follow where the Internet protocol community is moving are considered good and necessary changes. The curl project has always been on the front-lines of protocols and that is where we want to remain. It takes a serious effort.
Asking the community
Every year around the May time frame we do a “user survey” that we try to get as many users as possible to respond to. It asks about user patterns, what’s missing and how things are working.
The results from that work provide good feedback on areas to improve and help us identify features our community thinks curl lacks etc. (The 2020 survey analysis)
Even outside of the annual survey, discussions on the mailing list are a good way to get direct feedback on questions and ideas, and users very often bring up their ideas and suggestions using those channels.
Ideas are easy, code is harder
Actually implementing and providing a feature is a lot harder than just providing an idea. We almost drown among all the good ideas people propose we might or could do one day. What someone might think is a great idea may therefore still not be implemented very soon, because of the complexity of implementing it or because of a lack of time or energy etc.
But at the same time: oftentimes, someone needs to bring the idea or make the suggestion for it to happen.
It needs to exist to be considered
Related to the previous section: code and changes that exist, that are actually provided, are of course much more likely to end up in curl than abstract ideas. If a pull-request comes to curl and the change adheres to our standards and meets the requirements mentioned in this post, then the chances are very good that it will be accepted and merged.
As I am currently the only one working on curl professionally (ie I get paid to do it), I can rarely count on or assume work submissions from other team members. They usually show up more or less by surprise, which of course is awesome in itself but also makes such work and features very hard to plan for ahead of time. Sometimes people bring new features. Then we deal with them!
Half-baked is not good enough
A decent amount of all pull requests submitted to the project never get merged because they aren’t good enough and the person who submitted them doesn’t respond properly to the feedback and improvement requests that would make them good enough. Things like documentation and tests are typically just as important as the functionality itself.
Pull requests that are abandoned by the author can of course also get taken over by someone else but it cannot be expected or relied upon. A person giving up on the pull request is also a strong sign to the rest of us that obviously the desire to get that specific change landed wasn’t that big and that tells us something.
We don’t accept and merge partial changes that for example lack a crucial part like tests or documentation because we’ve learned the hard way many times over the years that it is just too common that the author then vanishes before completing the work – forcing others to do that work or we have to rip the change out again.
Standards and in-use are preferred properties
At times people suggest we support new protocols or experiments for new things. While that can be considered fun and useful, we typically want both the protocol and the associated URL syntax to already be in use and be somewhat established and preferably even standardized and properly documented in specifications. One of the fundamental core ideas with URLs is that they should mean the same thing for more than one application.
When no compass needle exists, maintain existing direction
Most changes are in line with what we already do and how the products work so no major considerations are necessary. Only once in a while do we get requests or suggestions that actually challenge the direction or force us to consider what is the right and the wrong way.
If the reason and motivation provided is valid and holds up, then we might agree and go in that direction. If we don’t, we discuss the topic and see if we perhaps can change someone’s mind or “wiggle” the concepts and ideas to see whether we can change the suggestion or perhaps see it from a different angle to reconsider. Sometimes we just have to decline and say no: that’s not something we think is in line with curl.
Who decides if it’s fine?
curl is not a democracy, we don’t vote about decisions or what to accept etc.
curl is also not a strict dictatorship where a single leader dictates all truths and facts from above for all subjects to accept and obey.
We’re somewhere in between. We discuss and try to find consensus on what to do and how to do it. The persons who bring the code or experience the actual problems of course have more to say. Experienced and long-term maintainers’ opinions have more weight in discussions and they’re free and allowed to merge pull-requests they think are good.
I retain the right to veto stuff, but I very rarely exercise that right.
curl is still a small project. You’ll notice that you’ll quickly recognize the same handful of maintainers in all pull-requests, with a long tail of others chipping in here and there. There’s no massive crowd anywhere. That’s also the explanation why sometimes your pull-requests might not get reviewed instantly, but rather you may have to wait a while until you get someone’s attention.
If you’re curious to learn how the project is governed in more detail, then check out the governance docs.
Listening to what users want, miss and think is needed going forward is very important to us. Even if it sometimes is hard to react immediately and often we have to bounce things a little back and forth before they can become “curl material”. So, please don’t expect us to immediately implement what you suggest, but please don’t let that stop you from bringing your grand ideas.
In the afternoon of October 17, 2013 we merged the first config file ever that would use Travis CI for the curl project using the nifty integration at GitHub. This was the actual introduction of the entire concept of building and testing the project on every commit and pull request for the curl project. Before this merge happened, we only had our autobuilds. They are systems run by volunteers that update the code from git maybe once per day, build and run the tests and then upload all the logs.
Don’t take this wrong: the autobuilds are awesome and have helped us make curl what it is. But they rely on individuals to host and admin the machines and to setup the specific configs that are tested.
With the introduction of “proper” CI, the configs that are tested are now also hosted in git, which allows the project members to better control and adjust the tests and configs. Plus, we can run them already on pull-requests so that we can verify code before merge, instead of having to first merge the code to master before the changes can get verified.
Travis provided a free service with a great promise: free for open source. Seriously. Always.
In 2017 we surpassed 10 jobs per commit, all still on Travis.
In early 2019 we reached 30 jobs per commit, and at that time we started to use and spread out the work on more CI services. Travis would still remain as the one we’d lean on the heaviest. It was there and we had custom-written a bunch of jobs for it and it performed well.
Travis even pulled some levers for us so that we got more parallel processing power than on the regular open source tier, and we listed them as sponsors of the curl project for their gracious help. This may or may not be related to the fact that I met Josh Kalderimis (co-founder of Travis) in 2019 and we talked about curl’s use of it and the possibility of them helping us more.
Transition to death
This year, 2021, the curl project runs around 100 CI jobs per commit and PR. 33 of them ran on Travis when we were finally pushed over from travis-ci.org to their new travis-ci.com domain, a transition they’d been advertising for a while but which wasn’t very clearly explained or motivated in my eyes.
The new domain also implied new rules and new tiers, we quickly learned. Now we would have to apply to be recognized as an open source project (after 7.5 years of using their services as an open source project). But also, being an open source project was no longer enough to take advantage of their free tier. Among the new requirements on the project was this:
Project must not be sponsored by a commercial company or organization (monetary or with employees paid to work on the project)
We’re a small independent open source project, but yes I work on curl full-time thanks to companies paying for curl support. I’m paid to work on curl and therefore we cannot meet that requirement.
Not eligible but still there
I’m not sure why, but apparently we still got free “credits” for running CI on Travis. The CI jobs kept working and I think maybe I sighed a little from relief – of course I did it prematurely as it only took us a few days into the month of June until we had run out of the free credits. There’s no automatic refill but we can apparently ask for more. We asked, but many days after having asked we still had no more credits and no CI jobs could run on Travis anymore. CI on Travis at the same level as before would cost more than 249 USD/month. Maybe not so much “it will always be free”.
The 33 jobs on Travis were there for a purpose. They’re prerequisites for us to develop and ship a quality product. Without the CI jobs running, we risk landing bad code. This was not a sustainable situation.
We looked for alternative services and we quickly got several offers of help and assistance.
New service
Friends from both Zuul CI and Circle CI stepped up and helped us start getting CI jobs transitioned from Travis over to their new homes.
On June 14th 2021, we officially had no more jobs running on Travis.
Visualized as a graph, we can see the Travis jobs “falling off a cliff” with Zuul rising to the challenge:
Services come and go. There’s no need to get hung up on that fact but instead keep moving forward with our eyes fixed on the horizon.
Thank you Travis CI for all those years of excellent service!
Pay up?
Lots of people have commented and think I’m “whining” about Travis CI charging for something that is useful and that I should rather just pay up. I could probably have gone with that, but I dislike their broken promise and that they don’t consider us open source anymore. I also feel I have a responsibility to use the funds we get from gracious donors as wisely and economically as possible, and that includes using no-cost or cheap services rather than services charging thousands of dollars per year.
If there really were no other available and viable options, then paying could’ve been an alternative. Now, moving on to something else was the right choice for us.
Today, we remove that metalink support again. This is a very drastic move, and I feel obliged to explain it, so here goes: curl 7.78.0 will ship without metalink support.
Metalink problems
There were several issues found that combined led us to this move.
Security problems
We’ve found several security problems and issues involving the metalink support in curl. The issues are not detailed here because they’ve not been made public yet.
When working on these issues, it became apparent to the curl security team that several of the problems are due to the system design, the metalink library API and what the metalink RFC says. They are very hard to fix on the curl side only.
Unusual use pattern
Metalink usage with curl was only very briefly documented and did not follow the “normal” curl usage pattern in several ways, making it surprising and non-intuitive, which could lead to further security issues.
libmetalink is abandoned
The metalink library libmetalink was last updated 6 years ago and wasn’t very actively maintained the years before that either. An unmaintained library means there’s a security problem waiting to happen. This is probably reason enough.
XML is heavy
Metalink requires an XML parsing library, which is complex code (even the smaller alternatives) and to this day often gets security updates.
Not used much
Metalink is not a widely used curl feature. In the 2020 curl user survey, only 1.4% of the responders said that they are using it. In the just closed 2021 survey that number shrunk to 1.2%. Searching the web also shows very few traces of it being used, even with other tools.
The torrent format and associated technology clearly won for downloading large files from multiple sources in parallel.
Violating a basic principle
This change unfortunately breaks command lines that use --metalink. This move goes directly against one of our basic principles as it doesn’t maintain behavior with previous versions. We’re very sorry about this but we don’t see a way out of this pickle that also takes care of users’ security – which is another basic principle of ours. We think the security concern trumps the other concerns.
Possible to bring back?
The list above contains reasons for the removal. At least some of them can be addressed given enough effort and work put into it. If someone is willing to make the necessary investment, I think we could entertain the possibility that support can be brought back in the future. I just don’t think it is very probable.
When you use the name localhost in a URL, what does it mean? Where does the network traffic go when you ask curl to download http://localhost ?
Is “localhost” just a name like any other or do you think it implies speaking to your local host on a loopback address?
Previously
curl http://localhost
The name was “resolved” using the standard resolver mechanism into one or more IP addresses and then curl connected to the first one that worked and got the data from there.
The (default) resolving phase there involves asking the getaddrinfo() function about the name. In many systems, it will return the IP address(es) specified in /etc/hosts for the name. In some systems, things are set up a bit more unusually, which causes a DNS query to get sent out over the network to answer the question.
In other words: localhost was not really special and using this name in a URL worked just like any other name in curl. In most cases in most systems it would resolve to 127.0.0.1 and ::1 just fine, but in some cases it would mean something completely different. Often as a complete surprise to the user…
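To make this concrete, here is a minimal stand-alone C sketch (not curl code, just an illustration) of that old behavior: whatever getaddrinfo() hands back for “localhost”, possibly influenced by /etc/hosts or even a DNS query, is what curl would previously connect to.

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
  struct addrinfo hints, *res, *p;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;     /* both IPv4 and IPv6 */
  hints.ai_socktype = SOCK_STREAM;

  /* ask the system resolver - /etc/hosts or even DNS decides the answer */
  if(getaddrinfo("localhost", "80", &hints, &res) != 0)
    return 1;

  for(p = res; p; p = p->ai_next) {
    char addr[INET6_ADDRSTRLEN];
    void *in = (p->ai_family == AF_INET)
      ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
      : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
    inet_ntop(p->ai_family, in, addr, sizeof(addr));
    printf("localhost => %s\n", addr);
  }
  freeaddrinfo(res);
  return 0;
}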
Starting now
curl http://localhost
Starting in commit 1a0ebf6632f8, to be released in curl 7.78.0, curl now treats the host name “localhost” specially and will use an internal “hard-coded” set of addresses for it – the ones we typically use for the loopback device: 127.0.0.1 and ::1. It cannot be modified by /etc/hosts and it cannot be accidentally or deliberately tricked by DNS resolves. localhost will now always resolve to a local address!
Do those kinds of mistakes or modifications really happen? Yes, they do. We’ve seen it and you can find other projects report it as well.
Who knows, it might even be a few microseconds faster than doing the “full” resolve call.
(You can of course still build curl without IPv6 support, and in such builds or on systems without IPv6 support the ::1 address will not be provided for localhost.)
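A simplified sketch of what that special-casing means in practice (plain C with a hypothetical helper name, not curl’s actual implementation, and it returns a single loopback address for brevity where curl provides both): check the name before asking the resolver, so that neither /etc/hosts nor DNS can redirect it.

#include <string.h>
#include <strings.h>
#include <sys/socket.h>
#include <netdb.h>

/* hypothetical helper: resolve a name, but never let "localhost"
   leave the local machine */
static int resolve_host(const char *name, const char *port,
                        struct addrinfo **result)
{
  struct addrinfo hints;
  memset(&hints, 0, sizeof(hints));
  hints.ai_socktype = SOCK_STREAM;

  if(strcasecmp(name, "localhost") == 0) {
    /* hard-coded loopback: numeric lookups of "::1" and "127.0.0.1"
       never consult /etc/hosts or DNS */
    hints.ai_flags = AI_NUMERICHOST;
    if(getaddrinfo("::1", port, &hints, result) == 0)
      return 0;
    return getaddrinfo("127.0.0.1", port, &hints, result);
  }

  /* everything else goes through the normal resolver */
  return getaddrinfo(name, port, &hints, result);
}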
Specs say we can
RFC 6761 is titled Special-Use Domain Names and in its section 6.3 it specifically allows or even encourages this:
Users are free to use localhost names as they would any other domain names. Users may assume that IPv4 and IPv6 address queries for localhost names will always resolve to the respective IP loopback address.
Followed by
Name resolution APIs and libraries SHOULD recognize localhost names as special and SHOULD always return the IP loopback address for address queries and negative responses for all other query types. Name resolution APIs SHOULD NOT send queries for localhost names to their configured caching DNS server(s).
Mike West at Google also once filed an I-D with even stronger wording suggesting we should always let localhost be local. That was never turned into an RFC, but it shows a mindset.
(Some) Browsers do it
Chrome has been special-casing localhost this way since 2017, as can be seen in this commit and I think we can safely assume that the other browsers built on their foundation also do this.
Firefox landed their corresponding change during the fall of 2020, as recorded in this bugzilla entry.
Safari (on macOS at least) does however not do this. It rather follows what /etc/hosts says (and presumably DNS if not present in there). I’ve not found any official position on the matter, but I found a source code comment indicating that localhost resolving might change at some point.
Since some time back, Windows already resolves “localhost” internally and it is not present in their /etc/hosts alternative. It is more of a hybrid solution though, as I believe you can put localhost into that file and then have that custom address get used for the name.
Secure over http://localhost
When we know for sure that http://localhost is indeed a secure context (that’s a browser term I’m borrowing, sorry), we can follow the example of the browsers: curl should for example be able to start honoring cookies with the “secure” property over this host even when the transfer is done over plain HTTP. Previously, secure in that regard has always just meant HTTPS.
This change in cookie handling has not happened in curl yet, but with localhost being truly local, it seems like an improvement we can proceed with.
Can you still trick curl?
When I mentioned this change proposal on twitter, two of the most common questions in response were:
can’t you still trick curl by routing 127.0.0.1 somewhere else
can you still use --resolve to “move” localhost?
The answers to both questions are yes.
You can of course commit the most hideous hacks to your system and reroute traffic to 127.0.0.1 somewhere else if you really wanted to. But I’ve never seen or heard of anyone doing it, and it certainly will not be done by mistake. But then you can also just rebuild your curl/libcurl and insert another address than the default as “hardcoded” and it’ll behave even weirder. It’s all just software, we can make it do anything.
The --resolve option is this magic thing to redirect curl operations from the given host to another custom address. It also works for localhost, since curl will check the cache before the internal resolve and --resolve populates the DNS cache with the given entries. (Provided to applications via the CURLOPT_RESOLVE option.)
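To illustrate with libcurl (a minimal sketch; 192.0.2.1 is just a documentation address standing in for wherever you want to point the name), an application can pre-populate the DNS cache with CURLOPT_RESOLVE and thereby “move” localhost. The command line equivalent is curl --resolve localhost:80:192.0.2.1 http://localhost/.

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* a "HOST:PORT:ADDRESS" entry goes into the DNS cache, which curl
       checks before its internal localhost shortcut */
    struct curl_slist *host = curl_slist_append(NULL,
                                                "localhost:80:192.0.2.1");
    curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost/");
    curl_easy_perform(curl);
    curl_slist_free_all(host);
    curl_easy_cleanup(curl);
  }
  return 0;
}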
What will break?
With a large enough number of users, every single little modification or even improvement is likely to trigger something unexpected and undesired on at least one system somewhere. I don’t think this change is an exception. I fully expect this to cause someone to shake their fist in the sky.
However, I believe there are fairly good ways to restore even the most complicated use cases after this change, even if it might take some hands-on work to update the script or application. I still believe this change is a general improvement for the vast majority of use cases and users. That’s also why I haven’t provided any knob or option to toggle off this behavior.
Credits
The top photo was taken by me (the symbolism being that there’s a path to take somewhere but we don’t really know where it leads or which one is the right to take…). This curl change was written by me. Mike West provided me the Chrome localhost change URL. Valentin Gosu gave me the Firefox bugzilla link.
Thanks to funding by ISRG (via Google), we merged the hyper powered HTTP back-end into curl earlier this year as an alternative HTTP/1 and HTTP/2 implementation. Previously, there was only one way to do HTTP/1 and 2 in curl.
Backends
Core libcurl functionality can be powered by optional and alternative backends in a way that doesn’t change the API or directly affect the application. This is done by featuring internal APIs that can be implemented by independent components. See the illustration below (click for higher resolution).
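As a rough illustration of that pattern (a generic C sketch with made-up names, not curl’s real internal API): the core code calls through a small struct of function pointers, each backend fills in its own implementation, and nothing above that layer, certainly not the application using the public API, needs to change.

#include <stdio.h>

/* generic sketch of an internal backend "vtable" - hypothetical names */
struct http_backend {
  const char *name;
  int (*perform)(const char *url);
};

/* two independent implementations of the same internal interface */
static int native_perform(const char *url)
{
  printf("native HTTP code fetching %s\n", url);
  return 0;
}
static int hyper_perform(const char *url)
{
  printf("hyper backend fetching %s\n", url);
  return 0;
}

static const struct http_backend native_http = { "native", native_perform };
static const struct http_backend hyper_http = { "hyper", hyper_perform };

int main(void)
{
  /* in reality the choice is made at build time; either way, the code
     above this layer and the public API stay exactly the same */
  const struct http_backend *backends[] = { &native_http, &hyper_http };
  int i;
  for(i = 0; i < 2; i++)
    backends[i]->perform("http://example.com/");
  return 0;
}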
curl 7.75.0 became the first curl release that could be built with hyper. The support for it was labeled “experimental” as while most common and basic use cases were supported, we still couldn’t run the full test suite when built with it and some edge cases even crashed.
We’ve subsequently fixed a few of the worst flaws so the Hyper powered curl has gradually and slowly improved since then.
Going further
Our best friends at ISRG have now once again put up funding and I’ll spend more work hours on making sure that more (preferably all) tests can run with hyper.
I’ve already started. Right now I’m sitting and staring at test case 154, which is doing an HTTP PUT using Digest authentication and an Expect: 100-continue header, and this test case currently doesn’t work correctly when built to use Hyper. I’ll report back in a few weeks and let you know how it goes – and then I don’t mean with just test 154!
Consider yourself invited to join the #curl IRC channel and chat if you want live reports or want to help out!
Fund
You too can fund me to do curl work. Get in touch!
Part 1. The beginning. (There will be at least one more part later on following up the progress.)
On May 18, 2021 I posted a tweet that I was giving away curl stickers for free to anyone who’d submit their address to me. It looked like this:
Every once in a while when I post a photo that involves curl stickers, a few people ask me where they can get hold of such. I figured it was about time I properly offered “the world” some. I expected maybe 50 or 100 people would take me up on this offer.
The response was totally overwhelming and immediate. Within the first hour 270 persons had already requested stickers. After 24 hours when I closed the form again, 1003 addresses had been submitted. To countries all around the globe. Quite the avalanche.
Assessing the damage
This level of interest put up some challenges I hadn’t planned for. Do I have enough stickers? Now suddenly doing 3 or 5 stickers per parcel will have a major impact. Getting envelopes and putting addresses onto them for a thousand deliveries is quite a job! Not to mention the cost. A “standard mail” to outside Sweden using the regular postal service is 24 SEK. That’s ~2.9 USD. Per parcel. Add the extra expenses and we’re at an adventure north of 3,000 USD.
For this kind of volume, I can get a better rate by registering as a “company customer”. It adds some extra work for me though, and I haven’t worked out the details around this yet.
Let me be clear: I already from the beginning planned to ask for reimbursement from the curl fund for my expenses for this stunt. I would mostly do my own work on this for free. Maybe “hire” my daughter for an extra set of hands.
Donations
During the time the form was up, we also received 51 donations to Open Collective (as the form mentioned that, and I also mentioned it on Twitter several times). The donated total was 943 USD. The average donation was 18 USD, the largest ones (2) were at 100 USD and the smallest was 2 USD.
Of course some donations might not be related to this and some donations may very well arrive after this form was closed again.
Cleaning up
If I had thought this through better at the beginning, I would not have asked for the address using a free text field like this. People clearly don’t have the same idea of how to do this as I do.
I had to manually go through the addresses to insert newlines, add country names and remove obviously broken addresses. For example, a common pattern was addresses consisting of nothing but a 6-8 digit number. I think over 20 addresses were specified like that!
Clearly there’s a lesson to be had there.
After removing obviously bad and broken addresses there were 978 addresses left.
Countries
I got postal addresses to 65 different countries. A surprisingly diverse collection I think. The top 10 countries were:
USA: 174
Sweden: 103
Germany: 93
India: 92
UK: 64
France: 56
Spain: 31
Brazil: 31
The Netherlands: 24
Switzerland: 20
Countries that were only entered once: Dubai, Iran, Japan, Latvia, Morocco, Nicaragua, Philippines, Romania, Serbia, Thailand, Tunisia, UAE, Ukraine, Uruguay, Zimbabwe
Figuring out the process
While I explicitly said I wouldn’t guarantee that everyone gets stickers, I want to do my best in delivering a few to every single one who asked for them.
Volunteers
I have the best community. Without me saying a word or asking for it, several people raised their hands and volunteered to offload the sending to their countries. I could send one big batch to them and they redistribute within their countries. They would handle US, Czechia, Denmark and Switzerland for me.
But why stop at those four? In my next step I put up a public plea for more volunteers on Twitter and man, I got myself a busy evening. After a few hours I had friends signed up from over 20 countries offering to redistribute stickers within their respective countries. This way, we share the expenses and the work load, and mailing out many smaller parcels within countries is also a lot cheaper than me sending them all individually from Sweden.
After a lot of communications I had an army of helpers lined up.
28 distributors will help me do 724 sticker deliveries to 24 countries. Leaving me to do just the remaining 282 packages to the other 41 countries.
Stickers inventory
I’ve offered “a few” stickers and I decided that means 4.
978 * 4 = 3912
Plus I want to add 10 extra stickers to each distributor, and there are 28 distributors.
3912 + 28 * 10 = 4192
Do I have 4200 curl stickers? I emptied my sticker drawer and put them all on the table and took this photo. All of the curl stickers you see on the photo have been donated to us/me by sponsors. Most of them from Sticker Mule.
(There are some Haxx and wolfSSL stickers on the photo as well, because I figured I should spice up some packages with some of those as well.)
Schedule
The stickers still haven’t shipped from my place but the plan is to get the bulk of them shipped from me within days. Stay tuned. There will of course be more delays on the route to their destinations, but rest assured that we intend to deliver to all who asked for them!
Will I give away more curl stickers?
Not now, and I don’t have any plans on doing this stunt again very soon. It was already way more than I expected. More attention, more desire and definitely a lot more work!
But at the first opportunity where you meet me physically I will of course give away stickers.
Buy curl stickers?
I’ve started looking into offering stickers for purchase but I’m not ready to make anything public or official yet. Stay tuned and I promise you’ll learn and be told when the sticker shop opens.
If it happens, the stickers will not be very cheap but you should rather see each such sticker as a mini-sponsorship.
Follow up
Stay tuned. I will be back with updates. See Part 2.
The official publication date of the relevant QUIC specifications is: May 27, 2021.
I’ve done many presentations about HTTP and related technologies over the years. HTTP/2 had only just shipped when the QUIC working group was formed in the IETF and I started to mention and describe what was being done there.
I’ve explained HTTP/3
I started writing the document HTTP/3 explained in February 2018 before the protocol was even called HTTP/3 (and yeah, the document itself was also called something else at first). The HTTP protocol for QUIC was just called “HTTP over QUIC” in the beginning and it took until November 2018 before it got the name HTTP/3. I did my first presentation using HTTP/3 in the title and on slides in early December 2018. My first recorded HTTP/3 presentation was in January 2019 (in Stockholm, Sweden).
In that talk I mentioned that the protocol would be “live” by the summer of 2019, which was an optimistic estimate based on the then current milestones set out by the IETF working group.
I think my optimism regarding the release schedule has kept up but as time progressed I’ve updated that estimation many times…
HTTP/3 – not yet
The first four RFC documents to be ratified and published only concern QUIC, the transport protocol, and not the HTTP/3 parts. The two HTTP/3 documents are also in queue but are slightly delayed as they await some other prerequisite (“generic” HTTP update) documents to ship first; then the HTTP/3 ones can ship and refer to those other documents.
QUIC
QUIC is a new transport protocol. It is done over UDP and can be described as being something of a TCP + TLS replacement, merged into a single protocol.
Okay, the title of this blog is misleading. QUIC is actually documented in four different RFCs:
RFC 8999 – Version-Independent Properties of QUIC
RFC 9000 – QUIC: A UDP-Based Multiplexed and Secure Transport
RFC 9001 – Using TLS to Secure QUIC
RFC 9002 – QUIC Loss Detection and Congestion Control
My role: I’m just a bystander
I initially wanted to keep up closely with the working group and follow what happened and participate in the meetings and interims etc. It turned out to be too difficult for me to do that, so I had to lower my ambitions and I’ve mostly had a casual observing role. I just couldn’t muster the energy and spend the time necessary to do it properly.
I’ve participated in many of the meetings, I’ve been present in the QUIC implementers slack, I’ve followed lots of design and architectural discussions on the mailing list and in GitHub issues. I’ve worked on implementing support for QUIC and h3 in curl and thanks to that helped iron out issues and glitches in various implementations, but the now published RFCs have virtually no traces of me or my feedback in them.