Category Archives: cURL and libcurl

curl and/or libcurl related

qlog with curl

I want curl to be on the very bleeding edge of protocol development, to aid the Internet protocol development community in testing out protocols early and in working out kinks in the protocols and server implementations, using curl’s vast set of tools and switches.

For this, curl supported HTTP/2 really early on and helped shape the protocol and test out servers.

For the same reason, curl has supported HTTP/3 since August 2019: a convenient and well-known client that you can use to poke at your brand new HTTP/3 servers, while we work on getting all the rough edges smoothed out before the protocol reaches its final state.

QUIC tooling

One of the many challenges QUIC and HTTP/3 have is that with a new transport protocol comes entirely new paradigms. With new paradigms like this, we need improved or perhaps even new tools to help us understand the network flows back and forth, to make sure we all have a common understanding of the protocols and to make sure we implement our end-points correctly.

QUIC exists only as an encrypted protocol, meaning that we can no longer easily monitor and passively investigate network traffic like before. QUIC also encrypts more of the protocol than TCP + TLS does, leaving even less for an outsider to see.

The current QUIC analyzer tool lineup gives us two options.

Wireshark

We all of course love Wireshark and if you get a very recent version, you’ll be able to decrypt and view QUIC network data.

With curl, and a few other clients, you can ask to get the necessary TLS secrets exported at run-time with the SSLKEYLOGFILE environment variable. You’ll then be able to see every bit in every packet. This way of extracting secrets works with QUIC as well as with the traditional TCP+TLS based protocols.
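For example, a capture session could look something like this – the log file name and path here are of course just an example:

SSLKEYLOGFILE=/tmp/tls-secrets.txt curl https://example.com/

Point Wireshark to that file in its TLS protocol preferences and it can decrypt the captured traffic.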

qvis/qlog

The qvis/qlog site.

If you find the Wireshark network view a little too low-level, leaving a lot for you to interpret and draw conclusions from yourself, the next-level tool here is the common QUIC logging format called qlog. This is an agreed-upon standard for logging QUIC traffic, and the accompanying web based visualizer tool qvis lets you upload your logs and get visualizations generated. This becomes extra powerful if you have logs from both ends!

Starting with this commit (landed in the git master branch on May 7, 2020), all curl builds that support HTTP/3 – independent of what backend you pick – can be told to output qlogs.

Enable qlogging in curl by setting the new standard environment variable QLOGDIR to point to a directory in which you want qlogs to be generated. When you then run curl, you’ll get files created in there, named [hex digits].log, where the hex digits are the “SCID” (Source Connection Identifier).
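A hypothetical invocation could look like this, assuming a curl build with HTTP/3 enabled and a directory of your choice:

QLOGDIR=/tmp/qlog curl --http3 https://example.com/

The generated qlog files in /tmp/qlog are then ready to be uploaded to qvis.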

Credits

qlog and qvis are spearheaded by Robin Marx. qlogging for curl with Quiche was pushed for by Lucas Pardue and Alessandro Ghedini. In the ngtcp2 camp, Tatsuhiro Tsujikawa made it very easy for me to switch it on in curl.

The top image is snapped from the demo sample on the qvis web site.

Review: curl programming

Title: Curl Programming
Author: Dan Gookin
ISBN: 9781704523286
Weight: 181 grams

A book for my library is a book about my library!

Not long ago I discovered that someone had written this book about curl and that someone wasn’t me! (I believe this is a first) Thrilled of course that I could check off this achievement from my list of things I never thought would happen in my life, I was also intrigued and so extremely curious that I simply couldn’t resist ordering myself a copy. The book is dated October 2019, edition 1.0.

I don’t know the author of this book. I didn’t help out. I wasn’t aware of it and I bought my own copy through an online bookstore.

First impressions

It’s very thin! The first page with content is numbered 13 and the last page before the final index is page 110 (6-7 mm thick). Also, as the photo somewhat shows, it’s not a big format book either: 225 x 152 mm. I suppose a positive spin on that could be that it probably fits in a large pocket.

Size comparison with the 2018 printed version of Everything curl.

I’m not the target audience

As the founder of the curl project and its lead developer, I’m not really a good example of the reader the author must’ve imagined when he wrote this book. Of course, my own decades-long efforts in documenting curl in hundreds of man pages and in the Everything curl book make me highly biased. When you read what I say about this book below, you must remember that.

A primary motivation for getting this book was to learn. Not about curl, but about how an experienced tech author like Dan teaches curl and libcurl programming, and to try to use some of those lessons in my own writing and manual typing going forward.

What’s in the book?

Despite its size, the book is still packed with information. It contains the following chapters after the introduction:

  1. The amazing curl … 13
  2. The libcurl library … 25
  3. Your basic web page grab … 35
  4. Advanced web page grab … 49
  5. curl FTP … 63
  6. MIME form data … 83
  7. Fancy curl tricks … 97

As you can see, it spends a total of 12 pages initially on explanations about curl the command line tool and some of the things you can do with it, before it moves on to libcurl.

The book is explanatory in its style and it is sprinkled with source code examples showing how to do the various tasks with libcurl. I don’t think it is a surprise to anyone that the book focuses on HTTP transfers, but it also includes sections on how to work with FTP and a little about SMTP. I think it can work well for someone who wants an introduction to libcurl and to get going adding Internet transfers to their applications (at least if they’re into HTTP). It is not a complete guide to everything you can do, but then I doubt most users need or even want that. This book should get you going well enough to then let you search for the rest of the details on your own.

I think maybe the biggest piece missing in this book, and I really think it is an omission Mr Gookin should fix if he ever does a second edition: there’s virtually no mention of HTTPS or TLS at all. On the current Internet and web, a huge portion of all web pages and page loads done by browsers are done with HTTPS, and while it is “just” HTTP with TLS on top, the TLS part itself is worth some special attention. Not least because certificates, and how to deal with them in a libcurl world, is an area that sometimes seems hard for users to grasp.
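To illustrate the sort of TLS detail I mean, here’s a minimal sketch of pointing libcurl at a specific CA bundle for certificate verification – the bundle path below is just an example and differs between systems:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* where to find the CA certificates used to verify the server;
       this path is an example and varies between systems */
    curl_easy_setopt(curl, CURLOPT_CAINFO,
                     "/etc/ssl/certs/ca-certificates.crt");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}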

A second thing I noticed no mention of, but that I think should’ve been there: a description of curl_easy_getinfo(). It is a versatile function that provides information to users about a just performed transfer. Very useful if you ask me, and a tool in the toolbox every libcurl user should know about.
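For the curious, a minimal sketch of what curl_easy_getinfo() usage can look like – extracting the HTTP response code and the total transfer time after a performed transfer:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    if(curl_easy_perform(curl) == CURLE_OK) {
      long response_code;
      double total_time;
      /* ask libcurl about the transfer that was just performed */
      curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &response_code);
      curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total_time);
      printf("HTTP %ld in %.3f seconds\n", response_code, total_time);
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}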

The author mentions that he was using libcurl 7.58.0, so that version or later should be fine for all the code shown. Most of the code of course works in older libcurl versions as well.

Comparison to Everything curl

Everything curl is a free and open document describing everything there is to know about curl, including the project itself and curl internals, so it is a much wider scope and effort. It is however primarily provided as a web and PDF version, although you can still buy a printed edition.

Everything curl spends more space on explanations of features and discussions of how to do things, and isn’t as focused around source code examples as Curl Programming. Everything curl on paper is also thicker and more expensive to buy – but of course much cheaper if you’re fine with the digital version.

Where to buy?

First: decide if you need to buy it. Maybe the docs on the curl site or in Everything curl are already good enough? Then I also need to emphasize that you will not sponsor or help out the curl project itself by buying this book – it is authored and sold entirely on its own.

But if you need a quick introduction with lots of examples to get your libcurl usage going, by all means go ahead. This could be the book you need. I will not link to any online retailer or anything here. You can get it from basically anyone you like.

Mistakes or errors?

I’ve found some mistakes and ways of phrasing the explanations that I maybe wouldn’t have used, but all in all I think the author seems to have understood these things and describes functionality and features accurately and with a light and easy-going language.

Finally: I would never capitalize curl as Curl or libcurl as Libcurl, not even in a book. Just saying…

curl ootw: --get

(Previous options of the week.)

The long version option is called --get and the short version uses the capital -G. Added in the curl 7.8.1 release, in August 2001. Not too many of you, my dear readers, had discovered curl by then.

The thinking behind it

Back in the early 2000s when we had added support for doing POSTs with -d, it became obvious that to many users the difference between a POST and a GET is rather vague. To many users, sending something with curl is something like “operating with a URL” and you can provide data to that URL.

You can send that data to an HTTP URL using POST by specifying the fields to submit with -d. If you specify multiple -d flags on the same command line, they will be concatenated with an ampersand (&) inserted in between. For example, say you want to send both a name and a bike shed color in a POST to example.com:

curl -d name=Daniel -d shed=green https://example.com/

POST-envy

Okay, so curl can merge -d data entries like that, which makes the command line pretty clean. What if you want to submit your name and the shed color using the query part of the URL instead of a POST, and you still would like to use curl’s fancy -d concatenation feature?

Enter -G. It converts what is set up to be a POST into a GET. The data set with -d to be part of the request body will instead be put after the question mark in the HTTP request! The example from above but with a GET:

curl -G -d name=Daniel -d shed=green https://example.com/

(The actual placement or order of -G vs -d is not important.)

The exact HTTP requests

The first example without -G creates this HTTP request:

POST / HTTP/1.1
Host: example.com
User-agent: curl/7.70.0
Accept: */*
Content-Length: 22
Content-Type: application/x-www-form-urlencoded

name=Daniel&shed=green

While the second one, with -G instead does this:

GET /?name=Daniel&shed=green HTTP/1.1
Host: example.com
User-agent: curl/7.70.0
Accept: */*

If you want to investigate exactly what HTTP requests your curl command lines produce, I recommend --trace-ascii if you want to see the HTTP request body as well.
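For example, this shows the full outgoing request on stdout (the single dash tells curl to write the trace there instead of to a file):

curl --trace-ascii - -d name=Daniel -d shed=green https://example.com/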

-X GET vs -G

One of the highest-scored questions I’ve answered on Stack Overflow concerns exactly this.

-X is only for changing the actual method string in the HTTP request. It doesn’t change curl’s behavior: the change is mostly done without curl caring what the new string is, and curl will otherwise behave as if it used the method it originally intended to use.

If you use -d in a command line, and then add -X GET to it, curl will still send the request body like it does when -d is specified.
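In other words, a command line like this one sends a request that says GET but still carries a request body – which is rarely what anyone wants:

curl -X GET -d name=Daniel https://example.com/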

If you use -d plus -G in a command line, then as explained above, curl sends a GET request and -X GET will not make any difference (unless you also follow a redirect, in which case the -X may ruin the fun for you).

Other options

HTTP allows more kinds of requests than just POST or GET and curl also allows sending more complicated multipart POSTs. Those don’t mix well with -G; this option is really designed only to convert simple -d uses to a query string.

curl 7.70.0 with JSON and MQTT

We’ve done many curl releases over the years and this 191st one happens to be the 20th release ever done in the month of April, making it the leading release month in the project. (February is the month with the fewest releases, only 11 so far.)

Numbers

the 191st release
4 changes
49 days (total: 8,076)

135 bug fixes (total: 6,073)
262 commits (total: 25,667)
0 new public libcurl function (total: 82)
0 new curl_easy_setopt() option (total: 270)

1 new curl command line option (total: 231)
65 contributors, 36 new (total: 2,169)
40 authors, 19 new (total: 788)
0 security fixes (total: 92)
0 USD paid in Bug Bounties

Security

There’s no security advisory released this time. The release of curl 7.70.0 marks 231 days since the previous curl CVE was announced – the longest CVE-free period in the project in seven years.

Changes

The curl tool got the new command line option --ssl-revoke-best-effort, which is powered by the new libcurl bit CURLSSLOPT_REVOKE_BEST_EFFORT that you can set with CURLOPT_SSL_OPTIONS. They tell curl to ignore certificate revocation checks in case of missing or offline distribution points, for those SSL backends where such behavior exists (read: Schannel).
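In libcurl terms, setting the new bit could look like this minimal sketch (it only has an effect with the Schannel backend):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* ignore revocation-check failures caused by missing or offline
       distribution points (has an effect with Schannel only) */
    curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                     (long)CURLSSLOPT_REVOKE_BEST_EFFORT);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}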

curl’s --write-out command line option got support for outputting the meta data as a JSON object.
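For example, something like this prints all the write-out variables for the transfer as a JSON object on stdout:

curl -w '%{json}' -o /dev/null https://example.com/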

We’ve introduced the first take on MQTT support. It is marked as experimental and needs to be explicitly enabled at build-time.

Bug-fixes to write home about

This is just an ordinary release cycle worth of fixes. Nothing particularly major but here’s a few I could add some extra blurb about…

gnutls: bump lowest supported version to 3.1.10

GnuTLS has been a supported TLS backend in curl since 2005 and we’ve supported a range of versions over the years. Starting now, we bumped the lowest supported GnuTLS version to 3.1.10 (released in March 2013). The reason we picked this particular version this time is that we landed a bug-fix for GnuTLS that wanted to use a function that was added to GnuTLS in that version. Then instead of making more conditional code, we cleaned up a lot of legacy and simplified the code significantly by simply removing support for everything older than this. I would presume that this shouldn’t hurt many users as I suspect it is a very bad idea to use older versions anyway, for security reasons if nothing else.

libssh: Use new ECDSA key types to check known hosts

curl supports three different SSH backends, and one of them is libssh. It turned out that the glue layer we have in curl, between the core libcurl and the SSH library, lacked proper mappings for some recent key types that have been added to the SSH known_hosts file. This file has been getting new key types added over time that OpenSSH uses by default these days, and we need to make sure to keep up…

multi-ssl: reset the SSL backend on Curl_global_cleanup()

curl can get built to support multiple different TLS backends, which lets the application select which backend to use at startup. Due to an oversight, we didn’t properly support an application cleaning everything up and then selecting a different TLS backend – without having to restart the application. Starting now, it is possible!
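A sketch of what this enables, assuming a multi-ssl build with (for example) both OpenSSL and GnuTLS compiled in:

#include <curl/curl.h>

int main(void)
{
  /* pick one backend before the first init */
  curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL);
  curl_global_init(CURL_GLOBAL_DEFAULT);
  /* ... do transfers ... */
  curl_global_cleanup();

  /* starting with 7.70.0, the backend selection is reset by the
     cleanup, so the application can now pick a different one */
  curl_global_sslset(CURLSSLBACKEND_GNUTLS, NULL, NULL);
  curl_global_init(CURL_GLOBAL_DEFAULT);
  /* ... more transfers, now over GnuTLS ... */
  curl_global_cleanup();
  return 0;
}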

Revert “file: on Windows, refuse paths that start with \\”

Back in January 2020 when we released 7.68.0 we announced what we then perceived was a security problem: CVE-2019-15601.

Later, we found out more details and backpedaled on that issue. “It’s not a bug, it’s a feature” as the saying goes. Since it isn’t a bug (anymore) we’ve now also subsequently removed the “fix” that we introduced back then…

tests: introduce preprocessed test cases

This is actually just one out of several changes in the curl test suite that have happened as steps in a larger sub-project: moving all test servers away from fixed port numbers over to dynamically assigned ones. Using dynamic port numbers makes it easier to run the tests on random users’ machines, as the risk for port collisions goes away.

Previously, users had the ability to ask the tests to run on different ports with a command line option, but since it was rarely used, new tests were often written with the default port numbers hard-coded by mistake. With this new concept, such mistakes can’t slip through.

In order to correctly support all test servers running on any port, we’ve enhanced the main test “runner” (runtests) to preprocess the test case files correctly which allows all our test servers to work with such port numbers appearing anywhere in protocol details, headers or response bodies.

The work on switching to dynamic port numbers isn’t quite completed yet; there are still a few servers using fixed ports. I hope those will be addressed shortly.

tool_operate: fix add_parallel_transfers when more are in queue

Parallel transfers in the curl tool are still a fairly new thing, clearly, as we can get a report on this kind of basic functionality flaw. In this case, curl could generate zero byte output files when --parallel-max was used to limit the parallelism, instead of downloading them all fine.

version: add ‘cainfo’ and ‘capath’ to version info struct

curl_version_info() in libcurl returns lots of build information from the libcurl that’s running right now. It includes the version number of libcurl, enabled features and version info from the 3rd party dependencies in use. Starting now, assuming you run a new enough libcurl of course, the returned struct also contains information about the built-in CA store default paths that the TLS backends use.

The idea is that your application can easily extract and use this information for informational or debugging purposes, but also in cases where other components are used that also want a CA store, and the application author wants to make sure both/all use the same paths!
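A sketch of how an application could read the new fields; if I read the headers right they arrived with the CURLVERSION_SEVENTH struct version, so check the age member first:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
  /* cainfo/capath were added in the "seventh" struct version, so
     verify the running libcurl is new enough before reading them */
  if(info->age >= CURLVERSION_SEVENTH) {
    printf("cainfo: %s\n", info->cainfo ? info->cainfo : "(none)");
    printf("capath: %s\n", info->capath ? info->capath : "(none)");
  }
  return 0;
}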

windows: enable UnixSockets with all build toolchains

Due to oversights, several Windows builds didn’t enable support for Unix domain sockets, even when built for Windows 10 versions where the OS supports them.

scripts: release-notes and copyright

During the release cycle, I regularly update the RELEASE-NOTES file to include recent changes and bug-fixes scheduled to be included in the coming release. I do this so that users can easily see what’s coming; in git, on the web site and in the daily snapshots. This used to be a fairly manual process but the repetitive process finally made me create a perl script for it that removes a lot of the manual work: release-notes.pl. Yeah, I realize I’m probably the only one who’s going to use this script…

Already back in December 2018, our code style tool checksrc got the powers to also verify the copyright year range in the top header (written by Daniel Gustafsson). This makes sure we don’t forget to bump the copyright years when we update files. But because this was a bit annoying and surprising to pull-request authors on GitHub, we disabled it by default – which only led to lots of mistakes still being landed, while the poor suckers (like me) who enabled the check would get the errors instead. Additionally, the check is a bit slow. This finally drove me into disabling the check as well.

To combat the net effect of that, I’ve introduced the copyright.pl script, which is similar in spirit but instead scans all files in the git repository and verifies that they A) have a header and B) that the copyright range’s end year seems right. It also has a whitelist for files that don’t need to fulfill these requirements for whatever reason. Now we can run this script once every release cycle instead and get the same end results. Without being annoying to users and without slowing down anyone’s everyday builds! Win-win!

The release presentation video

Credits

The top image was painted by Dirck van Delen in 1631. Found in the Swedish National Museum’s collection.

webinar: common libcurl mistakes

On May 7, 2020 I will present common mistakes when using libcurl (and how to fix them) as a webinar over Zoom. The presentation starts at 19:00 Swedish time, meaning 17:00 UTC and 10:00 PDT (US West coast).

[sign up to attend here]

Abstract

libcurl is used in thousands of different applications and devices for client-side Internet transfers, and powers a significant part of what flies across the wires of the world on a normal day.

Over the years, as the lead curl and libcurl developer, I’ve answered many questions and I’ve seen every imaginable mistake being made. Some of the mistakes seem to happen more frequently than others, and some seem easier than others to avoid.

I’m going to go over a list of things that users often get wrong with libcurl, perhaps why they do and of course I will talk about how to fix those errors.

Length

It should be done within 30-40 minutes, plus some additional time for questions at the end.

Audience

You’re interested in Internet transfers; preferably you already know what libcurl is, and perhaps you have even written code that uses libcurl – directly in C or using a binding in another language.

Material

The video and slides will of course be made available as well in case you can’t tune in live.

Sign up

If you sign up to attend, you can join, enjoy the talk and of course ask me whatever is unclear or you think needs clarification around this topic. See you next week!

curl ootw: --remote-name-all

This option only has a long version and it is --remote-name-all.

It shipped for the first time in curl 7.19.0 – September 1, 2008.

History of curl output options

I’m a great fan of the Unix philosophy for command line tools, so for me there were never any deeper thoughts about what curl should do with the contents it gets from the URL: from the very beginning it sends them to stdout by default. Exactly like the command line tool cat does for files.

Of course I also realized that not everyone likes that so we provided the option to save the contents to a given file. Output to a named file. We selected -o for that option – if I remember correctly I think I picked it up from some other tools that used this letter for the same purpose: instead of sending the response body to stdout, save it to this file name.

Okay, but when you select “save as” in a browser, you don’t actually have to pick the full name yourself. It proposes a default name based on the URL you’re viewing, probably because in many cases that makes sense for the user and is a convenient and quick way to get a sensible file name to save the content as.

It wasn’t hard to come up with the idea that curl could offer something similar. Since the URL you give to curl has a file name part when you want to get a file, having a dedicated option pick the local file name from the rightmost part of the URL was easy. As a different output option than -o, it felt natural to pick the uppercase O for this slightly different save-the-output option: -O.

Enter more than one URL

curl sends everything to stdout, unless you tell it to direct the output somewhere else. Then (this is still before the year 2000, so very early days) we added support for multiple URLs on the command line – and what would the output options mean then?

The default would still be to send data to stdout, and since the -o and -O options were about how to save a single URL, we simply decided that they do exactly that: they instruct curl how to save a single URL. If you provide multiple URLs to curl, you subsequently need to provide multiple output flags. Easy!

It has the interesting effect that if you download three files from example.com and you want them all named according to the rightmost part of their URLs, you need to provide multiple -O options:

curl https://example.com/1 https://example.com/2 https://example.com/3 -O -O -O

Maybe I was a bit sensitive

Back in 2008 at some point, I think I took some critique about this maybe a little too hard and decided that if certain users really wanted to download multiple URLs to local file names in an easier manner – the way some other command line internet download tools do – I would provide an option that lets them do this!

--remote-name-all was born.

Specifying this option will make -O the default behavior for URLs on the command line! Now you can provide as many URLs as you like and you don’t need to provide an extra flag for each URL.

Get five different URLs on the command line and save them all locally, using the file part from the URLs:

curl --remote-name-all https://example.com/a.html https://example.com/b.html https://example.com/c.html https://example.com/d.html https://example.com/e.html

Then if you don’t want that behavior you need to provide additional -o flags…

.curlrc perhaps?

I think the primary idea was that users who really want -O by default like this would put --remote-name-all in their .curlrc files. I don’t think this ever really materialized. I believe this remote-name-all option is one of the more obscure and least used options in curl’s huge selection of options.

Report: curl’s bug bounty one year in

On April 22nd 2019, we announced this, the current incarnation of the curl bug bounty. In association with HackerOne we now run the program ourselves, primarily funded by gracious sponsors. Time to take a closer look at how the first year of the bug bounty has been!

Number of reports

We’ve received a total of 112 reports during this period.

On average, we respond with a first comment to reports within the first hour and we triage them on average within the first day.

Out of the 112 reports, 6 were found to be actual security problems.

Total amount of reports vs actual security problems, per month during the first year of the curl hackerone bug bounty program.

Bounties

All confirmed security problems were rewarded a bounty. We started out a bit careful with the amounts, but we are determined to raise them as we go along, now that we’ve seen there’s not really a tsunami coming.

We’ve handed out 1,400 USD so far, which makes it an average of 233 USD per confirmed report. The top earner got two reports rewarded and received 450 USD from us. So far…

But again: our ambition is to significantly raise these amounts going forward.

Trends

The graph above speaks clearly: lots of people submitted reports when we opened up and the submission frequency has dropped significantly over the year.

A vast majority of the 112 reports we’ve received were more or less rubbish and/or more or less automated reports. A large number of users have reported that our wiki can be edited by anyone (which I consider to be a fundamental feature of a wiki) or other things that we’ve expressly said are not covered by the program: specific details about our web hosting, email setup or DNS config.

A rough estimate says that around 80% of the reports were quickly dismissed as “out of policy” – i.e. they reported stuff that we’ve documented is not covered by the bug bounty (“Sirs, we can figure out what http server that’s running” etc). The curl bug bounty covers the products curl and libcurl, thus their source code and related specifics.

Bounty funds

curl has no ties to any organization. curl is not owned by any corporation. curl is developed by individuals. All the funds we have in the project are graciously provided to us by sponsors and donors. The curl funds are handled by the awesome Open Collective.

Security is of utmost importance to us. It trumps all other areas, goals and tasks. We aim to produce solid and secure products for the world and we act as swiftly and firmly as we can on all reported security problems.

Security vulnerability trends

We have not published a single CVE for curl yet this year. (There was one announced, CVE-2019-15601, but after careful consideration we backpedaled on that: we don’t consider it a flaw anymore and the CVE has been rejected in the records.)

As I write this, there have been exactly 225 days since the latest curl CVE was published, and we’re aiming at shipping curl 7.70.0 next week as the 6th release in a row without a security vulnerability to accompany it. We haven’t done 6 “clean” consecutive releases like this since early 2013!

Looking at the number of CVEs reported in the curl project per year, we can of course see that 2016 stands out. That was the year of the security audit, which ended with the release of curl 7.51.0 and its no less than eleven security vulnerabilities announced and fixed. Better, of course, is the rightmost bar over the 2020 label. It is still non-existent!

The most recent CVEs per year graph is always found here.

As you can see in the graph below, the “plateau” in the top right is at 92 published CVEs. The previous record holder for longest period in the project without a CVE ended in February 2013 (with CVE-2013-0249) at 379 days.

2013 was however quite a different era for curl. Less code, much less scrutinizing, no bug bounty, lesser tools, no CI jobs etc.

Number of published CVEs in the curl project over time. The updated graph is always found here.

Are we improving?

Is curl getting more secure?

We have more code and support more protocols than ever. We have a constant influx of new authors and contributors. We probably have more users than ever before in history.

At the same time we offer better incentives than ever before for people to report security bugs. We run more CI jobs than ever that run more and more test cases while code analyzers and memory debugging are making it easier to detect problems earlier. There are also more people looking for security bugs in curl than ever before.

Jinx?

I’m under no illusion that there aren’t more flaws to find, report and fix. We’re all humans and curl is still being developed at a fairly high pace.

Please report more security bugs!

Credits

Top image by Luisella Planeta Leoni from Pixabay

curl ootw: --ipv4

Previous options of the week.

This option is -4 as the short option and --ipv4 as the long, added in curl 7.10.8.

IP version

So why would anyone ever need this option?

Remember that when you ask curl to do a transfer to a host, that host name will typically be resolved to a list of IP addresses, and that list may contain both IPv4 and IPv6 addresses. When curl then connects to the host, it will iterate over that list and attempt to connect to both IPv6 and IPv4 addresses, even at the same time, in the style we call happy eyeballs.

The first connect attempt to succeed will be the one curl sticks to and will perform the transfer over.

When IPv6 acts up

On rare occasions, or in some particular setups, you may find yourself in a situation where the name resolve gives you back problematic IPv6 addresses, the server’s IPv6 behavior seems erratic, your local config simply makes IPv6 flaky, or things like that – reasons you may want to ask curl to stick to IPv4 only, to avoid a headache.

Then --ipv4 is here to help you.

--ipv4

First, this option will make the name resolving only ask for IPv4 addresses so there will be no IPv6 addresses returned to curl to try to connect to.

Then, due to the disabled IPv6, there won’t be any happy eyeballs procedure when connecting since there are now only addresses from a single family in the list.

It could perhaps be worth stressing that if the host name you target doesn’t have any IPv4 addresses associated with it, the operation will instead fail when this option is used.

Example

curl --ipv4 https://example.org/

Related options

The reverse request, to ask for IPv6 only, is done with the --ipv6 option.

There are also options for specifying other protocol versions, for example --http1.0, --http1.1, --http2 and --http3 for HTTP, or --tlsv1, --tlsv1.2, --tlsv1.3 and more for TLS.

curl better – video

As so many other events in these mysterious times, the foss-north conference went online-only and on March 30, 2020 I was honored to be included among the champion speakers at this lovely conference and I talked about how to “curl better” there.

The talk is a condensed run-through of how curl works and why, and then a look into how some of the more important HTTP oriented command line options work and how they’re supposed to be used.

As someone pointed out: I don’t do a lot of presentations about the curl tool. Maybe I should do more of these.

curl is widely used but still most users only use a very small subset of options or even just copy their command line from somewhere else. I think more users could learn to curl better. Below is the video of this talk.

Doing a talk to a potentially large audience in front of your laptop, in complete silence and without seeing a single audience member, is a challenge. No “contact” with the audience, and no feel for whether they’re all going to sleep or seem interested. Still, I have the feeling that this is the year we are all going to do this many times, and hopefully we’ll get better at it over time…