
Mastering the curl command line

For the first time ever, I am going to present a single, very long, video class with the title shown above.

This session will be streamed and recorded live on August 31, starting at 16:00 UTC (18:00 CEST, 09:00 PDT) and is expected to take about two and a half hours. Due to many uncertainties, the live stream might of course run longer, even if the final recording might get edited down a little.

[The slides] [Interactive text version] [slides as pdf]

The stream will be done on my usual twitch channel:

https://www.twitch.tv/curlhacker

The agenda for this monster session might still be tweaked a little before it happens, but the work in progress version is shown below. It should cover most of what curl can do and knows in 2023.

There is no need to sign up. It is entirely free of charge. All you need to do to enjoy it live is to go to the above link at the correct time on the right day. You can participate and ask questions live in the designated chat while the stream is live.

The project (10 min)

  • start
  • name
  • products
  • open source
  • development
  • releases
  • issues
  • pull requests
  • asking for help
  • paying for help

Command line (20 min)

  • command line options
    • long vs short names
    • depends on version
  • URLs
    • scheme
    • name and password
    • host
    • port number
    • path
    • fragment
    • browsers’ address bar
    • options and URLs
    • connection reuse
    • parallel transfers
  • trurl
  • URL globbing
  • List options
  • config file
  • passwords
  • progress meter

Using curl (30 min)

  • verbose
    • --trace
    • --write-out
  • version
  • persistent connections
  • Downloads
    • What exactly is downloading?
    • Storing downloads
    • Download to a file named by the URL
    • Use the target file name from the server
    • HTML and charsets
    • Compression
    • Shell redirects
    • Multiple downloads
    • My browser shows something else
    • Maximum file size
    • Storing metadata in file system
    • Raw
    • Retry
    • Resuming and ranges
  • Uploads
  • Transfer controls
    • Stop slow transfers
    • Rate limiting
    • Request rate limiting
  • Connections
    • Name resolve tricks
    • Connection timeout
    • Network interface
    • Local port number
    • Keep alive
  • Timeouts
  • .netrc
  • Exit status
  • SCP and SFTP
  • Reading email
  • Sending email
  • MQTT
  • TFTP
  • TELNET
  • DICT
  • Copy as curl
  • --libcurl
  • h2c

TLS details (15 min)

  • ciphers
  • enable TLS
  • verifying server certificates
  • OCSP stapling
  • client certificates
  • TLS backends
  • SSLKEYLOGFILE

Proxies (20 min)

  • Discover your proxy
  • PAC
  • Captive portals
  • Proxy type
  • HTTP proxy
  • SOCKS proxy (tor)
  • MITM proxy
  • Authentication
  • HTTPS proxy
  • Proxy environment variables
  • Proxy headers

HTTP (30 min)

  • Protocol basics
  • Method
  • HTTP response codes
  • Responses
  • Authentication
  • Ranges
  • HTTP versions
  • Conditionals
  • HTTPS
  • HTTP POST
  • Multipart formpost
  • -d vs -F
  • Redirects
  • Modify the HTTP request
  • HTTP PUT
  • Cookies
  • HTTP/2
  • Alternative Services
  • HTTP/3
  • HSTS

FTP (10 min)

  • Authentication
  • Directories
  • Uploading
  • Custom FTP commands
  • Two connections
  • Directory traversing

Rounding off (5 min)

  • How to dig deeper
  • Where is curl going

curl write-out to files

The curl option --write-out is one of my personal favorites and offers users an excitingly powerful way to output information from a transfer. Over time, it has been extended to provide more and more features.

It was for example not that long ago we added the ability to output the contents of specific headers with %header{} to this option.

Now (shipping in the coming curl 8.3.0, merged in commit 1032f56efa) we take the next step and add yet another nifty little function that makes this option even more powerful and lets you use it for even more purposes going forward.

It can now save the selected info to a specific file instead of just outputting to stdout or stderr. Or to multiple files. Or append to files. With %output{}.

Examples

Write the used IP address of the remote host to a file named “remote.txt”:

curl -w "%output{remote.txt}%{remote_ip}" https://example.com

Get the same information, but append it to the remote.txt file:

curl -w "%output{>>remote.txt}%{remote_ip}" https://example.com

Output the HTTP response code from the HTTP server to stderr and then append it to the file “log.txt” as well:

curl -w "%{response_code}%output{>>log.txt}%{response_code}" https://example.com

Enjoy!

introducing curl command line variables

If you are anything like me, you appreciate solving your everyday simple tasks directly from the command line. Creating crafty single-shot command lines or small shell scripts to solve that special task you figured out you needed, making your day go a little smoother. A fellow command line cowboy.

Video presentation

Background

To make life easier for curl users, the tool supports “config files”. They are a set of command line options written in a text file that you can point the curl tool to use. By default curl will check for and use such a config file named .curlrc if placed in your home directory.

One day not too long ago, a user over in the curl IRC channel asked me if it was possible to use environment variables in such config files to avoid having to actually store secrets directly in the file.

Variables

This new variable system that we introduce in curl 8.3.0 (commit 2e160c9c65) makes it possible to use environment variables in config files. But it does not stop there. It allows lots of other fun things.

First off, you can set named variables on the command line. Like:

curl --variable name=content

or in the config file:

variable=name=content

A variable name must only consist of a-z, A-Z, 0-9 or underscore (up to 128 characters). If you set the same name twice, the second set will overwrite the first.

There can be an unlimited number of variables. A variable can hold up to 10M of content. Variables are set in a left to right order as curl parses the command line or config file.

Assign

You can assign a variable a plain fixed string as shown above. Optionally, you can tell curl to populate it with the contents of a file:

curl --variable name@filename

or straight from stdin:

curl --variable name@-

Environment variables

The variables mentioned above are only present in the curl command line. You can also opt to “import” an environment variable into this context. To import $HOME:

curl --variable %HOME

In the case above, curl will exit if there is no environment variable by that name. Optionally, you can set a default value for the case where the variable does not exist:

curl --variable %HOME=/home/nouser

Expand variables

All variables that are set or “imported” as described above can be used in subsequent command line option arguments – or in config files.

Variables must be explicitly asked for, to make sure they do not cause problems for older command lines or for users when they are not desired. To accomplish this, we introduce the --expand- option prefix.

Only when you use the --expand- prefix in front of an option will the argument get variables expanded.

You reference (expand) a variable like {{name}}. That means two open braces, the variable name and then two closing braces. This sequence will then be replaced by the contents of the variable and a non-existing variable will expand as blank/nothing.

Trying to show a variable that contains a null byte causes an error.

Examples

Use the variable named ‘content’ in the argument to --data, telling curl what to send in a HTTP POST:

--expand-data "{{content}}"

Create the URL to operate on by inserting the variables ‘host’ and ‘user’:

--expand-url "https://{{host}}/user/{{user}}"

Expand variables in --variable

--variable itself can be expanded when you want to create a new variable that uses content from one or more other variables. Like:

--expand-variable var1={{var2}}
--expand-variable fullname='Mrs {{first}} {{last}}'
--expand-variable source@{{filename}}

Expansion functions

When expanding a variable, functions can be applied. Like this: {{name:function}}

Such variable functions alter how the variable is expanded. How it gets output.

Multiple functions can be applied in a left-to-right order:
{{name:func1:func2:func3}}.

curl offers four different functions to help you expand variables in the most productive way: trim, json, url and b64:

  • trim – removes leading and trailing whitespace
  • json – outputs the variable JSON quoted (but without surrounding quotes)
  • url – shows the string URL encoded, also sometimes called percent encoding
  • b64 – shows the variable base64 encoded

Function examples

Expands the variable URL encoded. Also known as “percent encoded”.

--expand-data "name={{name:url}}"

To trim the variable first, apply both functions (in the right order):

--expand-data "name={{name:trim:url}}"

Send the HOME environment variable as part of a JSON object in a HTTP POST:

--variable %HOME
--expand-data "{ \"homedir\": \"{{HOME:json}}\" }"
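
To tie it all together: a hypothetical end-to-end example that imports an environment variable, gives it a default and then uses it both in the URL and in a JSON POST body (the host name and path are made up for illustration):

curl --variable %USER=nouser \
     --expand-url "https://example.com/api/{{USER:url}}" \
     --expand-data "{ \"user\": \"{{USER:json}}\" }"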


The curl fragment trick

curl supports globbing in the sense that you can provide ranges or lists in the URL that will make curl iterate, loop, over all the different variations and do a separate transfer for each.

For example, get ten images in a numeric range:

curl "https://example.com/image[1-10].jpg" -O

Or get them when named after some weekdays:

curl "https://example.com/{Monday,Tuesday,Friday}.jpg" -O

Naming the output

The examples above use -O which makes curl use the same name for the destination file as is used in the effective URL. Convenient, but not always what you want.

curl also allows you to refer to the number or name from the range or list and use that when naming your output files, which helps you do better globbing.

For example, maybe the file name part of the URL is actually the same and you iterate over another difference in the URL. Like this:

curl "https://example.com/{Monday,Tuesday,Friday}/image" -o #1.jpg

The #1 part in the example is a reference back to the first list/range. You can use several lists and ranges in the same URL, even of mixed types, and then use multiple #-references in the same command line. To illustrate, here is a simple example using two iterators to download three hundred images:

curl "https://{red,blue,green}.example.com/image[1-100].jpg" -o "#2-#1-stored.jpg"

There is actually no upper limit to how many transfers you can do like this with curl, other than that the numeric ranges only deal with up to 64 bit numbers.

Hundreds? Maybe go parallel

If you actually do come up with a command line that needs to transfer several hundred or more resources, then maybe consider adding -Z, --parallel to the mix so that curl performs many transfers simultaneously, in parallel. This can drastically reduce the total time needed for completing the task.

curl runs up to 50 transfers in parallel by default when this option is used, but you can also tweak this amount with --parallel-max.
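
For example, the three hundred image downloads from above could be done with at most 20 simultaneous transfers like this:

curl -Z --parallel-max 20 "https://{red,blue,green}.example.com/image[1-100].jpg" -o "#2-#1-stored.jpg"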

A fragment trick

Okay, so now we finally arrive at the fragment and the trick mentioned in the title.

If you want to do several repeated transfers but not actually change the URL then the examples above do not satisfy you as they change the URL for every new transfer.

A neat trick is then to add a fragment part to the URL you use, and then do the globbing there. The fragment is the rightmost part of a URL that starts with a #-character and continues to the end of the URL.

A fragment can always be added to a URL, but the fragment is never actually transmitted over the network so the remote server is not aware of it.

Get the same URL ten times, saved in different target files:

curl "https://example.com/index.html#[1-10]" -o #1.html

If you would rather name the outputs according to some scheme, you can of course just list the names in the glob:

curl "https://example.com/index.html#{mercury,venus,earth,mars}" -o #1.html

Maybe slower

In cases where you transfer the same URL many times, chances are you do it because the content changes at some interval. Perhaps you then do not want the transfers done as fast as possible, since then the contents may not have had time to update in between.

To help you pace the transfers to get the same thing over and over in a more controlled manner, curl offers --rate. With this you can tell curl to not do it faster than N transfers per given period.

If the URL contents update every 5 minutes, then doing the transfer 12 times per hour seems suitable. Let’s do it 2016 times to have the operation run non-stop for a week:

curl "https://example.com/index.html#[1-2016]" --rate 12/h -o "#1.html"

Append data to the URL query

A new curl option was born: --url-query.

How it started

curl offered the -d / --data option already in its first release back in 1998. curl 4.0. A trusted old friend.

curl also has some companion versions of this option that work slightly differently, but they all have the common feature that they append data to the request body. Put simply: with these options users construct the body contents to POST. Very useful and powerful. Still today one of the most commonly used curl options, for apparent reasons.

curl -d name=mrsmith -d color=blue https://example.com

Convert to query

A few years into curl’s lifetime, in 2001, we introduced the -G / --get option. This option lets you use -d to create a data set, but the data is not sent as a POST body; instead it is converted to a query string and used in a GET request.

curl -G -d name=mrsmith -d color=blue https://example.com

This would make curl send a GET request to this URL: https://example.com/?name=mrsmith&color=blue

The “query” is the part of the URL that sits on the right side of the question mark (but before the fragment, which, if present, starts with the first # following the question mark).

URL-encode

In 2008 we added --data-urlencode, which made it even easier for users and scripts to use these options correctly, as now curl itself can URL-encode the given data instead of relying on the user to do it. Previously, script authors would have to do that encoding before passing the data to curl, which was tedious and error-prone. This feature also works in combination with -G of course.

How about both?

The -d options family make a POST. The -G converts it to a GET.

If you wanted convenient curl command line options to both create content to send in the POST body and to add query parameters to the URL, you were however out of luck. You would then have to go back to -d and handcraft and encode the query parameters “manually”.

Until curl 7.87.0. Due to ship on December 21, 2022. (this commit)

--url-query is your new friend

This is curl’s 249th command line option and it lets you append parameters to the query part of the given URL, using the same syntax as --data-urlencode uses.

Using this, a user can now conveniently create a POST request body and at the same time add a set of query parameters for the URL which the request uses.

A basic example that sends the same POST body and URL query:

curl -d name=mrsmith -d color=blue --url-query name=mrsmith --url-query color=blue https://example.com

Syntax

I told you it uses the --data-urlencode syntax, but let me remind you how that works. You use --url-query [data] where [data] can be provided in these different ways:

  • content – This will make curl URL-encode the content and pass that on. Just be careful so that the content does not contain any = or @ symbols, as that will then make the syntax match one of the other cases below!
  • =content – This will make curl URL-encode the content and pass that on. The preceding = symbol is not included in the data.
  • name=content – This will make curl URL-encode the content part and pass that on. Note that the name part is expected to be URL-encoded already.
  • @filename – This will make curl load data from the given file (including any newlines), URL-encode that data and pass it on in the query.
  • name@filename – This will make curl load data from the given file (including any newlines), URL-encode that data and pass it on in the query. The name part gets an equal sign appended, resulting in name=urlencoded-file-content. Note that the name is expected to be URL-encoded already.
  • +content – The data is provided as-is, unencoded.

For each new --url-query, curl will insert an ampersand (&) between the parts it adds to the query.
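
As an illustration, a command line like this would (assuming the rules above) make curl use the query ‘existing=yes&name=John%20Doe&tool=curl’ in its request:

curl --url-query "name=John Doe" --url-query tool=curl "https://example.com/?existing=yes"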

Replaces -G

This new friend we call --url-query makes -G rather pointless, as this is a more powerful option that does everything -G ever did and a lot more. We will of course still keep -G supported and working. Because that is how we work.

A boring fact of life is that new versions of curl trickle out into the world rather slowly to ordinary users. Because of this, we can be certain that scripts and users all over will need to keep using -G for yet another undefined period of time.

Trace

Finally: remember that if you want curl to show you what it sends in a POST request, the normal -v / --verbose does not suffice as it will not show you the request body. You then rather need to use --trace or --trace-ascii.
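
For example, to watch the POST from the earlier example go over the wire, something like this (with ‘-’ telling --trace-ascii to write to stdout) would do:

curl --trace-ascii - -d name=mrsmith -d color=blue https://example.com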

convert a curl cmdline to libcurl source code

The dash-dash-libcurl is the sometimes missed curl gem that you might want to know about and occasionally maybe even use.

How do I convert

There is a very common pattern in curl land: a user who is writing an application using language L first does something successfully with the curl command line tool and then the person wants to convert that command line into the program they are writing.

How do you convert this curl command line into a transfer using programming language XYZ?

Language bindings

There is a huge amount of available bindings for libcurl. Bindings, as in adjustments and glue code for languages to use libcurl for Internet transfers so that they do not have to write the application in C.

At a recent count I found more than 60 libcurl bindings. They allow authors of virtually any programming language to do internet transfers using the powers of libcurl.

Most of these bindings are “thin” and follow the same style of the original C API: You create a handle for which you set options and then you do the transfer. This way they are fairly easy to keep up to date with the always-changing and always-improving libcurl releases.

Say hi to --libcurl

Most curl command line options are mapped one-to-one to an underlying libcurl option so for some time I tried to help users by explaining what they map to. Until one day I realized that

Hey! curl itself already has this mapping in source code, it just needs a way to show it to users!

How would it best output this mapping?

The --libcurl command line option was added to curl already back in 2007 and has been part of the tool since version 7.16.1.

Show this command line as libcurl code

It is really easy to use too.

  1. Create a complicated curl command line that does what you need it to do.
  2. Append --libcurl example.c to the command line and run it again.
  3. Inspect your newly generated file example.c for how you could write your application to do the same thing with libcurl.

If you want to use a libcurl binding rather than the C API, like perhaps write your code in Python or PHP, you need to convert the example code into your programming language, but thanks to the options keeping their names across the different bindings that is usually a trivial task.

Example

The simplest possible command line:

curl curl.se

It gets HTTP from the site “curl.se”. If you try it, you will not see anything because it just replies with a redirect to the HTTPS version of the page. But ignoring that, you can convert that action into a libcurl program like this:

curl curl.se --libcurl code.c

The newly created file code.c now contains a program that you can compile:

gcc -o getit code.c -lcurl

and then run

./getit

You might want your program to rather follow the redirect? Maybe even show debug output? Let’s rerun the command line and get a code update:

curl curl.se --verbose --location --libcurl code.c

Now, if you rebuild your program and run it again, it shows you the front page HTML of the curl website on stdout.

The code

The exact code my curl version 7.85.0 produced in the command line above is shown below.
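
(The exact output varies with curl version and build configuration, so the listing below is a representative reconstruction of what this command generates, not a verbatim copy.)

#include <curl/curl.h>

int main(int argc, char *argv[])
{
  CURLcode ret;
  CURL *hnd;

  hnd = curl_easy_init();
  curl_easy_setopt(hnd, CURLOPT_BUFFERSIZE, 102400L);
  curl_easy_setopt(hnd, CURLOPT_URL, "curl.se");
  curl_easy_setopt(hnd, CURLOPT_NOPROGRESS, 1L);
  curl_easy_setopt(hnd, CURLOPT_USERAGENT, "curl/7.85.0");
  curl_easy_setopt(hnd, CURLOPT_MAXREDIRS, 50L);
  curl_easy_setopt(hnd, CURLOPT_FOLLOWLOCATION, 1L);
  curl_easy_setopt(hnd, CURLOPT_VERBOSE, 1L);
  /* Here is a list of options the curl code used that cannot get
     generated as source easily. You may choose to either not use
     them or implement them yourself.

  CURLOPT_WRITEDATA was set to an object pointer
  CURLOPT_WRITEFUNCTION was set to a function pointer
  CURLOPT_DEBUGDATA was set to an object pointer
  CURLOPT_DEBUGFUNCTION was set to a function pointer

  */
  ret = curl_easy_perform(hnd);

  curl_easy_cleanup(hnd);
  hnd = NULL;

  return (int)ret;
}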

You see several options that are commented out. Those were used by the command line tool but there is no easy or convenient way to show their use in the example. Often you can start out by just skipping those.

curl offers repeated transfers at slower pace

curl --rate is your new friend.

This option is for when you use curl to do many requests in a single command line, but you want curl to not do them as quickly as possible. You want curl to do them no more often than at a certain interval. This is a way to slow down the request frequency curl would otherwise possibly use. Tell curl to do the transfers no faster than…

This is a completely different and separate option from the transfer speed rate limit option --limit-rate that has existed for a long time.

A primary reason for using this option is when the server end has a certain capped acceptance rate or other cases where you know it makes no sense to do the requests faster than at a certain interval.

You specify the maximum transfer frequency you allow curl to use – in number of transfer starts per time unit (sometimes called request rate) – with the new --rate option.

Set the fastest allowed rate with --rate "N/U" where N is an integer number and U is a time unit. Supported units are ‘s’ (for second), ‘m’ (for minute), ‘h’ (for hour) and ‘d’ (for day, as in a 24 hour unit). If no “/U” is provided, the default unit is transfers per hour.

For example, to make curl not do its requests faster than twice per minute, use --rate 2/m but if you rather want 25 per hour, then use --rate 25/h.

If curl is provided several URLs and a single transfer completes faster than the allowed rate, curl will wait before it kicks off the next transfer in order to maintain the requested rate and not go faster. If curl is told to allow 10 requests per minute, it will not start the next request until 6 seconds have elapsed since the previous transfer started.
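
For example, this would start one new transfer at most every six seconds until all thirty are done:

curl --rate 10/m -O "https://example.com/file[1-30].txt"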

This option has no effect when --parallel is used. Primarily because you then ask for the transfers to happen simultaneously and we have not figured out how this option should affect such transfers!

This functionality uses a millisecond resolution timer internally. If the allowed frequency is set to more than 1000 transfer starts per second, it will instead run unrestricted.

When retrying transfers, enabled with --retry, the separate retry delay logic is used and not this setting.

Rate-limiting response headers

There is ongoing work to standardize HTTP response headers for the purpose of rate-limiting. (See RateLimit Header Fields for HTTP.) Using these headers, a server can tell the client what the maximum allowed transfer rate is and it can adapt.

This new command line option, however, does nothing with any such new headers, but I think it would make sense to make a future version able to check for the rate-limit headers and, if opted-in, adapt to those instead of the frequency set by the user.

A challenge with these ratelimit headers vs the --rate command line option is of course that the response headers for this will return the rules for a given site/API, and curl might have been told to talk to many different sites which might all have different (or no) rates in their headers. Also, the command line option works for all protocols curl supports, not just HTTP(S).

Ship it

This feature is due in the pending curl 7.84.0 release.


Easier header-picking with curl

Okay you might ask, what’s the news here? We’ve been able to get HTTP response headers with curl since virtually the stone age. Yes we have. Get the page and also show the headers:

curl -i https://example.com/

Make a HEAD request and see what headers we get back:

curl -I https://example.com/

Save the response headers in a separate file:

curl -D headers.txt https://example.com/

Get a specific header

This gets a little more complicated but you can always do

curl -I https://example.com/ | grep Date:

Which of course will fail if the casing is different, so you need to check for it case insensitively. There might also be another header ending with “date:” that matches, so you need to make sure that this is an exact match:

curl -I https://example.com/ | grep -i ^Date:

Now this shows the entire header, but for most cases you only want the value. So get it with cut:

curl -I https://example.com/ | grep -i ^Date: | cut -d: -f2-

You have the header value extracted now, but the leading and trailing white spaces in the content are probably not what you want in there so let’s strip them as well:

curl -I https://example.com/ | grep -i ^Date: | cut -d: -f2- | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//'

There are of course many different ways you can do this operation and some of them are more clever than the methods I’ve used here. They are still often more or less convoluted and error-prone.

If we imagine that this is a fairly common use case for curl users in the world, then this kind of operation is found duplicated in quite a few scripts, applications and devices in the world.

Maybe we could make this easier for curl users?

A headers API

The other day we introduced a new experimental headers API to libcurl. Using this API, an application using libcurl gets an easy to use API to extract individual or several headers and their content.

As curl is such a libcurl-using application, we have expanded it to make use of this new API and this brings some new fun features to the curl tool.

Let me emphasize that since this API is labeled experimental it is not enabled in a default build. You need to explicitly enable it!

Get a single header, the new way

I decided to extend the -w output feature for this.

To extract a single header, get the value with leading and trailing spaces trimmed, use %header{name}. To repeat the operation from above and get the Date: header

curl -I -w '%header{date}' https://example.com/

‘date’ in this example is a case insensitive header name without the trailing colon and you can of course use any header name you please there. If the given header did not actually arrive in the response, it outputs nothing.

If you want more headers output, just repeat the %header{name} construct as many times as you like. If the -w output string gets unwieldy and hard to manage on the command line, then make it into a text file instead and tell -w about it with -w @filename.

curl -I -w @filename https://example.com/
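
The referenced file could then for example contain something like this, to output two trimmed header values on separate lines:

date: %header{date}\nserver: %header{server}\n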

Which headers?

There are several different kinds of headers and there can be multiple requests used for a transfer, but this option outputs the “normal” server response headers from the most recent request done. The option only works for HTTP(S) responses.

All headers – as JSON

As dealing with formatted data in the form of JSON has become very popular, I want to help fertilize this by making curl able to output all response headers as a JSON object.

This way, you can move the header handling, parsing and perhaps filtering to your JSON aware tool.

Tell curl to output the received HTTP headers as a JSON object:

curl -o save -w "%{header_json}" https://example.com/

curl itself does not pretty-print this, but if you pass the JSON from curl to a beautifier such as jq, the output ends up looking like this:
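
For example, with a command line like this (-s silences the progress meter while the body still goes to the file ‘save’):

curl -s -o save -w "%{header_json}" https://example.com/ | jq .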

{
  "age": [
    "269578"
  ],
  "cache-control": [
    "max-age=604800"
  ],
  "content-type": [
    "text/html; charset=UTF-8"
  ],
  "date": [
    "Tue, 22 Mar 2022 08:35:21 GMT"
  ],
  "etag": [
    "\"3147526947+ident\""
  ],
  "expires": [
    "Tue, 29 Mar 2022 08:35:21 GMT"
  ],
  "last-modified": [
    "Thu, 17 Oct 2019 07:18:26 GMT"
  ],
  "server": [
    "ECS (nyb/1D2E)"
  ],
  "vary": [
    "Accept-Encoding"
  ],
  "x-cache": [
    "HIT"
  ],
  "content-length": [
    "1256"
  ]
}

JSON details

The headers are presented in the same order as received over the wire. Except if there are duplicated header names, as then they are grouped on the first occurrence and all values are provided there as a JSON array.

All headers are arrays just because there can be multiple headers using the same name.

The casing of the header names is kept unmodified from what was received, but for duplicated headers the casing used for the first occurrence will be used in the output.

Update: we lowercase all header names in the JSON output.

The “status line” of an HTTP/1.x response, that first line that says “HTTP/1.1 200 OK” when everything is fine, is not counted as a header by this function and will therefore not be included in this output.

Ships in 7.83.0

This feature is present in source code that will ship in curl 7.83.0, scheduled to happen late April 2022. Run your own build with it enabled, or ask your packager to provide an experimental build for you.

With enough positive feedback we should be able to move this out of experimental state fairly quickly.

curl no clobber

Do you remember August 26 2002? I can’t say I particularly do but the curl git log remembers for us that it was on that day we added this TODO item:

Add an option that prevents cURL from overwriting existing local files. When used, and there already is an existing file with the target file name (either -O or -o), a number should be appended (and increased if already existing).

That idea hadn’t even been listed for twenty years before it was converted into code by HexTheDragon and landed in curl the other day (with this commit), to get included in the pending curl 7.83.0 release.

--no-clobber

This new command line option (curl’s 247th) is called --no-clobber and it works as suggested already back in 2002. If the output file already exists at the time when curl wants to create it, it will instead append a number to the end of the name. If that file also exists, curl retries iteratively with numbers up to 100 before it gives up and returns an error.

To help you write even cooler scripts. Oh, and the -w variable %{filename_effective} will show the file name that was actually used.
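
So if download.html already exists in the directory, something like this would presumably save the new file under a numbered variation of the name and print which name it picked:

curl --no-clobber -o download.html -w "%{filename_effective}\n" https://example.com/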

curl your own error message

The --write-out (or -w for short) curl command line option is a gem for shell script authors looking for more information from a curl transfer. Experienced users know that this option lets you extract things such as detailed timings, the response code, transfer speeds and sizes of various kinds. A while ago we even made it possible to output JSON.

Maybe the best resource to learn more about it is the dedicated section in Everything curl. You’ll like it!

Now even more versatile

In curl 7.75.0 (released on February 3, 2021) we introduce five new variables for this option, and I’ll elaborate on some of the fun new things you can do with these!

These new variables were invented when we received a bug report pointing out that when a user transfers many URLs in parallel and one or some of them fail, the error message does not identify exactly which of the URLs failed. We should improve the error messages to fix this!

Or wait a minute. What if we provide enough details for --write-out to let the user customize the error message completely by themselves and thus get exactly the info they want?

onerror

Using this, you can specify a message that only gets written if the transfer ends in error, that is, with a non-zero exit code. An example could look like this:

curl -w '%{onerror}failed\n' $URL -o saved -s

… if the transfer is OK, it says nothing. If it fails, the text on the right side of the “onerror” variable gets output. And that text can of course contain other variables!

This command line uses -s for “silent” to make it inhibit the regular built-in error message.

url

To help craft a good error message, maybe you want the URL included that was used in the transfer?

curl -w '%{onerror}%{url} failed\n' $URL

urlnum

If you get more than one URL in the command line, it might be helpful to get the index number of the used URL. This is of course especially useful if you for example work with the same URL multiple times in the same command line and just one of them fails!

curl -w '%{onerror}URL %{urlnum} failed\n' $URL $URL

exitcode

The regular built-in curl error message shows the exit code, as it helps diagnose exactly what the problem was. Include that in the error message like:

curl -w '%{onerror}%{url} got %{exitcode}\n' $URL

errormsg

This is the human readable explanation for the problem. The error message. Mimic the default curl error message like this:

curl -w '%{onerror}curl: %{exitcode} %{errormsg}\n' $URL

stderr

We already provide this “variable” from before, which allows you to make sure the output message is sent to stderr instead of stdout, which then makes it even more like a real error message:

curl -w '%{onerror}%{stderr}curl: %{exitcode} %{errormsg}\n' $URL

More

These new variables work fine after %{onerror}, but they of course also work to output info when there was no error, and they work perfectly fine whether you use -Z for parallel transfers or do them serially, one after the other.
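
Putting several of them together: a parallel transfer of two URLs with a custom error line written for each failed one could look something like this:

curl -Z -s -o one -o two -w '%{onerror}%{stderr}URL %{urlnum} (%{url}) failed: %{errormsg}\n' $URL $URL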