Tag Archives: cURL and libcurl

curl ootw: --mail-from

(older options of the week)

--mail-from has no short version. This option was added to curl 7.20.0 in February 2010.

I know this surprises many curl users: yes curl can both send and receive emails.

SMTP

curl can do “uploads” to an SMTP server, which usually has the effect that an email is delivered onward from that server. That is: curl can send an email.

When communicating with an SMTP server to send an email, curl needs to provide a certain set of data, and one of the mandatory fields is the email address of the sender.

It should be noted that this is not necessarily the same address as the one displayed in the From: field that the recipient’s email client will show. But it can be the same.

Example

curl --mail-from from@example.com --mail-rcpt to@example.com smtp://mail.example.com -T body.txt

Receiving emails

You can use curl to receive emails over POP3 or IMAP. SMTP is only for delivering email.
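For example, a minimal sketch of fetching mail over POP3 – the host name and credentials here are made up – first lists the messages in the mailbox and then downloads message number 1:

curl --user user:secret pop3://pop.example.com/

curl --user user:secret pop3://pop.example.com/1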

Related options

--mail-rcpt for the recipient(s), and --mail-auth for the authentication address.
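As a sketch of how these options can combine – assuming a submission server that requires TLS and a login, with made-up host name and credentials – an email with two recipients could be sent like this:

curl --ssl-reqd --user sender@example.com:secret --mail-from sender@example.com --mail-rcpt to@example.com --mail-rcpt cc@example.com -T body.txt smtp://mail.example.com:587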

curl is 8000 days old

Another pointless number that happens to be round and look nice so I feel a need to highlight it.

When curl was born WiFi didn’t exist yet. Smartphones and tablets weren’t invented. Other things that didn’t exist include YouTube, Facebook, Twitter, Instagram, Firefox, Chrome, Spotify, Google search, Wikipedia, Windows 98 or emojis.

curl was born in a different time, but also in the beginning of the explosion of the web and Internet Protocols. Just before the big growth wave.

In 1996 when I started working on the precursor to curl, there were around 250,000 web sites (sources vary slightly).

In 1998 when curl shipped, the number of sites was already around 2,400,000 – ten times as many in just those two years.

In early 2020, the number of web sites is somewhere around 1,700,000,000 to 2,000,000,000 (depending on who provides the stats). The number of web sites has thus grown at least 70,000% over curl’s 8000 days of life, and is perhaps as much as 8000 times the amount from when I first started working with HTTP clients.

One of the oldest still available snapshots of the curl web site is from the end of 1998, when curl was just a little over 6 months old. On that page we can read the following:

That “massive popularity” looks charming and possibly a bit naive today. The number of monthly curl downloads has also possibly grown by 8,000 times or so – by estimates only, as most users download curl from other places than our web site these days. Even more users get it installed as part of their OS or bundled with something else.

Thank you for flying curl.

(This day occurs only a little over a month before curl turns 22, there will be much more navel-gazing then, I promise.)

Image by Annie Spratt from Pixabay

curl ootw: --keepalive-time

(previously blogged about options are listed here.)

This option is named --keepalive-time. The command line option was introduced in curl 7.18.0 back in early 2008. There’s no short version of it.

The option takes a numerical argument: a number of seconds.

What’s implied in the option name and not spelled out is that the particular thing you ask to keep alive is a TCP connection. When the keepalive feature is not used, an idle TCP connection typically sends nothing at all over the wire.

Idle TCP connections

Silent TCP connections typically cause two primary issues:

  1. Middle-boxes that track connections, such as your typical NAT boxes (home WiFi routers most notoriously), will consider silent connections “dead” after a certain period of time and drop all knowledge about them, leading to the connection not functioning when the client (or server) later wants to resume operation of it.
  2. Neither side of the connection will notice when the network between them breaks, as it takes actual traffic to do so. This is of course also a feature, because there’s no need to be alarmed by a breakage when there’s no traffic – the network might be fine again by the time the connection eventually gets used.

TCP stacks then typically implement a low-level feature where they can send a “ping” frame over the connection if it has been idle for a certain amount of time. This is the keepalive packet.

--keepalive-time <seconds> therefore sets the interval. After this many seconds of “silence” on the connection, a keepalive packet is sent. The packet is totally invisible to the applications on both sides but will maintain the connection through NATs better, and if the connection is broken, this packet will make curl detect it.

Keepalive is not always enough

To complicate issues even further, there are also devices out there that will still close down connections if they only see TCP keepalive packets and no data for a certain period. Several protocols on top of TCP have their own keepalive alternatives (sometimes called ping) for this and other reasons.

This aggressive style of closing connections that carry no actual TCP traffic typically hurts long-running FTP transfers. This is because FTP sets up two connections for a transfer: the first one is the “control connection”, and while a transfer is being delivered on the “data connection”, nothing happens over the first one. This can then result in the control connection being “dead” by the time the data transfer completes!

Default

The default keepalive time is 60 seconds. You can also disable keepalive completely with the --no-keepalive option.

The default time has been selected to be fairly low because many NAT routers out there in the wild are fairly aggressive and close idle connections already after two minutes (120 seconds).

For what protocols

This works for all TCP-based protocols, which is what most protocols curl speaks use. The only exception right now is TFTP. (See also QUIC below.)

Example

Change the interval to 3 minutes:

curl --keepalive-time 180 https://example.com/

Related options

Related functionality is provided by the --speed-limit and --speed-time options, which will cancel a transfer if the transfer speed drops below a given speed for a certain time. Or just --max-time, which sets a global timeout for an entire operation.
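For example, this sketch would abort the transfer if the speed stays below 1000 bytes per second for 30 seconds, or if the entire operation takes longer than ten minutes:

curl --speed-limit 1000 --speed-time 30 --max-time 600 https://example.com/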

QUIC?

Soon we will see QUIC getting used instead of TCP for some protocols: HTTP/3 being the first in line for that. We will have to see what exactly we do with this option when QUIC starts to get used and what the proper mapping and behavior shall be.

curl ootw: -U for proxy credentials

(older options of the week)

-U, --proxy-user

The short version of this option uses the uppercase letter ‘U’. The case matters, since the lowercase letter ‘u’ is used for another option. The longer form of the option is spelled --proxy-user.

This command option existed already in the first ever curl release!

The man page‘s first paragraph describes this option as:

Specify the user name and password to use for proxy authentication.

Proxy

This option is for using a proxy. So let’s first briefly look at what a proxy is.

A proxy is a “middle man” in the communication between a client (curl) and a server (the one that holds the contents you want to download or will receive the content you want to upload).

The client communicates via this proxy to reach the server. When a proxy is used, the server communicates only with the proxy and the client also only communicates with the proxy:

curl <===> proxy <===> server

There exist several different types of proxies, and a proxy can require authentication before it allows you to use it.

Proxy authentication

Sometimes the proxy you want or need to use requires authentication, meaning that you need to provide your credentials to the proxy in order to be allowed to use it. The -U option is used to set the name and password used when authenticating with the proxy (separated by a colon).

You need to know this name and password, curl can’t figure them out – unless you’re on Windows and your curl is built with SSPI support as then it can magically use the current user’s credentials if you provide blank credentials in the option: -U :.
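As a sketch, on such a Windows build with SSPI and with a made-up proxy host name, that would look like:

curl -x http://proxy.example.com:8080 -U : https://example.com/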

Security

Providing passwords in command lines is a bit icky. If you write it in a script, someone else might see the script and figure them out.

If the proxy communication is done in clear text (for example over HTTP) some authentication methods (for example Basic) will transmit the credentials in clear text across the network to the proxy, possibly readable by others.

Command line options may also appear in process listing so other users on the system can see them there – although curl will attempt to blank them out from ps outputs if the system supports it (Linux does).

Needs other options too

A typical command line that uses -U also sets at least which proxy to use, with the -x option taking a URL that specifies the proxy type, the proxy host name and the port number the proxy runs on.

If the proxy is an HTTP or HTTPS type, you might also need to specify which type of authentication you want to use. For example with --proxy-anyauth to let curl figure it out by itself.

If you know what HTTP auth method the proxy uses, you can also explicitly enable that directly on the command line with the correct option. Like for example --proxy-basic or --proxy-digest.
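For example, a sketch that assumes the proxy speaks Digest authentication (proxy host name and credentials are made up):

curl -x http://proxy.example.com:8080 --proxy-digest -U user:password https://example.com/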

SOCKS proxies

curl also supports SOCKS proxies, which are a different type than HTTP or HTTPS proxies. When you use a SOCKS proxy, you need to tell curl that, either with the correct scheme prefix in the -x argument or with one of the --socks* options.
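For example, either of these sketches (with a made-up proxy host name) would send the transfer through a SOCKS5 proxy:

curl -x socks5://proxy.example.com:1080 https://example.com/

curl --socks5 proxy.example.com:1080 https://example.com/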

Example command line

curl -x http://proxy.example.com:8080 -U user:password https://example.com

See Also

The corresponding option for sending credentials to a server instead of proxy uses the lowercase version: -u / --user.

Remote-exploiting curl

In a Blackhat 2019 presentation, three gentlemen from the Tencent Blade Team explained how they found and managed to exploit two curl flaws. Both related to NTLM over HTTP. The “client version Heartbleed” as they call it.

Reported responsibly

The Tencent team already reported the bugs responsibly to us and we already fixed them back in February 2019, but the talk is still very interesting I think.

From my point of view, having already discussed these bugs with the team when they were reported to us and while I worked on fixing them, I find it very interesting and educational to learn more about how exactly they envision an attacker would go about exploiting them in practice. My imagination is sometimes too limited to really picture how bad exactly the problems can end up when a creative attacker gets to play with them.

The security issues

The two specific issues these stellar gents found are already fixed since curl 7.64.0 and you can read all the gory details about them here: CVE-2018-16890 and CVE-2019-3822. The latter is clearly the worse issue.

For all I know, these exploits have never been seen or reported to happen in real life.

Upgrade?

Luckily, most distros that ship older curl versions still back-port and apply later security patches so even if you may see that you have an older curl version installed on your system, chances are it has already been patched. Of course there’s also a risk that it hasn’t, so you should probably make sure rather than presume…

The video

The slides from their presentation. (The talk also details SQLite issues but they’re completely separate from the curl ones.)

Bug Bounty?

Unfortunately, I’m sorry to admit that these excellent friends of ours did not get a bug bounty from us! 🙁

We got their reports before our bug bounty was set up, and we had neither the means nor the methods to reward them back then. If someone were to report such serious bugs now, only a year later, we would probably reward new such findings with several thousand dollars.

On NTLM

NTLM was always wrong, bad and a hack. It’s not an excuse for having bugs in our code but man if someone could just please make that thing go away…

curl ootw: -k really means insecure

(ootw stands for option of the week)

Long form: --insecure. The title is a little misleading, but this refers to the lowercase -k option.

This option has existed in the curl tool since the early days, and has been frequently misused ever since. Or I should perhaps call it “overused”.

Truly makes a transfer insecure

The name of the long form of this option was selected and chosen carefully to properly signal what it does: it makes a transfer insecure that could otherwise be done securely.

The option tells curl to skip the verification of the server’s TLS certificate – it will skip the cryptographic certificate verification (that it was signed by a trusted CA) and it will skip other certificate checks, like that it was made for the host name curl connects to and that it hasn’t expired etc.

For SCP and SFTP transfers, the option makes curl skip the known hosts verification. SCP and SFTP are SSH-based and they don’t use the general CA based PKI setup.

If the transfer isn’t using TLS or SSH, then this option has no effect or purpose as such transfers are already insecure by default.

Unable to detect MITM attacks

When this option is used, curl cannot detect man-in-the-middle attacks as it no longer checks that it actually connects to the correct server. That is insecure.

Slips into production

One of the primary reasons why you should avoid this option in your scripts as far as possible is that it is very easy to let it slip through and get shipped in production scripts, and then you have practically converted perfectly secure transfers into very insecure ones.

Instead, you should work on getting an updated CA cert bundle that holds certificates so that you can verify your server. For test and local servers you can get the server cert and use that to verify it subsequently.

Alternatively, if you run a copy of your production server as a test or local server and you just have a server name mismatch, you can fix that in your test scripts too, simply by telling curl which server name to use.
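As a sketch of both approaches – assuming you have saved the test server’s certificate in a local file, and using --resolve as one way to let the URL carry the name the certificate was made for while actually connecting to the test host:

curl --cacert ./testserver.pem https://localhost:8443/

curl --cacert ./testserver.pem --resolve www.example.com:443:127.0.0.1 https://www.example.com/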

libcurl options

In libcurl speak, the -k functionality is accomplished with two different options: CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER. You should not allow your application to set them to 0.

Since many bindings to libcurl use the same option names you can also find PHP programs etc setting these to zero, and you should always treat that as a warning sign!

Why does it exist?

Every now and then people suggest we should remove this option.

It does serve a purpose in the chicken-and-egg scenario where you don’t have a proper certificate locally to verify your server against, and sometimes the server you want to quickly poll is wrongly configured so that a proper check fails.

How do you use it?

curl -k https://example.com

You’d often use it in combination with -v to view the TLS and certificate information in the handshake so that you can fix it and remove -k again.

Related options

--cacert tells curl where to find the CA cert bundle. If you use an HTTPS proxy and want the -k functionality for the proxy itself, you want --proxy-insecure.

An alternative approach to using a CA cert bundle for TLS based transfers is to use public key pinning, with the --pinnedpubkey option.
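For example, a minimal pinning sketch – the sha256// value below is only a placeholder for the base64-encoded hash of your server’s public key:

curl --pinnedpubkey 'sha256//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=' https://example.com/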

curl cheat sheet refresh

Several years ago I made a first version of a “curl HTTP cheat sheet” to facilitate the most common curl command line options when working with HTTP.

This has now been refreshed after I took lots of feedback from friends on twitter, and then worked on rearranging the lines and columns so that it got as compact as possible without sacrificing readability (too much).

See the updated online version here.

The curl HTTP cheat sheet, 2020 edition.

The plan is to make stickers out of this – and possibly t-shirts as well. I did some test prints and deemed that with a 125 mm width, all the text is still clearly readable.

If things go well, I’ll hand out these beauties at curl up 2020 and of course leftovers will then be given away to whoever I run into at other places and conferences where I bring stickers…

curl ootw: -m, --max-time

(Previous option on the week posts.)

This is the lowercase -m. The long form is called --max-time.

One of the oldest command line options in the curl tool box. It existed already in the first curl release ever: 4.0.

This option also requires a numerical argument. That argument can be an integer number or a decimal number, and it specifies the maximum time – in seconds – that you allow the entire transfer to take.

The intended use for this option is basically to provide a means for a user to stop a transfer that stalls or hangs for a long time due to whatever reason. Since transfers may sometimes be very quick and sometimes be ridiculously slow, often due to factors you cannot control or know about locally, figuring out a sensible maximum time value to use can be challenging.

Since -m covers the whole operation, even a slow name resolve phase can trigger the timeout if you set it too low. Setting a very low max-time value is almost guaranteed to occasionally abort otherwise perfectly fine transfers.

If the transfer is not completed within the given time frame, curl will return with error code 28.

Examples

Do the entire transfer within 10 seconds or fail:

curl -m 10 https://example.com/

Complete the download of the file within 2.25 seconds or return an error:

curl --max-time 2.25 https://example.com

Caveat

Due to restrictions in underlying system functions, some curl builds cannot specify -m values under one second.

Related

Related, and often better, options to use include --connect-timeout and the --speed-limit and --speed-time combo.
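For example, this sketch gives up if establishing the connection takes more than five seconds, or if the transfer speed stays below 1000 bytes per second for 20 seconds, without putting a hard cap on an otherwise healthy transfer:

curl --connect-timeout 5 --speed-limit 1000 --speed-time 20 https://example.com/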

You’re invited to curl up 2020: Berlin

The annual curl developers conference, curl up, is being held in Berlin this year; May 9-10 2020.

Starting now, you can register for the event to be sure that you have a seat. The number of available seats is limited.

Register here

curl up is the main (and only?) event of the year where curl developers and enthusiasts get together physically in a room for a full weekend of presentations and discussions on topics that are centered around curl and its related technologies.

We move the event around to different countries every year to accommodate different crowds better and worse every year – and this time we’re back again in Germany – where we once started the curl up series back in 2017.

The events are typically small with a very friendly spirit – 20-30 persons attend.

Sign up

We will only be able to let you in if you have registered – and received a confirmation. There’s no fee – but if you register and don’t show, you lose karma.

The curl project can even help fund your travel and accommodation expenses (if you qualify). We really want curl developers to come!

Register here

Date

May 9-10 2020. We’ll run the event both days of that weekend.

Agenda

The program is not done yet and will not be so until just a week or two before the event, and then it will be made available => here.

We want as many voices as possible to be heard at the event. If you have done something with curl, want to do something with curl, have a suggestion etc – even just a short talk will be much appreciated. Or if you have a certain topic or subject you really want someone else to speak about, let us know as well!

Expect topics to be about curl, curl internals, Internet protocols, how to improve curl, what to not do in curl and similar.

Location

We’ll be in the co.up facilities in central Berlin. On Adalbertstraße 8.

Planning and discussions on curl-meet

Everything about the event, planning for it, volunteering, setting it up, agenda bashing and more will be done on the curl-meet mailing list, dedicated for this purpose. Join in!

The Stockholm edition.

Anti Google?

If you have a problem with filling in the Google-hosted registration form, please email us at curlup@haxx.se and we’ll ask you for the information over email instead.

Credits

The images were taken by me, Daniel. The top one at the Nuremberg curl up in 2017, the in-article photo in Stockholm 2018.

Backblazed

I’m personally familiar with Backblaze as a fine backup solution that I’ve helped my parents-in-law set up and use. I’ve found it reliable and easy to use, and I would recommend it to others.

Over the Christmas holidays 2019, someone emailed me and mentioned that Backblaze has stated that they use libcurl, yet there’s no license or other information about this anywhere in the current version, nor on their web site. (I’m always looking for screenshotted curl credits, or for data to use as input when trying to figure out how many curl installations there are or how many internet transfers per day are done with curl…)

libcurl is MIT licensed (well, a slightly edited MIT license) so there’s really not a lot a company needs to do to follow the license, nor does it leave me with a lot of “muscles” or remedies in case anyone would blatantly refuse to adhere. However, the impression I had was that this company was one that tried to do right, and this omission could then simply be a mistake.

I sent an email. Brief and focused. Can’t hurt, right?

Immediate response

Brian Wilson, CTO of Backblaze, replied to my email within hours. He was very friendly and to the point. The omission was a mistake and Brian expressed his wish and intent to fix this. I couldn’t ask for a better or nicer response. The mentioned fixup was all that I could ask for.

Fixed it

Today Brian followed up and showed me the changes. Delivering on his promise. Just totally awesome.

Starting with the Windows build 7.0.0.409, the Backblaze about window looks like this (see image below) and builds for other platforms will follow along.

15,600 US dollars

At the same time, Backblaze also became the new largest single-shot donor to curl when they donated no less than 15,600 USD to the project, making the recent Indeed.com donation fall to second place in this, my favorite new game of 2020.

Why this particular sum you may ask?

Backblaze was started in my living room on Jan 15, 2007 (13 years ago tomorrow) and that represents $100/month for every month Backblaze has depended on libcurl back to the beginning.

/ Brian Wilson, CTO of Backblaze

I think it is safe to say we have another happy user here. Brian also shared this most awesome statement. I’m happy and proud to have contributed my little part in enabling Backblaze to make such cool products.

Finally, I just want to say thank you for building and maintaining libcurl for all these years. It’s been an amazing asset to Backblaze, it really really has.

Thank you Backblaze!