curl 8.1.2 ate one too

This is the second follow-up patch release in the 8.1.x series due to regressions and bugs that are too annoying to leave lingering around.

Release video

Numbers

the 219th release
0 changes
7 days (total: 9,202)

14 bug-fixes (total: 9,045)
22 commits (total: 30,429)
0 new public libcurl function (total: 91)
0 new curl_easy_setopt() option (total: 302)

0 new curl command line option (total: 251)
13 contributors, 3 new (total: 2,888)
5 authors, 2 new (total: 1,150)
0 security fixes (total: 145)

Bugfixes

configure: quote the assignments for run-compiler

A regression introduced in the previous release made configure fail if the $CC shell variable was set to something other than just a single command name. This now quotes the variable correctly.

configure: without pkg-config and no custom path, use -lnghttp2

Installations without pkg-config where nghttp2 is installed in a default directory would get a link error in the build.

http2: fix EOF handling on uploads with auth negotiation

This was a regression when using HTTP/2 for doing multi-phase authentication methods with POST, like for example Digest.

http3: send EOF indicator as early as possible

By better tracking the amount of upload data, curl can avoid a superfluous final zero-length DATA packet and instead send the EOF sooner.

libcurl.m4: remove trailing ‘dnl’ that causes this to break autoconf

The configure macro we ship for other projects to use to detect installed libcurl version now works better.

libssh: when keyboard-interactive auth fails, try password

When an SSH server allows multiple auth methods and curl tried keyboard-interactive, it would wrongly skip trying the password method – if built to use libssh. This bug has been present ever since libssh support shipped.

The Gemini protocol seen by this HTTP client person

There is again a pull-request submitted to the curl project to bring support for the Gemini protocol. It seems like a worthwhile effort that I support, even if there is also a lot of work involved and it might take some time before it reaches the state in which it can be merged. A previous attempt at doing this was abandoned a while ago.

This renewed interest made me take a fresh tour through the current Gemini protocol spec and I decided to write down some observations for you. So here I am. These are comments based on my reading of the 0.16.1 version of the protocol spec. I have implemented Internet application protocols client side for some thirty years. I have not actually implemented the Gemini protocol.

Motivations for existence

Gemini is the result of a kind of movement that tries to act against some developments they think are wrong on the current web. Gemini is not only a new wire protocol, but also features a new document format and more. They also say it’s not “the web” at all but a new thing. As a sign of this, the protocol is designed by the pseudonymous “Solderpunk” – and the IETF or other suitable or capable organizations have not been involved – and it shows.

Counter surveillance

Gemini has no cookies, no negotiations, no authentication, no compression and basically no (other) headers either in a stated effort to prevent surveillance and tracking. It instead insists on using TLS client certificates (!) for keeping state between requests.

A Gemini response from a server is just a two-digit response code, a single media type and the binary payload. Nothing else.

Reduce complexity

They insist that the reduced complexity enables more implementations, both servers and clients, and that seems logical. The reduced complexity however also makes it less visually pleasing to users, and by taking shortcuts in the protocol it risks adding complexities elsewhere instead. It’s quite similar to going back to GOPHER.

Form over content

This value judgement is repeated among Gemini fans. They think “the web” favors form over content and they say Gemini intentionally is the opposite. It seems to be true because Gemini documents certainly are never visually very attractive. Like GOPHER.

But of course, the protocol is also so simple that it lacks the power to do a lot of things you can otherwise do on the web.

The spec

The only protocol specification is a single fairly short page that documents the over-the-wire format mostly in plain English (undoubtedly featuring interpretation conflicts), includes the URL format specification (very briefly) and oddly enough also features the text/gemini media type: a new document format that is “a kind of lightweight hypertext format, which takes inspiration from gophermaps and from Markdown”.

The spec says “Although not finalised yet, further changes to the specification are likely to be relatively small.” The protocol itself however has no version number or anything, and there is no room for doing a Gemini v2 in a forward-compatible way. This “living document” approach seems to be popular these days, even if it is rather problematic for implementers.

Gopher revival

The Gemini protocol reeks of GOPHER and HTTP/0.9 vibes. Application protocol style circa the mid 1990s, with TLS on top. Designed to serve single small text documents from servers you have a relation to.

Short-lived connections

The protocol enforces closing the connection after every response, forcibly making connection reuse impossible. This is terrible for performance if you ever want to get more than one resource off a server. I also presume (but there is no mention of this in the spec) that they discourage use of TLS session ids/tickets for subsequent transfers (since they can be used for tracking), making subsequent transfers even slower.

We know from HTTP (it was a primary reason for the introduction of HTTP/1.1 back in 1997) that short-lived, bursty TCP connections make it almost impossible to reach high transfer speeds due to the slow-starts. Re-doing the TCP and TLS handshakes over and over can also be seen as plain energy waste.

The main reason they went with this design seems to be to avoid having a way to signal the size of payloads or do some kind of “chunked” transfers. Easier to document and to implement: yes. But also slower and more wasteful.

Serving an average HTML page using a number of linked resources/images over this protocol is going to be significantly slower than with HTTP/1.1 or later. Especially for servers far away. My guess is that people will not serve “normal” HTML content over this protocol.

Gemini only exists over TLS. There is no clear text version.

GET-only

There are no other methods or ways to send data to the server besides the query component of the URL. There are no POST or PUT equivalents. There is basically only a GET method. In fact, there is no method at all but it is implied to be “GET”.

The request is also size-limited to a 1024 byte URL so even using the query method, a Gemini client cannot send much data to a server. More on the URL further down.
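To make the wire format concrete, here is a minimal Python sketch of what a complete Gemini exchange could look like based on my reading of the spec: the request is the full URL followed by CRLF, the response is a status/meta line followed by the payload, and the server closes the connection when done. The default port 1965, the disabled certificate verification and the example.org host in the comment are assumptions of the sketch, not recommendations.

```python
import socket
import ssl

def gemini_fetch(url: str, host: str, port: int = 1965):
    """Fetch one Gemini resource. Returns (status, meta, body)."""
    if len(url.encode("utf-8")) > 1024:        # the spec's URL size limit
        raise ValueError("URL exceeds the 1024 byte limit")

    ctx = ssl.create_default_context()
    ctx.check_hostname = False                 # servers use self-signed certs,
    ctx.verify_mode = ssl.CERT_NONE            # so TOFU pinning would go here

    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall((url + "\r\n").encode("utf-8"))   # the entire request
            data = b""
            while chunk := tls.recv(4096):                # read until close
                data += chunk

    header, _, body = data.partition(b"\r\n")
    status, _, meta = header.decode("utf-8").partition(" ")
    return status, meta, body

# Hypothetical example:
# gemini_fetch("gemini://example.org/", "example.org")
# might return ("20", "text/gemini", b"# hello\n")
```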

Query

There is a mechanism for a server to send back a single-line prompt asking for “text input” which a client then can pass to it in the URL query component in a follow-up request. But there is no extra meta data or syntax, just a single line text prompt (no longer than 1024 bytes) and free form “text” sent back.

There is nothing written about how a client should deal with the existing query part in this situation. Like if you want to send a query and answer the prompt. Or how to deal with the fact that the entire URL, including the now added query part, still needs to fit within the URL size limit.

Better use a short host name and a short path name to be able to send as much data as possible.
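As a sketch of what a client is left to do here, assuming the answer is percent-encoded and simply replaces whatever query part was already there:

```python
from urllib.parse import quote

def answer_prompt(request_url: str, answer: str) -> str:
    """Build the follow-up URL for an input prompt.

    The spec does not say how this interacts with an existing query part,
    so this sketch just drops it and percent-encodes the answer."""
    follow_up = request_url.split("?", 1)[0] + "?" + quote(answer)
    if len(follow_up.encode("utf-8")) > 1024:
        raise ValueError("follow-up URL exceeds the 1024 byte limit")
    return follow_up
```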

TOFU

the strongly RECOMMENDED approach is to implement a lightweight “TOFU” certificate-pinning system which treats self-signed certificates as first-class citizens.

(From the Gemini protocol spec 0.16.1 section 4.2)

Trust on first use (TOFU) as a concept works fairly well when you interact with a limited set of servers with which you have some relationship. Therefore it often works fine for SSH, for example. (I say “fine” because even with ssh, people often have the habit of just saying yes and accepting changed keys even when they perhaps should not.)

There are multiple problems with doing TOFU for a client/server document browsing system like Gemini.

A challenge is of course that on the first visit a client cannot spot an impostor, and neither can it when the server updates its certificate down the line. Maybe an attacker did it? It trains users to just say “yes” when asked if they should trust it, since you as a user might not have a clue about who runs that particular server or why the certificate changed.

The concept of storing certificates to compare against later is a scaling challenge in multiple dimensions:

  • Certificates need to be stored for a long time (years?)
  • Each host name + port number combination has its own certificate. In a world that goes beyond thousands of Gemini hosts, this becomes a challenge for clients to deal with in a convenient (and fast) manner.
  • Presumably each user on a system has their own certificate store. What user A trusts, user B does not necessarily have to trust.
  • Does each Gemini client keep its own certificate store? Do they share? Who can update? How do they update the store? What’s the file format? A common db somehow?
  • When storing the certificates, you might also want to do like modern SSH does: not store the host names in cleartext as it is a rather big privacy leak showing exactly which servers you have visited.

I strongly suspect that many existing Gemini clients avoid this huge mess by simply not verifying the server certificates at all or by just storing the certificates temporarily in memory.

You can opt to store a hash or fingerprint of the certificate instead of the whole one, but that does not change things much.
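For illustration, here is a rough Python sketch of what such fingerprint pinning could look like; the in-memory store stands in for everything the list above says a real client would have to solve (persistence, sharing, privacy and so on).

```python
import hashlib
import socket
import ssl

def tofu_ok(host: str, port: int, store: dict) -> bool:
    """Pin the SHA-256 fingerprint of the server certificate per host:port."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE            # self-signed certs are expected

    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)

    fingerprint = hashlib.sha256(der).hexdigest()
    key = f"{host}:{port}"
    if key not in store:
        store[key] = fingerprint               # first use: trust and remember
        return True
    return store[key] == fingerprint           # later visits: must match
```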

I think insisting on TOFU is one of Gemini’s weakest links and I cannot see how this system can ever scale to a larger audience or even just many servers. I foresee that they will need to accept Certificate Authorities or use DANE in the future.

Gemini Proxying

Insisting on passing the entire URL in the request is primarily a way to solve name-based virtual hosting, but it also makes it easy for a Gemini server to act as a proxy for other servers. On purpose. And maybe I should write “easy”.

Since Gemini is (supposed to be) end-to-end TLS, proxying requests to another server is not actually possible while also maintaining security. The proxy would for example have to respond with the certificate retrieved from the remote server (in addition to its own), but the spec mentions nothing of this so we can guess existing clients and proxies don’t do it. I think this can be fixed by just adjusting the spec. But it would add some rather kludgy complexity for a maybe not too exciting feature.

Proxying to gopher:// URLs should be possible with the existing wording because there is no TLS to the server end. It could proxy http:// URLs too, but then risks having to download the entire thing before it can send the response.

URLs

The Gemini URL scheme is explained in 138 words, which is of course very terse and assumes quite a lot. It includes “This scheme is syntactically compatible with the generic URI syntax defined in RFC 3986”.

The spec then goes on to explain that the URL needs to be UTF-8 encoded when sent over the wire, which I find peculiar because a normal RFC 3986 URL is just a set of plain octets. A Gemini client thus needs to know, or assume, the charset that was used for the original URL in order to convert it to UTF-8.

Example: if there is a %C5 in the URL and the charset was ISO-8859-1, that octet means LATIN CAPITAL LETTER A WITH RING ABOVE. The UTF-8 version of said character is the two-byte sequence 0xC3 0x85. But if the original charset instead was ISO-8859-6, the same %C5 octet means ARABIC LETTER ALEF WITH HAMZA BELOW, encoded as 0xD8 0xA5 in UTF-8.
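Expressed in Python, the ambiguity looks like this:

```python
octet = bytes([0xC5])                           # the %C5 octet from the URL

print(octet.decode("iso-8859-1"))               # 'Å'
print(octet.decode("iso-8859-1").encode())      # b'\xc3\x85' in UTF-8

print(octet.decode("iso-8859-6"))               # 'إ' (ALEF WITH HAMZA BELOW)
print(octet.decode("iso-8859-6").encode())      # b'\xd8\xa5' in UTF-8
```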

To me this does not square well with reduced complexity. This conversion alone will cause challenges when done in curl, because applications pass an RFC 3986 URL to the library and it does not currently have enough information on how to convert that to UTF-8. Not to mention that libcurl completely lacks UTF-8 conversion functions.

This makes me suspect that the intention is probably that only the host name in the URL should be UTF-8 encoded for IDN reasons and the rest should be left as-is? The spec could use a few more words to explain this.

One of the Gemini clients that I checked out to see how they do this, in order to better understand the spec, even uses the punycode version of the host name, quoting “Pending possible Gemini spec change”. What is left to UTF-8 encode then? That client did not UTF-8 encode anything of the URL, which adds to my suspicion that people don’t actually follow this spec detail but rather just interoperate…

The UTF-8 converted version of the URL must not be longer than 1024 bytes when included in a Gemini request.

The fact that the URL size limit is for the UTF-8 encoded version of the URL makes it hard to error out early because the source version of the URL might be shorter than 1024 bytes only to have it grow past the size limit in the encoding phase.
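A short illustration of that, with a made-up host and path:

```python
url = "gemini://example.org/" + "Å" * 600       # hypothetical URL

print(len(url))                   # 621 characters, looks well within bounds...
print(len(url.encode("utf-8")))   # 1221 bytes: over the 1024 byte limit
```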

Origin

The document carelessly assumes that “host name” is a good authority boundary for TLS client certificates, totally ignoring the fact that “the web” learned this lesson a long time ago. It needs to restrict the scope to the host name plus port number. Not doing that opens up Gemini to rather bad security flaws. This can be fixed by improving the spec.

Media type

The text/gemini media type should simply be moved out of the protocol spec and be put elsewhere. It documents content that may or may not be transferred over Gemini. Similarly, we don’t document HTML in the HTTP spec.

Misunderstandings?

I am fairly sure that once I press publish on this blog post, some people will insist that I have misunderstood parts or most of the protocol spec. I think that is entirely plausible and kind of my point: the spec is written in such an open-ended way that it will not avoid this. We basically cannot implement this protocol by only reading the spec.

Future?

It is impossible to tell if this will fly for real or not. This is not a protocol designed for the masses to replace anything at high volumes. That is of course totally fine and it can still serve its community perfectly fine. There seems to be interest enough to keep the protocol and ecosystem alive for the moment at least. Possibly for a long time into the future as well.

What I would change

As I believe you might have picked up by now, I am not a big fan of this protocol but I still believe it can work and serve its community. If anyone would ask me, here are a few things I would consider changing in order to take it up a few notches.

  1. Split the spec into three separate ones: protocol, URL syntax, media type. Expand the protocol parts with more exact syntax descriptions and examples to supplement the English.
  2. Clarify the client certificate use to be origin based, not host name.
  3. Drop the TOFU idea, it makes for a too weak security story that does not scale and introduces massive complexities for clients.
  4. Clarify the UTF-8 encoding requirement for URLs. It is confusing and possibly bringing in a lot of complexity. Simplify?
  5. Clarify how proxying is actually supposed to work in regards to TLS and secure connections. Maybe drop the proxy idea completely to keep the simplicity.
  6. Consider a way to re-use connections, even if that means introducing some kind of “chunks” HTTP-style.

Discussion

Hacker news

curl user survey 2023

For a widely used, widely distributed open source project such as curl, we often have little to no relation at all with our users, and it is therefore hard to get feedback and learn what works and what is less good.

Our best and primary way is thus simply to ask users every year how they use curl.

user survey

For the tenth consecutive year, we put together a survey and we ask everyone we know and can reach who has used curl or the library within the last year to donate a few minutes of their precious time and give us their honest opinions.

The survey is anonymous but hosted by Google. We do not care who you are, but we want to know how you think curl works for you.

The survey will remain online for submissions for 14 days: from Thursday May 25 2023 until midnight (CEST) Wednesday June 7 2023. Please tell your friends about it!

user survey

Post survey analysis

On June 5 the painstaking work of analyzing the results and putting together a summary and presentation begins. It usually takes me a few weeks to complete. Once that is done, the results will be shared for the entire world to enjoy.

Then we see what the curl project should take home and do as a direct result of what users say. Updating procedures, writing documentation and adding features to the roadmap are among the things that can happen, and have happened, after previous surveys.

user survey

Polhemsrådet

I was invited, and I have accepted, to become a member of Polhemsrådet, the “Polhem Council”, that works for the Polhem Prize nomination committee and serves to appoint the award winners.

I consider it a great honor to get to serve on this board. I am not an engineer by education, but I do know my way around a few engineering topics and in particular things around software and computer related technologies.

This assignment is done on a voluntary basis, there is no money involved. I am joining a council chock-full of intimidatingly impressive people as its seventh member.

The Polhem Prize, which I was awarded in 2017, is Sweden’s oldest engineering award. It was first awarded to a person in 1878.

The Polhem Prize is awarded for “a [Swedish] high-level technical innovation or an ingenious solution to a technical problem. The innovation must be available on the open market and be competitive. It has to be sustainable and environmentally friendly.”

More details about the prize, how it works and other council members can be found on the Swedish site for Polhemspriset.

curl 8.1.1 lets do this

Only 6 days since the previous release we are again here with a curl release. It turned out 8.1.0 had some rather nasty regressions that we felt were urgent enough to warrant another round on the dance floor. So here goes curl 8.1.1. A bugfix release.

Release presentation

Numbers

the 218th release
0 changes
6 days (total: 9,195)

25 bug-fixes (total: 9,031)
40 commits (total: 30,407)
0 new public libcurl function (total: 91)
0 new curl_easy_setopt() option (total: 302)

0 new curl command line option (total: 251)
19 contributors, 10 new (total: 2,885)
13 authors, 6 new (total: 1,148)
0 security fixes (total: 145)

Bugfixes

Some of the highlights of this release include…

cmake: avoid list(PREPEND)

This use of a too-new cmake feature snuck into the build in the last release, which caused trouble for people using older cmake versions.

cmake: repair cross compiling

A recently added cmake check did not have the correct precautions added for cross-compiling which broke such builds.

configure: generate a script to run the compiler

The configure script has an elaborate check that verifies if provided libraries can be used at run-time. This turned out to be complicated when the compiler itself uses libraries that configure checks for by setting LD_LIBRARY_PATH, since that path also affects the compiler!

http2: double http request parser max line length

The last word has probably not been said about this logic, but capping the max request header line size at 4KB was too narrow and caused application breakages. Now the limit is 8KB.

http2: increase stream window size to 10 MB

It turned out that even though we have a flexible HTTP/2 window concept, download performance could suffer, so we have now bumped the window size significantly again.

http2: upload improvements

In particular, uploads that are aborted prematurely by a reset, for example when a 404 is returned before the entire upload is done, could cause issues.

rename struct ‘http_req’ to ‘httpreq’

The development branch of FreeBSD (14) introduced a struct in one of its public headers that collides with the name of an internal struct libcurl uses. The bug exists in FreeBSD’s header, but we renamed ours anyway to work around the problem while the FreeBSD team fixes their end.

better error message when URLs fail to parse

Since we have a fairly elaborate identification of exactly what fails when the URL parser rejects a URL, this now helps users to better understand what curl does not like.

urlapi: allow numerical parts in the host name

The URL parser was far too strict in rejecting host names because they were “invalid IPv4” when in fact they should be treated as host names instead. Probably the worst regression added in 8.1.0. In fact, the URL parser basically cannot refuse a host name for not being a valid IPv4 address, since such a name can still be passed on to the name resolver, which might then find it in /etc/hosts etc.

curl 8.1.0 – http2 over proxy

We are back with the first release since that crazy March day when we did two releases on the same day. First 8.0.0 shipped, bumping the major version for the first time in decades. Then curl 8.0.1 followed just hours after, due to a serious mess-up in the factory lines.

Release video presentation

Numbers

the 217th release
3 changes
58 days (total: 9,189)

185 bug-fixes (total: 9,006)
322 commits (total: 30,367)
0 new public libcurl function (total: 91)
0 new curl_easy_setopt() option (total: 302)

1 new curl command line option (total: 251)
64 contributors, 35 new (total: 2,875)
37 authors, 17 new (total: 1,142)
4 security fixes (total: 145)

Security

We disclose four new curl security vulnerabilities today, three of them at severity Low and one of them at Medium. This also means that 3,840 USD was awarded as bug bounties in this release cycle.

UAF in SSH sha256 fingerprint check

[CVE-2023-28319] libcurl offers a feature to verify an SSH server’s public key using a SHA-256 hash. When this check fails, libcurl would free the memory for the fingerprint before it returns an error message containing the (now freed) hash.

siglongjmp race condition

[CVE-2023-28320] libcurl provides several different backends for resolving host names, selected at build time. If it is built to use the synchronous resolver, it allows name resolves to time-out slow operations using alarm() and siglongjmp().

When doing this, libcurl used a global buffer that was not mutex protected and a multi-threaded application might therefore crash or otherwise misbehave.

IDN wildcard match

[CVE-2023-28321] curl supports matching of wildcard patterns when listed as “Subject Alternative Name” in TLS server certificates. curl can be built to use its own name matching function for TLS rather than one provided by a TLS library. This private wildcard matching function would match IDN (International Domain Name) hosts incorrectly and could as a result accept patterns that otherwise should mismatch.

more POST-after-PUT confusion

[CVE-2023-28322] When doing HTTP(S) transfers, libcurl might erroneously use the read callback (CURLOPT_READFUNCTION) to ask for data to send, even when the CURLOPT_POSTFIELDS option has been set, if the same handle previously was used to issue a PUT request which used that callback.

This flaw may surprise the application and cause it to misbehave and either send off the wrong data or use memory after free or similar in the second transfer.
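For illustration, this is roughly the kind of handle reuse the advisory describes, sketched here with pycurl; the URLs and payloads are placeholders and the point is only the sequence of options applied to one reused handle.

```python
import pycurl
from io import BytesIO

c = pycurl.Curl()

# First transfer: a PUT that feeds its body through the read callback.
body = BytesIO(b"put payload")
c.setopt(pycurl.URL, "https://example.com/upload")   # placeholder URL
c.setopt(pycurl.UPLOAD, 1)
c.setopt(pycurl.READFUNCTION, body.read)
c.perform()

# Second transfer on the same handle, now using POSTFIELDS. With the flaw
# present, libcurl could still invoke the old read callback here instead of
# sending the POSTFIELDS data.
c.setopt(pycurl.UPLOAD, 0)
c.setopt(pycurl.POSTFIELDS, "name=value")
c.perform()
c.close()
```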

Changes

This release has only three real changes. One bigger and two smaller:

HTTP/2 over proxy

libcurl can now negotiate and use HTTP/2 when it is told to use an HTTPS proxy (details in the CURLOPT_PROXYTYPE man page), and the command line tool can of course switch it on using the --proxy-http2 option. Explained more in this blog post.

refuse to resolve the .onion TLD

When a host name ending with .onion is passed on to the name resolver functions, they will cause an error and will not be resolved. Like RFC 7686 tells us.
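The rule itself is simple; here is a sketch of the check, with a made-up helper name:

```python
from urllib.parse import urlsplit

def may_resolve(url: str) -> bool:
    """Refuse to resolve .onion hosts, in the spirit of RFC 7686."""
    host = urlsplit(url).hostname or ""
    return not host.lower().endswith(".onion")

# may_resolve("https://example.onion/")  -> False
# may_resolve("https://curl.se/")        -> True
```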

curl’s -w option can now output URL components

The list of variables was extended by a whole range of new ones. Possibly best learned by checking out the writeout section in everything curl.

Bugfixes

The official counter says we did more than 180 bugfixes in this release cycle. Here follow some of my favorites:

checksrc fixes

We made it better at checking the code style for three distinct code situations – and then updated the source code accordingly.

cmake fixes

  • bring in the network library on Haiku
  • do not add zlib headers for OpenSSL
  • make config version 8 compatible with 7
  • set SONAME for SunOS too

only do transfer-encoding compression if asked to

Transfer encodings other than “chunked” are rarely used. Up until now libcurl would still activate automatic decompression if such was used, even if it was not asked for by the application.

bring back support for SFTP path ending in /~

A regression made a URL ending in /~ no longer produce a directory listing, because the URL does not end with a slash. Now we bring back that behavior, even if it goes a little against the standard behavior.

never allocate dynbufs larger than “too big”

The general dynamic buffer system no longer allocates more memory than what the specific buffer is allowed to grow to. An optimization.

various gskit compile errors in OS400

Makes curl build fine there again.

enforce a maximum DNS cache size independent of timeout value

The DNS cache entries are purged on age only (default 60 seconds). With this new code, libcurl also caps the total number of DNS cache entries at 30,000.

libssh2: fix crash in keyboard callback

Better SCP and SFTP when built with libssh2.

libssh: tell it to use SFTP non-blocking

Better SCP and SFTP when built with libssh.

add multi-ignore logic to multi_socket_action

The improved signal ignore logic for curl_multi_perform in 8.0.0 is now also done for curl_multi_socket_action. For better performance.

remove PENDING + MSGSENT handles from the main linked list

Transfers that are not yet activated and transfers that are already completed are now moved off the main linked list. For performance.

runtests: prepare for parallel

Lots of cleanups and smaller fixes have been merged during this cycle in preparation for the pending introduction of parallel tests.

verify socketpair with a random value

The custom socketpair implementation used for platforms without a native one was changed to use truly random values when verifying that the pipe works.

Fix ‘Location:’ formatting for early VTE terminals

The special terminal highlighting of the URL that is shown in the Location: header is now disabled for some terminals that can’t display it properly.

urlapi polish

Several different bugs and improvements were made. Including:

  • cleanups and performance improvements
  • detect and error on illegal IPv4 addresses
  • prevent setting invalid schemes
  • URL encoding for the URL missed the fragment

enhanced WebSocket en-/decoding

Parts of the websocket parser code were rewritten to fix bugs.

30,000 GitHub stars

Keeping up with the tradition. A little over two years after the celebrations of curl reaching 20,000 GitHub stars, I am here to post a new photo as we just surpassed 30,000 stars on curl/curl.

This post is slightly delayed just because I happened to be traveling when I realized we had climbed above this mark, so it took me an extra day to get an appropriate photograph made. The kind of photograph a moment like this calls for. Unfortunately without a beer this time.

Daniel celebrating

Steady growth

The star growth rate for curl seems to have been fairly linear over many years now.

Meaningless

Yes, GitHub stars are a totally meaningless metric that does not say anything real about a project like curl. They do not reflect usage, they do not reflect popularity and they do not reflect the number of installs out in the real world. It’s just a vanity number.

CVE as JSON

It started as just a test to see if I could use the existing advisory data we have for all curl CVEs to date and provide that as JSON. Maybe, I thought, if we provide it well enough it can be used to populate other databases automatically or even get queried more easily by tools.

Information

In the curl project we have published 141 vulnerability advisories so far, each with its own registered CVE Id. We make an effort to document every flaw as thoroughly as possible, but also with easy overviews and tables so that users can quickly see, for example, which curl versions are vulnerable to which flaws. It is our going the extra meticulous mile that makes it extra annoying when others override what we conclude.

More machine friendly

In a recent push I decided that all the info we have and provide could and should be offered in a more machine friendly format for whoever wants it.

After a first few test shots, fellow curl team mates pointed out to me that there is an existing effort called the Open Source Vulnerability format, an openly developed JSON schema designed for pretty much exactly what I had set out to do. I agreed that it made sense to follow something existing rather than make up my own format.

Can be improved

Of course we ran into some minor snags with this schema, and there are still details in it that I think can be improved. We are discussing with the OSV team to see if there is merit to our ideas or not. Still, even without any changes we can now offer our data using this established format.

The two primary things I want to improve are how we provide project identification (whose issue is this) and how we convey the severity level of the issue.

Different sets

As of now, we offer a set of different ways to get the CVE data as JSON.

1. Everything all at once

On the fixed URL https://curl.se/docs/vuln.json, we provide a JSON array holding a number of JSON objects. One object per existing CVE. Right now, this is 349KB of data. (If you ask for it compressed it will be smaller!)

This URL will always contain the entire set and it will automatically update in the future as we publish new CVEs. It also automatically updates when we correct or change any of the previously published advisories.

2. Single object per issue

If you prefer to get the exact metadata for a single specific curl CVE, you can get the JSON for an issue by replacing the .html extension with .json for any CVE documented on the website. You will also find a menu option in the “related box” for each documented CVE that links to it.

For example the vulnerability CVE-2022-35260, that we published last year which is documented on https://curl.se/docs/CVE-2022-35260.html, has its corresponding JSON object at https://curl.se/docs/CVE-2022-35260.json.

3. Objects per release

On the curl website we already offer a way for users to get a list of all known vulnerabilities a certain specific release is known to be vulnerable to. Of course always updated with the latest publications.

For example, curl version 7.87.0 has all its vulnerability information detailed on https://curl.se/docs/vuln-7.87.0.html. When I write this, there are eight known vulnerabilities for this version.

Screenshot of the website displaying vulnerability information for curl 7.87.0

Again, either by clicking the JSON link there on the page under the table, or simply by replacing html by json in the URL, the user can get a listing of all the CVEs this version is vulnerable to, as a JSON array with a number of JSON objects inside. In this case, eight objects as of now.

That info is thus available at https://curl.se/docs/vuln-7.87.0.json.
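As a small sketch of how this data could be consumed, here is a few lines of Python that fetch the full set and print one identifier per advisory. The field names (id, aliases, summary) follow the OSV schema as I read it and may not all be present on every entry.

```python
import json
import urllib.request

# The complete set; use e.g. https://curl.se/docs/vuln-7.87.0.json
# to get only the issues known for one specific release.
URL = "https://curl.se/docs/vuln.json"

with urllib.request.urlopen(URL) as response:
    advisories = json.load(response)

for entry in advisories:
    # "id" is the CURL-prefixed identifier; the plain CVE id is an alias
    print(entry["id"], entry.get("aliases", []), entry.get("summary", ""))
```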

JSON

While I expect the format might still change a little bit going forward, and not all issues have all the metadata provided just yet (for example, the git commit ranges are still lacking on a number of issues from before 2017), here is an illustration screenshot of jq displaying the JSON object for CVE-2022-27780.

Object details

The database_specific object near the top is metadata that we have and believe belongs with the issue, but that has no established field defined in the JSON schema. Since I think the data still adds value to users, I decided to put it into this section, which is designed and meant exactly for this kind of extension.

You can see that we set an “id” that is the CVE ID with a CURL- prefix. This is just us catering to the conditions of OSV and the JSON schema. We apparently need our own ID and provide the actual CVE ID as an alias, so we “fake” this by simply prepending curl to the CVE ID. We don’t use any private ID when we work with vulnerabilities: we only deal with public issues and we only deal with issues that are CVE worthy so it seems unnecessary to involve anything else.

Credits

Image by Reto Scheiwiller from Pixabay