Category Archives: cURL and libcurl

curl and/or libcurl related

no strcpy either

Some time ago I mentioned that we went through the curl source code and eventually got rid of all strncpy() calls.

strncpy() is a weird function with a crappy API. It might not null terminate the destination and it pads the target buffer with zeroes. Quite frankly, most code bases are probably better off completely avoiding it because each use of it is a potential mistake.

In that particular rewrite when we made strncpy calls extinct, we made sure we would either copy the full string properly or return an error. It is rare that copying a partial string is the right choice, and when it is, we can just as well memcpy it and handle the null terminator explicitly. This meant no case for using strlcpy or anything similar either.

But strcpy?

strcpy, however, has valid uses and a less bad, less confusing API. The main challenge with strcpy is that when using it, we specify neither the length of the target buffer nor of the source string.

This is normally not a problem because in a C program strcpy should only be used when we have full control of both.

But normally and always are not necessarily the same thing. We are all human and we all make mistakes. Using strcpy implies that at least one, maybe two, buffer size checks are done prior to the function invocation. In a good situation, that is.

Over time however – let’s imagine we have code that lives on for decades – when code is maintained, patched, improved and polished by many different authors with different mindsets and approaches, those size checks and the function invocation may glide apart. The further away from each other they go, the bigger the risk that something happens in between that nullifies one of the checks or changes the conditions for the strcpy.

Enforce checks close to code

To make sure that the size checks cannot be separated from the copy itself, we the other day introduced a string copy replacement function that takes the target buffer, target buffer size, source buffer and source string length as arguments. Only if the copy can be made, with the null terminator also fitting, is the operation done.

This made it possible to implement the replacement using memcpy(). Now we can completely ban the use of strcpy in curl source code, like we already did strncpy.

Using this function version is a little more work and more cumbersome than strcpy since it needs more information, but we believe the upsides of this approach will outweigh the extra pain involved. I suppose we will see how that fares down the road. Let’s come back in a decade and see how things developed!

void curlx_strcopy(char *dest,
                   size_t dsize,
                   const char *src,
                   size_t slen)
{
  DEBUGASSERT(slen < dsize);
  if(slen < dsize) {
    memcpy(dest, src, slen);
    dest[slen] = 0;
  }
  else if(dsize)
    dest[0] = 0;
}

the strcopy source

AI slop

An additional minor positive side-effect of this change is of course that it should effectively prevent the AI chatbots from reporting strcpy uses in curl source code and insisting it is insecure if anyone asks (as people still apparently do). It has been proven numerous times already that strcpy in source code is like a honey pot for generating hallucinated vulnerability claims.

Still, this will just make them find something else to make up a report about, so there is probably no net gain. AI slop is not a game we can win.

A curl 2025 review

Let’s take a look back and remember some of what this year brought.

commits

At more than 3,400 commits, we did 40% more commits in curl this year than in any single previous year!

At some point during 2025, all the other authors in the project together surpassed me in total lines added to the curl repository. Out of all the lines ever added to the repository, I have now contributed less than half.

More than 150 individuals authored commits we merged during the year. Almost one hundred of them were first-timers. Thirteen authors wrote ten or more commits.

Viktor Szakats made the most commits per month in almost every month of 2025.

Stefan Eissing has now done the latest commit for 29% of the product source code lines – where my share is 36%.

About 598 authors have their added contributions still “surviving” in the product code. This is down from 635 at end of last year.

tests

We have 232 more tests at the end of this year compared to last December (now at 2179 separate test cases), and for the first time ever we have more than twelve test cases per thousand lines of product source code.

(Sure, counting test cases is rather pointless and weird since a single test can be small or big, simple or complex etc, but that’s the only count we have for this.)

releases

The eight releases we did through the year is a fairly average amount:

  • 8.12.0
  • 8.12.1
  • 8.13.0
  • 8.14.0
  • 8.14.1
  • 8.15.0
  • 8.16.0
  • 8.17.0

No major revolution happened this year in terms of big features or changes.

We reduced source code complexity a lot. We stopped using more functions we deem frequent sources of errors or confusion. We increased performance. We reduced the number of allocations used.

We added experimental support for HTTPS-RR, the DNS record.

The bugfix frequency rate beat new records towards the end of the year as nearly 450 bugfixes shipped in curl 8.17.0.

This year we started doing release candidates. For every release we upload a series of candidates before the actual release so that people can help us and test what is almost the finished version. This helps us detect and fix regressions before the final release rather than immediately after.

Command line options

We end the year with 6 more curl command line options than we had last new year’s eve; now at 273 in total.

  • 8.17.0: --knownhosts
  • 8.16.0: --out-null, --parallel-max-host, --follow
  • 8.14.0: --sigalgs
  • 8.13.0: --upload-flags
  • 8.12.0: --ssl-sessions

man page

The curl man page continued to grow; it is now more than 500 lines longer than last year (7,090 lines), which means that even counted as man page lines per command line option, it grew from 24.7 to 26.

Lines of code

libcurl grew by a mere 100 lines of code over the year, while the command line tool gained 1,150 new lines.

libcurl is now a little over 149,000 lines. The command line tool has 25,800 lines.

Most of the commits clearly went into improving the products rather than expanding them. See also the dropped support section below.

QUIC

This year OpenSSL finally introduced and shipped an API that allows QUIC stacks to use vanilla OpenSSL, starting with version 3.5.

As a direct result of this, the use of the OpenSSL QUIC stack has been marked as deprecated in curl and is queued for removal early next year.

As we also removed msh3 support during 2025, we are looking towards a 2026 with supporting only two QUIC and HTTP/3 backends in curl.

Security

This year the number of AI slop security reports for curl really exploded, putting a lot of extra load on the curl security team. We were mentioned in media a lot during the year because of it.

The reports not evidently made with AI help have also gotten significantly worse quality-wise while the total volume has increased – a lot. This also adds to our collective load.

We published nine curl CVEs during 2025, all at severity low or medium.

AI improvements

A new breed of AI-powered high quality code analyzers, primarily ZeroPath and Aisle Research, started pouring in bug reports to us with potential defects. We have fixed several hundred bugs as a direct result of those reports – so far.

This is in addition to the regular set of code analyzers we run against the code and for which we of course also fix the defects they report.

Web traffic

At the end of the year 2025 we see 79 TB of data getting transferred monthly from curl.se. This is up from 58 TB (+36%) for the exact same period last year.

We don’t have logs or analysis so we don’t know for sure what all this traffic is, but we know that only a tiny fraction is actual curl downloads. A huge portion of this traffic is clearly not human-driven.

GitHub activity

More than two hundred pull requests were opened each month in curl’s GitHub repository.

For a brief moment during the fall we reached zero open issues.

We have over 220 separate CI jobs that in the end of the year spend more than 25 CPU days per day verifying our ongoing changes.

Dashboard

The curl dashboard expanded a lot. I removed a few graphs that were not accurate anymore, but the net total change is still that we went up from 82 graphs in December 2024 to 92 separate illustrations in December 2025. Now with a total of 259 individual plots (+25).

Dropped support

We removed old/legacy things from the project this year, in an effort to remove laggards, to keep focus on what’s important and to make sure all of curl is secure.

  • Support for Visual Studio 2005 and older (removed in 8.13.0)
  • Secure Transport (removed in 8.15.0)
  • BearSSL (removed in 8.15.0)
  • msh3 (removed in 8.16.0)
  • winbuild build system (removed in 8.17.0)

Awards

It was a crazy year in this aspect (as well) and I was honored with:

I also dropped out of the Microsoft MVP program during the year, which I had been accepted into in October 2024.

Conferences / Talks

I attended and spoke at these eight conferences, in five countries. My talks are always related to curl in one way or another.

  • FOSDEM
  • foss-north
  • curl up
  • Open Infra Forum
  • Joy of Coding
  • FrOSCon
  • Open Source Summit Europe
  • EuroBSDCon

Podcasts

I participated in these podcasts during the year. Always related to curl.

  • Security Weekly
  • Open Source Security
  • Day Two DevOps
  • Netstack.FM
  • Software Engineering Radio
  • OsProgrammadores

20,000 issues on GitHub

The curl project moved over its source code hosting to GitHub in March 2010, but we kept the main bug tracker running like before – on Sourceforge.

It took us a few years, but in 2015 we finally ditched the Sourceforge version fully. We adopted and switched over to the pull request model and we labeled the GitHub issue tracker the official one to use for curl bugs. Announced on the curl website proper on March 9 2015.

GitHub holds issues and pull requests in the same number series, and since a few years back they also added discussions to the mix. This number is another pointless one, but it is large and even so let’s celebrate it!

Issue one in curl’s GitHub repository is from October 2010.

Issue 100 is from May 18, 2014.

Issue 500 is from Oct 20, 2015.

Issue 10,000 was created November 29, 2022. That meant 9,500 issues created in 2,597 days. 3.7 issues/day on average over seven years.

Issue 20,000 (a pull request really) was created today, on December 16, 2025. 10,000 more issues created in 1,113 days. 9 issues/day over the last three years.

The pace at which primarily new pull requests are submitted has certainly gone up over recent years, as this graph clearly shows. (Since the current month is only halfway through, the drop at the right end of the plot is expected.)

We work hard in the project to keep the number of open issues and pull requests low even when the frequency rises.

It can also be noted that issues and pull requests are typically closed fast. Out of the ones that are closed with instructions in the git commit message, the trend looks like below. Half of them are closed within 6 hours.

Of course, these graphs are updated daily and shown on the curl dashboard.

Note: we have not seen the AI slop tsunami in the issues and pull requests as we do on Hackerone. This growth is entirely human made and benign.

Parsing integers in C

In the standard libc API set there are multiple functions provided that do ASCII numbers to integer conversions.

They are handy and easy to use, but also error-prone and quite lenient in what they accept and silently just swallow.

atoi

atoi() is perhaps the most common and basic one. It converts from a string to signed integer. There is also the companion atol() which instead converts to a long.

Some problems these have include that they return 0 instead of an error, that they have no checks for under- or overflow, and in the atol() case there is the challenge that long has different sizes on different platforms. So neither of them can reliably be used for 64-bit numbers. They also don’t say where the number ended.

Using these functions opens up your parser to not detecting and handling errors or weird input. We write better and stricter parsers when we avoid these functions.

strtol

This function, along with its siblings strtoul() and strtoll() etc, is more capable. They have overflow detection and they can detect errors – like if there is no digit at all to parse.

However, these functions too happily swallow leading whitespace and they allow a + or – in front of the number. The long versions of these functions have the problem that long is not universally 64-bit, and the long long version has the problem that it is not universally available.

The overflow and underflow detection with these functions is quite quirky: it involves errno and forces us to spend multiple extra lines of conditions on every invocation just to be sure we catch those cases.

curl code

I think we in the curl project, as well as more or less the entire world, have learned through the years that it is usually better to be strict when parsing protocols and data, rather than being lenient, accepting many things and guessing what was otherwise maybe meant.

As a direct result of this we make sure that curl parses and interprets data exactly as that data is meant to look and we error out as soon as we detect the data to be wrong. For security and for solid functionality, providing syntactically incorrect data is not accepted.

This also implies that all number parsing has to be exact, handle overflows and maximum allowed values correctly and conveniently, and detect errors. It must always support up to 64-bit numbers.

strparse

I have previously blogged about how we have implemented our own set of parsing functions in curl, and these also include number parsing.

curlx_str_number() is the most commonly used of the ones we have created. It parses a string and stores the value in a 64-bit variable (which in curl code is always present and always 64-bit). It also has a max value argument so that it returns an error if the number is too large. And it of course also errors out on overflows and similar.

This function of ours does not allow any leading whitespace and certainly no prefixing pluses or minuses. If they should be allowed, the surrounding parsing code needs to explicitly allow them.

The curlx_str_number function is most probably a little slower than the functions it replaces, but I don’t think the difference is huge, and the convenience and the added strictness are much welcomed. We write better code and parsers this way. More secure. (curlx_str_number source code)

History

As of yesterday, November 12 2025, all of those weak function calls have been wiped out from the curl source code. The drop seen in early 2025 was when we got rid of all the strtol() variations. Yesterday we finally got rid of the last atoi() calls.

(Daily updated version of the graph.)

curlx

The function mentioned above uses a ‘curlx’ prefix. We use this prefix in curl code for functions that exist in the libcurl source code but that can be used by the curl tool as well – sharing the same code without it being offered through the libcurl API.

A thing we do to reduce code duplication and share code between the library and the command line tool.

curl 8.17.0

Download curl from curl.se.

Release presentation

Numbers

the 271st release
11 changes
56 days (total: 10,092)
448 bugfixes (total: 12,537)
699 commits (total: 36,725)
2 new public libcurl functions (total: 100)
0 new curl_easy_setopt() options (total: 308)
1 new curl command line option (total: 273)
69 contributors, 35 new (total: 3,534)
22 authors, 5 new (total: 1,415)
1 security fix (total: 170)

Security

CVE-2025-10966: missing SFTP host verification with wolfSSH. curl’s code for managing SSH connections when SFTP was done using the wolfSSH powered backend was flawed and missed host verification mechanisms.

Changes

We drop support for several things this time around:

  • drop Heimdal support
  • drop the winbuild build system
  • drop support for Kerberos FTP
  • drop support for wolfSSH

And then we did some other smaller changes:

  • up the minimum libssh2 requirement to 1.9.0
  • add a notifications API to the multi interface
  • expand to use 6 characters per size in the progress meter
  • support Apple SecTrust – use the native CA store
  • add --knownhosts to the command line tool
  • wcurl: import v2025.11.04
  • write-out: make %header{} able to output all occurrences of a header

Bugfixes

We set a new project record this time with no less than 448 documented bugfixes since the previous release.

The release presentation mentioned above discusses some of the perhaps most significant ones.

Coming next

There is a small set of pull requests waiting to get merged, but other than that our future is not set, and we greatly appreciate your feedback, submitted issues and provided pull requests to guide us.

If this release happens to include an annoying regression, there might be a patch release already next week. If we are lucky and it doesn’t, then we aim for an 8.18.0 release in early January 2026.

Yes really, curl is still developed

A lot!

One of the most common reactions or questions I get about curl when I show up at conferences somewhere and do presentations:

is curl still being actively developed?

How many more protocols can there be? This is of course asked by people without close proximity to, or insight into, the curl project, and probably not into the internet protocol world either – which frankly is probably most of the civilized world. Still, these questions keep surprising me. Can projects actually ever get done?

(And do people really believe that adding protocols is the only thing that is left to do?)

Everything changes

There are new car models made every year in spite of the roads being mostly the same for the last decades, and there are new browser versions shipped every few weeks even though the web, to most casual observers, looks roughly the same now as it did a few years ago. Etc etc. Even things such as shoes or bicycles are developed and shipped in new versions every year.

In spite of how it may appear to casual distant observers, very few things remain the same over time in this world. This certainly is also true for the internet, the web and how we do data transfers over them. Just five years ago we did internet transfers differently than how we (want to) do them today. New tweaks and proposals are brought up at least on a monthly basis.

Not evolving implies stagnation and eventually… death.

As standards, browsers and users update their expectations, curl does as well. curl needs to adapt and keep up to stay relevant. We want to keep improving it so that it can match and go beyond what people want from it. We want to help drive and push internet transfer technologies to help users to do better, more efficient and more secure operations. We like carrying the world’s infrastructure on our shoulders.

It might evolve for decades to come

One of the things that has actually occurred to me after having worked on this project for some decades by now – and this is something I did not at all consider in the past – is that there is a chance the project will remain alive and in use for the next few decades as well. Because of exactly this nothing-ever-stops characteristic of the world around us, but of course also because of the existing amount of users and usage.

Current development should be done with care, a sense of responsibility and with the anticipation that we will carry everything we merge today with us for several more decades – at least. At the latest curl up meeting, I had a session I called 100 year curl, where I brought up things we as a project might need to work on and keep in mind if we indeed believe the curl project will, and should, be able to celebrate its 100th birthday. It is a slightly overwhelming (terrifying even?) thought, but in my opinion not entirely unrealistic. And when you think about it, we have already traveled almost 30% of the way toward that goalpost.

But it looks the same

— I used curl the first time decades ago and it still looks the same.

This is a common follow-up statement. What have we actually done during all this time that the users can’t spot?

A related question that to me also is a little amusing is then:

— You say you worked on curl full time since 2019, but what do you actually do all days?

We work hard at maintaining backwards compatibility and not breaking existing use cases. If you cannot spot any changes and your command lines just keep working, it confirms that we do things right. curl is meant to do its job and stay out of the way. To mostly be boring. A dull stack is a good stack.

We have refactored and rearranged the internal architecture of curl and libcurl several times in the past and we keep doing it at regular intervals as we improve and adapt to new concepts, new ideas and the ever-evolving world. But we never let that impact the API or the ABI, or break any previously working curl tool command lines.

I personally think that this is curl’s secret super power. The one thing we truly have accomplished and managed to stick to: stability. In several aspects of the word.

curl offers stability in an unstable world.

Now more than ever

Counting commit frequency or any other metric of project activity, the curl project is actually doing more development now and at a higher pace than ever before during its entire lifetime.

We do this to offer you and everyone else the best, the most reliable, the fastest, the most feature rich, the best documented and the most secure internet transfer library on the planet.

A gold ceremony to remember

There are those moments in life you know already from the start are going to be rare, once-in-a-lifetime events. This evening was one of those times.

On a dark and wet autumn Friday afternoon, my entire family and I dressed up as fancy as it gets and took a taxi to the Stockholm City Hall: Anja my wife, and my kids Agnes and Rex.

This was the Swedish Royal Academy of Engineering Science’s (IVA) 106th Högtidssammankomst (“festive gathering”) since its founding in 1919.

Being one of the four gold medal recipients of the night, our family got a dedicated person assigned to us to help us “maneuver” the venue and the agenda. Thanks Linus!

In the golden hall, Anja and I took our reserved seats in the front row as the almost 700 other guests slowly entered and filled every last available chair. The other guests were members of the Academy or special invitees, ministers, the speaker of the parliament etc. All in tail coats, evening dresses and the likes to conform with the dress code of the night.

The golden hall is named after its golden colored walls, all filled up with paintings of Swedish historic figures contributing to a pompous and important atmosphere and spirit. This is the kind of room you want to get awards in.

Part of the program in this golden hall was the gold medal awards ceremony. After short two-minute videos about each of the awardees and our respective deeds and accomplishments had been shown on the giant screen at the front of the room, we awardees were called to the stage.

The video shown about me and curl. Swedish with subtitles

Three gold medals and one large gold medal were handed out to my fellow awardees and myself this year. Carl-Henric Svanberg received the large gold medal. Mats Danielsson and Helena Hedblom were awarded the gold medal. The same as I.

The medals were handed to us one by one by Marcus Wallenberg.

In one of the agenda items in the golden hall, IVA’s CEO Sylvia Schwaag Serger gave a most inspiring talk about Swedish engineering, mentioning an amazing list of feats and accomplishments from the last year, with hope and anticipation for the future. Curl and I were also mentioned in her speech. I was even more humbled.

The audience here comprised some of the top minds and engineering brains in Sweden. Achievers and great minds. The kind of people you want appreciation from, because they know a thing or two.

Intermission

A small break followed. We strolled down to the giant main hall for some drinks: the blue hall, somewhat famous to anyone who has ever watched a Nobel Prize banquet. Several people told me the story that the original intent was for the walls to be blue, but…

The blue hall that isn’t very blue

Banquet

At about 19:00, Anja and I had to sneak up a floor again, together with the crowd of others seated at the main long table you can see in the photo above. Table 1.

On the balcony someone mentioned I should wear the prize. So with some help I managed to get it around my neck. It’s not a bad feeling I can tell you.

As everyone else in the hall found their way to their seats, we got to do a slow procession down the big wide stairs into the main hall to find our own.

Then followed a most wonderful three-course meal. I had excellent table neighbor company and we had a lively and interesting conversation all through the dinner. There were a few welcome short interruptions in the form of speeches and music performances. A most delightful dinner.

After the final apple tart was finished, there was coffee and more drinks served upstairs again, as the golden hall had apparently managed to transition while we ate downstairs.

When the clock eventually approached midnight, the entire Stenberg family walked off into the night and went home. A completely magical night was over, but it will live on in my mind for a long time.

Thank you to every single one involved.

The medal

The medal has an image of Prometheus on the front side, and Daniel Stenberg 2025 engraved on the back. The back also carries the name of the Academy and för framstående gärning – for outstanding achievement.

A medal to be proud of.

Of course I figured this moment in time also called for a graph.

On 110 operating systems

In November 2022, after I had been keeping track and adding names to this slide for a few years already, we could boast about curl having run on 89 different operating systems and only one year later we celebrated having reached 100 operating systems.

This time I am back with another update, and here is the official list of the 110 operating systems that have run curl.

I don’t think curl is unique in having reached this many operating systems, but I think it is a rare thing and I think it is even rarer that we actually have tracked all these names down to have them mentioned – and counted.

Disclaimers

For several of these cases, no patches or improvements were ever sent back to the curl project and we don’t know how much or how little work was required to make them happen.

The exact definition of “operating system” in this context is vague but separate Linux distributions do not count as another operating system.

There are probably more systems to include. Please tell me if you have run curl on something not currently mentioned.

AIxCC curl details

At the AIxCC competition at DEF CON 33 earlier this year, teams competed against each other to find vulnerabilities in provided Open Source projects by using (their own) AI powered tools.

An added challenge was that the teams were also tasked to have their tooling generate patches for the found problems, and the competitors could then try to poke holes in those patches – success would lead to a reduced score for the patching team.

Injected vulnerabilities

In order to give the teams actual and perhaps even realistic flaws to find, the organizers injected flaws into existing source code. I was curious about how exactly this was done, as curl was one of the projects they used for this in the finals, so I had a look and figured I would let you know. Should you perhaps also be curious.

Would your tools find these vulnerabilities?

Other C based projects used for this in the finals included OpenSSL, little-cms, libexif, libxml2, libavif, freerdp, dav1d and wireshark.

The curl intro

First, let’s paste their description of the curl project here to enjoy their heart-warming words.

curl is a command-line tool and library for transferring data with URLs, supporting a vast array of protocols including HTTP, HTTPS, FTP, SFTP, and dozens of others. Written primarily in C, this Swiss Army knife of data transfer has been a cornerstone of internet infrastructure since 1998, powering everything from simple web requests to complex API integrations across virtually every operating system. What makes curl particularly noteworthy is its incredible protocol support–over 25 different protocols–and its dual nature as both a standalone command-line utility and a powerful library (libcurl) that developers can embed in their applications. The project is renowned for its exceptional stability, security focus, and backward compatibility, making it one of the most widely deployed pieces of software in the world. From IoT devices to major web services, curl quietly handles billions of data transfers daily, earning it a reputation as one of the most successful and enduring open source projects ever created.

Five curl “tasks”

There is this website providing (partial) information about all the challenges in the final, or as they call them: tasks. Their site for this is very flashy and cyber, I’m sure, but I find it super annoying. It doesn’t provide all the details, but enough to give us some basic insight into what the teams were up against.

Task 9

The organizers wrote a new protocol handler into curl for supporting the “totallyfineprotocl” (yes, with a typo) and within that handler code they injected a rather crude NULL pointer assignment shown below. The result variable is an integer containing zero at that point in the code.

Task 10

This task had two vulnerabilities injected.

The first one is an added parser in the HTTP code for the response header X-Powered-by:, where the code copies the header field value into a fixed-size 64-byte buffer, so that if the contents are larger than that, it is a heap buffer overflow.
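The site only shows fragments of the injected code, but the bug class is simple to sketch. This is a hypothetical reconstruction of the shape of the flaw, not the actual code – the injected version would have done the copy without the length check:

```c
#include <string.h>

/* copy a header field value into a fixed 64-byte destination;
   without the length check, a longer value overflows the buffer */
static int store_powered_by(char *dst64, const char *value)
{
  size_t len = strlen(value);
  if(len >= 64)
    return 1;  /* error out: value plus terminator would not fit */
  memcpy(dst64, value, len + 1);
  return 0;
}
```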

The second one is curiously almost a duplicate of task 9 using code for a new protocol:

Task 20

Two vulnerabilities. The first one inserts a new authentication method into the DICT protocol code, containing a debug handler/message with a string format vulnerability. The curl internal sendf() function takes printf() formatting options.

The second is hard to understand based on the incomplete code they provide, but the gist of it is that the code uses an array of seconds in text format that it indexes with the given “current second” without taking leap seconds into account, which would access the stack out of bounds if tm->tm_sec is ever larger than 59:

Task 24

Third time’s the charm? Here’s the maybe not-so-sneaky NULL pointer dereference in a third made-up protocol handler, quite similar to the previous two:

Task 44

This task is puzzling to me because it is listed as “0 vulnerabilities” and there is no vulnerability details listed or provided. Is this a challenge no one cracked? A flaw on the site? A trick question?

Modern tools find these

Given what I have recently seen modern tools from Aisle and ZeroPath etc deliver, I suspect lots of tools can find these flaws now. As seen above, they were all rather straightforward and not particularly hidden or deeply layered. I think future competitions need to up their game. The caveat of course is that I didn’t look much at the tasks for the other projects; maybe they were harder?

Of course making the problems harder to find will also make more work for the organizers.

I suspect a real obstacle for the teams in finding these issues was the amount of other potential issues the tools also found and reported; some rightfully and some not quite as correctly. Remember how ZeroPath gave us over 600 potential issues on curl’s master repository just recently. I have no particular reason to think that other projects of comparable size would have fewer.

[Addition after first post] I was told that a general idea for how to inject proper and sensible bugs for the competition was to re-insert flaws from old CVEs, as those are genuine problems that existed in the project in the past. I don’t know why they ended up not doing this (for curl).

Reports?

I have unfortunately not seen much written by the competing teams in terms of reports and details from the competition. I am still waiting for details on some of their scans of curl.

A royal gold medal

The Royal Swedish Academy of Engineering Sciences (IVA) awards me its 2025 gold medal for my work on curl. (English version of IVA article)

This academy, established in 1919 by the Swedish king Gustav V, has been awarding great achievers for over one hundred years, and the simple idea behind the awards is, as quoted from their website:

Gold medals are awarded every year to people who, through outstanding deeds, have contributed to creating a better society.

I am of course humbled and greatly honored to have been selected as a receiver of said award this year. To be recognized as someone who has contributed to creating a better society, selected by top people in competition with persons of remarkable track records and achievements. Not too shabby for a wannabe-engineer like myself who did not even attend university.

There have been several software and tech related awardees for this prize before, but from what I can tell I am the first Open Source person to receive this recognition by the academy.

Justification

English version:

Daniel Stenberg, software developer, is awarded IVA’s Gold Medal for his outstanding contributions to software development, where he has played a central role in internet infrastructure and open source software. Through his work with curl – a tool now used by billions of devices worldwide – he has enabled reliable and secure data transfer over the internet, not only between traditional computer programmes but also across smartphones, vehicles, satellites and spacecraft.

The original Swedish “motivering”:

Systemutvecklare Daniel Stenberg tilldelas IVAs Guldmedalj för sina insatser inom mjukvaruutveckling där han haft en central betydelse för internetinfrastruktur och fri programvara. Genom sitt arbete med curl, verktyget som i dag används av miljarder enheter världen över, har han möjliggjort tillförlitlig och säker dataöverföring över internet. Inte bara mellan program i traditionella datorer utan allt från smartphones och bilar, till satelliter och rymdfarkoster.

The ceremony

The associated award ceremony, when the physical medal is handed over, happens this Friday in Stockholm City Hall’s Blue Hall, the same venue used for the annual Nobel Prize banquet.

I have invited my wife and my two adult kids to participate in those festivities.

See a gold ceremony to remember.

A second medal indeed

Did I not already receive a gold medal? Why yes, I did eight years ago. Believe me, it does not get old. This is something I can get used to. But yes: it is beyond crazy to get one medal in your life. Getting two is simply incomprehensible.

This is also my third award received within this calendar year so I completely understand if you already feel bored by my blog posts constantly banging my own drum. See European Open Source Achievement Award and Developer of the year for the two previous ones.

The medal

I wanted to include a good high resolution image of the medal in this post, but I failed to find one. I suppose I will just have to take a few shots myself after Friday and do a follow-up post!