
HTTP is not simple

I often hear or see people claim that HTTP is a simple protocol. Primarily of course from people without much experience or familiarity with actual implementations. I think I personally also had thoughts in that style back when I started working with the protocol.

After having personally devoted almost three decades to writing client-side HTTP code, and having been involved in the IETF work on all the HTTP specs produced since 2008 or so, I think I am in a decent position to give a more expanded view on it. HTTP is not a simple protocol. Far from it. Even if we presume that people actually mean HTTP/1 when they say that.

HTTP/1 may appear simple for several reasons: it is readable text, the most simple use case is not overly complicated, and existing tools like curl and browsers help make HTTP easy to play with.

The HTTP idea and concept can perhaps still be considered simple and even somewhat ingenious, but the actual machinery is not.

But yes, you can telnet to an HTTP/1 server and enter a GET / request manually and see a response. However, I don’t think that is enough to qualify the entire thing as simple.
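For illustration, such a manual session can look roughly like this (using example.com as a stand-in server on plain port 80):

telnet example.com 80
GET / HTTP/1.1
Host: example.com

A blank line ends the request, after which the server answers with a status line, response headers and a body.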

I don’t believe anyone has tried to claim that HTTP/2 or HTTP/3 are simple. In order to properly implement version two or three, you pretty much have to also implement version one, so in that regard they accumulate complexity and bring quite a lot of extra challenges in their own respective specifications.

Let me elaborate on some aspects of the HTTP/1 protocol that make me say it is not simple.

newlines

HTTP is not only text-based, it is also line-based – at least the header parts of the protocol. A line can be arbitrarily long as there is no limit in the specs, but implementations need to enforce a limit to prevent DoS attacks and similar problems. How long can a line be before a server rejects it? Each line ends with a carriage-return and linefeed, but in some circumstances only a linefeed is enough.

Also, headers are not UTF-8, they are octets and you must not assume that you can just arbitrarily pass through anything you like.

whitespace

Text-based protocols easily get this problem. Between fields there can be one or more whitespace characters. Some of these are mandatory, some are optional. In many cases HTTP also uses tokens that can either be a sequence of characters without any whitespace, or text within double quotes ("). In some cases they are always within quotes.

end of body

There is not one single way to determine the end of an HTTP/1 download – the “body” as we say in protocol lingo. In fact, there are not even just two. There are at least three (Content-Length, chunked encoding and Connection: close). Two of them require that the HTTP client parses content sizes provided in text format. These many end-of-body options have resulted in countless security related problems involving HTTP/1 over the years.
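To illustrate the least obvious of the three, here is a sketch of what a chunked response can look like, where each chunk is prefixed by its size in hexadecimal and a zero-sized chunk marks the end of the body:

HTTP/1.1 200 OK
Transfer-Encoding: chunked

4
Wiki
5
pedia
0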

parsing numbers

Numbers provided as text are slow to parse and sometimes error-prone. Special care needs to be taken to avoid integer overflows and to handle whitespace, +/- prefixes, leading zeroes and more. While easy for humans to read, they are less ideal for machines.
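As a made-up illustration, these are the kinds of length values an implementation may be confronted with, even though the spec only allows a plain sequence of digits – and the last one does not even fit in a 64-bit integer:

Content-Length: 5
Content-Length: 005
Content-Length:  +5
Content-Length: 99999999999999999999999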

folding headers

As if the arbitrary length headers with unclear line endings are not enough, they can also be “folded” – in two ways. First: a proxy can merge multiple headers into a single comma-separated one – except for some headers (like cookies) that cannot be merged. Second: a server can send a header as a continuation of the previous header by adding leading whitespace. This is rarely used (and discouraged in recent spec versions), but it is a protocol detail that an implementation needs to care about because it is used.
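As an example, a proxy may legally turn

Accept: text/html
Accept: application/json

into the equivalent single header

Accept: text/html, application/json

while the obsolete folding style continues a header value on the next line by starting that line with whitespace (X-Example is of course just a made-up header name for illustration):

X-Example: a value that
 continues here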

never-implemented

HTTP/1.1 ambitiously added features that at the time were not used or deployed widely on the Internet, so while the spec describes how for example HTTP pipelining works, trying to use it in the wild is asking for a series of problems and is nothing but a road to sadness.

Later HTTP versions added features that better fulfilled the criteria that Pipelining failed to: mostly in the way of multiplexing.

The 100 response code is in similar territory: specified, but rarely actually used. It complicates life for new implementations. The fact that there is a discussion this week about particulars in the 100 response state handling, twenty-eight years after it was first published in a spec, I think tells us something.
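As a reminder, the 100 dance goes roughly like this: the client announces a request body with an Expect header and then ideally waits a little for the interim response before it sends the data:

POST /upload HTTP/1.1
Host: example.com
Content-Length: 1048576
Expect: 100-continue

HTTP/1.1 100 Continue

Only then does the client transmit the body, and the server eventually responds with the final status.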

so many headers

The HTTP/1 spec details a lot of headers and their functionality, but that is not enough for a normal current HTTP implementation to support. This is because things like cookies, authentication, new response codes and much more that an implementation may want to support today are features outside of the main spec, described in additional separate documents. Some details, like NTLM, are not even found in RFC documents.

Thus, a modern HTTP/1 client needs to implement and support a whole range of additional things and headers to work fine across the web. “HTTP/1.1” is mentioned in at least 40 separate RFC documents. Some of them are quite complex by themselves.

not all methods are alike

While the syntax ideally should work exactly the same independently of which method (sometimes referred to as verb) is used, that is not how reality works.

For example, if the method is GET we can indeed also send a body in the request, similar to how we typically do with POST and PUT, but because this was never properly spelled out in the past it is not interoperable today, to the extent that doing it is just a recipe for failure in a high enough share of attempts across the web.

This is one of the reasons why there is now work on a new HTTP method called QUERY which is basically what GET + request body should have been. But that does not simplify the protocol.

not all headers are alike

Because of the organic way several headers were created, deployed and evolved, a proxy for example cannot just blindly combine two headers into one, as the generic rules say it could. There are headers that specifically do not follow those rules and need to be treated differently. Like for example cookies.
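Cookies are the classic example, since cookie data can itself contain commas – for instance in an Expires attribute – so merging two Set-Cookie headers into one comma-separated header would mangle them:

Set-Cookie: id=abc; Expires=Wed, 21 Oct 2026 07:28:00 GMT
Set-Cookie: lang=en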

spineless browsers

Remember how browser implementations of protocols always tend to prefer to show the user something and guess the intention rather than showing an error, because if they were stringent and strict they would risk users switching to another browser that is not.

This impacts how the rest of the world gets to deal with HTTP, as users then come to expect that what works with the browsers should surely also work with non-browsers and their HTTP implementations.

This makes interpreting and understanding the spec secondary compared to just following what the major browsers have decided to do in particular circumstances. They may even change their stances over time and they may at times contradict explicit guidance in the specs.

size of the specs

The first HTTP/1.1 RFC 2068 from January 1997 was 52,165 words in its plain text version – which almost tripled the size of the HTTP/1.0 document RFC 1945 at merely 18,615. A clear indication of how the perhaps simple HTTP 1.0 was not so simple anymore in 1.1.

In June 1999, the updated RFC 2616 added several hundred lines and clocked in at 57,897 words. Almost 6K more words.

A huge amount of work was then undertaken within the IETF, and in the fifteen years that followed, the single-document HTTP/1.1 spec was converted into six separate documents.

RFC 7230 to RFC 7235 were published in June 2014 and they hold a total of 90,358 words. It had grown another 56%. It is comparable to an average-sized novel in number of words.

The whole spec was subsequently rearranged and reorganized again to better cater for the new HTTP versions, and the latest update was published in June 2022. The HTTP/1.1 parts had then been compacted into three documents, RFC 9110 to RFC 9112, with a total of 95,740 words.

For the sake of argument, let’s say we can read two hundred words per minute when plowing through this. That is probably a little slower than average reading speed, but I imagine we read standards specs a little slower than we read novels, for example. Let’s also say that 10% of the words are cruft we don’t need to read.

If we read only the three latest HTTP/1.1 related RFC documents non-stop, that is still roughly 95,740 × 0.9 ≈ 86,000 words, which at 200 words per minute is about 430 minutes: more than seven hours.

Must die?

In a recent conference talk with this click-bait title, it was suggested that HTTP/1 is so hard to implement right that we should all stop using it.

Necessarily so?

All this, and yet there are few other Internet protocols that can compete with HTTP/1 in terms of use, adoption and popularity. HTTP is a big shot on the internet. Maybe this level of complication has been necessary to reach this success?

Comparing with other popular protocols still in use like DNS or SMTP I think we can see similar patterns: started out as something simple a long time ago. Decades later: not so simple anymore.

Perhaps this is just life happening?

Conclusion

HTTP is not a simple protocol.

The future is likely just going to be even more complicated as more things are added to HTTP over time – for all versions.

curl tells the %time

The curl command line option --write-out, or just -w for short, is a powerful and flexible way to extract information from transfers done with the tool. It was introduced already back in version 6.5 in early 2000.

This option takes an argument in which you can add “variables” that hold all sorts of different information, from time information, to speed, sizes, header content and more.

Some users have outright started to use the -w output for logging the performed transfers, and for that purpose a little detail was missing: the ability to output the time the transfer completed. After all, most log lines feature the time in one way or another.

Starting in curl 8.16.0, curl -w knows the time and lets the user specify exactly how to render it in the output. Suddenly this output is a whole notch better for logging purposes.

%time{format}

Since log files also tend to use different time formats, I decided I didn’t want to pick one fixed format and risk that a huge portion of users would think it is the wrong one, so I went straight for strftime formatting: the user controls the time format using standard %-flags – different ones for year, month, day, hour, minute, second etc.

Some details to note:

  1. The time is provided in UTC, not local time.
  2. It also supports %f for microseconds, which is a POSIX extension already used by Python and possibly others.
  3. %z and %Z (for time zone offset and name) had to be fixed to become portable and identical across systems and platforms.

Example

Here’s a sample command line outputting the time the transfer completed:

curl -w "%time{%a %b %e %Y - %H:%M:%S.%f} %{response_code}\n" https://example.com -o saved

When I ran this command line it gave me this output:

Wed Aug 6 2025 - 12:43:45.160382 200

Credits

The clock image by Alexa from Pixabay

Follow redirects but differently

In the early days of curl development we (I suppose it was me personally but let’s stick with we so that I can pretend the blame is not all on me) made the possibly slightly unwise decision to make the -X option change the HTTP method for all requests in a curl transfer, even when -L is used – and independently of what HTTP responses the server returns.

That decision made me write blog posts and inform people all over about how using -X superfluously causes problems.

In curl 8.16.0, we introduce a different take on the problem – or better yet, a solution really: a new command line option that offers a modified behavior. Possibly the behavior people thought curl had all along.

Just learn to use --follow going forward (in curl 8.16.0 and later).

This option works fine together with -X and will adjust the method in the possible subsequent requests according to the HTTP response code.

A long time ago I wrote separately about the different HTTP response codes and what they mean in terms of changing (or not) the method.
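As a hypothetical example, with a command line like the one below a 303 response should make curl switch to GET for the next request, while a 307 or 308 response should keep the DELETE method and only change the target URL:

curl --follow -X DELETE https://example.com/old-resource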

--location remains the same

Since we cannot break existing users and scripts, we had to leave the existing --location option working exactly like it always has. This option is thus mutually exclusive with --follow, so only pick one.

QUERY friendly

Part of the reason for this new option is to make sure curl can follow redirects correctly for other HTTP methods than the good old-fashioned GET, POST and PUT. We already see PATCH used to some extent, but perhaps more important is the work on the spec for the new QUERY method. It is a flavor of POST, but with a few minor yet important differences. Possibly enough for me to write a separate blog post about, but right now we can stick to it being “like POST”, in particular from an HTTP client’s perspective.

We want curl to be able to do a “post” but with a QUERY method and still follow redirects correctly. The -L and -X combination does not support this.

curl can be made to issue a proper QUERY request and follow redirects correctly like this:

curl -X QUERY --follow -d sendthis https://example.com/

Thank you for flying curl!

c10kday

From March 20, 1998 when the first curl release was published, to this day August 5, 2025 is exactly 10,000 days. We call it the curl-10000-day. Or just c10kday. c ten K day.

We want to celebrate this occasion by collecting and sharing stories. Your stories about curl. Your favorite memories. When you used curl for the first time. When curl saved your situation. When curl rescued your lost puppy. What curl has meant or perhaps still means to you, your work, your business, or your life. We want to favor and prioritize the good, the fun, the nostalgic and the emotional stories but it is of course up to your discretion.

We have created this thread in curl’s GitHub Discussion section for this purpose, so please go there and submit your story or read what others have shared.

https://github.com/curl/curl/discussions/17930

In the curl factory this day is nothing special. We keep hammering out new features and bugfixes – just like we always do.

Thanks for flying curl.

Even happier eyeballs

Back in 2012, the Happy Eyeballs RFC 6555 was published. It details how a sensible Internet client should proceed when connecting to a server. It basically goes like this:

Give the IPv6 attempt priority, then with a delay start a separate IPv4 connection in parallel with the IPv6 one; then use the connection that succeeds first.

We also tend to call this connection racing, since it is like a competition where multiple attempts compete trying to “win”.

In a normal name resolve, a client may get a list of several IPv4 and IPv6 addresses to try. curl would then pick the first, try that, and if that fails move on to the next etc. If a whole address family fails, it would start with the other one immediately.

v2

The updated Happy Eyeballs v2 RFC 8305 was published in 2017. It focused a lot on having the client start its connections earlier in the process, preferably while DNS responses are still arriving, instead of waiting for the hostname resolve phase to end first.

This is complicated for lots of clients because there is no established (POSIX) API for doing such name resolves, so for a portable network library like libcurl we could not follow most of the new advice in this spec.

QUIC added a dimension

In 2012 we did not have QUIC on the map, and not practically in 2017 either, so those eyeballing specs did not include such details.

Even later, HTTP/3 was documented to require an alt-svc response header before a client would know that the server speaks HTTP/3, and only then could it attempt QUIC and expect it to work.

While curl supports the alt-svc response approach, that information arrives far too late for many users – and it is especially damning for a command line tool as opposed to a browser, since lots of users just do single-shot transfers and then never get to use HTTP/3 at all.

To combat that drawback, we decided that adding QUIC to the mix should add a separate connection competition. To allow faster and earlier use of QUIC.

Start the QUIC-IPv6 connect attempt first, then in order the QUIC-IPv4, TCP-IPv6 and finally the TCP-IPv4.

To users, this typically makes for a very smooth operation where the client just automatically connects to the “best” alternative without having to make any particular choices or decisions. It gracefully and transparently adapts to situations where IPv6 or UDP have problems etc.
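In curl command line terms, this racing happens when you allow HTTP/3 with the --http3 option, which keeps the TCP-based versions as fallback candidates in the race, as opposed to --http3-only which does not:

curl --http3 https://example.com/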

v3 and HTTPS-RR

With the introduction of HTTPS-RR, there are now more ways to get IP addresses for hosts, and there is ongoing work within the IETF on a v3 of the Happy Eyeballs specification detailing how exactly everything should be put together.

We are of course following that development to monitor and learn how we should adapt and polish curl connects further.

Parallel more

While waiting on the happy eyeballs v3 work to land in a document, Stefan Eissing took it upon himself to further tweak how curl behaves in an attempt to find the best connection even faster. Using more parallelism.

Starting in curl 8.16.0, curl will start the first IPv6 and the first IPv4 connection attempts exactly like before, but then, if none of them have connected after 200 milliseconds curl continues to the next address in the list and starts another attempt in parallel.

An illustration

Let’s take a look at an example of curl connecting to a server – let’s call the server curl.se. The numbers below show the order of things after the DNS response has been received.

  1. The first connect attempt starts using the first IPv6 address from the DNS response. If it has not succeeded within 200 milliseconds…
  2. The second attempt starts in parallel, using the first IPv4 address. Now two connect attempts are running and if neither have succeeded in yet another 200 milliseconds…
  3. A second IPv6 connect attempt is started in parallel, using the second IPv6 address from the list. Now three connect attempts are racing. If none of them succeeds in another 200 milliseconds…
  4. A second IPv4 race starts, using the second IPv4 address from the list.
  5. … and this can continue, if this is a really slow or problematic server with many IP addresses.

Of course, each failed attempt makes curl immediately move to the next address in the list until all alternatives have been tested.

Add QUIC to that

The illustration above can be seen as “per transport”. If only TCP is wanted, there is a single such race going on. With potentially quite a few parallel attempts in the worst cases.

If instead HTTP/3 or a lower HTTP version is wanted, curl first starts a QUIC connection race as illustrated and then after 200 milliseconds it starts a similar TCP race in parallel to the QUIC one! Both run at the same time, the first one to connect wins.

A little table to illustrate when the different connect attempts start when either QUIC or TCP is okay:

Time (ms)   QUIC                      TCP
0           Start IPv6 connect        -
200         Start IPv4 connect        Start IPv6 connect
400         Start 2nd IPv6 connect    Start IPv4 connect
600         Start 2nd IPv4 connect    Start 2nd IPv6 connect
800         Start 3rd IPv6 connect    Start 2nd IPv4 connect

So in the case of trying to connect to a server that does not respond and that has more than two IPv6 and IPv4 addresses each, there could be nine connection attempts running after 801 milliseconds.

200 ms can be changed

The 200 millisecond delay mentioned above is just the default. It can easily be changed, both with the library and with the command line tool.
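For example, assuming the existing --happy-eyeballs-timeout-ms option is the knob used for this delay, an impatient user could halve it like this:

curl --happy-eyeballs-timeout-ms 100 https://example.com/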

Credit

Image by Ilona Ilyés from Pixabay (heavily cropped)

curl adds parallel host control

I’m convinced a lot of people have not yet figured out that curl has supported parallel downloads for six years already.

Provided a practically unlimited number of URLs, curl can be asked to get them in a parallel fashion. It then makes sure to keep N transfers alive for as long as there are N or more transfers left to complete, where N is a custom number that defaults to 50.

Concurrently transferring data from potentially a large number of different hosts can drastically shorten transfer times and who doesn’t prefer to complete their download job sooner rather than later?

Limit connections per host

At times however, you may want to do a lot of transfers, and you want to do them in parallel for speed, but maybe you prefer to limit how many connections curl should use per hostname among all the URLs?

This per-host limit is a feature libcurl has offered applications for a long time and now the time has come for curl tool users to also enjoy its powers.

Per host should perhaps be called per origin if we spoke web lingo, because it rather limits the number of connections to the same protocol + hostname + port number combination. We call that a host here for simplicity.

To set a cap on how many connections curl is allowed to use for each specific server use --parallel-max-host [number].

For example, if you want to download ten million images from this site, but never use more than six connections:

curl --parallel --parallel-max-host 6 https://example.com/[1-10000000].jpg --remote-name

Connections

Pay special attention to the exact term: this limits the number of connections used to each host. If the transfers are done using HTTP/2 or HTTP/3, they can be done using many streams over just one or a few connections so doing 50 or 200 transfers in parallel should still be perfectly doable even with a limited number of connections. Not so much with HTTP/1.

Ships in 8.16.0

This command line option will become available in the pending curl version 8.16.0 release.

option parsing in curl

We have always had a custom command line option parser in curl. It is fast and uncomplicated and gives us the perfect mix of flexibility and function. It also saves us from importing or using code with another license.

In one aspect it has behaved slightly differently than many other command line parsers: the way it accepts arguments to long options.

Long options are the options provided using a name that starts with two dashes, rather than a single letter. Example:

curl --user-agent "curl/2000" https://example.com/

The example above tells curl to use the user agent curl/2000 in the transfer. The argument to the --user-agent option is separated from it with a space.

When instead using the short version of the same option, the argument can be specified with a space in between or not:

curl -A curl/2000 https://example.com/

or

curl -Acurl/2000 https://example.com/

What about equals sign?

A common paradigm and syntax style for accepting long options in command line tools is the “equals sign” approach. When you provide an argument to a long option you do this by appending an equals sign followed by the argument to the option; with no space. Like this:

curl --user-agent="curl/2000" https://example.com/

This example uses double quotes but they are of course not necessary if there is no space or similar in the argument.

Bridging the gap

To make life easier for future users, curl now also supports this latter style – starting in curl 8.16.0. With this syntax supported, curl accepts a more commonly used style and should therefore cause fewer surprises. It makes it easier to write curl command lines.

I emphasize that this change is an improvement for future users, because I really don’t think it is a good idea for most users to switch to this syntax immediately. This is of course because all the older curl versions still in wide use around the world do not support it.

I think it is better if we wait a year or two until we start using this option style in curl documentation and example command lines. To give time for users to upgrade to a version that has support for it.

Output nothing with --out-null

Downloading data from a remote URL is probably the single most common operation people do with curl.

Often, users then add various additional options to the command line to extract information from that transfer, but may also decide that the actually fetched data is not interesting. Sometimes they don’t get accurate meta-data unless the full download is made, sometimes they run performance measurements where the actual content is not important, and so on. Users sometimes have reasons for not saving their downloads.

They do downloads where the actual downloaded content is tossed away. On GitHub alone, we can find almost one million command lines doing such curl invocations.

curl of course offers multiple ways to discard the downloaded data, but maybe the most straightforward way is to write the contents to a null device such as /dev/null on *nix systems or NUL: on Windows. Like this:

curl https://example.com/ --output /dev/null

or using the short option

curl https://example.com/ -o /dev/null

In many cases we can accomplish the same thing with a shell redirect – which also redirects multiple URLs at once:

curl https://example.com/ >/dev/null

Improving nothing

The command line above is perfectly fine, works, and has been working for decades. It does however have two drawbacks:

  1. Lack of portability. curl runs on most operating systems and most options and operations work identically, to the degree that you can often copy command lines back and forth between machines without thinking much about it. Outputting data to /dev/null is however not terribly portable and trying that operation on Windows for example will cause the command line to fail.
  2. Performance. It may not look like much, but completely avoiding writing the data instead of writing it to /dev/null makes benchmarks show a measurable improvement. So if you don’t want the data, why not do the operation faster rather than slower?

The shell redirect approach has the same drawbacks.

Usage

The new option is used as follows; you need one --out-null occurrence per URL whose output you want to discard.

curl https://example.com/ --out-null

This allows you to for example send one to null and save the other:

curl https://example.com/ --out-null https://example.net/ --output save-data

Coming in 8.16.0

This command line option debuts in curl 8.16.0, shipping in September 2025.

Credits

Stefan Eissing brought this option and also benchmarked it.

Carving out msh3

I hope that by now most readers of my blog have understood that curl, and libcurl specifically, is an architecture with a transfer core and a set of different backends plugged in, backends powered by different third-party libraries.

The exact set of backends used in a particular build is decided by the person that builds curl.

Which backends curl supports varies over time (and platform). We like adding support for more backends and letting users decide which ones to use, as this allows us to approach it with a survival-of-the-fittest attitude. What does not work in the long run, or what isn’t actually used, we can deprecate and remove again. Ideally this helps us select the better ones for the future.

HTTP/3

For the last few years curl has supported the HTTP/3 protocol powered by one out of four different backends:

  1. nghttp3 + ngtcp2
  2. quiche
  3. nghttp3 + OpenSSL-QUIC
  4. msh3 + msquic

(All except the first listed combination, we still label experimental.)
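As an example of how a builder picks one, the first combination in the list can be selected at configure time roughly along these lines, assuming nghttp3, ngtcp2 and a TLS library with the necessary QUIC support (GnuTLS here) are installed where configure can find them; the options also accept explicit paths:

./configure --with-gnutls --with-nghttp3 --with-ngtcp2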

Dropping msh3

In this quartet, there is one option that stands out a little: the last one. The msh3-powered backend was brought in and merged into the curl source tree a few years ago with the hope that this solution would end up being a good choice for people on Windows, since it is the only choice in the list that can be built to use the native Windows TLS solution SChannel.

Unfortunately, this work was never finalized. It never worked correctly in curl, the API and architecture of msh3 make it quirky and cumbersome to integrate – and quite frankly we can’t seem to drum up any interest from people to test or to work on improving this backend.

As we have three other working backends, all of which also can build and run on Windows, we see no benefit in dragging msh3 along. In fact, there is a cost in maintenance, in keeping the build working and the tests running etc., that we would rather avoid. In particular as we seem to be doing that for virtually no gain.

I want to stress that I don’t think there is anything wrong with msh3 nor its underlying msquic library. They simply have not been made to work properly in curl.

Updated backend map

The msh3 backend has now been removed from git in the current master branch and this is how the HTTP/3 offering will look in the coming curl 8.16.0 release.

Hello Sprout

Sprout is the name of my new machine that just arrived. The crowd-funded laptop. Since this beauty is graciously sponsored by a large crowd of people I felt I should share a little bit of its journey and entry into my life.

First I needed a name for it, and since it is small and is meant to grow with me a bit, I think Sprout feels apt.

The crowd-funding

Starting the initiative on a Saturday afternoon might not have been the most clever thing to do to get the widest possible reach, but it seems it did not matter. We reached the goal of 3,500 USD within 90 minutes and people have kept on donating even after that; the counter is now at 7,000 USD. Amazing.

As mentioned: all surplus ends up in the general curl fund and will be used solely and exclusively to cover expenses that benefit and favor curl and its development. That is a promise. The curl fund is also completely open and transparent so everyone who wants to can in fact monitor our finances to verify this.

Specs

I decided to go with a Framework laptop because I like and want to support their concept of modular and upgradable laptops. After the overwhelming funding round, I decided to go with the top of the line AMD CPU alternative they offer, 96GB of RAM and 4TB of storage. This should make the laptop last a while I think.

  • CPU: AMD Ryzen AI 9 HX 370. Up to 5.1 GHz. 12 cores, 24 threads.
  • Graphics (integrated): AMD Radeon 890M. Up to 2.9GHz. 16 Graphics Cores
  • Wifi: AMD RZ717 Wi-Fi 7
  • Display: 13.5″ 2880×1920 120Hz matte display (3:2 ratio)
  • Memory: DDR5-5600 – 96GB (2 x 48GB)
  • Storage: WD_BLACK SN850X NVMe – M.2 2280 – 4TB
  • Laptop Bezel: Framework Laptop 13 Bezel – Black
  • Keyboard: Swedish/Finnish (2nd Gen)
  • Dimensions: 15.85mm x 296.63mm x 228.98mm
  • Weight: 1.3 Kg

Outputs

The laptop has four slots available for ports. I have USB-C, USB-A, HDMI and external Ethernet modules. I bought a few more than four, because I don’t know which exact setup I will prefer and they are interchangeable so I can change them according to the situation I’m in.

Dimensions compared to the old

My old laptop was a Lenovo T470S 14″.

Dimensions: 18.8 mm x 331 mm x 226.8 mm
Weight 1.32 kg

So the new one is 3 mm thinner, about 3.5 cm narrower, pretty much the same depth (+2 mm) and pretty much the same weight.

Assembling

Ordered without Windows installed (of course), this thing arrived like an IKEA flat-pack and there was some assembly required. The necessary screwdriver comes included and I could complete the task in under ten minutes. Not at all complicated.

Linux

I noticed two different Linux distributions offered as “easy installs” with guides from Framework, but as none of them were Debian I opted to take the more complicated route.

Debian

I downloaded a DVD iso image for Debian testing, copied it onto a USB stick and booted up Sprout with it. The installation went like a breeze and it detected the Wifi networking just fine.

Once the system came up for real without the USB stick, I edited the necessary files and took it up to current Debian Unstable over wifi with no problems.

Initial glitches

I experienced some glitches (X or the keyboard or something would stop accepting input after 5-15 minutes of use), which I first thought was due to an older Linux kernel as I had friends tell me that I might need 6.15+ for proper hibernation support and Debian unstable only has a 6.12 one just now. I switched to the Debian experimental kernel (6.16-rc7) but the issue remained. Hm?

I then remembered I hadn’t upgraded the laptop BIOS to its latest version yet, and after having invoked

fwupdmgr refresh --force
fwupdmgr get-updates
fwupdmgr update

and done a reboot, it first seemed to have fixed the problems but I was wrong. Is it X11 related? I have now switched my desktop to Plasma/Wayland to see if it fixes the problem. I might switch around a little bit more if I see it again because it is clearly a software glitch and not a hardware problem. Hardly Framework’s fault but instead more of a thing that happens occasionally when you run bleeding edge stuff. I’ll sort it out.

Console

Having a small but high-DPI screen and trying to use the console with its default (tiny) font is next to impossible, at least with my aging eyes, so I spent a few minutes figuring out how to use setfont and then to invoke dpkg-reconfigure console-setup.
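For the curious, the manual route is something along these lines – the Terminus console fonts ship with Debian’s console-setup packages, although the exact file name may of course differ on other setups:

setfont /usr/share/consolefonts/Lat15-Terminus32x16.psf.gz
sudo dpkg-reconfigure console-setup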

I find it a little curious that the Debian installer doesn’t have any easy provided option to do this already at install time.

A message

A few days after I had received my laptop I received a package via FedEx, and as I opened it I found this lovely note and some presents from Framework!

I know some of my followers tagged and mentioned Framework during the crowdfunding campaign but I of course didn’t expect anything from that.

The thing that looks like a CD-R among the gifts is actually a mouse mat, slightly larger than a CD. The small packages are USB-C modules for the laptop.

This little message still holds and shows more appreciation than what I have received from most companies that ever used my Open Source. It’s not a high bar. I truly appreciate it – said entirely without sarcasm.

Impressions and Performance

Just to give you a small idea of the performance difference, I decided to compare a simple but common operation I do. Build curl. It basically requires three command lines:

autoreconf -fi

This invokes a series of tools to setup the build.

Sprout: 4.8 seconds

Old: 9.3 seconds

Diff: 1.9 times faster

configure --with-openssl

A long series of single-threaded tests of the environment. Lots of invokes of gcc to check for features, functions etc.

Sprout: 10.4 seconds

Old: 11.1 seconds

Diff: 1.1 times faster

make -sj

This invokes gcc and forks off lots of new processes. The old machine’s 4 threads vs the new 24 threads probably plays a role here.

Sprout: 8.9 seconds

Old: 60.6 seconds

Diff: 6.8 times faster

(My desktop PC does the same in under 4 seconds.)

Keyboard

This is not a full-time development machine for me and I have never been fully productive on a laptop and I don’t expect to be on this new one either. I don’t think a laptop keyboard exists that can satisfy me the way a proper one can.

The Framework one does not have dedicated page up/down keys for example. The keys still feel decently fine to press and I think I will adjust to the layout over time.

Stickers

I offered sticker space on my laptop lid to everyone who donated 200 USD or more, but so far not a single one has reached out to make this a reality. To honor my promise I intend to wait a little while before I put my first stickers on it.

For reference this is what my old laptop looks like.