Category Archives: Blogging

Meta topic when writing about blogging, ranting, posting, blah blah

Summing up My 2019

2019 holds a special place in my heart. It was different from many other years for me, in several ways. It was a great year! This is what 2019 was to me.

curl and wolfSSL

I quit Mozilla last year, and at the beginning of this year I could announce that I had joined wolfSSL. For the first time in my life I could actually work on curl as my day job. As the project turned 21, I had spent somewhere in the neighborhood of 15,000 unpaid spare-time hours on it, and now I could finally do it “for real”. It’s huge.

Still working from home of course. My commute is still decent.

HTTP/3

The name HTTP/3 was only settled in November 2018, and this year has been all about getting it ready. I was proud to land and promote HTTP/3 in curl just before the first browser (Chrome) announced its support. The standard is still in progress and we hope to see it ship not too long into next year.
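
If you have a curl built against an HTTP/3-capable library (for now that means an experimental build), trying it out is a one-liner. A minimal sketch, with example.com standing in for any HTTP/3-enabled host:

# ask curl to attempt HTTP/3 for this transfer (needs an HTTP/3-enabled build)
curl --http3 https://example.com/

# list what the installed curl was built to support (look for HTTP3 under Features)
curl --version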

curl

Working on curl full time allows a different kind of focus. I’ve landed more commits in curl during 2019 than in any other year, going back all the way to 2005. We also reached 25,000 commits and 3,000 forks on GitHub.

We’ve added HTTP/3, alt-svc, parallel transfers in the curl tool, tiny-curl, fixed hundreds of bugs and much, much more. With ten days left of the year, I’ve authored 57% (over 700) of all the commits done in curl during 2019.
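
To give a feel for two of those additions, here is a quick sketch of the new command line options; the URLs are just placeholders:

# -Z (--parallel) performs the listed transfers concurrently instead of one by one
curl -Z -O -O https://example.com/one.html https://example.com/two.html

# --alt-svc enables Alt-Svc handling and caches learned alternative services in the given file
curl --alt-svc altsvc-cache.txt https://example.com/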

We ran our curl up conference in Prague and it was awesome.

We also (re)started our own curl Bug Bounty in 2019 together with HackerOne and paid out over 1,000 USD in rewards throughout the year. It was so successful that we’re determined to raise the amounts significantly going into 2020.

Public speaking

I’ve done 28 talks in six countries. A crazy amount in front of a lot of people.

In media

Dagens Nyheter published this awesome article about me. I’m now featured in the Internetmuseum. I was interviewed and highlighted in Bloomberg Businessweek’s “Open Source Code Will Survive the Apocalypse in an Arctic Cave” and in Owen Williams’s Medium post The Internet Relies on People Working for Free.

When GitHub held their GitHub Universe event in November and talked about their new Sponsors program on stage (which I am part of, you can sponsor me), this huge quote of mine was shown on the big screen.

Maybe not media, but in no fewer than two Mr Robot episodes we could see curl commands in a TV show!

Podcasts

I’ve participated in three podcast episodes this year, all in Swedish. Kompilator episode 5 and episode 8, and Kodsnack episode 331.

Live-streamed

I’ve toyed with live-streamed programming and debugging sessions. That’s been a lot of fun and I hope to continue doing them on and off going forward. They also made me consider, and get started on, my libcurl video tutorial series. We’ll see where that ends up…

2020?

I figure it can become another fun year too!

A curl delivery network

I’ve run my own public web sites on hardware I’ve administered myself for over twenty years now. I’ve hosted the curl web site myself since its inception.

The curl web site at curl.haxx.se has recently been delivering roughly 1.5 terabytes of data to the world per month. The CA bundle we convert to PEM from the Mozilla source code is alone downloaded more than 100,000 times per day. Occasional blog entries I’ve posted here have climbed fast on popular sites such as Hacker News and Reddit, and have resulted in intense visitor storms hitting this same server – sometimes reaching visitor counts above 200,000 “uniques”, most of them within the first few hours of publication. At times, those visitor spikes have effectively brought the server to its knees.
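
For reference, fetching and using that CA bundle with curl itself looks roughly like this; I am assuming the usual /ca/cacert.pem location on the site and the file names are just examples:

# download the PEM bundle from the curl site
curl -O https://curl.haxx.se/ca/cacert.pem

# tell curl to verify a server against that bundle instead of the system default store
curl --cacert cacert.pem https://example.com/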

Yes, my personal web site and the curl web site share the same physical server. It also hosts more than a dozen other sites and numerous services for our own pleasure and fun, providing services for a handful of different open source projects. So when the server has to cease doing work because it runs out of memory or hits other resource constraints, that causes interruptions all over. Oh yes, and my email doesn’t reach me.

Inconvenient and annoying.

The server

Haxx owns and runs this co-located server that we have a busload of web servers on – for the good of the projects and people that run things on it. This machine’s worst bottleneck is available RAM, and perhaps I/O performance. Every time the server slows to a crawl due to network traffic overload, we discuss how we should upgrade it. Installing a new machine and transferring over all the sites and services is work. Work that none of us at Haxx are very eager to volunteer for. So it hasn’t been done yet, and frankly the server handles the daily load just fine, without even a blink. Which is the case ninety-nine point something percent of the time…

Haxx pays for a certain amount of network traffic, so as long as we stay below a certain threshold we keep paying the same monthly fee. We don’t want to increase the traffic by orders of magnitude, as that would cost more.

The specific machine, which sits deep inside a server room in Stockholm, Sweden, is a five(?) year old Dell Poweredge E310 with an Intel Xeon X3440 at 2.53GHz and 8GB of RAM. This model is shown in the image at the top.

Alternatives that haven’t helped

Why not a mirror system? We had a fair number of curl site mirrors a few years ago, but it never worked well because they were always less reliable than the main site, and they often turned stale and out of sync with the master site, which eventually just hurt users.

They also tricked visitors into bookmarking or otherwise returning to the mirror site instead of the real one, and there were always the annoying people who couldn’t resist filling their mirror with ads and stuff. Plus, they didn’t help much with the storms hitting the main site.

Why not a cloud server? Because with the amount of services, servers and various things we do on our server, it would be inconvenient and expensive. But perhaps even more because we started out like this so we have invested time and energy into the infrastructure as it works right now. And I enjoy rowing my own boat!

The CDN

Fastly reached out and graciously offered to help us handle the load – both on account of the traffic amounts, and to save our machine from struggling this hard the next time I write something that tickles people’s curiosity (or rage) to the level where several thousand visitors want to read the same article at the same time.

Starting now, the curl.haxx.se and the daniel.haxx.se web sites are fronted by Fastly. It should give web site visitors from all over the world faster response times and it will make the site more reliable and less likely to have problems due to traffic load going forward.

In case you’re not familiar with what a CDN is, a simplified explanation would say it is a globally distributed network of reverse proxy servers deployed in multiple data centers. These CDN servers front the Internet and will to the largest extent possible serve the visitors with the right content directly from their own caches instead of them reaching the actual lowly backend server I run that hosts the original content. Fastly has lots of servers across the globe for this purpose. Users who are a long way away from Sweden will probably be the ones who will notice this change the most, as you may suddenly find haxx.se content much closer (network round-trip wise) than before.
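
If you are curious whether a particular response came out of a cache near you or had to travel all the way to the origin, looking at the response headers is the easiest way. A minimal sketch; which cache-related headers actually show up depends on the CDN and may change, so treat the grep pattern as a loose filter:

# fetch only the response headers for the front page and show the cache-related ones
curl -sI https://curl.haxx.se/ | grep -i -E 'server|cache|age'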

Standards

These new servers will host the sites over HTTPS just like before, and they will require TLS 1.2 and SNI. They will work over IPv6 and support HTTP/2. Network-standards-wise, there shouldn’t be any step down – and honestly, I haven’t exactly been on the cutting edge of these technologies myself for these sites in the past.
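
You can check those properties from the command line with curl itself; a small sketch using standard curl options and write-out variables:

# request over HTTP/2 and print which HTTP version was actually negotiated
curl -s --http2 -o /dev/null -w '%{http_version}\n' https://daniel.haxx.se/

# force IPv6 and inspect the TLS handshake details in the verbose output
curl -6 -sv -o /dev/null https://curl.haxx.se/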

Editing the site

We will keep editing and maintaining the site like before. It is made up of an old system with templates and include files that generate mostly static web pages. The site is mostly available on GitHub, and using that you can build a local version for development and for trying out changes before they land.

Hopefully, this move to Fastly will only make the site faster and more reliable. If you notice any glitches or experience any problems with the site, please let us know!

This post was not bought

At times I post blog articles that push the view counter up to and beyond 50,000 views. This puts me in a position where I get offers from companies to mention them or to “cooperate” on further blog posts that would somehow push their agenda or businesses.

I also get the simpler offers of adding random ads or “text only information” to specific individual pages on my sites that some SEO person out there has figured could attract an audience searching for specific terms.

I’ve even gotten an offer from a company that wanted to buy my server logs. Allegedly to help them work on anti-fraud, so possibly for a good cause, but still…

This is by no measure a “big” blog or site, yet I get a steady stream of individuals and companies offering me money to give up a piece of my soul. I can only imagine what more popular sites get, and it is clear that someone with a less strict standpoint than mine could easily make an extra income that way.

I turn down all those examples of “easy money”.

I want to be able to look you, my dear readers, straight in the eyes when I say that what’s written here are my own words and the opinions expressed are my own – even if of course you may not agree with me, and I may make mistakes and be completely wrong at times, or even many times. You can rest assured that I made those mistakes on my own and was not paid by anyone to make them.

I’ve also removed ads from most of my sites and I don’t run external analytics scripts, minimizing the privacy intrusions and optimizing the contents: the stuff downloaded from my sites is what your browser needs to render the pages. Not heaps of useless crap to show ads or to help anyone track you (in order to show more targeted ads).

I don’t judge others’ actions based on how I decide to run my blog. I’m in a fortunate position to take this stand, I realize that.

Still biased of course

This all said, I’m still employed by a company (Mozilla) that pays my salary, and I work on several projects that are dear to me, so of course I will show bias on some subjects. I don’t claim to have an objective view on things and I don’t even try to. When I write posts here, they come colored by my background and by who I am.

Blog refresh

Dear reader,

If you have visited my blog in the past and are seeing this, you will have noticed a pretty significant change in appearance that happened here the other day.

When I kicked off my blog here on the site back in August 2007 and moved my blogging from Advogato to self-hosting, I installed WordPress, and I’ve been happy with it from a usability standpoint ever since. I crafted a look based on an existing theme and left it at that.

Over time, WordPress has had its hefty share of security problems, and I’ve suffered from them myself a couple of times, ending up patching it manually more than once. At one point, when I decided to bite the bullet and upgrade to the latest version, the upgrade no longer worked, so I postponed it for later.

Time passed, I tried again without success and then more time passed.

I finally fixed the issues I had with upgrading. After a fair amount of manual fiddling I managed to upgrade to the latest WordPress, and in doing so my old theme was deemed broken/incompatible, so I threw it out and started fresh with a new theme. This new one is based on one of the simple default themes WordPress ships for free. I’ve mostly just made it slightly wider and tweaked the looks somewhat. I don’t need fancy. Hopefully I’ll be able to keep up with WordPress better this time.

Additionally, I added a captcha that forces users to solve an easy math problem before submitting anything to the blog, to help me fight spam – and perhaps even more to solve a problem I had with spambots creating new users. Yesterday I removed over 3,300 users who never posted anything that was accepted.

Enjoy. Now back to our regular programming!

Me in numbers, today

Number of followers on twitter: 1,302

Number of commits during the last 365 days at github: 686

Number of publicly visible open source commits counted by openhub: 36,769

Number of questions I’ve answered on stackoverflow: 403

Number of connections on LinkedIn: 608

Number of days I’ve committed something in the curl project: 2,869

Number of commits by me, merged into Mozilla Firefox: 9

Number of blog posts on daniel.haxx.se, including this: 734

Number of friends on Facebook: 150

Number of open source projects I’ve contributed to, openhub again: 35

Number of followers on Google+: 557

Number of tweets: 5,491

Number of mails sent to curl mailing lists: 21,989

TOTAL life achievement: 71,602

“haking”

(This is an authentic email we received at Haxx the other day. Names, emails and URLs are replaced in this excerpt to save the innocent)

Date: Thu, 29 Nov 2012 14:59:25
Subject: haking

hello, can you tell me how to hack into web site:
[FIRST URL]
so it is showing:

[OTHER URL]
when you click on a link in google results?

for example if you click on a google result:
[URL to a google.rs search for something on the FIRST URL site]

the point is i would like to protect my web site form that kind of attack so please let me know how to do that

how did i found you? there is your address at [FIRST URL]/coockies.txt so i think you did it, but was polite enough to leave address.. please help me.

Of course I was curious enough to check the “coockies.txt” file, and the beginning of that file looked like this:

# Netscape HTTP Cookie File
# http://curlm.haxx.se/rfc/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.
[FIRST URL] FALSE	/	FALSE	0	PHPSESSID	dfn1a5ll0hs8odpfh3p2qtlcj3

This tells us a few trivial things, all of which might not be obvious to the untrained eye:

  • The file was generated by a libcurl version 7.16.0 or later, but no later than 7.18.3, as we only used that URL in the file between those releases.
  • The spelling of that cookie file name is so hilarious we can guess it wasn’t a native English speaker who named it. The subject of the email is similarly bad, so perhaps the sender was from Serbia? (The TLD of the google URL was .rs, after all.)
  • The person doing this didn’t even try to clean up the remaining junk file(s) afterwards
  • The guy sending me the email is completely in the dark about what has happened, who he’s contacting, and what my relation to all this is.
  • The world can be a harsh and cruel place and it isn’t easy to know your way around all of it…
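
For completeness, a cookie jar like the one above is what curl writes when you ask it to save received cookies to a file. A minimal sketch, with example.com standing in for any site:

# -c (--cookie-jar) tells curl to save all cookies it received into the named file
curl -c cookies.txt https://example.com/

# -b (--cookie) sends them back on a later request
curl -b cookies.txt https://example.com/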

this vs that and ssh through proxy

Taken from the web stats for daniel.haxx.se during September 2012. The top-10 search phrases used to end up on a page on this site:

  1. ssh proxy (198)
  2. curl vs wget (145)
  3. ftp vs http (92)
  4. wget vs curl (91)
  5. ssh through proxy (72)
  6. http vs ftp (67)
  7. curl wget (55)
  8. wget curl (53)
  9. http ftp (46)
  10. difference between ftp and http (45)

The top-3 most visited pages on my site during the same month were:

  1. SSH Through or Over Proxy (viewed 4800 times)
  2. curl vs Wget (viewed 3000 times)
  3. FTP vs HTTP (viewed 2300 times)

I guess this tells me something. I’m not sure what…

a 20 to 1 spam to comment ratio

It has been a little over 1,500 days since I started this (WordPress’ed) version of my blog. During this time I’ve posted entries, people have submitted comments, and most of all there have been spammers posting “comments”.

During these 1,500 days I’ve posted over 600 blog entries – roughly one entry every 2.5 days. We can see that my visitors aren’t that talkative in comparison, as I’ve received some 550 comments in total on my blog posts.

10,000 spam comments have been submitted. That means roughly 20 times more spam than legitimate comments. The world can indeed be a sad place at times! 🙁

11 years of me

On May 11th 2000 I posted my first blog entry that is still available online, on advogato.org. Unsurprisingly, it was curl-related.

The full post was:

I was made aware of the fact that curl is not really dealing well with the directory part of an ftp URL.

I was gonna quote the appropriate text piece from RFC1738 (yes, it is obsoleted by RFC2396 although 1738 has more detailed info about particular protocols like ftp) to someone when I noticed that I had interpreted it wrong when I read it before.

The difference between getting a file relative the login directory or with absolute path. It turns out you have to get a path like ftp.site.com/%2ftmp/ if you want have the absolute path “/tmp”. Oh well, I have it support my old way as well even if that isn’t following the RFC just to allow people using that way to be able to use the new one unmodifed…

… which I guess proves that even though lots of time has passed, I still occupy myself with the same kind of hobbies and side-projects…
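
For the record, the RFC 1738 way to ask for an absolute path over FTP is to percent-encode the leading slash as %2F, so the path is taken from the server’s root rather than the login directory. A small sketch with a placeholder host:

# path interpreted relative to the ftp login directory (the default)
curl ftp://ftp.example.com/tmp/file.txt

# absolute path /tmp on the server, using the %2F escape for the leading slash
curl "ftp://ftp.example.com/%2Ftmp/file.txt"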