What distro do you use for your servers?

communism@lemmy.ml to Linux@lemmy.ml – 88 points –

I've only ever used desktop Linux and don't have server admin experience (unless you count hosting Minecraft servers on my personal machine lol). Currently using Artix and Void for my desktop computers as I've grown fond of runit.

I'm going to get a VPS for some personal projects and am at the point of deciding what distro I want to use. While I imagine that systemd is generally the best for servers due to the far more widespread support (therefore it's better for the stability needs of a server), I have a somewhat high threat model compared to most people so I was wondering if maybe I should use something like runit instead which is much smaller and less vulnerable. Security needs are also the reason why I'm leaning away from using something like Debian, because how outdated the packages are would likely leave me open to vulnerabilities. Correct me if I'm misunderstanding any of that though.

Other than that I'm not sure what considerations there are to make for my server distro. Maybe a more mainstream distro would be more likely to have the software in its repos that I need to host my various projects. On the other hand, I don't have any experience with, say, Fedora, and it'd probably be a lot easier for me to stick to something I know.

In terms of what I want to do with the VPS, it'll be more general-purpose and hosting a few different projects. Currently thinking of hosting a Matrix instance, a Mastodon instance, a NextCloud instance, an SMTP server, and a light website, but I'm sure I'll want to stick more miscellaneous stuff on there too.

So what distro do you use for your server hosting? What things should I consider when picking a distro?


I love Debian for servers. Super stable. No surprises. It just works. And millions of other people use it as well in case I need to look something up.

And even when I'm lazy and don't update to the latest release oldstable will be supported for years and years.

@bjoern_tantau @communism That 'support for years and years' means security support. So even if the nominal versions stay stable, security fixes are backported. Security scans that only check versions usually give false positives: they think fixes in newer versions are not present when in fact they are.

Many other distros do exactly the same. I only chose Debian because the amount of software already packaged in the distro itself is bigger than any other, barring 3rd party repos.

Debian

This is the way.

Add unattended-upgrades, and never worry about security updates.

I'm using cron to run daily "sudo apt update && sudo apt upgrade -y" LMAO, what's the way to use unattended-upgrades?

Thx

Edit: I'll stay with cron, I believe it's easier to configure.
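For anyone landing here later, the unattended-upgrades route is only a couple of commands; a minimal sketch using the stock Debian paths and defaults:

```shell
# Install and enable with the distro defaults
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# The schedule ends up in /etc/apt/apt.conf.d/20auto-upgrades:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
# Which origins get auto-applied (by default, the security suites)
# is configured in /etc/apt/apt.conf.d/50unattended-upgrades.
```

Compared to a raw cron job, it handles dpkg locks gracefully and only pulls from the origins you allow, rather than blindly applying every upgrade.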

sudo apt install cron
sudo crontab -e

Then add this line:

@daily sudo apt update && sudo apt upgrade -y

Easy peasy..

sudo apt install cron
sudo crontab -e
@daily sudo apt update && sudo apt upgrade -y

I have 20 years of history with the RPM version of this workflow and up to EL6 it was solid like bedrock. Now it's merely solid like a rock, but that's nothing to do with the tools or formats but the payload. And as long as it stays acceptably good, this should do us for another 20 years.

Controlling the supply chain is important too, but that effort is far more scalable.

I run NixOS. It (or something like it, with a central declarative configuration for basically everything on the system) is imo the ideal server distro.

I think I can sense your love/hate relationship with nixos from here :) you are not alone

Very true haha. NixOS is great and the best I've got right now, but I'd be lying if I said it has never been painful.

Especially for desktop use I want to build my own distro which takes a lot from NixOS, mostly in terms of the central configuration but not much else (I definitely want a more sane package installation situation where you don't need stuff like wrapper scripts which are incredibly awful imo), but also other distros, and also with some unconventional things (such as building it around GNUstep). But who knows if that ever gets off the ground, I have way too many projects with enormous scale...

Always, always, always: Debian. It's not even a debate. Ubuntu is a mess for using as a server with their snaps bullshit. Leave that trash on the desktop, it's a mess on a server.

Snaps are meant for server applications but yeah

I tried them by standing up a snap based docker server and it was a nightmare. Never again.

Snaps are meant for server applications

That's a frightening statement. I don't work in secret-squirrel shit these days, but I do private-squirrel stuff, and snaps are just everything our security guys wake up at night to, screaming. Back when I ran security for a company, the entire idea would have been an insta-fuckno. Please, carefully reconsider the choices that put you in a position where snaps are the best answer.

My server is running headless Debian. I run what I can in a Docker container. My experience has been rock solid.
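That setup is easy to sketch; image name here is just an illustrative pick from Docker Hub, and the port/volume choices are assumptions:

```shell
# Headless Debian host, one service per container,
# restarting automatically across reboots.
sudo apt install docker.io
sudo docker run -d --name nextcloud \
  --restart unless-stopped \
  -p 8080:80 \
  -v nextcloud-data:/var/www/html \
  nextcloud
```

The host stays boring and stable while the interesting (and churny) software lives inside the containers.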

From what I understand Debian isn't less secure due to the late updates. If anything it's the opposite.

Always Debian. I'm most comfortable in an environment with apt, and that's even more important on a server

Debian, with a Kubernetes cluster on top running a bunch of Debian & Alpine containers. Never ever Ubuntu.

Never ever Ubuntu

Why's that?

Because Ubuntu is the worst of both worlds. Its packages are both old and unstable, offering zero benefit over always-up-to-date distros like Arch or the standard Debian.

Especially when you're running a containerised environment, there's just no reason to opt for anything other than a stable, boring base OS while your containers can be as bleeding edge, crazy, or even Ubuntu-based as you like.

it’s just less reliable, more corporate, more bloated debian

… so why would you?

I second this. I run fedora on my desktop and debian on the server. Docker works great on debian as well.

I switched mine to NixOS a while ago. It's got a steep learning curve, but it's really nice having the entire server config exist in a handful of files.

uCore spin of Fedora CoreOS:

https://github.com/ublue-os/ucore

  • SELinux
  • Supports secure boot
  • Immutable root partition (can't be tampered with)
  • Rootless Podman (significantly more secure than Docker)
  • Everything runs in containers
  • Smart and secure opinionated defaults
  • Fedora base is very up-to-date, compared to something like Debian
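For anyone unfamiliar, "rootless" means the containers run under an ordinary user account rather than a root daemon; a minimal sketch (the image is just an example):

```shell
# No sudo anywhere: the container runs entirely under your own UID,
# so a container escape lands in an unprivileged account.
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

# Unprivileged users can't bind ports below 1024 by default,
# hence 8080 rather than 80.
podman ps
```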

How did you set up the initial system?
From what I've seen, FCOS needs an Ignition file and has no Anaconda installer. I would like to set it up soon too, but it looked like a huge hassle...

Yes, you need an Ignition file, but you just need to put it on any web-accessible (local) host.

I used a docker one-liner on my laptop to host the server:

docker run -p 5080:80 --name quick-webserver -v "$PWD":/var/www/html php:7.2-apache

And put this Ignition file in the directory I ran the above command from: https://github.com/ublue-os/ucore/blob/main/examples/ucore-autorebase.butane

You could equally put the Ignition file on some other web host you have, or even Github.

That's it, that's the only steps.
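One detail worth adding: the linked file is a Butane config, which gets compiled to Ignition JSON first. The usual way, assuming you have Podman and use the CoreOS project's container image:

```shell
# Compile the Butane config into Ignition JSON, validating it along the way
podman run --rm -i quay.io/coreos/butane:release \
  --pretty --strict < ucore-autorebase.butane > config.ign
```

You then point the installer/VM at the generated config.ign instead of the .butane file.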

If you want atomic Fedora but don't want to deal with the ignition file stuff, check out Fedora IoT.

Thing is, uCore has some very neat things I want, and FIOT doesn't provide me such a great OOTB experience compared to the uBlue variant.


I'm also not sure if I even should decide for Fedora Atomic as a server host OS.

I really love Atomic as desktop distro, because it is pretty close to upstream, while still being stable (as in how often things change).

For a desktop workstation, that's great, because DEs for example get only better with each update, and I want to be as close to upstream as possible, without sacrificing reliability.
The two major releases each year cycle is great for that.

But for a server, even with the more stable kernel, I think that's maybe too unstable? I think Debian is less maintenance, because it doesn't change as often, and also doesn't require rebooting as often.

What's your experience with it?

doesn’t require rebooting as often.

You have to reboot to upgrade to the latest image, so you'll have to get rid of the ideal of uptime with years showing on the clock.

Rebooting is optional, and so far it's been rock solid. Since your workload is all containerised everything just comes up perfectly after a reboot without any intervention.

I think Debian is less maintenance

Arguably that's the best feature of an atomic server. I don't need to perform any maintenance, and I don't need to worry that I've configured it in some way that has reduced my security. That's all handled for me upstream.

Debian has been rock solid for me.

It's not insecure. Quite the contrary: Debian repositories only include packages that have been through extensive testing and have been found secure and stable. And of course it regularly introduces security updates.

It’s not insecure.

There's the inconvenient truth: it's easiest to secure an OS, say for enterprise life, the farther you are from the bleeding edge: churn is lower, the targets move dramatically slower, and testing an install set (as a set) is markedly easier. It's why enterprise linux distros are ALL version-branched at a given version, and only port security fixes in: if you need to change a package and start the extensive testing, keep it to security fixes and similarly drastic reasons.

So most ent-like distros aren't insecure; not at all. Security is the goal and the reason they endure wave after yearly wave of people not understanding why they don't surf that bleeding edge. They don't get it.

Enterprise distros also offer a really stable platform to release stuff on; that was a mantra the sales team used for Open that we'd stress in ISV Engineering too, as we dealt with companies and people porting onto Open. But ISVs had their own inexperienced types for whom the idea of a stable platform that guaranteed a long life to their product with guaranteed compatibility wasn't as valuable as "ooh shiny". But that was the indirect benefit: market your Sybase or ProgressDb on the brand new release and once it's working you don't have to care about library rug-pulls or similar surprises for a fucking decade (or half that as you start the next wave onto the next distro release). And 5 years is a much better cadence than 'every week'.

So while it's easy to secure and support something that never moves, that's also not feasible: you have to march forward. So ent distros stay a little back from the bleeding edge, market 'RHL7' or 'OL31' as a stable LTS distro, and try to get people onto it so they have a better time of it.

Just, now devs have to cope with libs and tools that are, on average, 5 years stale. For some, that's not acceptable. And that's always the challenge.

I just use debian cause it's rock solid and most of what I set up are in containers or VM'S anyways

It’s not conventional wisdom, but I’m happiest with arch.

  • I’m familiar with it
  • can install basically any package without difficulty
  • also love that I never have a gigantic version upgrade to deal with. sure there might be some breaking change out of nowhere, but it’ll show up in my rss feeds and it hits all my computers at the same time so it’s not hard to deal with.
  • Arch never really surprises me because there's nothing installed that I didn't choose to put there.
  • arch wiki

Tempted by nixos but I CBA to learn it.

I agree and use Arch as well, but of course I wouldn't recommend it for everyone. For me, having the same distribution on both server and desktop makes it easier to maintain. I run almost everything using containers on the server and install minimal packages, minimizing my upgrade risk. I haven't had an issue yet, but if I did I have btrfs snapshots and backups to resolve.

same exact setup, I've been running arch for years on both server and desktop, btrfs and containers. It's beautiful and I click perfectly with its maintenance workflow
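The snapshot half of that workflow is cheap to sketch; the subvolume layout below is an assumption, adjust to your own:

```shell
# Read-only snapshot of the root subvolume before a full upgrade;
# assumes / is a btrfs subvolume and /.snapshots already exists.
sudo btrfs subvolume snapshot -r / /.snapshots/root-$(date +%Y%m%d)
sudo pacman -Syu

# If the upgrade breaks something, boot a rescue environment and
# point the root subvolume back at the snapshot.
```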

arch is great if you don’t really care about your server being reliable (eg home lab) but their ethos isn’t really great for a server that has to be reliable… the constant update churn causes issues a lot more than i’d personally like for a server environment

I could not disagree more. Arch is unstable in the sense that it pushes breaking changes all the time (as opposed to something like Ubuntu, where you get hit with them all at once), but that's a very different thing from reliability.

There are no backported patches, no major version upgrades for the whole system, and you get package updates as soon as they are released. Arch packages are minimally modified from upstream, which also generally minimizes problems.

The result has been in my experience outstandingly reliable over many years. The few problems I do encounter are almost always my own fault, and always easily recovered from by rolling back a snapshot.

disagreement is fine, but there was literally a thread about “linux disinformation” where the OP asked for examples of things people say about linux that are untrue

the top answers by FAR are that arch is stable

saying that arch is stable, or easy for newcomers is doing the linux ecosystem a disservice

you should never use arch for a server - arbitrary, rather than controlled and well-tested updates to the bleeding edge is literally everything you want to avoid in a server OS

@pupbiru @traches , I certainly second this. People don't need to become experts in Linux Distros, but they need to know what they want and need from their OS.

If it's browsing and writing word documents, maybe you don't need a constant stream of updates and a stable LTS would suffice. Maybe even a regular 6-month release like Fedora would do. Even Debian would be great, if upgrading is annoying and the newest software isn't really important.

Gaming? There are distros for that.

@pupbiru @traches I have used Arch, I am definitely not new to the Linux scene. I have servers, all my workstations and laptops run it. I professionally write software. I didn't like the Arch experience at all. I would definitely never recommend it to anyone; that's something they can one day decide for themselves.

I’m also not new to the Linux scene, I also run a variety of distros on a variety of machines including servers and I also write software professionally. Arch is fucking great.

@traches , I firmly believe that. It wouldn't be what it is if it didn't do it well. In my opinion, Arch has the best documentation and I use it for other distros. I don't use Arch and wouldn't recommend it to someone new to the scene.

Totally fair, I agree it is definitely not a good first distro. I think everyone should follow the manual setup process the first time and not use archinstall, because it’s the tutorial which teaches you what’s on your system and how it works.

I didn’t say it was stable, I specifically said it was unstable. Because it is. I said arch is reliable, which is a completely different thing.

Debian is stable because breaking changes are rare. Arch is unstable because breaking changes are common. In my personal experience, arch has been very reliable, because said breaking changes are manageable and unnecessary complexity is low.

that’s fair, and i think that in the context that we were both talking about, what we both wrote was reasonably correct

arch is a reliable OS that is sometimes unstable

but a server needs a stable OS to be reliable, which means that whilst arch can be a reliable OS, it does not make a particularly reliable server

NixOS for my homelab that I like to tinker with, Debian as Docker host for the server people actually rely on

I used to use Ubuntu, but nowadays I just go with Debian for servers (as well), but you said you wish to choose something else, so I can't give you any meaningful inputs...

I don't know how real the outdated-packages threat is, but I would assume a server never really wants bleeding-edge software, and Debian usually gets the critical security updates and patches.

But I'm no expert.

It is true that Bookworm is kinda old now, though.

Yeah I agree I don't want bleeding edge hence why I won't be using anything Arch-based (despite the fact that Arch-based systems are the ones I'm most familiar with, I'm typing this on an Artix system rn). But there is definitely a middle ground between bleeding edge and outdated, and I imagine a server should want to be somewhere between the middle and outdated, depending on how they balance stability and security.

I'm also not categorically opposed to using Debian. Ubuntu was my first Linux distro so I'm at least more familiar with Debian-based distros than most other popular server distros. I was just thinking probably not Debian because of how old its packages are and that I'm fairly concerned with security.

Debian runs on most cloud servers, it's pretty secure. The outdated packages refer mostly to apps, which is the reason why Debian is so stable. No frills and boring. Documentation is plenty on the internet and for server space it's probably the most compatible OS.
I'm running Debian 11, kernel 6.10 on Odroid. Arch on my desktop.

CentOS Stream 8. Which I regret. Because they ended support without upgrade path.

I thought you could still go Centos Stream 9?

Anyway, I'm pretty sure almalinux-deploy allows migration from Centos Stream 8... it's your second chance to be done with fickle management decisions from RedHat/IBM: don't miss it this time :)

Centos Stream 9

Nah. It's a bad idea, done poorly, for bad reasons.

I have tons of experience with enterprise linux, so I tend to use Rocky linux. It’s similar to my Fedora daily driver, which is nice, and very close to the RHEL and Centos systems I used to own.

You are slightly mistaken in your assumption that Debian is insecure because of the old packages. Old packages are fine, and not inherently insecure because of their age. I only become concerned about the security implications of a package if it is dual-use/a LOLBin, known to be vulnerable, or has been out of support for some time. The older packages Debian uses, at least things related to infrastructure and hosting, are the patched LTS releases of a project.

My big concerns for picking a distro for hosting services would be reliability, level of support, and familiarity.

A more reliable distro is less likely to crash or break itself. Enterprise linux and Debian come to mind with this regard.

A distro that is well supported will mean quick access to security patches, updates, and more stable updates. It will have good, accurate documentation, and hopefully some good guides. Enterprise linux, Debian and Ubuntu have excellent support. Enterprise linux distros have incredible documentation, and often are similar enough that documentation for a different branch will work fine. Heck, I usually use rhel docs when troubleshooting my fedora install since it is close enough to get me to a point where the application docs will guide me through.

Familiarity is self explanatory. But it is important because you are more likely to accidentally compromise security in an unfamiliar environment, and it’s the driving force behind me sticking with enterprise linux over Nixos or a hardened OpenBSD.

As a fair word of warning, enterprise linux will be pretty different compared to any desktop distro, even fedora. It takes quite a bit of learning, to get comfortable (especially with SELinux), but once you do, things will go smoothly. you can also use a pirated rhel certification guide to learn enterprise linux

If anything, you can simply mess around in a local VM and try installing the tools and services needed before taking it to the cloud.

openSUSE is worth a consideration. More frequent releases than debian, but still pretty conservative

Debian backports security updates to most software, including popular server software. Stable also always uses an LTS kernel, which stays supported upstream. So long as you're using the latest Debian Stable (Bookworm as of this writing), run apt update often (in fact, unattended-upgrades is probably not the worst idea in this case) and do common sense security practices like a firewall and (brain is not working), you should be good.

In brief, it’s totally fine to use Debian and in fact one of the best options in my opinion.
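For the firewall part, a minimal default-deny setup with ufw (in Debian's repos) might look like this; the OpenSSH application profile assumes openssh-server is installed:

```shell
sudo apt install ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH      # keep your SSH session reachable before enabling
sudo ufw enable
sudo ufw status verbose
```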

I have one server running arch and 3 running debian.

So far they are equally stable after running for about half a year.

Autoupdates are turned on on all of them. Which I am aware is against the arch wiki recommendations, but the server is not critical, easy to migrate and has nightly offsite backups anyway.

Debian and Ubuntu server which, barring some differences in versions, are basically the same thing

They're both awesome

Debian!

I've heard good things about Alma Linux.

Also, Ubuntu's not that bad. You'd see this a lot in corporate settings.

You don't wanna use rolling release distros, trust me; the whole point of a server is automation and less maintenance. I've got a couple of personal servers running; after everything I need is set up and running at a decent capacity, I just turn them on and never worry about them. Old packages and software don't necessarily mean less security, quite the opposite actually; I suggest you take a look at how stable distros such as Debian distribute their software. For a Debian package to become stable, it has to go through several stages: experimental, unstable, testing, and finally stable. That's why their packages are old, and because they are old, they are secure. It might be quite the opposite of what you expect.

Mostly i use Debian for my personal servers, some of them are stable and some of them are testing, because of Podman’s new feature Quadlet. Honestly many features of Debian feel really old, like APT’s source list, preferences, and the way to deal with unattended upgrades. It’s kinda hard to get it at first and it’s easy to shoot yourself in the foot, especially many people tend to unintentionally mix and match packages from different suites for new software. But once you get comfortable with it things just work.

In my experience, no matter what distros I use, the worst distros are always the ones I don't understand and am in a hurry to put into production. Just pick one popular server distro and learn the ecosystem; you will find out what distros you like really soon.
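Since Quadlet came up: it turns a small unit file into a systemd-managed container. A minimal rootless sketch (file name and image are illustrative; needs Podman 4.4 or newer):

```shell
# Quadlet generates web.service from this file at daemon-reload time.
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/web.container <<'EOF'
[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user start web.service
```

The nice part is that the container then behaves like any other systemd unit: journald logs, dependency ordering, start-on-boot.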

Yeah, and a key point in why old packages are secure is that versions with serious bugs and vulns don't get to the next stage, and if a package in stable finally does turn out to have one, they'll release a patch for it with just enough changes to fix the serious issue.
There are some exceptions for very complex software; Debian maintainers cannot be expected to understand and see through something like Firefox. There they mitigate it by using ESR releases that are maintained by Mozilla.

Ubuntu LTS. Currently on 22.04.

Servers are the one thing I've generally heard people agree that snaps are good for, so given its history it's a bit of a strange thing to hear of Ubuntu being a better server distro than desktop distro nowadays.

snaps are like poor man’s containers when it comes to servers… maybe better than having single-use VMs but if you’re wanting to build out real systems in a modern way, i literally haven’t worked with anyone using ubuntu in the last ~10 years

Ubuntu here as well! Sticking with just the LTS versions tho 😎

I've been running arch for like 3 years now. Why arch? Because it just works (and it's the only one I have experience with). Maybe I'll try nixos one day.

I currently use Ubuntu for all my machines (desktops, laptops, and servers), but I used to use Void Linux on my machines for about 6 years, including on a couple of VPSes. Since you are familiar with Void Linux, you could stick with that and just use Docker/Podman for the individual services such as Matrix, Mastodon, etc.

In regards to Debian, while the packages are somewhat frozen, they do get security updates and backports by the Debian security team:

https://www.debian.org/security/

There is even a LTS version of Debian that will continue backporting security updates:

https://www.debian.org/lts/

Good luck!

I use nixos, due to the incredible state management. You know exactly what versions of packages are on your machine, and you can build all packages from source yourself or download from a binary cache. 100% reproducible. Steep-ass learning curve, but tbh it's well worth it. Saves you configuration time and energy in the long run; I've stopped distro hopping, the implementation is that good. If you are concerned about security you can definitely harden it. There's a lot more to security than package versions. And even then, nixos gives you the choice.

I use Alpine Linux. It's exceptionally stable, great for pretty much any device and is best for small VPS with limited space/ram. Nice package manager too, but it is limited in packages.

It works great for me since I only use docker containers, but some things outside docker may require something like Debian instead.

Alpine Linux

Alpine is so great for so many reasons. I don't like its packaging format, but its composition otherwise is just top-notch. I'm a huge fan when the one nit isn't an issue. It also avoids cancers like systemd, which makes it a joy to use.

Downvotes for recommending alpine? This is my baffled face.

If you are already familiar with one package manager, pick a distro that also uses that package manager.

When deciding on the release track, the harder it is to recover the system, the more stable the track should be. Stable does not imply secure.

As you move up through virtualization layers, the less stable the track needs to be, allowing access to more recent features.

Steer clear of distros that pride themselves on using musl. It's historically slow and incomplete. Don't buy into the marketing.

Think about IaC. Remote management is a lot more comfortable if you can consider your server ephemeral. You'll appreciate the work on the day you need to upgrade to a new major release of the distro.

Unraid is amazing for getting into servers. It's just the right amount of WebUI and minimalism. Very safe and comfortable defaults, and the ability to start tweaking and adding more.

Had to scroll far to find this hehe, but count me in - Unraid for the win! Great OS and fantastic community 🧡

Personally, I use Rocky Linux on my servers. It’s stable, and has plenty of support since it’s RHEL-based. It’s supported until 2030 or so, and it doesn’t have any of the cloud-init or netplan stuff that Ubuntu Server has.

It’s also pretty simple to set up docker/podman containers, although you need the EPEL for podman-compose and for a lot of other packages, but once you get your setup the way you like it, it just keeps running and running.

Dietpi. For no particular/proper reason other than its (extreme) focus on minimalism.

Love me some dietpi! Was pleasantly surprised by how smart and easy it was to use 🙌

@communism
I use alpine, but void is a good option too; for me the host should be minimal and lightweight. In the end I run everything in containers

Debian but mostly Ubuntu LTS with the free Ubuntu Pro that gives 10-year support. If I get hit by a bus, chances are the self-hosted systems I've set up would continue to work for years till my family can get someone to support or migrate the data. 😅

Ubuntu server, though I am thinking of using arch even though it is a rolling distro. It doesn't really matter. As long as docker is supported, I am fine using any.

I wouldn't personally use Arch on a server. The rolling release could cause a lot of problems, especially since you lack the ability to seamlessly integrate older versions of packages like with gentoo masking.

Do you have a plan on how you’d do version controlling on Arch? It’d be annoying to upgrade, something breaks, and you can’t easily roll back.

you can’t easily roll back.

This has always been a tricky thing to get right, and half the problem is that so many people don't (yet) realize why it's valuable/important. For many, I claim it's a fundamental problem in their packaging choices that makes a roll-back difficult; and even the closest we have - stomping an old package release over the new - is only effective with a perfect replacement of content moving forward and back.

Here's the sad news: no one's done it under linux. The amount of data to convert and revert is daunting. One Unix (not linux) distro did it, but that was around y2k and may have stopped. And they faked it by maintaining configs where the software installation sits and symlinking into the install trees to get binaries from (eg) /usr/install/httpd/1.3.13/sbin/httpd to just /usr/sbin/httpd and /usr/install/httpd/1.3.13/etc/httpd to /etc/httpd -- you get the idea. They'd convert configs upward but 'revert' by just adjusting symlinks. But even here, config and other changes SINCE upgrading would be lost in the downgrade, and that's an issue.

If we ignore configs, one distro was fantastic in upgrading and downgrading so that while they don't roll back, they roll down if needed. Upgrade your entire OS from v4 to v5? Sure. apt-get dist-upgrade. Want to go down from 5 to 4? apt-get dist-downgrade may have been the roll-down command, but I don't remember. But it worked. Ohh, did it work like a magic trick. And they tested the hell out of it too, as it was their one massively cool feature.

Conectiva ran on that platform for years until it ran low on funds, got the great idea to go in with SuSE and others on a united linux, got the same 'shit kicked over the fence' from their SuSE 'partner' like we got, and ran out of money trying to bludgeon this afterbirth into an OS they could sell (they couldn't). I think this dist upgrade and downgrade feature was lost when they joined mandrake to stay alive and keep their people working, but working with SuSE in United Linux may have sapped their spirit like it sapped ours.

In short:

  • roll-back is hard
  • faking it is hard
  • few have faked it
  • packaging format is a huge reason why
  • RPM can roll down packages, but debs and others can't do it as well.
  • YMMV like your goals may vary. It's all good.

I know my comment here is controversial, and I know someone's gonna look at 25 years of history and be all "what a dick for disparaging Stampede Linux like that" and downvote, and that's okay. I honestly don't want to come off like that, but I am biased from working and supporting linux distros professionally, which may not be valuable to some. Again, all good.

I'll just wait a few days or even weeks before doing any big updates, read the news page of archlinux.org and maybe some forum stuff. Nothing has broken so far on my personal laptop, but I also don't tinker a lot. All of the containers' data is also stored in a Storage Box from Hetzner, so the system breaking wouldn't even mean that much; I'd just restore from a snapshot and everything would be fine.

I also might think of switching to NixOS instead. They say it's hard but pays off well and can be very stable.
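For completeness, the crude rollback Arch does give you is reinstalling from the local package cache, assuming the older version is still in it; the package name and version below are hypothetical:

```shell
# pacman keeps previously downloaded packages in its cache;
# installing the old file with -U downgrades in place.
ls /var/cache/pacman/pkg/ | grep openssl
sudo pacman -U /var/cache/pacman/pkg/openssl-3.2.1-1-x86_64.pkg.tar.zst
```

It's per-package and manual, which is exactly why people in this thread lean on filesystem snapshots instead.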

I won't say which one, but I'll give you a hint as to why:

rpm -Vp https://...

It's what got me off Slackware, and it's true today. If the distro can't support that kind of check, it's dead to me.
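For context, rpm -Vp verifies installed files against the metadata of a package, even one fetched over the network. The closest Debian-world analogue I know of is the debsums tool:

```shell
# Verify installed files against the md5sums shipped in their packages;
# --all includes conffiles, --changed prints only files that differ.
sudo apt install debsums
sudo debsums --all --changed
```

It's weaker than RPM's verification (checksums only, no permissions/ownership checks), which is roughly the commenter's point.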

I'm currently using debian with Docker.

If I were to do it again, though, I'd probably just use either fedora or the server equivalent to silverblue (I can't remember the name). I am so heavy on docker use at this point that I wouldn't mind going full immutable.

I use arch on my servers. It's the distro I'm most used to, because I also use it as my daily driver.

Rocky and now moving too OpenSuse leap micro to move into immutable OS deployments.

It's all RKE2 (a k8s distro) on top anyway, so underneath it's just minor mods and base updates; I really want to maximize reproducibility and minimize attack surface.

Arch. With testing repos. And somehow, it also just works.

Mint on the Desktop, FreeBSD on the server. Amazingly stable.

Devuan. If you need stable and you like runit, that's the easiest option.

Debian isn't insecure, because its packages still receive security updates.

Seconding the security point. It's probably riskier to use bleeding-edge distros, because the "old" Debian packages are well cured and don't have a lot of new issues. And as you said, old packages still get security updates, even in Debian.

Been running Debian on my server for 10+ years.

I always use Rocky Linux or Alma Linux, since I have extensive experience with enterprise Linux and RPM packages. I have Fedora on my main desktop computer. Both Rocky Linux and Alma Linux are rock-solid and are ideal for any kind of workload.

Also, Debian is a good choice if you know how to manage DEB packages and you feel comfortable with APT.

Fedora is a good choice if you want fresh packages and are willing to upgrade your server every 6 months (following the Fedora release cycle).

Rocky Linux and Alma Linux follow the same slow release cycle as RHEL, wherein you can install your server and not have to worry for years (as long as the packages are updated with dnf update). Debian is also a slow-release distribution, which makes it good for servers.

and you feel comfortable with APT.

Apt4RPM was a beautiful thing. I wish we still had that as a common tool, as yum and its incapable 'up'grade dnf are just worse and less capable each time. I shudder at the crayon-eating that'll go into whatever 'succeeds' dnf.

I have been using dnf for years, both on desktops and servers, and never had a problem with it. I have the opposite impression: it's getting better with dnf5. I think it's a great tool, and it upgrades not only regular packages but the entire distribution during new releases without any problem. I upgraded my notebook from Fedora 38 to 39 and finally to 40 through dnf, no complaints.
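For reference, the release-to-release jump described above goes through the dnf system-upgrade plugin. A rough sketch follows; the `run` wrapper only echoes each command so the sketch is safe to execute anywhere (drop it to actually run the upgrade), and the release number is just an example target.

```shell
# Fedora release upgrade via dnf-plugin-system-upgrade (sketch).
# 'run' only prints the command; remove it to execute for real.
run() { echo "+ $*"; }

run sudo dnf upgrade --refresh                        # be fully up to date first
run sudo dnf install dnf-plugin-system-upgrade
run sudo dnf system-upgrade download --releasever=40  # example target release
run sudo dnf system-upgrade reboot                    # reboots into the offline upgrade
```

The actual package installation happens offline during that reboot, which is part of why whole-distribution upgrades through dnf tend to go smoothly.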

Red Hat, because it's free for developers and used by a lot of enterprises.

Red Hat, because it’s free for developers

Not really.

and used by a lot of enterprises.

Not really. We're moving to a surprising alternative, but the source for a paid enterprise Linux is drifting away collectively from RedHat. It started with 7 - ironically people choosing a 7 equivalent from a clone, like a paid centos almost, just because they were so pissed at the quality free-fall that began with 7. In short, paying a competitor for their clone of a bad release because they're so pissed at RedHat for making that release. Really weird.

Now that RH is starting to wobble and falter, these also-rans are trying to get into the lead as flagship. If RH post-Lennart can't get its quality back up to EL6 level, the cracks will get noticeable. As they keep on pitching every product under the sun except linux, we worry their focus won't get back to it in time and they'll lose the flag - if not already - to someone else.

It's not SuSE. That combination of Slackware and (I wanna say SLS) is an experience, but not a joy. It seems like a good idea, but their culture is still weird for the west.

I use proxmox and run Debian containers and VMs

openSUSE Leap - YaST is the greatest thing since sliced bread, and works great on command line over SSH. Yes, sometimes installing some software is difficult, but generally most stuff you would want is there and a lot of stuff runs on Docker anyway now. Very stable too, have had nearly zero issues.

Been running Ubuntu LTS releases on all my server VMs for 8 years and haven't had a single problem. Absolutely solid as a rock. Fantastic support, loads of guides to do anything. Plus you can get 10years of support as a home user with a free Ubuntu Pro subscription.
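The free Pro subscription mentioned above is attached with the `pro` CLI from ubuntu-advantage-tools. A minimal sketch, assuming a token from your ubuntu.com/pro account (`TOKEN` is a placeholder); the commands are echoed rather than executed so the sketch runs anywhere.

```shell
# Attaching a machine to a personal Ubuntu Pro subscription (sketch).
# TOKEN stands in for the real token from your ubuntu.com/pro account.
for cmd in \
  "sudo pro attach TOKEN" \
  "pro status" \
  "sudo pro enable esm-apps"; do
  echo "+ $cmd"
done
```

Enabling esm-apps is what extends security updates out to the 10-year window for universe packages.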

I guess you could use something like those new immutable distros to move away from state and related vulnerabilities. TBH there are plenty of hardening guides for Debian.

Or you could use any hardened version of Fedora which gets security fixes quicker, and then harden it some more yourself. The good part about Debian is that you are free to use SysVInit, I do not know if you could do that on Fedora. I do not think Systemd is a massive risk (if they have reached Systemd you have many other, bigger problems to think of).

I think I should study Fedora some more. I run k3s on top and will go through its CIS hardening guide at some point to round things out.

Gentoo because I know my way around it and I'm able to only install stuff that I explicitly want and configured.

I don't have a server, but if I had one I'd probably pick NixOS or some Arch-based distro.