What are your homelab stats?

DeltaTangoLima@reddrefuge.com to Selfhosted@lemmy.world – 81 points –

I just spent a good chunk of today migrating some services onto new docker containers in Proxmox LXCs.

As I was updating my network diagram, I was struck by just how many services, hosts, and LXCs I'm running, so I counted everything up.

  • 116 docker containers
    • Running on 25 docker hosts
    • 50 are the same on each docker host - Watchtower and Portainer agent
  • 38 Proxmox LXCs (19 are docker hosts)
  • 8 physical servers
  • 7 VLANs
  • 5 SSIDs
  • 2 NASes

So, it got me wondering about the size of other people's homelabs. What are your stats?


Dude, are you living in your company's server room?

Lol - not quite. It sounds like a lot, but all of this runs on a couple of HP DL360s, a handful of Raspberry Pis, a nettop box, and a couple of consumer NASes.

"i swear it's not a lot"

Goes on to describe an infrastructure setup comparable to most medium-sized businesses

I love this community!

Well, to be fair, I do use my homelab to play with stuff I may or may not want to use at work. I don't need PEAP auth for wireless, with a separate RADIUS server and Postgres database. But I have it. πŸ˜‰

  • 116 docker containers
  • Running on 25 docker hosts
  • 50 are the same on each docker host - Watchtower and Portainer agent
  • 38 Proxmox LXCs (19 are docker hosts)
  • 8 physical servers
  • 7 VLANs
  • 5 SSIDs
  • 2 NASes

And a partridge in a pear treeeee.

When I read lists like this, I often wonder, what is this person doing with all these containers and such? Do they actually use all of them regularly?

I've got:

1 Proxmox machine serving:

  • OpenMediaVault - 2 shares (Jellyfin media, general SMB shares)
  • Home Assistant
  • Uptime Kuma for monitoring
  • Jellyfin

And some misc VMs for trying out things.

  • 1 Pi 4B - Pi-hole
  • 1 Pi 3A+ - Tailscale subnet router / exit node

I often look at lists of things I can host and think to myself, "do I need this?". This brings me back to huge lists of services like this and my curiosity. Do folks actually interact with all these services regularly? Honest question, no shade intended.

Do folks actually interact with all these services regularly?

In my case, yep. I believe in as much separation between services as possible, so each service essentially resides on its own docker host, whether physical or Linux container.

That said, some of my services are stacks of multiple containers. For example, my DNS service is a pair of Pi-hole DNS servers, each running its own Pi-hole container, but each one also running containers for Cloudflare tunnel and telemetry export to Prometheus.

Immich has a stack of 6 containers, Piped a stack of 5. So, out of the 66 containers (that aren't Portainer agent or Watchtower), it probably condenses down to around half that number (e.g. the 25 docker hosts I have, plus a handful or two of others).
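
The container accounting above can be sketched as a quick back-of-the-envelope check, using only the counts quoted in this thread (116 total, 25 docker hosts, Watchtower + Portainer agent on each):

```python
# Back-of-the-envelope check of the container counts quoted in the post.
total_containers = 116
docker_hosts = 25
agents_per_host = 2  # Watchtower + Portainer agent on every host

agent_containers = docker_hosts * agents_per_host     # 50 infra containers
app_containers = total_containers - agent_containers  # 66 "real" containers

# Stacks collapse the count further: Immich alone is 6 containers, Piped 5,
# so 66 containers is far fewer distinct services than it sounds.
print(agent_containers, app_containers)
```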

each service essentially resides on its own docker host, whether physical or Linux container.

This is the way. Multiple simple dedicated systems is so much easier to maintain than a single "do everything" server.

It's not much, but I've got a little LG netbook with an Atom CPU and 2GB RAM running Pi-hole and Syncthing.

My starting point (with this incarnation of my homelab) was my Asrock ION330 nettop box. Then I discovered Raspberry Pis. Then I decided I needed a couple of HP DL360s. RIP my power bill.

One day when I'm all growed up I want to have a better setup. For now I've got what I absolutely need.

Yep - fair enough. Admittedly, my homelab is as much for professional development as it is home use, but pretty much everything gets used all the time.

How do people get to so many Docker containers before moving to Kubernetes? I only have 76 containers across 68 pods and that's far too much for me to manage in Docker.

Honestly, anything not mission critical (network/internet and home automation, mainly) gets auto-updated by Watchtower. I have Watchtower set to pull the latest images of everything on a weekly basis, with specific containers set to monitor-only. Every Saturday morning, I check the Slack channel for notifications of containers that need controlled updating.

Not really doing much docker, but a lot of LXC - everything scripted with Ansible. I define basic container metadata in a YAML file parsed by a custom inventory plugin - and that is sufficient for deploying a container before doing provisioning inside it.
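
A minimal sketch of that pattern: per-container metadata in a small mapping (in practice parsed from YAML), rendered into the JSON shape Ansible expects from a dynamic inventory source. All the host names and fields here are hypothetical, not from the comment:

```python
# Sketch of a custom-inventory approach: container metadata in, Ansible
# dynamic-inventory JSON out. Names and fields are illustrative only.
import json

# Basic per-container metadata, as it might appear in an inventory YAML.
lxc_metadata = {
    "pihole01": {"vmid": 101, "node": "pve1", "cores": 1, "memory": 512},
    "gitea01":  {"vmid": 102, "node": "pve1", "cores": 2, "memory": 2048},
}

def build_inventory(metadata):
    """Render metadata into the JSON layout of an Ansible dynamic inventory."""
    return {
        "lxc": {"hosts": sorted(metadata)},
        "_meta": {"hostvars": metadata},
    }

if __name__ == "__main__":
    print(json.dumps(build_inventory(lxc_metadata), indent=2))
```

With that much metadata, one play can create the container (e.g. via the Proxmox API) and a second can provision inside it over SSH.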

  • 8 Hosts (6 physical/local, 2 VPS/remote)
  • 72 Docker containers
    • Pi-hole (3 of them, 2 local, 1 on a VPS)
    • Orbital-sync (keeps the pi-holes synced up)
    • Searxng (search engine)
    • Kutt (URL shortener)
    • LenPaste (Pastebin-like)
    • Ladder (paywall bypass)
    • Squoosh (Image converter, runs fully in browser but I like hosting it anyway)
    • Paperless-ng (Document management)
    • CryptPad (Secure E2EE office collaboration)
    • Immich (Google Photos replacement)
    • Audiobookplayer (Audiobook player)
    • Calibre (Ebook management)
    • NextCloud (Don't honestly use this one much these days)
    • VaultWarden (Password/2FA/PassKey management)
    • Memos (Like Google Keep)
    • typehere (A simple scratchpad that stores in browser memory)
    • librechat (Kind of like chatgpt except self-hosted and able to use your own models/api keys)
    • Stable Diffusion (AI image generator)
    • JellyFin (Video streaming)
    • Matrix (E2EE Secure Chat provider)
    • IRC (oldschool chat service)
    • FireFlyIII (finance management)
    • ActualBudget (another finance thing)
    • TimeTagger (Time tracking/invoicing)
    • Firefox Sync (Use my own server to handle syncing between browsers)
    • LibreSpeed (A few instances, for speed testing my connection to the servers)
    • Probably others I can't think of right now

Most of these I use at least regularly, quite a few I use constantly.

I can't imagine living without Searxng, VaultWarden, Immich, JellyFin, and CryptPad.

I also wouldn't want to go back to using the free ad-supported services out there for things like memos, kutt, and lenpaste.


Also librechat I think is underappreciated. Even just using it for GPT with an api key is infinitely better for your privacy than using the free chatgpt service that collects/owns all your data.

But it's also great for using gpt4 to generate an image prompt, sending it through a prompt refiner, and then sending it to Stable Diffusion to generate an image, all via a single self-hosted interface.

How many W are you pulling, on the average? Or kWh per year.

Good question. According to my UPS, I'm pulling about 173W for everything except my pair of HP DL360s. Those each have a couple of 480W PSUs in them, but they're nowhere near running at full tilt, so I can't be sure. I really should get some power measurement going...

You're probably drawing about 400-450 W.

Yeah, seems about right. I'm planning on buying a 32RU rack in the new year - will fit it out with power monitoring PDUs while I'm at it.

For reference: Using dual E5-2630L, DL360/380G8 uses around 130-150 watts average unless something is spiking.
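
Those per-server figures line up with the earlier 400-450 W guess; a quick sketch, combining the 173 W measured at the UPS with the 130-150 W average quoted for a DL360/380 G8:

```python
# Rough total-draw estimate from the figures in this thread.
ups_measured_w = 173        # everything except the two DL360s
dl360_avg_w = (130, 150)    # per-server average for a G8 on dual E5-2630L
n_dl360 = 2

low = ups_measured_w + n_dl360 * dl360_avg_w[0]
high = ups_measured_w + n_dl360 * dl360_avg_w[1]
print(low, high)  # roughly the 400-450 W ballpark guessed above
```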

With a couple of Cisco routers and 4 HP servers, it adds about 150 dollars to my monthly bill. This wouldn't be possible in Europe.

My current supplier rate is about 0.6 EUR/kWh. I make some 1/2 to 2/3 of my power myself, for a price that's less than half of that.

make some 1/2 to 2/3 of my power myself

I'd have to :) That's about 0.66 USD per kWh. Mine is 0.11-0.12 USD (0.10 EUR) - six times cheaper. `Merica
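
The rate comparison spelled out, assuming a rough 1.10 USD/EUR conversion (the exchange rate is an assumption, not from the thread):

```python
# EU vs US electricity rate comparison from the thread.
eur_rate = 0.60       # EUR per kWh (quoted supplier rate)
usd_per_eur = 1.10    # assumed exchange rate
us_rate = 0.11        # USD per kWh

eu_rate_usd = eur_rate * usd_per_eur  # ~0.66 USD/kWh
ratio = eu_rate_usd / us_rate         # ~6x more expensive
print(round(eu_rate_usd, 2), round(ratio))
```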

Insert rant about how our power is probably a large percentage of coal and gas (cheap + super bad)

You've got like a whole DC's worth of stuff. I've downscaled the hardware in my server a lot, but it's still just a single Threadripper 2970WX with 128 GB RAM, 50 TB of ZFS storage, and 50 TB of cloud-based object storage in a midtower case. I have like 20 containers running; one is a Caddy webserver which acts as a reverse proxy for all the others.

I love to do things to excess as much as the next geek, but I could never find a reason to run as much as you have.

Honestly, it's because I like to play. I don't need PEAP auth for my wireless network, but I run a RADIUS server providing MAC and user auth anyway.

I hear ya, the answer to "why?" is usually "because I can" πŸ˜‚

About 8 months ago I had 20x HDDs and 8x NVMe drives in my server, totaling 187 TB across three ZFS pools. I could write to the largest pool (2 striped RAIDZ1 vdevs, 6 drives wide) at 250 MB/sec and read from it at over a GB/sec - and that was from spinning rust with NVMe "special" devices.

What was I doing with all of this? Pirating movies and TV shows and running a media server for my friends and family.

I don't have a homelab (space constraints), but I do have 2 VPSes that I use to host 13 docker containers in total, a mail server, and an XMPP server.

Edit: My lemmy server is also hosted on them.

What I'm more interested in is: what do you self-host to have so many docker containers?

What I'm more interested in is: what do you self-host to have so many docker containers?

Well, lots of services are stacks of containers - Immich has 6 containers and Piped has 5, for example - so it's easy for the container count to get up there.

Other "services" are groups of containers/hosts to provide a complete capability - Home Assistant; esphome; Node-RED, for example. Then there's just the stuff that, due to my desire for loose coupling, are spread across multiple docker hosts/containers - 5 x Sonarr/Radarr instances, for example.

  • 33 nomad jobs, most being containers
  • 12 physical nomad clients
    • 3 amd64 poweredge
    • 2 pi4
    • 6 Nano Pi r5c
    • 1 odroid M1
  • Ceph: (nomad orchestrated)
    • 8 OSD
    • 50TB total raw disk

Ah - I've been meaning to look into Nomad. I have plenty of admiration for Hashicorp's products. How are you finding it?

At my day job, we took a look at nomad and now we are planning to run everything in nomad. It's just so simple to understand and a joy to use.

I believe they changed some of their licensing from the fallout of their IPO. Just worth noting for the selfhosting crowd. I know terraform is being forked entirely, but I'm unfamiliar with the specifics beyond that.

My day job is a lot of kube/openshift, so nomad is refreshing. Having the template blocks is amazing, and it means much of what helm gave me is not required. Parameterized jobs are the best once you find a good use case for them!

A single SFF desktop setup in a Node306. 2700x, 32 GB RAM, Arc A380, some WD reds.

  • Homeassistant & associated packages for esphome and Zwave stuff
  • Jellyfin
  • *arr suite + transmission
  • yacht
  • uptimekuma
  • paperless
  • immich
  • authelia with OIDC SSO for containers where possible
  • traefik for reverse proxy
  • Nextcloud
  • valheim server
  • boinc in the winter
  • syncthing for phone sync
  • more services for keeping up the others

Soon a pihole to come.

I want to expand my smart home setup. My project this spring is integrating my smart gas and electric meters into Home Assistant. We are completely stripping the house, so I am wiring up everything with KNX, with a few Z-Wave devices where needed. Greatly expanding the smartish home.

I also have to set up a proper network. Right now I am using my Proximus Internet Box from the ISP which admittedly is pretty customizable.

Love this! Lot of similarity to what I use - Authelia's awesome, especially paired with a free push 2FA like Duo.

boinc in the winter

Lol. I really doubt an extra watt or two during winter helps, and it's probably not saving much compared to just running it the whole year.

Good post though

Well, consider that going from a 40W idle system to 80-100W is a >100% increase in power draw.

In Belgium we pay 0.30€ per kWh, so running the entire year at an 80W average is approximately 150€ more than idling the entire year. That definitely helps. That's a third the cost of a lawnmower, or a month of groceries.
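
The annual-cost figure worked through: 40-60 W above the 40 W idle baseline, all year, at the quoted 0.30 EUR/kWh:

```python
# Annual electricity cost of extra draw above idle, at Belgian rates.
rate_eur_per_kwh = 0.30
hours_per_year = 24 * 365  # 8760

def annual_cost_eur(extra_watts):
    """Cost of running `extra_watts` above baseline for a full year."""
    kwh = extra_watts * hours_per_year / 1000
    return kwh * rate_eur_per_kwh

low = annual_cost_eur(40)   # 80 W system vs 40 W idle
high = annual_cost_eur(60)  # 100 W system vs 40 W idle
print(round(low), round(high))  # ~105-158 EUR, so "approximately 150€" fits
```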

But in the winter it is an 80-100W small heater that can keep a local area a degree or so warmer.

When you start paying your own power bill it really adds up. I wish I had gone for an intel NUC sometimes.

I see your electric is about 2.2x the cost of mine, so yes that's significant. Was mostly pointing at your net impact to heating in winter, which in your case is only an additional 40-60W from baseline. That's effectively an extra Type A light bulb in your room. This is more of a savings during hot months than effectively heating during cold months.

It really depends on the size of the space. It does a lot more in an 8 m² room than a 20 m² one. There is a reason a 40W incandescent bulb is used to ferment foods like yogurt in an oven: it produces enough heat to keep the whole oven at fermenting temps.

No, I (respectfully) disagree... When I had a tower PC under my desk, I upped Boinc to use ~50% idle CPU (from memory... might've been more) and that would just keep the chill off my office so that I didn't need to heat it (unless it was really cold).

In the Summer I would drop Boinc down to ~25% as it was getting too hot in there.

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
AP WiFi Access Point
DNS Domain Name Service/System
ESXi VMWare virtual machine hypervisor
Git Popular version control system, primarily for code
HTTP Hypertext Transfer Protocol, the Web
LVM (Linux) Logical Volume Manager for filesystem mapping
LXC Linux Containers
MQTT Message Queue Telemetry Transport point-to-point networking
NAS Network-Attached Storage
NUC Next Unit of Computing brand of Intel small computers
PSU Power Supply Unit
PiHole Network-wide ad-blocker (DNS sinkhole)
Plex Brand of media server package
PoE Power over Ethernet
RAID Redundant Array of Independent Disks for mass storage
SSO Single Sign-On
Unifi Ubiquiti WiFi hardware brand
VPN Virtual Private Network
VPS Virtual Private Server (opposed to shared hosting)
ZFS Solaris/Linux filesystem focusing on data integrity
nginx Popular HTTP server

20 acronyms in this thread; the most compressed thread commented on today has 3 acronyms.

[Thread #370 for this sub, first seen 24th Dec 2023, 07:35]

Currently 3 physical boxes, down from 4, and aiming for 2. It pretty well comes down to a hypervisor and a NAS plus the regular aux gear like a switch and modem. They're big boxes though, with about 35 TB storage, 0.5 TB RAM, and 72 cores between them, so lots of space to make imaginary computers in.

Right now my goal is reducing the power footprint. Kill-a-watt places the whole set at 650 watts today and I should knock about 150 off when I get the other box virtualized.

Nice - have you got anything set up to monitor power consumption? I've got a few of those "smart" plugs running on Tuya (localised through Home Assistant) but I'm not 100% convinced of their accuracy just yet...

Just the kill-a-watt plug that the main power block is attached to. The servers have stats visible via the IDRAC (R730XD & R820) to break out for those, but nothing that shows a dashboard or such.

I've found the HP iLOs to be really unreliable for viewing across the network. Something I've been meaning to look into...

Old laptop, Debian with docker running nextcloud, navidrome, jellyfin, gitea, librespeed, wireguard, dnsmasq, and nginx as a reverse proxy.

Dang, how does your ISP feel about that many machines talking out to the internet? Have they made you pay for a business plan yet?

I have a very modest 7 docker containers on a vm on my gaming rig and I have a raspberry pi for my DNS server. Honestly my setup is quite scuffed (in comparison to yours), but it does what I need it to do

Mine's not as fancy as it sounds - a couple second hand enterprise boxes and a handful of Raspberry Pis, mostly.

I'm able to get a lot of gear secondhand through my job, so I've got:

One 2u Intel server running proxmox in a 'cluster' (circa 2013ish. Added RAM and upgraded the CPU/storage.)

One Intel NUC with an i7-7th gen as the other host in the cluster - only one VM is set to fail over between the two if needed.

VMs:

  • Plex
  • 2x PiHoles (one of these is the failover VM) (these also have a few docker containers like Uptime Kuma.)
  • Windows arr box (I know it's blasphemy but I felt more comfortable doing that stuff in Windows)
  • anything else I want to mess with because the server really doesn't run that hard.

Network:

  • Sonicwall TZ 300 (incl a perpetual VPN license)
  • Unifi 24 port switch (it's gigabit and PoE but doesn't output enough power for the...)
  • single Unifi AP.

All acquired over the last couple years for the low low price of "it was going into the trash anyway"

2 Raspberry Pi 4 with a few services running (some directly, some via docker): pihole, pialert, gitlab, plantuml, munin, restic rest server, jupyter instance, airsonic-advanced. And an old Synology NAS which serves as document and media server.

One laptop, 2 SSDs, 4 Proxmox LXCs, 3 docker containers, 2 routers.

  • 3 DL360 G8 ESXi (86GHz/512GB RAM)
  • 1 DL380G8 TrueNAS
  • 1 DL360G7 Veeam
  • Dell n5070 extended PVE, Sophos UTM
  • 48 Port Catalyst rack switch
  • Cisco 2921
  • Fibre Channel / iSCSI

50+ VMs and containers:

  • VMware ESXi, vCenter, VMware Log Insight, VMware OPS
  • DMVPN to remote locations like a desk switch at work and family member houses
  • Sophos UTM
  • Active Directory for my home computers
  • hybrid sync to MS Entra (Azure Active Directory) with Entra Connect
  • hybrid Exchange on Premise and Exchange online
  • Active Directory for management network
  • Security Onion VMs for IDS
  • Network monitoring like Elastiflow, PRTG
  • Docker, gitlab, OpenSalt / Saltstack
  • Trellix ePO for AV
  • Nessus vuln scanners
  • Team Awareness Kit (TAK) server
  • Active Directory Certificate Services
  • Home media applications

These things are mostly to maintain familiarity and documentation development. I write off the cost of electricity as continuing education and professional development. More enterprise than some enterprises.

86GHz

Woah

Most likely the sum of (cores × GHz) for each processor in all servers? While it kind of makes sense, it feels like a much higher clock speed than what I'm used to seeing.

I have a single quad-socket E5-4640 server. I think in terms of having 4 processors with 8 cores at a 2.4 GHz base each; I don't regularly (or ever, for that matter) think in terms of having 76.8 GHz.

DL360 G8s should be single- or dual-socket E5 v2 processors. I can't really math right now (insufficient caffeine), but I can't seem to make the numbers work for one machine, so I'd imagine that's aggregated across all three systems, not individual systems?

Yes, aggregate of all three hosts in cluster, sorry. Dual socket, six cores.
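
That aggregate reconstructs cleanly: three dual-socket hosts with six-core CPUs, assuming a 2.4 GHz base clock (an E5-2630L v2-class part; the exact SKU is an assumption):

```python
# Reconstructing the "86GHz" cluster figure from the reply above.
hosts = 3
sockets_per_host = 2
cores_per_socket = 6
base_ghz = 2.4  # assumed base clock for an E5 v2 low-power part

aggregate_ghz = hosts * sockets_per_host * cores_per_socket * base_ghz
print(aggregate_ghz)  # ~86 GHz, matching the quoted spec
```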

Love it! I'm gonna grab a 32RU rack soon. Got most of my stuff in a small ~14RU wall cabinet right now. I was originally aiming for low power everything - RasPis, etc. But I've since bought a couple DL360s, and you just can't beat the sheer grunt factor, especially when paired with Proxmox.

What are you running in docker

There's a lot not worth mentioning, but broadly...

  • Home automation
    • Home Assistant
    • esphome
    • Node-RED
    • MQTT
    • Frigate
  • Homelab/management
    • 2 x Pi-hole (plus supporting services - Cloudflare tunnel, for example)
    • Grafana
    • Prometheus
    • Shellinabox
    • Forgejo (git)
    • Netbox
    • VScode
  • Media/entertainment
    • 2 x Sonarr
    • 3 x Radarr
    • Calibre
    • Piped
    • Minecraft
    • other supporting *arrs
  • Data
    • Paperless-ngx
    • Immich
  • Social
    • Lemmy
    • Mastodon

I've got an old Dell PowerEdge tower server with dual 6-core Xeons, 128 GB RAM, and 21 TB combined RAID 5 storage.

  • 10 VMs
  • Veeam Backups
  • All behind a Mikrotik RB3011

I run one service per VM because I like being able to nuke the whole thing without bringing down any other services.

You can get some good hardware on eBay if you know what you're looking at. The HDDs and SSDs cost more than the server. Electricity probably runs about $16/mo.
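
That $16/mo implies a plausible average draw for a tower server; a rough sanity check, assuming a US-ish rate of $0.12/kWh (the rate is an assumption, not from the comment):

```python
# Working backwards from the monthly bill to an implied average draw.
monthly_cost_usd = 16
rate_usd_per_kwh = 0.12  # assumed US residential rate
hours_per_month = 730    # average hours in a month

kwh_per_month = monthly_cost_usd / rate_usd_per_kwh  # ~133 kWh
avg_watts = kwh_per_month / hours_per_month * 1000   # ~180 W continuous
print(round(avg_watts))
```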

Biggest problem I've got coming up is what I'm going to do for backups once I exceed Veeam Community Edition's 10-VM limit.

The three most important VMs are Jellyfin (the whole family uses it every day), Paperless-ngx (I use it every day), and Jitsi (the kids use it to video call Grandma and Grandpa). Most of the other stuff is non-essential.

I've pared mine down a lot. The biggest hurdle for me has been storage.

It used to be 5 2u servers running a ceph cluster, but that got to be expensive and unruly.

Now it's mainly a small half depth supermicro for my firewall, a half depth supermicro for home assistant, a 2u Dell for unraid, and a small NAS.

Unraid houses Plex and the *arrs. Along with a handful of other useful services like immich.

I do colo a 1u HP though that houses my PBX, web server, Unifi controller, Jira server, Nextcloud, email, and a bunch of other services that I run.

I've got a lot of spare hardware, though: 7 Dell 1u servers, 2 Dell 2u, a Supermicro 3u, an HP 2u, and a bunch of thin clients that I might turn into replacements for my Rokus.

Wow I am not in your league

I am currently migrating from a dedicated docker host to a proxmox host with multiple LXC containers.

old host - 23 docker containers, 128GB system drive, 4TB data drive

backup server - 1 docker container, 1TB disk

proxmox - 3 LXC containers, one of which has 3 docker containers. 500GB system drive, 4TB media drive (not LVM)

The plan is to migrate the loads on the old host to the proxmox host. I also have another 4TB drive coming with the intent of setting up a RAID with 2 of the 4TB drives.