What are your homelab stats?
I just spent a good chunk of today migrating some services onto new docker containers in Proxmox LXCs.
As I was updating my network diagram, I was struck by just how many services, hosts, and LXCs I'm running, so I counted everything up.
- 116 docker containers
- Running on 25 docker hosts
- 50 of those are the same two containers on each docker host - Watchtower and Portainer agent
- 38 Proxmox LXCs (19 are docker hosts)
- 8 physical servers
- 7 VLANs
- 5 SSIDs
- 2 NASes
So, it got me wondering about the size of other people's homelabs. What are your stats?
Dude, are you living in your company's server room?
Lol - not quite. It sounds like a lot, but all of this runs on a couple of HP DL360s, a handful of Raspberry Pis, a nettop box, and a couple of consumer NASes.
"i swear it's not a lot"
Goes on to describe an infrastructure setup comparable to most medium-sized businesses.
I love this community!
Well, to be fair, I do use my homelab to play with stuff I may or may not want to use at work. I don't need PEAP auth for wireless, with a separate RADIUS server and Postgres database. But I have it.
And a partridge in a pear treeeee.
Lol - Merry Christmas, my anonymous friend.
When I read lists like this, I often wonder, what is this person doing with all these containers and such? Do they actually use all of them regularly?
I've got:
1 proxmox machine serving:
- Openmediavault - 2 shares (jellyfin media, general SMB shares)
- Homeassistant
- Uptimekuma for monitoring
- Jellyfin
And some misc VMs for trying out things.
1 pi4b - pihole
1 pi3a+ - tailscale subnet router / exit node
I often look at lists of things I can host and think to myself, "do I need this?". That brings me back to huge lists of services like this one, and my curiosity. Do folks actually interact with all these services regularly? Honest question, no shade intended.
In my case, yep. I believe in as much separation between services as possible, so each service essentially resides on its own docker host, whether physical or Linux container.
That said, some of my services are stacks of multiple containers. For example, my DNS service is a pair of Pi-hole DNS servers, each running their own Pi-hole container, but each one also running containers for Cloudflare tunnel and telemetry export to Prometheus.
Immich has a stack of 6 containers, Piped a stack of 5. So, out of the 66 containers (the ones that aren't Portainer agent or Watchtower), it probably condenses down to around half that number (e.g. the 25 docker hosts I have, plus a handful or two of others).
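To make that concrete, here's roughly what one node of the Pi-hole stack looks like as a compose file. This is a sketch only: the images are the commonly used ones (pihole/pihole, cloudflare/cloudflared, ekofr/pihole-exporter), and the env vars and token are placeholders rather than my real config.

```yaml
# Illustrative sketch of one Pi-hole stack node - not my exact setup.
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"   # web admin UI
    environment:
      TZ: "Pacific/Auckland"
    restart: unless-stopped

  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      TUNNEL_TOKEN: "<placeholder-tunnel-token>"
    restart: unless-stopped

  pihole-exporter:
    # Exposes Pi-hole stats on :9617 for Prometheus to scrape
    image: ekofr/pihole-exporter:latest
    environment:
      PIHOLE_HOSTNAME: pihole
    ports:
      - "9617:9617"
    restart: unless-stopped
```

Multiply something like that by two DNS servers and you can see how the container count climbs fast.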
This is the way. Multiple simple dedicated systems is so much easier to maintain than a single "do everything" server.
It's what docker and Proxmox were born to do!
Regularly? Probably not - it just depends. If you only spin things up to set something up or to learn, then no. https://infosec.pub/comment/5234431
It's not much, but I've got a little LG netbook with an Atom CPU and 2GB RAM running Pi-hole and Syncthing.
My starting point (with this incarnation of my homelab) was my Asrock ION330 nettop box. Then I discovered Raspberry Pis. Then I decided I needed a couple of HP DL360s. RIP my power bill.
One day when I'm all growed up I want to have a better setup. For now I've got what I absolutely need.
Yep - fair enough. Admittedly, my homelab is as much for professional development as it is home use, but pretty much everything gets used all the time.
How do people get to so many Docker containers before moving to Kubernetes? I only have 76 containers across 68 pods and that's far too much for me to manage in Docker.
Honestly, anything not mission critical (network/internet and home automation, mainly) gets auto-updated by Watchtower. I have Watchtower set to pull latest images of everything on a weekly basis, and specific containers that are set to monitor only. Every Saturday morning, I check the Slack channel for notifications of containers that need controlled updating.
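For reference, the relevant Watchtower knobs look something like this. It's a sketch - the schedule and webhook are placeholders - but `WATCHTOWER_SCHEDULE`, the Slack notification variables, and the monitor-only label are real Watchtower options.

```yaml
# Sketch of the Watchtower setup described above.
services:
  watchtower:
    image: containrrr/watchtower:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      WATCHTOWER_SCHEDULE: "0 0 6 * * 6"   # 6-field cron: 06:00 every Saturday
      WATCHTOWER_CLEANUP: "true"           # remove superseded images
      WATCHTOWER_NOTIFICATIONS: "slack"
      WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL: "https://hooks.slack.com/services/<placeholder>"
    restart: unless-stopped

  # A mission-critical container: Watchtower only notifies, never auto-updates it.
  pihole:
    image: pihole/pihole:latest
    labels:
      com.centurylinklabs.watchtower.monitor-only: "true"
    restart: unless-stopped
```

Anything without the monitor-only label gets pulled and restarted automatically; the labelled ones just show up in Slack for the Saturday-morning review.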
Not really doing much docker, but a lot of LXC - everything scripted with ansible. I define basic container metadata in a yaml parsed by a custom inventory plugin - and that is sufficient for deploying a container before doing provisioning in it.
Most of these I use at least regularly, quite a few I use constantly.
I can't imagine living without Searxng, VaultWarden, Immich, JellyFin, and CryptPad.
I also wouldn't want to go back to using the free ad-supported services out there for things like memos, kutt, and lenpaste.
Also, librechat is underappreciated, I think. Even just using it for GPT with an API key is infinitely better for your privacy than using the free ChatGPT service that collects/owns all your data.
But it's also great for using gpt4 to generate an image prompt, sending it through a prompt refiner, and then sending it to Stable Diffusion to generate an image, all via a single self-hosted interface.
Which program do you mean by this? I'm only familiar with Audiobookshelf.
Ah yeah that's the one, sorry
How many W are you pulling, on the average? Or kWh per year.
Good question. According to my UPS, I'm pulling about 173W for everything except my pair of HP DL360s. Those each have a couple of 480W PSUs in them, but they're nowhere near running at full tilt, so I can't be sure. I really should get some power measurement going...
You're probably drawing about 400-450 W.
Yeah, seems about right. I'm planning on buying a 32RU rack in the new year - will fit it out with power monitoring PDUs while I'm at it.
For reference: Using dual E5-2630L, DL360/380G8 uses around 130-150 watts average unless something is spiking.
With a couple of Cisco routers and 4 HP servers, it adds about 150 dollars to my monthly bill. This wouldn't be possible in Europe.
My current supplier rate is about 0.6 EUR/kWh. I make some 1/2 to 2/3 of my power myself, for a price that's less than half of that.
Insert rant about our power is probably a large percentage of coal and gas (cheap + super bad)
You've got like a whole DC's worth of stuff. I've downscaled the hardware in my server a lot, but it's still just a single Threadripper 2970WX with 128 GB RAM, 50 TB of ZFS storage, and 50 TB of cloud-based object storage, in a midtower case. I have like 20 containers running; one is a Caddy webserver which acts as a reverse proxy for all the others.
I love to do things to excess as much as the next geek, but I could never find a reason to run as much as you have.
Honestly, it's because I like to play. I don't need PEAP auth for my wireless network, but I run a RADIUS server providing MAC and user auth anyway.
I hear ya, the answer to "why?" is usually "because I can" π
About 8 months ago I had 20x HDDs and 8x NVME drives in my server, totaling 187 TB across three ZFS pools. I could write to the largest pool (2 RAIDZ1 striped vdevs, 6 drives wide) at 250 MB/sec and read from it at over a GB/sec and that was from spinning rust with NVME "special devices".
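For anyone curious about the layout, a pool with that topology is declared along these lines. The device names are placeholders, not my actual command:

```shell
# Sketch of the big pool: two 6-wide RAIDZ1 vdevs striped together,
# plus a mirrored NVMe "special" vdev holding metadata/small blocks.
# sda..sdl and nvme0n1/nvme1n1 are placeholder device names.
zpool create tank \
  raidz1 sda sdb sdc sdd sde sdf \
  raidz1 sdg sdh sdi sdj sdk sdl \
  special mirror nvme0n1 nvme1n1
```

The special vdev is what makes reads off spinning rust feel so quick - metadata lookups never touch the HDDs.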
What was I doing with all of this? Pirating movies and TV shows and running a media server for my friends and family.
I don't have a homelab (space constraints), but I do have 2 VPSes that I use to host 13 docker containers in total, plus a mail server and an XMPP server.
Edit: My lemmy server is also hosted on them.
What I'm more interested in is: what is it that you self-host to have so many docker containers?
Well, lots of services are stacks of containers - Immich has 6 containers and Piped has 5, for example - so it's easy for the container count to get up there.
Other "services" are groups of containers/hosts to provide a complete capability - Home Assistant; esphome; Node-RED, for example. Then there's just the stuff that, due to my desire for loose coupling, are spread across multiple docker hosts/containers - 5 x Sonarr/Radarr instances, for example.
I have a NAS and it runs deluge to download torrents, and hosts two very basic websites.
Ah - I've been meaning to look into Nomad. I have plenty of admiration for Hashicorp's products. How are you finding it?
At my day job, we took a look at nomad and now we are planning to run everything in nomad. It's just so simple to understand and a joy to use.
I believe they changed some of their licensing from the fallout of their IPO. Just worth noting for the selfhosting crowd. I know terraform is being forked entirely, but I'm unfamiliar with the specifics beyond that.
My day job is a lot of kube/openshift so nomad is refreshing. Having the template blocks are amazing and makes it so that much of what helm gave me is not required. Parameterized jobs are the best once you find a good use case for them!
I've got one headless cheap desktop PC sitting under my desk.
A single SFF desktop setup in a Node306. 2700x, 32 GB RAM, Arc A380, some WD reds.
Soon a pihole to come.
I want to expand my smart home setup. My project this spring is integrating my smart gas and electric meters into Home Assistant. We are completely stripping the house, so I am wiring up everything with KNX, with a few Z-Wave devices where needed. Greatly expanding the smart(ish) home.
I also have to set up a proper network. Right now I am using my Proximus Internet Box from the ISP which admittedly is pretty customizable.
Love this! Lot of similarity to what I use - Authelia's awesome, especially paired with a free push 2FA like Duo.
*arr suite?
Lol. I really doubt an extra Watt or two during winter helps, and you're probably not saving much compared to just running it the whole year.
Good post though
Well, consider that going from a 40W idle system to 80-100W is a >100% increase in power draw.
In Belgium we pay €0.30 per kWh, so running at an 80W average for the entire year makes roughly a €150 difference compared to idling all year. That definitely helps. That's 1/3 the cost of a lawnmower, or a month of groceries.
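The arithmetic, if anyone wants to check it: kW × 8760 hours per year × rate. A quick sketch (the helper function is just for illustration, using the €0.30/kWh rate above):

```shell
# Annual electricity cost of a constant draw: kW * 8760 h * EUR/kWh.
cost_per_year() {  # args: watts rate_in_eur_per_kwh
  awk -v w="$1" -v r="$2" 'BEGIN { printf "%.0f", w / 1000 * 8760 * r }'
}
echo "40W idle: EUR $(cost_per_year 40 0.30)/year"    # EUR 105
echo "100W avg: EUR $(cost_per_year 100 0.30)/year"   # EUR 263
```

The ~€150 figure is the gap between those two.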
But in the winter it's an 80-100W small heater that can keep a local area a degree or so warmer.
When you start paying your own power bill it really adds up. I wish I had gone for an intel NUC sometimes.
I see your electric is about 2.2x the cost of mine, so yes that's significant. Was mostly pointing at your net impact to heating in winter, which in your case is only an additional 40-60W from baseline. That's effectively an extra Type A light bulb in your room. This is more of a savings during hot months than effectively heating during cold months.
It really depends on the size of the space. It does a lot more in a room of 8 m² than 20 m². There's a reason a 40W incandescent bulb is used to ferment foods like yogurt in an oven: it produces enough heat to keep the whole oven at fermenting temps.
No, I (respectfully) disagree... When I had a tower PC under my desk, I upped Boinc to use ~50% idle CPU (from memory... might've been more) and that would just keep the chill off my office so that I didn't need to heat it (unless it was really cold).
In the Summer I would drop Boinc down to ~25% as it was getting too hot in there.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:
20 acronyms in this thread; the most compressed thread commented on today has 3 acronyms.
[Thread #370 for this sub, first seen 24th Dec 2023, 07:35]
Currently 3 physical boxes, down from 4 and aiming for 2. It pretty well comes down to a hypervisor and a NAS, plus the regular aux gear like a switch and modem. They're big boxes though, with about 35 TB storage, 0.5 TB RAM, and 72 cores between them, so lots of space to make imaginary computers in.
Right now my goal is reducing the power footprint. Kill-a-watt places the whole set at 650 watts today and I should knock about 150 off when I get the other box virtualized.
Nice - have you got anything set up to monitor power consumption? I've got a few of those "smart" plugs running on Tuya (localised through Home Assistant), but I'm not 100% convinced of their accuracy just yet...
Just the kill-a-watt plug that the main power block is attached to. The servers have stats visible via the IDRAC (R730XD & R820) to break out for those, but nothing that shows a dashboard or such.
I've found the HP iLOs to be really unreliable for viewing across the network. Something I've been meaning to look into...
Old laptop, Debian with docker running nextcloud, navidrome, jellyfin, gitea, librespeed, wireguard, dnsmasq, and nginx as a reverse proxy.
Dang, how does your ISP feel about that many machines talking out to the internet? Have they made you pay for a business plan yet?
Lol - I'm on unlimited 1Gbps fibre here. So far, they haven't raised any concerns.
That's awesome, best of luck it stays that way!
I have a very modest 7 docker containers on a vm on my gaming rig and I have a raspberry pi for my DNS server. Honestly my setup is quite scuffed (in comparison to yours), but it does what I need it to do
Mine's not as fancy as it sounds - a couple second hand enterprise boxes and a handful of Raspberry Pis, mostly.
I'm able to get a lot of gear secondhand through my job, so I've got:
One 2u Intel server running proxmox in a 'cluster' (circa 2013ish. Added RAM and upgraded the CPU/storage.)
One Intel nuc with an i7-7th gen as the other host in the cluster - only one VM is set to fail over between the two if needed.
VMs:
Network:
All acquired over the last couple years for the low low price of "it was going into the trash anyway"
Nice! There's nothing better than finding new life for old tech.
2 Raspberry Pi 4s with a few services running (some directly, some via docker): pihole, pialert, gitlab, plantuml, munin, restic rest server, a jupyter instance, airsonic-advanced. And an old Synology NAS which serves as a document and media server.
One laptop, 2 ssd, 4 Proxmox lxcs, 3 docker containers, 2 routers.
50+ VMs and containers:
These things are mostly to maintain familiarity and documentation development. I write off the cost of electricity as continuing education and professional development. More enterprise than some enterprises.
Woah
Most likely some sum of (cores x GHz) for each processor in all servers? While it kind of makes sense, it feels like a much higher clock speed than what I'm used to seeing.
I have a single quad-socket E5-4640 server; I think in terms of having 4 processors with 8 cores at a 2.4GHz base each. I don't regularly (or ever, for that matter) think in terms of having 76.8GHz.
360G8s should be single or dual-socket E5 v2 processors. I can't really math right now (insufficient caffeine), but I can't seem to make the numbers work, so I'd imagine that to be an aggregate across all three systems, not individual systems?
Yes, aggregate of all three hosts in cluster, sorry. Dual socket, six cores.
Love it! I'm gonna grab a 32RU rack soon. Got most of my stuff in a small ~14RU wall cabinet right now. I was originally aiming for low power everything - RasPis, etc. But I've since bought a couple DL360s, and you just can't beat the sheer grunt factor, especially when paired with Proxmox.
What are you running in docker?
There's a lot not worth mentioning, but broadly...
That's a lot. Impressive
Mine's pretty moderate in comparison to yours lol
I've got an old Dell PowerEdge tower server with dual 6-core Xeons, 128 GB RAM, and 21 TB combined RAID 5 storage.
I run one service per VM because I like being able to nuke the whole thing without bringing down any other services.
You can get some good hardware on eBay if you know what you're looking at. The HDDs and SSDs cost more than the server. Electricity probably runs about $16/mo.
Biggest problem I've got coming up is what I'm going to do for backups once I exceed Veeam Community Edition's 10-VM limit.
The three most important VMs are Jellyfin (the whole family uses it every day), Paperless-ngx (I use it every day), and Jitsi (the kids use it to video call Grandma and Grandpa). Most of the other stuff is non-essential.
I've pared mine down a lot. The biggest hurdle for me has been storage.
It used to be 5 2u servers running a ceph cluster, but that got to be expensive and unruly.
Now it's mainly a small half depth supermicro for my firewall, a half depth supermicro for home assistant, a 2u Dell for unraid, and a small NAS.
Unraid houses Plex and the *arrs. Along with a handful of other useful services like immich.
I do colo a 1U HP though, which houses my PBX, web server, UniFi controller, Jira server, Nextcloud, email, and a bunch of other services that I run.
I've also got a lot of spare hardware though: 7 Dell 1U servers, 2 Dell 2Us, a Supermicro 3U, an HP 2U, and a bunch of thin clients that I might turn into replacements for my Rokus.
Wow I am not in your league
I am currently migrating from a dedicated docker host to a proxmox host with multiple LXC containers.
- old host: 23 docker containers, 128GB system drive, 4TB data drive
- backup server: 1 docker container, 1TB disk
- proxmox: 3 LXC containers, one of which has 3 docker containers; 500GB system drive, 4TB media drive (not LVM)
The plan is to migrate the loads on the old host to the proxmox host. I also have another 4TB drive coming with the intent of setting up a RAID with 2 of the 4TB drives.
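If it ends up being ZFS (one option on Proxmox, which ships with it; mdadm would work too), mirroring the two 4TB drives is a one-liner. Device names below are placeholders:

```shell
# Create a ZFS mirror (RAID1 equivalent) from the two 4TB drives.
# ashift=12 aligns the pool to 4K sectors; sdb/sdc are placeholders.
zpool create -o ashift=12 data mirror /dev/sdb /dev/sdc
```

Either drive can then fail without taking the data with it, at the cost of half the raw capacity.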