What does your current setup look like?

Lemmy@lemm.ee to Selfhosted@lemmy.world – 66 points

Pi4 with 2TB SSD running:

  • Portainer
  • Calibre
  • qBittorrent
  • Kodi

HDMI cable straight to the living room Smart TV (which is not connected to the internet).

Other devices access media (TV shows, movies, books, comics, audiobooks) using VLC over DLNA, except for e-readers, which just use the Calibre web UI.

Main router is flashed with OpenWrt and running a DNS adblocker. Ethernet runs to a 2nd router upstairs and to the main PC. Small WiFi repeater with ethernet in the basement. It's not a huge house, but it does have old thick walls which are terrible for WiFi propagation.

Bad. I have a Raspberry Pi 4 hanging from an HDMI cable going up to a projector, with a 2TB SSD hanging from the Raspberry Pi. I host Nextcloud and Transmission on my RPi and use Kodi for viewing media through my projector.

I only use the highest of grade when it comes to hardware

Case: found in the trash

Motherboard: some random Asus AM3 board I got as a hand-me down.

CPU: AMD FX-8320E (8 core)

RAM: 16GB

Storage: 5x 2TB HDDs + 128GB SSD, and a 32GB flash drive as the boot device

That's it... My entire "homelab"

1) DIY PC (running everything)

  • MSI Z270-A PRO
  • Intel G3930
  • 16GB DDR4
  • ATX PSU 550W
  • 250GB SSD for OS
  • 500GB SSD for data
  • 12TB HDD for backup + media

2) Raspberry pi 4 4GB (running 2nd pihole instance)

Only 2 piHOLES?

Not sure if this is a joke, but I don't see a reason to have more than 2.

Internet:

  • 1G fiber

Router:

  • N100 with dual 2.5G nics

Lab:

  • 3x N100 mini PCs as k8s control plane+ceph mon/mds/mgr
  • 4x Aoostar R7 "NAS" systems (5700u/32G ram/20T rust/2T sata SSD/4T nvme) as ceph OSDs/k8s workers

Network:

  • Hodge podge of switches I shouldn't trust nearly as much as I do
  • 3x 8 port 2.5G switches (1 with poe for APs)
  • 1x 24 port 1G switch
  • 2x omada APs

Software:

  • All the standard stuff for media archival purposes
  • Ceph for storage (using some manual tiering in cephfs)
  • K8s for container orchestration (deployed via k0sctl)
  • A handful of cloud-hypervisor VMs
  • Most of the lab managed by some tooling I've written in go
  • Alpine Linux for everything

All under 120w power usage

How are you finding the Aoostar R7? I've had my eye on it for a while, but there's not much talk about it outside of YouTube reviews.

They've been rock solid so far, even through the initial sync from my old file server (pretty intensive network and disk usage for about 5 days straight). I've only been running them for about 3 months so far though, so time will tell. They are like most mini PC manufacturers with funny names, though; I doubt I'll ever get any sort of BIOS/UEFI update.

I have 5 servers in total. All except the iMac are running Alpine Linux.

Internet

Ziply Fiber 100Mb small business internet. 2x Asus AX82U routers running in AiMesh.

Rack

Raising electronics 27U rack

N3050 NUCs

One is running mailcow, dnsmasq, unbound and the other is mostly idle.

iMac

The iMac is set up by my 3D printers. I use it to do slicing, and I run BlueBubbles on it for texting from Linux systems.

Family Server

Hardware

  • i7-7820X
  • Rosewill rackmount case
  • Corsair water cooler
  • 2x 4TB drives
  • 2x 240GB SSDs
  • Gigabyte motherboard

Mostly doing nothing, currently using it to mine Monero.

Main Cow Server

Hardware

  • R7-3900XT
  • Rosewill rackmount case
  • 3x 18TB drives
  • 2x 1TB NVMe
  • Gigabyte motherboard

Services

  • ZFS 36TB Pool
  • Secondary DNS Server
  • NFS (nas)
  • Samba (nas)
  • Libvirtd (virtual machines)
  • forgejo (git forge)
  • radicale (caldav/carddav)
  • nut (network ups tools)
  • caddy (web server)
  • turnserver
  • minetest server (open source blockgame)
  • miniflux (rss)
  • freshrss (rss)
  • akkoma (fedi)
  • conduit (matrix server)
  • syncthing (file syncing)
  • prosody (xmpp)
  • ergo (ircd)
  • agate (gemini)
  • chezdav (webdav server)
  • podman (running immich, isso, peertube, vpnstack)
  • immich (photo syncing)
  • isso (comments on my website)
  • matrix2051 (matrix to irc bridge)
  • peertube (federated youtube alternative)
  • soju (irc bouncer)
  • xmrig (Monero mining)
  • rss2email
  • vpnstack
    • gluetun
    • qbittorrent
    • prowlarr
    • sockd
    • sabnzbd

Why do you host FreshRSS and MiniFlux if you don't mind me asking?

I kind of prefer Miniflux, but I maintain the FreshRSS package in Alpine, so I keep an instance to test things.

Thank you. I'm looking at sorting an aggregator out and am leaning towards Miniflux

  • An HP ML350p w/ 2x HT 8-core Xeons (forget the model number) and 256GB DDR3 running Ubuntu and K3s as the primary application host
  • A pair of Raspberry Pis (one 3, one 4) as anycast DNS resolvers
  • A random mini PC I got for free from work running VyOS as my border router
  • A Brocade ICX 6610-48p as the core switch

Hardware is total overkill. Software-wise, everything runs in containers, deployed into Kubernetes using helmfile, Jenkins and Gitea.

  • Pico PSU
  • ASRock N100M
  • Eaton 3S Mini UPS
  • 250GB SATA SSD for the OS
  • 4x 4TB SATA SSDs
  • PCIe SATA splitter

All in a small PC Case

Server is running YunoHost

A 13-year-old former gaming computer with 30TB of storage in RAID6 that runs the *arrs, SABnzbd, and Plex. Everything is managed by k3s except Plex.

Also, a 3-node DigitalOcean k8s cluster which runs services that don't need direct access to the 30TB of storage, such as grocy, Jackett, Nextcloud, a SOLID server, and soon a Lemmy instance :)

The Lemmy instance might need access to large storage.

My instance's image cache is like 230GB. Plus a bunch more for the db. Can confirm storage is needed.

(unrelated question 😶 - anyone running pictrs 0.5 on local storage happily?)

Thanks for the heads up.

I plan on using digital ocean's Spaces (s3-alike) where possible and also it's intended to be a personal instance, at least to start - just for me to federate with others and subscribe to my communities. Given that, do you think it'll still use much disk (block device) storage?

Might be time to familiarize myself with DO's disk pricing...

https://pixelfed.social/p/thejevans/664709222708438068

EDIT:

Server:

  • AMD 5900x
  • 64GB RAM
  • 2x10TB HDD
  • RTX 3080
  • LSI-9208i HBA
  • 2x SFP+ NIC
  • 2TB NVMe boot drive

Proxmox hypervisor:

  • TrueNAS VM (HBA PCIe passthrough)

  • HomeAssistant VM

  • Debian 12 LXC as SSH entrypoint and Ansible controller

  • Debian 12 VM with Ansible controlled docker containers

  • Debian 12 VM (GPU PCIe passthrough) with Jellyfin and other services that use GPU

  • Debian 12 VM for other docker stuff not yet controlled by Ansible and not needing GPU

Router: N6005 fanless mini PC, 2.5Gbit NICs, pfsense

Switch Mikrotik CRS 8-port 2.5Gbit, 2-port SFP+

You play games on that server, don't you? 😁

I have a Kasm setup with blender and CAD tools, I use the GPU for transcoding video in Immich and Jellyfin, and for facial recognition in Immich. I also have a CUDA dev environment on there as a playground.

I upgraded my gaming PC to an AMD 7900 XTX, so I can finally be rid of Nvidia and their gaming and wayland driver issues on Linux.

Does Immich require a GPU or can it do facial recognition on CPU alone?

At home - Networking

  • 10Gbps internet via Sonic, a local ISP in the San Francisco Bay Area. It's only $40/month.
  • TP-Link Omada ER8411 10Gbps router
  • MikroTik CRS312-4C+8XG-RM 12-port 10Gbps switch
  • 2 x TP-Link Omada EAP670 access points with 2.5Gbps PoE injectors
  • TP-Link TL-SG1218MPE 16-port 1Gbps PoE switch for security cameras (3 x Dahua outdoor cams and 2 x Amcrest indoor cams). All cameras are on a separate VLAN that has no internet access.
  • SLZB-06 PoE Zigbee coordinator for home automation - all my light switches are Inovelli Blue Zigbee smart switches, plus I have a bunch of smart plugs. Aqara temperature sensors, buttons, door/window sensors, etc.

Home server:

  • Intel Core i5-13500
  • Asus PRO WS W680M-ACE SE mATX motherboard
  • 64GB server DDR5 ECC RAM
  • 2 x 2TB Solidigm P44 Pro NVMe SSDs in ZFS mirror
  • 2 x 20TB Seagate Exos X20 in ZFS mirror for data storage
  • 14TB WD Purple Pro for security camera footage. Alerts SFTP'd to offsite server for secondary storage
  • Running Unraid, a bunch of Docker containers, a Windows Server 2022 VM for Blue Iris, and an LXC container for a Borg backup server.

For things that need 100% reliability like emails, web hosting, DNS hosting, etc, I have a few VPSes "in the cloud". The one for my emails is an AMD EPYC, 16GB RAM, 100GB NVMe space, 10Gbps connection for $60/year at GreenCloudVPS in San Jose, and I have similar ones at HostHatch (but with 40Gbps instead of 10Gbps) in Los Angeles.

I've got a bunch of other VPSes, mostly for https://dnstools.ws/ which is an open-source project I run. It lets you perform DNS lookup, pings, traceroutes, etc from nearly 30 locations around the world. Many of those are sponsored which means the company provides them for cheap/free in exchange for a backlink.

This Lemmy server is on another GreenCloudVPS system - their ninth birthday special which has 9GB RAM and 99GB NVMe disk space for $99 every three years ($33/year).

  • Old Gaming Rig - Proxmox
    • Nextcloud, Immich, Grafana on VMs
  • Old HP ProDesk - FreeIPA
  • NAS - TrueNAS Scale
  • Couple Laptops - Docker Stuff
    • Wireguard, SearXNG, Nginx
  • Raspberry Pi 4 - Home Assistant
  • Rasberry Pi 3A+ - ntfy Docker
  • Very Old Dell - NTP Server
  • Qotom PC - OPNsense
  • Network Devices - OpenWRT
    • Zyxel Wireless APs (3)
    • Netgear R7000 (2)
    • Zyxel 24 and 8 port Switches
  • Gaming Rig - Windows 11 for now
    • Playnite, Sunshine, Jellyfin
  • Another HP ProDesk hopefully running an email server soon
  • UPS

Edit: Formatting

Jesus, you can run more than one piece of software on each bit of hardware....

Why spread out across 12-13 machines? Seems like a huge waste of power, and a whole bunch of extra to maintain.

You're probably right. I mean, I need most of the network devices, and I didn't list everything I'm running on each, just the big things. I do need to consolidate some of them though. It's been a trip and has made me better at IT though.

Also move most services to containers. That's a huge resource saver while maintaining ease of management and separation from the host.

I have a Lenovo TS140 in the laundry room: i3-4330, 16GB RAM, 2TB of SSD, running Arch.

In docker I am running:

Plex, WireGuard, qBittorrent, Pi-hole, my discord bot, nginx, and Teslamate.

Works great, I'm probably going to swap my gaming rig in (5800x + 3080 12GB) with more RAM to host some AI stuff and the same services.

  • Server - Desktop Tower

    • Build - Intel server board & CPU based on an old ServerBuilds NAS Killer guide
      • OS on SSD
      • ZFS on 8x 6TB drives, yielding ~36TB of storage, recoverable with up to two failed drives
    • Runs (via docker)
      • Navidrome (webui used daily @ work, dsub on phone, feishin on desktop)
      • Jellyfin (used almost exclusively locally on my TV, occasionally to watch with friends on web)
      • Nextcloud (used occasionally, mostly backs up password files, etc or to share. Thinking about replacing.)
      • qBittorrent with gluetun VPN
      • Audiobookshelf - used frequently for audiobooks, occasionally for podcasts. Often more convenient to use AntennaPod/Pocket Casts on the phone for active podcasts
      • Kavita - used seldom. Thinking about stopping. I like using OPDS on my rooted Kindle to access my library.
      • Changedetection.io - watches some sites for new products, etc.
      • Kiwix (local Wikipedia copy; I use shortcuts in FF locally to search for things)
      • Homepage (local links I use on local machines to my services)
  • Raspberry pi

    • Adguard home & unbound - block most garbage for any traffic from my home
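The ~36TB usable figure in the ZFS bullet above is just raidz arithmetic: usable space is roughly (drives − parity) × drive size, before filesystem overhead. A quick sketch, illustrative only, ignoring TB-vs-TiB conversion and metadata/slop overhead:

```python
# Rough usable capacity of a raidz vdev: (n_drives - parity) * drive size.
# Ignores TB-vs-TiB conversion, slop space, and metadata overhead.
def raidz_usable_tb(n_drives: int, drive_tb: float, parity: int) -> float:
    if not 1 <= parity <= 3 or n_drives <= parity:
        raise ValueError("parity must be 1-3 and smaller than the drive count")
    return (n_drives - parity) * drive_tb

# 8x 6TB in raidz2: two drives may fail without data loss
print(raidz_usable_tb(8, 6.0, parity=2))  # 36.0
```

The same formula covers other posts in this thread, e.g. 3x 18TB in raidz1 also lands on 36TB usable.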

Thoughts - I'm considering downsizing. I don't really need all that much space, and it can be a headache at times. With drive replacement costs on top of power (~$320 a year), I'm considering either going to a VPS or downsizing to something that could run on a small computer like an N100 or a Raspberry Pi 5.

Look for 5W-idle motherboard + CPU combos which go down to package C6+ states. HardwareLuxx has a spreadsheet with various builds focusing on low power. Sell half your disks and go mirror or RAIDz1; invest the difference in an off-site VPS and/or backup. Storage on any SBC is a big pain and you will hit SATA connector / IO limits very soon.

The small NUC form factors are also fine, but if your problem is power you can go very low with a good approach and the right parts. And you'll make up for any new investments within the first year.
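To put rough numbers on that first-year payback claim, here is a back-of-the-envelope sketch. The electricity rate, the 30W replacement figure, and the $250 build cost are assumptions for illustration, not numbers from the thread:

```python
# Back-of-the-envelope payback for a low-power replacement build.
# Electricity rate, replacement wattage, and build cost are assumed.
HOURS_PER_YEAR = 24 * 365

def annual_cost_usd(watts: float, usd_per_kwh: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * usd_per_kwh

RATE = 0.35             # assumed $/kWh; use your local tariff
OLD_W, NEW_W = 102, 30  # a ~100W server vs a hypothetical N100-class box

saving = annual_cost_usd(OLD_W, RATE) - annual_cost_usd(NEW_W, RATE)
print(f"annual saving: ${saving:.0f}")
print(f"payback on a $250 build: {250 / saving:.1f} years")
```

At these assumed numbers the saving is a bit over $200/year, so a modest new build does pay for itself in roughly a year.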

Thanks! I need to look more into what the power implications of 8 drives are - they never spin down, so I assume they are a non-trivial portion of my power consumption.

That said, I've been considering upgrading to something recent and low-power anyway. It would be a good opportunity to sneak in some useful features too:

  • Maybe the possibility of transcoding a video stream
  • USB3 (not a huge deal)
  • Non-VGA display output (useful for when connection issues arise)
  • Audio jack (I could use navidrome jukebox mode!)

Which the old hardware wouldn't support without adapters, cards, etc.

Responding to myself...

Datasheet reports 7.05W at idle (~11W at active random read), so depending on what it considers idle, the 8 drives draw 8 × 7.05 to 8 × 11 = 56.4-88W.

Server clocks in at ~102W, so halving the drives would reduce the power draw by roughly 28-43%.

And in theory the other components (motherboard, CPU...) must be using anywhere from 102 - 88 = 14W to 102 - 56.4 = 45.6W.
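The ranges above, spelled out as a small calculation; nothing here beyond the datasheet and wall-measurement numbers already quoted:

```python
# Drive power estimate: 8 drives at 7.05W idle to ~11W active random read,
# against a measured ~102W at the wall.
N_DRIVES = 8
IDLE_W, ACTIVE_W = 7.05, 11.0
TOTAL_W = 102.0

lo, hi = N_DRIVES * IDLE_W, N_DRIVES * ACTIVE_W
print(f"drives: {lo:.1f}-{hi:.1f} W")  # 56.4-88.0 W

# Halving the drive count removes four drives' worth of draw
print(f"saving: {4 * IDLE_W / TOTAL_W:.0%}-{4 * ACTIVE_W / TOTAL_W:.0%}")  # 28%-43%

# The remainder is everything else in the box
print(f"rest of system: {TOTAL_W - hi:.1f}-{TOTAL_W - lo:.1f} W")  # 14.0-45.6 W
```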

An old computer running on top of a shelf; whenever I need to work with a display, I have to bring it back down to the floor and borrow a VGA cable from another machine because the HDMI port is broken.

Oh and it occasionally disconnects itself from the internet.

i got a random Dell SFF OptiPlex with 16GB of upgraded RAM and an i5-4690 sitting at the girlfriend's house, because she's the only one with an ISP that still allows public IPs.
It runs Minecraft.

at home i have my old 9yo retired gaming desktop doing seedbox work and mostly just running BOINC to donate compute power to science... and also keep my feet warm lol

yeah. that's it. i really don't do shit even though i totally could.

https://blog.krafting.net/my-first-server-rack/

For a few weeks now, it's been looking like this! (At the bottom there is a complete picture)

Plus an Orange Pi 3 as a DNS/reverse proxy server

Your link is not on HTTPS and is asking me to download a .bin file. Extremely sus.

Edit: link looks good now

What?

The same thing happened to me when I first tried to go there, but it's fine now.

Also prompting me to download a .bin

Okay, so that's a bit concerning... I'd love to get my hands on this "bin" file; I cannot reproduce the issue on my side... Also, the site should be HTTPS only. I had a bug with caching recently that showed the ActivityPub data instead of the blog post; could it be that? Are you on mobile, where the browser cannot show JSON data properly so it tries to download it with a weird name?

I am on Android mobile. Firefox only prompts to download downloadfile.bin. The DuckDuckGo browser actually opens the file contents. I'll post it here; since I'm getting it from a public page, I'm hoping that's okay. This is the content:

{"@context":["https://www.w3.org/ns/activitystreams",{"Hashtag":"as:Hashtag"}],"id":"https://blog.krafting.net/my-first-server-rack/","type":"Note","attachment":[{"type":"Image","url":"https://blog.krafting.net/wp-content/uploads/2024/02/603fb502-9977-461f-92c6-7375055fdec6-min-scaled.jpg","mediaType":"image/jpeg"},{"type":"Image","url":"https://blog.krafting.net/wp-content/uploads/2024/02/20240129_184909-min-scaled.jpg","mediaType":"image/jpeg"},{"type":"Image","url":"https://blog.krafting.net/wp-content/uploads/2024/02/20240129_185338-min-scaled.jpg","mediaType":"image/jpeg"},{"type":"Image","url":"https://blog.krafting.net/wp-content/uploads/2024/02/20240129_193432-min-scaled.jpg","mediaType":"image/jpeg"}],"attributedTo":"https://blog.krafting.net/author/admin/","content":"\u003Cp\u003E\u003Cstrong\u003EMy First Server Rack!\u003C/strong\u003E\u003C/p\u003E\u003Cp\u003E\u003Ca href=\u0022https://blog.krafting.net/my-first-server-rack/\u0022\u003Ehttps://blog.krafting.net/my-first-server-rack/\u003C/a\u003E\u003C/p\u003E\u003Cp\u003E\u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/homelab/\u0022\u003E#homelab\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/management/\u0022\u003E#management\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/networking/\u0022\u003E#networking\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/rack/\u0022\u003E#rack\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/server/\u0022\u003E#server\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 
href=\u0022https://blog.krafting.net/tag/startech/\u0022\u003E#startech\u003C/a\u003E\u003C/p\u003E","contentMap":{"en":"\u003Cp\u003E\u003Cstrong\u003EMy First Server Rack!\u003C/strong\u003E\u003C/p\u003E\u003Cp\u003E\u003Ca href=\u0022https://blog.krafting.net/my-first-server-rack/\u0022\u003Ehttps://blog.krafting.net/my-first-server-rack/\u003C/a\u003E\u003C/p\u003E\u003Cp\u003E\u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/homelab/\u0022\u003E#homelab\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/management/\u0022\u003E#management\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/networking/\u0022\u003E#networking\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/rack/\u0022\u003E#rack\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 href=\u0022https://blog.krafting.net/tag/server/\u0022\u003E#server\u003C/a\u003E \u003Ca rel=\u0022tag\u0022 class=\u0022hashtag u-tag u-category\u0022 
href=\u0022https://blog.krafting.net/tag/startech/\u0022\u003E#startech\u003C/a\u003E\u003C/p\u003E"},"published":"2024-02-05T19:10:19Z","tag":[{"type":"Hashtag","href":"https://blog.krafting.net/tag/homelab/","name":"#homelab"},{"type":"Hashtag","href":"https://blog.krafting.net/tag/management/","name":"#management"},{"type":"Hashtag","href":"https://blog.krafting.net/tag/networking/","name":"#networking"},{"type":"Hashtag","href":"https://blog.krafting.net/tag/rack/","name":"#rack"},{"type":"Hashtag","href":"https://blog.krafting.net/tag/server/","name":"#server"},{"type":"Hashtag","href":"https://blog.krafting.net/tag/startech/","name":"#startech"}],"updated":"2024-02-05T19:22:17Z","url":"https://blog.krafting.net/my-first-server-rack/","to":["https://www.w3.org/ns/activitystreams#Public","https://blog.krafting.net/wp-json/activitypub/1.0/users/1/followers"],"cc":[]}

I can erase the direct post link and then the site loads, but then if I click the post title it loads the text again...

Okay, so the "bin" is actually the ActivityPub data... I don't know why this is still happening... there might be something wrong somewhere, but where...

It's working now. I did reset my router today to get IPv6 working for me, so I'm unsure if that changed anything or it was on your end, but FYI.

My blog doesn't have IPv6 sadly... and I did nothing, the issue just seems to appear randomly... I did nothing yesterday so yeah...



I'm not really a networking expert so I can't make too good of a guess as to what happened. I'm on the latest Firefox mobile release on Android and was accessing from a Colorado IP. When I originally tried the site, nothing was rendered. It was a blank page or just a redirect for download. I didn't download the .bin. I clicked your link twice before sending my message.

Well, thanks for the follow-up anyway, I did some tweaks, and I hope it won't happen again... I'll see.


It's a work in progress, but https://wiki.gardiol.org (which is OFC self-hosted)

Anyway, a beefy HP laptop with 32GB RAM and a Xeon CPU runs all services. 3x RAID-1 (Linux sw raid) USB3 volumes host all services and data.

Two ISPs: Vodafone FVA 5G (data capped) for general browsing and Fastweb FTTC (low speed but uncapped) for backup access and torrent/Usenet downloads.

Gentoo Linux all the way, and podman, but limited as much as possible: only Immich (which is impossible to host on bare metal due to the devs' questionable choices).

Services: WebDAV/webcal/etc wiki, more stuff, arrs, immich, podfetch, and a few more.

All behind nginx reverse proxy.

99% bare metal.

Self developed simple dashboard

External access via ssh tunnels to vps

That public wiki gives me the security heebie-jeebies. 🤭

The service runs as an unprivileged user, so at worst an intruder could delete or replace the wiki itself. Even the php-fpm behind it runs as that unprivileged user and is not shared with any other service.

I doubt an attacker could do anything worse than DoS on the wiki itself.

Why?

Not saying it's not secure, just that I'd have constant doubts about whether I've covered all the bases if I were doing it. Especially ensuring an intruder can't compromise anything else if they take it over via some security exploit in PHP or DokuWiki itself.

Bit of a mess right now. AMD Ryzen 5800X with a 6800 XT, yr gigs of RAM, running Ubuntu 22. Also have a PS3 and PS4 set up to the main monitor. A second work computer under my desk, with both PCs hooked up to a KVM to seamlessly switch between work and gaming.

What KVM?

Now I realize you may have been asking what KVM I'm using. It's strapped to the bottom of my desk so you can't see it. Here is the exact one I have: TRENDnet 2-Port Dual Monitor DisplayPort KVM Switch with Audio, 2-Port USB 2.0 Hub, 4K UHD Resolutions Up to 3840 x 2160, Connect Two DisplayPort Monitors, Dual Monitor KVM Switch, Black, TK-240DP https://a.co/d/epAHtkR

Yeah, that's what I was asking. DP KVMs are a bit of a hit and miss. Any problems with it?

This one works perfectly, it was just a little expensive. I got a cheap one the first time and my monitors would turn off and on with my gaming PC.

It's a device where you can plug in things like monitors, mouse, keyboard, etc., and then that plugs into both PCs so you can switch back and forth without having to unplug anything. Here is an example of one: 4K@120Hz DisplayPort KVM Switch 2 Computers 2 Monitors DP1.4, KVM Switches Dual Monitor 8K@60Hz with 3 USB 3.0 Ports Share Keyboard Mouse Printer (EDID Plug and Play) and Wired Remote, USB3.0 Cable*2 https://a.co/d/h9vJ0Uu

Western Digital My Cloud EX2 (Original) for storage

Raspberry Pi 5 for Home Assistant, Navidrome, Jellyfin, Kavita, Immich, Paperless and eventually NextCloud. Though it's being a bastard and won't run right now.

I need to get a NanoPi to run OPNsense and Pi-hole and I'll be happy.

The NanoPi R2C has gigabit speeds and you can run LibreCMC with little to no blobs :)

It is an Ethernet-only router though, no WiFi.

My plan was to get one of those flying saucer looking WAPs to handle the WiFi. Would that work?

Runs off to look up LibreCMC 😂

Proxmox VE on a machine that I got almost for free. Intel i3-4160, 10GB RAM, 240GB SSD for the OS, and a non-redundant 1TB HDD for storage. The only things I paid for are a second NIC and an 8GB RAM stick.

PVE is running a pfSense VM, and a bunch of Debian containers:

  • Samba
  • Jellyfin (still setting it up)
  • Twingate Connector

All internet traffic goes through the pfSense VM. Unfortunately the ISP has put me behind CGNAT and disabled bridge mode, so my internet-facing things (mostly Wireguard and SSH) are pretty much crippled. Right now my best no-cost option is to use Twingate, but I don't trust it to handle anything other than SSH.

If you're behind CGNAT and port forwarding is not an option, Headscale, Tailscale or ZeroTier may be an option. I use Tailscale with ZERO forwarding and can access anything on my network when connected through it. Think of these as WireGuard on steroids. :)

I tried Tailscale once, but it introduced some massive latency because apparently I got connected to my machine through a gateway in Frankfurt. It was the Tailscale Funnel service though, so maybe that's not what I needed.

Also, are any of the services you listed end-to-end encrypted?

Great setup! Be careful with the SSD though, Proxmox likes to eat those for fun with all those small but numerous writes. A used, small capacity enterprise SSD can be had for cheap.

ThinkPad T450s (my old laptop)

OS: Arch Linux, DE: Plasma

Services: an Arr stack (gluetun, Sonarr, Radarr and Jackett), Jellyfin for videos, Gonic for audio

All 3 of them are run using docker compose

NAS with Truenas, built myself:

  • Shared storage
  • Backups
  • Downloaders

And the following in a VM with docker compose:

  • TubeArchivist

Separate K8s cluster with a single control plane (2nd-hand old small form-factor HP stuff) and 3 nodes to run more resource-intensive stuff that doesn't need to be close to the data source:

  • *ARR

HomeAssistant in another 2nd hand HP small form factor box

Main site:

  • 5950X on a GA-AB350-Gaming 3

  • 64GB

  • 1TB NVMe mirrored

  • 24TB RAIDz1, using external USB 3 disks

  • Ubuntu LTS

  • 700Mbps uplink

  • OpenWrt on Pi 4 router

  • Home Assistant Yellow

Off site:

  • ThinkCentre 715q
  • 2400GE
  • 8GB
  • 256GB NVMe
  • 24TB RAIDz1, using external USB 3 disks
  • Ubuntu LTS
  • 30Mbps uplink
  • OpenWrt on Pi 4 router

Syncthing replicates data between the two. ZFS auto snapshots prevent accidental or malicious data loss at each site.

Various services run on both machines: Plex, Wiki.js, OpenProject, etc. Most are run in docker, managed via systemd. The main machine is also used as a workstation and for games.

The storage arrays are ghetto special - USB 3 external disks, some WD Elements, some Seagate in enclosures. I even used to have a 1TB, a 3TB and a 4TB disk in an LVM volume pretending to be an 8TB disk in one of the ZFS pools. The next time I have to expand the storage I'll use second-hand disks.

The 5950X isn't boosting as high as it should be able to on a chipset with PB2, but I got all those cores on a B350 board. 😆 Config management is done with SaltStack.

I have a similar setup. I just recently switched to the ASRock Phantom X570 for $100. It's a fantastic board at that price.

Did it improve the 5900X's boost?

I'll have to double check, but I came from a B450 board. It definitely allowed me to run my RAM at a higher XMP profile (4x 3200MHz), and it has way better IOMMU groups. Each PCIe device gets its own group, so they can all be passed to different VMs.

Like a fucked-up ACL trying to do a kind of least-privileged filesystem, knowing absolutely nothing.

And 2 NUCs.

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters | More Letters
AP | WiFi Access Point
CGNAT | Carrier-Grade NAT
DNS | Domain Name Service/System
Git | Popular version control system, primarily for code
HA | Home Assistant automation software
~ | High Availability
HTTP | Hypertext Transfer Protocol, the Web
HTTPS | HTTP over SSL
IP | Internet Protocol
LTS | Long Term Support software version
LVM | (Linux) Logical Volume Manager for filesystem mapping
LXC | Linux Containers
NAS | Network-Attached Storage
NAT | Network Address Translation
NUC | Next Unit of Computing brand of Intel small computers
NVMe | Non-Volatile Memory Express interface for mass storage
PCIe | Peripheral Component Interconnect Express
PSU | Power Supply Unit
PiHole | Network-wide ad-blocker (DNS sinkhole)
Plex | Brand of media server package
PoE | Power over Ethernet
RAID | Redundant Array of Independent Disks for mass storage
RPi | Raspberry Pi brand of SBC
SAN | Storage Area Network
SATA | Serial AT Attachment interface for mass storage
SBC | Single-Board Computer
SSD | Solid State Drive mass storage
SSH | Secure Shell for remote terminal access
SSL | Secure Sockets Layer, for transparent encryption
VPN | Virtual Private Network
ZFS | Solaris/Linux filesystem focusing on data integrity
Zigbee | Wireless mesh network for low-power devices
k8s | Kubernetes container management package
nginx | Popular HTTP server


30 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

[Thread #525 for this sub, first seen 18th Feb 2024, 06:05] [FAQ] [Full list] [Contact] [Source code]

I'm running my email server on a POCO F1 ex-Android phone (running PostmarketOS now).

I wish I could get NixOS running on it, then I'd move other things also there.

I have a 10-year-old Intel i3 CPU (don't know what generation) with 12GB of RAM, a few HDDs (8TB, 2TB and 1TB), and a 128GB SSD for Debian.

The motherboard only has VGA output and I don't have any VGA display with me. So if anything goes wrong at reboot, I mostly do guesswork to resolve it.

The PSU fan is whining and hanging on to its life.

I am an atheist, but I pray to God for my PSU's life.

A custom PC running proxmox:

MOBO: Asus ROG Strix Z790-E gaming wifi

RAM: 4x 32GB Ripjaws S5

CPU: i9-13900k

GPU: Gigabyte RTX 4090

GPU2: EVGA GTX 1070

HDD: 4x 8TB WD red plus in raid 10

SSD: 2x 2TB Samsung 980 pro in raid 1

PSU: Super Flower Leadex Titanium 1600 W

Case: Fractal Design R5

I run everything on it: Home Assistant, Plex/*arr, Pi-hole/unbound, my Windows gaming VM, etc.

PA-220 firewall for internet access. An old workhorse, a Synology DS1812+, for file sharing. A Mac mini with Ubuntu running Plex and Roon, also hosting Dashy in docker. An HWg-STE to measure the temperature in my cabinet. I host a RIPE probe. An RPi4 running Zabbix. My next project is moving from the PA-220 to something in the 400 series (probably a 415) so I can upgrade to newer PAN-OS.

Power

  • 2x feeds into the rack (same circuit but we'll work on that)
  • Eaton 2000VA double conversion UPS on Feed A
  • APC 1500VA line interactive UPS on Feed B (bypassed, replacing it with another double conversion 2kVA eventually)

Network

  • 2x Dell N2048P, stacked (potentially getting replaced with 2x stacked Cisco 9300)
  • FortiGate firewall
  • 1000/50 FTTP primary Internet link
  • 4G backup Internet link using a different Telco (the dream is to replace this with Starlink)

Storage

  • Synology 4-bay NAS with 4x4TB in RAID-10 (for overflow storage from Virtual SAN cluster)
  • HP MSL2024 8Gb Fibre Channel LTO5 tape autoloader for off-site backup

Compute

  • Dell R520 running VMware ESX for Production (2x Xeon E5-2450L, 80GB DDR3, 4x500GB SSD RAID-10 for Virtual SAN, 1x10TB SATA "scratch" disk, 2x10G fibre storage NICs, 2x1G copper NICs for VM traffic)
  • Dell R330 running VMware ESX for backups and DR (1x Xeon E3-1270v5, 32GB DDR4, 2x512GB SSD RAID-1, 2x4TB HDD RAID-1, 8G FC card for tape library)

A second prod host will join the R520 soon to add some redundancy and mirror the Virtual SAN.

All VMs are backed up and kept in an encrypted on-site data store for at least 4 weeks. They're duplicated to tape (encrypted) once a month and taken off site. Those are kept for 1 year minimum. Cloud backup storage will never replace tape in my setup.

Services

As far as "public facing" goes, the list is very short:

Though I do run around 30-40 services all up on this setup (not including actual non-prod lab things that are on other servers or various SBCs around the place).

If I had unlimited free electricity and no functioning ears I'd be using my Cisco UCS chassis and Nexus 5K switch/fabric extenders. But it just isn't meant to be (for now, haha).

Self-built Proxmox server (5600G / 64GB RAM / 1x 2TB NVMe + 4x 4TB HDD) with 2 NICs running literally everything. The list of services I run is long and I'm too lazy to type them.

  • Ryzen 2700X on a gigabyte B450i

  • Arc A380

  • 2 mirrored 4TB HDDs and 1 12 TB HDD, luks encrypted and on 2 zpools (I have an "unsafe" mount path for data on a single drive like media)

  • removable flash drive with boot partition and main SSD keyfile

  • Z-Wave dongle

That's it.

I can run everything I need on it, and my home internet is still only 100/30 because I don't live in a city, so 2.5gig networking isn't worth the cost. The A380 does all of the hardware transcoding I need at fairly low power. It isn't as good as just getting a newer NUC, but it was cheaper and a fun project.

Also doing a full renovation, so KNX will be connected for home assistant to control my lights and things and my smart home stuff will probably balloon.

N100 that just got built today, with only Ubuntu and Portainer installed. I still gotta migrate what I had on my main PC, which was Emby, Sonarr, Bazarr, qBittorrent and Prowlarr. It'll be... fun.

Not sure if I'm late on the draw here, but:

Debian 12 "Bookworm"

  • Ancient 2007 quad-core Intel Q6600
  • ASUS P5N-T Deluxe motherboard
  • 8 GB RAM
  • 64GB SSD for the OS and a few applications
  • 6x 2TB laptop HDDs in RAID 5 - scavenged from electronics scrap
  • All wrapped up in a spare full tower I had from an old build

For now, the few services I have running are local network only. They are simply a few Docker containers running PiHole and Portainer. The RAID array is set up as a network share via SMB for my various personal devices to dump files to.

I am very new to the whole self-hosted thing and enjoying learning. Really, new to Linux, servers, networking, etc. Would love to hear some recommendations on what services I should look into, resources for learning more, critiques, etc. So far, browsing topics on here has been pretty helpful.