Homelab Organization

Hellmo_luciferrari@lemm.ee to Selfhosted@lemmy.world – 33 points –

Hi all!

So I want to get back into self-hosting, but every time I've stopped, it's because I lacked the documentation to fix things that break. So I pose a question: how do you all go about keeping your setup documented? What programs do you use?

I lean towards open source software, so things like OneNote, or anything else from Microsoft, are out of the question.


Edit: I didn't want to add another post and annoy people, but I had another inquiry:

What reverse proxy do you use? I plan to run a bunch of services from Docker, and would like to be able to map an IP:port to something like service.mylocaldomain.lan

I already have Unbound set up on my Pi-hole, so I have the ability to set DNS records internally.

Bonus points if the reverse proxy setup can handle SSL cert automation.
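Since Pi-hole's resolver is built on dnsmasq, a wildcard record pointing a whole internal domain at the reverse proxy host can be a one-line config fragment. A sketch, where both the domain and the IP are placeholders:

```
# /etc/dnsmasq.d/05-local.conf
# Resolve *.mylocaldomain.lan (and the bare domain) to the reverse proxy host
address=/mylocaldomain.lan/192.168.1.50
```

Individual records can also be managed through Pi-hole's Local DNS page in the web UI instead.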


I'm a grumpy linux greybeard type, so I went with... plain text files.

Everything is deployed via docker, so I've got a docker-compose.yml for each stack, and any notes or configuration things specific to that app are comments in the compose file. Those are all backed up in a couple of places, since all I need to do is drop them on a filesystem, and bam, complete restoration.
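A sketch of what that can look like — the service, version, and paths here are made up, and the comments carry the documentation:

```yaml
# stacks/freshrss/docker-compose.yml
# NOTE: first start needs the web installer run once to initialize the database
# NOTE: pinned to 1.23.1 after testing; check release notes before bumping
services:
  freshrss:
    image: freshrss/freshrss:1.23.1
    ports:
      - "8081:80"
    volumes:
      - ./data:/var/www/FreshRSS/data
    restart: unless-stopped
```

Restoring really is just `docker compose up -d` from the backed-up directory.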

Reverse proxy is nginx, because it's reliable, tested, proven, and works, and while it might not have all those fancy auto-config options other things have, it also doesn't automatically configure itself in a way that I'd prefer it didn't.
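For the ip:port-to-hostname mapping asked about above, a minimal nginx vhost might look like this (hostname, ports, and cert paths are all placeholders):

```nginx
# /etc/nginx/conf.d/service.conf
server {
    listen 443 ssl;
    server_name service.mylocaldomain.lan;

    ssl_certificate     /etc/nginx/certs/service.crt;
    ssl_certificate_key /etc/nginx/certs/service.key;

    location / {
        proxy_pass http://127.0.0.1:8081;          # the container's published port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```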

I don't use any tools like portainer or dockge or nginx proxy manager at this point, because dealing with what's just a couple of config files on the filesystem is faster (for me) and less complicated (again, for me) than adding another layer of software on top (and it keeps your attack surface small).

My one concession to GUI shit for Docker is an install of Dozzle, because it certainly makes dealing with Docker logs simple, and it simplifies managing the ~40 stacks and ~85 containers that I've got set up at the moment.
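Dozzle's own deployment is itself a tiny compose file; it only needs read access to the Docker socket (the host port here is arbitrary):

```yaml
services:
  dozzle:
    image: amir20/dozzle:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # read-only socket access
    ports:
      - "8888:8080"
    restart: unless-stopped
```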

I appreciate that mentality though. When things break, if your understanding of your setup is there, it's less to deal with.

I am forgoing the Portainer route this time. I am going to strictly use Docker Compose for my containers. I had too many issues with Portainer to consider using it.

For reverse proxy, I just need/want it for simple ip:port to sub.domain.lan type addresses locally. Anything I need outside of my home will be tunneled through wireguard.

I always quite liked Dozzle. It was handy, and has helped me comb through logs in the past.

Yeah, exactly: if you know how it works, then you know how to fix it. I don't think you need comprehensive knowledge of how everything you run works, but you should at least have good enough notes somewhere to explain HOW you deployed it the first time, whether you had to make any changes, and anything you ran into that required you to go figure out what the blocking issue was.

And then you should make sure that documentation is visible in a form that doesn't require ANYTHING to actually be working, which is why I just put pages of notes in the compose file: docker doesn't care, and darn near any computer on earth made in the last 40 years can read a plain text file.

I don't really think there's any better/worse reverse proxy for simple configurations, but I'm most familiar with nginx, which means I've spent too long fixing busted shit on it. So it's the choice primarily because, well, when I break it, I probably already know how to fix what's wrong.

I run a k3s cluster for selfhosted apps and keep all the configuration and docs in a git repo. That way I have history of changes and can rollback if needed. In that repo I have a docs folder with markdown documents about common operations and runbooks.

There are other ways to do this, but I like keeping docs next to the code and config so I can update them all at the same time. I've deployed several wikis in the past but always forgot to update them when I changed things.

I really should spend time familiarizing with maintaining a git repo. I'll likely find one I can self host.
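The basics are small enough to fit in a note. A sketch of starting such a repo (directory and file names are made up):

```shell
# Turn a folder of compose files and notes into a version-controlled repo
mkdir -p homelab/docs
echo "# Runbook: restoring the stack" > homelab/docs/restore.md
git -C homelab init -q
git -C homelab add .
git -C homelab -c user.name=me -c user.email=me@example.com commit -qm "document restore procedure"
git -C homelab log --oneline
```

From there, pushing to any self-hosted remote (Gitea, Forgejo, gitolite) is one `git remote add` away.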

If you want a git "server" that's quick and low maintenance, then gitolite is most likely the best choice. https://gitolite.com/gitolite/index.html

It simply acts as a server that you can clone from with any git client, and the coolest part is that you use git commits to create repositories and manage users as well. Very little or no maintenance at all. I've been using it personally for years, but I've also seen it used at some large companies, because it simply gets the job done and doesn't bother anyone.
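That administration-by-commit works through a special gitolite-admin repo: you edit its config, commit, and push, and gitolite creates the repos. A sketch, where repo and user names are placeholders (users correspond to public keys in the admin repo's keydir/):

```
# conf/gitolite.conf inside the gitolite-admin repo
repo homelab-docs
    RW+     =   alice
    R       =   backup-bot
```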

  • caddyserver for reverse proxy
  • docker-compose for ~75% of documentation
  • logseq for notes, though I don’t keep much.

Docker and docker-compose are nice because every service you want to run follows the same basic pattern. You don’t need much documentation beyond the project docs and the compose files themselves.

Edit: caddyserver can do automatic certs, even behind a firewall if you set up the API call method. Varies by registrar.

> So I want to get back into self-hosting, but every time I've stopped, it's because I lacked the documentation to fix things that break. So I pose a question: how do you all go about keeping your setup documented? What programs do you use?

Joplin or Obsidian? Or... plain markdown files with your favorite text editor.

I use Joplin and it works great for this exact thing. Anytime I discover a new command that fixes something, I’ll throw it into my Joplin notebook. A “New Server Cheatsheet” note lists, in order, common operations and commands for setting up SSH, UFW, making a non-root user, configuring WireGuard, etc. I have hundreds of notes by now and they’re easily found via the search bar.
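For flavor, the kind of entries such a cheatsheet tends to hold on a Debian/Ubuntu-style server — the username is a placeholder, and this is an illustrative sketch, not a hardening guide:

```shell
# New-server basics: non-root user, firewall, no root SSH
adduser deploy                      # interactive: sets password and home dir
usermod -aG sudo deploy             # let the new user sudo
ufw allow OpenSSH                   # open only SSH before enabling the firewall
ufw enable
# then set "PermitRootLogin no" in /etc/ssh/sshd_config and restart ssh
```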

My documentation problem was largely fixed by using NixOS. The actual OS instances are self-documenting that way.

As far as the documentation for the network setup itself goes, a simple wiki does the rest of the trick for me.

I still want to get familiarized with NixOS and the concepts behind it. Just haven't taken the time.

I'm adding documentation about what I do in Joplin and I'm using Nextcloud to keep it synced.

For reverse proxy I use Nginx Proxy Manager for its simplicity. I really don't need anything fancier. https://nginxproxymanager.com/

You could try Logseq, it's like Obsidian but open source. I use Obsidian for most notes and I also have a personal wiki built with Otterwiki.

I use NGINX for my reverse proxy, you could check out NGINX Proxy Manager which uses Certbot to automate the SSL certificates.

I've heard a lot of people also like Caddy and Traefik. Can't remember which is easier to use, maybe Caddy.

I will likely dabble with Logseq.

I used NGINX Proxy Manager for a while, then had some issues that ultimately killed my homelab setup, so not sure that I want to go down that route again, or if I want to investigate Caddy, Traefik, or another.

Yeah, I could never get NPM to work right on my system either. I use the NGINX Docker image and set up my certs manually.

If I were to do it all over again today, I would probably go with Caddy since it now has a bunch of that stuff built in with automatic HTTPS by default and the basic reverse proxy setup is literally 2 lines of code.
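To illustrate the "two lines" point, a minimal Caddyfile sketch (hostname and port are placeholders):

```
# Caddyfile — Caddy fetches and renews the certificate automatically
service.example.com {
    reverse_proxy 127.0.0.1:8081
}
```

For purely internal names like .lan, automatic public certs can't be issued; Caddy can instead sign with its own local CA via the `tls internal` directive.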

My documentation is a folder with the docker compose files I am using. And some notes in Nextcloud Notes if needed.

My reverse proxy is Traefik, since it's docker aware. :)

Came to write basically this. I would try caddy but my compose file is 600 lines long now and half of that is traefik labels, I can't be arsed with the migration.

Traefik or Caddy are the 2 I am bouncing back and forth between currently. I may spin up a nextcloud instance.

This might be a bit late, but from my perception Traefik has a touch more of a learning curve, but it integrates much better with solutions like Authelia/Authentik and Prometheus than Caddy does.

I might be wrong, I've never used Caddy, but that's my perception.

I use markdown text files which are synced to my nextcloud instance.

This is somewhat tangential to your post, but I think using infrastructure as code and declarative technologies is great for reliability because you aren't just running a bunch of commands until something works, you have the code which tells you exactly how things are set up, and you can version control it to roll back to a working state. The code itself can be a form of documentation in that case.

I think I need to utilize this strategy because I get lazy and don't update external documentation.

Some examples of technologies which follow that paradigm are docker compose, Ansible, NixOS and Terraform. But it all depends on your workflow.

I think I am going down the docker compose route. When I started using docker, I didn't use compose, however, now I plan to. Though, Ansible has been on my list of things to learn, as well as nixOS.

Another suggestion for you: I highly recommend specifying a version for the Docker image you are using for a container, in the compose file. For example, nextcloud:29.0.1. If you just use :latest, it will pull a new version whenever you redeploy, which you may not have tested against your setup, and the version upgrade may even be irreversible, as in the case of Nextcloud. This will give you a lot more control over your setup. Just don't forget to update images at reasonable intervals.
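In compose terms, the difference is one line (using the Nextcloud example from above):

```yaml
services:
  nextcloud:
    image: nextcloud:29.0.1      # pinned: upgrades only when you edit this line
    # image: nextcloud:latest    # avoid: silently pulls whatever is newest on redeploy
```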

That is good advice, and honestly never really occurred to me to set specific versions for containers.

  • ansible playbook for automated/self-documenting setup
  • for one-off bugs or ongoing/long-term problems, open an issue on my Gitea instance and track the investigations and solutions there.

I'm also using ansible everywhere in my home / private infra and lab. Occasionally I get slightly annoyed that I have to open an inventory file or a role var to find something. But in general I'm so grateful that there is one place to find this information, and the same is used to set up everything from scratch.

Is it extra work to write the roles and playbooks? Yes. Does it solve the documentation and automation problem completely? Absolutely. 10/10 would recommend. And for the record, most things I host run on containers, but the volumes and permission management alone make it worth your time.
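As a sketch of the shape this takes — the host group, paths, and file names below are all placeholders, and the compose step deliberately sticks to builtin modules:

```yaml
# deploy-media.yml — illustrative playbook
- name: Deploy media stack
  hosts: homelab
  become: true
  tasks:
    - name: Create data directory with the right ownership
      ansible.builtin.file:
        path: /srv/media/data
        state: directory
        owner: "1000"
        group: "1000"
        mode: "0750"

    - name: Copy the compose file from the repo
      ansible.builtin.copy:
        src: files/media/docker-compose.yml
        dest: /srv/media/docker-compose.yml

    - name: Bring the stack up
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /srv/media
```

Because the playbook lives in git alongside the compose files, it doubles as the record of how the host was built.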

Personally I use Linkwarden for keeping snapshots of websites as well as a bookmark manager and Memos for a simple note-taking app. Both can be installed on mobile as PWAs, so it makes it easier to access on-the-go.

I'm using Nginx Proxy Manager, which I highly recommend for new users due to how simple it is to get set up and running! NPM renews SSL certs automatically before they expire as well (afaik). You just gotta make sure that your different Docker containers' ports don't collide with each other.

Today I learned about Linkwarden, and I am so excited to check it out. Thank you!

NPM I did use, however it was ultimately the catalyst as to why I quit homelabbing. But when it did work, it was simple even for SSL cert renewal.

I hope you have fun with Linkwarden!

If you don't mind me asking, why did NPM push you to quitting homelabbing?

I ran into an issue where I changed nothing, and all of a sudden none of my SSL certs worked; on top of that, most of the hosts were not working through the reverse proxy. I had not even changed IP addresses on any of them. I am not sure what was going on.

It was more of a "I didn't want to troubleshoot" and gave up, so I shut down my servers.

I write everything in markdown, and I mean just about everything. Tech notes, recipes, work procedures, shopping lists...everything. If you check my comment history from today, you can see a quick example of the kind of tech notes I keep (firewalld in this case).

I keep all of my plain text files synced across multiple devices using Syncthing. For desktop editors, I use mostly vim and VSCodium (though Kate is nice too), and I use Markor on Android. This workflow has been highly efficient for many years now, and I no longer waste time constantly reviewing the latest note-taking app.

Dokuwiki (dokuwiki.org) is my usual go-to. It’s really simple and stores entries as plain text files, so you can get at them in a pinch. Here’s a life lesson: don’t host your documentation on the machine you’re going to be breaking! Learned that the hard way once or twice.

For reverse proxies, I’m a fan of HAProxy. It uses pretty straightforward config files and is incredibly robust.
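A minimal HAProxy sketch of the hostname-to-backend mapping (names, paths, and addresses are placeholders):

```
# haproxy.cfg fragment
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/service.pem
    acl is_service hdr(host) -i service.mylocaldomain.lan
    use_backend service_be if is_service

backend service_be
    server app1 127.0.0.1:8081 check
```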

I use Obsidian for my notes/wiki. I use the git plugin to backup/sync my notes. I self-host Forgejo as my git server. Works great!

Caddy is my favorite reverse-proxy. The setup is just a config file.

Traefik for reverse proxy. Tag your container with the route and let traefik take over.
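The label approach looks roughly like this in a compose file — the router name and hostname are placeholders, and `traefik/whoami` is just a handy echo container for testing:

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.mylocaldomain.lan`)"
      - "traefik.http.services.whoami.loadbalancer.server.port=80"
```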

I think Traefik is going to be what I investigate using. However the last time I tried, I was a little lost. I will have to comb over the documentation better this time.

Traefik is powerful and versatile but has a steep learning curve. It also uses code to control its configuration which is a bonus for reliability and documentation as discussed elsewhere ITT. Nginx proxy manager is much simpler and easier to use, may be a good one to get started with, but lacks the advantages of traefik described above. Nginx proxy manager does support SSL cert automation.

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

| Fewer Letters | More Letters |
|---|---|
| DNS | Domain Name Service/System |
| Git | Popular version control system, primarily for code |
| HA | Home Assistant automation software / High Availability |
| HTTP | Hypertext Transfer Protocol, the Web |
| IP | Internet Protocol |
| SSH | Secure Shell for remote terminal access |
| SSL | Secure Sockets Layer, for transparent encryption |
| VPS | Virtual Private Server (opposed to shared hosting) |
| nginx | Popular HTTP server |


[Thread #845 for this sub, first seen 2nd Jul 2024, 16:15]

My OPNsense router handles automatic SSL certificate renewals, Unbound (DNS), and HAProxy (for reverse proxy).

Gitea instance for all of my docker-compose configs and documentation.

Joplin server and Joplin clients for easy notes available on all my devices.

I use BookStack, and with Node-RED I export the books to PDF as soon as pages get updated, so if everything goes belly up, I have all the documentation in PDFs (kept locally and automatically uploaded to a free Dropbox account, also done with Node-RED).

I've been using YunoHost for some time. Cosmos seems good, too. Both do most of the stuff for you and should come with documentation. I think that's the way to go if you can't set it all up yourself, or lack time to maintain it.

I've also used Docker containers and plain Debian. I use NGinx as a reverse proxy.

I document things in text files (markdown). At some point I'd like to publish them with something like MkDocs or to a wiki. But since it's just me, having them just sitting in a directory on my laptop is fine.

Use something that's super accessible so you'll actually use it. I often just dump random thoughts or commands I executed into the text files, and I have my text editor open all the time anyway. And then on the server I either use Ctrl+R to search through the shell history, or search in my documents. Doesn't need to be fancy; grep -rni "keyword" does it for me.

I use nginx for reverse proxy. You can get certbot working to automate ssl fairly easily. There is a learning curve, but most services I use have documentation for hosting their stuff with it.
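The usual flow with the nginx plugin is two commands — the domain is a placeholder, and this assumes certbot with its nginx plugin is installed and the name is publicly resolvable:

```
sudo certbot --nginx -d service.example.com   # obtains a cert and edits the matching server block
sudo certbot renew --dry-run                  # verifies automatic renewal will work
```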

One day, I moved all services I really wanted from a couple of random VPS to a nice little proxmox machine at home (and then added some more services, of course). That was the day I swore to document stuff better, and I'm pretty satisfied with how well I was able to keep up with that.

In the proxmox web interface, you can leave notes per container. I note down which service the container is running, including a link to the service's web interface if applicable, plus the source, and a note about whether it auto-updates (green check mark emoji) or requires manual updates (handyman emoji).

Further, I made a conscious effort to document everything in a gollum wiki running on that proxmox host (it exposes a wiki-like web interface and stores all entries as plaintext .md files in a local git repo, so it's very "portable"). Most importantly, it also includes a page of easy-to-understand emergency measures in case I die or become unresponsive, which I regularly print out and put into a folder with other important documents. The page contains a QR code linking to itself on the wiki too, in case the printed version might be outdated here or there.

The organization of the wiki itself (what goes into which folder) is a bit of a work in progress, but as it offers full text search, that's not too much of a problem imo.

I have a couple of LibreOffice files where I document the non-technical stuff for my own quick reference, like the network layout in Draw, or IP and port assignments in Calc. I use a git repo to store and organize podman scripts, systemd unit files, configs, etc. Probably not the most elegant solution, but it's simple and FOSS.
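For the podman-plus-systemd part, a unit file sketch — the name, image, and ports are placeholders, and newer podman can also generate units for you via `podman generate systemd` or Quadlet:

```
# /etc/systemd/system/media-app.service
[Unit]
Description=Media app container
Wants=network-online.target
After=network-online.target

[Service]
ExecStartPre=-/usr/bin/podman rm -f media-app
ExecStart=/usr/bin/podman run --name media-app -p 8081:80 docker.io/library/nginx:stable
ExecStop=/usr/bin/podman stop media-app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```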

Reverse proxy is Nginx Proxy Manager.

Right now, I’m using Obsidian. I think I’d like to transition to keeping docs in a wiki, but I worry that it’s part of the self-hosted infrastructure. In other words, if the wiki’s down, I no longer have the docs that I need to repair the wiki.

I have looked at Obsidian, and it looks nice, but the closed-source part is why I can't personally use it. Though in discussions I have seen Logseq thrown out when talking about similar software.

The wiki idea is a good one. The way to handle that is to have the wiki backed up incrementally.

I'm writing documentation in Obsidian. I then expose it to the web with Quartz so I can access it from all my devices and share it with others. Everything is markdown. It's tunneled out of my network with Cloudflare Tunnels, which handle SSL for me.