After 1.5 years of learning selfhosting, this is where I'm at

7Sea_Sailor@lemmy.dbzer0.com to Selfhosted@lemmy.world – 627 points –


Mid-2022, a friend of mine helped me set up a selfhosted Vaultwarden instance. Since then, my "infrastructure" has not stopped growing, and I've been learning each and every day about how services work, how they communicate and how I can move data from one place to another. It's truly incredible, and my favorite hobby by a long shot.

Here's a map of what I've built so far. Right now, I'm mostly done, but surely time will bring more ideas. I've also left out a bunch of "technically relevant" connections like DNS resolution through the AdGuard instance, firewalls, and CrowdSec on the main VPS.

Looking at the setups that others have posted, I don't think this is super incredible - but if you have input or questions about the setup, I'll do my best to explain it all. None of my peers really understand what it takes to construct something like this, so I am in need of people who understand my excitement and pride :)

Edit: the image was compressed a bit too much, so here's the full res image for the curious: https://files.catbox.moe/iyq5vx.png And a dark version for the night owls: https://files.catbox.moe/hy713z.png


me after 15 years of intermittently learning self hosting:

i have the one random office PC that runs minecraft

....yeah that's it

With the enshittification of streaming platforms, a Kodi or Jellyfin server would be a great starting point. In my case, I have both, and the Kodi machine gets the files from the Jellyfin machine through NFS.

Or Home Assistant to help keep IoT devices in line, since they tend to be more IoS. Or a Nextcloud server to try to degoogle at least a little bit.

Maybe a personal Friendica instance for your LAN so your family can get their Facebook addiction without giving their data to Meta?

Additionally, using Jottacloud with two VPSes (one of them built on EPYC, like those from OVH Cloud) can get you a really good download and streaming server for about £30 a month, which is the same as having Netflix and Disney Plus, except now you can have anything you want.

I have a Contabo 4-core 8GB RAM VPS that handles downloading content.

An OVH 4-core 8GB VPS handles Emby (I keep trying to go back to Jellyfin, but it's just slightly slower than Emby at transcoding, and I need to squeeze as much performance out of my VPS as possible, so... maybe one day, Jelly).

And I have a really good streaming experience, with subtitles that don't put big black boxes on the screen that make an eighth of it unviewable.

I'd recommend using Borgbackup over SSH, instead of just using rclone for backups. As far as I know, rclone is like rsync in that you only have one copy of the data. If it gets corrupted at the source, and that gets synced across, your backup will be corrupted too. Borgbackup and Borgmatic are a great way to do backups, and since it's deduplicated you can usually store months of daily backups without issue. I do daily backups and retain 7 daily backups, 4 weekly backups, and 'infinite' monthly backups (until my backup server runs out of space, then I'll start pruning old monthly backups).
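That retention policy can be sketched in a borgmatic config; this uses the older sectioned layout and made-up paths/repo names (newer borgmatic versions flatten these sections to the top level):

```yaml
# borgmatic config sketch - hypothetical paths and repo URL
location:
    source_directories:
        - /opt/docker
    repositories:
        - ssh://borg@backup-server/./backups/main

retention:
    keep_daily: 7      # 7 daily backups
    keep_weekly: 4     # 4 weekly backups
    keep_monthly: -1   # keep all monthlies; prune by hand when space runs out
```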

Borgbackup also has an append-only mode, which prevents deleting backups. This protects the backup in case the client system is hacked. Right now, someone that has unauthorized access to your main VPS could in theory delete both the system and the backup (by connecting via rclone and deleting it). Borg's append-only mode can be enabled per SSH key, so for example you could have one SSH key on the main VPS that is in append-only mode, and a separate key on your home PC that has full access to delete and prune backups. It's a really nice system overall.
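The per-key split can be sketched in the backup server's `authorized_keys`; key material and repo path here are placeholders:

```
# ~/.ssh/authorized_keys on the backup host (sketch - placeholder keys/paths)
# Key held by the main VPS: can only append to its own repo
command="borg serve --append-only --restrict-to-repository /backups/vps",restrict ssh-ed25519 AAAA...vps
# Key held by the home PC: full access, may prune old archives
command="borg serve --restrict-to-repository /backups/vps",restrict ssh-ed25519 AAAA...home
```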

You're right, that's one of the remaining pain points of the setup. The rclone connections are all established from the homelab, so potential attackers wouldn't have any traces of the other servers. But I'm not 100% sure if I've protected the local backup copy from a full deletion.

The homelab is currently using Kopia to push some of the most important data to OneDrive. From what I've read, it works very similarly to Borg (deduplicated, chunk-based, with compression and encryption), so it would probably also be able to do this task? Or maybe I'll just move all backups to Borg.

Do you happen to have a helpful opinion on Kopia vs Borg?

I haven't tried Kopia, so unfortunately I can't compare the two. A lot of the other backup solutions don't have an equivalent to Borg's append-only mode though.

I'm a borg guy. I'd never heard of kopia. This is from their docs though:

Each snapshot is always incremental. This means that all data is uploaded once to the repository based on file content, and a file is only re-uploaded to the repository if the file is modified. Kopia uses file splitting based on rolling hash, which allows efficient handling of changes to very large files: any file that gets modified is efficiently snapshotted by only uploading the changed parts and not the entire file.

So it looks like they do append-only.
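The rolling-hash splitting those docs describe can be illustrated with a toy content-defined chunker. This is purely illustrative (not Kopia's actual algorithm); `WINDOW`, `MASK` and `chunks` are names I made up:

```python
# Toy content-defined chunking: a rolling hash over the bytes decides chunk
# boundaries, so an edit only changes the chunks around it - everything after
# the next boundary re-syncs, which is what makes dedup uploads incremental.

WINDOW = 16   # minimum chunk length in bytes
MASK = 0x3F   # boundary whenever hash & MASK == 0 (~64-byte average chunks)

def chunks(data: bytes):
    """Split data into variable-size chunks at content-defined boundaries."""
    h = 0
    start = 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF  # toy rolling hash, resets per chunk
        if i - start >= WINDOW and (h & MASK) == 0:
            yield data[start:i + 1]
            start = i + 1
            h = 0
    if start < len(data):
        yield data[start:]  # trailing partial chunk
```

Because boundaries depend only on content, re-running it over the same data always yields the same chunks, which is what lets the repository store each chunk once.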

Architecture looks dope

Hope you've safeguarded your setup by writing a provisioning script in case anything goes south.

I had to reinstall my server from scratch twice and can't fathom having to reconfigure everything manually anymore

Nope, don't have that yet. But since all my compose and config files are neatly organized on the file system, by domain and then by service, I tar up that entire docker dir once a week and pull it to the homelab, just in case.
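That weekly tar job could be as simple as the sketch below; all paths are hypothetical (the demo scaffolding just makes it runnable anywhere - on a real host you'd point `DOCKER_DIR` at the actual docker dir and drop that part):

```shell
#!/bin/sh
# Weekly backup sketch: tar up the whole docker dir (compose + config files,
# organized by domain and then service) so it can be pulled to the homelab.
set -eu

DOCKER_DIR="${DOCKER_DIR:-/tmp/demo-docker}"
OUT_DIR="${OUT_DIR:-/tmp/demo-backups}"

# Demo scaffolding so the sketch runs anywhere; not needed on a real host.
mkdir -p "$DOCKER_DIR/example.com/vaultwarden" "$OUT_DIR"
: > "$DOCKER_DIR/example.com/vaultwarden/compose.yml"

STAMP="$(date +%Y-%m-%d)"
# -C to the parent dir so archive paths stay relative, not absolute
tar czf "$OUT_DIR/docker-$STAMP.tar.gz" \
    -C "$(dirname "$DOCKER_DIR")" "$(basename "$DOCKER_DIR")"
echo "wrote $OUT_DIR/docker-$STAMP.tar.gz"
```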

How have you set up your provisioning script? Any special services, or just some clever bash scripting?

Old school Ansible at first, then I ditched it for Cloudbox (an OSS provisioning script for media servers)

Works wonders for me but I believe it's currently stuck on a deprecated Ubuntu release

Any chance of a dark mode version XD? Excalidraw can do that.

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
DNS Domain Name Service/System
HTTP Hypertext Transfer Protocol, the Web
HTTPS HTTP over SSL
IP Internet Protocol
Plex Brand of media server package
SSH Secure Shell for remote terminal access
SSL Secure Sockets Layer, for transparent encryption
TCP Transmission Control Protocol, most often over IP
VPN Virtual Private Network
VPS Virtual Private Server (opposed to shared hosting)
k8s Kubernetes container management package
nginx Popular HTTP server

11 acronyms in this thread; the most compressed thread commented on today has 15 acronyms.

[Thread #473 for this sub, first seen 2nd Feb 2024, 05:25]

How do you like CrowdSec? I've used it on a tiny VPS (2 vCPU / 1 GB RAM) and it hogs my poor machine. I also found it to have a bit of a learning curve compared to fail2ban (which is much simpler, but doesn't play well with Caddy by default).

Would be happy to see your Caddy / Crowdsec configuration.

The crowdsec agent running on my homelab (8 Cores, 16GB RAM) is currently sitting idle at 96.86MiB RAM and between 0.4 and 1.5% CPU usage. I have a separate crowdsec agent running on the Main VPS, which is a 2 vCPU 4GB RAM machine. There, it's using 1.3% CPU and around 2.5% RAM. All in all, very manageable.

There is definitely a learning curve to it. When I first dove into the docs, I was overwhelmed by all the new terminology, and wrapping my head around it was not super straightforward. Now that I've had some time with it though, it's become more and more clear. I've even written my own simple parsers for apps that aren't on the hub!

What I find especially helpful are features like explain, which allow me to pass in logs and simulate which step of the process picks that up and how the logs are processed, which is great when trying to diagnose why something is or isn't happening.

The crowdsec agent running on my homelab is running from the docker container, and uses pretty much exactly the stock configuration. This is how the docker container is launched:

  crowdsec:
    image: crowdsecurity/crowdsec
    container_name: crowdsec
    restart: always
    networks:
      socket-proxy:
    ports:
      - "8080:8080"
    environment:
      DOCKER_HOST: tcp://socketproxy:2375   # reach the docker API via the socket proxy
      COLLECTIONS: "schiz0phr3ne/radarr schiz0phr3ne/sonarr"
      BOUNCER_KEY_caddy: as8d0h109das9d0
      USE_WAL: "true"                       # quoted: compose wants string env values
    volumes:
      - /mnt/user/appdata/crowdsec/db:/var/lib/crowdsec/data
      - /mnt/user/appdata/crowdsec/acquis:/etc/crowdsec/acquis.d
      - /mnt/user/appdata/crowdsec/config:/etc/crowdsec

Then there's the Caddyfile on the LabProxy, which is where I handle banned IPs so that their traffic doesn't even hit my homelab. This is the file:

{
	crowdsec {
		api_url http://homelab:8080
		api_key as8d0h109das9d0
		ticker_interval 10s
	}
}

*.mydomain.com {
	tls {
		dns cloudflare skPTIe-qA_9H2_QnpFYaashud0as8d012qdißRwCq
	}
	encode gzip
	route {
		crowdsec
		reverse_proxy homelab:8443
	}
}

Keep in mind that the two machines are connected via tailscale, which is why I can pass in the crowdsec agent with its local hostname. If the two machines were physically separated, you'd need to expose the REST API of the agent over the web.

I hope this helps clear up some of your confusion! Let me know if you need any further help with understanding it. It only gets easier the more you interact with it!

don't worry, all credentials in the two files are randomized, never the actual tokens

Thanks for the offer! I might take you up on that :-) If you have a Matrix handle and hang out in certain rooms, please DM me and I'll ~~harass~~ reach out to you there.

Hm, I have yet to mess around with Matrix. As with anything fediverse, the increased complexity is a little overwhelming for me, and since I'm not pulled to Matrix by any communities I'm a part of, I haven't yet been forced to make any decisions. I mainly hang out on Discord, if that's something you use.

Somehow I only had issues with CrowdSec. I used it with Traefik, but it would ban me and my family every time they used my selfhosted Matrix instance. I could not figure out why, and it even did that when I tried it on OPNsense without the Traefik bouncer...

I have crowdsec on a bunch of servers. It's great and I love that I'm feeding my data to the swarm.

What software did you use to make this image? It's very well done.

Thank you! It's done in excalidraw.com. Not the most straightforward for flowcharts, took me some time to figure out the best way to sort it all. But very powerful once you get into the flow.

If you're feeling funny, you can download the original image from the catbox link and plug it right back into the site like a save file!

I saved this! Yeah, it seems like a lot of work, but I got inspired again (I had a slight self-hosting burnout and nuked my Raspberry setup about a year ago), so I appreciate it. :) Can I ask what hardware you run this on? Edit: I just wanted to ramble some more: I fired up my rPi4 again just last week and set it up barebones, with just WireGuard, Samba, Jellyfin and Pi-hole+Unbound (so as to not burn myself out again :D )

Glad to have gotten you back into the grind!

My homelab runs on an N100 board I ordered on Aliexpress for ~150€, plus some 16GB Corsair DDR5 SODIMM RAM. The Main VPS is a 2 vCPU 4GB RAM machine, and the LabProxy is a 4 vCPU 4GB RAM ARM machine.

What VPS service do you use/recommend and what's your monthly cost?

I use Hetzner, mainly because of their good uptime, dependable service, and being geographically close to me. It's a "safe bet", if you will. Monthly cost, if we're not counting power usage by the homelab, is about 15 bucks for all three servers.

Remember, the more boxes you have, the more advanced you are as an admin! Once you do this job for money, the challenge is the exact opposite: the fewer parts you have, the better. The more vanilla they are, the better.

Absolutely! To be honest, I don't even want to have countless machines under my umbrella, and constantly have consolidation in mind - but right now, each machine fulfills a separate purpose and feels justified in itself (homelab for large data, main VPS for anything that's operation-critical and can't afford power/network outages, and so on). So unless I find another purpose that none of the current machines can serve, I'll probably scale vertically instead of horizontally (is that even how you use that expression?)

I've seen Caddy mentioned a few times recently, what do you like about it over other tools?

In addition to the other commenter and their great points, here's some more things I like:

  • resource efficient: I'm running all my stuff on low-end servers, and can't afford my reverse proxy wasting gigabytes of RAM (looking at you, NPM)
  • very easy syntax: the Caddyfile uses a very simple, easy to remember syntax. And the documentation is very precise and quickly tells me what to do to achieve something. I tried traefik and couldn't handle the long, complicated tag names required to set anything up.
  • plugin ecosystem: Caddy is written in Go, and is very easy to extend. There are tons of plugins for different functionalities that are (mostly) well documented and easy to use. Building a custom Caddy executable takes one command.

I think the two of you have convinced me to check it out! It is sounding pretty great, so thank you in advance.

I can answer this one, but mainly only in reference to the other popular solutions:

  • nginx. Solid, reliable, uncomplicated - but: reverse proxy semantics have a weird dependency on manually setting up a DNS resolver (why??), and you have to restart the instance if your upstream gets replaced.
  • traefik. I am literally a cloud software engineer, I've been doing Linux networking since 1994 and I've made 3 separate attempts to configure traefik to work according to its promises. It has never worked correctly. Traefik's main selling point to me is its automatic docker proxying via labels, but this doesn't even help you if you also have multiple VMs. Basically a non-starter due to poor docs and complexity.
  • caddy. Solid, reliable, uncomplicated. It will do acme cert provisioning out of the box for you if you want (I don't use that feature because I have a wildcard cert, but it seems nice). Also doesn't suffer from the problems I've listed above.
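The nginx resolver quirk mentioned above looks roughly like this; server names and ports here are hypothetical. Without the `resolver` directive and a variable upstream, nginx resolves the name once at startup and never notices a replaced container:

```nginx
# nginx sketch of the complaint: runtime DNS re-resolution must be set up by hand
server {
    listen 443 ssl;
    resolver 127.0.0.11 valid=30s;    # e.g. Docker's embedded DNS, re-check every 30s
    set $upstream http://homelab:8443; # variable forces per-request resolution
    location / {
        proxy_pass $upstream;
    }
}
```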

Fully agree with this summary. Traefik also gave me a hard time initially, but once you have the quirks worked out, it works as promised.

Caddy is absolutely on my list as an alternative, but the lack of docker label support is currently the main roadblock for me.

May I present to you: Caddy but for docker and with labels so kind of like traefik but the labels are shorter 👏 https://github.com/lucaslorentz/caddy-docker-proxy

Jokes aside, I did actually use this for a while and it worked great. The concept of having my reverse proxy config at the same place as my docker container config is intriguing. But managing labels is horrible on unraid, so I moved to classic caddy instead.
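For a sense of the label style, a caddy-docker-proxy service might be declared like this (service name and domain are made up; fragment only, the proxy container itself isn't shown):

```yaml
# compose sketch: reverse proxy config lives as labels on the container it fronts
services:
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      caddy: media.mydomain.com
      caddy.reverse_proxy: "{{upstreams 8096}}"
```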

Nice catch and thanks for sharing. Will definitely check it out.


I feel so relieved reading that about traefik. I briefly set that up as a k8s ingress controller for educational purposes. It's unnecessarily confusing, brittle, and the documentation didn't help. If it's a pain for people in the industry that makes me feel better. My next attempt at trying out k8s I'll give Kong a shot.

I really like solid, reliable, and uncomplicated. The fun part is running the containers and VMs, not spending hours on a config to make them accessible.

I have traefik running on my kubernetes cluster as an ingress controller and it works well enough for me after finagling it a bit. Fully automated through ansible and templated manifests.

Heh. I am, as I said, a cloud sw eng, which is why I would never touch any solution that mentioned ansible, outside of the work I am required to do professionally. Too many scars. It's like owning a pet raccoon, you can maybe get it to do clever things if you give it enough treats, but it will eventually kill your dog.

Care to share some war stories? I have it set up where I can completely destroy and rebuild my bare metal k3s cluster. If I start with configured hosts, it takes about 10 minutes to install k3s and get all my services back up.

Sure, I mean, we could talk about

  • dynamic inventory on AWS means the ansible interpreter will end up with three completely separate sets of hostnames for your architecture, not even including the actual DNS name. if you also need dynamic inventory on GCP, that's three completely different sets of hostnames, i.e. they are derived from different properties of the instances than the AWS names.
  • btw, those names are exposed to the ansible runtime graph via different names i.e. ansible_inventory vs some other thing, based on who even fuckin knows, but sometimes the way you access the name will completely change from one role to the next.
  • ansible-vault's semantics for when things can be decrypted and when they can't lead to complete nonsense solutions, like a yaml file with normal contents where individual strings are encrypted and base64-encoded inline within the yaml, and others are not. This syntax doesn't work everywhere. The opaque contents of the encrypted strings can sometimes be treated as traversable yaml and sometimes cannot.
  • ansible uses the system python interpreter, so if you need it to do anything that uses a different Python interpreter (because that's where your apps are installed), you have to force it to switch back and forth between interpreters. Also, the python setting in ansible is global to the interpreter meaning you could end up leaking the wrong interpreter into the role that follows the one you were trying to tweak, causing almost invisible problems.
  • ansible output and error reporting is just a goddamn mess. I mean look at this shit. Care to guess which one of those gives you a stream which is parseable as json? Just kidding, none of them do, because ansible always prefixes each line.
  • tags are a joke. do you want to run just part of a playbook? --start-at. But oops, because not every single task in your playbook is idempotent, that will not work, ever, because something was supposed to happen earlier on that didn't. So if you start at a particular tag, or run only the tasks that have a particular tag, your playbook will fail. Or worse, it will work, but it will work completely differently than in production because of some value that leaked into the role you were skipping into.
  • Last but not least, using ansible in production means your engineers will keep building onto it, making it more and more complex, "just one more task bro". The bigger it gets, the more fragile it gets, and the more all of these problems rear their heads.
  • Dynamic inventory. I haven't used it with a cloud API before, but I have used it against the kube API and it was manageable. Are you saying that through kubectl the node names are different depending on which cloud, and it's not uniform? Edit: Oh, you're talking about the VMs, doh

  • I’ve tried ansible vault and didn’t make it very far… I agree that thing is a mess.

  • Thank god I haven't run into interpreter issues; that sounds like hell.

  • Ansible output is terrible, no argument there.

  • I don’t remember the name for it, but I use parameterized template tasks. That might help with this? Edit: include_tasks.

  • I think this is due to the lack of a good IDE for taking in the whole scope of the playbook, which could be a condemnation of ansible, or just a sign that we need better abstraction layers for this complex thing we are trying to manage the unmanageable with.

Really all of these have solutions, but they're constantly biting you and slowing down development and requiring people to be constantly trained on the gotchas. So it's not that you can't make it work, it's that the cost of keeping it working eats away at all the productive things you can be doing, and that problem accelerates.

The last bullet is perhaps unfair; any decent system would be a maintainable system, and any unmaintainable system becomes less maintainable the bigger your investment in it. Still, it's why I urge teams to stop using it as soon as they can, because the problem only gets worse.

You urge teams to stop using it [ansible?] as soon as they can? What do you recommend to use instead?

Well people use ansible for a wide variety of things so there's no straightforward answer. It's a Python program, it can in theory do anything, and you'll find people trying to do anything with it. That said, some common ways to replace it include

  • you need terraform or pulumi or something for provisioning infrastructure anyway, so a ton of stuff can be done that way instead of using ansible. Infra tools aren't really the same thing, but there are definitely a few neat tricks you can do with them that might save you from reaching for ansible.
  • Kubernetes + helm is a big bear to wrestle, but if your company is also a big bear, it's worth doing. K8s will also solve a lot of the same problems as ansible in a more maintainable way.
  • Containerization of components is great even if you don't use kubernetes.
  • if you're working at the VM level instead of the container level, cloud-init can allow you to take your generic multipurpose image and make it configure itself into whatever you need at boot. Teams sometimes use ansible in the cloud-init architecture, but it's usually doing only a tiny amount of localhost work and no dynamic inventory in that role, so it's a lot nicer there.
  • maybe just write a Python program or even a shell script? If your team has development skills at all, a simple bespoke tool to solve a specific problem can be way nicer.
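A cloud-init user-data file for the "generic image configures itself at boot" pattern can be sketched like this; all package names, paths and the compose service are hypothetical:

```yaml
#cloud-config
# First-boot self-configuration sketch for a generic VM image
package_update: true
packages:
  - docker.io
write_files:
  - path: /opt/docker/compose.yml
    content: |
      services:
        vaultwarden:
          image: vaultwarden/server
          ports:
            - "8080:80"
runcmd:
  - docker compose -f /opt/docker/compose.yml up -d
```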

Very insightful. I definitely need to check out cloud-init as that is one thing you mentioned I have practically no experience with. Side note, I hate other people’s helm with a passion. No consistency in what is exposed, anything not cookie cutter and you’re customizing the helm chart to the point it’s probably easier to start with a custom template to begin with, which is what I started doing!


I see everyone else has already chimed in on what's so great about Caddy (because it is!). One thing that has been a thorn in my side, though, is the lack of fail2ban integration, since Caddy moved on from the old common log format to more modern formats. So if you want to use an IPS/IDS, you'll either have to find a creative hack to make it work with fail2ban or rely on more modern (and resource-heavier) solutions such as CrowdSec.

You can install the log transformer plugin for Caddy and have it produce a readable log format for fail2ban: https://github.com/caddyserver/transform-encoder
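With that plugin compiled in, the log config might look roughly like this; the transform template is adapted from the plugin's README (verify against your plugin version), and the site block is hypothetical:

```
mydomain.com {
	log {
		output file /var/log/caddy/access.log
		format transform `{request>remote_ip} - {request>user_id} [{ts}] "{request>method} {request>uri} {request>proto}" {status} {size}` {
			time_format "02/Jan/2006:15:04:05 -0700"
		}
	}
	reverse_proxy homelab:8443
}
```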

I had this setup on my VPS before I moved to a k3s setup. I will take a look at how to migrate my fail2ban setup to the new server.

Cool, thanks for this! As a user of Caddy through Docker, I suppose I need to find a way to build a docker image to be able to do this?

Sometimes new simple technologies make things simple - but only as long as one uses them the way they were intended... 🙃

I think so, but if you check the official image you can definitely find out how to include custom plugins in it. I think the documentation might mention a thing or two about it too.
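The pattern documented for the official image is a two-stage build; the plugin path below assumes the transform-encoder repo linked above:

```dockerfile
# Build a custom caddy binary with the plugin, then drop it into the stock image
FROM caddy:2-builder AS builder
RUN xcaddy build --with github.com/caddyserver/transform-encoder

FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```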


What is a seedbox? Is it part of the homelab, or a service like the VPSes?

It's basically a VPS that comes with torrenting software preinstalled. Depending on the hoster and package, you'll be able to install all kinds of webapps on the server. Some even enable Plex/Jellyfin on the more expensive plans.

How do you do the sshfs mount, tracker and search queries? Is that over tailscale?

The rclone mount works via SSH credentials. Torrent files and tracker searches run over simple HTTPS, since both my torrent client and jackett expose public APIs for these purposes, so I can just enter the web address of these endpoints into the apps running on my homelab.
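An rclone-over-SSH setup along those lines might look like this; the remote name, host and paths are all made up:

```
# Define an sftp remote for the seedbox, then mount its download dir locally
rclone config create seedbox sftp host seedbox.example.com user me key_file ~/.ssh/id_ed25519
rclone mount seedbox:/downloads /mnt/seedbox --vfs-cache-mode writes --daemon
```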

Sidenote, since you said sshfs mount: I tried sshfs, but it had significantly lower copy speeds than rclone mount. Might have been a misconfiguration, but it was more time-efficient to use rclone than to debug my sshfs connection speed.

I have noticed very slow speeds with sshfs as well. I’ll have to give rclone mount over ssh a try. Thanks!

What is the proxy in front of crowdsec for?

If you're referring to the "LabProxy VPS": So that I don't have to point a public domain that I (plan to) use more and more in online spaces to my personal IP address, allowing anyone and everyone to pinpoint my location. Also, I really don't want to mess with the intricacies of DynDNS. This solution is safer and more reliable than DynDNS and open ports on my router thats not at all equipped to fend off cyberspace attacks.

If you're referring to the caddy reverse proxy on the LabProxy VPS: I'm pointing domains that I want to funnel into my homelab at the external IP of the proxy VPS. The caddy server on that VPS reads these requests and reverse-proxies them onto the caddy-port from the homelab, using the hostname of my homelab inside my tailscale network. That's how I make use of the tunnel. This also allows me to send the crowdsec ban decisions from the homelab to the Proxy VPS, which then denies all incoming requests from that source IP before they ever hit my homelab. Clean and Safe!

"pinpoint" is a bit hyperbolic. Country, state and maybe city can be pretty good, at least in the US.

It's fine if that's important to you to hide, but entirely unnecessary for most people.

Maybe. But I've read some crazy stories on the web. Some nutcases go very far to ruin an online stranger's day. I want to be able to share links to my infrastructure (think photos or download links) without having to worry that the underlying IP will be abused by someone who doesn't like me for whatever reason. Maybe that's just me, but it makes me sleep more soundly at night.

Thanks, but I meant the HAProxy in your homelab.

Oh, that! That app proxies docker socket connections over a TCP channel, which gives more granular control over which app gets access to which functionality of the docker socket. Directly mounting the socket into an app technically grants full root access to the host system in case of a breach, so this is the advised way to do it.
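A common way to run such a proxy is the HAProxy-based tecnativa/docker-socket-proxy image (an assumption on my part; it matches the `tcp://socketproxy:2375` DOCKER_HOST in the compose file above). Fragment only; the top-level networks definition isn't shown:

```yaml
# Socket-proxy sketch: only whitelisted API sections are exposed over TCP,
# the real socket stays mounted read-only in this one container.
services:
  socketproxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1   # allow read access to container info
      POST: 0         # deny state-changing requests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      socket-proxy:
```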

What did you use to chart this? And nicely done.

Excalidraw. Reading is hard. (Yeah, I missed that it was mentioned in the thread)

Excalidraw is nice. Also, I want to throw in a mention for mermaid.live (Mermaid.js). A little less flexibility, but it's nice. There's also kroki.io, which hosts a lot of these types of apps.

This is oddly similar to some informal workups I've done for our work network.

Nice work 👍.

Sorry if someone already asked this, but do you have any tutorials or guides that you used and found helpful for starting out? I have some small experience with nginx and such, but I would definitely need to follow along with something that tells me what to do and what each part does in an infrastructure like yours, haha

That's a tough one. I've pieced this all together from countless guides for each app itself, combined with tons of reddit reading.

There are some sources that I can list though:

I've been dabbling in self-hosting recently and found that ChatGPT can help you set up a lot if you don't get annoyed and keep refining your prompts. It even writes out your docker compose files for you, and you can ask it questions about what things mean and what's linked to what. If you do try it out though, avoid giving personal info like passwords in the chat.

I just have a UniFi firewall, a Synology DiskStation, and a Linux server running everything. It provides torrenting, video streaming with Plex, file sharing, game server hosting, music hosting, and more, and I don't ever have to mess with it :). This is impressive, but I don't know if I would want to support it personally.

I'd love to have everything centralized at home, but my net connection tends to fail a lot, and I don't want critical services (AdGuard, Vaultwarden and a bunch of others that aren't listed) to be running off of flaky internet, so those will remain in a datacenter. Other stuff might move around, or maybe not. Only time will tell - I'm still at the beginning of my journey, after all!

Fair. I'm lucky enough to be able to get business internet at home so I have a static IP and 99.9% uptime. My plex watchers and game hosting players know that sometimes around 3am, they might be booted when my networking gear auto updates itself, haha

Since nobody else asked about this, why ruTorrent over the other typical download clients?

Pretty sure ruTorrent is a typical download client. The real reason is that it came preinstalled and I never had a reason to change it ¯\_(ツ)_/¯

You’re usually stuck with what your seedbox provider gives you.

btw why did you choose tailscale over zerotier

I heard about tailscale first, and haven't yet had enough trouble to attempt a switch.

Huh, I thought ZeroTier was more popular. I love it, but their Android app sucks - it hasn't received a single large update since Android 5 and constantly keeps disconnecting.

Are you talking about the Tailscale app or the ZeroTier app? Because the TS Android app is the one thing I'm somewhat unhappy about, since it does not play nice with the private DNS setting.

I'm talking about ZeroTier's app

Tailscale is stupid easy to set up, free for the first 100 devices, and supports 3 custom domains.

ZeroTier is open source and free with up to 25 nodes per network, and supports custom IP assignments (in custom ranges, with the option to have multiple subnets per network), custom DHCP, managed DNS, and multiple custom managed routes (with the option to point to a custom gateway), plus traffic flow rules.

For example, here are the rules I have set up for my "gaming" network that I use to play LAN games with my friends (they only allow IPv4, ARP and IPv6 traffic, and prevent clients from self-assigning IP addresses):

route settings page:
my "personal" network (which just links all of my personal devices together) exists in 172.16.0.0/24 and auto-assigns ipv4 addresses in 172.16.0.101-172.16.0.199 range using dhcp (but i have configured custom ip addresses for each device anyway), and ipv6 is auto-assigned using RFC4196.

This is needlessly overcomplicated and bloated, just FYI - but if you enjoy complexity for the sake of complexity, then go for it.

I can't say I'm in support of bringing discussion of illegal content to Lemmy, but you do you.

Happy to learn what illegal content is on display and being discussed here?

I'm sure OP only torrents unlicensed media.

Torrents are assumed illegal, and discussion of them likewise, by the confused parent comment.

Nothing illegal is being discussed.

But I'm happy to talk about Jolly Roger.