Does "Selfhosted" mean you actually have a server at home?
I'm trying to better understand hosting a Lemmy instance. Lurking discussions, it seems like some people are hosting from the cloud or a VPS. My understanding is that it's better to future-proof by running your own home server, so that you keep the data and have the most control over hardware, software, etc., and that by hosting an instance on a cloud or VPS you are offloading the data / information to a 3rd party.
Are people actually running their own actual self-hosted servers from home? Do you have any recommended guides on running a Lemmy Instance?
I haven't got any piece of hardware that was sold with the first name "Server".
But there's this self-built PC in my room that's been running 24/7 without needing a reboot in years...
Well, technically a "server" is a machine dedicated to "serving" something, like a service or website or whatever. A regular desktop can be a server, it's just not built as robustly as a "real" server.
There are, though, reasons to stray from certain consumer products for server equipment.
Yeah I'd stay away from Mac too... but seriously most modern laptops can disable any sleep/hibernation on lid close
My go-to lately is the Lenovo Tiny. You can pick them up super cheap with 6-12 month warranties, throw in some extra RAM and a new drive. Haven't had any fail on me yet.
You should think before releasing dangerous information on the internet!
You can get a 2core 8GB / 240GB for 75€!!
Uh oh, I think I'll have to buy one now...
This is my little setup at the moment. Each has an 8500T CPU, 32GB RAM, 2TB NVMe and a 1TB SATA SSD, all running in a Proxmox cluster
Edit: also check out the Dell Micro or the HP... uh, I want to say it's the G6 Micro? You might need to search for what it's actually called
Heh, this is awesome 😅
Thanks :D the frame and all parts are self designed and 3d printed... was a fun project
The whole thing runs from just 2 power cables with room for another without adding any extra power cables
Not at all overkill? :-D
Future proofing, or is it really used? I don't know Proxmox, is it some docker launcher thingy?
Very cool anyways!
Proxmox is like ESXi: it lets you set up virtual machines. So you can fire up a virtual Linux machine and allocate it, say, 2GB of RAM and limit it to 2 cores of the CPU, or give it the whole lot, depending on what you need to do
Having them in a cluster lets you move virtual machines between the physical hosts and keep complete copies, so if one goes down the next can just start up
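From the Proxmox shell that looks roughly like this (the VM ID, storage pool name, and node name here are made up for illustration):

```shell
# Create a VM with 2 cores and 2GB of RAM (ID 100 is arbitrary)
qm create 100 --name test-vm --cores 2 --memory 2048 --net0 virtio,bridge=vmbr0

# Give it a 32GB disk on a storage pool called local-lvm
qm set 100 --scsi0 local-lvm:32

# Boot it
qm start 100

# In a cluster, a running VM can be moved to another node live
qm migrate 100 node2 --online
```

Most people do the same thing through the web UI, but the CLI shows what's actually happening underneath.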
It is a little overkill, I'm probably only using about 20% of its resources but it's all for a good cause. I'm currently unable to work due to kidney failure but I'm working towards a transplant. If I do get a transplant and can return to work, being able to say "well this is my home setup and the various things I know how to do" looks a lot better than "I sat on my ass for the last 4 years so I'm very rusty"
This whole setup cost me about $1000aud and uses 65-70w on average
Hey good luck man!
Good idea, just sitting around isn't good for mental health either.
So back to tech :-) it's like docker/Kubernetes but with VMs, right? What are the good/bad things concerning VMs vs Docker?
BTW that's not a lot of power consumption!
And yeah, if it's not overkill then you are morally obliged to search for ways to make it so, right :-) ?!
Cheers
Docker/Kubernetes and VMs are similar in that they are all virtualisation, but the similarity kinda ends there. Love them or hate them, each has its own important role in IT infrastructure.
First off, Docker itself needs a host operating system to run. Secondly, Docker runs containers: each image is built on a cut-down version of an operating system, generally to perform one specific task or run one specific application. The environment is preconfigured to work exactly as intended, so generally speaking you don't get the whole "but it works on my machine"
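As a concrete sketch of that "cut-down OS, one job" idea, a minimal Dockerfile looks something like this (the base image and `app.py` are just placeholders):

```dockerfile
# Start from a slim base image instead of a full OS install
FROM python:3.12-slim

# Bake the application and its environment into the image
WORKDIR /app
COPY app.py .

# One container, one job: run this application
CMD ["python", "app.py"]
```

Because everything the app needs is baked into the image, it runs the same way on any machine with Docker installed.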
Kubernetes I'm not the most qualified to speak to, but pretty much someone said "ok docker is great but we want redundancy, scalability, etc" and made kubernetes.
A vm is a full virtual machine. You can give it virtual harddisks, virtual network cards, etc. You then install a full operating system on it, could be windows or Linux or whatever you need.
From there you can install Docker if that's what you want, or install specific apps. This is the first difference: if you install an app directly instead of as a docker container, you need to make sure you have all the prerequisites met, all the correct compatibility, etc. It's up to you to make sure your system is right for the software.
Another major difference is that docker containers are all seen on the network as coming from the host machine's IP.
Whereas the network views each VM as its own device, giving each its own IP (if using DHCP) and allowing things like VLANs.
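A quick illustration of that networking difference, with a made-up LAN address: a container is reached through a port published on the host, not through an address of its own.

```shell
# Publish host port 8080 to port 80 inside an nginx container
docker run -d --name web -p 8080:80 nginx

# From another machine on the LAN, the service appears to live on the
# Docker host itself (e.g. http://192.168.1.10:8080), whereas a VM would
# show up as its own device with its own IP
```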
As for my setup, I have 3 VMs with docker servers, each with between 20-30 docker containers, 3 VMs running AdGuard DNS, 1 VM acting as a Tailscale entry point, then a few application-specific VMs. It's handy just being able to fire up a blank Ubuntu instance to play with new software, and if anything goes wrong, just delete the whole machine and start fresh.
Then for storage behind it all, I have a qnap ts453d with 4x 8tb drives.
Then outside my home, I have 2x Oracle-hosted VMs, one hosting about 22 websites and all the stuff they need, one acting as a tunnel into my home services since I'm behind a CGNAT, and then another physical server located in the local data centre running email for a few small businesses and myself
Thank you for the thorough explanation!
I think a VM for me would only be a Windows on Linux for like Photoshop and 3ds Max :-)
Docker though seems interesting for a simple user like me.
Thanks again !
Cheers
No worries, in terms of docker, if you want to see some of the more useful docker things along with explanation of how to get them running, check out https://noted.lol and https://mariushosting.com
Noted has a lot of writeups on various projects that are nearly entirely docker based. Marius focuses more on docker projects on Synology, but for many of them you can go to the project home for the generic docker instructions and just read his posts for the project descriptions and initial setup guides
So I just laid my hands on a 55€ + 6.49€ shipping Lenovo ThinkCentre M910q, i5 vPro 6th gen, 8GB/256GB SSD
It's crazy. I mean not long ago all I could even dream about was expensive slow computers with small harddrives :-D
So I'll dedicate it to docker "stuff", thank you :-) I really like the docker idea, and running on like "any" Linux kernel (if I got that right) is so awesome. I have mostly had to work on Windows at work, and it's such an ever-changing and closed system that it's infuriating in the long run.
Thanks for the links to all the examples, I have to try it all out. But if I want to "dockerise" stuff myself, how do I decide how it accesses the outside world? Like, if I fire up a docker image which plays music (if that's even possible?), it has to have access to the disk, sound drivers, maybe interactive stuff etc. on the host PC, right?
Cheers
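For what it's worth on the question above: containers only see what you explicitly hand them, via bind mounts for files and device passthrough for hardware. A sketch (the image name is hypothetical):

```shell
# Bind-mount a host music folder (read-only) and pass through the ALSA
# sound device so the container can use both
docker run -d \
  --name player \
  -v /home/me/Music:/music:ro \
  --device /dev/snd \
  some/music-player
```

In practice most self-hosted music apps skip `--device` entirely and just serve a web UI over a published port, so the audio plays in your browser rather than on the server's sound card.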
Doesn't that mean, tiny fans howling all day long?
Only if you've got it cranking all day. I've got a couple of Tiny (they're Micro, which is the same thing) systems that are silent when idle and nearly silent when running at less than a load average of 5. It's only if I spin up a heavy, CPU-bound process that their single fan spins fast enough to be noticeable.
So don't use one as a Mining rig, but if you want something that runs x64 workloads at 9-20 watts continuously, they're pretty good.
Even running at full speed mine are pretty quiet but I also have 80mm silent low rpm fans blowing air across them too which seems to help
I also recently went through them all with fresh thermal paste
Just set it to "do nothing" when lid is closed. That's all.
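On a Linux laptop that setting lives in systemd-logind's config; a minimal sketch:

```ini
# /etc/systemd/logind.conf — then: systemctl restart systemd-logind
[Login]
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
```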
FWIW, this free app solves for that issue well; I have several clammed Macs running it right now:
https://apps.apple.com/us/app/amphetamine/id937984704?mt=12
just break the screen off. call it a headless server.
100%, and this is why businesses don't use laptops as servers... typically 😂.
How do you install security updates etc. without restarting?
Linux servers prompt you to restart after certain updates, do you just not restart?
Enterprise distributions can live-patch the running kernel, making a reboot unnecessary for most system updates.
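For example, Ubuntu's Canonical Livepatch and RHEL's kpatch both apply kernel fixes to the running kernel (these commands assume the respective service is already set up):

```shell
# Ubuntu: enable Livepatch (needs a token from ubuntu.com/livepatch)
sudo pro enable livepatch
canonical-livepatch status

# RHEL and friends: list the live kernel patches currently loaded
sudo kpatch list
```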
Microsoft needs to get its shit together because reboots were a huge point of contention when I was setting up automated patching at my company.
Good luck with that, I have all reboot options off but yesterday it just rebooted like that. Thanks MS.
You can just restart... with modern SSDs it takes less than a minute. No one is going to have a problem with 1 minute of downtime per month or so.
The right way (tm) is to have the application deployed with high availability. That is, every component should have more than one server serving it. Then you can take them offline for a reboot sequentially, so that there's always a live one serving users.
This is taken to an extreme in cloud best practices, where we don't even update servers. We update the package versions in some source code file. From that, we build a new OS image that contains the updated packages along with the application the server will run, ready to boot. Then, in some sequence, we kill the server VMs running the old image and create new ones running the new one. Finally, the old VMs are deleted.
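The workflow above, as rough pseudocode-style shell — `bake-image`, `drain`, `recreate-from-image`, and `wait-until-healthy` are hypothetical stand-ins for whatever image builder and orchestrator you actually use:

```shell
# 1. Bake a fresh OS image with updated packages plus the application
bake-image --packages packages.lock --app ./app --output web-v42.img

# 2. Replace the fleet one VM at a time so users are always served
for vm in web-1 web-2 web-3; do
  drain "$vm"                          # stop routing traffic to this VM
  recreate-from-image "$vm" web-v42.img
  wait-until-healthy "$vm"             # only then move to the next one
done

# 3. Old VMs are deleted; nothing was ever patched in place
```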
Actually I am lazy with updates on the "bare metal" Debian/Proxmox. It does nothing other than host several VMs. Even the hard disks belong to a VM that provides all the file shares.
Do you have any recommended resources for getting started? I do have a secondary PC...
First, you need a use-case. It's worthless to have a server just for the sake of it.
For example, you may want to replace google photos by a local save of your photos.
Or you may want to share your movies across the home network. Or be able to access important documents from any device at home, without hosting them on any kind of cloud storage
Or run a bunch of automation at home.
TL;DR choose a service you use and would like to replace by something more private.
Get a copy of VMware ESXi or Proxmox and load it on that secondary PC.
Proxmox absolutely changed the game for me learning Linux. Spinning up LXC containers in seconds to test out applications or simply to learn different Linux OSs without worrying about the install process each time has probably saved me days of my life at this point. Plus being able to use and learn ZFS natively is really cool.
I've been using ESXi (the free copy) for years. Same situation. Being able to spin up virtual machines or take a snapshot before a major change has been priceless. I started off with smaller NUC computers and have upgraded to full-fledged desktops.
I don't know where to start today, honestly.
I started with books a long time ago:
https://www.amazon.com/Algorithms-Data-Structures-Niklaus-Wirth/dp/0130220051
https://www.amazon.de/-/en/Andrew-S-Tanenbaum/dp/0132126958
https://www.amazon.de/Programming-Language-Prentice-Hall-Software/dp/0131103628
The simple way is to Google 'yunohost' and install that on your spare machine, then just play around with what that offers.
If you want, you could also dive deeper by installing Linux (e.g. Ubuntu), then installing Docker, then spinning up Portainer as your first container.
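That path boils down to only a few commands; this follows Portainer's standard quick-start (Ubuntu's docker.io package shown for the Docker step):

```shell
# Install Docker from the distro repos
sudo apt update && sudo apt install -y docker.io

# Run Portainer CE: a named volume keeps its data, and mounting the
# Docker socket lets it manage your other containers
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

# Then browse to https://<your-host>:9443 to create the admin account
```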
Well, there are specific hardware configurations that are designed to be servers. They probably don't have graphics cards but do have multiple CPUs, and are often configured to run many active processes at the same time.
But for the most part, "server" is more related to the OS configuration. No GUI, strip out all the software you don't need, like browsers, and leave just the software you need to do the job that the server is going to do.
As to updates, this also becomes much simpler, since you don't have a lot of the crap that has vulnerabilities. I helped manage a computer department with about 30 servers, many of which were running Windows (gag!). One of the jobs was to go through the huge list of Microsoft patches every few months. The vast majority of these "require a user to browse to a certain website" in order to activate. Since we simply didn't have anyone using browsers on them, we could ignore those patches until we did a big "catch up" patch once a year or so.
Our Unix servers, HP-UX or AIX, simply didn't have the same kind of patches coming out. Some of them ran for years without a reboot.
Years? Lol you should update that software.