Is it practically impossible for a newcomer to self-host without using centralised services, and without getting DDoSed or hacked?

Maroon@lemmy.world to Selfhosted@lemmy.world – 90 points –

I understand that people enter the world of self hosting for various reasons. I am trying to dip my toes in this ocean to try and get away from privacy-offending centralised services such as Google, Cloudflare, AWS, etc.

As I spend more time here, I realise that it is practically impossible, especially for a newcomer, to set up any usable self-hosted web service without relying on these corporate behemoths.

I wanted to have my own little static website and alongside that run Immich, but I find that without Cloudflare, Google, and AWS, I run the risk of getting DDoSed or hacked. Also, since the physical server will be hosted at my home (to avoid AWS), there is a serious risk of infecting all devices at home as well (currently reading about VLANs to avoid this).

Am I correct in thinking that avoiding these corporations is impossible (and should I make peace with this situation), or are there ways to circumvent these giants and still have a good experience self-hosting and using web services, even as a newcomer (all without draining my pockets too much)?

Edit: I was working on a lot of misconceptions and still have a lot to learn. Thank you all for your answers.


This is nonsense. A small static website is not going to be hacked or DDoSed. You can run it off a cheap ARM single-board computer on your desk, no problem at all.

What?

I've popped up a web server and within a day had so many hits on the router (thousands per minute) that performance tanked.

Yea, no, any exposed service will get hammered. Frankly I'm surprised that machine I set up didn't get hacked.

Don't leave SSH on port 22 open as there are a lot of crawlers for that, otherwise I really can't say I share your experience, and I have been self-hosting for years.

Am I missing something? Why would anyone leave SSH open outside the internal network?

All of my services have SSH disabled unless I need to do something, and then I only do it locally, and disable as soon as I'm done.

Note that I don't have a VPS anywhere.

How do you reach into your server with SSH disabled without lugging a monitor and keyboard around?

My firewall, server, NAS and all my services have web GUIs. If I need SSH access all I have to do is enable it via web GUI, do what I need to, disable again.

If push comes to shove, I do have a portable monitor and a keyboard in storage if needed, but have not had the need to use them yet.

Some people want to be able to reach their server via SSH when they are not at home, but yes I agree in general that is not necessary when running a real home server.

Then use WireGuard to get into your local network. Simple as. Everything that doesn't need to be accessible to the public (document servers, SSH, internal tools, etc.) can be reached over the VPN, while the port-forwarded servers sit behind a reverse proxy, TLS, and an authentication layer like Authelia/Authentik for the things that only a small group needs to access.
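For anyone who wants a concrete starting point, here is a minimal sketch of that WireGuard setup, assuming a Debian/Ubuntu home server; the subnet, port and file names are purely illustrative:

    # Minimal WireGuard "road warrior" setup on the home server.
    sudo apt install wireguard

    # One key pair for the server, one for the client (laptop/phone).
    wg genkey | tee server.key | wg pubkey > server.pub
    wg genkey | tee client.key | wg pubkey > client.pub

    # Server config: listen on UDP 51820, hand the client 10.8.0.2.
    sudo tee /etc/wireguard/wg0.conf > /dev/null <<EOF
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = $(cat server.key)

    [Peer]
    PublicKey = $(cat client.pub)
    AllowedIPs = 10.8.0.2/32
    EOF

    sudo systemctl enable --now wg-quick@wg0
    # On the router, forward UDP 51820 to this box; nothing else needs exposing.

The client config mirrors this (its own private key, the server's public key, and Endpoint set to your home IP or dynamic DNS name on port 51820); SSH and the other internal tools are then only reachable over the tunnel.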

Sorry, but there is 1 case in 10000 where a home user would have to have publicly exposed SSH and 9999 cases of 10000 where it is not needed at all and would only be done out of laziness or lack of knowledge of options.

Yeah, I guess I've never needed to do that. That may change as I'm thinking of moving all my services from UnRaid to ProxMox to leave UnRaid for storage only.

I guess that'll bring me back here soon enough.

I've been self-hosting a bunch of stuff for over a decade now, and have not had that issue.

Except for a matrix server with open registration for a community that others not in the community started to use.

Yes, my biggest mistake was leaving a VPS DNS server wide open. It took months for it to get abused though.

The only explanation is that you left stuff exposed. I've had services running for years without a problem.

I can't say I've seen anything like that on the webservers I've exposed to the internet. But it could vary based on the IP you have if it's a target for something already I suppose.

Frankly I’m surprised that machine I setup didn’t get hacked.

How could it if all you had was a basic webserver running?

One aspect is how interesting you are as a target. What would a possible attacker gain by getting access to your services or hosts?

The danger of getting hacked is there, but you are not Microsoft, Amazon or PayPal. Expect login attempts and port scans from actors who map out the internet. But I doubt someone would spend much effort to break into your hosts if you do not make it easy (as in scripted automatic exploits and known-password login attempts easy).

DDOS protection isn't something a tiny self hosted instance would need (at least in my experience).

Firewall your hosts, maybe use a reverse proxy and only expose the necessary services. Use secure passwords (different for each service), add fail2ban or the like if you're paranoid. Maybe look into MFA. Use a DMZ (yes, VLANs could be involved here). Keep your software updated so that exploits don't work. Have backups if something breaks or gets broken.
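As a rough illustration of the firewall plus fail2ban part (Debian/Ubuntu flavoured; the two open ports are just the usual reverse proxy ones, adjust to your setup):

    # Default-deny firewall with only the reverse proxy ports open.
    sudo apt install ufw fail2ban
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable

    # fail2ban: temporarily ban IPs that repeatedly fail SSH logins.
    sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
    [sshd]
    enabled  = true
    maxretry = 5
    bantime  = 1h
    EOF
    sudo systemctl restart fail2ban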

In my experience the biggest danger to my services is my laziness. It takes steady low-level effort to keep the instances updated and running. (Yes, there are automated update mechanisms, e.g. unattended-upgrades, but there are also backwards-compatibility-breaking changes in the software which require manual intervention from me.)

...maybe use a reverse proxy...

+1 post.

I would definitely suggest a reverse proxy. Caddy should be trivial in this use case.
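For example (hostnames and paths are placeholders, and Caddy obtains and renews the TLS certificates on its own), a Caddyfile for a static site plus Immich could look roughly like this:

    # Hypothetical /etc/caddy/Caddyfile: Caddy terminates TLS and forwards
    # each hostname to the matching local service.
    sudo tee /etc/caddy/Caddyfile > /dev/null <<'EOF'
    www.example.org {
        root * /var/www/site
        file_server
    }

    photos.example.org {
        reverse_proxy 127.0.0.1:2283   # Immich's default port
    }
    EOF
    sudo systemctl reload caddy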

cheers,

Reverse proxies don't add security.

lol

eta:

Is it ok if I cite f5.com over some.random.lemmy.dude?

Who is f5.com?

I don't get why they say that. Sure, maybe the attackers don't know that I'm on Ubuntu 21.2, but if they come across https://paperless.myproxy.com and the Paperless-ngx website opens, I'm pretty sure they know they just visited a Paperless install and can try the exploits they know. Yes, the last part was a bit snarky, but I am truly curious how it helps. I've looked at proxies multiple times for my self-hosted stuff, but I never saw really practical examples of what to do and how to set one up to add a safety/security layer, so I always fall back to my VPN and leave it at that.

Not every path is mapped with the reverse proxy.

I'm positive that F5's marketing department knows more than me about security and has no ulterior motive in making you think you're more secure.

Snark aside, they may do some sort of WAF in addition to being a proxy. Just "adding a proxy" does very little.

So, you've gone from:

reverse proxies don't add security

to:

"adding a proxy" does very little

What's next?

Give up. You don't know what the fuck you're talking about.

I have a dozen services running on a myriad of ports. My reverse proxy setup allows me to map hostnames to those services and expose only 80/443 to the web, plus the fact that an entity needs to know a hostname now instead of just an exposed port. IPS signatures can help identify abstract hostname scans and the proxy can be configured to permit only designated sources. Reverse proxies also commonly get used to allow for SSL offloading to permit clear text observation of traffic between the proxy and the backing host. Plenty of other use cases for them out there too, don't think of it as some one trick off/on access gateway tool
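To make the hostname mapping concrete, here is a sketch with nginx (hostnames and backend ports are placeholders, not a recommendation). Requests for unknown hostnames hit the catch-all server and get dropped, so a scanner probing the bare IP learns nothing about what sits behind it:

    sudo tee /etc/nginx/conf.d/apps.conf > /dev/null <<'EOF'
    server {
        # catch-all: close the connection for anything not using a known hostname
        listen 80 default_server;
        return 444;
    }
    server {
        listen 80;
        server_name wiki.example.org;
        location / { proxy_pass http://127.0.0.1:3000; }
    }
    server {
        listen 80;
        server_name photos.example.org;
        location / { proxy_pass http://127.0.0.1:2283; }
    }
    EOF
    sudo nginx -t && sudo systemctl reload nginx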

My reverse proxy setup allows me to map hostnames to those services and expose only 80/443 to the web,

The mapping is helpful but not a security benefit. The latter can be done with a firewall.

Paraphrasing - there is a bunch of stuff you can also do with a reverse proxy

Yes. But that's no longer just a reverse proxy. The reverse proxy isn't itself a security tool.

I see a lot of vacuous security advice in this forum. "Install a firewall", "install a reverse proxy", etc. This is mostly useless advice. Yes, do those things but they do not add any protection to the service you are exposing.

A firewall only protects you from exposing services you didn't want to expose (e.g. NFS or some other service running on the same system), and the rproxy just allows for host based routing. In both cases your service is still exposed to the internet. Directly or indirectly makes no significant difference.

What we should be advising people to do is "use a valid ssl certificate, ensure you don't use any application default passwords, use very good passwords where you do use them, and keep your services and servers up-to-date".

A firewall allowing port 443 in and an rproxy happily forwarding traffic to a vulnerable server is of no help.

They're a part of the mix. Firewalls, Proxies, WAF (often built into a proxy), IPS, AV, and whatever intelligence systems one may like work together to do their tasks. Visibility of traffic is important as well as the management burden being low enough. I used to have to manually log into several boxes on a regular basis to update software, certs, and configs, now a majority of that is automated and I just get an email to schedule a restart if needed.

A reverse proxy can be a lot more than just host based routing though. Take something like a Bluecoat or F5 and look at the options on it. Now you might say it's not a proxy then because it does X/Y/Z but at the heart of things creating that bridged intercept for the traffic is still the core functionality.

You can't map the same port to different services with a firewall; a reverse proxy lets you open one port and have multiple services behind it. A firewall can protect exposed services: one, I geoip-block every country but my own; two, I use CrowdSec to block what they consider malicious IPs.

May not add security in and of itself, but it certainly adds the ability to have a little extra security. Put your reverse proxy in a DMZ, so that only it is directly facing the intergoogles. Use the firewall to expose only certain ports and destinations to your origins. Install a single wildcard cert and easily cover any subdomains you set up. There's even nginx configuration files out there that will block URLs based on regex pattern matches for suspicious strings. All of this (and probably a lot more I'm missing) adds some level of layered security.
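For the curious, that kind of pattern blocking can be as small as a snippet like this (the patterns are only examples of common probe paths):

    # /etc/nginx/snippets/block-junk.conf: refuse well-known probe URLs
    # (WordPress logins, leaked .env files, exposed .git directories, ...).
    sudo tee /etc/nginx/snippets/block-junk.conf > /dev/null <<'EOF'
    location ~* (wp-login\.php|xmlrpc\.php|\.env$|\.git) {
        deny all;
    }
    EOF
    # then add "include snippets/block-junk.conf;" inside the relevant server { } block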

Put your reverse proxy in a DMZ, so that only it is directly facing the intergoogles

So what? I can still access your application through the rproxy. You're not protecting the application by doing that.

Install a single wildcard cert and easily cover any subdomains you set up

This is a way to do it but not a necessary way to do it. The rproxy has not improved security here. It's just convenient to have a single SSL endpoint.

There's even nginx configuration files out there that will block URLs based on regex pattern matches for suspicious strings. All of this (probably a lot more I'm missing) adds some level of layered security.

If you do that, sure. But that's not the advice given in this forum is it? It's "install an rproxy!" as though that alone has done anything useful.

For the most part, people in this forum seem to think that "direct access to my server" is unsafe, but that if you simply put a second hop in the chain you can sleep easily at night. And bonus points if that rproxy is a VPS or in a separate subnet!

The web browser doesn't care if the application is behind one, two or three rproxies. If I can still get to your application and guess your password or exploit a known vulnerability in your application then it's game over.

The web browser doesn't care if the application is behind one, two or three rproxies. If I can still get to your application and guess your password or exploit a known vulnerability in your application then it's game over.

Right!?

Your castle can have many walls of protection but if you leave the doors/ports open, people/traffic just passes through.

So I've always wondered this: how does a Cloudflare tunnel offer protection from the same thing?

They may offer some sort of WAF (web application firewall) that inspects traffic for potentially malicious intent. Things like SQL injection. That's more than just a proxy though.

Otherwise, they really don't.

A reverse proxy is used to expose services that don't run on exposed hosts. It does not add security but it keeps you from adding attack vectors.

They usually provide load balancing too, also not a security feature.

Edit: in other words, what he's saying is true and equivalent to "RAID isn't backup".


All reverse proxies I have used do rudimentary DDoS protection: rate limiting. Enough to keep your local script kiddie at bay, but not advanced stuff.

You can protect your ssh instance with rate limiting too but you'll likely do this in the firewall and not the proxy.
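Two common ways to do that rate limiting at the firewall, purely as a sketch (the thresholds are illustrative, and the nftables variant assumes you already have an inet filter table with an input chain):

    # ufw's built-in connection limiting: deny an IP that opens 6 or more
    # new connections to port 22 within 30 seconds.
    sudo ufw limit 22/tcp

    # Roughly the same idea as a raw nftables rule.
    sudo nft add rule inet filter input tcp dport 22 ct state new \
        limit rate over 6/minute drop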


DDoS and hacking are like taxes: you should be so lucky as to have to worry about them, because that means you're wildly successful. Worry about getting there first because that's the hard part.

You don't have to be successful to get hit by bots scanning for known vulnerabilities in common software (e.g. WordPress), but OP won't have to worry about that if they keep everything up to date. However, this is also necessary when renting a VPS from said centralised services.

Well he specified static website, which rules out WP, but yes. If your host accepts posts (in the generic sense, not necessarily specifically the http verb POST) that raises tons of other questions, that frankly were already well addressed when I made my post.

he specified static website, which rules out WP

Oops missed that

EDIT: And I missed Immich too

Drink less paranoia smoothie...

I've been self-hosting for almost a decade now; never bothered with any of the giants. Just a domain pointed at me, and an open port or two. Never had an issue.

Don't expose anything you don't share with others; monitor the things you do expose with tools like fail2ban. VPN into the LAN for access to everything else.

Use any old computer you have lying around as a server. Use Tailscale to connect to it, and don’t open any ports in your home firewall. Congrats, you’re self-hosting and your risk is minimal.
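That really is about as short as it sounds. On the server (and on each device you want to reach it from) it is roughly:

    # Official Tailscale install one-liner, then join your tailnet.
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up

    # The machine is now reachable at its tailnet IP / MagicDNS name from your
    # other logged-in devices only; no port forwards on the home router.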

Exactly what I do, and it works like a dream. I had a VPS and nginx to proxy a domain to it, but I got rid of it because I really had no use for it; the Tailscale method worked so well.

I've been thinking of trying this (or using Caddy instead of nginx) so I could get Nextcloud running on an internal server but still have an external entry point (spousal approval). But after setting up the subdomain, starting Caddy, and watching how many times that subdomain got scanned from various IPs all over the world, I figured that's not a good plan. And I'm a nobody and don't promote my domain anywhere.

I feel like you have the wrong idea of what hacking actually is... But yes, as long as you don't do anything too stupid, like forwarding all of your ports or going without any sort of firewall, the chances of you getting hacked are very low...

As for DDOSing, you can get DDOSed with or without self hosting all the same, but I wouldn't worry about it.

Exactly. Piss off a script kiddie and get DDoSed whether you're self-hosting or not.

DDOS against a little self hosted instance isn't really a concern I'd have. I'd be more concerned with the scraping of private information, ransomware, password compromises, things of that nature. If you keep your edge devices on the latest security patches and you are cognizant on what you are exposing and how, you'll be fine.

A VPS with fail2ban is all you need, really. Oh, and don't make SSH accounts where the username is the password. That's what I did once, but the hackers were nice: they closed the hole and then just used it to run an IRC client because the network and host were so stable.

Found out by accident; too bad they left their IRC username and password in cleartext. Was a fun week or so messing around with their channels.

The DDoS hype on this site is so overplayed. "Oh my god, my little self-hosted services are going to get attacked." Is it technically possible? Yes, but it hasn't been my experience.

DDoSing costs the attacker time and resources, so there has to be something in it for them.

Random servers on the internet are subject to lots of drive-by vuln scans and brute-force login attempts, but not DDoS, which is more costly to execute.

99% of people think they are more important than they are.

If you THINK you might be the victim of an attack like this, you're not going to be a victim of an attack like this. If you KNOW you'll be the victim of an attack like this on the other hand...

Many of us also lived through the era where any 13 year old could steal Mommy's credit card and rent a botnet for that ezpz

My MC server a decade ago was tiny and it still happened every few months when we banned some butthurt kid

Getting DDOSed or hacked is very very rare for anyone self hosting. DDOS doesn't really happen to random people hosting a few small services, and hacking is also rare because it requires that you expose something with a significant enough vulnerability that someone has a way into the application and potentially the server behind it.

But it's good to take some basic steps like an isolated VLAN as you've mentioned already, but also don't expose services unless you need to. Immich for example if it's just you using it will work just fine without being exposed to the internet.

Self hosting can save a lot of money compared to Google or aws. Also, self hosting doesn't make you vulnerable to DDOS, you can be DDOSed even without a home server.

You don't need VLANs to keep your network secure, but you should make sure that no self-hosted service is unnecessarily opened up to the internet, and make sure that all your services are up to date.

What services are you planning to run? I could help suggest a threat model and security policy.

Hey, not to be harsh or anything, but did you actually do your research? Plenty of people self-host websites at their house without AWS, Google or Cloudflare and it works fine.

You can. I am lucky enough to not have been hacked after about a year of this, and I use a server in the living room. There are plenty of guides online for securing a server. Use common sense, and also look up threat modeling. You can also start hosting things locally and only host to the interwebs once you learn a little more. Basically, the idea that you need cloudflare and aws to not get hacked is because of misleading marketing.

Man, if you're lucky after a year, I must be super duper lucky with well over a decade.

Why would anyone DDoS you? DDoS costs money and/or effort; no one is going to waste that on you. Maybe DoS, but not DDoS, and the troll will go away after some time as well. There's no gain in DoSing you. Why would anyone hack your static website? For the lulz? If everything is HTTPS-encrypted on your local net, how does a hacker infect everything on your network?

DDOS can happen just from a script hammering on an exposed port trying to brute force credentials.

I host a handful of Internet facing sites/applications from my NAS and have had no issues. Just make sure you know how to configure your firewall correctly and you’ll be fine.

Firewall, Auth on all services, diligent monitoring, network segmentation (vlans are fine), and don't leave any open communications ports, and you'll be fine.

Further steps would be intrusion detection/banning like CrowdSec for whatever apps you leave world-accessible. Maybe think about running a BSD host and using jails.

FreeBSD here with jails; very smooth running and low maintenance. Can't recommend it enough.

Love jails. My server didn't move with me to Central America, and I miss Free/TrueNAS jails

Dw truenas core is dead/EoL so it's either truenas scale (Debian) or freebsd now

EoL? They're releasing betas regularly and announced 13.3 for Q2. You mean how they're sort of winding down, with SCALE taking the bulk of dev cycles? Not much to change with the platform, and security fixes will be backported to CORE. I think SCALE still doesn't fit my use case, but when it does, and jails go away with CORE, I'll shed a tear and pour one out for my homie.

It’s very possible. If you carefully manage your attack surface and update your software regularly, you can mitigate your security risks quite a bit.

The main problem is going to be email. I have found no reliable way to send email that does not start with “have someone else do it for you” or “obtain an IP block delegation”.

Email isn't that hard when you have a static IP, either from your network provider or via a VPS. Then set up SPF, DKIM and DMARC and you're good to go (at least for simple use cases like notifications; when you want to send out thousands of emails, you might need more).
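Roughly what those three records look like (example.org, the IP and the "mail" selector are placeholders); once they're published in DNS you can sanity-check them with dig:

    dig +short TXT example.org
    #   "v=spf1 ip4:203.0.113.10 -all"                               <- SPF
    dig +short TXT _dmarc.example.org
    #   "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"  <- DMARC
    dig +short TXT mail._domainkey.example.org
    #   "v=DKIM1; k=rsa; p=<public key from your DKIM signer>"       <- DKIM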

You can simply set up a VPN for your home network (e.g. Tailscale, Netbird, Headscale, etc.) and you won't have to worry about attacks. Public services require a little more work, you will need to rely on a service from a company, either a tunnel (e.g. Tailscale funnel) or a VPS.

Public services require a little more work, you will need to rely on a service from a company, either a tunnel (e.g. Tailscale funnel) or a VPS.

I have been hosting random public services for years publicly and it hasn't been an issue.

Edit: I might have misunderstood the definition of public. I have hosted stuff publicly, however everything was protected by a login screen, so it wasn't something a random person could make use of.

mmm netbird seems cool, any experience with it?

No, I'm currently using Tailscale but have been considering switching to Netbird to not be reliant on Tailscale.

I have servers on DigitalOcean and Linode and also one in my basement, and have had no problems. I do have all services behind NPM (Nginx Proxy Manager; not to suggest it's a panacea) and use HTTPS/SSH for everything (not to suggest HTTPS/SSH are either). My use case could be different from yours - my immediate family are my only consumers - but I have been running the same services in those locations for a few years now without issue.

If you are behind CGNAT and use some tunnel (WireGuard, Tailscale, etc.) to access your services, which are running in Docker containers, the attack surface is almost nonexistent.

Use a firewall like OPNsense and you'll be fine. There's a Crowdsec plugin to help against malicious actors, and for the most part, nothing you're doing is worth the trouble to them.

If you are afraid of being DDoSed (which is very unlikely), Cloudflare has free DDoS protection. You can put some, but not all, things behind their proxy.

Also, instead of making things publicly available, look into using a VPN. WireGuard with "wireguard easy" makes this very simple.

VLANs do not magically make your network more secure, but when set up correctly they can increase security a lot if something has already penetrated the network, and they also help streamline a network and allow or deny access to some parts of it.

Other than the low chance of you being targeted, I would say only expose your services through something like WireGuard. Other than the port being open, attackers won't know what it's for; WireGuard doesn't respond if you don't immediately authenticate.

I've self-hosted Home Assistant for a few years, with external access through Cloudflare now because it's been so stable, but previously used DuckDNS, which was a bit shit if I'm honest.

I got into self hosting proper earlier this year, I wanted to make something that I could sail the 7 seas with.

I use Tailscale for everything.

The only open port on my router is for Plex because I'm a socialist and like to share my work with my friends.

Just keep it all local and use it at home. If you wanna take some of your media outside with you, download it onto your phone before you leave

If your needs are fairly low on the processing side, you can snag a cloud VPS on LowEndBox for five or six dollars a month. Quality is highly variable ofc, but I'm reasonably happy with mine.

No AWS, etc. (though I don't know offhand where the actual box lives), SSH access defaults to a key, and the rest (firewall, reverse proxy if you like, and all the other best practices) are but an apt-get and a quick SearXNG search away, with plenty of working configs to find and dissect.

Incidentally, SearXNG is a good place to start: dead easy to get rolling, and a big step towards degoogling your life. Stand it up, throw a pretty standard config at nginx, and do a certbot --nginx -d search.mydomain.com; that's all there is to it.
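One hedged way that sequence can look end to end, assuming Docker and the searxng/searxng container image (which listens on port 8080 internally), with the domain as a placeholder:

    # Run SearXNG locally, bound to localhost only.
    docker run -d --name searxng --restart unless-stopped \
        -p 127.0.0.1:8888:8080 searxng/searxng

    # Plain HTTP vhost; certbot rewrites it for HTTPS in the next step.
    sudo tee /etc/nginx/conf.d/searx.conf > /dev/null <<'EOF'
    server {
        listen 80;
        server_name search.mydomain.com;
        location / { proxy_pass http://127.0.0.1:8888; }
    }
    EOF
    sudo nginx -t && sudo systemctl reload nginx
    sudo certbot --nginx -d search.mydomain.com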

YMMV with more complex apps, but there is plenty of help to be had.

Oh, and decide early on if anonymity is a goal, or if you're OK tying your real-life identity to your server if someone cares to look. Register domains and make public-facing choices accordingly.

Either choice is acceptable if it’s the right one for you, but it’s hard to change once you pick a path.

I’m a big fan of not hosting on prem simply because it’s one more set of cables to trip over, etc. But for a latte a month in hosting costs, it’s worth it to me.

If you do it right you shouldn't get hacked. Even if you do you can keep good immutable backups so you can restore. Also make sure you monitor everything for bad behavior or red flags.

If your SSH is using key authentication and you don't have anything silly as an attack vector, you should be grand.

People who get compromised are the ones who expose a password-authentication service with a short, memorable password.

Of course security comes in layers, and if you're not comfortable hosting services publicly, use a VPN.

However, a few simple rules go a long way:

  1. Treat any machine or service on a local network as if it were publicly accessible. That will prevent you from accidentally leaving the auth off, or leaving weak/default passwords in place.

  2. Install services in a way that makes them easy to patch. For example, prefer phpMyAdmin from the Debian repo instead of just copy-pasting the latest official release into the www folder. If you absolutely need the latest release, try a container maintained by a reasonable adult. (No offense to the handful of kids I've known providing solid code, knowledge and bug reports for the general public!)

  3. Use unattended-upgrades, or an alternative auto-update mechanism on RHEL-based distros, if you don't want to become a full-time sysadmin. The increased security is absolutely worth the very occasional breakage. (A minimal example of this, and of the backup point below, follows the list.)

  4. You and your hardware are your own worst enemies. There are tons of guides on what a proper backup should look like, but don't let that discourage you. Some backup is always better than NO backup, even if it's just a copy of critical files on an external USB drive. You can always go crazy later, and use the snapshotting abilities of your filesystem (btrfs, ZFS), build a separate backup server, move it to a different physical location... the sky really is the limit here.
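Minimal versions of points 3 and 4, Debian/Ubuntu flavoured (paths and mount points are placeholders):

    # Point 3: enable automatic security updates.
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # Point 4: the "some backup beats no backup" version - mirror critical
    # data to an external drive (adjust source and destination to your setup).
    rsync -a --delete /srv/important/ /media/usb-backup/important/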

It depends on what your level of confidence and paranoia is. Things on the internet get scanned constantly; I actually get routine reports from one of the scanners that I noticed in the logs and contacted via an associated website. Just take it as expected that someone out there is going to try and see if admin/password gets into some login screen if it's facing the web.

For the most part, so long as you keep things updated and use reputable, maintained software for your system, the larger risk is someone clicking a link in the wrong email rather than someone haxxoring in from the public internet.

Not my experience so far with my single service I've been running for a year. It's making me even think of opening up even more stuff.

I've been self-hosting for 2 or 3 years and haven't been hacked, though I fully expect it to happen eventually (especially if I start posting my blog in places). I'd suggest self-hosting a VPN to get into your home network and not making your apps accessible via the internet unless 100% necessary. I also use Docker containers to minimise the apps' access to my full system. Best of luck!

Yeah na, put your home services in Tailscale, and for your VPS services set up the firewall for HTTP, HTTPS and SSH only, no root login, use keys, and run fail2ban to make hacking your SSH expensive. You're a much smaller target than you think - really it's just bots knocking on your door and they don't have a profit motive for a DDOS.

From your description, I'd have the website on a VPS, and Immich at home behind TailScale. Job's a goodun.

Just changing the SSH port to a non-standard port would greatly reduce that risk. Disable root login and password login, use VLANs and containers whenever possible, update your services regularly, and you will be mostly fine.
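Spelled out as a drop-in config, assuming a reasonably recent OpenSSH that reads /etc/ssh/sshd_config.d/ (the port number is just an example):

    sudo tee /etc/ssh/sshd_config.d/10-hardening.conf > /dev/null <<'EOF'
    Port 2222
    PermitRootLogin no
    PasswordAuthentication no
    PubkeyAuthentication yes
    EOF
    sudo systemctl restart ssh    # the service is called "sshd" on some distros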

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
CGNAT Carrier-Grade NAT
HTTP Hypertext Transfer Protocol, the Web
HTTPS HTTP over SSL
IP Internet Protocol
NAS Network-Attached Storage
NAT Network Address Translation
NFS Network File System, a Unix-based file-sharing protocol known for performance and efficiency
Plex Brand of media server package
SSH Secure Shell for remote terminal access
SSL Secure Sockets Layer, for transparent encryption
TLS Transport Layer Security, supersedes SSL
VPN Virtual Private Network
VPS Virtual Private Server (opposed to shared hosting)
nginx Popular HTTP server

13 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

[Thread #832 for this sub, first seen 26th Jun 2024, 10:25] [FAQ] [Full list] [Contact] [Source code]

It is easy to get hacked if you make stupid mistakes. Just don't make them.

This is honestly true. Just follow good security practices

Is this some sort of inside joke I am not aware of? I always see these kinds of replies and I never understand them. Why even write anything if you don't have anything meaningful to add to the conversation? This is a genuine question to both of you. I mean, yes, it might be true that everything is fine and dandy if you follow good security practices, but how does that help a beginner? It's like saying driving a car with a manual transmission is easy: you just need to know the numbers from 1 to 6 and that a higher number makes the car go faster. Even though this might be technically true, it doesn't help anybody.