When do I actually need a firewall?

Kalcifer@sh.itjust.works to Linux@lemmy.ml – 92 points –

I've spent some time searching this question, but I have yet to find a satisfying answer. The majority of answers that I have seen state something along the lines of the following:

  1. "It's just good security practice."
  2. "You need it if you are running a server."
  3. "You need it if you don't trust the other devices on the network."
  4. "You need it if you are not behind a NAT."
  5. "You need it if you don't trust the software running on your computer."

The only answer that makes any sense to me is #5. #1 leaves a lot to be desired, as it advocates for doing something without thinking about why you're doing it -- it is essentially a non-answer. #2 is strange -- why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router's NAT at port 80 to open that server's port to the public. What difference does it make to then have another firewall that needs to be port forwarded? #3 is a strange one -- what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there's nothing to access. #4 feels like an extension of #3 -- only, in this case, it is most likely a larger group that the device is exposed to. #5 is the only one that makes some sense; if you install a program that you do not trust (you don't know how it works), you don't want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be the door to get into your device, or a spy on your device's actions.

If anything, a firewall only seems to provide extra precautions against mistakes made by the user, rather than actively preventing bad actors from getting in. People seem to treat it as if it's acting like the front door to a house, but this analogy doesn't make much sense to me -- without a house (a service listening on a port), what good is a door?


Seriously, unless you are extremely specialized and know exactly what you are doing, IMHO the answer is: Always (and even being extremely specialized, I would still enable a firewall. :-P)

Operating systems nowadays are extremely complex, with a lot of moving parts. There are security-relevant bugs in your network stack and in all the applications you run. There might be open ports on your computer you did not even think about, and unless you monitor your local open ports 24/7, you don't know what is open.

First of all, you can never trust other devices on a network. There is no way to know if they are compromised. You can also never trust the software running on your own computer - just look at CVEs: even without malicious intent, your software is not secure and never will be.

As soon as you are part of a network, your computer is exposed, whether it's a desktop or a laptop, and there are plenty of drive-by attacks against Linux happening 24/7.

Your need for a firewall mostly depends on your threat model, but simply refusing incoming connections by default is trivial and increases your security by a great margin. Further, setting a rate limit on failed connection attempts for open ports like SSH, if you use such services, is another big improvement for security. (... and of course disabling password authentication, yada yada.)
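For reference, a rate limit like that can be sketched in nftables; the chain layout, port, and the 4/minute figure below are illustrative assumptions, not a drop-in config:

```nft
# sketch of an nftables input chain that throttles new SSH connections;
# names and numbers are examples, adapt them to your own setup
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept    # keep replies to traffic we initiated
        iif "lo" accept                        # always allow loopback
        tcp dport 22 ct state new limit rate 4/minute accept  # cap new SSH attempts
    }
}
```

Note the limit here is global; per-source-address limiting or a tool like fail2ban is a common refinement.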

That said, security obviously has to be seen in context. The only snake oil that I know of is virus scanners, but that's another story.

People who claim you don't need a firewall are making at least one of the following wrong assumptions:

  • Your software is secure - demonstrably wrong, as proven by CVEs
  • You know exactly what is running/reachable on your computer - this might be correct for very small specialized embedded systems, but even for those one must always assume security-relevant bugs in software/hardware/drivers

Security is a game, and no usable system can be absolutely secure. With firewalls, you can (hopefully) increase the price for successful attacks, and that is important.

You may also want to check up on regulations and laws of your country.

In Belgium, for instance, I am responsible for any and all attacks originating from my PC. If you were hacked and said hackers used your computer to stage an attack, the burden of proof is upon you. So instead of hiring very expensive people to trace the real source of an attack originating from your own PC, enabling a firewall just makes sense, besides making it harder on hackers…

That's a strange law. That's like saying one should be held responsible for a thief stealing their car and then running over someone with it (well, perhaps an argument could be made for that, but I would disagree with it).

Seriously, unless you are extremely specialized and know exactly what you are doing, IMHO the answer is: Always

In what capacity, though? I see potential issues with both server firewalls, and client firewalls. Unless one wants their devices to be offline, there will always be at least one open port (for example, inbound on a server, and outbound on a client) which can be used as an attack vector.

Perhaps I don't understand your point. If I understand it in the sense that firewalls have issues too, and that there are always attack vectors against usable systems, then I fully agree with your remark. My point is simply that, as a rule of thumb, a firewall usually mitigates a lot of attack vectors (see my remark about LIMIT for SSH ports elsewhere). Especially for client systems, a firewall which blocks all incoming traffic by default is IMHO a high payoff for almost no effort.

My point is simply that, as a rule of thumb, a firewall usually mitigates a lot of attack vectors

The only quibble that I would have with your statement is that I would say that it's better to word it as it "mitigates a lot of potential attack vectors", but, other than that, I completely agree with what you said.

Other comments have hit this, but one reason is simply to be an extra layer. You won’t always know what software is listening for connections. There are obvious ones like web servers, but less obvious ones like Skype. By rejecting all incoming traffic by default and only allowing things explicitly, you avoid the scenario where you leave something listening by accident.
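A "reject by default, allow explicitly" stance like that maps directly onto a default-deny input policy. A minimal nftables sketch (chain names and the example port are assumptions, not a recommendation for any particular system):

```nft
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;   # drop everything inbound...
        ct state established,related accept               # ...except replies we asked for
        iif "lo" accept                                   # and local loopback traffic
        # services you deliberately expose go here, e.g.:
        # tcp dport 80 accept
    }
}
```

Anything left listening by accident then stays unreachable unless you explicitly add a rule for it.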

If anything, a firewall only seems to provide extra precautions against mistakes made by the user, rather than actively preventing bad actors from getting in.

You say that like that isn't providing value. How many services are listening on a port on your system right now? Run 'ss -ltpu' and prepare to be surprised.

Security isn't about "this will make you secure" it's about layers of protection and probability. It's a "good practice" because people make mistakes and having a second line of defense helps reduce the odds of a hack.

Security isn’t about “this will make you secure” it’s about layers of protection and probability. It’s a “good practice” because people make mistakes and having a second line of defense helps reduce the odds of a hack.

AKA Defense In Depth and should be considered for any type of security.

In the military when learning ORM we called this the "swiss cheese" theory.

The more layers of sliced swiss cheese, the fewer holes that go all the way through.

When you expose ports to the Internet, you will get scanned. It's honestly interesting to set up a web server with the default page on it and see how quickly you get hits. You don't need to register a DNS name or be part of an index anywhere. If you open a port (and your router forwards it), then you WILL get scanned for vulnerabilities. It's like going naked in the forest: you sure can do that, but clothes help, even if it's "just" against ivy or random critters. And obviously, the longer you run naked or leave a computer exposed, the more likely you are to pick up a bad bug.

Can confirm. As an example, I'm developing a game server that runs a raw socket connection over the Telnet port. Within 10 minutes of opening the port, I reliably get requests trying to use Telnet to enable command mode or login as admin. People are constantly scanning.

Ya. And sometimes hosting companies run active scans on customer machines. I get a crazy number of login attempts over ssh. I ❤️ fail2ban

For this specific argument, what difference does it make if that specific device has a firewall in addition to the NAT that it is behind? To expose the device to the internet, a port needs to be opened on the router which points to a specific port on the device. When a request is made to that port, only that port is accessed. Some third party can't start poking around at other ports on the device, as there is no route to them from the router.

True, but there are also DMZ options that allow you to expose an entire machine. I imagine someone who is not familiar with networking or firewalls might "give up" and use that "solution" if they don't manage to expose just the right port on just the right machine. I'm sure I did that at some point when I was tired of tinkering.

Also, if the single port that is exposed has vulnerabilities, then scanning the other ports might not be necessary. If the vulnerability on the opened port allows some kind of access, even without escalating privileges (i.e., no root access), localhost queries could perhaps be made, and from there one might escalate through another service that wouldn't otherwise be exposed.

Finally, on your initial question, I'd argue that if the firewall rules are equivalent then the outcome is equivalent, but if they are a bit more refined than "just" opening or closing a port, e.g. dropping traffic that is not from within the LAN (a specific subnet), then going without the firewall might still create risk.
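That subnet refinement can be sketched as a pair of nftables rules; the port and subnet below are made-up examples, substitute your own:

```nft
# allow a service only from the local subnet; 8080 and 192.168.1.0/24
# are hypothetical values for illustration
tcp dport 8080 ip saddr 192.168.1.0/24 accept
tcp dport 8080 drop    # anything from outside the subnet is dropped
```

A router-level port forward alone can't usually express this kind of source-address condition.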

Also, if the single port that is exposed has vulnerabilities, then scanning the other ports might not be necessary. If the vulnerability on the opened port allows some kind of access, even without escalating privileges (i.e., no root access), localhost queries could perhaps be made, and from there one might escalate through another service that wouldn't otherwise be exposed.

For sure, but this is a separate topic. The existence of a firewall is kind of independent of the security of the service listening on the port that it's expected to listen on. If there is a vulnerability in the service, the existence of a packet filtering firewall most likely won't be able to do anything to thwart it.

Finally, on your initial question, I'd argue that if the firewall rules are equivalent then the outcome is equivalent, but if they are a bit more refined than "just" opening or closing a port, e.g. dropping traffic that is not from within the LAN

Fair point! Still, though, I'm not super convinced of the efficacy of a packet filtering firewall running on a device in preventing malicious connections from itself, were a service running on it to become compromised. The only way that I can see it guaranteeing protection from this scenario is if it drops all packets, but, at that point, it's just an offline system -- no networking -- so the issue essentially no longer applies.

You always need a firewall, no other answers.

Why do you think Windows and most Linux distributions come packaged with one?

You always need a firewall, no other answers.

Okay, but why? That's kind of the point of why I made this post, as is stated in the post's body.

To keep your system secure no matter what, you open up only the ports you absolutely need.

People will always make mistakes while configuring software; a firewall is there to make sure those errors are caught. With more advanced firewalls you can even make sure only certain apps have access to the internet, so that only what absolutely needs to connect to the internet does.

In general it's for security, but can also be privacy related depending on how deep you want to get into it.

EDIT: It isn't about not trusting other devices on your network, or software you run, or whether you are running a server. It's about the general security of your system.

With more advanced firewalls you can even make sure only certain apps have access to the internet, so that only what absolutely needs to connect to the internet does.

This sounds very interesting. This would have to be some form of additional layer-7 firewall, right? (As in, it would have to interact with system processes, rather than filtering network packets at layers 3 and 4.) Does this type of firewall have a specific name, or do you perhaps have some examples? I don't think it would be possible with something like nftables, but I could certainly be wrong.

I honestly only know of a windows one called simplewall.

I used to use it to outright block windows telemetry, microsoft services, apps, ...

It also helped me save a lot of bandwidth with regard to Windows and all the stuff that comes preinstalled with it.

I haven't searched for one for Linux, mostly because 90% of the apps I run are CLI tools that don't require an internet connection, but I'm sure one probably exists.

OpenSnitch was recommended to me in this comment. I've set it up, and it seems to be working quite well. While doing some research on the topic, I also came across Portmaster, but, while it does look nice, some of its features are locked behind a paywall, so I'm not interested -- OpenSnitch works just fine!

#2 is strange -- why does it matter?

It doesn't. But if you're running a laptop with a local web server for development, you wouldn't want other devices on, e.g., the coffee shop WiFi to be able to connect to your (likely insecure) local web server, would you?

If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router's NAT at port 80 to open that server's port to the public. What difference does it make to then have another firewall that needs to be port forwarded?

Who is "they"? What about all the other ports?

Imagine a family member visits you and wants internet access in their Windows laptop, so you give them the WiFi password. Do you want that possibly malware infected thing poking around at ports other than 80 running on your server?

Obviously you shouldn't have insecure things listening there in the first place, but you don't always get to choose whether something you're hosting is currently secure, or you may not care too much because it's just on the local network and you didn't expose it to the internet.
This is what defense in depth is about; making it less likely for something to happen or the attack less potent even if your primary protections have failed.

#3 is a strange one -- what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there's nothing to access

Mostly addressed by the above but also note that you likely do have applications listening on ports you didn't know about. Take a look at sudo ss -utpnl.

#5 is the only one that makes some sense; if you install a program that you do not trust (you don't know how it works), you don't want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be the door to get into your device, or a spy on your device's actions.

It's rather the other way around; you don't want the outside world to be able to talk to untrusted software on your computer. To be a classical "door", the application must be able to listen to connections.

OTOH, smarter malware can of course be something like a door by requesting intrusion by itself, so outbound filtering is also something you should do with untrusted applications.

People seem to treat it as if it's acting like the front door to a house, but this analogy doesn't make much sense to me -- without a house (a service listening on a port), what good is a door?

I'd rather liken it to a razor fence around your house, protecting you from thieves even getting near it. Your windows are likely safe from intrusion but they're known to be fragile. Razor fence can also be cut through but not everyone will have the skill or patience to do so.

If it turned out your window could easily be opened from the outside, you'd rather have razor fence in front until you can replace the window, wouldn't you?

But if you're running a laptop with a local web server for development, you wouldn't want other devices on, e.g., the coffee shop WiFi to be able to connect to your (likely insecure) local web server, would you?

This is a fair point that I hadn't considered for the mobile use-case.

Imagine a family member visits you and wants internet access in their Windows laptop, so you give them the WiFi password. Do you want that possibly malware infected thing poking around at ports other than 80 running on your server?

Fair point!

note that you likely do have applications listening on ports you didn't know about. Take a look at sudo ss -utpnl.

Interesting! In my case I have a number of sockets from Spotify and Steam listening on 0.0.0.0. I would assume that these are only available to connections from the LAN?

It's rather the other way around; you don't want the outside world to be able to talk to untrusted software on your computer. To be a classical "door", the application must be able to listen to connections.

OTOH, smarter malware can of course be something like a door by requesting intrusion by itself, so outbound filtering is also something you should do with untrusted applications.

It could also be malicious software that simply makes a request to a remote server -- perhaps even siphoning your local data.

If it turned out your window could easily be opened from the outside, you'd rather have razor fence in front until you can replace the window, wouldn't you?

Fair point!

In my case I have a number of sockets from Spotify and Steam listening on 0.0.0.0. I would assume that these are only available to connections from the LAN?

That's exactly the kind of thing I meant :)

These are likely for things like in-house streaming, LAN game downloads and remote music playing, so you may even want to consider explicitly allowing them through the firewall but they're also potential security holes of applications running under your user that you have largely no control over.

These are likely for things like in-house streaming, LAN game downloads and remote music playing, so you may even want to consider explicitly allowing them through the firewall

I looked up a few of the ports, and yeah an example of one of them was Steam Remote Play.

Even if you do trust the software running on your computer, did you actually fuzz it for vulnerabilities? Heartbleed could steal your passwords even if you ran ostensibly trustworthy software.

So unless you harden the software and prove it's completely exploit-free, then you can't trust it.

Heartbleed could steal your passwords even if you ran ostensibly trustworthy software.

Heartbleed is independent of a firewall though -- it's a protocol vulnerability that was patched into a specific library -- this feels somewhat like a strawman argument.

So unless you harden the software and prove it’s completely exploit-free, then you can’t trust it.

The type of "firewall" that I am referring to operates at layer 3/4. From what I understand, you seem to be describing exploits closer to the application layer.

I'm not saying there would be a Heartbleed 2.0 that you need a firewall against

I'm saying unless you read the code you're running, including the firmware and the kernel, how can you trust there isn't a remote execution exploit?

At work I showed a trivial remote execution using an upload form. If we didn't run php, it wouldn't happen. If the folder had proper .htaccess, it wouldn't happen. If we didn't trust the uploader's MIME type, it wouldn't happen.

There's something to be said about defense in depth. Even if you have some kind of a bug or exploit, the firewall just blocking everything might save you.

I’m saying unless you read the code you’re running, including the firmware and the kernel, how can you trust there isn’t a remote execution exploit?

A packet filtering firewall isn't able to protect against server or protocol exploits directly. Sure, if you know that connections originating from a specific IP are malicious, then you can drop connections originating from that IP, but it will not be able to directly protect against application layer exploits.

There do exist application layer firewalls (an example of which was pointed out to me here (opensnitch)), but those are out of the scope of this post.

Firewall for incoming traffic :

  • If you are a home user with your computer or laptop inside a LAN, you would not really need a firewall, unless you start to use applications which expose their ports on 0.0.0.0 rather than 127.0.0.1 (I believe the Redis server software did this a few years ago) and do not trust other users or devices (smart home devices, phones, tablets, modems, switches, and so on) inside your LAN.

  • If you are running a server with just a few services, for example SSH, SMTP, and HTTPS, some hosting company people I knew argued that no firewall is needed. I am not sure; my knowledge is lacking.
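The 127.0.0.1-versus-0.0.0.0 distinction is easy to demonstrate with plain sockets. A minimal Python sketch (ports are chosen by the kernel; nothing here is specific to Redis):

```python
import socket

# A socket bound to 127.0.0.1 only accepts connections from this machine;
# one bound to 0.0.0.0 listens on every interface, including the LAN.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
loopback.listen()

wildcard = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wildcard.bind(("0.0.0.0", 0))     # reachable from other hosts (firewall permitting)
wildcard.listen()

loop_addr = loopback.getsockname()
wild_addr = wildcard.getsockname()
print(loop_addr)                  # ('127.0.0.1', <port>) - local-only
print(wild_addr)                  # ('0.0.0.0', <port>)   - all interfaces

loopback.close()
wildcard.close()
```

This is exactly the difference that shows up in the local-address column of `ss -utpnl`: a 127.0.0.1 listener is invisible to the LAN, a 0.0.0.0 one is not.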

Application firewalls, watching also outgoing traffic :

If you compare Linux with some other operating systems, you will see that for years an application firewall was nonexistent on Linux. But there is a choice now: OpenSnitch. This can be useful if you run desktop applications that you do not fully trust, or want more control.

If you are a home user with your computer or laptop inside a LAN, you would not really need a firewall, unless you start to use applications which expose their ports on 0.0.0.0 rather than 127.0.0.1

Interestingly, on one of my devices, running ss -utpnl shows quite a number of Spotify and Steam sockets listening on 0.0.0.0. I looked up some of the ports, and, for example, one of the Steam ones was a socket for Remote Play.

But there is a choice now: OpenSnitch

This is really cool! Thank you so much for this recommendation! This pretty much solves what was bugging me about outgoing connections in a layer 3/4 firewall like nftables.

This question reads a bit to me like someone asking, "Why do trapeze artists perform above nets? If they were good at what they did they shouldn't fall off and need to be caught."

Do you really need a firewall? Well, are you intimately familiar with every smidgeon of software on your machine, not just userland ones but also system ones, and you understand perfectly under which and only which circumstances any of them open any ports, and have declared that only the specific ports you want open actually are at every moment in time? Yes? You're that much of a sysadmin god? Then no, I guess you don't need a firewall.

If instead you happen to be mortal like the rest of us who don't read and internalize the behaviors of every piddly program that runs or will ever possibly run on our systems, you can always do what we do for every other problem that is too intensive to do manually: script that shit. Tell the computer explicitly which ports it can and cannot open.

Luckily, you don't even have to start from scratch with a solution like that. There are prefab programs that are ready to do this for you. They're called firewalls.

Tell the computer explicitly which ports it can and cannot open.

Isn't this all rather moot if there is even one open port, though? Say, for example, that you want to mitigate outgoing connections from potential malware that gets installed onto your device. You set a policy to drop all outgoing packets in your firewall; however, you want to still use your device for browsing the web, so you then allow outgoing connections to DNS (UDP, and TCP port 53), HTTP (TCP port 80), and HTTPS (TCP port 443). What if the malware on your device simply pipes its connections through one of those open ports? Is there anything stopping it from siphoning data from your PC to a remote server over HTTP?

The point of the firewall is not to make your computer an impenetrable fortress. It's to block any implicit port openings you didn't explicitly ask for.

Say you install a piece of software that, without your knowledge, decides to spin up an SSH server and start listening on port 22. Now you have that port open as a vector for malware to get in, and you are implicitly relying on that software to fend it off. If you instead have a firewall, and port 22 is not one of your allowed ports, the rogue software will hopefully take the hint and not spin up that server.

Generally you only want to open ports for specific processes that you want to transmit or listen on them. Once a port is bound to a process, it's taken. Malware can't just latch on without hijacking the program that already has it bound. And if that's your fear, then you probably have a lot of way scarier theoretical attack vectors to sweat over in addition to this.

Yes, if you just leave a port wide open with nothing bound to it, either via actually having the port reserved or by linking the process to the port with a firewall rule, and you happened to get a piece of actual malware that scanned every port looking for an opening to sneak through, sure, it could. To my understanding, that's not typically what you're trying to stop with a firewall.

In some regards a firewall is like a padlock. It keeps out honest criminals. A determined criminal who really wants in will probably circumvent it. But many opportunistic criminals just looking for stuff not nailed down will probably leave it alone. Is the fact that people who know how to pick locks exist an excuse to stop locking things because "it's all pointless anyway"?

Once a port is bound to a process, it’s taken. Malware can’t just latch on without hijacking the program that already has it bound.

Is this because the kernel assigns that port to that specific process, so that all traffic at that port is associated with only that process? For example, if you have an SSH server listening on 22, and another malicious program decides to start listening on 22, all traffic sent to 22 will only be sent to the SSH server, and not the malicious program?

EDIT (2024-01-31T01:20Z): While writing this, I came across this stackoverflow answer, which states that when a socket is created it calls some bind() function that attaches it to a port. This makes me wonder how difficult it would be for malware to steal the bound port.

Is this because the kernel assigns that port to that specific process, so that all traffic at that port is associated with only that process?

Yes, that's what ports do. They split your IP connection into 65,536 separate communication lines, that's the main thing, but that is specifically 65,536 1-on-1 lines, not party lines. When a process on your PC reserves port 80, that's it. It's taken. Short of hacking the kernel itself, it cannot be reassigned or stolen until the bound process frees it.

The SO answer you found is interesting; I was not aware that the Linux kernel had a feature that allowed two or more processes to willingly share a single port. But the answer explains that this is an opt-in parameter that the first binding process has to explicitly allow. And even then, traffic is not duplicated to all listening processes. It sounds like it's more of a "first come, first served" to whichever of the processes is free to read the incoming message at the time it arrives, making it more of a load balancing feature that isn't a useful vector for eavesdropping.
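Both behaviours, the normal exclusive bind and the opt-in sharing, can be demonstrated in a few lines of Python on Linux (SO_REUSEPORT is a Linux-specific option; the ports below are picked by the kernel):

```python
import errno
import socket

# 1) A bound port is normally exclusive: a second bind() to it fails.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))              # port 0: kernel picks a free port
port = a.getsockname()[1]

refused = False
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))       # no sharing was requested -> refused
except OSError as e:
    refused = (e.errno == errno.EADDRINUSE)
print("second bind refused:", refused)

# 2) SO_REUSEPORT is the opt-in exception: every socket involved must set
#    it before binding, and the kernel then load-balances new connections.
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
c.bind(("127.0.0.1", 0))
shared = c.getsockname()[1]

d = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
d.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
d.bind(("127.0.0.1", shared))         # succeeds: sharing was agreed up front
print("shared bind ok on port", shared)
```

So a rogue process can't silently attach itself to a port another program already holds; it would have to be part of a group that opted into sharing from the start.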

When you are attacked. OK, so when are you attacked? As soon as you connect to the outside world. So unless you are air-gapped, you need a firewall.

Would you mind defining what you mean by "attacked"?

Scanned for vulnerabilities, and exploited if any are found.

I'm not sure if I understand perfectly the scenario that you are describing, but it appears that you are describing a situation in which one has a device behind a NAT that is running a server, which is port forwarded. In this scenario, the attack vector would depend on the security of the server itself, and is essentially independent of the existence of a firewall. One could potentially drop packets based on source IP, or some other metadata, or behaviour that identifies the connections as malicious, but, generally, unless the firewall drops all incoming connections (essentially creating an offline device), a packet filtering firewall will make no difference to thwarting such exploits.

TempleOS doesn't need one

Out of curiosity, why do you claim that? I know very little about TempleOS's functionality -- I'm essentially only aware of its existence and some of its history.

It doesn't have networking

Haha, yeah, a device without networking capabilities would be rather well protected from attacks originating from a network 😜

It seems that the consensus from all the comments is that you do in fact need a firewall. So my question is: how does that look exactly? A hardware firewall device directly between modem and router? Is using the software firewall on the router enough? Or, additionally, having a software firewall installed on all capable devices on the network? A combination of the above?

And like most things related to Linux on the internet, the consensus is generally incorrect. For a typical home user who isn't opening ports or taking a development laptop to places with unsecured wifi networks, you don't really need a firewall. It's completely superfluous. Anything you do to your PC that causes you genuine discomfort will more than likely be your own fault rather than an explicit vulnerability. And if you're opening ports on your home network to do self-hosting, you're already inviting trouble and a firewall is, in that scenario, a bandaid on a sucking chest wound you self-inflicted.

For a typical home user who isn't opening ports or taking a development laptop to places with unsecured wifi networks, you don't really need a firewall. It's completely superfluous.

A "typical" home user, who I assume is less knowledgeable about technology, is probably the person who would benefit the most from strict firewalls installed on their devices. Such an individual presumably doesn't have the prerequisite knowledge or awareness required to adequately gauge the threats on their network.

Anything you do to your PC that causes you genuine discomfort will more than likely be your own fault rather than an explicit vulnerability.

Would this not be adequate rationale for having contingencies, i.e. firewalls? A risk/threat needn't only be an external malicious actor. One's own mistakes could certainly be interpreted as a potential threat, and are, therefore, worthy of mitigation.

And if you’re opening ports on your home network to do self-hosting, you’re already inviting trouble and a firewall is, in that scenario, a bandaid on a sucking chest wound you self-inflicted.

Well, no, not necessarily. It's important to understand what the purpose of the firewall is. If a device can potentially become an attack vector, it's important to take precautions against that -- you'd want to secure other devices on the network in the off chance that it does become compromised, or secure that very device to limit the potential damage that it could inflict.

A "typical" home user, who I assume is less knowledgeable about technology, is probably the person who would benefit the most from strict firewalls installed on their devices. Such an individual presumably doesn't have the prerequisite knowledge or awareness required to adequately gauge the threats on their network.

They also would not realistically be doing anything that would cause open ports on their machine to serve data to some external application. It's not like someone can just "hack" your computer by picking a random port and weaseling their way in. They have to have some exploitable mechanism on the machine that serves data in a way that's insecure.

Would this not be adequate rationale for having contingencies, i.e. firewalls? A risk/threat needn’t only be an external malicious actor. One’s own mistakes could certainly be interpreted as a potential threat, and are, therefore, worthy of mitigation.

I am assuming that there's a hierarchy of needs in terms of maintaining any Linux system. Whenever you learn how to use something (and you would have to learn how to use a firewall), you are sacrificing time and energy that could be spent learning something else. Knowing how your package manager works, or how to use systemctl, or understanding your file system structure, or any number of pieces of fundamental Linux knowledge is, for a less technically sophisticated user, going to do comparatively more to guarantee the longevity and health of their system than learning how to use a firewall, which is something capable of severely negatively impacting your user experience if you misconfigure it.

In other words: don't mess around with a firewall if you don't know what you're doing. Use your time learning other things first if you're not a technically sophisticated user. I also don't exactly know what "mistakes" you'd be mitigating by installing a firewall if you aren't binding processes to those ports (something a novice user should not be doing anyway).

Well, no, not necessarily. It’s important to understand what the purpose of the firewall is. If a device can potentially become an attack vector, it’s important to take precautions against that – you’d want to secure other devices on the network on the off chance that it does become compromised, or secure that very device to limit the potential damage that it could inflict.

You just wrote that "One’s own mistakes could certainly be interpreted as a potential threat, and are, therefore, worthy of mitigation." The best way of mitigating mistakes is by not making them in the first place, or by not creating a scenario in which you could potentially make them. Prevention is always better than cure.

You should never open ports on your local network. Ever. I don't care if you have firewalls on everything down to your smart thermostat: if you need to expose locally hosted services you should be maintaining a cloud VM or similar cloud based service that forwards connections to the desired service on your internal network via a VPN like Tailscale. Or, even better, just put Tailscale's service on whatever machine you're using that needs access to your personal network. And, yes, if you're doing things like that, you would also want robust firewall protections everywhere. But the firewall simply isn't ever "enough."

Anyway, just my 2 cents. The more you know and do, the greater steps you should take to protect yourself. For someone who knows very little, the most important thing that can help them is knowing more, and there is a hierarchy of learning that will take them from "knowing little" to "knowing much," but they shouldn't/don't need to concern themselves with certain mechanisms before they know enough to reliably use them or mitigate their own mistakes. That said, if you are a new user, you're probably installing a Linux distro that already comes with its own preconfigured firewall that's already running and you just don't know about it. In which case, moot point. If you're not, though, I'm assuming your goal is learning Linux stuff, in which case, I've gone into that.

They also would not realistically be doing anything that would cause open ports on their machine to serve data to some external application.

They may not explicitly do it, no, but I could certainly see the possibility of the software that they use having such a vulnerability, or even a malicious bit of software inadvertently being installed on their device.

In other words: don’t mess around with a firewall if you don’t know what you’re doing. Use your time learning other things first if you’re not a technically sophisticated user. I also don’t exactly know what “mistakes” you’d be mitigating by installing a firewall if you aren’t binding processes to those ports (something a novice user should not be doing anyway).

This sort of skirts around answering the question.

The best way of mitigating mistakes is by not making them in the first place

But mistakes will be made all the same.

Prevention is always better than cure.

This is exactly the point that I am trying to make. Having contingencies in place on the off chance that something doesn't go as expected could certainly be interpreted as "prevention".

You should never open ports on your local network. Ever.

What would be the rationale for this statement?

if you need to expose locally hosted services you should be maintaining a cloud VM or similar cloud based service that forwards connections to the desired service on your internal network via a VPN like Tailscale.

I'm not sure that I understand what issue this would solve. Would the malicious connections not still be forwarded through the VPN to the service? I am quite lacking in knowledge on Tailscale, and how related infrastructure is used in production, so please pardon my ignorance.

Depends on your setup. I got a network-level firewall+router setup between my modem and my LAN. But also, got firewalld (friendly wrapper on iptables) on every Linux device I care about because I don't want to unintentionally expose something to the network.

hm, guess maybe I should find something for Android and my Windows boxes.

(friendly wrapper on iptables)

iptables is deprecated, so it's better to label it as a wrapper for nftables.

I use the firewall built into Proxmox with a device running openwrt


You need to understand the mindset behind running a firewall, and that mindset is that you define with mathematical precision what's possible within the network connectivity of a device, you leave nothing to chance or circumstance, because doing so would be sloppy.

Provided you want to subscribe to this mindset, and that the circumstances of that device warrant it, and that you have the networking knowledge to pull it off, you should in theory start with a DENY policy on everything and open up specific ports for specific users and related connections only. But it's not trivial, and if you're a beginner it's best done directly on the server console, because you WILL break your SSH connection doing this. And of course, maybe don't persist the firewall rules permanently until you've learned more and can verify you can get in.
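
As a rough sketch (not a drop-in config), that deny-by-default approach might look like this with nftables. Table and chain names are arbitrary, and per the warning above you'd run this on the console, since the drop policy takes effect before the accept rules exist:

```shell
# Create a table and an input chain whose default policy drops everything.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'

# Allow loopback traffic and replies to connections the host itself opened.
nft add rule inet filter input iif lo accept
nft add rule inet filter input ct state established,related accept

# Open specific ports explicitly -- here SSH (22) as an example.
nft add rule inet filter input tcp dport 22 ct state new accept

# Inspect the resulting ruleset before persisting anything.
nft list ruleset
```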

Now obviously this is an extreme mindset and yes, you should use it in a professional setting. As a hobbyist? Up to you. In theory you don't need a firewall if your server only exposes the services you want to expose and you were gonna expose them through the firewall anyway. In practice, keeping track of what's running on a box and what's using what connections can be a bit harder than that.

If you're a beginner my recommendation is to use a dedicated router running OpenWRT with LUCI, which comes with a sensible firewall out of the box, an easy to use UI, and other goodies like an easy to use DNS+DHCP server combo and the ability to install plugins for DoH, DDNS etc.

because you WILL break your SSH connection doing this

Haha, yeah, I've certainly inadvertently done this when I was first learning about how firewalls worked on Linux.

You're right. If you don't open up ports on the machines, you don't need a firewall; packets sent to closed ports get rejected anyway. So you just need it if your software opens ports that shouldn't be available to the internet. Or you don't trust the software to handle things correctly. Or things might change and you or your users install additional software and forget about the consequences.

However, a firewall does other things. For example forwarding traffic. Or in conjunction with fail2ban: blocking people who try to guess ssh passwords and connect to your server multiple times a second.
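
For the fail2ban part, a minimal jail along those lines might look like the following; the file path and option names are the stock fail2ban ones, while the thresholds are illustrative:

```shell
# Ban an address for an hour after 5 failed SSH logins within 10 minutes.
# Drop-in files under jail.d/ override the shipped defaults.
cat > /etc/fail2ban/jail.d/sshd.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF

# Reload fail2ban so it picks up the new jail.
systemctl restart fail2ban
```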

Edit:

  1. “It’s just good security practice.” => nearly every time I've heard that people followed up with silly recommendations or were selling snake-oil.
  2. “You [just] need it if you are running a server.” => I'd say it's more like the opposite. A server is much more of a controlled environment than, let's say, a home network with random devices and people installing random stuff.
  3. “You need it if you don’t trust the other devices on the network.” => True, I could for example switch on and off your smarthome lights or disable the alarm and burgle your home. Or print 500 pages.
  4. “You need it if you are not behind a NAT.” => Common fallacy, If A then B doesn't mean If B then A. Truth is, if you have a NAT, it does some of the jobs a firewall does. (Dropping incoming traffic.)
  5. “You need it if you don’t trust the software running on your computer.” => True

True, I could for example switch on and off your smarthome lights or disable the alarm and burgle your home. Or print 500 pages.

How would the firewall on one device prevent other devices from abusing the rest of the network? Perhaps you misunderstood the original intent of my post. I certainly wouldn't blame you if that is the case, though -- when I made my post I was far too vague in my intent -- perhaps I simply didn't think through my question enough, but the more likely answer is that I simply wasn't knowledgeable enough on the topic to accurately pose the question that I wanted to ask.

Common fallacy, If A then B doesn’t mean If B then A. Truth is, if you have a NAT, it does some of the jobs a firewall does. (Dropping incoming traffic.)

Fair point!

“You need it if you don’t trust the software running on your computer.” => True

For this, though, the only solution to it would be an application layer firewall like OpenSnitch, correct?

How would the firewall on one device prevent other devices from abusing the rest of the network?

Sure. I'm not exactly sure any more what I was trying to convey. I think I was going for the firewall as a means of perimeter security. Usually devices are just configured to allow access to devices from the same Local Area Network. This is the case for lots of consumer electronics (and some enterprises also rely on securing the perimeter; once you get in their internal network, you can exploit that.) My printer lets everyone print and scan, no password setup required while installing the drivers. The wifi smart plugs I use to turn on and off the mood light in the livingroom also per default accept everyone in the WiFi. And lots of security cameras also have no password on them, or people don't change the default since they're the only ones able to connect to the home WiFi.

This works, since usually there is a Wifi router that connects to the internet and also does NAT, which I'd argue is the same concept as a firewall that discards incoming connections. And while wifi protocols have/had vulnerabilities, it's fairly uncommon that people go wardriving or get close to your house to crack the wifi password.

However, since you mentioned mixing devices you trust and devices you don't trust... That can have bad consequences in a network setup like this. You either do it properly, or you need some other means to secure your stuff. That may be isolating the cheap Chinese consumer electronics with god knows which bugs and spying tech from the rest of the network. And/or shielding the devices you can't set up a password on.

the only solution to it would be an application layer firewall like OpenSnitch, correct?

I don't think you can make an absolute statement in this case. It depends on the scenario, as it always does with security. If you have broken web software with known and unpatched vulnerabilities, a Web Application Firewall might filter out malicious requests. An application firewall might help if other software is susceptible to attacks or might become the attacker itself (I'm not entirely sure what they do.) But you might also be able to use a conventional firewall (or a VPN) to restrict access to that software to trusted users only. For example, drop all packets if it's not you interacting with that piece of software. And you can also combine several measures.

I think I was going for the firewall as a means of perimeter security.

Are you referring to the firewall on the router?

it’s fairly uncommon that people go wardriving

Interesting. I hadn't heard of this.

That may be isolating the cheap Chinese consumer electronics with god knows which bugs and spying tech from the rest of the network.

As in blocking or restricting their communication with the rest of the LAN in the router's firewall, for example? Or, perhaps, putting them behind their own dedicated firewall (this is probably superfluous to the firewall in the router though).

But you might also be able to use a conventional firewall (or a VPN) to restrict access to that software to trusted users only

For clarity's sake, would you be able to provide an example of how this could be implemented? It's not immediately clear to me exactly what you are referring to when combining "user" with network related topics.

Are you referring to the firewall on the router?

Yes. At home this will run on your (wifi) router. But the standard rules on that are pretty simple: Discard everything incoming, allow everything outgoing. Companies might have a dedicated machine, something like a pfSense in a server rack at each of their subsidiaries and draw a perimeter line around what they deem fit, the office building, a department, or separate the whole company's internal network from the internet. (Or a combination of those.) You just have one point at home where two network segments interconnect: your router.

I think it is important to distinguish between this kind of firewall and something that runs on a desktop computer. I'd call that a personal firewall or desktop firewall. It does different things: like detect what kind of network you're connected to. Enable access when you're at your workplace but inhibit the Windows network share when you're at the airport wifi. It adds a bit of protection to the software running on the computer, and can also filter packets from the LAN. And it's often configured to be easygoing in order not to get in the way of the user. But it is not an independent entity, since it runs on the same machine that it is protecting. If that computer gets compromised, for example, so is the personal firewall.

A dedicated firewall however runs on a dedicated and secure machine, one where there is no user software installed that could interfere with it. And at a different location: it filters traffic between network segments, so it might be physically at some network interconnect. There are lots of different ways to do it, and people apply things in different ways. Such a firewall might not be able to entirely protect you or stop malicious activity spread within the attached network at all. And of course you need the correct policy and to type in the rules that allow people at the company to work, but inhibit everything else. Perfection is more a theoretical concept here and nothing that can be achieved in reality.

[isolating the cheap Chinese consumer electronics] As in blocking or restricting their communication with the rest of the LAN in the router’s firewall, for example?

Yes, you'd need to separate them from the rest of the network so your router gets in-between of them. Lots of wifi routers can open an additional guest network, or do several independent WiFis. For cables there is VLAN. For example: You configure 4 independent networks, get your computers on one network, your IoT devices on another network, your TV and NAS storage on a third and your guests and visitors on yet another. You tell your router the IoT devices can't be messed with by guests and they can only connect to their respective update servers on the internet and your smarthome. Your guests can only connect to the internet but not to your other devices or each other. The TV is blocked from sending your behavior tracking data to arbitrary companies, it can only access your NAS and update servers. The devices you trust go on the network that is easygoing with the restrictions. You can make it arbitrarily complex or easy. This would be configured with the firewall of the router.
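
On a Linux-based router, the gist of those rules could be sketched with nftables roughly as follows. The interface names (lan0, iot0, guest0, wan0) are hypothetical, and real firmware like OpenWRT wraps this in its own configuration layer:

```shell
# Forward chain on the router: nothing crosses segments unless allowed.
nft add table inet router
nft add chain inet router forward '{ type filter hook forward priority 0; policy drop; }'

# Replies to already-allowed connections may flow back.
nft add rule inet router forward ct state established,related accept

# Guests and IoT devices may reach the internet, nothing else.
nft add rule inet router forward iifname guest0 oifname wan0 accept
nft add rule inet router forward iifname iot0 oifname wan0 accept

# The trusted LAN may reach everything.
nft add rule inet router forward iifname lan0 accept
```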

But an approach like this isn't perfect by any means. The IoT devices can still mess with each other. Everything is a hassle to set up. And the WiFi is a single point of failure. If there are any security vulnerabilities in the WiFi stack of the router, attackers are probably just as likely to get into the guest wifi as they'd get into your secured wifi. And then the whole setup and separating things was an exercise in futility.

would you be able to provide an example of how this [use a conventional firewall (or a VPN) to restrict access to that software to trusted users only] could be implemented? It’s not immediately clear to me exactly what you are referring to when combining “user” with network related topics.

I mean something like: You have a network drive that you use to upload your vacation pictures to in case your camera/phone gets stolen. You can now immediately block everyone from all countries except France, since you're traveling there. This would be kind of a crude example, but similar to what we sometimes do with our credit cards. You can also set up a VPN that connects specifically you to your home-network or services. Your Nextcloud server can't be reached or hacked from the internet, unless you also have the VPN credentials to connect to it in the first place. You obviously need some means of mapping the concept 'user' to something that is distinguishable from a network perspective. If you know in advance what IP addresses you're going to use to connect, this is easy. If you don't, you have to use something like a VPN to accomplish that: make just your phone able to dial in to your home network. (Or compromise, like in the France example.)
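
For the known-addresses case, the packet-filter version of "only me" can be a single rule. A hypothetical sketch, assuming a default-drop input chain already exists in an nftables table `inet filter`, with 203.0.113.7 (a documentation address) standing in for your own IP and 443 for the service port:

```shell
# Admit the service port only from one known source address;
# everything else is already discarded by the chain's drop policy.
nft add rule inet filter input ip saddr 203.0.113.7 tcp dport 443 accept
```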

Enable access when you’re at your workplace but inhibit the Windows network share when you’re at the airport wifi.

How would something like this be normally accomplished? I know that Firewalld has the ability to select a zone based on the connection, but, if I understand correctly, I think this is decided by the Firewalld daemon, rather than the packet filtering firewall itself (e.g. nftables). I don't think an application layer firewall would be able to differentiate networks, so I don't think something like OpenSnitch would be able to control this, for example.

But an approach like this isn’t perfect by any means. The IoT devices can still mess with each other. Everything is a hassle to set up. And the WiFi is a single point of failure.

What would be a better alternative that you would suggest?

You can also set up a VPN that connects specifically you to your home-network or services. Your Nextcloud server can’t be reached or hacked from the internet, unless you also have the VPN credentials to connect to it in the first place.

The unfortunate thing about this -- and I have encountered this personally -- is that some networks may block VPN related traffic. You can take measures to attempt to obfuscate the VPN traffic from the network, but it is still a potential headache that could lock you out of using your service.

I think this is decided by the Firewalld daemon, rather than the packet filtering firewall itself

Mmh, I probably was way too vague with that. This is done by something like FirewallD or whatever Windows or MacOS uses for this. AFAIK it then uses packet filtering to accomplish the task. It seems FirewallD includes the packet filtering itself, rather than tying into nftables and handing the filtering task off to that. I don't think OpenSnitch does things like that. I'm really not an expert on firewalls. I could be wrong. If you read the Wikipedia article (which isn't that good) you'll see there are at least 3 main types of firewall, probably more sub-types and a plethora of different implementations. Some software does more than one of the things. And everything kinda overlaps. Depending on the use-case you might need more than just one concept like packet-filtering. Or connect different software, for example detect which network was connected to and re-configure the packet filter. Or like fail2ban: read the logfiles with one piece of software and hand the results to the packet filter firewall and ban the hackers.

I don't really know how the network connection detection is accomplished and manages the firewall. Either something pops up and I click on it, or it doesn't. My laptop has just 3 ports open: ssh, ipp (printing) and mdns. I haven't felt the need to address that and care about a firewall on that machine.

But I've made mistakes. I had MDNS or Bonjour or whatever automatically shows who is on the network and which services they offer activated, and it showed some of the Apple devices at work, and I didn't intend to show up in anyone's chat with my laptop or anything. And at one point I forgot to deactivate a webserver on my laptop. I had used that to design a website and then forgotten about it. Everyone in the local networks I've connected to in that time could have accessed that, and depending on where I was that could have made me mildly embarrassed. But no-one did and I eventually deleted the webserver.

I think I've been living alright without caring about a firewall on my private laptop. I could have prevented that hypothetical scenario by using a firewall that detects where I'm at, but far more embarrassing stuff happens to other people. Like people changing their name and then Airdropping silly stuff to people who are just holding a lecture, or Skype popping up while their screen is mirrored to the projector in front of a large audience. But that has nothing to do with firewalls.

Also, in the old days every Windows and network share was displayed on the whole network anyways. Nothing ever happened to me. And while I think that is not a good argument at all, I feel protected enough by using the free software I do and roughly knowing how to use a computer. I don't see a need to install a firewall just to feel better. Maybe that changes once my laptop is cluttered and I lose track of what software opens new ports.

On my server I use nftables. Drop everything and specifically allow the ports that I want to be open. In case I forget about an experiment or configure something entirely wrong (which has also happened), it adds a layer of protection there. I handle things differently because the server is directly connected to the internet and targeted, and my laptop is behind some router or firewall all the time. Additionally, I configured fail2ban and configured every service so it isn't susceptible to brute-forcing the passwords. I'm currently learning about Web Application Firewalls. Maybe I'll put ModSecurity in front of my Nextcloud. But it should be alright on its own; I keep it updated and followed best practices when setting it up.
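
A minimal persistent ruleset in that drop-everything style might look like the following. The port list is illustrative, and the file path and service name assume a distro that ships the standard nftables service:

```shell
# Write a drop-by-default ruleset and have the nftables service load it.
cat > /etc/nftables.conf <<'EOF'
#!/usr/sbin/nft -f
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif lo accept
        ct state established,related accept
        tcp dport { 22, 80, 443 } ct state new accept
        icmp type echo-request accept
    }
}
EOF
systemctl restart nftables
```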

[IoT devices] What would be a better alternative that you would suggest?

I really don't have a good answer to that. Separating your assortment of IoT devices from the rest of the network is probably a good idea. I personally would stop at that. I wouldn't install cameras inside of my house and wouldn't buy an Alexa. I have a few smart lightbulbs and 2 thermostats; they communicate via Zigbee (and not Wifi), so that's my separate network. And I indeed have a few Wifi IoT devices, a few plugs and an LED-strip. I took care to buy ones where I could hack the firmware and flash Tasmota or Esphome on them. So they run free software now and don't connect to some manufacturer's cloud. And I can keep them updated and hopefully without security vulnerabilities indefinitely, despite them originally being really cheap no-name stuff from China.

You can also set up a guest Wifi (for your guests) if you want to. I recently did, but didn't bother to do it for many years. I feel I can trust my guests; we're old enough now and outgrew the time when it was funny to mess with other people's stuff, set an alarm to 3am or change the language to Arabic. And all they can do is use my printer anyways. So I usually just give my wifi password to anyone who asks.

However, what I do might not be good advice for other people. I know people who don't like to give their wifi credentials to anyone, since it could be used to do illegal stuff over the internet connection. That would backfire on whoever owns the internet connection, and they'd face the legal troubles. That will also happen if it's a guest wifi. I'm personally not a friend of that kind of legislation. If somebody uses my tools to commit a crime, I don't think I should be held responsible for that. So I don't participate in that fearmongering and just share my tools and internet connection anyways.

(And you don't absolutely need to put in all of that effort at home. Companies need to do it, since sending all the employees home and then paying 6 figures to another company to analyze the attack and restore the data is very expensive. At home you're somewhat unlikely to get targeted directly. You'll just be probed by all the stuff that scans for vulnerable and old IoT devices, open RDP connections, SSH, insecure webservers and badly configured telephony boxes. Your home wifi router will do the bare minimum and the NAT on it will filter that out for you. Do backups, though.)

some networks may block VPN related traffic

That's a bummer. There is not much you can do except obfuscate your traffic. Use something that runs on port 443 and looks like https (I think that'd be a TCP connection) or some other means of obfuscating the traffic. I think there are several approaches available.

for example detect which network was connected to and re-configure the packet filter.

Firewalld is capable of this -- it can switch zones depending on the current connection.
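
For anyone curious how that wiring usually works: NetworkManager tags each connection profile with a firewalld zone, and firewalld applies that zone's rules whenever the connection comes up. A sketch, with "Home WiFi" as a placeholder profile name:

```shell
# Bind a NetworkManager connection profile to firewalld's "home" zone.
nmcli connection modify "Home WiFi" connection.zone home

# Per-zone policy: allow SSH at home, keep it closed on public networks.
firewall-cmd --zone=home --add-service=ssh --permanent
firewall-cmd --zone=public --remove-service=ssh --permanent
firewall-cmd --reload
```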

And while I think that is not a good argument at all, I feel protected enough by using the free software I do and roughly knowing how to use a computer. I don’t see a need to install a firewall just to feel better. Maybe that changes once my laptop is cluttered and I lose track of what software opens new ports.

There does still exist the risk of a vulnerability being pushed to whatever software that you use -- this vulnerability would be essentially out of your control. This vulnerability could be used as a potential attack vector if all ports are available.

I’m currently learning about Web Application Firewalls. Maybe I’ll put ModSecurity in front of my Nextcloud.

Interesting! I haven't heard of this. Side note, out of curiosity, how did you go about installing your Nextcloud instance? Manual install? AIO? Snap?

I’m personally not a friend of that kind of legislation. If somebody uses my tools to commit a crime, I don’t think I should be held responsible for that.

It would be a rather difficult thing to prove -- one could certainly just make the argument that you did, in that someone else that was on the guest network did something illegal. I would argue that it is most likely difficult to prove otherwise.

There does still exist the risk of a vulnerability being pushed to whatever software that you use – this vulnerability would be essentially out of your control. This vulnerability could be used as a potential attack vector if all ports are available.

But this is a really difficult thing to protect from. If someone gets to push code on my computer that gets executed, I'm entirely out of luck. It could do anything that that process is allowed to do: send data, mess with my files and databases, or delete stuff. I'm far more worried about the latter. Sandboxing and containerization are ways to mitigate this.

And it's the reason why I like Linux distributions like Debian. There's always the maintainers and other people who use the same software packages. If somebody should choose to inject malicious code into their software, or it gets bought and the new company adds trackers to it, it first has to pass the (Debian) maintainers. They'll probably notice once they prepare the update (for Debian). And it gets rolled out to other people, too. They'll probably notice and file a bug report. And I'm going to read it in the news, since it's something that rarely happens at all on Linux.

On the other hand it could happen not deliberately but just be vulnerable software. That happens and can be exploited and is exploited in the real world. I'm also forced to rely on other people to fix that before something happens to me. Again sandboxing and containerization help to contain it. And keeping everything updated is the proper answer to that.

What I've seen in the real world is a CMS being compromised. Joomla had lots of bugs and Wordpress, too. If people install lots of plugins and then also don't update the CMS, let it rot and don't maintain the server at all, after like 2 years(?) it can get compromised. The people who constantly probe all the internet servers will at some point find it and inject something like a rootkit and use the server to send spam, or upload viruses or phishing sites to it. You can pay Cloudflare $200 a month and hope they protect you from that, or use a Web Application Firewall and keep that up-to-date yourself, or just keep the software itself up-to-date. If you operate some online-services and there is some rivalry going on, it's bound to happen faster. People might target your server and specifically scan that for vulnerabilities way earlier than the drive-by attacks get a hold of it. Ultimately there is no way around keeping a server maintained.

how did you go about installing your Nextcloud instance?

I have two: YunoHost powers my NAS at home. It contains all the big files and important vacation pictures etc. YunoHost is an AIO solution(?), an operating system based on Debian that aims at making hosting and administration simple and easy. And it is. You don't have to worry too much to learn how to do all of the stuff correctly, since they do it for you. I've looked at the webserver config and so on and they seem to follow best practices, disallow old https ciphers, activate HSTS and all the stuff that makes cross site scripting and such attacks hard to impossible. And I pay for a small VPS. I used docker-compose and Docker on it. Read all the instructions and configured the reverse proxy myself. I also do some experimentation there in other Docker containers, try new software... But I don't really like to maintain all that stuff. Nextcloud and Traefik seem somewhat stable. But I have to regularly fiddle with some of the other docker-compose files of other projects that change after a major update. I'm currently looking for a solution to make that easier and planning to rework that server. And then also run Lemmy, Matrix chat and a microblogging platform on it.

It would be a rather difficult thing to prove

And it depends on where you live and the legislation there. If someone downloads some Harry Potter movies or uses your Wifi to send bomb threats to their school... They'll log the IP and then contact the ISP, and the Internet Service Provider is forced to tell them your name. You'll get a letter or a visit from police. If they proceed and sue you, you'll have to pay a lawyer to defend yourself and it's a hassle. I think I'd call it coercion, but even if you're in the right, they can temporarily make your life a misery.

In Germany, we have the concept of "Störerhaftung" on top. Even if you're not the offender yourself, being part of a crime willingly (or causally adequate(?))... You're considered a "disruptor" and can be held responsible, especially to stop that "disruption". I think it was meant to get at people who technically don't commit crimes themselves, they just deliberately enable other people to do it. For some time it got applied to WiFi here. The constitutional court had to rule, and now I think it doesn't really apply to that anymore. It's complicated... I can't sum it up in a few sentences.

Nowadays they just send you letters, threatening to sue you and wanting a hundred euros for the lawyer who wrote the letter. They'll say your argument is a defensive lie and you did it. Or you need to tell them exactly who did it and rat out your friends/partner/kids or whoever did it. Of course that's not how it works in the end, but they'll try to pressure people and I can imagine it is not an enjoyable situation to be in. I've never experienced it myself; I don't download copyrighted stuff from the obvious platforms that are bound to get you in trouble and neither does anyone else in my close group of friends and family.

But this is a really difficult thing to protect from. If someone gets to push code on my computer that gets executed, I’m entirely out of luck. It could [...] send data [...].

Not necessarily. An application layer firewall, for example, could certainly get in the way of it trying to send data externally.

On the other hand it could happen not deliberately but just be vulnerable software.

Are you referring to a service leaving a port open that can be connected to from the network?

And then also run Lemmy, Matrix chat and a microblogging platform on it.

I'm definitely curious about the outcome of this -- Matrix especially. Perhaps the new/alternative servers function a bit better now, but I've heard that, for synapse at least, Matrix can be very demanding on hardware to run (from what I've heard, the issues mostly arise when one joins a larger server).

You’re considered a “disruptor” and can be held responsible, especially to stop that “disruption”.

Interesting. Do you mean "held responsible" to simply stop the disruption, or "held responsible" for the actions of/damage caused by the disruption?

I think an Application Layer Firewall usually struggles to do more than the utmost basics. If for example my Firefox were to be compromised and started not only talking to Firefox Sync to send the history to my phone, but also send my behavior and all the passwords I type in to a third party... How would the firewall know? It's just random outgoing encrypted traffic from its perspective. And I open lots of outbound connections to all kinds of random servers with my Firefox. Same applies to other software. I think such firewalls only protect you once you run a new executable and you know it has no business sending data. If software you actually use were susceptible to attack, the firewall would need to ask you after each and every update of Firefox if it's still okay and you'd really need to verify the state of your software. If you just click on 'Allow' there is no added benefit. It could protect you from connecting to a list of known malicious addresses and from people smuggling new and dedicated malware to your computer.

I don't want to say doing the basics is wrong or anything. If I were to use Windows and lots of different software I'd probably think about using an Application Level Firewall. But I don't see a real benefit for my situation... However I'd like Linux to do some more sandboxing and asking for permissions on the desktop. Even if it can't protect you from everything and may not be a big leap for people who just click 'Accept' for everything, it might be a good direction and encourage more fine-granularity in the permissions and ways software ties together and interacts.

it could [...] just be vulnerable software

I mean your webserver or CMS or your browser has a vulnerability and that gets exploited and you get hacked. The webserver has open ports anyways in order to be able to work at all. The CMS is allowed to process requests and the browser allowed to talk to websites. A maliciously crafted request or answer to your software can trigger it to fail and do something that it shouldn't do.

[...] Matrix

Sure, I have a Synapse Matrix server running on my YunoHost. It works fine for me. I'm going to install Dendrite or the other newer one next. I won't complain if I can cut memory consumption and load down to a minimum.

Do you mean “held responsible” to simply stop the disruption, or “held responsible” for the actions of/damage caused by the disruption?

Yeah, the issue was that it meant both. You were part of the crime, you were involved in the causality and linked to the damages somehow. Obviously not to the full extent, since you didn't do it yourself, but more than 'don't allow it to happen again'. Obviously that has consequences. And I think now it's not that anymore when it comes to WiFi. I think now it's just the first, plus they can ask for a fixed amount of money since, by your neglect, you caused their lawyer to put in some effort.

If for example my Firefox were to be compromised and started not only talking to Firefox Sync to send the history to my phone, but also send my behavior and all the passwords I type in to a third party… How would the firewall know?

If it's going to some undesirable domain, or IP, then you can block the request for that application. The exact capabilities of the application layer firewall certainly depend on the exact application layer firewall in question, but this is, at least, possible with OpenSnitch.

It’s just random outgoing encrypted traffic from its perspective.

For the actual content of the traffic, is this not the case with essentially all firewalls? They can't see the content of the traffic if it is using TLS. You would need to somehow intercept the packet before it is encrypted on the device. I'm not aware of any firewall that has such a capability.

If you just click on ‘Allow’ there is no added benefit.

The exact level of fine-grain control heavily depends on the application layer firewall in question.

A maliciously crafted request or answer to your software can trigger it to fail and do something that it shouldn’t do.

Interesting.

I think now it’s just the first, plus they can ask for a fixed amount of money since by your neglect, you caused their lawyer to put in some effort.

I do, perhaps, somewhat understand this argument, but it still feels quite ridiculous to me.


You’re right. If you don’t open up ports on the machines, you don’t need a firewall to drop packets aimed at ports that are closed and would drop the packets anyway.

Sorry, hard disagree.

I assume you are assuming:

  1. You know about all open ports at all times, which is usually not the case.
  2. There are no bugs/errors in the network stacks or services with open ports (e.g. you assume a port is only available to localhost).
  3. That there are no timing attacks, which can easily be mitigated by a firewall.
  4. That software one uses does not trigger/start other services transitively, which then open ports you are not even aware of without constant port scanning.
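Assumption 1 is at least easy to audit on the machine itself; `ss` from iproute2 lists every listening socket (a quick sketch, and only the host's own view, so it is no substitute for an external port scan):

```shell
# List all listening TCP/UDP sockets with numeric ports and the owning
# process: -t TCP, -u UDP, -l listening only, -n numeric, -p process
# (root is needed to resolve processes owned by other users).
ss -tulnp

# Cross-check from another host with a full port scan, e.g.:
# nmap -sT -p- <host>
```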

I agree with your point, that a server is a more controlled environment. Even then, as you pointed out, you want to rate limit bad login attempts via firewall/fail2ban etc. for the simple reason, that even a fully updated ssh server might use a weak key (because of errors/bugs in software/hardware during key generation) and to prevent timing attacks etc.

In summary: IMHO it is bad advice to tell people they don't need a firewall, because it is demonstrably wrong and just confuses people like OP.

Sure, maybe I've worded things too factually and not differentiated between theory and practice. But,

  1. "you know everything": I've said that. Configurations might change or you don't pay enough attention: a firewall adds an extra layer of security. In practice people make mistakes and things are complex. In theory, where everything is perfect, blocking an already closed port doesn't add anything.

  2. "There are no bugs in the network stack": The same applies to the firewall. It also has a network stack and an operating system, and it's connected to your private network. It depends on how crappy the network stacks you're running are and how the firewall's network stack compares against that. It might even be the same, as on my VPS where Linux runs both the firewall and the services. So this isn't an argument on its own; it depends.

  3. Who mitigates timing attacks? I don't think this is included in the default setup of any of the commonly used firewalls.

  4. "open ports you are not even aware of": Then you open ports, and your software isn't doing what you think it does. We agree that this is a use-case for a firewall. That is what I was trying to convey with the previous argument, no. 5.

Regarding the summary: I don't think I want to advise people not to use a firewall. I thought this was a theoretical discussion about the individual arguments. And it's complicated and confusing anyway. Which firewall do you run? The default Windows firewall is a completely different thing and setup than nftables on a Linux server that closes everything and only opens ports you specifically allow. Next question: how do you configure it? And where do you even run it? On a separate host? Do you always rent two VPS? Do you only do perimeter security for your LAN and run a single firewall? Do you additionally run firewalls on all the connected computers in the network? Does that replace the firewall in front of them? What other means of security protection did you implement? As we said, a firewall won't necessarily protect against weak passwords and keys. And it might not be connected to the software that gets brute-forced and thus just forward the attack. In practice it's really complicated and it always depends on the exact context. It is good practice not to allow everything by default, but to take the approach of blocking everything and explicitly configuring exceptions, like a firewall does. It's not the firewall but this concept behind it that helps.

I think I get you and the 'theory vs. practice' point you make is very valid. ;-) I mean, in theory my OS has software w/o bugs, is always up-to-date and 0-days do not exist. (Might even be true in practice for a default OpenBSD installation regarding remote vulnerabilities. :-P)

Who mitigates timing attacks? I don’t think this is included in the default setup of any of the commonly used firewalls.

fail2ban absolutely mitigates a subset of timing attacks in its default setup. ;-)

LIMIT is a high-level concept which can easily be applied with ufw; I don't know about the default setups of commonly used firewalls.

If someone exposes something like SSH or anything else w/o fail2ban/LIMIT IMHO that is grossly incompetent.
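For context on LIMIT: in ufw it is a single rule which, by default, denies a source address that has attempted six or more connections within 30 seconds (a sketch, assuming ufw is installed and SSH is on the default port):

```shell
# Rate-limit inbound SSH instead of allowing it unconditionally.
sudo ufw limit 22/tcp comment 'rate-limit SSH'

# Inspect the resulting rule set:
sudo ufw status verbose
```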

You are totally right, of course firewalls have bugs/errors/misconfigurations... BUT... if you are using a Linux firewall, good chances are that the firewall has been reviewed/attacked/pen-tested more often and thoroughly than almost all other services reachable from the internet. So, if I have to choose between a potential attacker first hitting well-tested and maintained firewall software or a MySQL server, which got no love from Oracle and lives in my distribution as an outdated package, I'll put my money on the firewall every single time. ;-)

Thank you for pointing out that my arguments don't necessarily apply to reality. Sometimes I answer questions too directly. And the question wasn't "should I use a firewall?" or I would have answered with "probably yes."

I think I have to make a few slight corrections: I think we use the word "timing attack" differently. To me a timing attack is something that relies on the exact order or interval/distance packets arrive at. I was thinking of something like TOR does where it shuffles around packets, waits for a few milliseconds, merges them or maybe blows them up so they all have the same size. Brute forcing something isn't exploiting the exact time where a certain packet arrives, it's just sending many of them and the other side lets the attacker try an indefinite amount of passwords. But I wouldn't put that in the same category with timing attacks.

Firewall vs. MySQL: I don't think that is a valid comparison. The firewall doesn't necessarily look into the packets and detect that someone is running a SQL injection. Both do a very different job. And if the firewall doesn't do deep packet inspection or rate limiting or something, it just forwards the attack to the service and it passes through anyway. And MySQL probably isn't a good example, since it rarely should be exposed to the internet in the first place. I've configured MariaDB to listen only on the internal interface and not respond to packets from other computers. Additionally, I didn't open the port in the firewall, but MariaDB doesn't listen on that interface anyway.

Maybe a better comparison would be a webserver with HTTPS. The firewall can't look into the packets because it's encrypted traffic. It can't tell an attack apart from a legitimate request and just forwards them to the webserver. Now it's the same with or without a firewall. Or you terminate the encrypted traffic at the firewall and do packet inspection or complicated heuristics. But that shifts the complexity (including potential security vulnerabilities in complex code) from the webserver to the firewall. And it's a niche setup that also isn't well tested. And you need to predict the attacks. If your software has known vulnerabilities that won't get fixed, this is a valid approach. But you can't know future attacks.

Having a return channel from the webserver/software to the firewall so the application can report an attack and order the firewall to block the traffic is a good thing. That's what fail2ban is for. I think it should be included by default wherever possible.
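As a sketch of that return channel, a minimal fail2ban jail for sshd might look like this (paths are the usual Debian defaults; adjust maxretry/bantime to taste):

```shell
# /etc/fail2ban/jail.local overrides the packaged jail.conf.
# Ban a source IP for one hour after 5 failed SSH logins within 10 minutes.
sudo tee /etc/fail2ban/jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF

sudo systemctl restart fail2ban
sudo fail2ban-client status sshd   # shows failure counts and banned IPs
```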

I think there is no way around using well-written software if you expose it to the internet (like a webserver or a service that is used by other people.) If it doesn't need to be exposed to the internet, don't do it. Any means of assuring that are alright. For crappy software that is exposed and needs to be exposed, a firewall doesn't do much. The correct tools for that are virtualization, containers, VPNs, and replacing that software... Maybe also the firewall if it can tell apart good and bad actors by some means. But most of the time that's impossible for the firewall to tell.

I agree. You absolutely need to do something about security if you run services on the internet. I do and have run a few services. And webserver logs (especially if you have a WordPress install or some other commonly attacked CMS), SSH and Voice-over-IP servers get bombarded with automated attacks. Same for Remote Desktop, Windows network shares and IoT devices. If I disable fail2ban, the attackers ramp up the traffic and I can see attacks scroll through the logfiles all day.

I think a good approach is:

  1. Choose safe passwords and keys.
  2. Don't allow people to brute-force your login credentials.
  3. If you don't need a service, deactivate it entirely and remove the software.
  4. If you just need a service internally, don't expose it to the internet. A firewall will help, and most software I use can be configured to either listen for external requests or not. Also configure your software to listen only on localhost (127.0.0.1), or just on the LAN that contains the other things that tie into it. Doing it at two distinct layers helps if you make mistakes, or something happens by accident, or complexity or security vulnerabilities arise. (Or you're not in complete control of everything and every possibility.)
  5. If only some people need a service, either make it as secure as a public service or hide it behind a VPN.
  6. Perimeter security isn't the answer to everything. The subject is complex and we have to look at the context. Generally it adds, though.
  7. If you run a public service, do it right. Follow state of the art security practices. It's always complicated and depends on your setup and your attackers. There are entire books written about it, people dedicate their whole career to it. For every specific piece of software and combination, there are best practices and specific methods to follow and implement. Lots of things aren't obvious.
  8. Do updates and backups.
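Point 4, done at both of the layers mentioned, could be sketched with nftables roughly like this (the subnet and port are examples, not a recommendation for any particular setup):

```shell
# A minimal inbound policy: drop by default, then allow replies to our
# own traffic, loopback, and SSH from the local subnet only.
sudo nft -f - <<'EOF'
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept          # replies to outbound traffic
    iif "lo" accept                              # local loopback
    tcp dport 22 ip saddr 192.168.1.0/24 accept  # SSH from the LAN only
  }
}
EOF
```

The second layer is then the service's own config (e.g. binding it to 127.0.0.1), so a mistake in either layer alone doesn't expose anything.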

Firewalls are necessary for least privilege. You only give something access that needs access.

Additionally you should not port forward and especially not port 80.

Additionally you should not port forward

In what context? There is nothing inherently insecure about port forwarding. If you want a service accessible outside of your local network, you generally need to port forward. The security mostly depends on the service that is bound to the forwarded port.

especially not port 80

Why? If you want to run a webserver without specifying a port in the URL all the time, you are going to forward port 80; port 80 is a standardized port for all HTTP connections.

No offense to you but there is a massive risk exposing services to the internet. I'll let someone else more qualified explain.

Of course there is risk in exposing a service to the internet; a service open to the internet has a far greater potential attack surface, so there is a greater chance that an existing vulnerability in the exposed service gets exploited. But that is not an argument against the practice of port forwarding -- you just need to make sure that you take adequate precautions to mitigate risk. You do realize that, to be able to access a service from the internet, you need to expose it to the internet, right?

The problem is when you expose your server to the entire internet. It only takes a few minutes for the bots to find you.

Honestly you should use a mesh VPN instead.

The problem is when you expose your server to the entire internet. It only takes a few minutes for the bots to find you.

I mean, sure, but the existence of bots doesn't immediately guarantee that a given service will be compromised; simply take precautions to ensure that the exposed services are secure, that the rest of the network, and the device itself are adequately protected, etc.

Honestly you should use a mesh VPN instead.

In order to solve what problem, specifically?

Yeah like JFC the most insecure way to access the Internet let's just open it up to the whole world.

A large part of this is only thinking of a firewall as preventing inbound connections. A big part of securing a net comes from preventing things like someone establishing an outbound connection on some random port and siphoning off everything to a home base.

A firewall in itself won't cover everything; that's just ports, protocols, and addresses. Tack on an IPS for behavioral scanning, reputation lists for dynamic 'do not allow connections to/from these IPs', and some DNS filters or a proxy to help get vision into the basic 80/443 traffic that you can't just block without killing the internet, and you've got something going.

A firewall is not security on a box, although most think of it that way. A lot of commercial security-suite products actually do a few things, but it's just easier to market it to grandma if they simply call it a firewall; it's a term well embedded in the public consciousness.

A big part of securing a net comes from preventing things like someone establishing an outbound connection on some random port and siphoning off everything to a home base.

What's to stop said malware from siphoning data over a known port? If one were to block all outbound connections, then they essentially have an offline device. If they were to want to browse the web, for example, they would need to allow outbound connections to at least HTTPS, HTTP, and DNS. What's to stop the malware from simply establishing a connection to a remote server over HTTPS?

That's where some of the other lines come into play. Stop the bad domains with some lists in pi-hole/ad-guard, IP reputational blocking tools, proxies can be used for decrypting traffic if you want to go that route, IPS systems can help identify behavioral patterns for known bad actors.

I like to think of a basic firewall as the very efficient big dumb first line. You block everything except what is needed and it doesn't matter what app or vulnerability is in play those ports are dead to the world. Then the more refined tools dig through the rest to find the various evil bits and needles in the needle stack.

I've got two services on my computer. One is for email; I want this port to be open to the public WAN. The other is for Immich, which hosts all my private pictures; I don't want this port to be public, but reachable on the LAN. In my router I open the port for email but not for Immich. Email can communicate on LAN and WAN, and Immich only on the LAN. On a foreign, untrusted LAN, like at an airport, I don't want other people being able to sniff my Immich traffic, which is why I have another firewall setting for an untrusted LAN.

This example feels mildly contrived, as it is probably unlikely that one would have an email server running on a mobile device, but I understand your point.

I have another firewall setting for an untrusted LAN

This sounds interesting. Is it possible to implement this with a packet filtering firewall (e.g. nftables)?
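With plain nftables, "trusted LAN" has to be expressed as a source-address match, and switching rule sets when you join an untrusted network is then up to you (e.g. via a NetworkManager dispatcher script). A sketch, assuming Immich on its default port 2283 and a 192.168.1.0/24 home LAN:

```shell
# Drop Immich traffic unless it originates from the home subnet.
sudo nft -f - <<'EOF'
table inet lanonly {
  chain input {
    type filter hook input priority 0; policy accept;
    tcp dport 2283 ip saddr != 192.168.1.0/24 drop
  }
}
EOF
```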

I think it’s better to have one but you probably don’t need multiple layers. When I’m setting up servers nowadays, it’s typically in the cloud and AWS and the like typically have firewalls. So, I don’t really do much on those machines besides change ports to non-standard things. (Like the SSH port should be a random one instead of 22.)

But you should use one if you don’t have an ecosystem where ports can be blocked or forwarded. If nothing else, the constant login attempts from bots will fill up your logs. I disable password logins on web servers and if I don’t change the port, I get a zillion attempts to ssh using “admin” and some common password on port 22. No one gets in but it still requires more compute than just blocking port 22 and making your SSH port something else.
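Both measures are plain sshd_config directives (a sketch; 2222 is an arbitrary example port, and the service name is `ssh` on Debian/Ubuntu but `sshd` elsewhere):

```shell
# Recent OpenSSH reads drop-in files from /etc/ssh/sshd_config.d/.
sudo tee /etc/ssh/sshd_config.d/hardening.conf <<'EOF'
Port 2222
PasswordAuthentication no
PermitRootLogin no
EOF

sudo sshd -t                   # validate the config before reloading
sudo systemctl reload ssh
```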

If nothing else, the constant login attempts from bots will fill up your logs.

Yeah, this is definitely a scenario that I hadn't considered.

As i see it, the term "firewall" was originally the neat name for an overall security concept for your system's privacy/integrity/security. Thus physical security is (or can be) part of a firewall concept, as can training of users. The keys to your server room's door could be part of that concept too.

In general you only "need" to secure something that actually is there; you won't build a safe into the wall and hide it behind an old painting without something to put in it, or - this could be part of the concept - without an alarm sensor that triggers when that old painting is moved, thus creating a sort of honeypot.

if and what types of security you want is up to you (so don't blame others if you made bad decisions).

but as a general rule from practice i would say it is wise to always have two layers of defence, and to always prepare for one "error" at a time and try to solve it quickly then.

example: if you want an rsync server on an internet-facing machine to only be accessible from some subnets, i would suggest you add iptables rules as tight as possible and also configure the service to reject access from all but the wanted addresses. also consider monitoring both, maybe using two different approaches: monitor that the config stays as defined, and set up an access check from one of the unwanted, excluded addresses that fires an alarm when access becomes possible.

this would not only prevent that unwanted access from happening but also keep accidental opening or breaking of the config from going unnoticed.
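the two layers for the rsync example might look roughly like this (the subnet is illustrative; 873 is the rsync daemon's standard port):

```shell
# Layer 1: iptables accepts the rsync port only from one subnet.
iptables -A INPUT -p tcp --dport 873 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 873 -j REJECT

# Layer 2: rsyncd repeats the restriction in /etc/rsyncd.conf:
#   hosts allow = 203.0.113.0/24
#   hosts deny  = *
```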

here the same, if you want monitoring is also up to you and your concept of security, as is with redundancy.

In general i would suggest to setup an ip filtering "firewall" if you have ip forwarding activated for some reason. a rather tight filtering would maybe only allow what you really need, while DROPping all other requests, but sometimes icmp comes in handy, so maybe you want ping or MTU discovery to actually work. always depends on what you have and how strong you want to protect it from what with what effort. a generic ip filter to only allow outgoing connections on a single workstation may be a good idea as second layer of "defence" in case your router has hidden vendor backdoors that either the vendor sold or someone else simply discovered. Disallowing all that might-be-usable-for-some-users-default-on-protocols like avahi & co in some distros would probably help a bit then.

so there is no generic fault-proof rule of thumb..

to number 5.: what sort of "not trusting" the software? might, has or "will" have:

  a. security flaws in code
  b. insecurity by design
  c. backdoors by gov, vendor or distributor
  d. spy functionality
  e. annoying ads as soon as it has internet connection
  f. all of the above (now guess the likely vendors for this one)

for c d and e one might also want to filter some outgoing connection..

one could also use an ip filtering firewall to keep logs small by disallowing those who obviously have intentions you dislike (e.g. fail2ban)

so maybe create a concept first and ask how to achieve the desired precautions then. or just start with your idea of the firewall and dig into some of the appearing rabbit holes afterwards ;-)

regards

for c d and e one might also want to filter some outgoing connection…

Is there any way to reliably do this in practice? There's no way of really knowing what outgoing source ports are being used, as they are chosen at random when the connection is made, and if the device is to be practically used at all, some outgoing destination ports must be allowed as well e.g. DNS, HTTP, HTTPS, etc. What other methods are there to filter malicious connections originating from the device using a packet filtering firewall? There is the option of using a layer 7 firewall like OpenSnitch, but, for the purpose of this post, I'm mostly curious about packet filtering firewalls.

one could also use an ip filtering firewall to keep logs small by disallowing those who obviously have intentions you dislike (e.g. fail2ban)

This is a fair point! I hadn't considered that.

you do not need to know the source ports for filtering outgoing connections.

(i usually use "shorewall" as a nice and handy wrapper around iptables, with a "reject everything else" policy once i've configured everything as i want. so i only occasionally use iptables directly; if my examples don't work, i might simply be wrong with the exact syntax)

something like:

iptables -I OUTPUT -p tcp --dport 22 -j REJECT

should prevent all new tcp connection TO ssh ports on other servers when initiated locally (the forward chain is again another story)

so ... one could run an http/s proxy under a specific user account, block all outgoing connections except those of that proxy (i.e. squid) then every program that wants to connect somewhere using direct ip connections would have to use that proxy.

better try this first on a VM on your workstation, not your server in a datacenter:

iptables -I OUTPUT -j REJECT

iptables -I OUTPUT -p tcp -m owner --uid-owner squiduser -j ACCEPT

"-I" inserts at the beginning, so the second -I actually becomes the first rule in that chain, allowing tcp for the linux user named "squiduser", while the very next would be the reject-everything rule.

here i also assume "squiduser" exists, and hope i recall the syntax for owner match correctly.

then create user accounts within squid for all applications (that support using proxies), with precise acls specifying which fqdns these squid users are allowed to connect to.

there are possibilities to intercept regular tcp/http connections and "force" them to go through the http proxy, but if it comes to https and not-already-known domains the programs would connect to, things become way more complicated (search for "ssl interception") like the client program/system needs to trust "your own" CA first.

so the concept is to disallow everything by iptables, then allow more finegrained by http proxy where the proxy users would have to authenticate first. this way your weather desktop applet may connect to w.foreca.st if configured, but not e.vili.sh as that would not be included in its users acl.
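a minimal squid sketch of that per-application acl idea (the usernames are made up, the domain is the example from above, and basic proxy authentication is assumed to be configured already):

```shell
# Append to /etc/squid/squid.conf: one proxy login per application,
# each pinned to the destination domains it legitimately needs.
sudo tee -a /etc/squid/squid.conf <<'EOF'
acl weather_user proxy_auth weatherapplet
acl weather_dst  dstdomain .foreca.st
http_access allow weather_user weather_dst
# anything not explicitly allowed above:
http_access deny all
EOF
```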

this setup would not prevent everything applications could do to connect to the outside world: a locally configured email server could probably be abused, and even DNS would still be available to evil applications to "transmit" data to their home servers, but that's a different story and abuse of your resolver or forwarder, not of the tcp stack. there exists a library to tunnel tcp streams through dns requests and their answers; a bit creepy, but possible and already prepared. and using an http-only proxy does not prevent tcp streams like ssh: i think a simple tcp-through-http-proxy tunnel software was called "corkscrew" or similar and would go straight through an http proxy, but would need the other end of the tunnel software to be up and running.

much could be abused by malicious software if they get executed on your computer, but in general preventing simple outgoing connections is possible and more or less easy depending on what you want to achieve

should prevent all new tcp connection TO ssh ports on other servers when initiated locally (the forward chain is again another story)

But the point that I was trying to make was that that would then also block you from using SSH. If you want to connect to any external service, you need to open a port for it, and if there's an open port, then there's an opening for unintended escape.

so … one could run an http/s proxy under a specific user account, block all outgoing connections except those of that proxy (i.e. squid) then every program that wants to connect somewhere using direct ip connections would have to use that proxy.

I don't fully understand what this is trying to accomplish.

But the point that I was trying to make was that that would then also block you from using SSH. If you want to connect to any external service, you need to open a port for it, and if there’s an open port, then there’s an opening for unintended escape.

now i have the feeling as if there might be a misunderstanding of what "ports" are and what an "open" port actually is, or i just don't get what you want. i am not on your server/workstation, thus i cannot even try to connect TO an external service "from" your machine. i can do so from MY machine to other machines as i like, if those allow me, but you cannot do anything against that unless that other machine happens to be actually yours (or you own a router that happens to be on my path to where i connect to).

lets try something. your machine A has ssh service running my machine B has ssh and another machine C has ssh.

users on the machines are a, b, c -- the machine letters in lowercase. what should be possible and what not? like: "a can connect to B using ssh", "a can not connect to C using ssh (forbidden by A)", "a can not connect to C using ssh (forbidden by C)" [...]

so what is your scenario? what do you want to prevent?

I don’t fully understand what this is trying to accomplish.

accomplish control (allow/block/report) over who or what on my machine can connect to the outside world (using http/s) and to exactly where, independent of ip addresses: using domains to allow or deny per user/application + domain combination, while not having to update ip-based rules that could quickly outdate anyway.

now i have the feeling as if there might be a misunderstanding of what “ports” are and what an “open” port actually is. Or i just dont get what you want. i am not on your server/workstation thus i cannot even try to connect TO an external service “from” your machine.

This is most likely a result of my original post being too vague -- which is, of course, entirely my fault. I was intending it to refer to a firewall running on a specific device. For example, a desktop computer with a firewall, which is behind a NAT router.

so what is your scenario? what do you want to prevent?

What is your example in response to? Or perhaps I don't understand what it is attempting to clarify. I don't necessarily have any confusion regarding setting up rules for known and discrete connections like SSH.

accomplish control (allow/block/report) over who or what on my machine can connect to the outside world (using http/s) and to exactly where, independent of ip addresses: using domains to allow or deny per user/application + domain combination, while not having to update ip-based rules that could quickly outdate anyway.

Are you referring to an application layer firewall like, for example, OpenSnitch?

This is most likely a result of my original post being too vague – which is, of course, entirely my fault.

Never mind; I got distracted and carried away a bit from your question by the course the messages had taken.

What is your example in response to?

I thought it could possibly help clarify something, and sort of it did, I guess.

Are you referring to an application layer firewall like, for example, OpenSnitch?

No, I don't consider a proxy like Squid to be an "application layer firewall" (though I don't know OpenSnitch). I would just limit outbound connections to some FQDNs per authenticated client and ensure the connection only goes to where the FQDNs actually point. For example, an attacker could create a weather applet that "needs" HTTPS access to f.oreca.st, but implements a backdoor that silently connects to a static IP using HTTPS. With such a proxy, f.oreca.st would be available to the applet, but the other IP would not, as it is not included in the ACL, neither as an FQDN nor as an IP.

If you like to call this an application layer firewall, okay, but I don't think so; to me it's just a proxy with ACLs that only checks for an allowed destination and whether the response has some HTTP headers (like 200 OK), but not really more. Yet it can make it harder for some attackers to gain the control they are after ;-)
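As a sketch of what I mean, in Squid's config language (the ACL names are illustrative, the domain is just the example from above, and the authentication backend setup is omitted):

```
# Allow only the authenticated user "applet" to reach f.oreca.st;
# everything else, including connections to raw IPs, is denied.
acl applet_user proxy_auth applet
acl weather_dst dstdomain f.oreca.st

http_access allow applet_user weather_dst
http_access deny all
```

The backdoor's hard-coded IP never matches `weather_dst`, so its phone-home attempt dies at the proxy.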

So here are some reasons for having a firewall on a computer that I did not see in the thread (could have missed them). I had already written this but then lost the text before it was saved :( so here is a compact version:

  • Having a second layer of defence, to soften the direct impact of e.g. supply-chain attacks, like "upgrading" to a maliciously manipulated version.
  • Controlling things tightly and reporting strange behaviour as an early warning sign *if* something happens, no matter whether it's attacks or bugs.
  • Learning how to tighten security, so you know better what to do in case you need it some day.
  • Sleeping more comfortably, knowing what you have done or prevented.
  • Compliance with some laws, or with customers' buzzword-matching wishes.
  • The fun of doing it because you can.
  • Getting in touch with real-life side quests that you would never be aware of if you did not actively practice by hardening your system.

One side-quest example I stumbled upon: imagine an attacker has compromised the vendor of some software you use on your machine. This software connects to some port eventually, but pings the target first before doing so (whatever! you say). From time to time the ping does not go to the correct 11.22.33.44 of the service (a weather app, maybe) but to 0.11.22.33. Looks like a bug, you say; never mind.

It could be something different. Pinging an IP that does not exist ensures that the connection tracking of your router keeps the entry until it expires, opening a time window that is much easier to hit even if clocks are a bit out of sync.

Also, the attacker knows the IP that gets pinged (but it's an outbound connection to an unreachable IP, you say; what could go wrong?).

Let's assume the attacker knows the external IP of your router by other means (e.g. you've sent an email to the attacker and your freemail provider hands over your external router address inside a Received header, or the manipulated software updates a DynDNS address, or the attacker just guesses that your router has an address in your provider's dial-up range; no matter what).

So the attacker knows when, and from where (or from what range), you will ping an unreachable IP address, and in exactly what timeframe (the software running from cron, or in user space, pinging the "buggy" IP address at exact intervals). Within that timeframe, the attacker sends an ICMP unreachable packet to your router's external address, and puts the known buggy IP in the payload as the address that is unreachable. The router matches the payload of the packet, recognizes that it is related to the known connection-tracking entry, and forwards the ICMP unreachable to your workstation, which in turn tells your application that the attacker's IP address is informing you that the buggy IP 0.11.22.33 cannot be reached by him.

As the source IP of that packet is the attacker's IP, the software can then open a TCP connection to that IP on port 443 and follow the instructions the attacker sends to it. Sure, the attacker needs the backdoor to already exist and run on your workstation, and to know or guess your external IP address, but the actual behaviour of the software looks normal, a bit buggy maybe. There is exactly no information within the software about where the command-and-control server would be, only that it would respond to the ICMP unreachable packet it eventually receives. All connections are outgoing, yet the attacker "connects" to his backdoor on your workstation through your NAT "firewall" as if it did not exist, while hiding the backdoor behind an occasional ping to an address that does not respond: either because the IP does not exist, or because it cannot respond due to a DDoS attack on the 100% sane IP that actually belongs to the service the app legitimately connects to, or due to a maintenance window that the provider of the manipulated software officially announces.

The attacker just needs the IP to not respond, or to respond slowly, to widen the timeframe for connecting to his backdoor on your workstation before your router deletes the connection-tracking entry of that unlucky ping.

If you don't understand how that example works, that is absolutely normal, and I might be bad at explaining, too. Thinking outside the box, around corners that are only sometimes corners to think around, and only under very specific circumstances that could happen by chance, or could be directly or indirectly under the attacker's control while revealing the attacker's location only at the exact moment of connection, is not an easy task. It can really destroy the feeling of achievable security (aka the belief that you have some "control"). But this is not a common attack vector, only maybe an advanced one.

Sometimes side quests can be more "informative" than the main course ;-) so I would put that ("learn more", not the example above) as the main good reason to install a firewall and other security measures on your PC, even if you'd think you're okay without it.

You always need it and you actually use it. The smarter question is when you need to customize its settings. Defaults are robust enough, so unless you know what you need to change and why, you don't.

Defaults are robust enough

Would you mind defining what "defaults" are?

Defaults are the default settings of your firewall (netfilter in Linux).

Is netfilter not just the API through which you can make firewall rules (e.g. nftables) for the networking stack?

For me, it's primarily #5: I want to know which apps are accessing the network and when, and have control over what I allow and what I don't. I've caught lots of daemons for software that I hadn't noticed was running and random telemetry activity that way, and it's helped me sort-of sandbox software that IMO does not need access to the network.

Not much to say about the other reasons, other than #2 makes more sense in the context of working with other people: If your policy is "this is meant to be an HTTPS-only machine," then you might want to enforce that at the firewall level to prevent some careless developer from serving the app on port 80 (HTTP), or exposing the database port while they're throwing spaghetti at the wall wrestling with some bug. That careless developer could be future-you, of course. Then once you have a policy you like, it's also easier to copy a firewall config around to multiple machines (which may be running different apps), instead of just making sure to get it consistently right on a server-by-server basis.
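To make that "enforce the policy at the firewall level" idea concrete, here is a minimal nftables sketch (the interface, the database port, and the file path are assumptions, not anything from the thread):

```
# /etc/nftables.conf (sketch): HTTPS-only machine.
# Even if someone accidentally binds a listener to 80 or 5432,
# it is unreachable from outside.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport 443 accept            # HTTPS is the only inbound service
    tcp dport { 80, 5432 } drop     # no plain HTTP, no exposed database
  }
}
```

The `policy drop` default means any port a careless developer (or future-you) opens stays closed to the world unless the policy file is deliberately changed.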

So... Necessary? Not for any reason I can think of. But useful, especially as systems and teams grow.

I’ve caught lots of daemons for software that I hadn’t noticed was running and random telemetry activity that way

I did the exact same thing recently when I installed OpenSnitch -- it was quite interesting to see all the requests that were being made.

If your policy is “this is meant to be an HTTPS-only machine,” then you might want to enforce that at the firewall level to prevent some careless developer from serving the app on port 80 (HTTP), or exposing the database port while they’re throwing spaghetti at the wall wrestling with some bug. That careless developer could be future-you, of course.

That's a fair point!

A couple of decades ago, iirc, SANS.org ( IF I'm remembering who it was who did it ) put a fresh-install of MS-Windows on a machine, & connected it to the internet.

It took SEVERAL MINUTES for it to be broken-into, & corrupted, botnetted.

The auto-attacks by botnets are continuous: hitting different ports, trying to break-in, automatically.

I've had linux desktops pwned from me.

the internet should be considered something like a mix of toxic & corrosive chemicals: "maybe" your hand will be fine, if you dip it in for a moment & immediately rinse it off ( for 3 hours ), but if you leave you limbs dwelling in the virulent slop, Bad Things(tm) are going to happen, sooner-or-later.


I used to de-infest Windows machines for my neighbours...

haven't done it in years: they'll not pay-for good anti-virus, they'll not resist installing malware: therefore there is no point.

Let 'em rot.

I've got a life to work-on uncrippling, & too-little strength/time left.


"but I don't need antivirus: i never get infected!!"

then how come I needed to de-infest it for you??

"but I don't need an immune-system: pathogens are a hoax!!"

get AIDS, then, & don't use anti-AIDS drugs, & see how "healthy" you are, 2 years in.

Same argument, different context-mapping.


Tarpit was a wonderful-looking invention, for Linux's netfilter/iptables, years ago: don't help botnets scan quickly & efficiently to help them find a way to break-in...
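For context: TARPIT was a target shipped with the xtables-addons package for iptables, not part of the stock kernel; the rule shape was roughly this sketch (port choice is arbitrary):

```
# iptables with xtables-addons loaded: instead of refusing or
# dropping, hold the scanner's TCP connection open indefinitely,
# wasting the bot's resources rather than yours.
iptables -A INPUT -p tcp --dport 445 -j TARPIT
```

The trick is that the tarpit accepts the handshake but then shrinks the TCP window to zero, so the scanning bot sits stuck on a connection that never progresses.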


Anyways, just random thoughts from an old geek...


EDIT: "when do I need to wear a seatbelt?"

is essentially the same category of question.

_ /\ _

put a fresh-install of MS-Windows on a machine, & connected it to the internet.

What version of Windows? Connected how? Through a NAT, or through a DMZ connection, or neither? Was Windows' firewall enabled?

It took SEVERAL MINUTES for it to be broken-into, & corrupted, botnetted.

This is highly dependent on the setup, ofc. I can't really comment without more knowledge of the experiment.

haven’t done it in years: they’ll not pay-for good anti-virus

Idk, nowadays 3rd-party anti-virus software on Windows doesn't have much use -- Windows Defender is pretty dang good. If anything, a lot of them are borderline scams, or worse.

get AIDS, then, & don’t use anti-AIDS drugs, & see how “healthy” you are, 2 years in.

You don't catch AIDS. HIV is the virus which causes AIDS to develop over time, if untreated. I'm not sure what you mean by anti-AIDS drugs. You could be referring to anti-retroviral medication, or other related medication used to treat HIV, but, again, that's treating HIV to prevent the development of AIDS. You could also be referring to PrEP, but, once again, that is for protection against contracting the virus, not against the collection of symptoms from a chronic HIV infection which is referred to as AIDS.

Tarpit was a wonderful-looking invention

This is interesting, I hadn't heard of this!

Linux’s netfilter/iptables

Just a side note: iptables is deprecated -- it has been succeeded by nftables.
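As a migration side note, existing iptables rules can usually be translated mechanically with the `iptables-translate` tool that ships with the iptables-nft packaging; the rule below is an arbitrary example, and the output shown is approximate:

```
$ iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
nft add rule ip filter INPUT tcp dport 22 counter accept
```

This makes it fairly painless to move an old ruleset over without rewriting it by hand.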

EDIT: “when do I need to wear a seatbelt?”

is essentially the same category of question.

Fair point!

#1 leaves a lot to be desired, as it advocates for doing something without thinking about why you’re doing it – it is essentially a non-answer.

Agreed. That's mostly BS from people who make commissions from some vendor.

#2 is strange – why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router’s NAT at port 80 to open that server’s port to the public. What difference does it make to then have another firewall that needs to be port forwarded?

A firewall might be more advanced than just NAT/poking a hole; it may do intrusion detection (whatever that means) and DDoS protection.

#3 is a strange one – what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there’s nothing to access.

Maybe you've a bunch of IoT devices in your network that are sold by a Chinese company or any IoT device (lol) and you don't want them to be able to access the internet because they'll establish connections to shady places and might be used to access your network and other devices inside it.

#5 is the only one that makes some sense;

Essentially the same answer as in #3.

If we're talking about your home setup and/or homelab, just don't get a hardware firewall; those are overpriced and won't add much value. You're better off buying an OpenWRT-compatible router and ditching your ISP router. OpenWRT does NAT and has a firewall that is easy to manage, letting you set up whatever policies you might need to restrict specific devices. You'll also be able to set up things such as DoH / DoT for your entire network, set up a quick WireGuard VPN to access your local services from the outside in a safe way, and maybe use it to set up a couple of network shares. Much more value for most people, way cheaper.

A firewall might be more advanced than just NAT/poking a hole; it may do intrusion detection (whatever that means) and DDoS protection.

I mean, sure, but the original question of why there's a need for a second firewall still exists.

Maybe you’ve a bunch of IoT devices in your network that are sold by a Chinese company or any IoT device (lol) and you don’t want them to be able to access the internet because they’ll establish connections to shady places and might be used to access your network and other devices inside it.

This doesn't really answer the question. The device without a firewall would still be on the same network as the "sketchy IoT devices". The question wasn't about whether or not you should have outgoing rules on the router preventing some devices from making contact with the outside world, but instead was about what risk there is to a device that doesn't have a firewall if it doesn't have any services listening.
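To make that "no services listening means nothing to access" point concrete, here is a tiny Python sketch (the address and port are arbitrary examples) showing the three outcomes a connection attempt can have, and that a closed port with no listener is simply refused by the kernel:

```python
import socket

def probe(host: str, port: int, timeout: float = 1.0) -> str:
    """Return 'open' if something is listening, 'closed' if the kernel
    refused the connection, 'filtered' if a firewall silently dropped it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"       # a service answered the handshake
    except ConnectionRefusedError:
        return "closed"     # kernel sent TCP RST: no listener, nothing to attack
    except (socket.timeout, OSError):
        return "filtered"   # no answer at all: packet was dropped in transit
    finally:
        s.close()

# With no listener on this port, the probe reports "closed":
print(probe("127.0.0.1", 59999))
```

In other words, without a firewall an attacker on the same network can learn that a port is closed, but there is still no code of yours on the other end to exploit; the firewall's "drop" only changes "closed" into "filtered".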

Essentially the same answer and in #3

Somewhat, only I would solve it using an application layer firewall rather than a packet filtering firewall (if it's even possible to practically solve that with a packet filtering firewall without just dropping all outgoing packets, that is).

just don’t get a hardware firewall

What is the purpose of these devices? Is it because enterprise routers don't contain a firewall within them, so you need a dedicated device that offers that functionality?

I don't know what else there is to answer about the purpose of a hardware firewall.

Hardware firewalls have their use cases, mostly overkill for homelabs and most companies but they have specific features you may want that are hard or impossible to get in other ways.

A hardware firewall may do the following things:

  • Run DPI and effectively block machines on the network from accessing certain protocols, websites, or hosts, or detect when some user is about to download malware and block it;
  • Run stats and alert sysadmins to suspicious behaviors, like a user sending large amounts of confidential data to the outside;
  • Have "smart" AI features that will detect threats even when they aren't known yet;
  • Provide VPN endpoints and site-to-site connections. This is very common in brands like WatchGuard;
  • Higher throughput than your router while doing all the other operations above;
  • Better isolation.

An isolated device also means you can play around with your routers without having to think about security as much: you may break them or mess up some config, but you can be sure that the firewall is still in place and doing its job. The firewall becomes a virtual, physical, and psychological barrier between your network and the outside; there's less risk of plugging a wire into the wrong spot, or applying a configuration and suddenly having your entire network exposed.

Sure, you may be able to set up something on OpenWRT to cover most of the things I listed before, but how much time will you spend on that? Will it be as reliable? What about support? A Pi-hole is another common solution for those problems, and it may work until a specific machine decides to ignore its DNS server and go straight to the router / outside.

You can even argue that you can virtualize something like pfSense or OPNsense on some host that also virtualizes your router and a bunch of other stuff, however, is it wise? Most likely not. Virtualization is mostly secure but we've seen cases from time to time where a compromised VM can be used to gain access to the host or other VMs, in this case the firewall could be hacked to access the entirety of your network.

When you have to manage larger networks, let's say 50+ devices, I believe it becomes easier to see how a hardware firewall can be useful. You can't simply trust all those machines, users, and the software policies on them to ensure that things are secure.

Have “smart” AI features that will detect threats even when they aren’t known yet;

This is a crazy one -- pattern recognition of traffic.

Higher throughput than your router while doing all the other operations above;

Fair point! I hadn't considered that one.

You can even argue that you can virtualize something like pfSense or OPNsense on some host

This is an intriguing idea. I hadn't heard of it before.

also virtualizes your router

How would one virtualize a router...? That sounds strange, to say the least.

[virtualized router/firewall] This is an intriguing idea. I hadn’t heard of it before.

Virtualized routers and firewalls are more common than you might think, specially in large datacenters and other deployments that require a lot of flexibility / SDN.

Other people just like the convenience of having a single machine / mini PC whatever that runs everything from their router/firewall to their NAS and VMs to self-host stuff.

But... at the end of the day virtualization is only mostly secure and we’ve seen cases where a compromised VM can be used to gain access to the host or other VMs, in this case the firewall could be hacked to access the entirety of your network.

You most likely don't need an on-device firewall if you're in your home network behind a router that has a firewall. If you disabled that firewall as well, and one of your devices has e.g. SSH activated using username and password, then there is nothing stopping a "hacker" or "script kiddie" from hammering your SSH port and brute-forcing your password. That person can then take over your PC and e.g. install software for their botnet, install a keylogger, take over your browser session including all authentication cookies, or do many other bad things.
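If you do have to expose SSH, one common mitigation is to rate-limit new connections at the firewall; a minimal nftables sketch (table name and thresholds are arbitrary choices, and this is a global limit, not per source address):

```
# Drop new SSH connection attempts beyond 4 per minute,
# turning a brute-force run into a crawl.
table inet ssh_guard {
  chain input {
    type filter hook input priority 0; policy accept;
    tcp dport 22 ct state new limit rate over 4/minute drop
  }
}
```

Of course, disabling password authentication in favour of keys in `sshd_config` does far more against brute force than any rate limit.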

If you are using public WiFi, I'd recommend a good on-device firewall, or better, just use a VPN to get an encrypted tunnel to your home (where you would need to open a port for that, though) and go out into the internet from there.

You most likely don't need an on-device firewall if you're in your home network behind a router that has a firewall.

Under what circumstance(s) would one need a device firewall? If I were to guess, I would say that it is when the internet facing device doesn't contain a firewall within it (e.g. some enterprise-grade router), so a dedicated firewall device must exist behind it.