486

@486@lemmy.world
0 Posts – 16 Comments
Joined 1 year ago

That's good, I never liked the clunky .home.arpa domain.


The guide mentions:

Your ISP will give you the first 64 bits, and your host machine will have the last 64 bits.

This isn't correct. While some ISPs do give you only the first 64 bits (a single /64 prefix), that isn't recommended and isn't terribly common either. An ISP should delegate a prefix shorter than 64 bits: typically a residential user gets a /56, and commercial users usually get a /48. From such a prefix the user can then derive multiple /64 networks to use on the local network as desired.
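For illustration, here is a small Python sketch using the standard library's ipaddress module (the 2001:db8::/56 prefix below is just documentation-range filler, not a real delegation) that shows how many /64 networks fit into a delegated /56:

```python
import ipaddress

# Hypothetical /56 delegated by the ISP (2001:db8::/32 is the IPv6 documentation range).
delegated = ipaddress.ip_network("2001:db8:abcd:1200::/56")

# Every /64 that can be carved out of the delegation: 2^(64 - 56) = 256 subnets.
lans = list(delegated.subnets(new_prefix=64))
print(f"{len(lans)} usable /64 networks, e.g. {lans[0]} and {lans[1]}")
```

With a /48 the same calculation yields 2^16 = 65536 possible /64 networks.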

I wouldn't run it as a router due to its high power consumption, but it would be a fine computer for retro gaming with games up until roughly 2005 if you add a graphics card.

Are you sure there is exactly one DHCP server running?


Try disabling the second DHCP server altogether. You only need one, since you have a flat network.

No, it is not like Docker. You can treat an LXC container pretty much like a VM in most respects, including firewall rules. To answer the question: you can use fail2ban just like you did in your VM, meaning you run it inside the LXC container, where fail2ban can change that container's firewall rules as it sees fit.
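As a rough sketch (assuming an SSH jail and the nftables ban action; the log path and port are placeholders you would adjust to your setup), a minimal /etc/fail2ban/jail.local inside the container could look like this:

```
[DEFAULT]
banaction = nftables-multiport

[sshd]
enabled  = true
port     = ssh
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 1h
```

Since the container has its own network namespace and firewall tables, the resulting rules only affect traffic to that container.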


As much as I like RISC-V, it is kind of ironic to suggest RISC-V is the solution to this. At least as it stands, because of RISC-V's simplicity, most if not all current RISC-V CPUs don't even run microcode, so there is nothing to update/fix in case of a CPU bug. There's even a very recent example of this problem with that Chinese RISC-V CPU that has the "GhostWrite" bug, which allows any unprivileged process to gain root access.


I'm exclusively running unprivileged LXC containers and haven't had any issues regarding the firewall, neither with iptables nor nftables.

Edit: 75 LXC containers, 22 VMs.

That's a lot of power draw for so few VMs and containers. Any particular applications running that justify such a setup?

What does it offer that nginx doesn't?

Automatic HTTPS, so you don't have to use certbot or something similar to obtain/renew certificates. Also, its configuration is really simple and straightforward.
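For example (assuming the web server in question is Caddy, with example.com and a backend on port 8080 as placeholders), a complete reverse-proxy setup with automatic HTTPS can be as short as this Caddyfile:

```
example.com {
    reverse_proxy localhost:8080
}
```

Caddy then obtains and renews the certificate for example.com on its own, with no extra tooling.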

While you certainly can run AI models that require such a beefy GPU, there are plenty of models that run fine even on a CPU-only system. So it really depends on what exactly Ollama is going to be used for.

It is. It might end up on disk in swap if you run low on memory (and have some sort of disk-based swap enabled), but usually it is located in RAM.


I understood that. My point was rather that in this particular case (a CPU bug that could have been fixed via microcode, but AMD chose not to do so for certain CPUs), RISC-V wouldn't have been of any advantage, because there would be no microcode to update in the first place. Sure, one could introduce microcode for RISC-V, and people have argued in favor of doing so for this exact reason, but the architecture was intentionally designed not to require microcode.

IT-Tools - hands down one of the coolest self-hosted tool sets you can use.

Looks similar to CyberChef. Any reason to use that one over CyberChef?

No, tmpfs is always located in virtual memory. Have a look at the kernel documentation for more information about tmpfs.

I would advise against using SSDs for storage of media and the like, not only because of their higher price, but also because flash memory cells tend to fade over time, causing read speeds to decrease considerably. This is particularly the case for mostly read-only workloads: with each read operation, the flash memory cell being read loses a bit of its charge. Eventually the margin for the controller to read the data reliably becomes so small that it takes the controller many read attempts to recover the correct data. In the worst case this can leave the SSD controller unable to read some data altogether.