retrodaredevil

@retrodaredevil@lemmy.world
0 Posts – 6 Comments
Joined 1 year ago

Also has the benefit of being a completely local DNS server for all your devices to use. I think you can also add custom entries if you want to refer to your devices by name over DNS. It has some caching benefits too, so fewer DNS requests go out of your home network.
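If the tool in question is Pi-hole, custom entries can be added in the web UI under Local DNS Records; on my install they end up as hosts-style lines in /etc/pihole/custom.list. A minimal sketch, with made-up addresses and names:

```
# /etc/pihole/custom.list - hosts-style local DNS records (example values)
192.168.1.20 nas.home.lan
192.168.1.30 printer.home.lan
```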

Personally, I set up AdGuard Home because it has DNS-over-HTTPS support out of the box, which means your ISP cannot see your DNS requests. Pi-hole supports this too, but it requires additional setup.
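For reference, a DoH upstream in AdGuard Home is just an https:// URL in the upstream list. A rough AdGuardHome.yaml excerpt (the resolvers shown are only examples; use whichever upstreams you trust):

```yaml
# AdGuardHome.yaml (excerpt) - upstreams queried over DNS-over-HTTPS
dns:
  upstream_dns:
    - https://cloudflare-dns.com/dns-query
    - https://dns.quad9.net/dns-query
```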

I do something similar, but I avoid gitignore at all costs because any secret data should be readable only by root. Plus, any data that is not version controlled goes in a common directory, so all I have to do is back up that directory and I'm good. It makes moving between machines easy if I ever need to do that.
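A minimal sketch of the permissions side, with made-up paths (the idea is just that secrets are root-only and everything uncontrolled lives under one directory you can back up wholesale):

```sh
# secrets readable only by root (paths are examples)
sudo chown root:root /srv/appdata/myapp/secrets.env
sudo chmod 400 /srv/appdata/myapp/secrets.env

# one command backs up everything that isn't version controlled
sudo tar czf appdata-backup.tar.gz /srv/appdata
```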

I did something similar a while back and this tutorial helped me: https://kb.vander.host/operating-systems/how-to-import-a-qcow2-file-to-proxmox/

I think the import command it has you run lets you choose the target storage and format. It's my understanding that after you import it, the VM is no longer using the qcow2 file as the backing drive.
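If memory serves, the command in question is qm importdisk, which copies and converts the image into whatever storage you point it at, so the VM ends up with a native disk rather than the original qcow2. A sketch with a placeholder VM ID and storage name:

```sh
# copy the image into Proxmox storage (100 and local-lvm are placeholders)
qm importdisk 100 image.qcow2 local-lvm
# attach the newly imported disk to the VM, e.g. as its first SCSI disk
qm set 100 --scsi0 local-lvm:vm-100-disk-0
```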

It really depends on how you have your /etc/network/interfaces set up. For one of your bridges, Proxmox needs to have an IP. If you want Proxmox's traffic to go through OPNsense, it should have an IP on the LAN bridge. You have to make sure the interfaces file explicitly sets a static IP or explicitly says the bridge will get its IP via DHCP.

Since you set a static IP for Proxmox on the OPNsense side, you would need to manually set Proxmox to use DHCP on the LAN bridge. In my experience, this does not work, because Proxmox will fail to get an IP via DHCP if OPNsense is not up yet. I highly recommend you set a static IP in the interfaces file instead.
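For reference, a static stanza for the LAN bridge in /etc/network/interfaces looks roughly like this on Proxmox (the addresses and bridge name are examples; the gateway would be OPNsense's LAN IP):

```
auto vmbr1
iface vmbr1 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```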

I'm not knowledgeable about restricting communication between VMs, but I have tried to make my docker networks more secure.

I went a bit overkill with my reverse proxy and all the docker networks it's connected to. For each service I want to expose through my reverse proxy, I manage a network specifically for that service in my caddy docker compose file. I then refer to that external network in the service's docker compose file, so that caddy can access it. For example, caddy is on caddy_net-grafana and on caddy_net-homepage, and Grafana and Homepage are on those networks respectively. With this setup, caddy can talk to Grafana and Homepage, but Grafana and Homepage cannot talk to each other.
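A trimmed-down sketch of the two compose files (names follow my caddy_net-* convention and are otherwise made up):

```yaml
# caddy's docker-compose.yml - caddy creates and joins one network per service
services:
  caddy:
    image: caddy:2
    networks:
      - caddy_net-grafana
      - caddy_net-homepage
networks:
  caddy_net-grafana:
    name: caddy_net-grafana   # fixed name so other compose files can find it
  caddy_net-homepage:
    name: caddy_net-homepage
```

And in Grafana's compose file, that network is referenced as external:

```yaml
services:
  grafana:
    image: grafana/grafana
    networks:
      - caddy_net-grafana
networks:
  caddy_net-grafana:
    external: true
```

Since Grafana only joins its own caddy network, it has no route to Homepage's network, which is what enforces the isolation.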

It wasn't too bad to set up. I made my own conventions for keeping it manageable, and it works for me. I did run into the problem where I had to increase the default subnet pool, since after you create something like 30 or 31 networks there aren't any subnets left to give out to new docker networks.


Yup, it works pretty well for me. You'll just likely have to increase the default address pool in daemon.json if you go all out.
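The setting is default-address-pools in /etc/docker/daemon.json. Something like this sketch (the base range is just an example) splits a /16 into /24s, giving you around 256 networks before running out:

```json
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
```

Docker needs a restart after changing it.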