Best approach for Docker resilience with two hosts

Sim@lemmy.nz to Selfhosted@lemmy.world – 33 points

I'm running Docker on an Ubuntu server with around 50 containers, most of them administered via Portainer. Configuration files and small databases for the container applications are stored on the local SSD; media and larger files are stored on a NAS.

NAS data and the container folders are backed up.

I have a second identical machine doing nothing. What would you recommend researching to add resilience to this setup? Top priority is quick and easy restoration should the SSD fail - everything else is relatively easy to replace.

I'll create an SSD RAID, but I like the idea of a second host.


You can use Docker Swarm (or a better container orchestrator) to have the containers automatically fail over to the second host.

Swarm will also spread the load out over both hosts, but all your data would need to be accessible from both hosts.
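
For reference, the basic Swarm commands look roughly like this; a minimal sketch, where the IP address, the worker token placeholder, and the whoami test service are all hypothetical examples, not anything from the original setup:

```sh
# On host 1: initialise the swarm (the advertise address is a hypothetical example)
docker swarm init --advertise-addr 192.168.1.10

# On host 2: join as a worker, using the token that `docker swarm init` printed
docker swarm join --token <worker-token> 192.168.1.10:2377

# Deploy a test service with 2 replicas; Swarm reschedules them onto the
# surviving node if one host goes down
docker service create --name whoami --replicas 2 --publish 8080:80 traefik/whoami
```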

Thanks. That means I'd need to move all data off the hosts and onto, say, a NAS, at which point the NAS becomes the single point of failure. Can I operate a swarm without doing that, but still duplicate everything from host 1 to host 2, so host 2 could take over relatively seamlessly (apart from local DNS and moving port forwarding to nginx on the remaining host)?
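
One way to approximate that without shared storage is a periodic one-way sync of the container data from host 1 to host 2. A minimal sketch, assuming a hypothetical /opt/docker data directory and SSH access between the hosts (databases should be stopped or dumped first so they aren't copied mid-write):

```sh
# Hypothetical paths and hostname: mirror container configs/data to host 2.
# --delete keeps host 2 an exact copy; run this from cron or a systemd timer.
rsync -az --delete /opt/docker/ host2:/opt/docker/
```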

I think you can run a Ceph or GlusterFS cluster to share files between the nodes.

I think three nodes are required for that.
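
For GlusterFS specifically, the three-node setup looks roughly like this; a sketch assuming hypothetical node names host1/host2/host3 and a brick directory at /data/brick1:

```sh
# Run on host1: add the other two nodes to the trusted pool
gluster peer probe host2
gluster peer probe host3

# Create a 3-way replicated volume from one brick per node, then start it
gluster volume create gv0 replica 3 \
  host1:/data/brick1/gv0 host2:/data/brick1/gv0 host3:/data/brick1/gv0
gluster volume start gv0

# On each Docker host: mount the volume where the containers expect their data
mount -t glusterfs host1:/gv0 /mnt/gv0
```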

Thanks. Can I use my existing single-node Docker setup to start a new swarm, or do I have to start from scratch?