Should I use one docker-compose.yml for all my services?

bronzing@lemmy.fmhy.ml to Selfhosted@lemmy.world – 49 points –

Hi,

I'm using docker-compose to host all my server services (jellyfin, qbittorrent, sonarr, etc.). I've recently grouped some of them into individual categories and then merged the individual docker-compose.yml file I had for each service into one per category. But is there actually any reason for not keeping them together?

The reason why is I've started configuring homepage and thought to myself "wouldn't it be cool if instead of giving the server IP each time (per configured service in homepage) I'd just use the service name?" (AFAIK this only works if the containers are all in the same file).

34

For simplicity's sake alone I would say no. As long as services don't share infrastructure (e.g. a database) you shouldn't mix them, so you have an easier time updating your scripts.

Another point is handling stacks. When you create containers via Compose you are not supposed to touch them individually. Collecting them all together, or even just grouping them by category, muddies that concept, since you have unrelated services grouped in a single stack and would need to update/up/down/... all of them even if you only needed that for a single one.

Lastly, networks. Usually you'd add networks to your stacks to isolate their respective backends into closed networks, with only the exposing container (e.g. a web frontend) being on the publicly available network, to increase security and avoid side effects.
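A minimal sketch of that layout (service and network names here are made up for illustration): only the frontend sits on the public network, while the database stays on an internal one.

```yaml
# Hypothetical stack: only "web" is reachable from outside;
# "db" lives on an internal network that only the frontend can reach.
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    networks:
      - public
      - backend
  db:
    image: postgres:16
    networks:
      - backend

networks:
  public:
  backend:
    internal: true   # containers here get no route to the outside world
```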

So right now I have a single compose file with a file structure like this:

docker/
├─ compose/
│  ├─ docker-compose.yml
├─ config/
│  ├─ service1/
│  ├─ service2/

Would you in that case use a structure like the following?

docker/
├─ service1/
│  ├─ config/
│  ├─ docker-compose.yml
├─ service2/
│  ├─ config/
│  ├─ docker-compose.yml

Or a different folder structure?

The second one is exactly what I have. One folder for each service containing its compose file and all persistent data belonging to that stack (unless it's something like your media files).

I have a folder that all my docker services are in. Inside it is a folder for each discrete service, and within that folder is the compose file necessary to run the service. Also in the folder are all the storage folders for that service, so it's completely portable: move the folder to any server, run it, and you're golden. I shut down all the services with a script, then I can just tar the whole docker folder, and every service and its data is backed up and portable.

In case anyone cares here is my script, I use this for backups or shutting down the server.

#!/bin/bash

logger "Stopping Docker compose services"

services=(/home/user/docker/*)    # array of the full paths to all subdirs

# bring each stack down in parallel; quoting guards against spaces in paths
for dir in "${services[@]}"
do
    docker compose -f "$dir/docker-compose.yml" down &
done

#wait for all the background commands to finish
wait 

Exactly my setup and for exactly the reasons you mentioned

No, no you should not. I haven't used homepage, but you probably just need to attach the services to the same network, or map the ports on the host and use the host IP.

You probably want to keep services with different life cycles in separate compose files so you can shut down/restart/reconfigure them separately. If containers depend on each other, then combining them into one compose file makes sense.
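For the "containers depend on each other" case, a rough sketch (image and service names are illustrative) of what that grouping could look like in one file:

```yaml
# app and its database share a life cycle, so they belong in one stack
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    depends_on:
      db:
        condition: service_healthy      # start only once the DB healthcheck passes
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
```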

That said, experimenting is part of the fun, nothing wrong with testing it out and seeing if you like it.

No, keep them ungrouped, migration to a new server is much easier, otherwise you need to migrate everything everywhere all at once

You can have the same effect (connect to the named container) if you create a docker network and place everything on the same network

I would not. Create an external network and just add those to the compose files.
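A sketch of the external-network approach (the network name `proxy` is made up): create the network once on the host, then reference it from each stack's compose file.

```yaml
# First, once on the host:  docker network create proxy
services:
  jellyfin:
    image: jellyfin/jellyfin
    networks:
      - proxy

networks:
  proxy:
    external: true   # pre-existing network shared across stacks
```

Containers attached to the same network can then reach each other by service/container name via Docker's built-in DNS, even if they live in different compose files.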

Bingo. Or just bite the bullet and dive into Kubernetes

Back when I used to use Docker this is what I was doing. If you use a reverse proxy that is Docker-aware (eg Traefik), it can still connect to the services by name and expose them out as subdomains or subpaths based on the names.
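As a rough illustration of the Docker-aware reverse proxy idea with Traefik (the hostname and router name here are invented), services opt in via container labels:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      # route requests for this (hypothetical) subdomain to the container
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.home.example`)"
      - "traefik.http.services.jellyfin.loadbalancer.server.port=8096"

networks:
  proxy:
    external: true
```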

But I graduated to Kubernetes a long time ago.

Overkill for home use

You can use an external network if you wish to refer to them all by a name. Just make sure all the containers you wish to refer to are in it.

A compose file is meant for different components of a single service but you're allowed to experiment with whatever you want

I personally don't. It is just messier. I only group things that belong together, like a webserver+database, torrentclient+vpn and so on.

I'll be the opposite of everyone, I guess: I have all my services in one compose file and have never had an issue with it. Why? I have no exposed ports and everything is accessed through a reverse proxy, and the big one: it's easy to just run docker compose and have them all come up or down.

Same for me. It all mostly started from the desire to have a single MariaDB and PostgreSQL container holding all the databases. Not sure if I could achieve the same result with different compose files; perhaps I could, but I've never had the need.

I actually find my setup super comfortable to use

I was thinking about that just today. I have something like 30+ services running in a single compose file and maintenance is slowly becoming hard. I'll probably move to multiple compose files.

I’ve thought about going that route, but ultimately decided to adopt something like portainer.io. My thought process behind it was that some projects within each category may have overlapping dependencies and so I’d end up with multiple entries for a particular dependency in the same file which I didn’t like.

I don’t expose services to the internet from my home lab, so I generally just add host entries manually to each of my computers so that I don’t have to type in ip and port.

I have multiple files but a single stack. I use an alias to call compose like this:

docker compose -f file1.yaml -f file2.yaml

Etc.
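That alias might look something like this (file names are illustrative). Note that later files override matching keys in earlier ones:

```shell
# ~/.bashrc fragment — merge several compose files into one logical stack
alias dc='docker compose -f file1.yaml -f file2.yaml'

# then, for example:
#   dc up -d
#   dc down
```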

I go so far the other way with this, personally: I actually have a separate LXC for each docker container, and a lot of the time I use docker run instead of docker-compose.

I've still not had anyone explain to me why compose is better than a single command line.

I always thought the compose file is great for maintenance. You can always save the docker run commands elsewhere so at the end of the day it's more of an orchestration choice.