Should I learn Docker or Podman?

stepanzak@iusearchlinux.fyi to Selfhosted@lemmy.world – 60 points –

Hi, I've been thinking for a few days about whether I should learn Docker or Podman. I know that Podman is more FOSS and I like it more in theory, but maybe it's better to start with Docker, for which there are a lot more tutorials. On the other hand, maybe it's better to learn Podman straight away, since I don't know either of the two, and avoid having to change habits later. What do you think? For context: I know how containers work in theory, I think I know Linux fairly well, but I've never actually used Docker or Podman. In other words: if I want to eventually end up with Podman, is it easier to start with Docker and then learn Podman, or to start with Podman right away? Thanks in advance


In case you haven't started yet: learn Docker, but use Podman.

OP, listen to this person. Docker will earn you cash. Podman is nicer to work with for your own shit.

Docker and podman in general work the same, commands are the same, ...

The biggest difference, now that I'm trying to migrate from Docker to Podman, is fighting with volume bind permissions for databases and such.

Finished migration of 3 containers, 50+ left.

At the end of the day, you’re running containers and both will get the job done. Go with whatever you want to start, and be open to try the other when you inevitably end up with jobby job that uses the other one instead.

Docker is more ubiquitous, Podman has use cases that diverge from Docker.

Discover the use case and decide from there.

That said, Docker is a good starting point: their documentation is pretty great, and once you know Docker you'll better appreciate why Podman is different.

As a podman user myself, they're essentially the same. I look at the docker documentation when learning new things about podman. 99.9% of the time, it's exactly the same. For the features that aren't in podman, you can use the podman-docker package. This gets you a daemon so you can have some docker-specific features such as a container being able to start/stop other containers by mounting the socket as a volume, and it allows you to use docker-compose.
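For illustration, here is a minimal compose sketch of the socket-mount pattern mentioned above; the Watchtower image is just one common example of a container that manages other containers, and the service name is illustrative. With podman-docker / podman.socket, the same path is typically aliased to the Podman socket.

services:
  watchtower:
    image: containrrr/watchtower:latest
    volumes:
      # hand the engine's API socket to the container so it can
      # start/stop/update other containers
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped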

It's easier to start with docker first simply because of the sheer amount of learning resources available on the internet. If you're having issues, you can usually find a solution quickly with a search engine.

That being said, there's not much difference in how you use them these days. You can even run Docker Compose against Podman.

I've read somewhere on Lemmy that podman-compose is unmaintained and shouldn't be used. Can't find it now though.

I can't comment on that, but actual Docker Compose (as distinct from Podman Compose) works great with Podman.
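For reference, a minimal sketch of how Docker Compose is commonly pointed at Podman (rootless, assuming a systemd user session; the socket path comes from podman.socket):

# expose Podman's Docker-compatible API socket for the current user
systemctl --user enable --now podman.socket
# tell the docker / docker compose CLI to talk to that socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
# now a regular compose file can be brought up against Podman
docker compose up -d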

You didn't say what your goal is. What do you want to achieve? For instance, if you work in IT you should probably learn Docker, unless Podman is more relevant to your actual daily tasks.

My goal is self-hosting stuff, mainly on my Raspberry Pi. I'm sure I'm not going to work in IT for the next 3 years, and probably not for at least a few years after that.

Then just go for Docker. Otherwise you may make it unnecessarily difficult for yourself and get discouraged. In a few years you may revisit the question and see if you still have an interest in podman.

I tried out podman at first, but I found many docker instances simply provide a string of crap instead of explanations. It was easy to get a grasp of how docker worked, and now that I have an idea I feel like I could jump into podman better.

Honestly, if you have never used containers before, I would suggest starting with Docker, as it has more readily accessible beginner walkthroughs and tutorials. From there, you will have a good idea as to whether switching to Podman is the right move for you or not.

Personally, I started with docker and haven’t moved from there since I don’t see a need (yet). I have dozens of services running on docker. I don’t know how heavy of a lift it would be to learn podman but like I said, I don’t feel the need to do so.

Maybe try out both and see which one you like more?

Just to offer the other perspective: I started with Podman years ago. I knew very little about containers, and I would say it made the learning curve a lot steeper. Most guides and READMEs use Docker, and when things didn't work I had to figure out whether it was networking, SELinux, rootless, not having the Docker daemon, etc., without fully understanding what those things were, because I didn't know Docker. But when I started running stuff on Kubernetes, it was really easy. Pods in Podman are isomorphic to Kubernetes pods. I think the pain was worth it, but it was definitely not easy at the time. Documentation, guides, and networking have improved since then, so it may not be as big of a deal now.
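To make that pod/Kubernetes correspondence concrete, here is a minimal sketch (pod, container and image names are illustrative):

# create a pod and run a container inside it
podman pod create --name mypod -p 8080:80
podman run -d --pod mypod --name web docker.io/library/nginx:alpine
# export the pod as a Kubernetes manifest...
podman generate kube mypod > mypod.yaml
# ...and recreate it from that manifest elsewhere
# (newer releases: podman kube play; older ones: podman play kube)
podman kube play mypod.yaml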

Well sh.t… now I got a weekend project hahah

Both. Start with docker as there's a buttload of tutorials. Once you're familiar with it jump to podman. Learn the differences, use both for a while and decide what suits you best.

They're very similar so you pretty much can't go wrong. Podman, I believe, is more secure by default (or aims to be) so might run into more roadblocks with its use.

so might run into more roadblocks with its use.

This has been my experience with Podman. That's not to say these roadblocks are without reason or merit, but there is always a trade-off in convenience when optimizing for security.

Docker: there are more resources for it, and once you know it, Podman should be an easy migration if you want to switch. Also, I'm not sure about your claim that Podman is more FOSS than Docker; it's "better" because it doesn't run as root, but other than that I don't know of any advantages that aren't a derivation of "it runs as a regular user".

Also I’m not sure about your claim that Podman is more FOSS than docker

The issue with Docker isn't the core product itself, it's the ecosystem: Docker Hub, Kubernetes, etc.

So if someone made a non-FOSS frontend for Podman, would that somehow make Podman less FOSS? Or if they started working with Podman? You don't need to use any of those other products, and it's not correct to say that Docker is less FOSS because people have written proprietary software that uses it.

I see your point and would usually think the same way / agree with it; however, the issue with Docker is that you're kind of forced and coerced into using those proprietary solutions around it. It also pushes people into a situation where it's really hard not to depend on constant internet services to use it.

I'm not sure what you're talking about. Most people self-hosting don't need anything special, just a docker compose file. What proprietary software do you think is needed that's not needed for Podman?

I’m not sure what you’re talking about. Most people self-hosting don’t need anything special, just a docker compose file

Yes, and then they proceed to pull their software from Docker Hub (closed, and it sometimes decides to delete things), and most of them lack the basic Linux knowledge to do it any other way. This is a real problem.

On the same machine I have Docker running both as root and not as root. I choose rootful or rootless depending on what the container needs to do.

I think the only advantage is that Podman runs rootless out of the box, whereas with Docker you have to do a few extra steps once it's installed.

Podman is [...] “better” because it doesn’t run as root, but other than that I don’t know of any advantages to it that are not a derivation of “it runs as a regular user”.

Podman can run in rootless mode (with some caveats), but it is still able to run as root — it doesn't only have the capability to run as a "regular user".

I still haven't looked into Podman properly, but Docker is much easier to learn because, as you said, there's a lot more material available online. I'd say start with Docker, and if in the future you find that Podman better fits your needs, you can always switch (they shouldn't be that different).

Doesn't really matter for basic stuff as it will be the same.

Once you get into container orchestration the differences start and then you basically need to decide what you want to get out of it.

Docker and docker-compose. Then learn Podman after you have some experience, if you want to...

Or jump into Kubernetes (or minikube) instead of Podman if you want to do highly useful things.

But first, get comfortable building images with a Dockerfile, then running them in a meaningful way, networking them, and locking them down.
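As a rough sketch of that workflow (image name, app and port are placeholders, and the hardening flags are just examples of "locking down"):

# Dockerfile
FROM alpine:3.19
RUN apk add --no-cache python3
COPY app.py /app/app.py
USER nobody
CMD ["python3", "/app/app.py"]

# build, network and run it with a few restrictions
docker build -t myapp .
docker network create appnet
docker run -d --name myapp --network appnet \
  -p 127.0.0.1:8000:8000 --read-only --cap-drop ALL myapp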

Podman, only if you really care about using FOSS, want first-class rootless mode, and don't mind the hassle of scarce learning resources and tutorials for the Podman features that differ from Docker.

Otherwise docker.

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

DNS: Domain Name Service/System
Git: Popular version control system, primarily for code
HTTP: Hypertext Transfer Protocol, the Web
LXC: Linux Containers
Plex: Brand of media server package
SSH: Secure Shell for remote terminal access
nginx: Popular HTTP server


Here goes my experience.

When I started the self-hosting trip, I was against containers and tried to avoid them at all costs. Then I learned about containers, and now I am still against them, but less vividly so. I have used them and still use them.

Containers are good for the self-hoster because they allow fast deployment and easy testing of lots of services. They are good for developers because they can provide one common installation approach that greatly reduces user issues and support requests.

But containers have downsides as well. First of all, they make the user dumber: instead of learning something new, you blindly "compose pull & up" your way through. Easy, but it's a dumbifier, and that's not a good thing. Second, there is a dangerous trend where projects only release containers, and that's bad for freedom of choice (a bare-metal install, as complex as it might be, needs to always be possible), and while I am aware that you can download an image and extract the files inside, that's more of a hack than a solution. Third, with containers you are forced to use whatever deployment the devs have chosen for you. Maybe I don't want 10 Postgres instances, one for each service, or maybe I already have my nginx reverse proxy, and so on. I have seen projects release different compose files for different scenarios, but at that point I would prefer to deploy on bare metal.

That said, containers are unavoidable today, so study and embrace them; you will not be disappointed, as it's a cool piece of tech. But please steer clear of Docker and go with Podman instead. Podman doesn't rely on a potentially insecure socket and does not require an always-running daemon. Podman also doesn't make you run services as root by default, which you should never do. Also, networking feels clearer on Podman, and Podman feels more modern by using nft instead of iptables. Yes, most of this can be fixed on Docker, but since Podman is a drop-in replacement, why bother? Also, Podman is truly open source while Docker, shockingly, is not.
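As a minimal sketch of the daemonless/rootless point: a rootless container can be run directly and then handed to systemd as a user service (container name and image are illustrative; newer Podman versions also offer Quadlet for the same job):

# run a rootless container, no daemon involved
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
# generate a systemd user unit for it
podman generate systemd --new --files --name web
mkdir -p ~/.config/systemd/user
mv container-web.service ~/.config/systemd/user/
podman rm -f web                     # the unit (--new) recreates the container itself
systemctl --user daemon-reload
systemctl --user enable --now container-web.service
loginctl enable-linger $USER         # keep user services running after logout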

Here is my wiki page on the subject: https://wiki.gardiol.org/doku.php?id=gentoo:containers feel free to read it.

One last thought: updating containers should not be taken lightly. It's so easy and fast that you might be tempted to set up cron jobs or install Watchtower, but sooner or later you will end up with a broken service and lost data. So backup, always backup, and update deliberately.

Tl;dr: containers are unavoidable today and are a cool piece of tech worth investigating. Don't use them blindly, as there are security issues involved, and I hope the trend of making containers the only option doesn't take hold, because containers also make self-hosters dumber, and that's not good.

First of all, they make the user dumber: instead of learning something new, you blindly "compose pull & up" your way through. Easy, but it's a dumbifier, and that's not a good thing

I don't like this Docker trend because, besides what you've said, it 1) leads you towards a dependence on proprietary repositories and 2) robs you of the experience of learning Linux (more on that later), but it does lower the bar for newcomers and lets you set something up really fast. In my opinion you should be very skeptical about everything that is "sold to the masses"; just go with a simple Debian system (command line only), SSH into it, and install what you really need. Take your time to learn Linux and whatnot.

there is a dangerous trend where projects only release containers, and that's bad for freedom of choice (a bare-metal install, as complex as it might be, needs to always be possible), and while I am aware that you can download an image and extract the files inside, that's more of a hack than a solution

And the second danger is that when developers don't have to consider the setup of their solution, the code tends to be worse. Why bother with single binaries, stuff that is easy to understand, and properly documented things when you can just pull 100 dependencies and compose files? :) This is the unfortunate reality of modern software.

Third, with containers you are forced to use whatever deployment the devs have chosen for you. Maybe I don't want 10 Postgres instances, one for each service, or maybe I already have my nginx reverse proxy, and so on

See? Poorly written software. Not designed to be sane and reasonable and integrate with existing stuff.

But be aware that containers are not the solution to self-hosting-made-easy; specifically, containers have been created to solve different issues than self-hosting!

Your article said it all and is very well written. Let me expand a bit into the "different issues":

The thing with Docker is that people don't want to learn how to use Linux, and are buying into an overhyped solution that makes their life easier without understanding the long-term consequences. Most of the pro-Docker arguments revolve around security and reproducibility, and that's mostly BS because 1) systemd can provide as much isolation as Docker containers and 2) there are other container solutions and nobody cares about them.

Companies such as Microsoft and GitHub are all about re-creating and re-configuring the way people develop software so everyone will be hostage to their platforms; that's why nowadays everything and everyone is pushing for Docker/DockerHub/Kubernetes, GitHub Actions and whatnot. We now have a generation that doesn't understand the basics of their tech stack, about networking, about DNS, about how to deploy a simple thing onto a server that doesn't use some Docker BS or isn't a 3rd-party cloud xyz deploy-from-GitHub service.

Before anyone comments that Docker isn’t totally proprietary and there’s Podman consider the following: It doesn’t really matter if there are truly open-source and open ecosystems of containerization technologies. In the end people/companies will pick the proprietary / closed option just because “it’s easier to use” or some other specific thing that will be good on the short term and very bad on the long term.

Docker may make development and deployment very easy and lower the bar for newcomers, but it has the dark side of being designed to reconfigure and envelop the way development gets done so someone can profit from it. That is sad and, above all, sets dangerous precedents and creates generations of engineers and developers who don't have truly open tools like we did. There's a LOT of money in transitioning everyone to the "deploy-from-github-to-cloud-x-with-hooks" model, so those companies will keep pushing for it.

At the end of the day technologies like Docker are about commoditizing development and about creating a negative feedback loop around it that never ends. Yes, I say commoditizing development because if you look at it those techs only make it easier for the entry level developer and companies instead of hiring developers for their knowledge and ability to develop they’re just hiring “cheap monkeys” that are able to configure those technologies and cloud platforms to deliver something.

Successful cloud companies are no longer about selling infrastructure; we're past that. The profit is now in transforming developer knowledge into products/services that can be bought with a click.

There is a lot of truth in your words.

Unfortunately, things will not change.

At least let's use podman and I will keep fighting for containers being at least optional.

At least let’s use podman and I will keep fighting for containers being at least optional.

Well, systemd can also provide as much isolation and security. It's another option... :) as well as LXC.

You can host your own container repository and write your own Dockerfiles to control all your own deployments, though; it's not like you have to be at the behest of any company to use containerization to make your own life easier with the benefits of reproducibility.

Do you write all the programs you use too, or do you rely on the work of others and are drawing an arbitrary line in the sand when it comes to containerizing those apps?

Yes, I can, but this isn't about what you or I can do. This is about what people actually do, the direction technology is taking, and the lack of freedom that follows. Distribution is important.

Do you object to software repositories that install dependencies precompiled?

Your "lines in the sand" seem idiosyncratic and arbitrary. You are happy presumably to use precompiled software or at the very least rely on software written by others which is already ceding some freedom but then claim that using systems that package all the dependencies into a single runnable unit is too much and cedes too much freedom?

I agree that containers are allowing software projects to push release engineering and testing downstream and cut corners a bit, but that was always the case with precompiled releases that were only tested on a single version of a single distro.

Look, this isn't even about "drawing lines in the sand". I do understand why people use containers, and I use them in certain circumstances, usually not Docker, but that's more due to the requirements in said circumstances than a personal decision.

Do you object to software repositories that install dependencies precompiled? (...) but then claim that using systems that package all the dependencies into a single runnable unit is too much and cedes too much freedom?

No, and I never claimed that. I'm perfectly happy to use single-binary, statically linked applications; in fact I use quite a few, such as FileBrowser and Syncthing, and they're very good and reasonable software. Docker, however, isn't one of those cases, or at least not just that.

I agree that containers are allowing software projects to push release engineering and testing downstream and cut corners a bit

Docker is being used and abused for cutting corners, and now we have developers who are simply unable to deploy any piece of software without it. They have zero understanding of infrastructure and anything related to it, and this has a big negative impact on the way they develop software. I'm not just talking about FOSS projects; we see this in the enterprise and in bootcamps as well.

Docker is a powerful thing, so powerful it opens the door for poorly put together software to exist and succeed, as it abstracts people from having to understand architectures and from manually installing and configuring dependencies, things anyone sane would be able to do. This is why we sometimes see "solutions" that run 10 instances of some database or some other abnormality.

Besides all that, it adds the half-open repository situation on top. While we can host our own repositories and use open ones, the most common thing is to see everything on Docker Hub, and that might turn into a CentOS-style situation at any time.

I don't agree with the premise of your comment about containers. I think most of the downsides you listed are misplaced.

First of all, they make the user dumber: instead of learning something new, you blindly "compose pull & up" your way through. Easy, but it's a dumbifier, and that's not a good thing.

I'd argue that actually using containers properly requires very solid Linux skills. If someone indeed blindly "compose pull & up"s their stuff, it's no different from the blind curl | sudo bash that is still very common. People are going to muddle through the installation, copy-pasting stuff, no matter what. I don't see why containers and compose files would be any different from piping to bash or a random Reddit comment with "step by step instructions". Look at any forum where end users aren't technically strong and you'll see the same (emulation forums, Raspberry Pi based stuff, home automation, ...): random shell scripts, rm -rf this; chmod 777 that.

Containers are just another piece of software that someone can and will run blindly. But I don't see why you'd single them out here.

Second, there is a dangerous trend where projects only release containers, and that's bad for freedom of choice

As a developer I can't agree here. The Docker images (not "containers", to be precise) are not there to replace deb packages. They are there because it's easy to provide an image. It's much harder to release a set of debs, rpms and whatnot for distributions the developer isn't even using. The other options wouldn't even be there in the first place, because there are only so many hours in a day and my open source work is not paying my bills most of the time (patches and continued maintenance are of course welcome). So the alternative would be just the source code, which you still get. No one is limiting your options there. If anything, the Dockerfile at least shows exactly how you can build the software yourself even without using Docker. It's just a bash script with extra isolation.
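A hypothetical example of that last point: read top to bottom, a Dockerfile doubles as build instructions you could follow by hand (the project and commands below are made up for illustration):

# Hypothetical Dockerfile for a Node.js service
FROM node:20-alpine              # = "you need Node 20"
WORKDIR /app
COPY package*.json ./
RUN npm ci                       # = "install the dependencies"
COPY . .
RUN npm run build                # = "build it"
CMD ["node", "dist/server.js"]   # = "start it like this"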

I am aware that you can download an image and extract the files inside, that's more of a hack than a solution.

Yeah, please don't do that. It's probably not a good idea. Just build the binary or whatever you're trying to use yourself. The binaries in an image often depend on libraries inside said image, which can be different from your system's.

Third, with containers you are forced to use whatever deployment the devs have chosen for you. Maybe I don't want 10 Postgres instances, one for each service, or maybe I already have my nginx reverse proxy, and so on.

It might be easier (effort-wise), but you're certainly not forced. At the very least you can clone the repo and just edit the Dockerfile to your liking. With the compose file it's the same story: just edit the thing, or don't use it at all. I frequently use the compose file just for reference/documentation and run software as a set of systemd units on Nix. You do you. You don't have to follow a path that someone paved if you don't like the destination. Remember that it's often someone's free time that paid for this path; they are not obliged to provide the perfect solution for you. They are not taking anything away from you by providing a solution that someone else can use.

I fully agree with you that devs should not release debs & rpms & etc.; it's the distro's responsibility to create and manage those from the binaries that the devs should release. No dev should have to create those distro-based formats, it's evil and useless.

Let me be more clear: devs are not required to release binaries at all. But they should, if they want their work to be widely used. And in this case, providing a binary release alongside images solves all freedom-of-choice issues in my opinion. Here you show me my lack of preparedness, as I hadn't considered Dockerfiles as actual build instructions; I will in the future.

I also fully agree with you that the curl+pipe+bash random stuff should be banned as an awful practice, and that it is much worse than containers in general. But posting instructions on forums and websites is not per se dangerous or bad practice. Following them blindly is, but there are still people not wearing seatbelts in cars or helmets on bikes, so...

I was not singling containers out; I was replying to a post about containers. If you read my wiki, every time a curl/pipe/bash approach is proposed, I decompose it and suggest against doing that.

chmod 777 should be banned in any case, but that stems from container usage (due to wrongly built images) more than anything else, so I guess you are biting your own cookie here.

Having Dockerfiles and compose files is perfectly acceptable. What is not acceptable is having only those and no binary releases. Usually sources are available (in FOSS apps at least), but that can be useless if there are no build instructions provided or the app uses a less common build stack.

On Immich, which is a perfect example of an amazing piece of software, fast-growing and very polished, I did try to build from sources but I couldn't manage the ML part properly. This is indeed due to my lack of experience with the peculiar stack they are using, but some build instructions would have been greatly appreciated (now I realize I should have started from the Dockerfiles). I gave up and pulled the images. No harm done, but little extra fun for me, and while I do understand the devs' position, they also keep talking about making a living out of it, and that's a totally different point to discuss in a different thread. I would suggest to them that public relations and user support matter more than releasing an amazing product if the goal is making a living out of it. But that's just my real-world experience as a product manager.

In a world where containers are the only proposed solution, I believe something will be taken from us all. Somebody else explained that concept better than me in this thread. That's all.

Let me be more clear: devs are not required to release binaries at all. But they should, if they want their work to be widely used.

Yeah, but that's not the reality of the situation. Docker images are what drives wide adoption. Docker is also a great development tool if one needs to test stuff quickly, so the Dockerfile is there from the very beginning, and thus providing an image is almost free.

Binaries are more involved, because suddenly you have multiple OSes, glibc, musl, ... It's not always easy to build a statically linked binary (and it's also often a bad idea), so it's much less likely to happen. If you've tried just running a statically linked binary on NixOS, you probably know it's not as simple as chmod a+x.

I also fully agree with you that the curl+pipe+bash random stuff should be banned as an awful practice, and that it is much worse than containers in general. But posting instructions on forums and websites is not per se dangerous or bad practice. Following them blindly is, but there are still people not wearing seatbelts in cars or helmets on bikes, so...

Exactly what I'm saying. People will do stupid stuff and containers have nothing to do with it.

chmod 777 should be banned in any case, but that stems from container usage (due to wrongly built images) more than anything else, so I guess you are biting your own cookie here.

Most of the time it's not necessary at all. People just go "allow everything, because I have no idea where the problem could be". Containers frequently run as root, so I'd say the chmod isn't necessary.

In a world where containers are the only proposed solution, I believe something will be taken from us all.

I think you mean images, not containers? I don't think anything will be taken; an image is just easy to provide. If there is no binary provided, there would likely be no binary even without Docker.

In fact, IIRC, this practice of providing binaries is a relatively new trend (popularized by Go, I think). Back in the day you got source code and perhaps a Makefile. If you were lucky, a debian/src directory with code to build your package. And there was no lack of freedom.

On one hand you complain about Docker images making people dumb; on the other you complain about the absence of a pre-compiled binary instead of learning how to build the stuff you run. A bit of a double standard.

This is a bit of a Pokemon starter question. Just pick one and see where it takes you! They do roughly the same job, especially now that docker has a rootless mode. At the end of the day you're learning a new technology and that's a positive thing.

not having to change habits later.

If everybody thought like this, we would still be banging rocks together.
I am not sure about your use case, but IMO learning Docker first would be a good default. It is more widespread than Podman. If you want (or need) to, moving on to Podman afterwards would probably not be too big a step.

What I meant by that was that it might be easier to start with Podman, when my goal is to end up with Podman anyway.

No flame intended. Quick question though, out of curiosity: what specifically is your use case for Podman?

I want to use it for self-hosting stuff on my Raspberry Pi. And the reason I want to use Podman over Docker is that Podman is more secure and more FOSS (I know the engine is FOSS, but Docker Desktop isn't, and in the past they attempted to do a few bullshit things like this).

Learn Docker first, it will be faster and easier. It will both give you an intro to containers and you'll get some practical use for your self hosting needs.

If you're still curious later you can go deeper into Podman and other container technology.

I just downloaded Podman Desktop and am playing with it. Almost all videos and tutorials out there are for Docker, so I'm going to watch those but actually use Podman instead while learning.

If you haven't started: neither.

Use NixOS.

I'm a huge fan of Nix, but for someone wondering if they should "learn docker", Nix is absolutely brutal.

Also IMO while there's some overlap, one is not a complete replacement for the other. I use both in combination frequently.

I had an interview the other day and was surprised to hear that the University of Miami is actually using Nix on about 16 of their machines. I haven't used Nix yet, but thanks to everyone talking about it, I could tell them the benefits of using it, haha.

No

Do you selfhost stuff on bare metal? I feel like most projects provide containers as their officially supported packages.

They're being useless, but what I do is use Proxmox and just install my stuff each in their own LXC

You're using LXC, so you may want to learn about Incus/LXD, which was made by the same people who made LXC and can work as a full replacement for Proxmox in most scenarios. Here are a few reasons:

  • It sits under the Linux Containers project and is open source;
  • It is available in Debian 12's repositories;
  • Unlike Proxmox, it won't withhold important fixes behind subscription (paid) repositories;
  • It is way, way lighter;
  • LXC was hacked into Proxmox (they simply removed OpenVZ from their product and added LXC), and it will never be as compatible and smooth as Incus;
  • It also has a WebUI;

Why not try it? :)
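If you want to kick the tires, a rough sketch of an Incus quick start (commands as I recall them; on Debian 12 the package may sit in backports, so check the Incus docs):

apt install incus
incus admin init                     # interactive storage/network setup
incus launch images:debian/12 web    # create and start a container
incus list
incus exec web -- bash               # get a shell inside it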

I use distro packages. In the rare case something isn’t packaged yet, I package it myself. And for the isolation, systemd services can do most of the things docker can if you need (check systemd-analyze security).

For just hosting services that could instead run as normal system services, Docker makes your setup a lot more complex (especially on the networking side), for little if any gain. Unless I need to spin something up multiple times temporarily on demand, or something has a hard dependency on it, I'm not going to bother with it anymore.

No, I use an operating system.

Not sure why all the down votes without any explanation.

I too don't use docker for my services. I run Plex on my Arch install via the provided AUR package. 🤷‍♂️ Nobody told me I needed to do otherwise, with docker or anything else. Not sure why that would be better in any way. It could hardly be more performant? And it's as simple as enabling the service and forgetting about it.

Maybe they're having issues with his answer of "using an OS" which implies other people are not? IDK.

But as for you: if you're running just one or two services on a machine you also use for other stuff, using packages and system services is perfectly fine. If you have dedicated hardware for it (or plan on having it), it starts to make sense to look at ways of making things easier for yourself in the long run. Docker solves lots of issues no one's talking about (because no one is facing them anymore), e.g.:

  • Different services requiring different versions of the same library/database/etc
  • Moving your service from one computer to another
  • Services requiring specific steps for updates (this is not entirely gone, but it's much better, and it prevents you from breaking your services by doing a random operation like updating your system)
  • Pinning versions of services until you decide to update, without needing to sacrifice system updates for it (I know you can pin a version of a package, but if you don't upgrade it, it will break when you upgrade its dependencies)
  • Easily mapping ports or blocking access in a generic way, with no need to discover how each service's config file allows that; you can just do it at the container level, e.g. databases that can't be accessed from the network or even from within the host machine (I mean, they can obviously be accessed from the host system, just not in the traditional way, so a user who gains access to your machine under an account that's not allowed to use Docker can't reach them); see the sketch after this list
  • Isolation between services
  • Isolation from host machine
  • Reproducibility of services (i.e. one small docker compose file guarantees a reproducible host of services)
  • Ensuring that no service is running as root (even ones that only work as root)
  • Spin services in minutes to test stuff up and clean them out thoroughly in seconds.
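As a sketch of a few of those points (pinned versions, an internal-only database, an explicitly mapped port); the service and image names are placeholders:

services:
  app:
    image: ghcr.io/example/app:1.4.2      # pinned; updated when *you* decide
    ports:
      - "127.0.0.1:8080:8080"             # only reachable from the host itself
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - dbdata:/var/lib/postgresql/data
    # no "ports:" entry: only containers on this compose network can reach it
volumes:
  dbdata: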

There are probably many more reasons to use Docker. Plus, once you've learned it, it's very easy for small self-hosted stuff, so there's really no reason not to use it. Every time I see someone saying they don't use Docker and don't understand why people use it, I'm a bit baffled; it's like someone claiming they don't understand why people use knives to cut bread when the two-handed axe they use for chopping wood works (like, yes, it does work, but it's obviously not the best tool for the job).

Are you aware that all those isolation, networking, firewall, etc. issues can be solved by simply learning how to write proper systemd units for your services? Start by reading this: https://www.redhat.com/sysadmin/mastering-systemd

Yes, I'm aware of that, having written several systemd units for my own services in the past. But you're not likely to get any of that by default when you just install from the package manager, which is what's being discussed here, and most people will just use the default systemd unit provided, which in the vast majority of cases doesn't provide the same level of isolation the default docker compose file does.

We're talking about ease of setting things up; anything you can do in Docker you can do without it, it's just a matter of how easy it is to get good standards. A similar argument to what you made would be that you can also install multiple versions of databases directly on your OS.

For example, I'm 99% sure the person I replied to has this file for the service:

[Unit]
Description=Plex Media Server
After=network.target network-online.target

[Service]
# In this file, set LANG and LC_ALL to en_US.UTF-8 on non-English systems to avoid mystery crashes.
EnvironmentFile=/etc/conf.d/plexmediaserver
ExecStart=/usr/lib/plexmediaserver/Plex\x20Media\x20Server
SyslogIdentifier=plexmediaserver
Type=simple
User=plex
Group=plex
Restart=on-failure
RestartSec=5
StartLimitInterval=60s
StartLimitBurst=3

[Install]
WantedBy=multi-user.target

It has some good user isolation, but almost nothing else, and I doubt that someone who argued that installing from the package manager is easier will run systemctl edit on what they just installed to add extra security features.
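For the record, the sort of thing such a hardening drop-in might add (these are standard systemd directives, but which ones a given service tolerates has to be tested, e.g. with systemd-analyze security; the ReadWritePaths value is a guess at where Plex keeps its data):

# systemctl edit plexmediaserver
[Service]
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/var/lib/plexmediaserver
ProtectHome=true
ProtectKernelTunables=true
ProtectControlGroups=true
RestrictSUIDSGID=true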

Can confirm, I have this file. Can also confirm I will not learn unit files, because I don't know enough to know the provided one is not sufficient, because the wiki makes no mention of that. You are spot on.

Btw I don't mean any of that as an insult or anything of the sort, I do the same with the services I install from the package manager even though I'm aware of those security flags, what they do and how to add them.

I don't mean any of that as an insult or anything of the sort

None taken.

But you're not likely to get any of that by default when you just install from the package manager, which is what's being discussed here,

This is changing... Fedora is planning to enable the various systemd services hardening flags by default and so is Debian.

We're talking about ease of setting things up; anything you can do in Docker you can do without it

Yes, but at what cost? At the cost of being overly dependent on some cloud service / proprietary solution like DockerHub / Kubernetes? Remember that the alternative is packages from your Linux distro's repository, which can be easily mirrored, archived offline and whatnot.

You're not forced to use Docker Hub or Kubernetes; in fact I use neither. Also, if a team chooses to host their images on Docker Hub, that's their choice; it's like saying Git is bad because Microsoft owns GitHub, or that installing software X from the repos is better than compiling because you'd need to use GitHub to get the code.

Also, Docker images can be easily mirrored, archived offline, etc., and they will keep working after the packages you archived stop working because the base version of some library got updated.

Yet people choose to use those proprietary solutions and platforms because it's easier. This is just like Chrome: there are other browsers, yet people go for Chrome.

It's significantly harder to archive and have functional offline setups with Docker than it is with an APT repository. It's like a hack, not something it was designed for.

It's definitely much easier to do that with Docker than with apt packages, and Docker was designed for that. Just do a save/load: https://docs.docker.com/reference/cli/docker/image/save/ and, like I mentioned before, this is much more stable than saving some .deb files, which will break the moment one of the dependencies gets updated.
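For example (image name and tag are placeholders):

# archive an image to a tarball...
docker image save -o app-1.4.2.tar ghcr.io/example/app:1.4.2
# ...and restore it on an offline / mirrored host
docker image load -i app-1.4.2.tar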

Most people will use whatever docker compose file a project shows as the default; if the project hosts the images on Docker Hub, that's their choice. Plus, I don't understand what the problem is: GitHub is also proprietary and no one cares that a project is hosted there.

It's definitely much easier to do that with Docker than with apt packages,

What a joke.

Most people will use whatever docker compose file a project shows as default, if the project hosts the images on dockerhub that’s their choice

Yes and they point the market in a direction that affects everyone.

GitHub is also proprietary and no one cares that a project is hosted there.

People do care, and that's why there are public alternatives such as Codeberg and its base project, Gitea.


Pretty good points. I especially like the no-root and isolation aspects, as well as the reproducibility aspect.

But I don't have enough services to warrant learning docker at a deeper level yet, and they aren't exposed on the internet yet either. Just local services so far. But all of those points are good to consider. Thanks for replying, friend! 🤝


People love to hate on people who don't care for containers.

Also, I'm guessing that nobody here actually knows what it means to run code on bare metal.

What you're doing is fine. No need to make life harder for yourself.

People love to hate on people who don't care for containers.

Maybe so. 😕

what it means to run code on bare metal

I'm guessing it means something slightly different than what most people think, namely to just run it in the OS. Would you explain to me what it really means?

Bare metal would mean without an OS to manage peripherals, resources, even other tasks - like you might find on a resource-constrained embedded system.

The OS is in between the service and the bare metal. Something like OPNsense can be said to be running on bare metal because the OS and the firewall service are so intertwined. However, something like firewalld isn't running on the bare metal because it's just a service of the operating system.

That's how I understand it anyway, I'm not a pro
