For the vast majority of docker images, the documentation only mentions a super long and hard to understand “docker run” one-liner.
Why does nobody put an example docker-compose.yml in their documentation? It’s so tidy and easy to understand, and much easier to run in the future: just set and forget.
If every image had a yml to just copy, I could get it running in a few seconds; instead I have to decode the one-liner to turn it into a yml.
I want to know whether it’s just me that’s out of touch and I should use “docker run”, or whether a “one liner” just looks much tidier in the docs. Like saying “hey, just copy and paste this line to run the container. You don’t understand what it does? Who cares”.
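For example, docs love handing you something like this (image and flags made up for illustration):

```sh
docker run -d --name myapp -p 8080:80 \
  -v /srv/myapp/config:/config -e TZ=Europe/Berlin \
  --restart unless-stopped example/myapp:latest
```

The exact same thing as a docker-compose.yml is so much easier to read and to bring back up later:

```yaml
services:
  myapp:
    image: example/myapp:latest
    container_name: myapp
    ports:
      - "8080:80"
    volumes:
      - /srv/myapp/config:/config
    environment:
      - TZ=Europe/Berlin
    restart: unless-stopped
```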
The worst are the ones that pipe directly from curl into “sudo bash”…
Because it’s “quick start”. Least effort to get a taste of it. For actual deployment I would use compose as well.
Many projects also have an example docker-compose.yml in the repository if you dig just a little into it.
There is https://www.composerize.com to convert a run command to compose. Works ~80% of the time.
I honestly don’t understand why anyone would make “curl and bash” the official installation method these days, with docker around. Unless this is the ONLY thing you install on the system, so many things can go wrong.
I used to host composerize. Now I host it-tools, which has its own version and many other super helpful tools!
I was going to mention it-tools. It’s great!
And if you need more stuff in a similar vein, cyberchef is also pretty neat.
Nice! I wonder if there’s anything one has that the other doesn’t.
You have changed my life today.
No, the creator of it-tools did. I just told you about it. Give them a star on GitHub and maybe donate if you can ❤️
Omg I never knew about composerize or it-tools. This would save me a ton of headaches. Absolutely using this in the future.
Out of curiosity, is there much more overhead to using docker than installing via curl and bash? I’m guessing there are some redundant layers that docker uses?
Of course, but the amount of overhead depends entirely on the container. The reason I’m willing to accept the (in my experience) very small amount of overhead I typically get is that the repeatability is amazing with docker.
My first server was unRAID (Slackware-based, not Debian); I set up Proxmox (Debian with a web UI) later. I took my unRAID server down for maintenance but wanted a certain service to stay up. So I copied a backup from unRAID to another server and had the service running in minutes. If it were a package, there would be no guarantee that it had been built for both OSes, that both builds were the same version, or that they used the same libraries.
My favorite way to extend the above is Docker Compose. I create a folder with a docker-compose.yml file and I can keep EVERYTHING for that service in a single folder. unRAID doesn’t use Docker Compose in its webui, so I try to stick to keeping things in Proxmox for ease of transfer and stuff.
Makes sense! I have a bunch of services (plex, radarr, sonarr, gluetun, etc) on my media server on Armbian running as docker containers. The ease of management is just something else! My HC2 doesn’t seem to break a sweat running about a dozen containers, so the overhead can’t be too bad.
Yeah, that’s going to come down entirely to the containers you’re running and the people who designed them. If the container is built on Alpine Linux, you can pretty much trust that it’s going to have barely any overhead. But if a container is built on an Ubuntu base image, it will have a bunch of services that probably aren’t needed in a typical docker container.
Good point. Most containers I’ve used do seem to use Alpine as a base. Found this StackOverflow post that compared native vs container performance, and containers fare really well!
It seems like that data is from 2014 as well. I’m sure the numbers would have improved in almost ten years too!
you don’t have to decode anything… just throw it in here:
I was just going to say this. Amazing resource!
Just what I was hoping to find here!
I don’t think you’re out of touch, just use docker compose. It’s not that hard to convert the “docker run” example command line into a neat docker-compose.yml if they don’t already provide one for you. So much better than just running containers manually.
Also, you should always understand what any command or docker compose file does before you run it! And don’t blindly “curl | bash” either; download the bash script and look at it first.
Nah, I’ll just copy paste half the tutorial in one go and then blame others when things break
Average linux user /s
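“Look at it first” really is just a couple of commands (URL is a placeholder):

```sh
# fetch the installer without executing anything
curl -fsSL https://example.com/install.sh -o install.sh
# actually read what it is about to do
less install.sh
# only then run it, with sudo only if it genuinely needs root
bash install.sh
```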
Plain docker is useful when running some simple containers, or even one-off things. A lot of people think about containers as long-running services, but there are also many containers that essentially run a single command to completion and then shut down.
There are also alternate ways to handle containers. Podman, for example, is typically used with systemd services: unlike Docker it doesn’t work through a persistent daemon, so the configuration goes into a service unit.
I typically skip docker-compose for simple containers and turn to compose for containers with loads of arguments or multi-container things.
Also switching between Docker and Podman depending on the machine and needs.
I used docker run when I first started; I think it’s a fairly easy entry point that “just works”.
However I would never really go back to it, since compose is a lot tighter and offers a better sense of overview and control
I too am endlessly frustrated by documentation that lacks compose file examples.
Fortunately, this exists: Docker Compose Generator
I’ve started replacing my docker compose files with pure ansible that is the equivalent of doing docker run. My ansible playbooks look almost exactly like my compose files, but they can also create folders, set up config files, or cycle services when configs are updated.
It’s been a bit of a learning process, but it’s replaced a lot of what was previously documentation with code instead.
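Roughly what one of those tasks looks like, as a sketch using the community.docker collection (names and paths made up):

```yaml
# create the folder the container will bind-mount
- name: Create the config folder
  ansible.builtin.file:
    path: /srv/myapp/config
    state: directory
    mode: "0755"

# the docker run equivalent, declared as a task
- name: Run the myapp container
  community.docker.docker_container:
    name: myapp
    image: example/myapp:latest
    restart_policy: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - /srv/myapp/config:/config
```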
Check out the GitHub project ansible-nas
Wow, yeah this is exactly the sort of roles/playbooks that I’ve been building. I’m definitely using this as a source before starting my own from scratch. Thanks for sharing.
I’ve done something similar, but I’m using compose files orchestrated by Ansible instead.
I’m actually doing both right now since I had quite a huge compose file that I haven’t converted to ansible yet. The biggest frustration I have is that there doesn’t seem to be an ansible module that works with compose v2 (the official plugin), which means I’m either stuck on the old version of compose or I have to use shell commands to run stuff like ‘docker compose up -d’.
One nice thing I’ve gained, though, is for services like Plex. I have an ‘update’ playbook that checks whether Plex is actively streaming before updating the container, which isn’t something I could do easily with compose.
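The gist of that check, as a rough sketch (the token variable is a placeholder; Plex’s session list reports size="0" when nothing is streaming):

```yaml
# ask Plex who is currently streaming
- name: Check for active Plex streams
  ansible.builtin.uri:
    url: "http://localhost:32400/status/sessions?X-Plex-Token={{ plex_token }}"
    return_content: true
  register: plex_sessions

# pull the latest image and recreate the container, but only when idle
- name: Update the Plex container only when idle
  community.docker.docker_container:
    name: plex
    image: lscr.io/linuxserver/plex:latest
    pull: true
    state: started
  when: plex_sessions.content is search('size="0"')
```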
Well, the v2 plugin is basically a binary, while v1 was written in Python, which made it super easy to write an Ansible module.
I’m still using the old docker-compose executable - my Docker role is still installing it until the Ansible module catches up.
I did the same, but I started from my list of run scripts… I used ChatGPT to create them, took 2 minutes…
Hahaha, I’ve been using ChatGPT in the exact same way. It requires a bit of double-checking but it really speeds things up a lot.
I’ve almost completely moved to podman managed by systemd and I highly recommend it.
Do you use podman run followed by podman generate or are you using quadlet?
Quadlet is integrated in podman 4.4 and up and makes it possible to declare your containers in .container files that look like systemd unit files and still get the full systemd integration: https://www.redhat.com/sysadmin/multi-container-application-podman-quadlet
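A minimal .container file is something like this (name and mounts made up); it goes in ~/.config/containers/systemd/ for rootless or /etc/containers/systemd/ for root, and after a daemon-reload you get a plain myapp.service:

```ini
# myapp.container
[Container]
Image=docker.io/example/myapp:latest
PublishPort=8080:80
Volume=/srv/myapp/config:/config

[Service]
Restart=always

[Install]
WantedBy=default.target
```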
I just generate them. Never heard of quadlet, I’m gonna check it out, thanks.
I do this out of habit because this is how my work does it, but I honestly don’t know the benefits of doing it this way. Can you explain (or provide a link?)
I’ve found it to just be easier to manage rootless containers this way.
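The “generate” route is roughly this (container name made up):

```sh
# run the container once, then let podman write the unit file
podman run -d --name myapp -p 8080:80 docker.io/example/myapp:latest
podman generate systemd --new --files --name myapp
# install it as a rootless user service
mv container-myapp.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-myapp.service
```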
I’ve never tried Podman myself, but managing the containers using systemd would mean that you use exactly the same commands to start a Docker container as you would use to start a regular service. The fact that it’s running in a container essentially just becomes an implementation detail, and you don’t have to remember what’s running in containers vs what’s not running in containers.
Personally, I do usually want the “docker run” command. Much easier to use when orchestrating the deployment with other tools.
For readability, I just line-break the command after each argument…
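Something like this (flags illustrative):

```sh
docker run -d \
  --name myapp \
  -p 8080:80 \
  -v /srv/myapp/config:/config \
  --restart unless-stopped \
  example/myapp:latest
```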
Honestly I never really saw the point of it; it just seems like another dependency. The compose file and the docker run commands have almost the same info. I’d rather jump to kubectl and skip compose entirely. I’d like to see a tool that can convert between these 3 formats for you. As for piping into bash, no - I’d only do it for a very trusted package.
Docker-compose is an orchestration tool that wraps around the built-in docker functions that are exposed, like “docker run”. When teaching people a tool, you generally explain the base functions of the tool first and then explain wrappers around it in terms of the functions you’ve already learned.
Similarly, when you have a standalone container you generally provide the information to get the container running in terms of base docker, not an orchestration tool… unless the container must be used alongside other containers, in which case orchestration config is often provided.
I prefer to use ansible to define and provision my containers (docker/podman over containerd). For work, of course, k8s and helm take the cake. No reason to run k8s for personal self hosting, though.
No reason aside from building endless unnecessary complexity, which (let’s be honest) is 90% of the point of running a home lab.
Shit’s broken at work: hate it. Shit’s broken at home: ooh a project!
I always use docker-compose. It is very handy if you ever want a good backup or to move the whole server to another one. Copy over the files -> docker compose up -d and you are done.
Beginners should use docker compose from the start. Easier than docker run.
If you ever want to convert those one-liners to a proper .yml then use this converter
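The move really is just that (hostname and paths made up):

```sh
# copy the service folder, compose file and data included
rsync -a /docker/immich/ newhost:/docker/immich/
# bring it up on the new machine
ssh newhost 'cd /docker/immich && docker compose up -d'
```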
That is one “docker compose up -d” for each file you copied over, right… Or are you doing something even smarter?
I have one docker-compose.yml for each service. You can use “docker compose -f /path/to/docker-compose.yml up -d” in scripts.
I would never use “one big” file for all. You only get various problems imo.
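For example, bringing everything back up after a copy-over is one small loop (assuming the folder-per-stack layout):

```sh
# start every stack under /docker, one compose file per folder
for f in /docker/*/docker-compose.yml; do
  docker compose -f "$f" up -d
done
```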
You use a separate file for each service? Why? I use one file for each stack, and if anything, breaking them out would give me issues.
I meant stack 😸
My structure is like:
/docker/immich/docker-compose
/docker/synapse/docker…
But I read that some people make one big file for everything
I have all services in one compose file. “docker compose up -d” starts them all; “docker compose up -d servicename” is more selective.
I’m curious to hear from the “docker run” folks. I use compose and I feel the same: it’s more readable and editable, and it allows me to back up the command by backing up the docker-compose.yml.
When orchestration or provisioning tools are used (Ansible, Kubernetes, etc…), creating networks and containers is equally readable in code. The way docker compose is designed makes it hard to integrate with these tools.
This is the response I was hoping to hear. I’m primarily a home-automation/self-hosted enthusiast, not necessarily an infrastructure enthusiast. As of yet, I haven’t felt the need for more involved orchestration tools/infra.
First version of my server, I wrote a bunch of custom shell scripts to execute “docker run” statements to launch all my containers b/c I didn’t know docker at all and didn’t want to learn compose.
Current version of my server, I use docker compose. But all the containers I use come from linuxserver.io, and they always give examples for both. I use ansible to deploy everything.