Introduction
Docker is a great tool for automating the deployment of Linux applications inside software containers, but to take full advantage of its potential, it’s best if each component of your application runs in its own container. For complex applications with many components, orchestrating all the containers to start up and shut down together (not to mention communicate with each other) can quickly become unwieldy.
The Docker community came up with a popular solution called Fig, which allowed users to orchestrate all of their Docker containers and configurations with a single YAML file. This became so popular that the Docker team decided to build Docker Compose on the Fig source; Fig itself is now deprecated. Docker Compose makes it easier to organize Docker container processes, including starting up, shutting down, and setting up links and volumes between containers.
In this tutorial, you’ll install the latest version of Docker Compose to help you manage multi-container applications and explore the software’s basic commands.
Docker and Docker Compose Concepts
Using Docker Compose requires combining a number of different Docker concepts into one, so before we get started, let’s take a minute to review the various concepts involved. If you’re already familiar with Docker concepts like volumes, links, and port forwarding, you may want to go ahead and skip to the next section.
Docker images
Each Docker container
is a local instance of a Docker image. You can think of a Docker image as a complete Linux installation. Typically, these are minimal installs, containing only the packages needed to run the image. These images use the host system’s kernel, but since they run inside a Docker container and only see their own file system, it’s perfectly possible to run a distribution like CentOS on an Ubuntu host (or vice versa).
Most Docker
images are distributed through Docker Hub, which is maintained by the Docker team. The most popular open source projects have a corresponding image uploaded to the Docker Registry, which you can use to deploy the software. When possible, it’s best to use “official” images, since the Docker team ensures they follow Docker best practices.
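As an aside (this isn’t needed anywhere in this tutorial), you can check whether an image on Docker Hub is marked as official from the command line by filtering the output of docker search:
docker search --filter "is-official=true" nginx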
Communication Between Docker Containers
Docker containers are isolated from the host machine, meaning that, by default, the host machine has no access to the file system inside the Docker container, nor any means of communicating with it via the network. This can make configuring and working with the image running inside a Docker container difficult.
Docker has three primary ways to work around this. The first and most common is to have Docker specify environment variables to be set inside the Docker container. The code running inside the Docker container will then check the values of these environment variables on startup and use them to configure itself properly.
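As a quick sketch of what this looks like in a docker-compose.yml file (the mariadb image and its MYSQL_ROOT_PASSWORD variable are used here purely for illustration and are not part of this tutorial’s setup):
db:
  image: mariadb
  environment:
    # the image reads this variable at startup and uses it to set the root password
    MYSQL_ROOT_PASSWORD: example_password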
Another commonly used method is a Docker data volume. Docker volumes come in two flavors: internal and shared.
Specifying an internal volume means that, for a folder you specify within a particular Docker container, the data will persist when the container is removed. For example, if you want to make sure your log files stick around, you could specify an internal /var/log volume.
A shared volume maps a folder inside a Docker container to a folder on the host machine. This allows you to easily share files between the Docker container and the host machine.
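In a docker-compose.yml file, both flavors are declared under a volumes key. The snippet below is only a sketch of the syntax; the nginx image and the ./html host folder are assumptions for illustration:
web:
  image: nginx
  volumes:
    # internal volume: data written to /var/log survives removal of the container
    - /var/log
    # shared volume: maps the ./html folder on the host into the container
    - ./html:/usr/share/nginx/html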
The third way to communicate with a Docker container is over the network. Docker allows communication between different Docker containers over links, as well as port forwarding, allowing you to forward ports from inside the Docker container to the ports on the host server. For example, you can create a link to allow your WordPress and MariaDB Docker containers to communicate with each other and use port forwarding to expose WordPress to the outside world so users can connect to it.
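To make that WordPress and MariaDB example concrete, a docker-compose.yml sketch might declare a link and a port forward roughly like this (the image names, the 8080 host port, and the password value are assumptions for illustration only):
wordpress:
  image: wordpress
  links:
    # lets the wordpress container reach the db container under the hostname "mysql"
    - db:mysql
  ports:
    # forwards port 8080 on the host to port 80 inside the container
    - "8080:80"
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example_password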
Prerequisites
To follow this article, you will need the following:
- A CentOS 7 server, configured with a non-root user with sudo privileges (see Initial Server Setup on CentOS 7 for details)
- Docker, installed with the instructions in Step 1 and Step 2 of How To Install and Use Docker on CentOS 7
Once these are in place, you’ll be ready to follow along.
Step 1 — Installing Docker Compose
To get the latest version, follow the Docker documentation and install Docker Compose from the binary in Docker’s GitHub repository.
Check the current release and, if necessary, update it in the command below:
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Next, set the permissions to make the binary executable:
sudo chmod +x /usr/local/bin/docker-compose
Then, verify that the installation was successful by checking the version:
docker-compose --version
This will print the version you installed:
Output
docker-compose version 1.23.2, build 1110ad01
Now that you have Docker Compose installed, you’re ready to run a “Hello World” example.
Step 2 — Running a Container with Docker Compose
Docker’s public registry, Docker Hub, includes a simple “Hello World” image for demonstration and testing. It illustrates the minimal configuration required to run a container using Docker Compose: a YAML file that calls a single image.
First, create a directory for the YAML file:
mkdir hello-world
Then switch to the directory:
cd hello-world
Now create the YAML file using your favorite text editor. This tutorial will use Vi:
vi docker-compose.yml
Enter insert mode by pressing i, then put the following contents into the file:
my-test:
  image: hello-world
The first line will be part of the container name. The second line specifies which image to use to create the container. When you run the docker-compose up command, it will search for a local image with the specified name, hello-world.
With this in place, press ESC to exit insert mode. Enter :x and then ENTER to save and exit the file.
To manually check the images on your system, use the docker images command:
docker images
When there are no local images, only the column headers display:
Output
REPOSITORY TAG IMAGE ID CREATED SIZE
Now, while you are still in the ~/hello-world directory, run the following command to create the container:
docker-compose up
The first time you run the command, if there’s no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:
Output
Pulling my-test (hello-world:)...
latest: Pulling from library/hello-world
1b930d010525: Pull complete
. . .
After pulling the image, docker-compose creates a container, attaches, and runs the hello program, which in turn confirms that the installation appears to be working:
Output
Creating helloworld_my-test_1...
Attaching to helloworld_my-test_1
my-test_1  |
my-test_1  | Hello from Docker!
my-test_1  | This message shows that your installation appears to be working correctly.
my-test_1  |
. . .
It will then print an explanation of what it did:
Output
my-test_1  | To generate this message, Docker took the following steps:
my-test_1  |  1. The Docker client contacted the Docker daemon.
my-test_1  |  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
my-test_1  |     (amd64)
my-test_1  |  3. The Docker daemon created a new container from that image which runs the
my-test_1  |     executable that produces the output you are currently reading.
my-test_1  |  4. The Docker daemon streamed that output to the Docker client, which sent it
my-test_1  |     to your terminal.
. . .
Docker containers only run as long as the command is active, so once hello finished running, the container stops. Consequently, when you look at active processes, the column headers will appear, but the hello-world container won’t be listed because it’s not running:
docker ps
Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Use the -a flag to show all containers, not just the active ones:
docker ps -a
Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
50a99a0beebd hello-world "/hello" 3 minutes ago Exited (0) 3 minutes ago hello-world_my-test_1
Now that you’ve tested running a container, you can move on to explore some
of the basic Docker Compose commands.
Step 3 — Learning Docker Compose Commands
To get you started with Docker Compose, this section will go over the general commands that the docker-compose tool supports.
The docker-compose command works on a per-directory basis. You can have multiple groups of Docker containers running on one machine; just make one directory for each container group and one docker-compose.yml file for each directory.
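As an illustration, assuming you keep the ~/hello-world directory from Step 2 and the ~/nginx directory created later in this tutorial, each group is managed independently simply by changing directories (docker-compose ps is covered below):
cd ~/hello-world && docker-compose ps   # only shows containers from the hello-world group
cd ~/nginx && docker-compose ps         # only shows containers from the nginx group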
So far, you’ve been running docker-compose in the foreground, where you can use CTRL-C to shut the container down. This allows debugging messages to be displayed in the terminal window. This isn’t ideal though; when running in production it’s more robust to have docker-compose act more like a service. A simple way to do this is to add the -d option when you up your session:
docker-compose up -d
docker-compose will now fork to the background.
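Because a detached group no longer prints to your terminal, you can review its output with the docker-compose logs command when needed (mentioned as an aside; it isn’t used again in this tutorial):
docker-compose logs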
To show your group of Docker containers (both stopped and running), use the following command:
docker-compose ps -a
If a container is stopped, the State will be listed as Exited, as shown in the following example:
Output
        Name            Command   State    Ports
------------------------------------------------
hello-world_my-test_1   /hello    Exit 0
A running container will show Up:
Output
     Name              Command          State        Ports
---------------------------------------------------------------
nginx_nginx_1   nginx -g daemon off;   Up      443/tcp, 80/tcp
To stop all running Docker containers for an application group, run the following command in the same directory as the docker-compose.yml file you used to start the Docker group:
docker-compose stop
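A stopped group can be brought back later with docker-compose start, again from the same directory (an aside; this tutorial doesn’t rely on it):
docker-compose start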
In some cases, Docker containers will store their old information in an internal volume. If you want to start from scratch, you can use the rm command to fully delete all the containers that make up your container group:
docker-compose rm
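By default, docker-compose rm asks for confirmation before deleting anything; if you prefer to skip that prompt, the force flag can be added (an aside, not required here):
docker-compose rm -f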
If you try any of these commands from a directory other than the directory that contains a Docker container and .yml file, it will return an error:
Output
ERROR: Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
This section covers the basics of how to manipulate containers with Docker Compose. If you needed to gain more control over your containers, you could access the file system of the Docker container and work from a command prompt inside your container, a process described in the next section.
Step 4 — Accessing the Docker Container File System
To work at the command prompt inside a container and access its file system, you can use the
docker exec command.
The “Hello World” sample closes after running, so to test docker exec, start a container that keeps running. For the purposes of this tutorial, use the Nginx image from Docker Hub.
Create a new directory named nginx and move into it:
mkdir ~/nginx
cd ~/nginx
Next, create a docker-compose.yml file in your new directory and open it in a text editor:
vi docker-compose.yml
Next, add the following lines to the file:
nginx:
  image: nginx
Save the file and exit. Start the Nginx container as a background process with the following command:
docker-compose up -d
Docker Compose will download the Nginx image and the container will start in the background
.
You will now need the ID of that container. List all of the containers that are running with the following command:
docker ps
You’ll see something similar to the following:
Output of docker ps
CONTAINER ID  IMAGE  COMMAND                 CREATED         STATUS         PORTS   NAMES
b86b6699714c  nginx  "nginx -g 'daemon of…"  20 seconds ago  Up 19 seconds  80/tcp  nginx_nginx_1
If you wanted to make a change to the file system inside this container, you’d take its ID (in this example b86b6699714c) and use docker exec to start a shell inside the container:
docker exec -it b86b6699714c /bin/bash
The -t option opens up a terminal, and the -i option makes it interactive. /bin/bash opens a bash shell to the running container.
You will then see a bash prompt for the container similar to:
root@b86b6699714c:/#
From here, you can work from the command prompt inside your container. Note, however, that unless you are in a directory that is saved as part of a data volume, the changes will disappear as soon as the container is restarted. Also, remember that most Docker images are built with minimal Linux installations, so some of the utilities and command-line tools you’re used to may not be present.
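If you did want changes to outlive a restart, one approach, sketched here with an assumed ./html folder on the host, would be to declare a shared volume for the nginx service in your docker-compose.yml before bringing the container up:
nginx:
  image: nginx
  volumes:
    # maps ./html on the host into the container so edits persist across restarts
    - ./html:/usr/share/nginx/html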
Conclusion
You have now installed Docker Compose, tested your installation by running a “Hello World” example, and explored some basic commands. While the “Hello World” example confirmed your installation, the simple configuration does not show one of the main benefits of Docker Compose: being able to bring a group of Docker containers up and down all at the same time. To see the power of Docker Compose in action, check out How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose and How To Set Up a Continuous Integration Testing Environment with Docker and Docker Compose on Ubuntu 16.04. Although these tutorials are geared toward Ubuntu 16.04 and 18.04, the steps can be adapted for CentOS 7.