Orchestrate Containers for Development with Docker Compose

Written by: Lukasz Guminski


The powerful concept of microservices is gradually changing the industry. Large monolithic services are slowly giving way to swarms of small and autonomous microservices that work together. The process is accompanied by another market trend: containerization. Together, they help us build systems of unprecedented resilience.

Containerization changes not only the architecture of services, but also the structure of environments used to create them. Now, when software is distributed in containers, developers have full freedom to decide what applications they need. As a result, even complex environments like CI servers with database backends and analytical infrastructure can be instantiated within seconds. Software development becomes easier and more effective.

The changes naturally cause new problems. For instance, as a developer, how can I easily recreate a microservice architecture on my development machine? And how can I be sure that it remains unchanged as it propagates through a Continuous Delivery process? And finally, how can I be sure that a complex build & test environment can be reproduced easily?

Introducing Docker Compose

The answer to these questions is Docker Compose, “a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.” (https://docs.docker.com/compose/).

All of that is done by Docker Compose within the scope of a single host. In that sense, its concept is very similar to a Kubernetes pod. For multi-host deployments, you need more advanced solutions, such as Apache Mesos or a complete Google Kubernetes setup.

In this blog, I’ll look at how we can use Docker Compose in detail. Specifically, I’ll focus on how to orchestrate containers in development.

Functionality

The main function of Docker Compose is to create a microservice architecture, meaning the containers and the links between them. But the tool is capable of much more (see the combined example after this list):

  • Building images (if an appropriate Dockerfile is provided)

docker-compose build
  • Scaling containers running a given service

docker-compose scale SERVICE=3
  • Healing, i.e., re-running containers that have stopped

docker-compose up --no-recreate
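
For instance, assuming a docker-compose.yml that defines a service named worker (a hypothetical name, used here only for illustration), a typical sequence could look like this:

docker-compose build                  # build images for every service that has a build directory
docker-compose up -d                  # create and start all containers in the background
docker-compose scale worker=3         # run three containers for the worker service
docker-compose up -d --no-recreate    # bring back stopped containers without recreating running ones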

All this functionality is available through the docker-compose utility, which has a very similar set of commands to what is offered by docker:

build    Build or rebuild services
help     Get help on a command
kill     Kill containers
logs     View output from containers
port     Print the public port for a port binding
ps       List containers
pull     Pulls service images
rm       Remove stopped containers
run      Run a one-off command
scale    Set number of containers for a service
start    Start services
stop     Stop services
restart  Restart services
up       Create and start containers

They are not only similar, they also behave like their docker counterparts. The only difference is that they affect the entire multi-container architecture defined in the docker-compose.yml configuration file, not just a single container.

You'll notice that some docker commands are not present in docker-compose. Those are the ones that don't make sense in the context of a multi-container setup. For instance:

  • Commands for image manipulation (e.g., save, search, images, import, export, tag, history)

  • User-interactive commands (e.g., attach, exec, run -i, login, wait)

One command worth your attention is docker-compose up. It is a shorthand form of docker-compose build && docker-compose run.


Docker Compose Workflow

There are three steps to using Docker Compose:

  1. Define each service in a Dockerfile.

  2. Define the services and their relation to each other in the docker-compose.yml file.

  3. Use docker-compose up to start the system.

I’ll show you the workflow in action using two real-life examples. First, I’ll demonstrate basic syntax of docker-compose.yml and how to link containers. The second example will show you how to manage an application’s configuration data across development and testing environments.

Example 1: Basic Structure

The syntax of the docker-compose.yml file closely reflects the underlying Docker operations. To demonstrate this, I’ll build a container from Redis Commander sources and connect it to the Redis database.

Implementation

Let’s create a project with the following structure:

example1
├── commander
│   └── Dockerfile
└── docker-compose.yml

Now let’s follow the workflow. Define the Dockerfile that builds Redis Commander (see source), and then create docker-compose.yml.
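
The article links to the original Dockerfile rather than reproducing it; as a rough sketch, commander/Dockerfile could look something like this (the Node base image and the npm-based install are assumptions, not necessarily what the original source uses):

# commander/Dockerfile -- a sketch only; see the original source for the real file
FROM node:0.12
# install Redis Commander (the original builds it from its sources instead)
RUN npm install -g redis-commander
EXPOSE 8081
# the link alias "redis", defined in docker-compose.yml below, resolves to the backend container
CMD ["redis-commander", "--redis-host", "redis"]

The docker-compose.yml then ties the two services together: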

backend:
  image: redis:3
  restart: always
frontend:
  build: commander
  links:
    - backend:redis
  ports:
    - 8081:8081
  environment:
    - VAR1=value
  restart: always

Now execute:

docker-compose up -d

After that, point your browser to http://localhost:8081/. You should see the user interface of Redis Commander, connected to the database.
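
If you prefer to check from the command line first, a quick sanity test might look like this (assuming curl is installed locally):

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8081/    # expect 200 from Redis Commander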

Note:

  • The command docker-compose up -d has roughly the same effect as the following sequence of commands:

docker run -d --name backend --restart=always redis:3
docker build -t commander commander
docker run -d --name frontend -e VAR1=value -p 8081:8081 \
   --link backend:redis --restart=always commander
  • Each service needs to point to an image or build directory; all other keywords (links, ports, environment, restart) correspond to docker options.

  • docker-compose up -d builds images if needed.

  • docker-compose ps shows running containers.

$ docker-compose ps
        Name           State            Ports
------------------------------------------------------
example1_backend_1      Up      6379/tcp
example1_frontend_1     Up      0.0.0.0:8081->8081/tcp
  • docker-compose stop && docker-compose rm -v stops and removes all containers.

Example 2: Configuration Pack

Now let's deploy an application to two different environments, development and testing, in such a way that it uses a different configuration depending on the target environment.

Implementation

One possible approach is to wrap all environment-specific files into separate containers and inject them into the environment as needed.

Our project will have the following structure:

example2
├── common
│   └── docker-compose.yml
├── development
│   ├── content
│   │   ├── Dockerfile
│   │   └── index.html
│   └── docker-compose.yml
└── testing
    ├── content
    │   ├── Dockerfile
    │   └── index.html
    └── docker-compose.yml

And again let’s follow the workflow.

First let's create Dockerfiles ({development,testing}/content/Dockerfile) that wrap environment-specific content in containers. In this case, just to illustrate the mechanism, the containers will contain only an index.html file with an environment-specific message (e.g., "You are in development!").
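
The index.html files themselves can be a single line each; for example, they could be created like this (the markup is arbitrary, only the messages come from the example):

echo '<h1>You are in development!</h1>' > development/content/index.html    # run from the example2 directory
echo '<h1>You are in testing!</h1>' > testing/content/index.html

The Dockerfile that wraps such a file is equally small: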

FROM tianon/true
VOLUME ["/usr/share/nginx/html/"]
ADD index.html /usr/share/nginx/html/

Note that because we don't need an operating system, the images are built on top of the smallest fully functional image: tianon/true. That image is built FROM scratch and contains only the /true binary.
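
For reference, tianon/true is roughly equivalent to a Dockerfile like this (an approximation, not the exact upstream source):

FROM scratch
# "true" is a tiny statically linked binary that does nothing and exits with status 0
ADD true /true
CMD ["/true"]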

Now let's create the docker-compose.yml files. common/docker-compose.yml contains the application's shared services. In this case, it's just one service: an nginx server with its port mappings:

web:
  image: nginx
  ports:
    - 8082:80

This definition will be reused through inheritance.

Each environment needs its own docker-compose.yml file (development/docker-compose.yml and testing/docker-compose.yml) that injects the correct configuration pack.

web:
  extends:
    file: ../common/docker-compose.yml
    service: web
  volumes_from:
    - content
content:
  build: content

Now execute the following commands:

cd development
docker-compose up -d

Then point a browser to http://localhost:8082/. You should see the "You are in development!" message. You can switch to the testing environment by executing:

docker-compose stop
cd ../testing
docker-compose up -d

When you reload the browser, you should see the “You are in testing!” message.

Note:

  • The web service inherits its definition from common/docker-compose.yml. That definition is extended by the volumes_from directive, which mounts the volumes exposed by the content container. This is how the application gets access to its environment-specific configuration.

  • After startup, the content container executes its true command and immediately exits, but its data remains exposed to the web container, which stays active.
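
One way to observe this pattern is to list the containers after startup (the exact output layout varies between Compose versions):

cd testing
docker-compose ps    # web should be Up; content should show an exit state such as "Exit 0"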

Summary

Docker Compose is not yet (as of May 2015) recommended by Docker for production use. But its simplicity, and the fact that the entire configuration can be stored and versioned along with your project, make it a very helpful utility for development. For more information, refer to the official documentation at https://docs.docker.com/compose/.

PS: If you liked this article, you can also download it as an eBook PDF here: Orchestrate Containers for Development with Docker Compose.
