cc by-sa flurdy

Docker with Compose, Machine and Swarm, easy steps from localhost to cloud providers

Orchestrate applications, create VMs, cluster and scale on AWS EC2, Google Compute Engine and others

Started: March 2015. Last updated: 14th July 2015.

Why

Create reproducible architecture of application containers using Docker Engine, Docker Compose, Docker Machine and Docker Swarm.

Easy steps to create and deploy your applications locally and push to any cloud provider using the same toolset.

What

Docker Engine

Docker, more specifically Docker Engine, is a popular framework to run applications in containers.

It, however, only gives you the tools to manage one container at a time. There is no way to associate containers with each other, beyond basic linking for IP and port aliasing and sharing folder volumes, nor any ability to act on linked containers in one command.
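For example, with Engine alone you launch and wire every container by hand; a minimal sketch, where the myapp image and its port are hypothetical:

docker run -d --name db postgres:9.4
docker run -d --name web --link db:db -p 8080:9000 myapp
docker stop web db; docker rm web db

Each container must be started, linked, stopped and removed individually, which is exactly the repetition Compose removes.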

Docker Compose

Fig.sh was a great solution to this, as it is a tool to orchestrate multiple containers into one system architecture. E.g. you might have a database container, an API-based business application container and a front-end application container. You configure these as 3 containers, tell Fig how they connect, and off you go.

Fig and its creators at Orchard were swallowed up by Docker, and Docker has since released Docker Compose as the tool to orchestrate containers, replacing Fig, which is now deprecated.

Docker Machine

Docker usually runs locally, or hosted inside the excellent boot2docker or a Vagrant instance, as detailed in my Docker with OSX and Ubuntu howto. Docker Machine lets you create a Docker host VM locally or on a cloud provider in one command.

Docker Swarm

Docker Swarm lets you create a cluster of Docker Machine instances so that they appear as a single Docker host. This works locally as well as on cloud providers.

Install

Install Docker

For OSX, install via the Boot2Docker installer, which also installs Docker, or via Brew:

brew install docker; brew install boot2docker; boot2docker init

For Ubuntu:

sudo apt-key adv \
--keyserver hkp://keyserver.ubuntu.com:80 \
--recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9;
sudo sh -c \
"echo deb https://get.docker.io/ubuntu docker main >\
/etc/apt/sources.list.d/docker.list";
sudo apt-get update; sudo apt-get install lxc-docker

Test the install:

docker --version

(Read my Docker with OSX and Ubuntu howto for more detailed instructions and alternatives).

Install Compose

Either follow the script on the Compose install page, which at the time of writing is this:

curl -L https://github.com/docker/compose/releases/download/1.3.3/docker-compose-`uname -s`-`uname -m` > \
/usr/local/bin/docker-compose;
chmod +x /usr/local/bin/docker-compose

Note the script only works in Bash, not in e.g. Fish. You can, however, replace the uname parts manually to get it to work, as shown below.
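For example, on OSX, where uname -s prints Darwin and uname -m prints x86_64, the hard-coded equivalent would be:

curl -L https://github.com/docker/compose/releases/download/1.3.3/docker-compose-Darwin-x86_64 > \
/usr/local/bin/docker-compose;
chmod +x /usr/local/bin/docker-compose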

Or use Brew:

brew install docker-compose

I would also suggest installing the command-line completion for Compose if you use Bash. Note the Brew install includes this by default.
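If you installed via the curl script instead, the completion script can be fetched from the Compose repository; the path below matches Compose 1.3.3, but verify it against the install page:

curl -L https://raw.githubusercontent.com/docker/compose/1.3.3/contrib/completion/bash/docker-compose > \
/etc/bash_completion.d/docker-compose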

Test the install:

docker-compose --version

Install Machine

Machine requires a binary download depending on your OS. Please download the appropriate one from docs.docker.com/machine/#installation.

For OSX :

wget https://github.com/docker/machine/releases/download/v0.3.0/docker-machine_darwin-amd64;
chmod +x docker-machine_darwin-amd64;
mv docker-machine_darwin-amd64 /usr/local/bin/docker-machine

Or use Brew:

brew install docker-machine

Test the install:

docker-machine --version

Install Virtualbox

We will also use Virtualbox to create VMs locally with Machine.

Either download and install it via the Virtualbox installers.

Or in OSX use Brew with Cask:

brew cask install virtualbox;

Or via apt-get in Ubuntu:

sudo apt-get install virtualbox

Local Machine

Let's create a local VM host with Machine.

docker-machine create --driver virtualbox \
--virtualbox-memory 3076 dev;

This will download boot2docker as a local VM and allocate 3GB of memory. I suggest you update your default settings, basically replacing the original boot2docker settings, and add these to your .bash_profile or config.fish. (my dotfile)

Set the environment properties that tell the docker client which machine's Docker daemon to communicate with. If you use Bash:

eval $(docker-machine env dev)

Or alternatively, if you use the Fish shell:

eval (docker-machine env dev)
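Either way, the env command just prints export statements for your shell, so the output will look something like this (your IP and paths will differ):

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/myuser/.docker/machine/machines/dev"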

Let's list the currently running machines, i.e. only the dev one.

docker-machine ls
NAME ACTIVE DRIVER     STATE   URL                       SWARM
dev  *      virtualbox Running tcp://192.168.99.100:2376

Install Swarm

Swarm is available as a Docker image, so installing it is simply:

docker pull swarm

Or use Brew:

brew install docker-swarm

Orchestrate

Let's create a common, basic three-system architecture: one database based on Postgres, one API service application based on Spray, and one simple frontend application based on Play.

The frontend talks to the API and the API talks to the database.

My preference for all things Scala is obvious; you might want to pull in other tech stacks instead.

Clone my example code, or you can install a Java JDK and Typesafe's Activator to manually create the project. The example code uses Java and Activator inside Docker images so you don't need to install them.

git clone https://github.com/flurdy/docker-compose-machine-swarm-cloud-example.git

Though the building process later would download them, you might as well pre-download the following Docker images that this project uses.

docker pull debian:wheezy; docker pull postgres:9.4; docker pull flurdy/activator:1.3.2

The frontend application has a Dockerfile that describes it like this:

vi frontend/Dockerfile

FROM flurdy/activator:1.3.2

MAINTAINER flurdy

ENV DEBIAN_FRONTEND noninteractive

ENV APPDIR /opt/app

RUN mkdir -p /etc/opt/app && \
  mkdir -p $HOME/.sbt/0.13

ADD . /opt/app

WORKDIR /opt/app

RUN rm -rf /opt/app/target /opt/app/project/project /opt/app/project/target

RUN cp /opt/app/conf/* /etc/opt/app/ && \
  cp /opt/app/conf/repositories $HOME/.sbt/ && \
  cp /opt/app/conf/local.sbt $HOME/.sbt/0.13/

RUN /usr/local/bin/activator stage

WORKDIR /opt/app

ENTRYPOINT ["/opt/app/target/universal/stage/bin/frontend"]

EXPOSE 9000
EXPOSE 9999

This pulls in an Activator Docker image I have on Dockerhub. It adds a repository file to use local Maven and Ivy repositories to avoid constantly downloading the internet on each build.

The service application has a Dockerfile that describes it like this:

vi service/Dockerfile

FROM flurdy/activator:1.3.2

MAINTAINER flurdy

ENV DEBIAN_FRONTEND noninteractive

ENV APPDIR /opt/app

RUN mkdir -p /etc/opt/app && \
  mkdir -p $HOME/.sbt/0.13

ADD . /opt/app

WORKDIR /opt/app

RUN rm -rf /opt/app/target /opt/app/project/project /opt/app/project/target

RUN cp /opt/app/src/main/resources/* /etc/opt/app/ && \
  cp /opt/app/src/main/resources/repositories $HOME/.sbt/ && \
  cp /opt/app/src/main/resources/local.sbt $HOME/.sbt/0.13/ && \
  chmod +x /opt/app/src/main/resources/start.sh

RUN /usr/local/bin/activator assembly

ENTRYPOINT ["/opt/app/src/main/resources/start.sh"]

CMD "-config.file=/etc/opt/app/docker.conf"

EXPOSE 8880

The docker-compose.yml is what defines the orchestration for this system.

vi docker-compose.yml

frontend:
  build: frontend
  command: "-Dconfig.file=/etc/opt/frontend/docker.conf"
  links:
    - "service"
  ports:
    - "49910:9000"
  # volumes:
  #   - frontend/conf:/etc/opt/frontend:ro
  volumes_from:
    - maven
service:
  build: service
  command: run -Dconfig.file=/etc/opt/service/docker.conf
  links:
    - "database"
  ports:
    - "49920:8880"
  expose:
    - "8880"
  # volumes:
  #   - service/src/main/resources:/etc/opt/service:ro
  volumes_from:
    - maven
database:
  image: postgres:9.4
  expose:
    - "5432"
maven:
  image: debian:wheezy
  volumes:
    - ~/.m2:/root/.m2:rw
    - ~/.ivy2:/root/.ivy2:rw
    - ~/.m2:/home/docker/.m2:rw
    - ~/.ivy2:/home/docker/.ivy2:rw
    - service/src/main/resources:/root/.sbt:ro

It adds a database image, and also a maven image to cache Maven and Ivy downloads. The maven container does not even have to be running for its volumes to be available to other containers.
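To convince yourself the cache is shared, you can mount the same volumes from the stopped maven container into a throwaway container; the container name here is a guess based on Compose's default project_service_index naming:

docker run --rm --volumes-from example_maven_1 debian:wheezy ls /root/.ivy2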

To speed up Maven, Ivy and sbt, log into your local machine VM and create symlinks to your host folders. The boot2docker image does auto-mount your /Users or /Home folders, which is handy.

docker-machine ssh dev
ln -s /Users/myuser/.m2 ~/.m2
ln -s /Users/myuser/.ivy2 ~/.ivy2
exit

Unfortunately you have to redo this step every time the machine VM is restarted.
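An untested way to automate this is boot2docker's bootlocal.sh hook, which the VM runs on every boot from its persistent disk:

docker-machine ssh dev
sudo sh -c 'printf "#!/bin/sh\nln -sf /Users/myuser/.m2 /home/docker/.m2\nln -sf /Users/myuser/.ivy2 /home/docker/.ivy2\n" > /var/lib/boot2docker/bootlocal.sh'
sudo chmod +x /var/lib/boot2docker/bootlocal.sh
exit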

Now let's build and run these. These steps are where you will spend a lot of time getting things right...

docker-compose build

If this builds OK, we can launch the orchestration as a daemon.

docker-compose up -d

And check if it is running.

docker-compose ps
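The output should show the application containers as Up, along these lines (names depend on your project directory, and commands are abbreviated here):

Name               Command                        State   Ports
example_database_1 /docker-entrypoint.sh postgres Up      5432/tcp
example_frontend_1 /opt/app/target/universal/...  Up      0.0.0.0:49910->9000/tcp
example_service_1  /opt/app/src/main/resource...  Up      0.0.0.0:49920->8880/tcp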

If you have problems, then logs are available.

docker-compose logs

Some quick curl calls to see if it works OK. Let's post a pizza order, and see if it is listed in the database afterwards.

curl -H 'Content-Type: application/json' \
-d '{"pizza": "Pepperoni"}' "http://$(docker-machine ip):49920/pizza";
curl "http://$(docker-machine ip):49920/pizzas"

Hopefully the response is something like this:

{
  "pizzas": [{
    "id": 1,
    "pizza": "pepperoni"
  }]
}

You can also see the pizza orders listed via the frontend.

open http://192.168.99.100:49910

(TODO: Make a more lightweight example...)

Create clouds

As you created a Machine locally, you can just as easily create machines at a cloud/IaaS provider, or on your own hosting solution.

Google Compute Engine

Our first provider is Google Compute Engine. Create an account if you have not already done so. Configure billing, or take advantage of their generous two-month trial to experiment with.

Go to the Google Developers Console and create a project. Take note of the generated project id, not the project name.

You also need to go into the APIs settings under APIs & auth and turn on the Google Compute Engine API.

Google will launch a browser window for OAuth authentication, so make sure you are in a non-headless environment. It will provide a token for you to copy and paste into the shell.

Note Google Compute Engine with Machine requires a Docker Engine build that includes identity support.

docker-machine create \
--driver google \
--google-project your-project-id \
--google-zone europe-west1-c \
gcebox

Amazon AWS

As always, create an account with Amazon AWS EC2 if you do not already have one, and log in to the AWS console. AWS also gives you a generous trial to experiment with.

I recommend creating separate IAM credentials to use for this, as these keys will be exposed in your shell history. Depending on your security settings, you may need to add the IAM user to an IAM group as well.

Create a normal VPC, noting the VPC id and the region and zone you created it in. You will also need to find the id of the public subnet inside that VPC.
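If you have the AWS CLI installed and configured, you can look both up from the shell; a sketch, assuming your default credentials and region are set:

aws ec2 describe-vpcs --query 'Vpcs[].VpcId'
aws ec2 describe-subnets \
--filters Name=vpc-id,Values=your-aws-vpc-id \
--query 'Subnets[].SubnetId'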

docker-machine create \
--driver amazonec2 \
--amazonec2-access-key your-aws-access-key \
--amazonec2-secret-key your-aws-secret-key \
--amazonec2-vpc-id your-aws-vpc-id \
--amazonec2-subnet-id your-aws-subnet-id \
--amazonec2-region us-east-1 \
--amazonec2-zone a \
ec2box

You will need to log onto the EC2 console and edit the security group called Docker+Machine, adding the TCP port range 49900-49999 for your IP.
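This can also be scripted with the AWS CLI; a sketch, assuming you first look up the security group's id:

aws ec2 authorize-security-group-ingress \
--group-id sg-your-docker-machine-group \
--protocol tcp --port 49900-49999 \
--cidr your.ip.ad.dr/32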

Digital Ocean

As with the others, first create an account with Digital Ocean. Then create a personal API token in DO's admin panel.

docker-machine create \
--driver digitalocean \
--digitalocean-access-token your-digitalocean-api-token \
--digitalocean-size 2gb \
dobox

Other providers

Machine and Swarm do support a range of other providers. If anyone wants to send pull requests with examples, then please do.

Stop and remove cloud machines

If you want to stop machines, especially expensive cloud-based instances:

docker-machine stop ec2box

Note if you stop and start remote cloud machines, you may end up with different IPs; that cloud machine's certificate will then be recreated automatically (since Machine 0.2).

If you want to remove machines:

docker-machine rm ec2box

If you want to remove broken machines:

docker-machine rm -f ec2box

Push to the cloud

You will now have your local dev machine and one or more cloud based instances.

docker-machine ls
NAME   ACTIVE DRIVER       STATE   URL                       SWARM
dev    *      virtualbox   Running tcp://192.168.99.100:2376
ec2box        amazonec2    Running tcp://1.2.3.4:2376
gcebox        google       Stopped
dobox         digitalocean Stopped

Let's make the ec2 box active, i.e. the box any docker or compose commands talk to.

eval $(docker-machine env ec2box)
docker-machine ls
NAME   ACTIVE DRIVER       STATE   URL                       SWARM
dev           virtualbox   Running tcp://192.168.99.100:2376
ec2box *      amazonec2    Running tcp://1.2.3.4:2376
gcebox        google       Stopped
dobox         digitalocean Stopped

And with more information:

docker info

And run our orchestration on the cloud machine.

docker-compose up -d;
docker-compose logs;
export DOCKER_IP=$(docker-machine ip);
curl "http://$DOCKER_IP:49910"

Scale the cloud

We have created just one machine instance for Docker at each cloud provider. If you plan to keep adding orchestrations to it, or want to scale horizontally for capacity and reliability, then you need more instances. As mentioned, Swarm handles this and still exposes the cluster as just one instance.

Compose & Swarm issues

Using Swarm with Compose has had some issues, mostly due to routing requests correctly and efficiently between Compose apps across Machine instances. There have been some temporary workarounds.

With the announcement of the new Docker Network this should be resolved and integrated as of Swarm 0.3 and Compose 1.3. It is still experimental, but follow this guide to use Network with Swarm and Compose. Alternatively, run normal Docker containers directly, not with Compose.

Swarm locally

Let's scale up locally first.

eval $(docker-machine env dev); docker run swarm create

Which should respond with a token to use when creating a swarm master and nodes.

abcd09876

Then create a swarm master node.

docker-machine create \
-d virtualbox \
--swarm \
--swarm-master \
--swarm-discovery token://abcd09876 \
swarm-master

And create a few swarm nodes:

docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery token://abcd09876 \
swarm-node-00;
docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery token://abcd09876 \
swarm-node-01

And a simple check to see if it is running:

eval $(docker-machine env --swarm swarm-master); docker info
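The info output should list all three machines as nodes, roughly like this (abbreviated, your IPs will differ):

Containers: 4
Nodes: 3
 swarm-master: 192.168.99.101:2376
 swarm-node-00: 192.168.99.102:2376
 swarm-node-01: 192.168.99.103:2376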

Swarm in the cloud

Swarming in the cloud is just a matter of changing the driver, plus all the extra driver options.

eval $(docker-machine env ec2box); docker run swarm create

This returns a token as before, in this example 12321312. Then create the master and nodes:

docker-machine create \
-d amazonec2 \
--amazonec2-access-key your-aws-access-key \
--amazonec2-secret-key your-aws-secret-key \
--amazonec2-vpc-id your-aws-vpc-id \
--amazonec2-subnet-id your-aws-subnet-id \
--amazonec2-region us-east-1 \
--amazonec2-zone a \
--swarm \
--swarm-master \
--swarm-discovery token://12321312 \
swarm-ec2-master;
docker-machine create \
-d amazonec2 \
--amazonec2-access-key your-aws-access-key \
--amazonec2-secret-key your-aws-secret-key \
--amazonec2-vpc-id your-aws-vpc-id \
--amazonec2-subnet-id your-aws-subnet-id \
--amazonec2-region us-east-1 \
--amazonec2-zone a \
--swarm \
--swarm-discovery token://12321312 \
swarm-ec2-node-00;
docker-machine create \
-d amazonec2 \
--amazonec2-access-key your-aws-access-key \
--amazonec2-secret-key your-aws-secret-key \
--amazonec2-vpc-id your-aws-vpc-id \
--amazonec2-subnet-id your-aws-subnet-id \
--amazonec2-region us-east-1 \
--amazonec2-zone a \
--swarm \
--swarm-discovery token://12321312 \
swarm-ec2-node-01;
eval $(docker-machine env --swarm swarm-ec2-master); docker info

Summary

You can now orchestrate applications, create machines locally and in several cloud providers and scale this across multiple instances.

You are now free to automate and extend this process, and add other tools such as Dokku etc.

Future

The Machine and Swarm integration with Compose will get better.

A lot of tools will be created on top of these or added to existing toolsets.

A credentials registry to share between teams for multiple providers would be handy.

I am tempted to create a script that lets you promote a local swarm to another provider in one simple command. This would let you quickly scale up or clone new environments, across providers, with ease.

Alternatives

You can instead use Docker-specific container services such as AWS's closed-beta Container Service or Google's open-beta Container Engine. You can use Google's Kubernetes for higher-level cluster management. Joyent's Triton Container Service across a whole data centre sounds awesome.

References

This howto was heavily influenced by Docker's own documentation and presentations. In time it will mix with other sources and extend with my own experiences.

Feedback

Please fork and send a pull request to correct any typos or contribute useful additions.

Buy a t-shirt if you found this guide useful. Hire Ivar for short term advice or long term consultancy.

Otherwise contact flurdy, especially for anything factually incorrect. Apologies for procrastinated replies.