cc by-sa flurdy

Play with Docker

Building Docker images of applications that use the Play Framework.

Started: December 2016. Last updated: 13th Dec 2016.

Advice on how to build Play Framework Docker images: the potential issues, suggested solutions and alternatives. It is also relevant for any other Scala based application images.

Play Framework

What is the Play Framework? Play is an application framework popular for web applications and web services.

It offers non-blocking request handling and many other innovative features and sub-projects. The application code can be written in either Java or Scala, and it runs on the JVM.

Docker

What is Docker? Docker is a virtualisation technology that creates a simple abstraction layer for running containers of single-purpose applications.

An application is dockerised with a Dockerfile. The Dockerfile describes which base image the application is built upon, e.g. a plain OS image such as Ubuntu Linux, or an image built on top of that with certain framework(s) already installed, e.g. Node.js, Django, etc. The Dockerfile then describes the further steps to build and run the application: what additional tools and frameworks to install, environment variables, what executable to run and which ports to expose.
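
As a minimal sketch, with entirely hypothetical names, a Dockerfile pulls those concepts together like this:

FROM ubuntu:16.04

# Further tools and frameworks to install
RUN apt-get update && apt-get install -y openjdk-8-jre-headless

# Environment variables, the executable to run, and the ports to expose
ENV APP_ENV production
COPY myapp.jar /opt/myapp.jar
ENTRYPOINT ["java", "-jar", "/opt/myapp.jar"]
EXPOSE 9000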

The problem

Downloading the internet

Play depends on a set of Ivy and Maven dependencies. The application most likely depends on a further set of dependencies. And these dependencies have further transitive dependencies.

These dependencies all have to be downloaded from central repositories. You might need more than 100 dependencies of a few megabytes each, so the total is often several hundred megabytes. On a fresh install, where you need to download everything, this is what is popularly called downloading the internet.

Downloading this on a poor/slow internet connection is when you discover new profanities.

Obviously that is inefficient, so on a local computer these dependencies are cached in a ~/.m2 or ~/.ivy2 folder. That way you only download new dependencies, which is rare.
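
As a quick illustration of how much ends up cached locally, you can check the size of those folders:

du -sh ~/.ivy2 ~/.m2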

And at an organisational level you can even prevent fresh installs from needing to download across the internet by using a local repository cache server, such as Artifactory, Nexus, etc.

Downloading the internet inside a Docker container

But when building a Play application with Docker, you do not have a pre-populated ~/.ivy2 folder. And if this is a public project you would not hard-code a dependency on a local Maven repository server either.

So on each build it will download the internet. Every time.

Which is like frequently watching paint dry. It is very frustrating when you need to test a new version, especially if you rely on it for Continuous Integration. It is also costly on network bandwidth.

This tutorial will discuss possible solutions and workarounds for this problem.

Full Play Docker build

Ignoring the problem above, here is a full Dockerfile that installs Java, downloads Activator, and builds the application.

FROM alpine:3.4

MAINTAINER YOURNAME <EMAIL/HANDLE>

RUN apk update && apk add bash wget unzip openjdk8

ENV PATH $PATH:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin
ENV ACTIVATOR_VERSION 1.3.12

RUN mkdir -p /opt/build /etc/app

WORKDIR /opt

RUN wget -q --show-progress \
  http://downloads.typesafe.com/typesafe-activator/$ACTIVATOR_VERSION/typesafe-activator-$ACTIVATOR_VERSION-minimal.zip && \
  unzip -qq typesafe-activator-$ACTIVATOR_VERSION-minimal.zip && \
  mv activator-$ACTIVATOR_VERSION-minimal activator && \
  ln -s /opt/activator/bin/activator /usr/local/bin/activator && \
  rm -f typesafe-activator-$ACTIVATOR_VERSION-minimal.zip

COPY conf /etc/app/

ADD . /opt/build/

WORKDIR /opt/build

RUN /opt/activator/bin/activator clean stage && \
  rm -f target/universal/stage/bin/*.bat && \
  mv target/universal/stage/bin/* target/universal/stage/bin/app && \
  mv target/universal /opt/app && \
  ln -s /opt/app/stage/logs /var/log/app && \
  rm -rf /opt/build /opt/activator $HOME/.ivy2

WORKDIR /opt/app

ENTRYPOINT ["/opt/app/stage/bin/app"]

EXPOSE 9000
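
Assuming this Dockerfile sits in the root of your Play project, and picking a hypothetical image name, you would build it with something like:

docker build -t yourorg/my-application .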

Base image (FROM)

This bases the image on alpine:3.4, with Alpine Linux as the OS. Alpine is a very barebones OS with a small footprint and has become very popular as a base Docker image.

Download and install Activator

The RUN wget ... part downloads Activator and installs it. It removes the zip file to keep the footprint low. It also does all of this in one RUN command to keep the changes within one Docker layer.

You could include a pre-downloaded zip file in your project folder. But for a public utility Dockerfile, to avoid any suspicion that the zip file has been tampered with, I recommend the image downloads it from the source.

Application folders

  • /opt/build: The application is built here and then later removed
  • /opt/app: The compiled application is located here
  • /etc/app: The configuration is copied here for ease of overriding
  • /var/log/app: The application's logs

Executable

If you build Play with the stage command, the Play Framework creates executables in its target/universal/stage/bin folder. This Dockerfile renames the non-Windows script and moves it to /opt/app/stage/bin/app.

Alternatively you can use the dist command, or the sbt-assembly plugin, to create a binary to execute.

You can extend the ENTRYPOINT to include more configuration. E.g. an alternative configuration file:

ENTRYPOINT ["/opt/app/stage/bin/app", "-Dconfig.file=/etc/app/alternative.conf"]

Port

The application is available on port 9000. You can map that port to something else wherever you host your container. Port 9999 can also be exposed as a debug port.
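
For example, again assuming a hypothetical image name, you could map the container's port 9000 to port 80 on the host with something like:

docker run -d -p 80:9000 yourorg/my-application:latest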

Better base image

If you built the above image you would have noticed that, as expected, it took forever, if it did not time out altogether. You have to download all the Docker layers of the Alpine base image first (though these will now be cached by the local Docker client), then download Java, then download Activator, then the core SBT jars, then the application's dependency jars, and finally compile it all.

It would save you a lot of time to base your image on a more complete base image. Alternative base images are:

Java base image

Instead of the OS-only alpine image, you could use the OpenJDK one.

FROM openjdk:8u111-jdk-alpine

MAINTAINER YOURNAME <EMAIL/HANDLE>

RUN apk update && apk add bash wget unzip

ENV ACTIVATOR_VERSION 1.3.12

RUN mkdir -p /opt/build /etc/app

WORKDIR /opt

RUN wget -q --show-progress \
  http://downloads.typesafe.com/typesafe-activator/$ACTIVATOR_VERSION/typesafe-activator-$ACTIVATOR_VERSION-minimal.zip && \
  unzip -qq typesafe-activator-$ACTIVATOR_VERSION-minimal.zip && \
  mv activator-$ACTIVATOR_VERSION-minimal activator && \
  ln -s /opt/activator/bin/activator /usr/local/bin/activator && \
  rm -f typesafe-activator-$ACTIVATOR_VERSION-minimal.zip

COPY conf /etc/app/

ADD . /opt/build/

WORKDIR /opt/build

RUN /opt/activator/bin/activator clean stage && \
  rm -f target/universal/stage/bin/*.bat && \
  mv target/universal/stage/bin/* target/universal/stage/bin/app && \
  mv target/universal /opt/app && \
  ln -s /opt/app/stage/logs /var/log/app && \
  rm -rf /opt/build /opt/activator $HOME/.ivy2

WORKDIR /opt/app

ENTRYPOINT ["/opt/app/stage/bin/app"]

EXPOSE 9000

Activator base image

If you trust my Activator image you could save a further step by using it as a base image:

FROM flurdy/activator-mini:1.3.12-alpine

MAINTAINER YOURNAME <EMAIL/HANDLE>

RUN mkdir -p /opt/build /etc/app

COPY conf /etc/app/

ADD . /opt/build/

WORKDIR /opt/build

RUN /opt/activator/bin/activator clean stage && \
  rm -f target/universal/stage/bin/*.bat && \
  mv target/universal/stage/bin/* target/universal/stage/bin/app && \
  mv target/universal /opt/app && \
  ln -s /opt/app/stage/logs /var/log/app && \
  rm -rf /opt/build /opt/activator $HOME/.ivy2

WORKDIR /opt/app

ENTRYPOINT ["/opt/app/stage/bin/app"]

EXPOSE 9000

We use the minimal version of Activator. The dist version would add a lot of unnecessary bloat to our Docker layers.

Caching

With the Activator base image you have removed the need to download Java and Activator. But you will still need to download the internet for all the Ivy/Maven dependencies.

Mount Ivy & Maven folders

SBT will try to download any dependencies that it cannot find in the local ~/.ivy2 or ~/.m2 folders. If the binaries needed are somehow already present in these folders, you avoid downloading the internet.

I have, with mixed success, accomplished this by mounting my local ~/.ivy2 or ~/.m2 folders into Docker.

It is not as simple as using the volume parameter for Docker containers, as it is the special Docker builder container that needs the folders present. How to do this varies on a Linux box, inside Vagrant, and on a macOS/OSX machine with either boot2docker or xhyve/moby.
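
For a run-time container the mount itself is simple enough, e.g. something like (with a hypothetical image name):

docker run -v $HOME/.ivy2:/root/.ivy2 yourorg/my-application:latest

But the docker build step itself cannot mount host volumes, so the cache has to be made available to whatever performs the build, which is where the fiddling comes in.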

The problem is that you would have to do this on all clients that may build this image. So I do not recommend it.

I think some Continuous Integration providers, such as CircleCI, do this caching somehow in a better way.

Local repository

A better way is to tell SBT to first look in other locations, such as a local Maven repository server, before going across the internet to find dependencies.

You can hard-code the location of proxy repository resolvers in your project's build.sbt file:

resolvers += "My Special Repository" at "http://repository.example.com/path/"

But that is not very flexible, so the preferred option is a special ~/.sbt/repositories file. The contents of this file depend on your setup. See the reference for examples. Below we have enabled the local Ivy folders and a proxy server called "special-repository", and commented out some other options. (Apologies for the small font, but the Ivy patterns are really stupidly long).

[repositories]
# activator-local: file://${activator.local.repository-${activator.home-${user.home}/.activator}/repository}, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
local
# maven-local
special-repository: http://repository.example.com/path/
# scala-tools-releases
# typesafe-releases: http://typesafe.artifactoryonline.com/typesafe/releases
# typesafe-sbt-releases: http://typesafe.artifactoryonline.com/typesafe/ivy-releases, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
# typesafe-ivy-releases: http://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext], bootOnly
# maven-central
# sonatype-snapshots: https://oss.sonatype.org/content/repositories/snapshots

Create such a file in your project's conf folder. Then your Dockerfile may be altered like this:

FROM flurdy/activator-mini:1.3.12-alpine

MAINTAINER YOURNAME <EMAIL/HANDLE>

RUN mkdir -p /opt/build /etc/app $HOME/.sbt

COPY conf /etc/app/

COPY conf/repositories /root/.sbt/

ADD . /opt/build/

WORKDIR /opt/build

RUN /opt/activator/bin/activator clean stage && \
  rm -f target/universal/stage/bin/*.bat && \
  mv target/universal/stage/bin/* target/universal/stage/bin/app && \
  mv target/universal /opt/app && \
  ln -s /opt/app/stage/logs /var/log/app && \
  rm -rf /opt/build /opt/activator $HOME/.ivy2

WORKDIR /opt/app

ENTRYPOINT ["/opt/app/stage/bin/app"]

EXPOSE 9000

Make sure the "COPY conf/repositories /root/.sbt/" line is before any Activator/SBT RUN commands.

Pre-populated Ivy

A better caching solution is to make sure the Docker builder's Ivy folder is already populated by another Play build. That way the dependencies will already be cached locally for the real application build.

Play base image

If you base your application on my Play Framework image, the dependencies for a plain Play application will already be present. These can be 80-100% of the dependencies you need. Your Dockerfile might look like:

FROM flurdy/play-framework:2.5.10-alpine

MAINTAINER YOURNAME <EMAIL/HANDLE>

RUN mkdir -p /opt/build /etc/app

COPY conf /etc/app/

ADD . /opt/build/

WORKDIR /opt/build

RUN /opt/activator/bin/activator clean stage && \
  rm -f target/universal/stage/bin/*.bat && \
  mv target/universal/stage/bin/* target/universal/stage/bin/app && \
  mv target/universal /opt/app && \
  ln -s /opt/app/stage/logs /var/log/app && \
  rm -rf /opt/build /opt/activator $HOME/.ivy2

WORKDIR /opt/app

ENTRYPOINT ["/opt/app/stage/bin/app"]

EXPOSE 9000

If you don't want to use the image above, you can create your own Play base image by including this RUN step:

RUN /opt/activator/bin/activator new playframework-base play-scala && \
  cd playframework-base && \
  /opt/activator/bin/activator stage && \
  rm -rf /opt/build

This creates a plain play-scala application and builds it, which downloads all of Play's dependencies into ~/.ivy2, and then deletes the application.

Application base image

If your application includes a lot of further dependencies, you might want to split the Dockerfile into an "application base" image and an "application main" image. That way the base is only rebuilt when the framework or dependencies change, whilst the frequent code changes only cause the main image to be rebuilt, which should be a lot quicker.

Your base Dockerfile might be located in a sub folder of the project. It needs a build.sbt file which matches the project's, at least for the dependencies. The base Dockerfile might look something like this:

FROM flurdy/play-framework:2.5.10-alpine

MAINTAINER YOURNAME <EMAIL/HANDLE>

RUN mkdir -p /opt/build

ADD . /opt/build/

WORKDIR /opt/build

RUN /opt/activator/bin/activator new application-base play-scala && \
  mv -f build.sbt application-base/build.sbt && \
  cd application-base && \
  /opt/activator/bin/activator clean stage && \
  rm -rf /opt/build

You may build and tag this as e.g. yourorg/my-application-base. If many applications use the same dependencies, this base image could be shared across them.
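
Assuming the base Dockerfile lives in e.g. a docker-base sub folder (a hypothetical layout), you might build and tag it with something like:

docker build -t yourorg/my-application-base docker-base

The main Dockerfile might look something like: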

FROM yourorg/my-application-base:latest

MAINTAINER YOURNAME <EMAIL/HANDLE>

RUN mkdir -p /opt/build /etc/app /opt/app

COPY conf /etc/app/

ADD . /opt/build/

WORKDIR /opt/build

RUN /opt/activator/bin/activator clean stage && \
  rm -f target/universal/stage/bin/*.bat && \
  mv target/universal/stage/bin/* target/universal/stage/bin/app && \
  mv target/universal /opt/app && \
  ln -s /opt/app/stage/logs /var/log/app && \
  rm -rf /opt/build /opt/activator $HOME/.ivy2

WORKDIR /opt/app

ENTRYPOINT ["/opt/app/stage/bin/app"]

EXPOSE 9000

This should now no longer download anything, as long as none of the dependencies are snapshots or different versions from those in the base image. It should only compile the application binary.

Two step build

Another way to avoid downloading the internet on each Docker build is to build the JAR binary outside of Docker.

Manual binary build

You can build the JAR manually with e.g.:

sbt stage
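
To sanity check the staged output before building the image (the script names depend on your project name):

ls target/universal/stage/bin target/universal/stage/lib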

Then use a Dockerfile like this:

FROM openjdk:8u111-jdk-alpine

MAINTAINER YOURNAME <EMAIL/HANDLE>

RUN apk update && apk add bash

RUN mkdir -p /etc/app /opt/app

WORKDIR /opt/app

COPY conf /etc/app/

ADD target/universal /opt/app/

RUN rm -f stage/bin/*.bat && \
  mv stage/bin/* stage/bin/app && \
  ln -s /opt/app/stage/logs /var/log/app

ENTRYPOINT ["/opt/app/stage/bin/app"]

EXPOSE 9000

This is a much simpler Dockerfile. But it is also a much more fragile build, as it depends on people and integration servers having pre-built the binary.

Integration server binary build

Most organisations will already be practising the one-binary principle of building the JAR with continuous delivery processes. The project binary is built on a continuous integration server and uploaded to a Maven artifact repository.

Most likely you would assemble the one binary by adding sbt-assembly to your project's plugins, and build the fat binary with:

sbt assembly
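
Adding the plugin is typically a one-liner in project/plugins.sbt. As a sketch (the version shown is just an example from around this time):

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")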

To include that as part of the Docker build, the Dockerfile might look like this:

FROM openjdk:8u111-jdk-alpine

MAINTAINER YOURNAME <EMAIL/HANDLE>

ENV ARTREPOSITORY http://example.com/artifactory/libs-release-local
ENV ARTNAME myapplication
ENV ARTPATH com/example/$ARTNAME
ENV ARTVERSION 1.0-SNAPSHOT
ENV ARTURL $ARTREPOSITORY/$ARTPATH/$ARTVERSION/$ARTNAME-$ARTVERSION.jar

RUN mkdir -p /etc/app /opt/app

WORKDIR /opt/app

COPY conf /etc/app/

ADD bin /opt/app/bin/

RUN wget -q --show-progress $ARTURL && \
  mv $ARTNAME-$ARTVERSION.jar $ARTNAME.jar

ENTRYPOINT ["/opt/app/bin/start.sh"]

EXPOSE 9000

The Dockerfile can be made smarter by querying your artifact repository for the latest version dynamically.

The start.sh file needs to call Java with the fat jar on the classpath, optionally with some JVM options, and then the start class of the application. (An example for a spray app).
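
As a rough sketch only, assuming the jar ends up at /opt/app/myapplication.jar and that the application's main class is Play's default ProdServerStart (adjust both for your project), start.sh might look something like:

#!/bin/sh
# Hypothetical start script: jar path and main class are assumptions
exec java $JAVA_OPTS -cp /opt/app/myapplication.jar play.core.server.ProdServerStart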

Though many new projects are rethinking this process and moving the one-binary principle from one JAR to one Docker image, to avoid a two stage build process for each commit.

Environment configuration

To specify configuration and secrets for the Docker image you have a few options. You can use the same image but override properties at run time, or you can bake a different image per environment.

Run time configuration

You can override the configuration file used, individual properties, or both.

Since we use ENTRYPOINT (and not CMD), any arguments you pass when running the image are appended to that command rather than replacing it, which they would with a CMD.

docker run -ti --rm yourorg/my-application:latest -Dconfig.file=/etc/app/docker.conf

This executes /opt/app/stage/bin/app but appends -Dconfig.file=/etc/app/docker.conf to it. The configuration file can be part of the application's source code in the conf folder, or it can be mounted as a Docker volume at runtime.
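
For example, assuming a docker.conf file in the current directory on the host, mounting it at runtime might look like:

docker run -ti --rm \
  -v $(pwd)/docker.conf:/etc/app/docker.conf \
  yourorg/my-application:latest -Dconfig.file=/etc/app/docker.conf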

If you find that all of the application's Docker containers override with the same config file, then just modify the Dockerfile's ENTRYPOINT:

ENTRYPOINT ["/opt/app/stage/bin/app", "-Dconfig.file=/etc/app/docker.conf"]

Individual properties can also be overridden, either via -DTHISPROPERTY=thisvalue or, even better, as an environment variable:

docker run -ti --rm \
  -e MYPROPERTY=thisvalue -e OTHERPROPERTY=thatvalue yourorg/my-application:latest

Then inside the conf/application.conf:

myproperty=some-default-value
# the ? prefix makes the override optional, so the default above is kept if the environment variable is absent
myproperty=${?MYPROPERTY}
otherproperty=some-other-default-value
otherproperty=${?OTHERPROPERTY}

Image per environment

Some might prefer to bake different Docker images per environment. One way to do that is to use the ARG instruction.

ARG targetenv

RUN mv /etc/app/${targetenv:-docker}.conf /etc/app/environment.conf

ENTRYPOINT ["/opt/app/stage/bin/app", "-Dconfig.file=/etc/app/environment.conf"]

For each build you need to specify the ARG value, e.g. staging, production, etc.

docker build --build-arg targetenv=production -t yourorg/my-application-production .

Alternatives

Instead of this document's Play & Docker configurations you might consider:

References

I have previously written some related tutorials based on Play, Scala and/or Docker:

Feedback

Please fork and send a pull request to correct any typos, or to add useful additions.

Buy a t-shirt if you found this guide useful. Hire me for short term advice or long term consultancy.

Otherwise contact me. Especially for things factually incorrect. Apologies for procrastinated replies.