Over the last few months, we have been working on using Docker to produce efficient containers for our front end solutions.
Requirements
Early on, we noticed that there were several requirements we were going to have to consider:
- Tracking — when changes are made to our containers’ configuration files, we want to know what changes were made, who made them, and why they were made.
- Small container sizes — as our pipelines run numerous times a day, we will build and store many different containers. We need to minimise the size of each container so that keeping all of them does not become a storage problem.
- Container versioning — we need to track which build caused a container to be built, and what version of the application was used in that build. This will give us the ability to retrospectively look at previous build containers when investigating bugs. We also need to be able to identify which container was the last one to be built, as some builds run nightly against the latest merge into our master branch.
- Joining containers — our front end containers need to be connected to a back end to efficiently test them against our system tests. We need to figure out how we can link our front end solution with our mock back end container.
- Container storage — we need to store our containers in a central location, from where our builds and developers can easily pull them when needed.
- Container management and orchestration — the solution must provide each container the resources it requires to run effectively, and the containers must be evenly distributed across hosts to prevent poor performance.
Tracking
To track changes made to our files we used version control. We set up a single Git repository that has a directory at the root level for each container we want to define, along with some additional helper files. Each directory that represents a container contains a README, a Dockerfile, and the configuration that container needs to function correctly. In addition to our front-end applications, the repository also includes the container definition for our back-end mock environment, as well as a directory of compose files. We’ll talk about these later on.
├── README.md
├── backend-mocks
│   ├── Dockerfile
│   ├── README.md
│   └── files
│       ├── environment_config
│       ├── global_config
│       └── start.sh
├── cleanup.sh
├── compose-files
│   ├── corporate.yml
│   ├── gc.yml
│   ├── jenkins-backend.yml
│   ├── mobile-stack.yml
│   ├── monitoring.yml
│   ├── pro-ct5-stack.yml
│   ├── pro-stack.yml
│   ├── sales-ct5-stack.yml
│   ├── sales-stack-v2.yml
│   └── sales-stack.yml
├── corporate
│   ├── Dockerfile
│   ├── README.md
│   └── files
│       ├── confd
│       ├── start.sh
│       └── tomcat
├── docker.iml
├── mobile
│   ├── Dockerfile
│   ├── README.md
│   └── files
│       ├── confd
│       ├── start.sh
│       └── tomcat
├── pro
│   ├── Dockerfile
│   ├── README.md
│   └── files
│       ├── confd
│       ├── start.sh
│       └── tomcat
└── sales
    ├── Dockerfile
    ├── README.md
    └── files
        ├── confd
        ├── start.sh
        └── tomcat
Building our containers
To run our front end solutions, our containers require Apache Tomcat to be installed. To reduce the size as much as possible, we opted for Tomcat-Alpine images. Alpine is a very small distribution of Linux, which only contains the bare essentials. Using Tomcat-Alpine instead of the standard Tomcat image resulted in a size reduction of 50%.
In an attempt to reduce the size even further, we decided to take a multi-stage build approach. The advantage of this is that we can build a dependency in one image and copy just the parts of it we need into another. The ‘intermediate’ image is not included in the final image, so it adds nothing to its size.
We also needed a way to pass a version into each container to specify the version of the application to build. To do this, we pass in a VERSION variable using the --build-arg option of the docker build command (see the example command following the Dockerfile below).
Lastly, we had to consider the structure of our Dockerfile. Each instruction in a Dockerfile creates a layer. By placing the instructions that rarely change at the top of the Dockerfile, Docker can reuse layers it has previously built and rebuild only the lower layers, which decreases the build time.
Here is a mock example of one of our Dockerfiles, for our FX Professional application, which has the path pro/Dockerfile:
# Setting up confd in separate container
FROM golang:1.9-alpine as confd

# Update certificates
RUN apk update \
    && apk add ca-certificates wget \
    && update-ca-certificates

# Prepare directories
RUN apk add --no-cache make unzip
RUN mkdir -p /go/src/github.com/kelseyhightower/confd && \
    ln -s /go/src/github.com/kelseyhightower/confd /app
WORKDIR /app

# Download confd and move into correct directory
RUN wget -O /tmp/confd.zip https://github.com/kelseyhightower/confd/archive/v0.14.0.zip && \
    unzip -d /tmp/confd /tmp/confd.zip && \
    cp -r /tmp/confd/*/* /app && \
    rm -rf /tmp/confd* && \
    make build

# App container
FROM tomcat:7-alpine
EXPOSE 8080

# default version of app to install
ARG VERSION=VERSION_PLACEHOLDER

# Move confd into the Tomcat Alpine container
COPY --from=confd /app/bin/confd /usr/local/bin/confd

# delete Tomcat default webapps
RUN rm -r /usr/local/tomcat/webapps/manager \
    /usr/local/tomcat/webapps/host-manager \
    /usr/local/tomcat/webapps/examples \
    /usr/local/tomcat/webapps/docs \
    /usr/local/tomcat/webapps/ROOT

# copy config and start script into place
COPY files/tomcat/conf /usr/local/tomcat/conf
COPY files/confd /etc/confd
COPY files/start.sh /usr/local/bin/

# download and extract war file
RUN mkdir /usr/local/tomcat/webapps/pro && \
    cd /usr/local/tomcat/webapps/pro && \
    wget http://whereTheArtifactIsStored.com/pro/${VERSION}/pro-${VERSION}.war && \
    unzip pro-${VERSION}.war && \
    rm pro-${VERSION}.war

CMD ["start.sh"]
We can then run the following command from the root of our repository to build the image:
docker build pro --build-arg VERSION=5.6.7
Versioning
Our default versioning at Caplin is to have the first job of a pipeline calculate the version, which is then passed to the jobs that need to version our artifacts. The version is derived from two sources (a sketch of how they can be combined follows the list below):
- A version.gradle file, which stores the major, minor and patch numbers. By default, this is stored in the root directory of each application.
- The Git hash, which is appended to the end of the version.
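As an illustration only, a version string could be assembled from these two sources along the following lines; the property names assumed inside version.gradle and the grep patterns are hypothetical, not our actual pipeline code:

# Hypothetical sketch: build a major.minor.patch-githash version string.
# Assumes version.gradle contains lines such as:  major = 5, minor = 6, patch = 7
MAJOR=$(grep -oE 'major *= *[0-9]+' version.gradle | grep -oE '[0-9]+')
MINOR=$(grep -oE 'minor *= *[0-9]+' version.gradle | grep -oE '[0-9]+')
PATCH=$(grep -oE 'patch *= *[0-9]+' version.gradle | grep -oE '[0-9]+')
GITHASH=$(git rev-parse --short HEAD)          # short hash of the current commit
VERSION="${MAJOR}.${MINOR}.${PATCH}-${GITHASH}"
echo "${VERSION}"                              # e.g. 5.6.7-a1b2c3d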
To version our containers, we tag each image with our standard version string using the docker tag command.
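For example, assuming the image was built with a local tag such as pro (the build command above omits -t, so that local tag is an assumption), and reusing the placeholder registry name that appears in the compose file later in this post, the tagging and upload steps might look like this:

# Tag the locally built image with the calculated version and push it to the registry
docker tag pro ourImageLocation.com/pro:5.6.7-a1b2c3d
docker push ourImageLocation.com/pro:5.6.7-a1b2c3d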
Joining containers
To test a front end application effectively, we need to pass it one of our mock back ends to simulate a real back end.
As seen above, we already have a container that provides a mock back end; we just need to link it and a front end together. This is where Docker Compose comes in. Docker Compose allows you to link multiple containers together by defining services, along with their image, ports and various other fields, in a YAML file. When you deploy the compose file with the relevant arguments, the containers you defined are created as services within a stack.
Below is an example of a YAML file, called pro-stack.yml. In the file, we specify that we want to use the FX Professional Docker image and connect it to our mock back end image.
version: '3'
services:
  backend:
    image: ourImageLocation.com/backend-mocks:2.11.0
    ports:
      - "190${BATCH}:18080"
      - "191${BATCH}:18081"
    deploy:
      placement:
        constraints:
          - node.role == worker
  pro:
    image: ourImageLocation.com/pro:${VERSION}
    ports:
      - "90${BATCH}:8080"
    environment:
      - "liberator_primary_address=docker.caplin.com"
      - "liberator_primary_port=190${BATCH}"
      - "liberator_primary_https_port=191${BATCH}"
      - "liberator_secondary_address=docker.caplin.com"
      - "liberator_secondary_port=190${BATCH}"
      - "liberator_secondary_https_port=191${BATCH}"
The pro-stack.yml file references two environment variables: the version of FX Professional we want to use (VERSION) and a batch number (BATCH). The BATCH environment variable is incorporated in the port numbers for the container, and, by using a unique BATCH value for each container instance, we ensure that each container uses unique ports. The environment variables VERSION and BATCH are set when the docker command is executed.
The example command below creates two services based on the configuration in the pro-stack.yml file. The services are named by appending the two service names (‘backend’ and ‘pro’ in the YAML file above) to the name in the command line, ‘ExamplePro’.
BATCH=33 VERSION=latest docker stack deploy --compose-file pro-stack.yml ExamplePro
This command creates two services: ExamplePro_pro and ExamplePro_backend. We can confirm that the stack has been created and see how many of its services are running with the command below:
docker stack ls
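To drill down further, the standard Docker commands below can be used; they are shown here as a general illustration rather than as part of our pipeline:

docker stack services ExamplePro    # list the stack's services and their replica counts
docker service ps ExamplePro_pro    # show which node each task of a service is running on
docker stack rm ExamplePro          # tear the stack down when finished with it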
Storing containers
To keep track of built containers, we decided to store them alongside our other artifacts.
Each solution has a latest directory, which holds the last container built by our continuous-delivery build, plus further directories named with the full major-minor-patch-githash version.
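Because the compose files reference images by registry path and tag, a build or a developer can pull either a specific version or the most recent one in the usual way. This is a generic illustration that reuses the placeholder registry name from the compose file above:

docker pull ourImageLocation.com/pro:latest           # the most recent continuous-delivery build
docker pull ourImageLocation.com/pro:5.6.7-a1b2c3d    # a specific versioned build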
Container management and orchestration
To manage and orchestrate our containers, we decided to use Docker Swarm. Our current setup has three managers that split the running containers between six workers. Each time a build is run that needs to use a container, the build connects to our swarm and asks for a container to be created. A manager then decides which of the workers will run the container, and gives the build a URL linking it to the container.
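For readers who have not set up a swarm before, a cluster like this is created with the standard Docker commands below; the IP address is a placeholder and this is not our exact provisioning script:

# On the first manager node
docker swarm init --advertise-addr 10.0.0.1
# 'docker swarm init' prints a join token; run the join command on each worker node
docker swarm join --token <worker-join-token> 10.0.0.1:2377
# Additional managers join using a manager token, which can be retrieved with:
docker swarm join-token manager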
We’ve found that using Docker Swarm has helped us cope with the increased demand as more of our teams convert to using Docker containers.
We’ve also used a visualiser to give our developers a simple user interface that shows which containers are running, and details about them.
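We won’t detail the specific tool here, but as an illustration, the widely used dockersamples/visualizer image can be deployed as a swarm service with a command along these lines (the service name and published port are arbitrary choices, not necessarily what we run):

docker service create \
  --name=viz \
  --publish=8080:8080 \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer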
Next steps
We’ve found that using containers has given us several improvements:
- Our tests have improved due to having stable, repeatable environments.
- It has been easier than ever to inspect different versions of an application, because we store lightweight containers for each build.
- Our virtual machine overheads have decreased. Instead of having multiple deployment environments continually running, we can now just create temporary or long-lasting containers.
- Our continuous-deployment pipelines have become faster, because we only need to rebuild the Docker layers that have changed rather than redeploying the whole application as we did in the past.
As we look towards the future, our goal is to start containerising our Liberator and Transformer products.