Dev Week: Changing the face of Jenkins at Caplin

The current situation

At Caplin, we have been trying to improve the lives of our developers by giving them easier access to CI configuration. We recently moved from multiple CI servers to a single, well-administered Jenkins server: a new instance built around the Jenkins DSL Plugin and the idea of “configuration as code”. Unfortunately, coming from a CI server that previously focussed on pipelines, making the Jenkins ethos accessible to developers was a bit of a challenge.

We had previously investigated using Multijobs to connect a series of independent jobs into a pseudo-pipeline, which worked really well. However, the Jenkins DSL is not designed around pipelines or Multijobs either, so we had to get creative: I started writing an interpreter for the JenkinsDSL in Groovy. This allowed developers to define entire Multijobs and all their sub-jobs in a single file, specifying only the properties they needed.
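
To give a flavour of what that looked like, here is a purely illustrative sketch – the block names and properties below are hypothetical stand-ins, not our actual DSL:

multijob('platform-release') {
    job('compile') {
        repo 'ssh://git@example.com/platform.git'   // hypothetical property
        gradle 'clean assemble'                     // hypothetical property
    }
    job('integration-tests') {
        dependsOn 'compile'                         // hypothetical property
    }
}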

Over time, this DSL interpreter, its libraries and the job definitions have grown into a monolithic code base that takes over half an hour to deploy the 1,500 jobs it defines. This is a big problem if mistakes are made, or if commits are made too far apart to be batched into a single run (multiple triggers queue up and some changes take all day to take effect).

The plan

The plan for this transformation was to move away from the monolithic JenkinsDSL system we had written ourselves into. While it allowed for minimal boilerplate, defining only the properties required, with entire products (branches and “pipelines”) defined in a single file, it was no longer practical as our builds were getting more complicated by the day. Caplin had organised a Dev Week for everyone to “down tools” and look into something new, so we planned to investigate this topic during that week.

Enter Jenkinsfiles – a way of defining pipelines natively as code.

Pros:

  • Branches define themselves
  • Branches can have different build environments
  • Docker containers reduce queue times
  • Much more modular logic
  • Unit testable logic

Cons:

  • Distributed configuration

Stage 1 – A basic pipeline

The DevOps team had been looking at pipelines for a while before this Dev Week, and were excited about using them, so getting a basic pipeline up and running was not tricky. We decided to use one of our simpler builds – a git repository that runs a single Gradle command. This taught us our first lesson – there has to be a Jenkinsfile in the default branch for Jenkins to pick up any builds. A simple commit later and we were up and running.
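
A minimal declarative Jenkinsfile for that kind of build looks something like this (the Gradle task here is a placeholder, not our real one):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // The single Gradle command this repository needs
                sh './gradlew clean build'
            }
        }
    }
}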

At this point, we decided to install Blue Ocean to make the setup even easier.

Stage 2 – Introducing Docker

The DevOps team had also been looking into moving our builds to ephemeral Docker containers, to reduce bottlenecks and stop disks filling up. This seemed like the perfect time to introduce them. Thankfully, Jenkinsfiles support Dockerfiles natively, so we were able to drop that configuration into place.

FROM java:8


The basic Dockerfile we used for the build – ./Dockerfile

pipeline {
    agent {
        dockerfile true
    }
    ...
}


Telling the pipeline to build and run inside the Dockerfile above – ./Jenkinsfile

Of course, for this to work we had to make sure that the Jenkins server could access the Docker daemon, so we set up the Jenkins server and Docker on an Ubuntu box, as that had the simplest setup.
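
On Ubuntu this boils down to installing Docker and letting the jenkins user talk to the daemon – a sketch, assuming the default jenkins service user:

# Install Docker from the Ubuntu repositories
sudo apt-get install docker.io
# Add the jenkins user to the docker group, then restart Jenkins to pick it up
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins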

Stage 3 – Sharing logic

One of the main problems to overcome when moving from JenkinsDSL to Jenkinsfiles is working out how to package all of the logic that makes the DSL so powerful. For this, we looked at workflowLibs, or Shared Pipeline Libraries. Writing custom libraries allows anyone to write a Jenkinsfile without having to know every step and condition of our Jenkins builds.

From what we gathered through research, the way to make the Shared Libraries easy for developers to use was to take the syntax of global variables, which gives seamless integration (no importing of libraries in the Jenkinsfile), and add logic behind it to manipulate the incoming parameters.

echo 'Building...'
gradle {
    command 'clean createKit'
    flags '--info --refresh-dependencies'
}

This style of Groovy closure means that the libraries don’t stand out in a Jenkinsfile – ./Jenkinsfile

This meant there were going to be two parts to each library we created:

  • A var file that defined the entry point to the library from the Jenkinsfile
  • A src (class) file that defined the parameters and contained all the methods to store and use them
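
In the standard Shared Library layout, those two parts live side by side in the library repository:

(shared library repository root)
  vars/
    gradle.groovy     – the entry point callable from a Jenkinsfile
  src/
    com/caplin/pipeline/buildStep/
      Gradle.groovy   – the class holding the parameters and logic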

The var files are simple files that forward the parameters to the library’s source class and return the output to the Jenkinsfile.

def call(body) {
    // Create the library class, hand it the Jenkinsfile closure,
    // and run the compiled command as a shell step
    def g = new com.caplin.pipeline.buildStep.Gradle()
    sh g.gradle(body)
}

This is our gradle var file: all it does is create a new instance of the Gradle class, pass the Jenkinsfile parameters (body) to it, and insert the calculated command into the pipeline as a shell command – ./vars/gradle.groovy

Using the power of Groovy closures, the parameters that get passed in can be treated as method calls; all we need to do is create those methods to store what is being passed.

Because we’re not using the standard method of implementing vars and logic, we can’t use the regular example code. Instead, we set the closure’s delegate to the library class and execute it in that context.

// Runner
def gradle(Closure closure) {
    closure.delegate = this   // resolve the closure’s method calls against this class
    closure()                 // executing it populates the fields via the setters
    return compiledCommand()
}

This executes the closure’s fields as method calls within the library class file – ./src/com/caplin/pipeline/buildStep/Gradle.groovy

The rest of the steps were to construct getters and setters for the parameters, set some defaults, and finally compile the command with some conditionals:

package com.caplin.pipeline.buildStep

class Gradle implements Serializable {

    // Default Gradle properties
    String gradleCommand = "clean build"
    String gradleFlags = "--info --refresh-dependencies"
    String gradleBuildFile = null
    String gradleWrapperDir = null

    // Getters
    def getGradleCommand() { return gradleCommand }
    def getGradleFlags() { return gradleFlags }
    def getGradleFile() { return gradleBuildFile }
    def getGradleDirectory() { return gradleWrapperDir }

    // Setters – named after the fields used in the Jenkinsfile closure
    def command(String _gradleCommand) {
        gradleCommand = _gradleCommand
    }

    def flags(String _gradleFlags) {
        gradleFlags = _gradleFlags
    }

    def buildFile(String _buildFile) {
        gradleBuildFile = _buildFile
    }

    def buildDir(String _buildDirectory) {
        gradleWrapperDir = _buildDirectory
    }

    // Compile the final shell command from the stored properties
    def compiledCommand() {
        def cmd = "./gradlew ${getGradleFlags()} ${getGradleCommand()}"
        if (getGradleFile() != null) {
            cmd += " -b ${getGradleFile()}"
        }
        if (getGradleDirectory() != null) {
            cmd = "cd ${getGradleDirectory()} && ${cmd}"
        }
        return cmd
    }

    // Runner
    def gradle(Closure closure) {
        closure.delegate = this   // resolve the closure’s method calls against this class
        closure()                 // executing it populates the fields via the setters
        return compiledCommand()
    }
}

./src/com/caplin/pipeline/buildStep/Gradle.groovy

The setters in this class are methods with the same names as the parameter fields coming in from the Jenkinsfile; this is because we execute the closure as-is, so each line of the closure becomes a method call on the delegate.
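
The mechanism can be demonstrated in plain Groovy, outside Jenkins – the class and variable names here are invented for illustration:

// A stripped-down stand-in for the library class
class Receiver {
    String stored
    def command(String value) { stored = value }   // “setter” named after the closure field
}

// What looks like a property assignment in the Jenkinsfile is really a method call
def body = { command 'clean createKit' }
def r = new Receiver()
body.delegate = r
body()
assert r.stored == 'clean createKit'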

Stage 4 – Testing

The major advantage of using these Shared Libraries is that we can finally unit test our job logic. This is something our DSL implementation had been lacking, and it has caused us issues in the past. Using regular JUnit development, we quickly got tests up and running.

package com.caplin.pipeline.buildStep

import org.junit.Before
import org.junit.Test
import org.junit.Ignore
import static org.junit.Assert.assertEquals
import static org.junit.Assert.assertNull

class GradleTest {
    def g
    @Before
    void setUp() {
        g = new Gradle()
    }

    @Test
    void callingCommandShouldSetGradleCommand() {
        def command = "assemble"
        g.command(command)
        assertEquals(command, g.getGradleCommand())
    }
    ...
}
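
Since the library is plain Groovy, the tests can run anywhere Gradle can. Below is a minimal build.gradle sketch for the shared library repository – the source-set paths and dependency versions are assumptions, not our exact setup:

apply plugin: 'groovy'

repositories {
    mavenCentral()
}

// Compile the library classes from src/ and the tests from test/
sourceSets {
    main.groovy.srcDirs = ['src']
    test.groovy.srcDirs = ['test']
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.4.12'   // version illustrative
    testCompile 'junit:junit:4.12'
}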
