Dynamically provisioning Jenkins slaves with Jenkins Docker plugin

Jenkins uses a master-slave architecture: one machine is configured as the master and some other machines as slaves, and each of these machines can be given a preferred number of executors. The following illustrates that deployment architecture.

In this approach, the concurrent builds on a given Jenkins slave are not isolated: they all run in the same environment. If several builds need to run on the same slave, they must be able to share that environment, and action must be taken to avoid issues such as port conflicts. This prevents us from fully utilizing the resources of a given slave.

With Docker we can address the above problems, which are caused by the inability to isolate builds. The Jenkins Docker plugin allows a Docker host to dynamically provision a slave, run a single build on it, and then tear that slave down. The following illustrates the deployment architecture.

I'll list down the steps to follow to get this done.

First let's see what needs to be done in Jenkins master.
1. Install Jenkins on one node, which will be the master node. To install Jenkins, you can either run the Jenkins WAR file directly or deploy it in Tomcat.
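For example, to run the WAR directly (the download URL and port below are illustrative):

```shell
# Fetch the latest stable Jenkins WAR and run it on port 8080
wget https://get.jenkins.io/war-stable/latest/jenkins.war
java -jar jenkins.war --httpPort=8080
```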

2. Install Jenkins Docker Plugin[1]

Now let's see how to configure the nodes that will host the slave containers.

3. Install the Docker engine on each of the nodes. Please note that, due to a bug[2] in the Docker plugin, you need to use a Docker version below 1.12. (I was using Docker plugin version 0.16.1.)

echo "deb [arch=amd64] https://apt.dockerproject.org/repo ubuntu-trusty main" > /etc/apt/sources.list.d/docker.list

apt-get update

apt-get install docker-engine=1.11.0-0~trusty
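Since the plugin needs a pre-1.12 engine, it may also help to pin the package so a later apt-get upgrade does not move past the 1.11 build (a sketch, using the package name installed above):

```shell
# Prevent apt from upgrading docker-engine past the pinned version
sudo apt-mark hold docker-engine
```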

4. Add the current user to the docker group. This is not a required step, but if it is not done you will need root privileges (sudo) to issue Docker commands. Also note that once step 5 below is done, anyone holding the client keys can send instructions to the Docker daemon over the network, with no need for sudo or docker group membership.
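For example, on a typical Ubuntu setup:

```shell
# Add the current user to the docker group; log out and back in
# (or run `newgrp docker`) for the change to take effect
sudo usermod -aG docker $USER
```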

You can test if the installation is successful by running hello-world container
docker run hello-world

5. This is not a mandatory step, but if you need to protect the Docker daemon, create a CA plus server and client keys by following [3].
(Note that by default Docker runs via a non-networked Unix socket. It can also optionally communicate over a TCP socket, and for our purposes it needs to do so. For Docker to be reachable over the network in a safe manner, you can enable TLS by specifying the tlsverify flag and pointing Docker's tlscacert flag to a trusted CA certificate, which is what we are doing in this step.)
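As a condensed sketch of what [3] walks through (the host name is a placeholder, and the interactive subject prompts are omitted here):

```shell
HOST=node.example.com   # placeholder: the node's DNS name

# 1. CA key and self-signed CA certificate
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem

# 2. Server key and certificate signed by the CA
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
echo "subjectAltName = DNS:$HOST,IP:127.0.0.1" > extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -extfile extfile.cnf

# 3. Client key and certificate (these become key.pem/cert.pem later)
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile-client.cnf
```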

6. Configure /etc/default/docker as follows (here the daemon is bound to TCP port 2376 on all interfaces, which matches the Docker URLs used later):
DOCKER_OPTS="--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/server-cert.pem --tlskey=/path/to/server-key.pem -H tcp://0.0.0.0:2376"
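After editing the file, restart the daemon and check that it answers over TLS (the host name is a placeholder):

```shell
# Restart the daemon so the new DOCKER_OPTS take effect
sudo service docker restart

# Verify the daemon answers over TLS, using the client certs
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H=node.example.com:2376 version
```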

Now let's see the configuration to be done on the Jenkins master. We need the Jenkins master to know about the nodes we previously configured to run slave containers on.

7. Go to https://yourdomain/jenkins/configure.
What the Docker plugin does is add Docker as a Jenkins cloud provider, so each node we have will be a new “cloud”. Therefore, for each node, add a cloud of the type “Docker” through the “Add a new cloud” section, then fill in the configuration options as appropriate. Note that the Docker URL should be something like https://ip:2376 or https://thedomain:2376, where ip/thedomain is the IP address or domain of the node you are adding.

8. If you followed step 5, in the credentials section you need to “Add” new credentials of the type “Docker certificates directory”. This directory should contain the CA certificate and the client certificate and key. Please note that the file names must be exactly ca.pem, cert.pem and key.pem; those names appear to be hardcoded in the Docker plugin source code, so custom names will not work (I experienced it!).

9. You can press the “Test Connection” button to check whether the Docker plugin can successfully communicate with the remote Docker host. If it can, the Docker version of the remote host appears when the button is pressed. Note that if you have Docker 1.12.x installed, the connection test will still succeed, but once you try building a job you will get an exception, since the Docker plugin has an issue with that version.
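If the test fails, you can probe the daemon directly from the command line via the Docker Remote API (the host name is a placeholder):

```shell
# Query the daemon's /version endpoint with the client certs;
# a JSON payload with the Docker version indicates TLS is working
curl --cacert ca.pem --cert cert.pem --key key.pem \
  https://node.example.com:2376/version
```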

10. Under the “Images” section, add your Docker image via “Add Docker Template”. Note that this image must already be present on the nodes you configured, or available on Docker Hub so that it can be pulled.
There are some other configurations to be done here as well. Under “Launch method”, choose “Docker SSH computer launcher” and add the credentials of the Docker container created from your image. Note that these are NOT the credentials of the node itself, but of our dynamically provisioned Docker Jenkins slaves.
Here you can also add a label to your Docker image. This is a normal Jenkins label that can be used to bind jobs to it.
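The image needs an SSH daemon and a JDK so that the SSH launcher can connect and start the slave agent. A minimal sketch follows; the base image, user name and password are assumptions (images such as evarga/jenkins-slave on Docker Hub follow the same pattern):

```dockerfile
FROM ubuntu:14.04

# sshd for the Docker SSH computer launcher, a JDK for the slave agent
RUN apt-get update && \
    apt-get install -y openssh-server openjdk-7-jdk && \
    mkdir -p /var/run/sshd

# The account whose credentials you enter under "Launch method"
RUN useradd -m -s /bin/bash jenkins && \
    echo 'jenkins:jenkins' | chpasswd

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```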

11. OK, now we are ready to run one of our Jenkins build jobs in a Docker container! Bind the job you prefer to a Docker image using the label you set earlier and click "Build Now"!

You should see something similar to following. (Look at the bottom left corner)

Here we can see a new node named "docker-e86492df7c41", where "docker" is the name I gave the Docker cloud I created and "e86492df7c41" is the ID of the Docker container that was dynamically spawned to build the project.

