Jenkins agents on AWS EKS

The steps outlined here should help you set up Jenkins agents on AWS EKS clusters. The agents are single-use: every build gets a fresh agent, which is thrown away when the build finishes.

The big advantage of Kubernetes is the pod, which runs multiple containers as a unit. We make use of that here by bundling a Docker-in-Docker container alongside the main JNLP container.

Prerequisites
– The Jenkins Master must have network connectivity to the EKS cluster: either place both in the same VPC, or connect their VPCs via VPC peering or a Transit Gateway.

Jenkins Master
– The Kubernetes plugin (https://github.com/jenkinsci/kubernetes-plugin) must be installed. The minimum required version of this plugin is 1.23.2, which can only be installed on Jenkins version 2.190.1 and up.
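
If you script your plugin installs, one way is the Jenkins CLI (the jar path and server URL are placeholders for your own setup):

$ java -jar jenkins-cli.jar -s https://jenkins.example.com/ install-plugin kubernetes -restart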

– awscli version 1.16.300 or newer is required. Older versions do not support the command aws eks get-token, which is required for the plugin to function.
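
To verify, check the installed version and confirm the subcommand exists (the cluster name here is just an example):

$ aws --version
$ aws eks get-token --cluster-name my-awesome-cluster-7MWC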

– (This is no longer required for Kubernetes plugin versions 1.24.1 and up)
Java option “-Dorg.csanchez.jenkins.plugins.kubernetes.clients.cacheExpiration=60” must be set in /etc/sysconfig/jenkins. This forces the plugin to refresh the EKS token every minute; the default cacheExpiration of 24 hours is far longer than the 15-minute validity of EKS tokens. Link: https://github.com/jenkinsci/kubernetes-plugin#running-with-a-remote-kubernetes-cloud-in-aws-eks
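
On an RPM-based install, that could look something like the following in /etc/sysconfig/jenkins (the headless flag simply illustrates options you may already have set; the variable name can differ with your packaging):

JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dorg.csanchez.jenkins.plugins.kubernetes.clients.cacheExpiration=60"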

Jenkins ELB
– The security group for the Jenkins ELB needs to allow TCP:$JNLP_PORT and TCP:443 from the EKS worker-node security group. This allows the launched pod to report back to Jenkins as “Ready”, among other traffic. The JNLP port is defined when you set up Jenkins and can be found under Manage Jenkins.
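
As a sketch with awscli (the security-group IDs are placeholders, and 49817 stands in for your JNLP port):

$ aws ec2 authorize-security-group-ingress --group-id sg-JENKINS-ELB \
    --protocol tcp --port 49817 --source-group sg-EKS-WORKERS
$ aws ec2 authorize-security-group-ingress --group-id sg-JENKINS-ELB \
    --protocol tcp --port 443 --source-group sg-EKS-WORKERS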

EKS
– The EKS cluster security group needs to allow TCP:443 from Jenkins Master security group. This allows Jenkins Master to communicate with EKS to launch pods.
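
The same sketch for the cluster side (placeholder security-group IDs again):

$ aws ec2 authorize-security-group-ingress --group-id sg-EKS-CLUSTER \
    --protocol tcp --port 443 --source-group sg-JENKINS-MASTER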

– Modify the aws-auth ConfigMap for the EKS cluster to grant the Jenkins Masterʼs IAM role access to the cluster to start/stop pods. The IAM role of the Jenkins Master should show up this way:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    <-snip->
    - rolearn: arn:aws:iam::<snip-acc-num>:role/rol-ops-tool-jenkins-master
      username: jenkins
      groups:
        - fake-group-because-module-wants-it
  mapUsers: |
  mapAccounts: |
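
Assuming you already have kubectl access to the cluster, you can edit the ConfigMap in place:

$ kubectl edit configmap aws-auth -n kube-system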

– Add a Role and a RoleBinding to the EKS cluster with policies that grant Jenkins permission to spin pods up and down in the ‘operations’ namespace, among a few other actions. If you use Helm, this is how you could do it:

---
{{- if .Values.jenkins_agent.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-agent-role-binding
  namespace: operations
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-agent-access
subjects:
  - kind: User
    name: jenkins
    apiGroup: rbac.authorization.k8s.io
{{- end }}

---
{{- if .Values.jenkins_agent.enabled -}}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: operations
  name: jenkins-agent-access
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
{{- end }}
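
Once the chart is applied, you can sanity-check the grants by impersonating the jenkins user:

$ kubectl auth can-i create pods --namespace operations --as jenkins
$ kubectl auth can-i get secrets --namespace operations --as jenkins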

KubeConfig
– Generate a kubeconfig file for the cluster of your choice. Run this on your computer after logging into the account holding the EKS cluster:

$ aws eks list-clusters --region us-east-1 --output text
CLUSTERS my-awesome-cluster-7MWC

– Then fetch the kubeconfig for this cluster:

$ aws eks update-kubeconfig --name my-awesome-cluster-7MWC --kubeconfig config_file
Added new context arn:aws:eks:us-east-1:<snip-acc-num>:cluster/my-awesome-cluster-7MWC to <snip>

Remember the location of this file on your machine.
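
As a quick sanity check, the file should let kubectl reach the cluster:

$ kubectl --kubeconfig config_file get nodes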

Jenkins UI
These changes must be made manually in the UI. Our Jenkins master stores everything on an EFS volume, so the master itself remains stateless. The EFS volume is backed up regularly, and we also run the Jenkins Config History plugin, so we’re never far away from a known-good config.

Credentials

Create a credential in Jenkins of Global scope and kind ‘Secret file’. Upload the kubeconfig from the previous step.

Configuration

Browse to Jenkins > Manage Jenkins > Configure System

Scroll to the bottom and set up a new Cloud of type Kubernetes.

Set up the EKS cluster, choosing the kubeconfig credential created in the previous step. The Kube URL and certificate are fetched automatically from the kubeconfig credential and do not need to be set here.

Click the “Test Connection” button and ensure you see the message “Connection test successful” before proceeding.

Enter the URL for your Jenkins website for “Jenkins URL”, such as “https://jenkins.example.com”

“Jenkins tunnel” is your Jenkins domain followed by the JNLP port, such as “jenkins.example.com:49817”. This port is defined in Jenkins > Manage Jenkins > Configure Security.

Next, click “Pod Templates...” and then “Add Pod Template”. The label defined in the “Labels” text box is how you reference this agent from your builds/pipelines. Labels are separated by spaces.

In the Container section, add a container with the jnlp-slave agent. This is the container that runs the JNLP jar and connects the agent to the master. Our image is hosted on ECR and is based on this Dockerfile: https://github.com/jenkinsci/docker-agent/blob/master/8/alpine/Dockerfile

You can add environment variables for this container. In our case, we set one for defining the AWS region.

Optionally, add a second container. In our case, we add a Docker-in-Docker image so we can build Docker images in our pipelines. Note that Docker-in-Docker requires the container to run in “Privileged” mode; this checkbox is behind the “Advanced” button in the container section.

(Optional) In the “Advanced” section for both containers, add CPU and memory limits to prevent the pod from consuming too much of the node’s available resources (for example, a CPU limit of 1 and a memory limit of 1Gi).

(Optional) Add raw YAML for the pod to define additional pod spec. In our case, we define a nodeSelector to target a set of nodes.
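
A minimal sketch of that raw YAML, assuming the worker nodes carry a hypothetical nodegroup=operations label:

spec:
  nodeSelector:
    nodegroup: operations # hypothetical label; match how your nodes are labeled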

Next, scroll down and change the “Workspace Volume” option to “Empty Dir Workspace Volume”. The default option creates an EBS volume in the background, which is not only unnecessary but also slows down pod launch times. An empty-dir volume (https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) provides scratch space for the build and uses the underlying EKS worker node’s disk.

Click “Save” at the bottom of the page. You’re done!

Using the Jenkins pod – the jnlp and dind containers

Pipeline Groovy

In this example, the ‘infra-pod’ agent is used at the pipeline level. The first stage uses the jnlp container (which, in our case, points to the infra-node image). The second stage specifically calls for the ‘dind’ container within the ‘infra-pod’ and builds a Docker image within it.

pipeline {
    agent { label 'infra-pod' }
    // Pipeline options
    options {
        ansiColor('xterm')
    }

    stages {
        stage('This stage uses the jnlp container') {
            steps {
                echo "Test"
                withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "aws-nonprod"]]) {
                    sh '''
                        aws sts get-caller-identity
                    '''
                }
            }
        }
        stage('This stage uses the dind container') {
            steps {
                container('dind') {
                    dir("containers") {
                        checkout scm: [$class: 'GitSCM', userRemoteConfigs: [[url: "git@github.com:Example/containers", credentialsId: 'github-jenkins-ssh-key']], branches: [[name: 'master']]]
                        sh '''
                            docker build -t example-container .
                        '''
                    }
                }
            }
        }
    }
}

Sharing files between stages

$WORKSPACE is a variable available in your builds. It points to the workspace directory for the current build and is shared across stages. This is the mount we configured when we set up the pod in the previous steps.

This shared mount allows another subsequent stage to access files, artifacts, and directories created in a previous stage.

For example, you could create a Dockerfile in one stage and build a docker image in a subsequent stage using the ‘dind’ container.
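
A minimal sketch of that pattern (the Dockerfile contents and image tag are made up for illustration):

pipeline {
    agent { label 'infra-pod' }
    stages {
        stage('Create Dockerfile') {
            steps {
                // Runs in the jnlp container; writes into the shared $WORKSPACE
                writeFile file: 'Dockerfile', text: 'FROM alpine:3.12\nCMD ["echo", "hello"]'
            }
        }
        stage('Build image') {
            steps {
                container('dind') {
                    // The same workspace is mounted here, so the Dockerfile is visible
                    sh 'docker build -t example-image .' // example-image is a placeholder tag
                }
            }
        }
    }
}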
