Jenkins agents on AWS EKS

The steps outlined here will help you set up Jenkins agents on AWS EKS clusters. The agents are ephemeral: every build gets a fresh agent, which is thrown away when the build completes.

The big advantage of Kubernetes is the pod, which runs multiple containers as a single unit. We make use of that here by bundling a Docker-in-Docker (dind) container alongside the main JNLP container.

– The Jenkins master must have network connectivity to the EKS cluster: place both in the same VPC, or connect their VPCs via VPC peering or a Transit Gateway.

Jenkins Master
– The kubernetes plugin must be installed. The minimum required version is 1.23.2, which in turn requires Jenkins 2.190.1 or later.

– awscli version 1.16.300 or later is required. Earlier versions do not support $ aws eks get-token, which the plugin depends on.

– (This is no longer required for Kubernetes plugin versions 1.24.1 and up.)
The Java option “-Dorg.csanchez.jenkins.plugins.kubernetes.clients.cacheExpiration=60”
must be set in /etc/sysconfig/jenkins. This forces the plugin to refresh the EKS token every minute; the default cacheExpiration of 24 hours is far longer than the 15-minute validity of EKS tokens.
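In /etc/sysconfig/jenkins, the option is appended to the existing JENKINS_JAVA_OPTIONS line; a sketch (the other options shown are illustrative, yours will differ):

```
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Dorg.csanchez.jenkins.plugins.kubernetes.clients.cacheExpiration=60"
```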

Jenkins ELB
– The security group for the Jenkins ELB needs to allow TCP:$JNLP_PORT and TCP:443 from the EKS worker-node security group. This allows the launched pod to report back to Jenkins as “Ready”, among other communication. The JNLP port is defined when you set up Jenkins and is found under Manage Jenkins.

– The EKS cluster security group needs to allow TCP:443 from Jenkins Master security group. This allows Jenkins Master to communicate with EKS to launch pods.

– Modify the aws-auth ConfigMap for the EKS cluster to allow the Jenkins master's IAM role to start/stop pods in the cluster. The IAM role should show up like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<snip-acc-num>:role/rol-ops-tool-jenkins-master
      username: jenkins
      groups:
        - fake-group-because-module-wants-it
  mapUsers: |
  mapAccounts: |

– Add a Role and a RoleBinding to the EKS cluster with policies that grant Jenkins permissions to spin up/down pods in the ‘operations’ namespace, among a few other actions. If you use helm, this is how you could do it:

{{- if .Values.jenkins_agent.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-agent-role-binding
  namespace: operations
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-agent-access
subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: jenkins
{{- end }}

{{- if .Values.jenkins_agent.enabled -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: operations
  name: jenkins-agent-access
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
{{- end }}
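Both templates are gated on a single values flag; the corresponding values.yaml fragment would look like this (key name taken from the templates above):

```yaml
jenkins_agent:
  enabled: true
```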

– Generate a kubeconfig file for the cluster of your choice. Run this on your computer after logging into the account holding the EKS cluster:

$ aws eks list-clusters --region us-east-1 --output text
CLUSTERS my-awesome-cluster-7MWC

– Then fetch the kubeconfig for this cluster:

$ aws eks update-kubeconfig --name my-awesome-cluster-7MWC --kubeconfig config_file
Added new context arn:aws:eks:us-east-1:<snip-acc-num>:cluster/my-awesome-cluster-7MWC to <snip>

Remember the location of this file on your machine.

Jenkins UI
These changes must be made manually in the UI. Our Jenkins master is built on EFS, so it remains stateless and stores everything on the EFS volume. The volume is backed up regularly, and we also installed the Jenkins Config History plugin, which ensures we're never far from a known-good config.


Create a credential in Jenkins of Global scope and kind ‘Secret file’. Upload the kubeconfig from the previous step.


Browse to Jenkins > Manage Jenkins > Configure System

Scroll to the bottom and setup a new Cloud of type Kubernetes.

Set up the EKS cluster, choosing the kubeconfig credential created in the previous step. The Kubernetes URL and certificate are fetched automatically from the kubeconfig credential and need not be set here.

Click the “Test Connection” button and ensure you see the message “Connection test successful” before proceeding.

Enter the URL of your Jenkins instance for “Jenkins URL”.

“Jenkins tunnel” is your Jenkins domain followed by the JNLP port. This port is defined under Jenkins > Manage Jenkins > Configure Security.

Next, click “Pod Templates...” and then “Add Pod Template”. The label defined in the “Labels” text-box is how you reference this agent from your builds/pipelines. Labels are separated by spaces.

In the Container section, add a container that contains the jnlp-slave agent. This is the container that runs the JNLP jar and connects the agent to the master. Our image is hosted on ECR and is based on a custom Dockerfile.

You can add environment variables for this container. In our case, we set one for defining the AWS region.

Optionally, add a second container. In our case, we add a “Docker-in-Docker” image so we may build docker images in our pipelines. Note that docker-in-docker requires the container to run in “Privileged” mode; this checkbox is behind the “Advanced” button in the container section.

(Optional) In the “Advanced” section of each container, add CPU and memory limits to prevent the pod from consuming too much of the node's available resources.
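In pod-spec terms, those UI fields translate to a per-container resources block roughly like this (the values are illustrative, not recommendations):

```yaml
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
```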

(Optional) Add raw YAML for the pod to define additional pod spec. In our case, we define a nodeSelector to pin agent pods to a specific set of nodes.
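A sketch of such a raw-YAML addition (the nodeSelector label key and value are assumptions; use whatever labels your worker nodes carry):

```yaml
spec:
  nodeSelector:
    role: jenkins-agents
```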

Next, scroll down and change the “Workspace Volume” option to “Empty Dir Workspace Volume”. The default option creates an EBS volume in the background, which is not only unnecessary but also slows down pod launches. An empty-dir volume provides scratch space for the build on the underlying EKS worker node's disk.

Click “Save” at the bottom of the page. You’re done!

Using the Jenkins pod – the jnlp and dind containers

Pipeline Groovy

In this example, the ‘infra-pod’ agent is called at the pipeline level. The first stage uses the jnlp container (which, in our case, points to the infra-node image). The second stage specifically calls for the ‘dind’ container within the ‘infra-pod’ and builds a docker container within it.

pipeline {
    agent { label 'infra-pod' }
    // Pipeline options
    options {
        disableConcurrentBuilds() // example option
    }
    stages {
        stage('This stage uses the jnlp container') {
            steps {
                echo "Test"
                withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "aws-nonprod"]]) {
                    sh '''
                        aws sts get-caller-identity
                    '''
                }
            }
        }
        stage('This stage uses the dind container') {
            steps {
                container('dind') {
                    dir("containers") {
                        checkout scm: [$class: 'GitSCM', userRemoteConfigs: [[url: "", credentialsId: 'github-jenkins-ssh-key']], branches: [[name: 'master']]]
                        sh '''
                            docker build -t example-container .
                        '''
                    }
                }
            }
        }
    }
}

Sharing files between stages

$WORKSPACE is a variable available in your builds. It points to the workspace directory for the current build and is shared across stages; it is the empty-dir mount we configured when we set up the pod in the previous steps.

This shared mount allows another subsequent stage to access files, artifacts, and directories created in a previous stage.

For example, you could create a Dockerfile in one stage and build a docker image in a subsequent stage using the ‘dind’ container.
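A sketch of that pattern (stage names, paths, and the image tag are illustrative):

```groovy
stage('Generate Dockerfile') {
    steps {
        // Written into $WORKSPACE, which persists across stages of this build
        writeFile file: 'containers/example/Dockerfile',
                  text: 'FROM alpine:3.18\nCMD ["echo", "hello"]\n'
    }
}
stage('Build the image') {
    steps {
        container('dind') {
            dir('containers/example') {
                sh 'docker build -t example-image .'
            }
        }
    }
}
```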


Jenkins Declarative Pipeline: Run a stage without holding up an agent

If you have a Jenkins declarative pipeline, you're generally bound to have more than one stage, with steps within each of them. The usual way of declaring a node/agent is an agent directive encompassing the stages{} directive, like so:

pipeline {
    agent { label 'mynode' }
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}

However, occasionally you may wish to run a stage or two that doesn’t require an agent. A simple example would be a timeout or a sleep stage that is waiting for a previous stage to finish. If your timeout lasts more than a few minutes, you’d want to release the agent so another build may use it. Holding up an agent is a crime in Jenkins world.

(Although this is shown in the document link I shared above, it isn’t tagged for this usecase as such.)

Here’s a simple way to go about doing that:

pipeline {
    agent none

    stages {

        stage('Stage Do Something') {
            agent { label 'mynode' }
            steps {
                echo "Something"
            }
        }

        stage('Stage Sleep') {
            agent none
            steps {
                sleep time: 3, unit: 'HOURS'
            }
        }

        stage('Stage Do More Of Something') {
            agent { label 'mynode' }
            steps {
                echo "More Of Something"
            }
        }
    }
}

‘Stage Sleep’ here is declared with agent none, which releases the agent until one is requested again in the next stage.

Hope that helps someone looking for a quick answer.


AWS: Prevent VPC Modifications

If you have a busy AWS environment accessed by multiple developers, sooner or later someone will inadvertently modify some aspect of your core infrastructure.

In our case, we have our VPC-related infrastructure deployed using Cloudformation and maintained via CF stack updates. When devs modified VPC-related resources by circumventing CF stack updates, they rendered our infrastructure out-of-date and un-update-able by CF. Tracking these changes via CloudTrail and rolling them back manually was starting to cost us time and frustration.

Note: Our devs use SSO to login to AWS. Upon login, they assume cross-account roles attached with policies that determine what they can or cannot access.

Assuming that you have your developers sign-in in a similar fashion, below is a policy you can attach to that role to prevent them from modifying VPC-related resources.

Notice the section at the end of this policy that denies deletion of the policy from the role? That is key to preventing devs from simply removing this policy from the role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "<snip: VPC-related ec2 actions>"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": "iam:DeleteRolePolicy",
            "Resource": "arn:aws:iam::999999999999:role/Dev-Role-MBHPPM0DPW90"
        }
    ]
}

Cloudformation: Optional Resource Parameters

When creating Cloudformation templates, you occasionally come across situations where you want to omit parameters from a Resource under certain conditions. As an example, for an ECS Service resource, the parameters ‘LoadBalancers’ and ‘Role’ are required only if you want your service fronted by a load balancer. You probably don't want to maintain two templates for the two use cases: 1. service with LB registration, and 2. service without LB registration.

In such cases, you can make use of Cloudformation's conditions and pseudo parameters. This use-case is covered in the AWS documentation, but it can be hard to land on that page via a Google search for terms such as ‘Cloudformation optional parameter turn off’.

Here’s a quick example to achieve this in Yaml:

  EcsService:  # logical resource name
    Type: 'AWS::ECS::Service'
    DependsOn:
      - LogGroup
    Properties:
      Role: !If
        - CreateTargetGroup
        - !ImportValue
          'Fn::Sub': '${ClusterStackName}-EcsServiceRole'
        - !Ref "AWS::NoValue"
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: !Ref AppDesiredCount
      LoadBalancers: !If
        - CreateTargetGroup
        - - TargetGroupArn: !Ref TargetGroup
            ContainerPort: !Ref AppContainerPort
            ContainerName: !Ref AppName
        - !Ref "AWS::NoValue"
      Cluster: !ImportValue
        'Fn::Sub': '${ClusterStackName}-ClusterName'
      PlacementStrategies:
        - Field: 'attribute:ecs.availability-zone'
          Type: spread
        - Field: instanceId
          Type: spread
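For completeness, the CreateTargetGroup condition referenced above has to be declared in the template's Conditions section. A sketch, assuming a RegisterWithLB parameter (the parameter name is illustrative; only CreateTargetGroup appears in the snippet above):

```yaml
Parameters:
  RegisterWithLB:
    Type: String
    AllowedValues: ['true', 'false']
    Default: 'true'

Conditions:
  CreateTargetGroup: !Equals [!Ref RegisterWithLB, 'true']
```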

AWS: Deleting Old Access-Key/Secret-Key Pairs

If you have a busy AWS environment with multiple developers, it can be useful for security to automatically clean up IAM user access keys every so often.

Here’s a simple Python script that can be plugged into an AWS Lambda function to cleanup Access-Key/Secret-Key Pairs older than 90 days.

The script has a whitelist capability if you want to avoid cleaning up IAM users from a certain IAM group.

The script also removes password profiles from IAM users in case your company policy is to use SSO and prevent users from creating their own AWS Console logins.

import boto3, time

# Assumption: the whitelist group name was not shown in the original post;
# change this to the IAM group whose users should never be cleaned up.
whitelist_group_name = 'automation-users'

def cleanup(user, iam_client):
    response = iam_client.list_access_keys(UserName=user)
    for key in response['AccessKeyMetadata']:
        create_date = time.mktime(key['CreateDate'].timetuple())
        now = time.time()
        age = (now - create_date) // 86400
        if age > 90:
            print("AK [", key['AccessKeyId'], "] for user [", user, "] is older than 90 days. Deleting...")
            iam_client.delete_access_key(UserName=user, AccessKeyId=key['AccessKeyId'])

    # Check if the user has a password (console login) profile
    try:
        iam_client.get_login_profile(UserName=user)
    except Exception as e:
        if 'NoSuchEntity' not in str(e):
            raise
        return
    print("User [", user, "] has password profile. Deleting..")
    iam_client.delete_login_profile(UserName=user)

def handler(event, context):
    iam_client = boto3.client('iam')

    response = iam_client.list_groups()
    group_list = [item['GroupName'] for item in response['Groups']]

    if whitelist_group_name not in group_list:
        print("Automation Users Group Doesn't Exist! Script Exiting.")
        return

    response = iam_client.list_users()
    print("----------------------------------------------")
    for item in response['Users']:
        user = item['UserName']
        is_automation_user = False
        response = iam_client.list_groups_for_user(UserName=user)
        for group in response['Groups']:
            if group['GroupName'] == whitelist_group_name:
                print("User [", user, "] is an automation-user. Won't be touched.")
                is_automation_user = True
        if not is_automation_user:
            print("----------------------------------------------")
            print("User [", user, "] is a regular user. Checking credentials..")
            cleanup(user, iam_client)
            print("Cleanup on user [", user, "] is now complete.")
        print("----------------------------------------------")
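The heart of the script is the epoch-arithmetic age check; pulled out on its own (a sketch with the threshold parameterized), it looks like this:

```python
import time
from datetime import datetime, timedelta

def key_age_days(create_date, now=None):
    """Whole days elapsed since an access key's CreateDate (a datetime)."""
    created = time.mktime(create_date.timetuple())
    now = now if now is not None else time.time()
    return (now - created) // 86400

def is_expired(create_date, max_age_days=90, now=None):
    """True if the key is older than the maximum allowed age."""
    return key_age_days(create_date, now=now) > max_age_days

# A key created 100 days ago is expired; one created yesterday is not.
print(is_expired(datetime.now() - timedelta(days=100)))  # True
print(is_expired(datetime.now() - timedelta(days=1)))    # False
```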

Real Backups On The Cheap

So you have your data in the “cloud” – in Dropbox or GDrive folders – and you believe you've done a decent job of safeguarding your precious files, when really you've only protected them against total computer or hard-drive loss. I used to be this guy, until one day I discovered a few of my precious files had gone missing from Dropbox. I searched everywhere, only to find they were truly lost. Was it an accidental delete, or a bad program? I'll never know.

While free Dropbox comes with a 30-day restore window, it did not have my precious files, and that made me realize we all have far too much stored in our cloud folders and no real way to keep track of what was deleted, added, or modified, and when.

Although this incident taught me a lesson and made me store offline-copies of my data on another hard-drive, I really had to go through another painful loss before I seriously started looking for an alternative. Several albums belonging to my really precious music collection (synced across my computers using Resilio, Play Music, and duct-tape) got corrupt and/or missing at some point and were nicely synced across all devices. Again, the sheer volume of data (10,000 music files) ensured I’d find out much much later after too much damage had been done.

So now, I seriously started looking for a solution that would endure the tests of time, stupidity, bad software, and a drunk-me. A solution that would ensure extreme durability while still maintaining relatively quick access when needed.

Enter: AWS S3

Now, I'm sure you know all about S3 and how inexpensive it can be ($0.023 per GB-month for the standard class) to store many GBs or TBs of data. But if you're not an enterprise user, and you're like me and like simple and cheap, you're probably considering S3's infrequent-access class at $0.0125 per GB-month. At around 60GB of potential data to store in the cloud, that is just $9 USD per year vs. roughly $17 for the standard class. But wait, there's something even cheaper.

Enter Glacier – the cloud-tape solution from Amazon. Glacier is as durable as the S3 standard class (99.999999999% durability) and comes at a dirt-cheap $0.004 per GB-month. That really is 0.4 cents. For 60GB, my cost for an entire year is $2.88. I remember paying $100 for a 4.75GB HDD way back. Today, I pay $4.80 for 100GB of ultra-durable, multi-AZ-replicated storage on Glacier. Times have changed, indeed!
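The arithmetic behind those numbers, for reference:

```python
def yearly_storage_cost(gb, price_per_gb_month):
    """Annual cost of storing `gb` gigabytes at a per-GB-month price."""
    return gb * price_per_gb_month * 12

# 60GB in Glacier at $0.004/GB-month
print(round(yearly_storage_cost(60, 0.004), 2))   # 2.88
# The same 60GB in S3 infrequent-access at $0.0125/GB-month
print(round(yearly_storage_cost(60, 0.0125), 2))  # 9.0
```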

However, before you close this tab and proceed to back up your files into Glacier using a Glacier tool (such as Freeze or FastGlacier), know that once you upload to a Glacier vault, you must request access from AWS before you can get at your data again. That includes merely listing the contents, so uploading to raw Glacier is not recommended for this use-case.

Instead, I recommend uploading your data into an S3 bucket with a lifecycle management policy set to move data into Glacier 0-days (zero-days) after upload. This ensures that your objects in S3 are moved to Glacier end-of-zeroth-day so you’re not billed for even a single day of S3 storage. AWS moves your data into what it calls the Glacier-class of S3 storage.
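A lifecycle configuration implementing the zero-day transition could look like this (the rule ID is illustrative); save it as lifecycle.json and apply it with aws s3api put-bucket-lifecycle-configuration --bucket <your-bucket> --lifecycle-configuration file://lifecycle.json:

```json
{
  "Rules": [
    {
      "ID": "everything-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 0, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```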

This approach ensures that you always have the ability to list your glacier contents using S3 APIs/AWS console. This also lets you use cheap or free S3 tools (such as CyberDuck or S3browser) to upload your data into Glacier vs. having to spend $30+ on Glacier-specific tools such as Freeze.

I hope this was helpful. So far, I’ve managed to upload 20 of 60GB of my data and have been pleasantly surprised by how easy it has been and how much stress it takes off your mind regarding your backups.

A future blog post will detail out the steps required to download your S3-Glacier backups to your computer in an emergency.


AWS S3 Bucket Policy to Only Allow Encrypted Object Uploads

Amazon S3 supports two types of server-side encryption (SSE) for securing data at rest — AES256 and AWS KMS. AES256 is termed S3-managed encryption keys [SSE-S3], whereas KMS is termed, well, SSE-KMS, wherein the customer manages their encryption keys. A default KMS key is created for you the first time you use it with a service, such as S3.

For more information on SSE-S3, check out this link.
For more information on SSE-KMS, check out this link.

There is growing support among tools (such as Logstash) for AES256-based SSE, so it may make sense to choose this encryption algorithm for your data.

If you want your users (whether IAM users, IAM roles, or console users) to never upload unencrypted data (for, well, security reasons), it makes sense to have a bucket policy that explicitly denies uploads of unencrypted objects.

This example bucket policy was derived from the AWS documentation. It allows both SSE-S3 and SSE-KMS encrypted objects while denying everything else.

{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::your-test-storage-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": [
                        "AES256",
                        "aws:kms"
                    ]
                }
            }
        },
        {
            "Sid": "DenyMissingEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::your-test-storage-bucket/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}

And, that is it. As simple as that!

If you're using the AWS CLI to upload objects, here's how to use the various forms of encryption (or none):

AES256 (SSE-S3):

#aws s3 cp --sse AES256 file.txt s3://test-dh01-storage/file.txt

AWS/KMS, using the default aws/s3 KMS encryption key:

#aws s3 cp --sse aws:kms file.txt s3://test-dh01-storage/file.txt

Unencrypted (denied by the policy above):

#aws s3 cp file.txt s3://test-dh01-storage/file.txt



DC/OS Exhibitor on S3 – Issues & Workarounds

If you want basic resiliency for your DC/OS master nodes when hosting them on AWS, you'll want Exhibitor to store its data in AWS S3. To do so, grant your master nodes an IAM role with access to a specific S3 bucket. Then, use this genconf/config.yaml config to install (or re-install) your DC/OS cluster:

exhibitor_storage_backend: aws_s3
aws_region: us-east-1
exhibitor_explicit_keys: false
s3_bucket: <bucket-name>
s3_prefix: my-dcos-exhibitor-file

Note: You do NOT need to have your master nodes use a load balancer (master_discovery: master_http_loadbalancer ) for discovery even if you decide to use S3 for exhibitor backend. Yes, they’re often used together, but it’s not mandatory to use them together.

Back to the installation: follow the standard instructions to continue. Once everything is deployed, head to your S3 bucket, search for, and open up my-dcos-exhibitor-file. Two things can happen here:

  1. You find the file doesn't exist. In this case, take a look at your genconf/config.yaml file and count the number of master nodes you listed there.

    If you have fewer than 3 master nodes in your list, Exhibitor defaults to using a “static” backend and won't use S3 to store its config. So, use 3 or more masters and reinstall.

  2. The second issue that can happen: only one of the master nodes succeeds in writing to my-dcos-exhibitor-file in S3, and you're left with a broken cluster. Your dcos services ( #systemctl | grep dcos ) will all fail, and your postflight will time out and fail ( #sudo bash --postflight ). You may also see tons of “null-lock-*” files hanging out in your S3 bucket.

    If this is your case, go check out my-dcos-exhibitor-file from S3. If you see something like this, there may be something I can do to help:

    #Auto-generated by Exhibitor
    #Wed Dec 14 19:49:59 UTC 2016

    If you look at the server-list lines in that file, you may notice that only one master is present. What happened to your other master nodes, you ask? Well, I don't have an answer, but there's a workaround.

    Edit those server-list lines to include all your masters, making sure each gets an id (the file uses escaped colons, e.g. 1\:<master-1-ip>,2\:<master-2-ip>,3\:<master-3-ip>).

    Now, give your entire cluster a few minutes while all master nodes stop being asshats and start to discover each other. Once Exhibitor is happy, DC/OS stops being whiny. All your services will be up, and you’ll soon be able to login to your DC/OS UI.

    I hope that was helpful. I wasted an entire day (well I got paid to do it) trying to figure this out.


DC/OS Kill Mesos Framework

You want to kill a Mesos framework but you’ve no idea how? You’ve looked at this page but it still doesn’t make sense? Then here’s what you need to do to kill a framework on Mesos.

In my case, I have Spark running on DC/OS. In this particular situation, Spark was in a limbo state, not running any tasks we threw at it. Our resources were at 100% utilization, yet none of our tasks were really running, although their status said otherwise. After trying in vain with “dcos spark kill”, I tried to kill individual drivers like this:

curl -XPOST https://<your-mesos-endpoint>/master/teardown -d 'frameworkId=6078e555-358c-454f-9359-422f1b6026bd-0002-driver-20161203012921-30627'

But, even though it deleted the drivers from Mesos, the drivers continued to run in the cluster. I figured there had to be a better way to do this. And that’s when I decided to kill the Spark framework instead:

curl -XPOST https://<your-mesos-endpoint>/mesos/master/teardown -d 'frameworkId=6078e555-358c-454f-9359-422f1b6026bd-0002'

And that worked like magic. It killed all the running and pending drivers and also killed Spark.

Once that was done, I removed Spark’s exhibitor entry, and reinstalled Spark.


[How To] Java Heap Dump and Stack Trace

Here’s how you can quickly get java heap dump and stack dump on Amazon Linux with Java 8. Your mileage with these commands may vary.

First, find the process ID (pid) and user of the java process that you want heap/stack dumps of. I like running:

ps -aux | grep java

Note the user under which the Java process is running and switch to it. In my case, it was “webapp”.

# sudo su
# su webapp

Now, run this for the heap dump:

jmap -dump:file=/home/webapp/heapdump.bin <pid>

And this for the stack trace:

jstack -l <pid> > /home/webapp/strace.log

Optional (to scp these files out): Copy these files over to your regular user’s home directory. For me, this was “ec2-user”. Exit to root user first.

# exit
# cp /home/webapp/heapdump.bin /home/webapp/strace.log /home/ec2-user
# chown ec2-user:ec2-user /home/ec2-user/heapdump.bin /home/ec2-user/strace.log

Now use your favorite scp tool to scp the files out.

[Extra Tip] Here’s how you’d get more information (such as HeapSize) about the specific JDK running on the instance:

#java -XX:+PrintFlagsFinal -version | grep HeapSize
    uintx ErgoHeapSizeLimit                         = 0                                   {product}
    uintx HeapSizePerGCThread                       = 87241520                            {product}
    uintx InitialHeapSize                          := 132120576                           {product}
    uintx LargePageHeapSizeThreshold                = 134217728                           {product}
    uintx MaxHeapSize                              := 2095054848                          {product}
    openjdk version "1.8.0_91"
    OpenJDK Runtime Environment (build 1.8.0_91-b14)
    OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)