[How To] Java Heap Dump and Stack Trace

Here’s how you can quickly get a Java heap dump and stack trace on Amazon Linux with Java 8. Your mileage with these commands may vary.

First, find the process ID (pid) and user for the java process that you want heap/stack dumps of.  I like running:

ps aux | grep java

Note the user under which the Java process is running and switch to it. In my case, it was “webapp”.

$ sudo su
# su webapp

Now, run this for the heap dump:

jmap -dump:file=/home/webapp/heapdump.bin <pid>

And this for the stack trace:

jstack -l <pid> > /home/webapp/strace.log
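If you capture dumps often, the pid-discovery step can be scripted. Here is a minimal Python sketch (illustrative only; the sample ps output and the "webapp" user are made up, and the column positions assume the standard ps aux layout):

```python
# Sketch: parse ps aux output to find java processes.
# The sample below is fabricated for illustration.

def find_java_pids(ps_output):
    """Return (user, pid) pairs for lines whose command column mentions java."""
    hits = []
    for line in ps_output.splitlines()[1:]:          # skip the header row
        parts = line.split()
        if len(parts) > 10 and "java" in parts[10]:  # column 11 is the command
            hits.append((parts[0], int(parts[1])))
    return hits

sample = """USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
webapp 4242 1.2 30.1 512m 300m ? Sl 10:00 1:23 java -jar app.jar
root 1 0.0 0.1 10m 4m ? Ss 09:00 0:01 /sbin/init"""

print(find_java_pids(sample))  # → [('webapp', 4242)]
```

Once you have the pid and user, the jmap/jstack commands above do the actual work.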

Optional (to scp these files out): Copy these files over to your regular user’s home directory. For me, this was “ec2-user”. Exit to root user first.

# exit
# cp /home/webapp/heapdump.bin /home/webapp/strace.log /home/ec2-user
# chown ec2-user:ec2-user /home/ec2-user/heapdump.bin /home/ec2-user/strace.log

Now use your favorite scp tool to scp the files out.

[Extra Tip] Here’s how you’d get more information (such as HeapSize) about the specific JDK running on the instance:

# java -XX:+PrintFlagsFinal -version | grep HeapSize
    uintx ErgoHeapSizeLimit                         = 0                                   {product}
    uintx HeapSizePerGCThread                       = 87241520                            {product}
    uintx InitialHeapSize                          := 132120576                           {product}
    uintx LargePageHeapSizeThreshold                = 134217728                           {product}
    uintx MaxHeapSize                              := 2095054848                          {product}
    openjdk version "1.8.0_91"
    OpenJDK Runtime Environment (build 1.8.0_91-b14)
    OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
Posted in Linux, Tech.

S3 access from VPC or Corporate IP

If you’ve been wanting to allow HTTP access to your favorite S3 bucket from your VPC and/or from your corporate LAN’s public IP, this post should make your job easier. At the end of it, you’ll be able to use your S3 bucket as an artifact server, serving files via HTTP.

To begin, we’ll need to set up a VPC Endpoint. Head to VPC->Endpoints->Create. Select your VPC and choose S3 from the next dropdown.

Copy the “vpce-xxxxxxx” resource-id that is returned after create.

Next, head to your S3 bucket and, in the properties sidebar, hit "Permissions", click "Add Bucket Policy", and enter something like this:

{
    "Version": "2012-10-17",
    "Id": "Policy1478550966902",
    "Statement": [
        {
            "Sid": "Stmt1478708488423",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucketname",
                "arn:aws:s3:::yourbucketname/*"
            ],
            "Condition": {
                "StringEqualsIgnoreCase": {
                    "aws:sourceVpce": "vpce-xxxxxx"
                }
            }
        }
    ]
}

In my case, I wanted GetObject, ListBucket, and PutObject access only from our VPC so that’s what I put in there. Your use case may vary.

Remember, I also want access to the bucket from our corporate IP address. If you simply added the IP address to the existing Condition block, the two conditions would be ANDed together: access would be granted only if the IP address matches AND the traffic is coming from the VPC. What we want is access if the IP address matches OR if we're hitting the bucket from the VPC. So this is NOT going to work:

            "Condition": {
                "StringEqualsIgnoreCase": {
                    "aws:sourceVpce": "vpce-xxxxxx"
                },
                "IpAddress": {
                    "aws:SourceIp": "X.X.X.X/16"
                }
            }

Here’s what you’d use instead, with two separate statements:

{
    "Version": "2012-10-17",
    "Id": "Policy1478550966902",
    "Statement": [
        {
            "Sid": "Stmt1478550959905",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucketname",
                "arn:aws:s3:::yourbucketname/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "X.X.X.X/16"
                }
            }
        },
        {
            "Sid": "Stmt1478708488423",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucketname",
                "arn:aws:s3:::yourbucketname/*"
            ],
            "Condition": {
                "StringEqualsIgnoreCase": {
                    "aws:sourceVpce": "vpce-xxxxxx"
                }
            }
        }
    ]
}
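Before saving, you can sanity-check that the OR behavior comes from having two separate statements. A quick illustrative Python check (a trimmed copy of the policy above, placeholders and all):

```python
import json

# Trimmed copy of the two-statement policy above; bucket name, CIDR,
# and the vpce id are placeholders, as in the post.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::yourbucketname", "arn:aws:s3:::yourbucketname/*"],
      "Condition": {"IpAddress": {"aws:SourceIp": "X.X.X.X/16"}}
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::yourbucketname", "arn:aws:s3:::yourbucketname/*"],
      "Condition": {"StringEqualsIgnoreCase": {"aws:sourceVpce": "vpce-xxxxxx"}}
    }
  ]
}
""")

statements = policy["Statement"]
# Two statements => their conditions are ORed. Two operators inside one
# statement's Condition block would be ANDed instead.
assert len(statements) == 2
assert all(len(s["Condition"]) == 1 for s in statements)
print("policy shape OK")
```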

And, done!

Posted in Amazon Web Services, Tech.

Programmatically Trigger Travis Builds on GitHub

Our code for this specific project (which is on GitHub) is designed to handle infrastructure creation and rebuilds. For this purpose, we wanted to be able to trigger our Travis builds for specific branches via simple Bash scripts, based on meeting certain conditions.

Here we look at logging into Travis using a GitHub token and then using this login to generate a Travis authorization token, which allows us to run calls against Travis’ REST API.

First, we begin by creating a new GitHub token; this is a one-time activity. At the end of it, we’ll have a token with the bare-minimum scopes needed to trigger Travis builds.

To do this, click on your profile on the GitHub page and click "Settings". Under the "Developer Settings" heading, head to "Personal access tokens", hit "Generate new token", and select the scopes you need.

Check the “public_repo” box if yours is a public Git repository. Hit “Generate Token”.

You’ll now be able to copy the token. I’ve snipped mine, but you’ll see a long 82-character token string.

No more GUI. From here on, we can use this GitHub token to get our work done entirely via the command line.

In your script, you’ll log in to Travis this way (make sure the machine running the script has the Travis CLI installed):

/usr/local/bin/travis login --pro --github-token <snipped> --skip-completion-check

The "--skip-completion-check" flag is required to ensure non-interactive CLI access when using Travis.

Next, get a Travis authorization token by doing this:

token=$(/usr/local/bin/travis token --pro --skip-completion-check | cut -f5)

Now, set the following string as the body for the upcoming API call. $branch will be the branch of your project on which you want to trigger the build:

body='{ "request": { "branch":"'${branch}'" }}'

Next, go ahead and make the actual API call to trigger a Travis build using the last known successful build:

curl -so /dev/null -X POST -w "%{http_code}" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Travis-API-Version: 3" \
-H "Authorization: token $token" \
-d "$body" https://api.travis-ci.com/repo/yourproject/requests

The -so /dev/null -X POST -w "%{http_code}" portion is specific to our case, where we only want the HTTP response code and nothing else. Our script then makes further decisions based on the output of the curl call.
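For scripts that need more than the HTTP code, the same request can be sketched in Python. Nothing is sent in this illustration; "owner/repo" and the token value are placeholders (note that the Travis v3 API expects the repository slug URL-encoded, as owner%2Frepo):

```python
import json
import urllib.parse

def build_trigger_request(slug, branch, token):
    """Build the URL, headers, and body for a Travis v3 build-trigger call."""
    # The v3 API wants the repository slug URL-encoded (owner%2Frepo).
    url = ("https://api.travis-ci.com/repo/"
           + urllib.parse.quote(slug, safe="")
           + "/requests")
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Travis-API-Version": "3",
        "Authorization": "token " + token,
    }
    body = json.dumps({"request": {"branch": branch}})
    return url, headers, body

url, headers, body = build_trigger_request("owner/repo", "master", "snipped")
print(url)  # → https://api.travis-ci.com/repo/owner%2Frepo/requests
```

You could hand these pieces to urllib.request or any HTTP client and branch on the response code, just as the curl version does.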

Well, that’s it! If you have any questions, please drop them in as comments.

Posted in DevOps, Tech.

Automating IAM Instance Profile with Ansible

My previous post talked about creating AWS IAM Instance Profiles so you don’t have to save keys on the instances. In this post, we’ll look at using Ansible to launch EC2 instances with IAM Instance Profiles attached to them (you don’t want to do this manually forever, do you?).

(At the time of writing, you cannot attach an IAM Instance Profile to an existing instance, unfortunately.)

Assuming that you already have an IAM user dedicated for use by Ansible, let’s head to the next step. In order for this IAM user to attach IAM Instance Profiles to EC2 instances, it will have to be set up with "PassRole" privileges.

Head to the AWS Console and navigate to Identity and Access Management (IAM). Hit the “Users” tab and click on the user, as discussed above.

Then, hit the “Permissions” tab and scroll down to “Inline Policies”, and hit “click here”.

On the "Set Permissions" page, select "Policy Generator" and hit Select.

In the AWS Service dropdown, select AWS IAM, and then select "PassRole" from the Actions dropdown. Paste in the ARN of the role created in the previous blog post.

Click “Add Statement” and head to the next screen for a Summary.

Hit Next to arrive at the final Review screen.

Hit Apply Policy to finish.
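For reference, the inline policy that the generator produces looks roughly like this (the account id and role name below are placeholders; substitute the Role ARN from the previous post):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/your-role-name"
        }
    ]
}
```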

Your Ansible playbooks can now create EC2 instances with IAM Instance Profiles attached to them. You’re done!
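As a sketch, an Ansible task using the classic ec2 module’s instance_profile_name parameter might look like this (the key pair, AMI id, region, and profile name are all placeholders; check the module documentation for your Ansible version):

```yaml
- name: Launch an instance with the IAM Instance Profile attached
  ec2:
    key_name: my-keypair
    instance_type: t2.micro
    image: ami-xxxxxxxx
    region: us-east-1
    instance_profile_name: my-instance-profile   # profile created in the previous post
    wait: yes
```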

Posted in Amazon Web Services, Tech.

Simple Howto: AWS IAM Instance Profiles

For those of you looking to set up applications to run on EC2 instances without having to put credentials on the machines, there is an option. AWS has a great feature for exactly this purpose, and it’s called IAM Instance Profiles.

The IAM Instance Profile feature allows EC2 instances to call other AWS services on your behalf, with no need to set up keys on the instance. AWS takes care of securing the keys within instance metadata, and also rotates them regularly. More info here.
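Under the hood, the SDKs on the instance fetch those short-lived keys from the instance metadata service. A small illustrative sketch (the role name is a placeholder, and the 169.254.169.254 endpoint only resolves from within an EC2 instance):

```python
# Where the SDKs find temporary credentials on an EC2 instance.
# "my-role-name" is a placeholder; nothing is fetched here.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def credentials_url(role_name):
    """Build the metadata URL that returns temporary keys for a role."""
    return METADATA_BASE + role_name

print(credentials_url("my-role-name"))
```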

In my case, I was trying to set up Logstash to run on EC2 instances, talking to S3 buckets, without having to hard-code keys or upload them to the instance itself. This AWS feature saved the day.

Let’s get started with setting up an IAM Instance Profile. To begin, log in to the AWS Console and head to Identity and Access Management (IAM). Once there, click "Create Role" and enter a name for the role.

On the next page, select Role Type as “Amazon EC2”. This will create the necessary IAM Instance Profile in the background, with the same name as the role.

Next, attach a policy to this role depending on your use case. In my case I wanted S3 access from my instances, so I selected AmazonS3FullAccess.

Click Finish. On the review page, grab the Role ARN for later use. We’re done!

Now, to launch instances using this IAM Instance Profile, simply select the IAM Role from the dropdown on the “Configure Instance Details” page.

You’re all set with IAM Instance Profiles!

Posted in Amazon Web Services, Tech.

TravisCI: Export From Bash Scripts

Let’s say your ".travis.yml" file is getting cluttered with too many shell/bash commands and you’ve decided to move them out to a separate shell script. Now you want to get data back out of this script, but you don’t know how. There are two ways to approach this.

If your bash script is only expected to return one value, then you may want to call it from the TravisCI yaml file this way:

script:
- export ENDPOINT=$(bash discover_service.sh);

And you’ll want to add this to the bash script you’re calling. Note that the command substitution above captures the script’s stdout, so echo the value rather than exit with it (exit codes are numeric statuses, not return values):

if [ "$TRAVIS_BRANCH" == 'master' ]; then
  echo "$ENDPOINT"
else
  echo "$ENDPOINT"
fi

However, if your bash script sets multiple variables or is expected to generate a lot of data, you could “source” it from the TravisCI yaml file this way:

script:
- . ./deploy.sh

I hope that was helpful!

Posted in DevOps, Tech.

Learn Scrum in Less Than An Hour

Well, if you’re part of an organization that does not do Agile (Scrum or otherwise), OR one that’s transforming towards it, OR one that already does Agile but you’re new and don’t know what it is, then look no further than this book right here: http://amzn.to/1tiovv9

Scrum: a Breathtakingly Brief and Agile Introduction

This is an amazingly short, crisp, and distilled book that doesn’t waste any time beating around the bush or thanking family and friends for their help and patience. It quickly gets to the various features of the Scrum form of Agile, including roles, Scrum artifacts, sprints (and what they mean), and, well, that’s almost it! By the end of the read you’ll have a very clear foundational idea of what Scrum is. You can then explore each of the topics on your own (Google?) for further information or clarification.

This is also one of those books that you could give out to your team if you’re part of an organization about to embark on an Agile Scrum journey. It’s cheap, apt, and concise, and will get the team going in a very short time.

I’m a huge fan of short books that can be re-read regularly – I prefer this form over long books where you forget the context of the first chapter by the time you get to the last one. I chose the Kindle version of the book which is under a dollar and is cheaper than a can of soda.

Posted in Reviews, Tech.