Real Backups On The Cheap

So you have your data in the “cloud” (Dropbox or GDrive folders) and you believe you’ve done a decent job of safeguarding your precious files, when really you’ve only protected them against total computer or hard-drive loss. I used to be this guy, until one day I discovered a few of my precious files had gone missing from Dropbox. I searched and searched, only to discover I’d truly lost them. Was it an accidental delete, or a misbehaving program? I’ll never know.

While free Dropbox comes with a 30-day restore window, it did not have my precious files. That made me realize that we all have way too much stored in our cloud folders, and there is absolutely no way to keep a handle on what was deleted, added, or modified, and when.

Although this incident taught me a lesson and made me store offline copies of my data on another hard drive, I had to go through another painful loss before I seriously started looking for an alternative. Several albums from my really precious music collection (synced across my computers using Resilio, Play Music, and duct tape) became corrupted or went missing at some point, and the damage was then nicely synced across all devices. Again, the sheer volume of data (10,000 music files) ensured I’d find out much, much later, after too much damage had been done.

So I seriously started looking for a solution that would endure the tests of time, stupidity, bad software, and a drunk me: a solution that would ensure extreme durability while still maintaining relatively quick access when needed.

Enter: AWS S3

Now, I’m sure you know all about S3 and how inexpensive it can be ($0.023 per GB/month for the Standard class) to store multiple GBs or TBs of data. But if you’re not an enterprise user, and you’re like me and like things simple and cheap, you’re probably considering S3’s Infrequent Access class at $0.0125 per GB/month. At around 60GB of potential data to store in the cloud, that is just $9 per year, versus roughly $17 for the Standard class. But wait, there’s something even cheaper.

Enter Glacier, the cloud-tape solution from Amazon. Glacier is cheap, as durable as the S3 Standard class (99.999999999% durability), and comes at a dirt-cheap $0.004 per GB/month. That really is 0.4 cents. For 60GB, my cost for the entire year is $2.88. I remember paying $100 for a 4.75GB HDD way back. Today, I pay $4.80 a year for 100GB of ultra-durable, multi-AZ-replicated storage on Glacier. Times have changed, indeed!
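If you like seeing the arithmetic, here’s a quick sketch of the yearly cost per storage class for my 60GB, using the per-GB-month rates quoted above (rates as of this writing; they may well have changed since):

```shell
# Yearly cost for 60GB at each storage class's per-GB-month rate
python3 - <<'EOF'
gb = 60
for name, rate in [("standard", 0.023), ("infrequent access", 0.0125), ("glacier", 0.004)]:
    print(f"{name}: ${gb * rate * 12:.2f}/yr")
EOF
# prints:
# standard: $16.56/yr
# infrequent access: $9.00/yr
# glacier: $2.88/yr
```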

However, before you close this tab and proceed to back up your files into Glacier using a Glacier tool (such as Freeze or FastGlacier), I’d like to let you know that once you upload to a Glacier vault, you will need to file a request with AWS every time you want access to your data. This includes merely listing the contents, so uploading to raw Glacier is not recommended for this use case.

Instead, I recommend uploading your data into an S3 bucket with a lifecycle policy set to transition data into Glacier zero days after upload. This ensures that your objects in S3 are moved to Glacier at the end of the zeroth day, so you’re not billed for even a single day of S3 Standard storage. AWS moves your data into what it calls the Glacier storage class of S3.
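For reference, here’s roughly what that lifecycle rule looks like in JSON form. The rule ID below is made up; you’d apply this via the console’s lifecycle settings or with aws s3api put-bucket-lifecycle-configuration:

```json
{
    "Rules": [
        {
            "ID": "to-glacier-immediately",
            "Status": "Enabled",
            "Filter": { "Prefix": "" },
            "Transitions": [
                { "Days": 0, "StorageClass": "GLACIER" }
            ]
        }
    ]
}
```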

This approach ensures that you can always list your Glacier contents using the S3 APIs or the AWS console. It also lets you use cheap or free S3 tools (such as Cyberduck or S3 Browser) to upload your data into Glacier, instead of having to spend $30+ on Glacier-specific tools such as Freeze.

I hope this was helpful. So far, I’ve uploaded 20 of my 60GB and have been pleasantly surprised by how easy it has been, and by how much backup-related stress it takes off my mind.

A future blog post will detail the steps required to download your S3-Glacier backups to your computer in an emergency.

Posted in Amazon Web Services, DevOps

AWS S3 Bucket Policy to Only Allow Encrypted Object Uploads

Amazon S3 supports two types of server-side encryption (SSE) for securing data at rest: AES256 and AWS KMS. AES256 is termed S3-managed encryption keys (SSE-S3), whereas KMS is termed, well, SSE-KMS, wherein the customer manages their encryption keys. A default KMS key is created for you the first time you use it with a service, such as, say, S3.

For more information on SSE-S3, check out this link.
For more information on SSE-KMS, check out this link.

There is growing support among tools (such as Logstash) for AES256-based SSE, so it may make sense to choose this encryption algorithm for your data.

If you want your users (whether IAM users, IAM roles, or regular console users) to never upload unencrypted data (for, well, security reasons), then it makes sense to have a bucket policy that explicitly denies uploads of unencrypted objects.

This example bucket policy was derived using this page. It allows both SSE-S3 and SSE-KMS encrypted objects while denying everything else.

{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::your-test-storage-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": [
                        "AES256",
                        "aws:kms"
                    ]
                }
            }
        },
        {
            "Sid": "DenyMissingEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::your-test-storage-bucket/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}

And, that is it. As simple as that!

If you’re using the AWS CLI to upload objects, here’s how to use various forms of encryption (or not):

AES256

#aws s3 cp --sse AES256  file.txt s3://test-dh01-storage/file.txt

AWS/KMS using the default aws/s3 KMS encryption key

#aws s3 cp --sse aws:kms file.txt s3://test-dh01-storage/file.txt

Unencrypted

#aws s3 cp file.txt s3://test-dh01-storage/file.txt
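To confirm which encryption an object actually ended up with, you can inspect its metadata, e.g. via aws s3api head-object --bucket test-dh01-storage --key file.txt. Here’s a minimal sketch of parsing such a response; the sample JSON is made up, though ServerSideEncryption is the real field name in the head-object output:

```shell
# Hypothetical head-object response; in practice, capture the output of:
#   aws s3api head-object --bucket test-dh01-storage --key file.txt
response='{"ContentLength": 11, "ServerSideEncryption": "AES256"}'

# Print the SSE algorithm, or "none" if the object is unencrypted
echo "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin).get("ServerSideEncryption", "none"))'
# prints: AES256
```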

 

Posted in Amazon Web Services, Tech.

DC/OS Exhibitor on S3 – Issues & Workarounds

If you want basic resiliency for your DC/OS master nodes when hosting them on AWS, you’ll want Exhibitor to store its data in AWS S3. To do so, grant your master nodes an IAM role that lets them talk to a specific S3 bucket. Then, use this genconf/config.yaml config to install (or re-install) your DC/OS cluster:

exhibitor_storage_backend: aws_s3
aws_region: us-east-1
exhibitor_explicit_keys: false
s3_bucket: <bucket-name>
s3_prefix: my-dcos-exhibitor-file

Note: You do NOT need to put your master nodes behind a load balancer (master_discovery: master_http_loadbalancer) for discovery even if you decide to use S3 as the Exhibitor backend. Yes, they’re often used together, but it’s not mandatory.

Back to the installation: follow the instructions here to continue. Once everything is deployed, head to your S3 bucket, search for my-dcos-exhibitor-file, and open it up. Here, two things can happen:

  1. You find the file doesn’t exist. In this case, take a look at your genconf/config.yaml file and count the number of master nodes that you listed in there. Here’s my list of masters:
    master_list:
    
    - 10.0.2.34
    - 10.0.0.147
    - 10.0.4.247

    If you have fewer than 3 master nodes in your list, then Exhibitor defaults to using a “static” backend, and won’t use S3 to store its config. So, use 3 or more master nodes and reinstall.

  2. The second issue that can happen is that only one of the master nodes succeeds in writing to S3 into my-dcos-exhibitor-file, and you’re left with a broken cluster. Your services ( #systemctl | grep dcos ) will all fail, and your postflight will time out and fail ( #sudo bash dcos_generate_config.sh --postflight ). You may also see tons of “null-lock-*” files hanging out in your S3 bucket.

    If this is your case, go check out my-dcos-exhibitor-file from S3. If you see something like this, there may be something I can do to help:

    #Auto-generated by Exhibitor 10.0.0.163
    #Wed Dec 14 19:49:59 UTC 2016
    com.netflix.exhibitor-rolling-hostnames=
    com.netflix.exhibitor-rolling.zookeeper-data-directory=/var/lib/dcos/exhibitor/zookeeper/snapshot
    com.netflix.exhibitor-rolling.servers-spec=2\:10.0.0.163
    com.netflix.exhibitor.zookeeper-pid-path=/var/lib/dcos/exhibitor/zk.pid
    com.netflix.exhibitor.java-environment=
    com.netflix.exhibitor.zookeeper-data-directory=/var/lib/dcos/exhibitor/zookeeper/snapshot
    com.netflix.exhibitor-rolling-hostnames-index=0
    com.netflix.exhibitor-rolling.java-environment=
    com.netflix.exhibitor-rolling.observer-threshold=0
    com.netflix.exhibitor.servers-spec=2\:10.0.0.163
    com.netflix.exhibitor.cleanup-period-ms=300000
    com.netflix.exhibitor.zookeeper-config-directory=/var/lib/dcos/exhibitor/conf
    com.netflix.exhibitor.auto-manage-instances-fixed-ensemble-size=3
    com.netflix.exhibitor.zookeeper-install-directory=/opt/mesosphere/active/exhibitor/usr/zookeeper
    com.netflix.exhibitor.check-ms=30000
    com.netflix.exhibitor.zookeeper-log-directory=/var/lib....

    If you look at the two servers-spec lines, you may notice something: only a single master node made it in. What happened to your other master nodes, you ask? Well, I don’t have an answer, but there’s a workaround.

    Edit those two servers-spec lines to include all your servers. Make sure to give each one an id.

    com.netflix.exhibitor-rolling.servers-spec=2\:10.0.0.163,1\:10.0.4.50,3\:10.0.2.174
    com.netflix.exhibitor.servers-spec=2\:10.0.0.163,1\:10.0.4.50,3\:10.0.2.174

    Now, give your entire cluster a few minutes while all master nodes stop being asshats and start to discover each other. Once Exhibitor is happy, DC/OS stops being whiny. All your services will come up, and you’ll soon be able to log in to your DC/OS UI.

    I hope that was helpful. I wasted an entire day (well, I got paid to do it) trying to figure this out.
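If you’d rather script the edit above than do it by hand, here’s a rough sketch. The sample lines and the extra master IPs mirror my cluster; in practice you’d first pull the file down with aws s3 cp s3://&lt;bucket-name&gt;/my-dcos-exhibitor-file . and push it back afterwards:

```shell
# Stand-in for the file fetched from S3 (only the two problem lines shown)
cat > my-dcos-exhibitor-file <<'EOF'
com.netflix.exhibitor-rolling.servers-spec=2\:10.0.0.163
com.netflix.exhibitor.servers-spec=2\:10.0.0.163
EOF

# Append the missing masters, each with its own id, to both servers-spec lines
sed 's|\(servers-spec=2\\:10.0.0.163\)$|\1,1\\:10.0.4.50,3\\:10.0.2.174|' my-dcos-exhibitor-file
# then push the edited file back with: aws s3 cp my-dcos-exhibitor-file s3://<bucket-name>/
```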

Posted in Amazon Web Services, Linux, Tech.

DC/OS Kill Mesos Framework

You want to kill a Mesos framework but have no idea how? You’ve looked at this page, but it still doesn’t make sense? Then here’s what you need to do to kill a framework on Mesos.

In my case, I have Spark running on DC/OS. In this particular situation, Spark was in a limbo state, not running any tasks we were throwing at it. Our resources were at 100% utilization, but none of our tasks were really running, although their status said otherwise. After trying in vain with “dcos spark kill”, I tried to kill individual drivers using this:

curl -XPOST https://<your-mesos-endpoint>/master/teardown -d 'frameworkId=6078e555-358c-454f-9359-422f1b6026bd-0002-driver-20161203012921-30627'

But even though that deleted the drivers from Mesos, the drivers continued to run in the cluster. I figured there had to be a better way, and that’s when I decided to kill the Spark framework instead:

curl -XPOST https://<your-mesos-endpoint>/mesos/master/teardown -d 'frameworkId=6078e555-358c-454f-9359-422f1b6026bd-0002'

And that worked like magic. It killed all the running and pending drivers, and also killed Spark itself.

Once that was done, I removed Spark’s Exhibitor entry and reinstalled Spark.
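In case you’re wondering where the framework ID comes from in the first place: the Mesos master exposes a /master/frameworks endpoint (proxied at /mesos/master/frameworks on DC/OS) that lists every framework with its id and name. Here’s a sketch parsing a made-up sample of that response:

```shell
# In practice: sample=$(curl -s https://<your-mesos-endpoint>/mesos/master/frameworks)
sample='{"frameworks":[{"id":"6078e555-358c-454f-9359-422f1b6026bd-0002","name":"spark"}]}'

# List framework IDs and names so you can pick the one to tear down
echo "$sample" | python3 -c '
import json, sys
for f in json.load(sys.stdin)["frameworks"]:
    print(f["id"], f["name"])
'
# prints: 6078e555-358c-454f-9359-422f1b6026bd-0002 spark
```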

Posted in Tech.

[How To] Java Heap Dump and Stack Trace

Here’s how you can quickly get a Java heap dump and stack trace on Amazon Linux with Java 8. Your mileage with these commands may vary.

First, find the process ID (pid) and user of the Java process that you want heap/stack dumps of. I like running:

ps aux | grep java

Note the user under which the Java process is running and switch to it. In my case, it was “webapp”.

# sudo su
# su webapp

Now, run this for the heap dump:

jmap -dump:format=b,file=/home/webapp/heapdump.bin <pid>

And this for the stack trace:

jstack -l <pid> > /home/webapp/strace.log

Optional (to scp these files out): copy these files over to your regular user’s home directory. For me, this was “ec2-user”. Exit to the root user first.

# exit
# cp /home/webapp/heapdump.bin /home/webapp/strace.log /home/ec2-user
# chown ec2-user:ec2-user /home/ec2-user/heapdump.bin /home/ec2-user/strace.log

Now use your favorite scp tool to scp the files out.

[Extra Tip] Here’s how you’d get more information (such as heap sizes) about the specific JDK running on the instance:

#java -XX:+PrintFlagsFinal -version | grep HeapSize
    uintx ErgoHeapSizeLimit                         = 0                                   {product}
    uintx HeapSizePerGCThread                       = 87241520                            {product}
    uintx InitialHeapSize                          := 132120576                           {product}
    uintx LargePageHeapSizeThreshold                = 134217728                           {product}
    uintx MaxHeapSize                              := 2095054848                          {product}
    openjdk version "1.8.0_91"
    OpenJDK Runtime Environment (build 1.8.0_91-b14)
    OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
Posted in Linux, Tech.

S3 access from VPC or Corporate IP

If you’ve been wanting to allow HTTP access to your favorite S3 bucket from your VPC and/or from your corporate LAN’s public IP, this blog post could help make your job easier. At the end of this, you will be able to use your S3 bucket as an artifact server, serving files via HTTP.

To begin, we’ll need to set up a VPC endpoint. Head to VPC->Endpoints->Create. Select your VPC and choose S3 from the next dropdown.


Copy the “vpce-xxxxxxx” resource ID that is returned after creation.

Next, head to your S3 bucket; in the properties sidebar, hit “Permissions”, click “Add Bucket Policy”, and enter something like this:

{
    "Version": "2012-10-17",
    "Id": "Policy1478550966902",
    "Statement": [
        {
            "Sid": "Stmt1478708488423",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucketname",
                "arn:aws:s3:::yourbucketname/*"
            ],
            "Condition": {
                "StringEqualsIgnoreCase": {
                    "aws:sourceVpce": "vpce-xxxxxx"
                }
            }
        }
    ]
}

In my case, I wanted GetObject, ListBucket, and PutObject access only from our VPC so that’s what I put in there. Your use case may vary.

Remember, I also want access to the bucket from our corporate IP address. Note that if you just added the IP address to the existing Condition block, it would act as an “AND” policy: access would be granted only if the IP address matches AND the traffic comes from the VPC. Whereas we want access if the IP address matches OR the request arrives via the VPC endpoint. So this is NOT going to work:

            "Condition": {
                "StringEqualsIgnoreCase": {
                    "aws:sourceVpce": "vpce-xxxxxx"
                },
                "IpAddress": {
                    "aws:SourceIp": "X.X.X.X/16"
                }
            }

Instead, use two separate statements:

{
    "Version": "2012-10-17",
    "Id": "Policy1478550966902",
    "Statement": [
        {
            "Sid": "Stmt1478550959905",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucketname",
                "arn:aws:s3:::yourbucketname/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "X.X.X.X/16"
                }
            }
        },
        {
            "Sid": "Stmt1478708488423",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::yourbucketname",
                "arn:aws:s3:::yourbucketname/*"
            ],
            "Condition": {
                "StringEqualsIgnoreCase": {
                    "aws:sourceVpce": "vpce-xxxxxx"
                }
            }
        }
    ]
}

And, done!

Posted in Amazon Web Services, Tech.

Programmatically Trigger Travis Builds on GitHub

Our code for this specific project (which is on GitHub) is designed to handle infrastructure creation and rebuilds. For this purpose, we wanted to be able to trigger our Travis builds for specific branches via simple Bash scripts, based on meeting certain conditions.

Here, we look at logging into Travis using a GitHub token, and then using this login to generate a Travis authorization token: a token which allows us to run calls against Travis’ REST API.

First, we begin by creating a new GitHub token; this is a one-time activity. At the end of it, we’ll have a token with the bare-minimum scopes needed to trigger Travis builds.

To do this, click on your profile on the GitHub page and click “Settings”. Under the “Developer settings” heading, head to “Personal access tokens” and hit “Generate new token”. Check the “public_repo” scope if yours is a public Git repository. Hit “Generate Token”.

You’ll now be able to copy the token: a long token string (snipped in the examples below).

No more GUI. From here on, we can use this GitHub token to get our work done entirely via the command line.

In your script, you’ll log in to Travis this way (make sure the machine running the script has the Travis CLI installed):

/usr/local/bin/travis login --pro --github-token <snipped> --skip-completion-check

The “--skip-completion-check” flag is required to ensure non-interactive CLI access when using Travis.

Next, get a Travis authorization token by doing this:

token=$(/usr/local/bin/travis token --pro --skip-completion-check | cut -f5)

Now, set the following string as the body for the upcoming API call; $branch is the branch of your project on which you want to trigger the build:

body='{ "request": { "branch":"'${branch}'" }}'
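Since $branch is spliced into the string, it’s worth sanity-checking that the result is valid JSON (the branch name below is just an example):

```shell
branch="master"
body='{ "request": { "branch":"'${branch}'" }}'

# json.tool pretty-prints valid JSON and exits non-zero on malformed input
echo "$body" | python3 -m json.tool
```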

Next, go ahead and make the actual API call to trigger a Travis build on that branch:

curl -so /dev/null -X POST -w "%{http_code}" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Travis-API-Version: 3" \
-H "Authorization: token $token" \
-d "$body" https://api.travis-ci.com/repo/yourproject/requests

The -so /dev/null -X POST -w “%{http_code}” portion is specific to our case, where we only want the HTTP response code and nothing else. Our script then takes further decisions based on the output of the curl call.
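For completeness, the decision logic that follows in our script looks roughly like this. A successful trigger request is expected to come back as HTTP 202 (Accepted); the status is hard-coded below for illustration, whereas the real script captures it from the curl call above:

```shell
# In the real script: status=$(curl -so /dev/null -X POST -w "%{http_code}" ... )
status="202"

if [ "$status" = "202" ]; then
  echo "build triggered"
else
  echo "trigger failed with HTTP $status" >&2
  exit 1
fi
# prints: build triggered
```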

Well, that’s it! If you have any questions, please drop them in as comments.

Posted in DevOps, Tech.