

Showing posts from 2017

Map External Domain name to S3 Bucket

Though it sounds straightforward to map a domain name from the likes of GoDaddy to an AWS S3 bucket, trust me, it's not. The following steps do the mapping (assuming the domain name is example.com):

1. Create a bucket called example.com. The bucket name must be exactly the same as the domain name.
2. Create a hosted zone in Route53: example.com.
3. Add an A record in the hosted zone. Keep the name the same as the domain name.
4. Check the "Yes" option for Alias.
5. In the Alias Target, select your bucket name (it takes a few seconds for bucket names to load).
6. Click Save Record Set.
7. Go to your domain provider's site and open the NS (name server) section.
8. Click "Add NS record" and copy in all the records from the NS records section of your Route53 hosted zone.

That's all you need to do to map a bucket to an external domain name. If you also want to redirect a subdomain like www to the base domain, just add a bucket named after the subdomain (www.example.com) and mark it as a redirect bucket in that bucket's static website hosting properties.
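If you prefer the CLI, the alias A record described above can also be created with `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://alias.json`. Below is a minimal sketch of alias.json, assuming the bucket lives in us-east-1; note that the `HostedZoneId` here is the fixed zone ID of that region's S3 website endpoint, not the ID of your own hosted zone:

```json
{
  "Comment": "Alias example.com to its S3 website endpoint",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

If your bucket is in a different region, swap in that region's S3 website endpoint and its corresponding hosted zone ID from the AWS documentation.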

IntelliJ IDEA Tricks n Tips

Switch the function-key behavior of your Mac or Windows machine, so that you don't have to press the Fn key while using F1, F2, etc.

Search:
- File search: Cmd + Shift + O
- Text search in the whole project: Cmd + Shift + F
- Search settings and commands: Cmd + Shift + A (the Brahmastra)

Navigation:
- Disable tabs. We don't need them.
- Recent files: Cmd + E. Last accessed file: Cmd + E, then Enter.
- Recently edited files: Cmd + Shift + E
- Browse backward / forward in history: Cmd + [ or Cmd + ]
- Project window toggle: Alt + 1. In the project window settings, enable "Autoscroll from source".
- Open a selected file from the project window: F4
- Create a new file: Ctrl + Alt + N
- Hide all windows except the code editor: Cmd + Shift + F12
- Toggle terminal: Cmd + F12
- Find implementations of a method: Cmd + Alt + F8
- Find usages: Cmd + F7 or Cmd + B
- Remove the navigation bar from the View menu.

Code editor:
- Move code up or down: Alt + Shift + Up / Down
- Duplicate a line: Cmd + D
- Delete a line:

The Great Gradle Cheat Sheet

This article is intended as Gradle in a nutshell. I will try to keep it as concise as possible while touching as many frequently used Gradle operations as possible.

Task chains

```groovy
task compile {
    doLast { println 'compiling source' }
}

task compileTest(dependsOn: compile) {
    doLast { println 'compiling unit tests' }
}
```

Running `gradle compileTest` executes the "compile" task first and then "compileTest".

Excluding tasks with -x: gradle compileTest -x compile

Abbreviating task names by camel initials: gradle compileTest can be shortened to gradle cT

Running a task in a subproject: gradle -p subdir compileTest

Listing all tasks: gradle -q tasks

Listing project dependencies: gradle -q dependencies

Listing build-script dependencies: gradle buildEnvironment

Listing project properties: gradle -q properties

Profiling the build: gradle build --profile generates a nice report about the time taken in

Auto Scaling DynamoDB

Those of you who have worked with DynamoDB long enough will be aware of its tricky scaling policies. DynamoDB lets the user explicitly set requests per second (capacity units per second, strictly, but for simplicity we will just say requests per second), separately for read and write operations. The user can increase or decrease the provisioned capacity from the DynamoDB web console at any time, and the change takes effect immediately. Sounds all good... or not?

What if you set the provisioned capacity to 50 requests per second but the load on the server crosses 100 requests per second? Requests get throttled and mostly time out. What's worse, requests can queue up in your web server, which can potentially bring your entire server down.

What if you set the provisioned capacity to 1000 requests per second but the load on the server is only 100 requests per second throughout the day? You lose your hard-earned money on the remaining 900 requests per second.

What if you set it to 1000 requests per second and then realis
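As of mid-2017, one way to escape this dilemma is AWS's native DynamoDB auto scaling via Application Auto Scaling: you register the table as a scalable target with a minimum and maximum capacity, attach a target-tracking policy, and AWS adjusts the provisioned capacity to hold consumed capacity near a utilization target. A sketch of the policy configuration passed to `aws application-autoscaling put-scaling-policy` follows; the 70% target and 60-second cooldowns are illustrative values, not recommendations:

```json
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
  },
  "ScaleOutCooldown": 60,
  "ScaleInCooldown": 60
}
```

With this in place, read capacity scales between the registered minimum and maximum so that consumed capacity stays near 70% of provisioned, which addresses both the throttling and the over-provisioning scenarios above. An analogous policy with DynamoDBWriteCapacityUtilization covers writes.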

StackDriver Integration with AWS Elastic Beanstalk - Part 2

Eb Extension Config

Our goal is to create a configuration that can run on any Elastic Beanstalk instance, sends production and non-production logs to two separate Stackdriver projects, and requires minimal changes when adding monitoring for a new log file. If you have used Elastic Beanstalk before, you are probably familiar with .ebextensions scripts: a set of commands that run every time the application is deployed on EB.

Step 1: Create a folder named .ebextensions in your WEB-ROOT directory. By default, EB looks for ".config" files under the .ebextensions directory and executes them on app deployment. Add a sub-directory called "stackdriver" under the .ebextensions directory you just created.

Step 2: Add a google-fluentd.conf file in the stackdriver directory. The fluentd agent runs on this configuration; it tells fluentd where to look for log files. The following sample configures fluentd to watch the app.log and billing.log files.
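As a minimal sketch, a google-fluentd.conf watching those two files could look like the following; the log and pos_file paths and the tags are assumptions to adjust for your own application layout:

```
<source>
  @type tail
  format none
  path /var/app/current/logs/app.log
  pos_file /var/lib/google-fluentd/pos/app.pos
  read_from_head true
  tag app-log
</source>

<source>
  @type tail
  format none
  path /var/app/current/logs/billing.log
  pos_file /var/lib/google-fluentd/pos/billing.pos
  read_from_head true
  tag billing-log
</source>
```

Each `<source>` block tails one file, uses the pos_file to remember how far it has read across restarts, and labels the entries with a tag that later shows up as the log name in Stackdriver. Adding monitoring for a new log file then means adding one more `<source>` block.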

StackDriver Integration with AWS Elastic Beanstalk - Part 1

Stack Driver Setup

Step 1: Create a Stackdriver project. Navigate to https://stackdriver.com. After logging in, you will be redirected to the project-creation page. To manage multiple environments of your application, it's advisable to create just two projects: one for the production environment and one for non-prod. Even though you might have multiple non-production environments, one Stackdriver project per application environment is overkill. So, let's start with the production project; for this tutorial, I'll create the CodeHooDoo-Prod project. Once the project is created, Stackdriver will ask you to link your AWS account resources to it for monitoring. Since AWS CloudWatch already has good monitoring and alerting support, you can skip this step. Keep clicking Continue until you reach the monitoring console, then click on Logging. This is where you will get all the logs from your application server.

Step 2: Download the authentication key. Navigate back to https://stackdriver.com .

StackDriver Integration with AWS Elastic Beanstalk

Introduction

In today's era, where an application server spreads across hundreds of machines, log aggregation has become a mandatory tool. It provides a centralised location for browsing and searching the logs coming from all of your server instances, and Google's Stackdriver is indeed a quite stable log-aggregation tool. After reading this blog you will be able to:

- send your multi-server logs to a centralised location, i.e. Stackdriver
- monitor / search logs in real time
- re-use the same config on different Elastic Beanstalk environments (prod vs non-prod)

This tutorial is divided into two chapters:

1. Stack Driver setup
2. Elastic Beanstalk configuration