

Serve and Process S3 Object Requests via CloudFront & Lambda@Edge

Serve and Authenticate S3 Object Requests via CloudFront & Lambda@Edge

Use case: check whether the requester of an S3 object has the right permissions to access the requested object.

Solution:
1) Create an S3 bucket and associate a CloudFront distribution with it.
2) Create a Lambda function with your code. In the Lambda you can handle the authentication of the request, i.e. check whether the right person is accessing the right object.
3) Add a CloudFront event trigger to the Lambda@Edge function.
4) Deploy the Lambda@Edge function.
5) Select your CloudFront distribution, and select Viewer Request as the CloudFront event (it fires on every request, before the cache is checked). Keep the rest as default.

Now every request made to your CloudFront distribution will pass through the Lambda function.
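The steps above can be sketched as a viewer-request handler, assuming the Python runtime for Lambda@Edge. The header name and the token check are illustrative assumptions, not a real authentication scheme:

```python
# Hypothetical Lambda@Edge viewer-request handler (Python runtime).
# The "authorization" header and the token comparison below are
# placeholders for whatever permission check your application needs.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    # CloudFront lower-cases header names; each value is a list of dicts.
    auth = headers.get("authorization", [])
    token = auth[0]["value"] if auth else None

    if not is_authorized(token, request["uri"]):
        # Returning a response object (instead of the request)
        # short-circuits CloudFront: the S3 object is never fetched.
        return {
            "status": "403",
            "statusDescription": "Forbidden",
            "body": "Access denied",
        }

    # Returning the request lets CloudFront continue to the cache/origin.
    return request


def is_authorized(token, uri):
    # Placeholder check; a real implementation would validate a signed
    # token (e.g. a JWT) against the requested object key.
    return token == "secret-token"
```

A real check would typically decode a signed token and compare its claims against the requested URI.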
Recent posts

AWS Services Overview for AWS Certified Solutions Architect – Associate Exam

AWS Services Overview for AWS Certified Solutions Architect – Associate Exam

The AWS Certified Solutions Architect - Associate examination is intended for individuals who perform a solutions architect role and have one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS. Preparing for the exam by watching content and reading blogs while taking notes can be quite a hurdle, so I am sharing this consolidated short overview of each AWS service you need to cover for the exam.

AWS Compute services

1) EC2 - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers; i.e., it's a virtual machine inside the AWS platform. You can also get physically dedicated machines under EC2. 2) EC2 Container Service - where you can manage/run Docker containers at scale. 3)

Going deep into Instagram's feed ranking algorithm

Going deep into Instagram's feed ranking algorithm

A ranking algorithm means feed posts show up based on what Instagram thinks you want to see, not just the newest posts first. Let's look at how Instagram handles its giant feed. They never revealed how the feed algorithm worked until they called a group of reporters to their San Francisco office and pulled the curtain off the Instagram feed ranking algorithm. They explained to the reporters that an individual's engagement with the app comes down to three factors, the three being: -

Load Balancer and Load Balancing Algorithms

What is a load balancer? A load balancer is a virtual or physical device that distributes network traffic across clustered servers. It improves responsiveness and increases the availability of applications. Sitting between client and server, it acts as an advisor for incoming request traffic and passes each request to the right server using one of various load balancing algorithms. This reduces the load on any single server and prevents any one application server from becoming a single point of failure, improving application availability and response time.
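Two of the most common load balancing algorithms can be sketched in a few lines. This is a minimal illustration (the server names and connection counts are made up), not how a production balancer is implemented:

```python
from itertools import cycle

# Hypothetical pool of backend servers.
servers = ["app-1", "app-2", "app-3"]

# Round robin: hand requests to servers in a fixed rotating order.
_rotation = cycle(servers)

def round_robin():
    return next(_rotation)

# Least connections: pick the server currently handling the
# fewest active connections (counts tracked by the balancer).
active_connections = {"app-1": 0, "app-2": 0, "app-3": 0}

def least_connections():
    return min(active_connections, key=active_connections.get)
```

Round robin is ideal when all servers and requests are roughly equal; least connections adapts better when some requests are much heavier than others.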

Easiest way to understand how the Ethereum Smart Contract work

Ethereum is one of the fastest growing blockchains, with an ever-growing ecosystem. It's beautiful in that, unlike Bitcoin, it's not just a currency; it's a complete platform for Decentralized Applications (Dapps), with Ethereum-as-currency being just one of the platform's many applications. Sheer enthusiasm about this currency in recent times has caused the emergence of tonnes of technologies around it. Once you start reading about Ethereum you will be hit by technical jargon at a rate of one jargon per minute. Hence, this article is an attempt to help you visualize one aspect of the Ethereum system, i.e. how a smart contract works. Imagine you want to create a bank where users can manage their accounts, and any other third-party bank should be able to integrate with your super awesome bank. So, you create the following piece of code:

class Bank {
    Map<String, Double> accountMap;

    public void transferTo(String accountNumber, double amount) {
        double balance = accountMap.get(accountNumber);
        accountMap.put(accountNumber, balance + amount);
    }
}

Map External Domain name to S3 Bucket

Though it sounds very straightforward to map a domain name from the likes of GoDaddy to an AWS S3 bucket, trust me, it's not. Following are the steps to do this mapping (say the domain name is example.com):

1) Create a bucket called example.com. It's mandatory for the bucket name to be exactly the same as the domain name.
2) Create a hosted zone in Route53 for the domain.
3) Add an A record in the hosted zone. Keep the name the same as the domain name, check the Yes option for Alias, and in the Alias Target select your bucket name (it takes a few seconds to load the bucket names). Click Save Record Set.
4) Go to your domain provider's site and open the NS (name server) section. Click "add NS record" and add all the records from the NS records section of the Route53 hosted zone.

That's all you need to do to map a bucket to an external domain name. If you also want to redirect a subdomain like www to the base domain, just add a bucket with that exact name (www.example.com) and mark it as a redirect bucket in the bucket's static hosting properties.
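If you prefer the AWS CLI over the console, the alias A record from the steps above corresponds to a Route53 change batch roughly like the following. The domain name is a placeholder, and the endpoint shown assumes the bucket's website endpoint is in us-east-1 (each region's S3 website endpoint has its own fixed hosted-zone ID):

```json
{
  "Comment": "Alias the apex domain to the S3 website endpoint (illustrative)",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

This would be passed to `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://change.json`.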

IntelliJ IDEA Tricks and Tips

Switch the function-key setting of your Mac or Windows machine, so that you don't have to press the Fn key while using F1, F2, etc.

Search:
File search: Cmd + Shft + O
Text search in whole project: Cmd + Shft + F
Search settings, commands: Cmd + Shft + A (Brahmastra)

Navigation:
Disable tabs. We don't need them.
Recent files: Cmd + E. Last accessed file: Cmd + E + Enter
Recently edited files: Cmd + Shft + E
Browse backward / forward in history: Cmd + [ OR Cmd + ]
Project window toggle: Alt + 1
In the project window settings, enable Autoscroll from source
Open a selected file from the project window: F4
Create new file: Ctr + Alt + N
Hide all windows except the code editor: Cmd + Shft + F12
Toggle terminal: Cmd + F12
Find implementation of any method: Cmd + Alt + F8
Find usages: Cmd + F7 OR Cmd + B
Remove the navigation bar from the View menu.

Code Editor:
Move code up or down: Alt + Shft + Up / Down
Duplicate a line: Cmd + D
Delete line:

The Great Gradle Cheat Sheet

This article intends to be Gradle in a nutshell. I will try to keep it as concise as possible while touching as many frequently used Gradle operations as possible.

Task chains:

task compile {
    doLast { println 'compiling source' }
}

task compileTest(dependsOn: compile) {
    doLast { println 'compiling unit tests' }
}

# gradle compileTest

The above command will execute the "compile" task first and then compileTest.

Excluding tasks with -x:
# gradle compileTest -x compile

Abbreviating task names by camel-case initials:
# gradle compileTest
# gradle cT

Running a subproject:
# gradle -p subdir compileTest

Listing all tasks:
# gradle -q tasks

Listing project dependencies:
# gradle -q dependencies

Listing build script dependencies:
# gradle buildEnvironment

Listing project properties:
# gradle -q properties

Profiling the build:
# gradle build --profile

The above command generates a nice report about the time taken in

Auto Scaling DynamoDB

Those of you who have worked with DynamoDB long enough will be aware of its tricky scaling policies. DynamoDB lets the user explicitly set requests per second (capacity units per second, strictly speaking, but for simplicity we'll just say requests per second), separately for read and for write operations. At any time the user can increase or decrease the provisioned capacity from the DynamoDB web console, and the change takes effect immediately. Sounds all good... or not? What if you set the provisioned capacity to 50 requests per second but the load on the server crosses 100 requests per second? Requests get throttled, and mostly time out. What's worse, this can cause requests to queue up in your web server, which can potentially bring your entire server down. What if you set the provisioned capacity to 1000 requests per second but the load is only 100 requests per second throughout the day? You lose your hard-earned money on the remaining 900 requests per second. What if you set it to 1000 req per sec and then realis
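The two failure modes above can be sketched with a toy model of one second of traffic. This is purely illustrative; real DynamoDB throttling uses token buckets with burst capacity, not a hard per-second cutoff:

```python
def simulate_second(provisioned_capacity, incoming_requests):
    """Toy model of one second of DynamoDB traffic: requests beyond the
    provisioned capacity are throttled, and any capacity that goes
    unused is still paid for."""
    served = min(incoming_requests, provisioned_capacity)
    throttled = incoming_requests - served          # over-capacity: errors/timeouts
    unused = provisioned_capacity - served          # over-provisioning: wasted money
    return served, throttled, unused
```

With capacity 50 and load 100, half the requests are throttled; with capacity 1000 and load 100, you pay for 900 units you never use. Auto scaling exists to keep both numbers near zero.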

StackDriver Integration with AWS Elastic Beanstalk - Part 2

EB Extension Config

Our goal is to create a configuration that can run on any Elastic Beanstalk instance, sends production and non-production logs to two separate Stackdriver projects, and requires minimal changes to add monitoring for a new log file. If you have used Elastic Beanstalk before, you are probably familiar with eb-extensions scripts: a set of commands that run every time the application is deployed on EB.

Step 1: Create a folder called .ebextensions in your WEB-ROOT directory. EB by default looks for ".config" files under the .ebextensions directory and executes them on app deployment. Add a sub-directory called "stackdriver" under the .ebextensions directory you just created.

Step 2: Add a google-fluentd.conf file in the stackdriver directory. The fluentd agent runs on this configuration; the file tells fluentd where to look for log files. The following sample file configures fluentd to check for the app.log and billing.log files.
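A minimal sketch of such a google-fluentd.conf, assuming the app writes app.log and billing.log under /var/app/current/logs (the paths, pos_file locations, and tags are assumptions; adjust them to wherever your application actually writes its logs):

```
<source>
  @type tail
  path /var/app/current/logs/app.log
  pos_file /var/lib/google-fluentd/pos/app-log.pos
  tag app-log
  format none
</source>

<source>
  @type tail
  path /var/app/current/logs/billing.log
  pos_file /var/lib/google-fluentd/pos/billing-log.pos
  tag billing-log
  format none
</source>
```

Each `<source>` block tails one file; the tag becomes the log name you see in the Stackdriver logging console, so adding monitoring for a new log file is just one more `<source>` block.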

StackDriver Integration with AWS Elastic Beanstalk - Part 1

Stackdriver Setup

Step 1: Create a Stackdriver project. Navigate to the Stackdriver console; after logging in, you will be redirected to the project creation page. To manage multiple environments of your application, it's advisable to create just two projects: one for the production environment and one for non-prod. Even though you might have multiple non-production environments, having one Stackdriver project per application environment is overkill. So, let's start with the production project. For this tutorial, I'll create the CodeHooDoo-Prod project. Once the project is created, Stackdriver will ask you to link your AWS account resources to it for monitoring. As AWS CloudWatch already has good monitoring and alerting support, you can skip this one. Keep clicking Continue until you get to the monitoring console. Click on Logging: this is where you will see all the logs from your application server.

Step 2: Download the authentication key. Navigate back to .

StackDriver Integration with AWS Elastic Beanstalk

Introduction

In today's era, where an application server spreads across hundreds of machines, log aggregation has become a mandatory tool. It provides a centralised location for browsing and searching through the logs coming from all of your server instances. Google's Stackdriver is a quite stable log-aggregation tool. After reading this blog you will be able to:

- send your multi-server logs to a centralised location, i.e. Stackdriver
- monitor / search logs in real time
- re-use the same config on different Elastic Beanstalk environments (prod vs non-prod)

This tutorial is divided into two chapters:

1) Stackdriver setup
2) Elastic Beanstalk configuration