
Auto Scaling DynamoDB

Those of you who have worked with DynamoDB long enough will be aware of its tricky scaling policies. DynamoDB lets you explicitly set a provisioned capacity in requests per second (technically capacity units per second, but for simplicity we will say requests per second), separately for read and write operations. You can increase or decrease the provisioned capacity at any time from the DynamoDB web console, and the change takes effect immediately.

Sounds all good... Or does it?

What if you set the provisioned capacity to 50 requests per second but the load on the server crosses 100 requests per second? Requests get throttled and mostly time out. What's worse, the throttled requests can queue up in your web server, which can potentially bring your entire server down.

What if you set the provisioned capacity to 1,000 requests per second but the load on the server is only 100 requests per second throughout the day? You lose your hard-earned money on the remaining 900 requests per second of capacity you never use.

What if you set it to 1,000 requests per second, then realise the traffic is low, so you bring it down to 900, then 800, then 700? After four dial-downs, you won't be able to dial it down further for the rest of the day. And practically, it is impossible to monitor this by hand all day, every day.

These are serious limitations. In this article I will show one way to resolve all of these problems.

The overall approach is to create an AWS Lambda function that is triggered whenever a DynamoDB table's provisioned capacity is about to be consumed completely; the function then increases the capacity. Similarly, when actual consumption falls well below the provisioned capacity, the same function is triggered and brings the provisioned capacity down.
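As a first piece of the puzzle, the function needs to work out which table and which metric (read or write) tripped the alarm. A minimal sketch, assuming the alarm names follow the pattern DynamoDB used for its default alarms (`<table>-ReadCapacityUnitsLimit-BasicAlarm`); adjust the pattern if your alarms are named differently:

```javascript
// Hypothetical helper: extract the table name and metric from a
// default-style DynamoDB alarm name delivered via SNS, e.g.
// "orders-ReadCapacityUnitsLimit-BasicAlarm".
function parseAlarmName(alarmName) {
  var match = /^(.+)-(Read|Write)CapacityUnitsLimit-BasicAlarm$/.exec(alarmName);
  if (!match) {
    return null; // not a DynamoDB capacity alarm; ignore it
  }
  return { table: match[1], metric: match[2].toLowerCase() };
}
```

From the parsed result, the function knows whether to look at `ConsumedReadCapacityUnits` or `ConsumedWriteCapacityUnits` for that table.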

With this setup in place, your server should not go down due to throttled DynamoDB queries, and your DynamoDB operational cost will go down considerably.

Let's Start

DynamoDB is nice enough to create two basic alarms as soon as you create a table: one for read capacity and one for write capacity.

You can find them in the Alarms tab of the table's properties section.

Click the Edit Alarm button; it opens a dialogue box showing that the alarm sends notifications to an SNS topic called dynamodb.
Next we will create an AWS Lambda function that listens to this SNS topic and adjusts the provisioned capacity as required.

Open the AWS Lambda web console and click Create lambda function. Select Blank Function from the provided templates. Click the dotted square and choose SNS as the trigger, then select the SNS topic you saw in the Edit Alarm dialogue (most probably named dynamodb).

This opens the Configure function page.

Give your Lambda a descriptive name and select Node.js 4.3 as the runtime, because the code that I am going to share is written and tested for Node.js 4.3.

Get the Lambda code from my GitHub and paste it into the inline code editor. (Node.js not being my primary tool, I am sure there is plenty of scope for improvement in the code. If you can suggest optimisations, I'll be happy to include them.)

This Lambda does the following:
  • Gets triggered whenever an alarm fires on any DynamoDB table
  • Checks the average consumption over the last minute
  • Based on the difference between consumed and provisioned capacity, sets a new provisioned capacity that ensures requests won't get throttled while not wasting much capacity
  • Updates the CloudWatch alarms set for the table to match the new provisioned capacity
  • Sends a Slack notification to a channel so we can keep an eye on its activity
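The alarm-update step deserves a note. The default alarms compare the sum of consumed units over a fixed period against a fraction of the provisioned capacity, so after changing the capacity the alarm threshold has to be recomputed. A hedged sketch; the 300-second period and 80% ratio are assumptions here, and in practice you should read the period from the existing alarm rather than hard-coding it:

```javascript
// Assumed values: default DynamoDB alarms sum consumed units over a
// 300-second period and fire at 80% of provisioned capacity.
var ALARM_PERIOD_SECONDS = 300;
var PROVISIONED_UPPER_MARGIN_RATIO = 0.8;

// New alarm threshold after the table's capacity changes:
// capacity (units/sec) * margin ratio * period (sec).
function alarmThreshold(provisionedCapacity) {
  return provisionedCapacity * PROVISIONED_UPPER_MARGIN_RATIO * ALARM_PERIOD_SECONDS;
}
```

The resulting value would be passed as the Threshold of a CloudWatch PutMetricAlarm call for the table's consumed-capacity metric.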
You can see the following configuration at the top of the Lambda:
var PROVISIONED_UPPER_MARGIN_RATIO = 0.8;      // Autoscale if consumed >= 80% of provisioned
var PROVISIONED_LOWER_MARGIN_RATIO = 0.3;      // Autoscale if consumed <= 30% of provisioned
var PROVISIONED_TO_CONSUMED_IDEAL_RATIO = 1.3; // Autoscale to 130% of consumed

var MINIMUM_DELTA_TO_ACT_UPON = 50; // Update provisioned capacity only if the change is larger than this

var MAX_DIAL_DOWNS = 24; // Number of times per day DynamoDB allows you to bring down provisioned capacity
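To make the role of these constants concrete, here is a minimal sketch of the sizing decision they drive. The function name and exact rounding are illustrative, not lifted from the repository:

```javascript
var PROVISIONED_UPPER_MARGIN_RATIO = 0.8;      // scale up if consumed >= 80% of provisioned
var PROVISIONED_LOWER_MARGIN_RATIO = 0.3;      // scale down if consumed <= 30% of provisioned
var PROVISIONED_TO_CONSUMED_IDEAL_RATIO = 1.3; // target 130% of current consumption
var MINIMUM_DELTA_TO_ACT_UPON = 50;            // ignore changes smaller than this

// Returns the new capacity to provision, or null when no change is needed.
function newProvisionedCapacity(consumed, provisioned) {
  var tooHigh = consumed >= provisioned * PROVISIONED_UPPER_MARGIN_RATIO;
  var tooLow = consumed <= provisioned * PROVISIONED_LOWER_MARGIN_RATIO;
  if (!tooHigh && !tooLow) {
    return null; // consumption is in the comfortable band; do nothing
  }
  var target = Math.ceil(consumed * PROVISIONED_TO_CONSUMED_IDEAL_RATIO);
  if (Math.abs(target - provisioned) < MINIMUM_DELTA_TO_ACT_UPON) {
    return null; // change too small to be worth an UpdateTable call
  }
  return target;
}
```

For example, with 100 provisioned units, a spike to 500 consumed units would retarget to 650, while a quiet 50 consumed units falls inside the comfortable band and triggers no change.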

P.S. By default DynamoDB allows you to bring down provisioned capacity only four times a day, but you can contact their support centre and they may increase that limit for you.

Word of caution: make sure you add an upper limit to the provisioned capacity, so that your bill won't shoot up if someone launches a DDoS attack on your web server.
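The cap itself is a one-liner; the constant name and value below are made up, so pick whatever your budget tolerates:

```javascript
// Assumed ceiling: never provision more than this many units/sec,
// no matter what the alarms say.
var MAX_PROVISIONED_CAPACITY = 2000;

function clampCapacity(requested) {
  return Math.min(requested, MAX_PROVISIONED_CAPACITY);
}
```

Applying this just before the UpdateTable call means a traffic flood can still throttle requests, but it can no longer run up an unbounded bill.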

Happy Coding!



