AWS Database Blog

Optimize costs by scheduling provisioned capacity for Amazon DynamoDB

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB charges for reading, writing, and storing data in your tables, along with any optional features you choose to enable. When you create a DynamoDB table, you choose from two capacity modes with different billing options for processing reads and writes: on-demand and provisioned.

To manage the cost of a DynamoDB provisioned capacity table, you can adjust the provisioned read/write capacity. You can do this through auto scaling or based on a schedule that accounts for peak and off-peak hours of traffic.

Amazon DynamoDB has auto scaling capabilities that use the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity in response to actual traffic patterns. Auto scaling enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic. When the workload decreases, auto scaling decreases the throughput so that you don’t pay for unused provisioned capacity.
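To make the target-tracking behavior concrete, the following minimal Python sketch (using the boto3 `application-autoscaling` client; the function name, the table name `CyclicalTable`, and the 70 percent utilization target are illustrative assumptions, not part of this post's template) registers a table's write capacity as a scalable target and attaches a target-tracking policy:

```python
def configure_auto_scaling(client, table_name, min_cap, max_cap, target_util=70.0):
    """Register the table's write capacity as a scalable target and attach
    a target-tracking policy that scales toward a utilization percentage."""
    resource_id = f"table/{table_name}"
    client.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId=resource_id,
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=min_cap,
        MaxCapacity=max_cap,
    )
    client.put_scaling_policy(
        PolicyName=f"{table_name}-write-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId=resource_id,
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": target_util,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    )

# Example (assumes boto3 is installed and AWS credentials are configured):
# import boto3
# client = boto3.client("application-autoscaling")
# configure_auto_scaling(client, "CyclicalTable", min_cap=30, max_cap=90)
```

Passing the client in as a parameter keeps the sketch testable without live AWS credentials.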

In this post, I show you how to optimize the cost of a provisioned capacity table by combining auto scaling with scheduled changes to the table's minimum throughput, so that increased write capacity is available at the desired time without delay.

Problem statement

Some workloads change predictably on a regular cycle. Consider trading platforms in the financial industry: trading starts on Monday morning and ends on Friday evening, and when the markets open on Monday, there is a sudden surge in activity with no time to ramp up. Another example is a retail sale that coincides with a significant marketing promotion: when the sale starts, the online store might experience a surge of orders.

On-demand capacity is great for unpredictable workloads, but in use cases like those described above, it might throttle, because an on-demand table can only double its previous peak capacity every 30 minutes, as described in Why is my on-demand DynamoDB table being throttled?

If you’re using provisioned capacity and have predictable, cyclical spikes in demand that must be handled immediately, as in the financial industry example, you need to set a high minimum read/write provisioned capacity on your DynamoDB table. This means you might pay for a lot of unused capacity during periods of low traffic, such as over the weekend. You can use Application Auto Scaling to schedule scaling policies that adjust your provisioned throughput to support predictable changes in traffic. You could adjust the capacity manually, but an automated solution is preferred.

Solution overview

When using Application Auto Scaling, you can use a cron expression to schedule a policy. You can create multiple schedules to change your table’s capacity as needed.
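As a rough sketch of what such a schedule looks like programmatically (the helper name is illustrative; the call itself is the Application Auto Scaling PutScheduledAction API via boto3), each schedule is a cron expression paired with a new minimum capacity:

```python
def schedule_min_capacity(client, table_name, action_name, cron, min_capacity):
    """Create or update a scheduled action that changes the table's
    minimum provisioned write capacity at the given cron() time."""
    client.put_scheduled_action(
        ServiceNamespace="dynamodb",
        ScheduledActionName=action_name,
        ResourceId=f"table/{table_name}",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        Schedule=cron,  # e.g. "cron(0 8 ? * 2 *)" fires Mondays at 8:00 AM UTC
        ScalableTargetAction={"MinCapacity": min_capacity},
    )

# Example (assumes boto3 is installed and AWS credentials are configured):
# import boto3
# client = boto3.client("application-autoscaling")
# schedule_min_capacity(client, "CyclicalTable", "startWeekday",
#                       "cron(0 8 ? * 2 *)", 90)   # peak: raise the floor
# schedule_min_capacity(client, "CyclicalTable", "startWeekEnd",
#                       "cron(0 18 ? * 6 *)", 30)  # off-peak: lower the floor
```

You can register as many scheduled actions as your traffic cycle requires; each one independently resets the minimum capacity at its scheduled time.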

Download and run the complete AWS CloudFormation template to set up the sample solution. Wait for your off-peak or peak schedule to begin, and then view the results in the table's Amazon CloudWatch metrics. The template takes as input the ARN of a role that is allowed to perform the scaling actions; for the permissions this role needs, refer to the Developer Guide.

The template creates the following resources, as shown in the snippet that follows:

  • A DynamoDB table with an initial read/write provisioned capacity.
  • An AWS::ApplicationAutoScaling::ScalableTarget that uses ScheduledActions to start peak scaling on Monday at 8:00 AM UTC by raising the minimum write capacity to 90 write capacity units (WCUs), and to lower the minimum write capacity back to 30 WCUs on Friday at 6:00 PM UTC (you can change the schedule to fit your use case).
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      MinCapacity: 30
      MaxCapacity: 90
      ResourceId: !Sub "table/${CyclicalTable}"
      RoleARN: !Ref DynamoDBAutoscalingRoleArn
      ScalableDimension: dynamodb:table:WriteCapacityUnits
      ServiceNamespace: dynamodb
      ScheduledActions:
        - ScheduledActionName: startWeekday
          # Monday 8:00 AM UTC
          Schedule: "cron(0 8 ? * 2 *)"
          ScalableTargetAction:
            MinCapacity: 90
        - ScheduledActionName: startWeekEnd
          # Friday 6:00 PM UTC
          Schedule: "cron(0 18 ? * 6 *)"
          ScalableTargetAction:
            MinCapacity: 30
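The cron() expressions in the snippet use the six-field Application Auto Scaling format: minutes, hours, day-of-month, month, day-of-week, and year. A small illustrative parser (not part of the template) makes the fields explicit:

```python
def parse_schedule(expr):
    """Split an Application Auto Scaling cron() schedule into its six
    named fields: minutes, hours, day-of-month, month, day-of-week, year."""
    if not (expr.startswith("cron(") and expr.endswith(")")):
        raise ValueError("expected a cron(...) expression")
    fields = expr[len("cron("):-1].split()
    if len(fields) != 6:
        raise ValueError(f"expected 6 cron fields, got {len(fields)}")
    names = ["minutes", "hours", "day_of_month", "month", "day_of_week", "year"]
    return dict(zip(names, fields))

peak = parse_schedule("cron(0 8 ? * 2 *)")
# day_of_week is 1-7 with 1 = Sunday, so "2" means the action
# fires on Mondays at 8:00 AM UTC
```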

As an example of the effect of scheduled changes, Figure 1 that follows shows the results of hourly changes to the provisioned write capacity. Each change to the provisioned write capacity is immediately reflected in the write usage of the table.

Figure 1: Provisioned write capacity changing hourly

Cleaning up

To avoid incurring future charges, delete the resources created by the template. You can delete all of the resources by deleting the CloudFormation stack by using either the AWS Management Console or the AWS Command Line Interface (AWS CLI). For more information, see Deleting a stack on the AWS CloudFormation console.
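If you prefer to script the cleanup, a minimal sketch using boto3's CloudFormation client might look like the following (the stack name shown is a placeholder; use the name you gave the stack when you created it):

```python
def delete_stack(client, stack_name):
    """Delete the CloudFormation stack and block until deletion completes."""
    client.delete_stack(StackName=stack_name)
    waiter = client.get_waiter("stack_delete_complete")
    waiter.wait(StackName=stack_name)

# Example (assumes boto3 is installed and AWS credentials are configured):
# import boto3
# delete_stack(boto3.client("cloudformation"), "my-scheduled-scaling-stack")
```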


Conclusion

A financial industry customer was able to reduce their costs by 25 percent by using scheduling to adjust their provisioned capacity to reflect expected traffic.

In this post, I showed you how to scale Amazon DynamoDB provisioned capacity using a cron-based Application Auto Scaling schedule. You can use this method to optimize costs for workloads that have known and predictable traffic patterns.

For more information about DynamoDB table capacity modes, see Read/write capacity mode. For more information about Application Auto Scaling for DynamoDB, see Amazon DynamoDB and Application Auto Scaling.

About the Author

Jiten Dedhia is a Sr. Solutions Architect with over 20 years of experience in the software industry. He has worked with global financial services clients, advising them on modernizing their applications with AWS services.