On DynamoDB Provisioning: Simple, Flexible, and Affordable
We are excited to let you know that starting today you can buy and manage Amazon DynamoDB reserved capacity through the DynamoDB Console. Also, today, we reduced the minimum units for reserved capacity to 100, giving startups like yours the flexibility to buy as much or as little capacity as you need. This makes cost savings available at a much lower scale, allowing any startup to get started with reserved capacity and keep commitments small by growing in finer-grained increments.
For those of you using DynamoDB today with provisioned throughput, these features could enable you to save up to 76% with a 3-year reserved capacity purchase. For example, you can make 259 million writes in a month with a 3-year reserved capacity purchase for about $12/month, a 76% savings compared to the regular write capacity price.
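The 259 million figure follows directly from the throughput math: one write capacity unit supports one write per second, sustained. A quick back-of-the-envelope check, assuming a 30-day month:

```python
# One write capacity unit = one write per second, sustained.
write_units = 100
seconds_per_month = 30 * 24 * 60 * 60          # 2,592,000 seconds in a 30-day month
writes_per_month = write_units * seconds_per_month
print(writes_per_month)                         # 259,200,000 writes
```

So 100 write units, the new reserved capacity minimum, is exactly what delivers those ~259 million writes per month.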
DynamoDB Provisioning Background
Before we go into the details of reserved capacity, let’s take a step back and describe the design and architecture of DynamoDB and the genesis of our approach. When we launched DynamoDB, we started with a clean sheet of paper. While we innovated on many dimensions, we invested in identifying an appropriate provisioning model for developers. Within Amazon, each team operates as an independent startup, and like any startup, these teams are conscious of the total cost of ownership of their databases. They care about flexibility, the ability to change performance at a very fine granularity, and minimizing the benchmarking and operational load that database issues place on the team.
Traditionally, provisioning a database required the engineers in each team at Amazon to benchmark hardware, drives, and network, and even tune the network connectivity for replication. The other issue teams tried to address was the granularity of provisioning: we wanted our developers to get a very small table that works at a low cost but also continues to scale with their needs without a lot of hassle. The increments available in the traditional world were simply the smallest possible box, so a team with a 10GB database would over-provision with 100GB+ drives to minimize the operational burden of growth. Yet when the inevitable happened and the database grew to 80GB on that box, a migration was required regardless. While this is certainly easier in the cloud, we asked ourselves: can we make this even simpler?
Our first insight was that developers do not need to worry about instances, drives, and networking. They care about the number of transactions their app needs and the raw storage. Along these lines, developers did not want to migrate their database when they outgrew a box, regardless of how seamless the migration might be. The second insight was that even though partitioning a database is straightforward, re-partitioning while taking live traffic is always a challenge. We wanted to build a system that could scale up or down while taking live traffic, without the developers having to think about the underlying hardware. Similarly, developers do not want to worry about storage: if a table grows from 100MB to 100TB, it should happen seamlessly with zero work required from the developer.
Based on these insights, we developed the provisioning model for DynamoDB. We enabled our customers to provision reads per second and writes per second, and the rest just works. Given that DynamoDB is a fully managed service, there is no software to download and install; it is literally as simple as provision and go. By default, DynamoDB gives each of its customers multi-datacenter durability by replicating data across 3 data centers for each table, and it only acknowledges writes after they have been persisted on disk across at least two data centers. All this works without developers ever having to think about any configuration. That is a huge benefit, especially for startups like yours, who would rather spend time on activities that grow their business than on database operations.

Building on this provisioning model, we emphasized the sacred tenet of flexibility. If a developer creates a table with 10 reads per second and 10 writes per second for an app that ends up getting a Reddit hug, all they have to do is use the DynamoDB console or CLI to re-provision the table to support 100 or 1,000 or even a million TPS, all while taking live traffic. It works just as well going the other direction, which is why our customers love using tools like Dynamic DynamoDB to appropriately provision IOPS based on their actual load.
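To make the re-provisioning step concrete, here is a minimal Python sketch of the UpdateTable request that scales a table's throughput. The table name is hypothetical, and the boto3 call itself is shown in a comment rather than executed, since it requires AWS credentials:

```python
def throughput_update(table_name, reads_per_second, writes_per_second):
    """Build the parameters for a DynamoDB UpdateTable request."""
    return {
        "TableName": table_name,
        "ProvisionedThroughput": {
            "ReadCapacityUnits": reads_per_second,
            "WriteCapacityUnits": writes_per_second,
        },
    }

# Scale a hypothetical table from 10/10 up to 1000/1000 while it serves
# live traffic; no instance sizes, drives, or partitions to think about.
params = throughput_update("game-scores", 1000, 1000)
# In a real session: boto3.client("dynamodb").update_table(**params)
```

The same request shape works in both directions, which is what makes load-driven tools like Dynamic DynamoDB possible.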
DynamoDB Reserved Capacity
With that context set, let’s look at how reserved capacity lowers your total cost of ownership. Early on, we announced Reserved Capacity for DynamoDB, giving our customers the ability to reduce their costs by up to 76% with a three-year reserved capacity purchase. DynamoDB reserved capacity also has a ton of flexibility built into it. For instance, if you buy 500 reserved capacity units, you can use them all on one table with 500 or more IOPS, or you can spread them across multiple tables. More importantly, you never have to assign the reserved capacity: if you have N IOPS in reserved capacity, we automatically calculate your bill such that your first N IOPS come from the reserved capacity. You can buy reserved capacity and view all your reservations directly in the console:
You can then click “Purchase Reserved Capacity”. This brings you to a dialog in which you can select the region, the type of capacity you want to purchase, the term (1 year or 3 years), and an email address. That’s it: you have purchased reserved capacity, which is now available to use across any or all of your tables. You can also view your reservations and total usage to ensure that you have the optimal number of reservations. To learn more about reserved capacity, check out our FAQ page here.
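The "first N IOPS" accounting described above can be sketched in a few lines. This is an illustrative model, not the billing implementation; table names and unit counts are made up:

```python
def split_billed_units(reserved_units, provisioned_units_by_table):
    """Split total provisioned throughput into units covered by the
    reservation and units billed at the regular hourly rate."""
    total = sum(provisioned_units_by_table.values())
    covered = min(total, reserved_units)        # first N units come from the reservation
    regular = total - covered                   # anything beyond N is billed normally
    return covered, regular

# 500 reserved units shared across two tables, with no manual assignment:
covered, regular = split_billed_units(500, {"users": 300, "events": 350})
print(covered, regular)  # 500 units from the reservation, 150 at the regular rate
```

Because the split is computed over your aggregate provisioned throughput, moving capacity between tables never wastes a reserved unit.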
As some of you may recall, we announced support for JSON, and launched DynamoDB Streams in preview (more info here) at re:Invent. Now, we are following up with this reserved capacity buying experience to lower your total cost of ownership.
DynamoDB Free Tier
While you can now buy reserved capacity at scales as low as $10/month, the cost of trying DynamoDB is as low as $0. As part of the AWS Free Tier, DynamoDB customers get 25 writes/second, 25 reads/second, and 25GB of storage for free. That is enough to get started with a production-ready app, with multi-datacenter availability, durability, and zero operational overhead. Then, once you reach production scale, you can use our new reserved capacity buying experience. We hope you will give this feature a try, and let us know what you think.
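As a sketch of what getting started at $0 looks like, here is a CreateTable request sized to stay within those free-tier limits. The table and key names are hypothetical, and the boto3 call is shown in a comment rather than executed:

```python
# A hypothetical table provisioned at exactly the free-tier throughput
# allowance (25 reads/second and 25 writes/second).
free_tier_table = {
    "TableName": "my-first-table",
    "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 25,
        "WriteCapacityUnits": 25,
    },
}
# In a real session: boto3.client("dynamodb").create_table(**free_tier_table)
```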