Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. Amazon DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS, so they don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
Amazon DynamoDB takes away one of the main stumbling blocks of scaling databases, the management of the database software and the provisioning of hardware needed to run it. Customers can deploy a non-relational database in a matter of minutes. DynamoDB automatically partitions and re-partitions your data and provisions additional server capacity as your table size grows or you increase your provisioned throughput. In addition, Amazon DynamoDB synchronously replicates data across three facilities in an AWS Region, giving you high availability and data durability.
Amazon DynamoDB stores three geographically distributed replicas of each table to enable high availability and data durability. Read consistency represents the manner and timing in which the successful write or update of a data item is reflected in a subsequent read operation of that same item. Amazon DynamoDB exposes logic that enables you to specify the consistency characteristics you desire for each read request within your application.
When reading data from Amazon DynamoDB, users can specify whether they want the read to be eventually consistent or strongly consistent:

Eventually Consistent Reads (Default): the eventual consistency option maximizes your read throughput. However, an eventually consistent read might not reflect the results of a recently completed write. Consistency across all copies of data is usually reached within a second; repeating a read after a short time should return the updated data.

Strongly Consistent Reads: a strongly consistent read returns a result that reflects all writes that received a successful response prior to the read.
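For illustration, here is a minimal sketch of both read modes using the AWS SDK for Python (boto3); the table name, key attribute, and region are hypothetical:

```python
import boto3

# Region and credentials are taken from your environment.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Default read: eventually consistent (higher throughput, but it may
# briefly lag a recently completed write).
eventual = dynamodb.get_item(
    TableName="Users",                    # hypothetical table
    Key={"UserID": {"S": "alice"}},
)

# Strongly consistent read: reflects all writes that received a
# successful response prior to the read.
strong = dynamodb.get_item(
    TableName="Users",
    Key={"UserID": {"S": "alice"}},
    ConsistentRead=True,
)
```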
Amazon DynamoDB supports fast in-place updates. You can increment or decrement a numeric attribute in a row using a single API call. Similarly, you can atomically add to or remove from a set of strings. View our documentation for more information on atomic updates.
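As a minimal sketch using the AWS SDK for Python (boto3), where the table, key, and attribute names are hypothetical, an atomic increment and atomic set changes might look like this:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Atomically increment a numeric attribute and add a member to a
# string set in one call. ADD on a Number increments; ADD on a
# String Set unions in the new members.
dynamodb.update_item(
    TableName="GameScores",               # hypothetical table
    Key={"UserID": {"S": "alice"}},
    UpdateExpression="ADD Score :inc, Badges :new_badge",
    ExpressionAttributeValues={
        ":inc": {"N": "1"},
        ":new_badge": {"SS": ["night-owl"]},
    },
)

# DELETE atomically removes members from a set.
dynamodb.update_item(
    TableName="GameScores",
    Key={"UserID": {"S": "alice"}},
    UpdateExpression="DELETE Badges :old_badge",
    ExpressionAttributeValues={":old_badge": {"SS": ["rookie"]}},
)
```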
Amazon DynamoDB runs exclusively on Solid State Drives (SSDs). SSDs help us achieve our design goals of predictable low-latency response times for storing and accessing data at any scale. The high I/O performance of SSDs also enables us to serve high-scale request workloads cost efficiently, and to pass this efficiency along in low request pricing.
As with any product, we encourage potential customers of Amazon DynamoDB to consider the total cost of a solution, not just a single pricing dimension. The total cost of servicing a database workload is a function of the request traffic requirements and the amount of data stored. Most database workloads are characterized by a requirement for high I/O (high reads/sec and writes/sec) per GB stored. Amazon DynamoDB is built on SSDs, which raises the cost per GB stored relative to spinning media, but also allows us to offer very low request costs. Based on what we see in typical database workloads, we believe that the total bill for using the SSD-based DynamoDB service will usually be lower than the cost of using a typical spinning media-based relational or non-relational database. If you have a use case that involves storing a large amount of data that you rarely access, then DynamoDB may not be right for you. We recommend that you use S3 for such use cases.
It should also be noted that the storage cost reflects the cost of storing multiple copies of each data item across multiple facilities within an AWS Region.
No. DynamoDB offers seamless scaling so you can start small and scale up and down in line with your requirements. If you need fast, predictable performance at any scale then DynamoDB may be the right choice for you.
Go to the Amazon DynamoDB Detail Page and click “Sign Up” to get started with Amazon DynamoDB today. From there, you can begin interacting with Amazon DynamoDB using either the AWS Management Console or Amazon DynamoDB APIs. If you are using the AWS Management Console, you can create a table with Amazon DynamoDB and begin exploring with just a few clicks.
Amazon DynamoDB supports key-value GET/PUT operations using a user-defined primary key. The primary key is the only required attribute for items in a table and it uniquely identifies each item. You specify the primary key when you create a table.
A primary key can either be a single-attribute hash key or a composite hash-range key. A single attribute hash primary key could be, for example, “UserID”. This would allow you to quickly read and write data for an item associated with a given user ID.
A composite hash-range key is indexed as a hash key element and a range key element. This multi-part key maintains a hierarchy between the first and second element values. For example, a composite hash-range key could be a combination of “UserID” (hash) and “Timestamp” (range). Holding the hash key element constant, you can search across the range key element to retrieve items. This would allow you to use the Query API to, for example, retrieve all items for a single UserID across a range of timestamps.
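To make the two key schemas concrete, here is a sketch of creating a table with a composite hash-range key using boto3; the table name, attribute names, and throughput values are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# A composite hash-range primary key: UserID (hash) plus Timestamp (range).
dynamodb.create_table(
    TableName="UserEvents",               # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "UserID", "AttributeType": "S"},
        {"AttributeName": "Timestamp", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "UserID", "KeyType": "HASH"},      # hash key element
        {"AttributeName": "Timestamp", "KeyType": "RANGE"},  # range key element
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```

For a single-attribute hash key, you would simply omit the RANGE entry and the second attribute definition.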
After you have created a table using the AWS Management Console or CreateTable API, you can use the PutItem or BatchWriteItem APIs to insert items. Then you can use the GetItem, BatchGetItem, or, if composite primary keys are enabled and in use in your table, the Query API to retrieve the item(s) you added to the table.
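Continuing the hypothetical UserEvents table above, the insert-then-retrieve flow might look like this with boto3's resource interface, which handles type marshalling for you:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("UserEvents")

# Insert an item, then read it back by its full primary key.
table.put_item(Item={"UserID": "alice", "Timestamp": 1365000000, "Event": "login"})
item = table.get_item(Key={"UserID": "alice", "Timestamp": 1365000000})["Item"]

# With a composite key, Query retrieves all items for one hash key
# across a range of range-key values.
events = table.query(
    KeyConditionExpression=Key("UserID").eq("alice")
    & Key("Timestamp").between(1365000000, 1365086400)
)["Items"]
```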
Yes, you can specify a condition that must be satisfied for a PUT, update, or delete operation on an item to be completed. For example, you could choose to update an item only if it has a certain value. You could also choose to PUT an item into the table only if no record exists for the primary key you have specified. Conditional operations allow users to implement optimistic concurrency control systems on DynamoDB. For more information on conditional operations, please see our documentation.
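A sketch of the "insert only if no record exists" case, again with hypothetical names; a failed condition surfaces as a ConditionalCheckFailedException:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

try:
    # Insert only if no item with this primary key already exists.
    dynamodb.put_item(
        TableName="Users",                # hypothetical table
        Item={"UserID": {"S": "alice"}, "Plan": {"S": "free"}},
        ConditionExpression="attribute_not_exists(UserID)",
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("An item with that primary key already exists.")
    else:
        raise
```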
Yes, Amazon DynamoDB allows atomic increment and decrement operations on scalar values.
Today’s web-based applications generate and consume massive amounts of data. For example, an online game might start out with only a few thousand users and a light database workload consisting of 10 writes per second and 50 reads per second. However, if the game becomes successful, it may rapidly grow to millions of users and generate tens (or even hundreds) of thousands of writes and reads per second. It may also create terabytes or more of data per day. Developing your applications against Amazon DynamoDB enables you to start small and simply dial up your request capacity for a table as your requirements scale, without incurring downtime. You pay highly cost-efficient rates for the request capacity you provision, and let Amazon DynamoDB do the work of partitioning your data and traffic over sufficient server capacity to meet your needs. Amazon DynamoDB does the database management and administration, and you simply store and request your data. Automatic replication and failover provides built-in fault tolerance, high availability, and data durability. Amazon DynamoDB gives you the peace of mind that your database is fully managed and can grow with your application requirements.
While Amazon DynamoDB tackles the core problems of database scalability, management, performance, and reliability, it does not have all the functionality of a relational database. It does not support complex relational queries (e.g. joins) or complex transactions. If your workload requires this functionality, or you are looking for compatibility with an existing relational engine, you may wish to run a relational engine on Amazon RDS or Amazon EC2. While relational database engines provide robust features and functionality, scaling a workload beyond a single relational database instance is highly complex and requires significant time and expertise. As such, if you anticipate scaling requirements for your new application and do not need relational features, Amazon DynamoDB may be the best choice for you.
Both services are non-relational databases that remove the work of database administration. Amazon DynamoDB focuses on providing seamless scalability and fast, predictable performance. It runs on solid state disks (SSDs) for low-latency response times, and there are no limits on the request capacity or storage size for a given table. This is because Amazon DynamoDB automatically partitions your data and workload over a sufficient number of servers to meet the scale requirements you provide. In contrast, a table in Amazon SimpleDB has a strict storage limitation of 10 GB and is limited in the request capacity it can achieve (typically under 25 writes/second); it is up to you to manage the partitioning and re-partitioning of your data over additional SimpleDB tables if you need additional scale. While SimpleDB has scaling limitations, it may be a good fit for smaller workloads that require query flexibility. Amazon SimpleDB automatically indexes all item attributes and thus supports query flexibility at the cost of performance and scale.
Amazon CTO Werner Vogels' DynamoDB blog post provides additional context on the evolution of non-relational database technology at Amazon.
Amazon DynamoDB stores structured data, indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 64KB. Amazon S3 stores unstructured blobs and is suited for storing large objects of up to 5 TB. In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
Amazon DynamoDB lets you specify the request throughput you want your table to be able to achieve. Behind the scenes, the service handles the provisioning of resources to achieve the requested throughput rate. Rather than asking you to think about instances, hardware, memory, and other factors that could affect your throughput rate, we simply ask you to provision the throughput level you want to achieve. This is the provisioned throughput model of service.
Amazon DynamoDB lets you specify your throughput needs in terms of units of read capacity and write capacity for your table. During creation of a table, you specify your required read and write capacity needs and Amazon DynamoDB automatically partitions and reserves the appropriate amount of resources to meet your throughput requirements. To decide on the required read and write throughput values, consider the number of read and write data plane API calls you expect to perform per second. If at any point you anticipate traffic growth that may exceed your provisioned throughput, you can simply update your provisioned throughput values via the AWS Management Console or Amazon DynamoDB APIs. You can also reduce the provisioned throughput value for a table as demand decreases. Amazon DynamoDB will remain available while scaling its throughput level up or down.
When storing data, Amazon DynamoDB divides a table into multiple partitions and distributes the data based on the hash key element of the primary key. While allocating capacity resources, Amazon DynamoDB assumes a relatively random access pattern across all primary keys. You should set up your data model so that your requests result in a fairly even distribution of traffic across primary keys. If a table has a very small number of heavily accessed hash key elements, possibly even a single very heavily used hash key element, traffic is concentrated on a small number of partitions – potentially only one partition. If the workload is heavily unbalanced, meaning disproportionately focused on one or a few partitions, the operations will not achieve the overall provisioned throughput level. To get the most out of Amazon DynamoDB throughput, build tables where the hash key element has a large number of distinct values, and values are requested fairly uniformly, as randomly as possible. An example of a good primary key is CustomerID if the application has many customers and requests made to various customer records tend to be more or less uniform. An example of a heavily skewed primary key is “Product Category Name” where certain product categories are more popular than the rest.
A unit of Write Capacity enables you to perform one write per second for items of up to 1 KB in size. Similarly, a unit of Read Capacity enables you to perform one strongly consistent read per second (or two eventually consistent reads per second) of items of up to 4 KB in size. Larger items will require more capacity. You can calculate the number of units of read and write capacity you need as follows:
Units of Capacity required for writes = Number of item writes per second x item size in KB (rounded up to the nearest whole number)
Units of Capacity required for reads* = Number of item reads per second x (item size in KB / 4) (rounded up to the nearest whole number)
* If you use eventually consistent reads you’ll get twice the throughput in terms of reads per second.
Here's an example. Suppose you have a DynamoDB table with items that are 3 KB in size. If you want to do 10 writes per second, you will need 10 x 3 = 30 write capacity units. If you want to do 10 reads per second, you will need 10 x (3 / 4) = 7.5, rounded up to 8 read capacity units.
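As a small sketch, the formulas above can be captured in a pair of helpers (Python is used here purely for illustration):

```python
import math

def write_capacity_units(writes_per_second: int, item_size_kb: float) -> int:
    # One write capacity unit handles one 1 KB write per second;
    # item size is rounded up to the nearest whole KB.
    return writes_per_second * math.ceil(item_size_kb)

def read_capacity_units(reads_per_second: int, item_size_kb: float,
                        eventually_consistent: bool = False) -> int:
    # One read capacity unit handles one strongly consistent 4 KB
    # read per second.
    units = math.ceil(reads_per_second * item_size_kb / 4)
    # Eventually consistent reads deliver twice the throughput per unit.
    return math.ceil(units / 2) if eventually_consistent else units

print(write_capacity_units(10, 3))  # 30, as in the example above
print(read_capacity_units(10, 3))   # 8 (7.5 rounded up)
```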
Amazon DynamoDB assumes a relatively random access pattern across all primary keys. You should set up your data model so that your requests result in a fairly even distribution of traffic across primary keys. If you have a highly uneven or skewed access pattern, you may not be able to achieve your level of provisioned throughput.
When storing data, Amazon DynamoDB divides a table into multiple partitions and distributes the data based on the hash key element of the primary key. The provisioned throughput associated with a table is also divided among the partitions; each partition's throughput is managed independently based on the quota allotted to it. There is no sharing of provisioned throughput across partitions. Consequently, a table in Amazon DynamoDB is best able to meet the provisioned throughput levels if the workload is spread fairly uniformly across the hash key values. Distributing requests across hash key values distributes the requests across partitions, which helps achieve your full provisioned throughput level.
If you have an uneven workload pattern across primary keys and are unable to achieve your provisioned throughput level, you may be able to meet your throughput needs by increasing your provisioned throughput level further, which will give more throughput to each partition. However, we recommend that you consider modifying your request pattern or your data model in order to achieve a relatively random access pattern across primary keys.
DynamoDB is designed to scale without limits. However, if you wish to exceed throughput rates of 10,000 write capacity units or 10,000 read capacity units for an individual table, you must first contact Amazon through this online form. If you wish to provision more than 20,000 write capacity units or 20,000 read capacity units from a single subscriber account you must first contact us using the form described above.
The smallest provisioned throughput you can request is 1 write capacity unit and 1 read capacity unit.
This falls within the free tier which allows for 5 units of write capacity and 10 units of read capacity. The free tier applies at the account level, not the table level. In other words, if you add up the provisioned capacity of all your tables, and if the total capacity is no more than 5 units of write capacity and 10 units of read capacity, your provisioned capacity would fall into the free tier.
Yes. Amazon DynamoDB allows you to change your provisioned throughput level by up to 100% with a single UpdateTable API call. If you wish to increase your throughput by more than 100%, you can simply call UpdateTable again.
For example, if your table has 1,000 units of write capacity provisioned, you could not update your table to 3,000 with a single API call as that is more than the maximum allowed change for a single UpdateTable operation. To increase your throughput from 1,000 to 3,000 units of write capacity, simply call UpdateTable to first double your throughput to 2,000, then call UpdateTable a second time to reach 3,000 writes/second.
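A sketch of that two-step increase with boto3; the table name and read capacity value are hypothetical, and UpdateTable requires both throughput values to be supplied:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def set_throughput(table_name: str, read_units: int, write_units: int) -> None:
    # UpdateTable takes the new absolute throughput values.
    dynamodb.update_table(
        TableName=table_name,
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )

# A single call can raise throughput by at most 100%, so going from
# 1,000 to 3,000 write capacity units takes two calls.
set_throughput("UserEvents", 500, 2000)  # first double: 1,000 -> 2,000
# ...wait for the table to return to ACTIVE status, then:
set_throughput("UserEvents", 500, 3000)  # 2,000 -> 3,000 (a 50% increase)
```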
Every Amazon DynamoDB table pre-provisions the resources it needs to achieve the throughput rate you asked for. You are billed at an hourly rate for as long as your table holds on to those resources. For a complete list of prices with examples, see the DynamoDB pricing page.
There are two ways to update the provisioned throughput of an Amazon DynamoDB table. You can either make the change in the management console, or else you can use the UpdateTable API call. You may change your throughput by up to 100% with a single API call, as described above: “Is there any limit on how much I can change my provisioned throughput with a single API call?"
Amazon DynamoDB will remain available while your provisioned throughput level increases or decreases.
You can increase your provisioned throughput as often as you want. You can decrease it four times per day. A day is defined according to the GMT time zone. For example, if you decrease the provisioned throughput for your table four times on December 12th, you won’t be able to decrease the provisioned throughput for that table again until 12:01am GMT on December 13th.
Keep in mind that you can’t change your provisioned throughput if your Amazon DynamoDB table is still in the process of responding to your last request to change provisioned throughput. Use the management console or the DescribeTable API to check the status of your table. If the status is “CREATING”, “DELETING”, or “UPDATING”, you won’t be able to adjust the throughput of your table. Please wait until you have a table in “ACTIVE” status and try again.
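A minimal polling sketch with boto3, assuming the hypothetical table name used earlier:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def wait_until_active(table_name: str, delay_seconds: int = 5) -> None:
    # Poll DescribeTable until the table leaves the
    # CREATING/UPDATING/DELETING states.
    while True:
        status = dynamodb.describe_table(
            TableName=table_name)["Table"]["TableStatus"]
        if status == "ACTIVE":
            return
        time.sleep(delay_seconds)

wait_until_active("UserEvents")
```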
Yes. For a given allocation of resources, the read-rate that a DynamoDB table can achieve is different for strongly consistent and eventually consistent reads. If you request “1,000 read capacity units”, DynamoDB will allocate sufficient resources to achieve 1,000 strongly consistent reads per second of items up to 4 KB. If you want to achieve 1,000 eventually consistent reads of items up to 4 KB, you will need only half of that capacity (i.e., 500 read capacity units). For additional guidance on choosing the appropriate throughput rate for your table, see our provisioned throughput guide.
Yes. Larger items may require that you provision additional throughput capacity to achieve the same throughput rate. For additional guidance on choosing the appropriate throughput rate for your table, see our provisioned throughput guide.
If your application performs more reads/second or writes/second than your table’s provisioned throughput capacity allows, requests above your provisioned capacity will be throttled and you will receive 400 error codes. For instance, if you had asked for 1,000 write capacity units and try to do 1,500 writes/second of 1 KB items, DynamoDB may only allow 1,000 writes/second to go through and you will receive error code 400 on your extra requests. You should use CloudWatch to monitor your request rate to ensure that you always have enough provisioned throughput to achieve the request rate that you need.
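As a sketch using boto3, a throttled request surfaces as a ClientError whose code is ProvisionedThroughputExceededException; note that the SDK also performs some retries automatically, so an explicit loop like this is only one possible approach. Names are hypothetical:

```python
import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def put_with_backoff(table_name: str, item: dict, max_attempts: int = 5):
    # Retry throttled writes with exponential backoff.
    for attempt in range(max_attempts):
        try:
            return dynamodb.put_item(TableName=table_name, Item=item)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code != "ProvisionedThroughputExceededException":
                raise
            time.sleep(2 ** attempt * 0.05)  # 50 ms, 100 ms, 200 ms, ...
    raise RuntimeError("still throttled; consider raising provisioned throughput")
```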
DynamoDB publishes your consumed throughput capacity as a CloudWatch metric. You can set an alarm on this metric so that you will be notified if you get close to your provisioned capacity.
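For example, a sketch of such an alarm with boto3; the alarm name, table name, and thresholds are hypothetical, and the threshold assumes a 1,000-unit write provision measured over five-minute periods:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when consumed writes exceed 80% of a 1,000-unit provision:
# the 5-minute Sum of ConsumedWriteCapacityUnits vs. 1,000 units
# * 300 seconds * 0.8.
cloudwatch.put_metric_alarm(
    AlarmName="UserEvents-write-capacity-80pct",
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedWriteCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "UserEvents"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000 * 300 * 0.8,
    ComparisonOperator="GreaterThanThreshold",
)
```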
In general, decreases in throughput will take anywhere from a few seconds to a few minutes, while increases in throughput will typically take anywhere from a few minutes to a few hours.
We strongly recommend against scheduling increases in throughput to occur at almost the same time that the extra throughput is needed. Instead, provision throughput capacity sufficiently far in advance to ensure that it is there when you need it.
The data model for Amazon DynamoDB is as follows:

Table: A table is a collection of data items, just as a table in a relational database is a collection of rows. Each table can have an unlimited number of data items. Amazon DynamoDB is schema-less, in that the data items in a table need not have the same attributes or even the same number of attributes. Each table must have a primary key.

Item: An item is composed of a primary key and a flexible number of attributes.

Attribute: Each attribute associated with a data item is composed of an attribute name (e.g. “Color”) and a value or set of values (e.g. “Red” or “Red, Yellow, Green”).
The total size of an item, including attribute names and attribute values, cannot exceed 64KB.
There is no limit to the number of attributes that an item can have. However, the total size of an item, including attribute names and attribute values, cannot exceed 64KB.
Amazon DynamoDB supports three scalar data types: Number, String, and Binary. Additionally, Amazon DynamoDB supports multi-valued types: Number Set, String Set, and Binary Set.
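For reference, a sketch of a single (hypothetical) item expressed in the low-level attribute-value format, exercising each supported type:

```python
# Each attribute value names its type; numbers are sent as strings.
item = {
    "UserID":     {"S": "alice"},                   # String
    "Score":      {"N": "42"},                      # Number
    "Avatar":     {"B": b"\x89PNG..."},             # Binary (bytes)
    "Badges":     {"SS": ["rookie", "night-owl"]},  # String Set
    "HighScores": {"NS": ["10", "42"]},             # Number Set
    "Blobs":      {"BS": [b"\x00", b"\x01"]},       # Binary Set
}
```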
No. There is no limit to the amount of data you can store in an Amazon DynamoDB table. As the size of your data set grows, Amazon DynamoDB will automatically spread your data over sufficient machine resources to meet your storage requirements.
No, you can increase the throughput you have provisioned for your table using the UpdateTable API or in the AWS Management Console. DynamoDB is able to operate at massive scale and there is no theoretical limit on the maximum throughput you can achieve. DynamoDB automatically divides your table across multiple partitions, where each partition is an independent parallel computation unit. DynamoDB can achieve increasingly high throughput rates by adding more partitions.
If you wish to exceed throughput rates of 10,000 writes/second or 10,000 reads/second, you must first contact Amazon through this online form.
Yes. Amazon DynamoDB is designed to scale its provisioned throughput up or down while still remaining available.
No. Amazon DynamoDB removes the need to partition across database tables for throughput scalability.
The service runs across Amazon’s proven, high-availability data centers. The service replicates data across three facilities in an AWS Region to provide fault tolerance in the event of a server failure or Availability Zone outage.
To achieve high uptime and durability, Amazon DynamoDB synchronously replicates data across three facilities within an AWS Region.
Local secondary indexes enable some common queries, which would otherwise require retrieving a large number of items and then filtering the results, to run more quickly and cost-efficiently. This means your applications can rely on more flexible queries based on a wider range of attributes.
Before the launch of local secondary indexes, if you wanted to find specific items within a hash key bucket (items that share the same hash key), DynamoDB would have fetched all objects that share a single hash key and filtered the results accordingly. For instance, consider an e-commerce application that stores customer order data in a DynamoDB table with a hash-range schema of customer ID (hash) and order timestamp (range). Without LSI, to find an answer to the question “Display all orders made by Customer X with shipping date in the past 30 days, sorted by shipping date”, you had to use the Query API to retrieve all the objects under the hash key “X”, sort the results by shipment date, and then filter out older records.
With local secondary indexes, we are simplifying this experience. Now, you can create an index on the “shipping date” attribute and execute this query efficiently, retrieving only the necessary items. This significantly reduces the latency and cost of your queries, as you will retrieve only items that meet your specific criteria. Moreover, it also simplifies the programming model for your application, as you no longer have to write custom logic to filter the results. We call this new secondary index a ‘local’ secondary index because it is used along with the hash key and hence allows you to search locally within a hash key bucket. So while previously you could only search using the hash key and the range key, now you can also search using a secondary index in place of the range key, thus expanding the number of attributes that can be used for queries that can be conducted efficiently.
Data attributes are redundantly copied into the local secondary indexes you define. These attributes include the table hash and range key, plus the alternate range key you define. You can also redundantly store other data attributes in the local secondary index, in order to access those other attributes without having to access the table itself.
Local secondary indexes are not appropriate for every application. They introduce some constraints on the volume of data you can store within a single hash key value. For more information, see the FAQ items below about item collections.
The set of attributes that is copied into a local secondary index is called a projection. The projection determines the attributes that you will be able to retrieve with the most efficiency. When you query a local secondary index, Amazon DynamoDB can access any of the projected attributes, with the same performance characteristics as if those attributes were in a table of their own. If you need to retrieve any attributes that are not projected, Amazon DynamoDB will automatically fetch those attributes from the table.
When you define a local secondary index, you need to specify the attributes that will be projected into the index. At a minimum, each index entry consists of: (1) the table hash key value, (2) an attribute to serve as the index range key, and (3) the table range key value.
Beyond the minimum, you can also choose a user-specified list of other non-key attributes to project into the index. You can even choose to project all attributes into the index, in which case the index replicates the same data as the table itself, but the data is organized by the alternate range key you specify.
You need to create an LSI at the time of table creation. It can’t currently be added later on. To create an LSI, specify the following two parameters:

Indexed range key: the attribute on which indexing and querying are performed.

Projected attributes: the list of attributes from the table that are copied directly into the local secondary index.
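Continuing the hypothetical Orders example from above, a sketch of creating a table with an LSI on the shipping date using boto3 (all names and throughput values are illustrative):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Orders table keyed on CustomerID (hash) + OrderTimestamp (range),
# with an LSI on ShipDate so orders can be queried by shipping date.
dynamodb.create_table(
    TableName="Orders",                   # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "OrderTimestamp", "AttributeType": "N"},
        {"AttributeName": "ShipDate", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "CustomerID", "KeyType": "HASH"},
        {"AttributeName": "OrderTimestamp", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[{
        "IndexName": "ShipDateIndex",
        "KeySchema": [
            {"AttributeName": "CustomerID", "KeyType": "HASH"},  # same hash key
            {"AttributeName": "ShipDate", "KeyType": "RANGE"},   # indexed range key
        ],
        # Project only key attributes; reads of non-projected
        # attributes fall back to a fetch from the table.
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
)
```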
Local secondary indexes are updated automatically when the primary index is updated. Similar to reads from a primary index, LSIs support both strongly consistent and eventually consistent read options.
No, not necessarily. Local secondary indexes only reference those items that contain the indexed range key specified for that LSI. DynamoDB’s flexible schema means that not all items will necessarily contain all attributes.
This means a local secondary index can be sparsely populated compared with the primary index. Because local secondary indexes are sparse, they can efficiently support queries on attributes that are uncommon.
For example, in the Orders example described above, a customer may have some additional attributes in an item that are included only if the order is canceled (such as CanceledDateTime, CanceledReason). For queries related to canceled orders, a local secondary index on either of these attributes would be efficient, since the only items referenced in the index would be those that had these attributes present.
Local secondary indexes can only be queried via the Query API.
To query a local secondary index, explicitly reference the index in addition to the name of the table you’d like to query. You must specify the index hash attribute name and value. You can optionally specify a condition against the index key range attribute.
Your query can retrieve non-projected attributes stored in the primary index by performing a table fetch operation, with a cost of additional read capacity units.
Both strongly consistent and eventually consistent reads are supported for query using local secondary index.
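A sketch of the earlier shipping-date query against the hypothetical ShipDateIndex, using boto3; the customer ID and cutoff date are illustrative:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("Orders")

# All orders by customer X shipped on or after a cutoff date, via the LSI.
recent = table.query(
    IndexName="ShipDateIndex",
    KeyConditionExpression=Key("CustomerID").eq("X")
    & Key("ShipDate").gte("2013-03-15"),
    ConsistentRead=True,   # LSI queries also support strong consistency
)["Items"]
```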
Local secondary indexes must be defined at time of table creation. The primary index of the table must use a hash-range composite key.
No, it’s not possible to add local secondary indexes to existing tables at this time. We are working on adding this capability and will be releasing it in the future. When you create a table with local secondary indexes, you may decide to create a local secondary index for future use by defining a range key element that is currently not used. Since local secondary indexes are sparse, this index costs nothing until you decide to use it.
Each table can have up to five local secondary indexes.
Each table can have up to 20 projected non-key attributes, in total across all local secondary indexes within the table. Each index may also specify that all non-key attributes from the primary index are projected.
No, an index cannot be modified once it is created. We are working to add this capability in the future.
No, local secondary indexes cannot be removed from a table once they are created at this time. Of course, they are deleted if you also decide to delete the entire table. We are working on adding this capability and will be releasing it in the future.
You don’t need to explicitly provision capacity for a local secondary index. It consumes provisioned capacity as part of the table with which it is associated.
Reads and writes to LSIs consume capacity by the standard formula of 1 write capacity unit per 1 KB of data written per second and 1 read capacity unit per 4 KB of data read per second, with the following differences:

Writes that add, update, or delete an indexed attribute are also propagated to the affected local secondary indexes, and those index updates consume write capacity units in addition to the write against the table itself.

Queries against a local secondary index that retrieve non-projected attributes incur an additional fetch from the table, which consumes extra read capacity units based on the size of the items fetched.
Local secondary indexes consume storage for the attribute name and value of each LSI’s primary and index keys, for all projected non-key attributes, plus 100 bytes per item reflected in the LSI.
All scalar data types (Number, String, Binary) can be used for the range key element of the local secondary index key. Set types cannot be used.
All data types (including set types) can be projected into a local secondary index.
In Amazon DynamoDB, an item collection is any group of items that have the same hash key, across a table and all of its local secondary indexes. Traditional partitioned (or sharded) relational database systems call these shards or partitions, referring to all database items or rows stored under a hash key.
Item collections are automatically created and maintained for every table that includes local secondary indexes. DynamoDB stores each item collection within a single disk partition.
Every item collection in Amazon DynamoDB is subject to a maximum size limit of 10 gigabytes. For any distinct hash key value, the sum of the item sizes in the table plus the sum of the item sizes across all of that table's local secondary indexes must not exceed 10 GB.
The 10 GB limit for item collections does not apply to tables without local secondary indexes; only tables that have one or more local secondary indexes are affected.
Although individual item collections are limited in size, the storage size of an overall table with local secondary indexes is not limited. The total size of an indexed table in Amazon DynamoDB is effectively unlimited, provided the total storage size (table and indexes) for any one hash key does not exceed the 10 GB threshold.
DynamoDB’s write APIs (PutItem, UpdateItem, DeleteItem, and BatchWriteItem) include an option that allows the API response to include an estimate of the relevant item collection’s size. The estimate includes lower and upper bounds for the size of the data in a particular item collection, measured in gigabytes.
We recommend that you instrument your application to monitor the sizes of your item collections. Your applications should examine the API responses regarding item collection size, and log an error message whenever an item collection exceeds a user-defined limit (8 GB, for example). This would provide an early warning system, letting you know that an item collection is growing larger, but giving you enough time to do something about it.
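As a sketch, the current AWS SDK for Python exposes this option as the ReturnItemCollectionMetrics parameter; note that the metrics are only returned for tables with local secondary indexes, and the 8 GB warning threshold here is the hypothetical user-defined limit mentioned above:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

COLLECTION_WARNING_GB = 8  # user-defined early-warning threshold

response = dynamodb.put_item(
    TableName="Orders",                   # hypothetical table with an LSI
    Item={"CustomerID": {"S": "X"}, "OrderTimestamp": {"N": "1365000000"}},
    ReturnItemCollectionMetrics="SIZE",
)

# SizeEstimateRangeGB holds [lower, upper] bounds for the collection size.
metrics = response.get("ItemCollectionMetrics")
if metrics and metrics["SizeEstimateRangeGB"][1] > COLLECTION_WARNING_GB:
    print("WARNING: item collection for hash key %s is nearing the 10 GB limit"
          % metrics["ItemCollectionKey"])
```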
If a particular item collection exceeds the 10 GB limit, then you will not be able to write new items, or increase the size of existing items, for that particular hash key. Read and write operations that shrink the size of the item collection are still allowed. Other item collections in the table are not affected.
To address this problem, you can remove items or reduce item sizes in the collection that has exceeded 10 GB. Alternatively, you can introduce new items under a new hash key value to work around this problem. If your table includes historical data that is infrequently accessed, consider archiving the historical data to Amazon S3, Amazon Glacier, or another data store.
You have the ability to monitor table performance for free using Amazon CloudWatch in the AWS Management Console. You have access to information such as: latencies for each operation type, total amount of data stored in the table, request throughput for each API, and any throttled requests in a given time period. You can use this data to proactively scale your database table resources ahead of expected traffic increases.
Yes. DynamoDB supports API-level permissions through AWS Identity and Access Management (IAM) service integration.
For more information about IAM, go to the AWS Identity and Access Management (IAM) detail page at https://aws.amazon.com/iam/.
DynamoDB supports implicit item-level transactions. When you use UpdateItem, PutItem, or DeleteItem, the operation is guaranteed to either succeed or fail atomically. The atomicity of these operations is guaranteed at the item level. Atomicity is also guaranteed for conditional operations and for increment/decrement operations.
Each DynamoDB table has provisioned read-throughput and write-throughput associated with it. You are billed by the hour for that throughput capacity if you exceed the free tier.
Please note that you are charged by the hour for the throughput capacity that you provision for your table, whether or not you are sending requests to your table. If you would like to change your table’s provisioned throughput capacity, you can do so using the AWS Management Console or the UpdateTable API.
In addition, DynamoDB charges for indexed data storage as well as standard internet data transfer fees.
To learn more about DynamoDB pricing, please visit the DynamoDB pricing page.
Here is an example of how to calculate your throughput costs using US East (Northern Virginia) Region pricing. To view prices for other regions, visit our pricing page.
If you create a table and request 10 units of write capacity and 200 units of read capacity of provisioned throughput, you would be charged (at $0.01 per hour for every 10 units of write capacity and $0.01 per hour for every 50 units of read capacity):
$0.01 + (4 x $0.01) = $0.05 per hour
If your throughput needs changed and you increased your reserved throughput requirement to 10,000 units of write capacity and 50,000 units of read capacity, your bill would then change to:
(1,000 x $0.01) + (1,000 x $0.01) = $20/hour
To learn more about DynamoDB pricing, please visit the DynamoDB pricing page.
Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For example, our prices for the Asia Pacific (Tokyo) Region are inclusive of Japan consumption tax.
Reserved Capacity is a billing feature that allows you to obtain discounts on your provisioned throughput capacity in exchange for: (1) a one-time up-front payment, and (2) a commitment to a minimum monthly usage level for the duration of the term of the agreement.
Log into the AWS management console, go to the DynamoDB console page, and then click on “Purchase Reserved Capacity”. This will take you to a form you can fill out to purchase Reserved Capacity. Make sure you have selected the AWS Region in which your Reserved Capacity will be used. Please allow up to two weeks for your purchase request to be processed. You will be notified by email when your purchase request has been processed.
No, you cannot cancel your Reserved Capacity and the one-time payment is not refundable. You will continue to pay for every hour during your Reserved Capacity term regardless of your usage.
The smallest Reserved Capacity offering is 5,000 write capacity units and 5,000 read capacity units.
Not yet. We will provide APIs and add more Reserved Capacity options over time.
Currently, each AWS account can make one Reserved Capacity purchase. We will expand our Reserved Capacity offering in future to allow more Reserved Capacity purchases per account.
No. Reserved Capacity is associated with a single Region.
Yes. When you purchase Reserved Capacity, you are agreeing to a minimum usage level and you pay a discounted rate for that usage level. If you provision more capacity than that minimum level, you will be charged at standard rates for the additional capacity.
Reserved Capacity is automatically applied to your bill. For example, if you purchase 5,000 write capacity units of Reserved Capacity and you have provisioned 6,000, then your Reserved Capacity purchase will automatically cover the cost of 5,000 write capacity units and you will pay standard rates for the remaining 1,000 write capacity units.
A Reserved Capacity purchase is an agreement to pay for a minimum amount of provisioned throughput capacity, for the duration of the term of the agreement, in exchange for discounted pricing. If you use less than your Reserved Capacity, you will still be charged each month for that minimum amount of provisioned throughput capacity.
Yes. Reserved Capacity is applied to the total provisioned capacity within the Region in which you purchased your Reserved Capacity. For example, if you purchased 5,000 write capacity units of Reserved Capacity, then you can apply that to one table with 5,000 write capacity units, or 100 tables with 50 write capacity units, or 1,000 tables with 5 write capacity units, etc.