Amazon DynamoDB features

About DynamoDB

DynamoDB is a serverless, fully managed, distributed NoSQL database service with single-digit millisecond performance at any scale. It offers zero infrastructure management, zero downtime maintenance, and zero maintenance windows. 

Developers can use DynamoDB to build serverless applications that can start small and scale globally. DynamoDB scales to support tables of virtually any size. Availability, durability, and fault tolerance are built-in and cannot be turned off, removing the need to architect your applications for these capabilities.

DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases. It supports key-value and document data models. DynamoDB has a flexible schema so it can easily adapt as your business requirements change without the burden of having to redefine the table schema as you would in relational databases.

With over ten years of pioneering innovation investment, DynamoDB offers limitless scalability with consistent single-digit millisecond performance and up to 99.999% availability.

Performance and scalability

With DynamoDB, there are no servers to provision, patch, or manage, and no software to install, maintain, or operate. DynamoDB has no versions (major, minor, or patch) and no maintenance windows, and maintenance is performed with zero downtime. On-demand mode offers pay-as-you-go pricing, scales to zero when idle, and automatically adjusts table capacity to maintain performance, all with zero administration.

DynamoDB offers warm throughput, which indicates how much read and write traffic a table can instantly serve. Because these resources are already allocated rather than scaled up on first use, requests are handled efficiently from the start, which benefits applications that require quick data access and consistent performance.

As with other database systems, you start by creating a table, which is a collection of items. In DynamoDB, each item in a table is uniquely identified by its primary key. Many applications also benefit from one or more secondary keys so they can search data efficiently by other attributes. DynamoDB lets you create both global and local secondary indexes, which allow you to query the data in a table using a secondary, or alternate, key.

Secondary indexes enhance DynamoDB's performance primarily by enabling efficient querying of data using attributes other than the table's primary key. With secondary indexes, queries on non-key attributes avoid a full table scan, which speeds up those lookups. Global secondary indexes can also be sparse: an item appears in the index only if it contains the index key attributes. In addition to giving you maximum flexibility in how you access your data, you can provision a global secondary index with lower write throughput than its base table, delivering excellent performance at a lower cost.
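
As a minimal sketch of the querying pattern described above, the function below builds the parameters for a boto3 `query` call against a global secondary index. The table name (`Orders`), index name (`CustomerEmailIndex`), and attribute names are hypothetical.

```python
def build_gsi_query(email):
    # Parameters for dynamodb.query(); pass to a boto3 client, e.g.:
    #   boto3.client("dynamodb").query(**build_gsi_query("alice@example.com"))
    # Querying the GSI finds orders by customer email without scanning
    # the table, whose primary key is something else (e.g., OrderId).
    return {
        "TableName": "Orders",              # hypothetical table
        "IndexName": "CustomerEmailIndex",  # hypothetical GSI
        "KeyConditionExpression": "CustomerEmail = :email",
        "ExpressionAttributeValues": {":email": {"S": email}},
    }

params = build_gsi_query("alice@example.com")
print(params["IndexName"])  # CustomerEmailIndex
```

The same request shape works for a local secondary index; only `IndexName` changes.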

Security

DynamoDB encrypts all customer data in transit and at rest by default. For encryption in transit, DynamoDB uses HTTPS, which protects network traffic with Transport Layer Security (TLS). Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key Management Service (AWS KMS). With the addition of the AWS Database Encryption SDK, you can perform attribute-level encryption to further enforce granular access control on data within your table. DynamoDB helps you to build security-sensitive applications that meet strict encryption compliance and regulatory requirements.

Encryption keys provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. You can specify whether DynamoDB should use an AWS owned key (default encryption type), an AWS managed key, or a customer managed key to encrypt user data. The default encryption using AWS KMS keys is provided at no additional charge.
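
The key choice above is expressed through the `SSESpecification` parameter of `create_table` or `update_table`. The helper below is a sketch of the three options; the KMS key ARN would be your own.

```python
def sse_spec(kms_key_arn=None):
    # SSESpecification for dynamodb.create_table() / update_table().
    # Omitting the parameter entirely keeps the default AWS owned key.
    if kms_key_arn is None:
        # AWS managed key (aws/dynamodb)
        return {"Enabled": True, "SSEType": "KMS"}
    # Customer managed key, referenced by its ARN
    return {"Enabled": True, "SSEType": "KMS", "KMSMasterKeyId": kms_key_arn}
```

For example, `client.update_table(TableName="Orders", SSESpecification=sse_spec())` switches a table to the AWS managed key.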

DynamoDB uses AWS Identity and Access Management (IAM) to authenticate and authorize access to resources. You can attach IAM policies and resource-based policies, define attribute-based access control (ABAC) using tags in those policies, and specify conditions for fine-grained access that restrict read or write access down to specific items and attributes in a table, based on identity.
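
As one sketch of such a fine-grained policy, the dictionary below uses the real `dynamodb:LeadingKeys` and `dynamodb:Attributes` condition keys to let a caller read only items whose partition key matches their Cognito identity, and only two attributes of those items. The table name, account ID, and attribute names are hypothetical placeholders.

```python
# A fine-grained IAM policy document, expressed as a Python dict
# (it would be serialized to JSON when attached to a role or user).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                # Partition key must equal the caller's identity
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"],
                # Only these attributes may be requested
                "dynamodb:Attributes": ["UserId", "DisplayName"],
            },
            # Force requests to name specific attributes
            "StringEquals": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
        },
    }],
}
```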

DynamoDB supports gateway virtual private cloud (VPC) endpoints and interface VPC endpoints for connections within a VPC or from on-premises data centers. You can configure private network connectivity from your on-premises applications to DynamoDB through interface VPC endpoints enabled with AWS PrivateLink. This allows customers to simplify private connectivity to DynamoDB and maintain compliance.

Resilience

Point-in-time recovery (PITR) helps protect your DynamoDB tables from accidental write or delete operations. For example, if a test script writes accidentally to a production DynamoDB table or someone mistakenly issues a "DeleteItem" call, PITR has you covered. When enabled, PITR automatically provides continuous backups of your DynamoDB table data with per-second granularity so that you can restore to any given second from within your configured recovery period between 1 and 35 days. 

Using PITR, you can back up tables of any size with no impact on the performance or availability of your production applications as PITR does not use provisioned capacity. PITR backups are automatically encrypted and catalogued, easily discoverable, and retained until you explicitly delete them. You can restore from a backup to a new table across AWS Regions to help meet your multi-regional compliance and regulatory requirements, and to develop a disaster recovery and business continuity plan. 

You can enable PITR with a single click in the AWS Management Console or a single API call, and you can fully automate creation, retention, restoration, and deletion of backups via APIs.
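
The single API call mentioned above is `UpdateContinuousBackups`. A minimal sketch of its parameters (table name hypothetical):

```python
def pitr_request(table_name, enabled=True):
    # Parameters for dynamodb.update_continuous_backups(); pass to a
    # boto3 client. Restores then go through
    # dynamodb.restore_table_to_point_in_time() with a target table name.
    return {
        "TableName": table_name,
        "PointInTimeRecoverySpecification": {
            "PointInTimeRecoveryEnabled": enabled,
        },
    }
```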

On-demand backup and restore allows you to create full backups of your DynamoDB tables’ data for data archiving, helping you meet your corporate and governmental regulatory requirements. You can back up tables from a few megabytes to hundreds of terabytes of data with no impact on performance and availability to your production applications. 

Backups process in seconds regardless of the size of your tables so you don't have to worry about backup schedules or long-running processes. They also don't consume any provisioned capacity. In addition, all backups are automatically encrypted, cataloged, easily discoverable, and retained until explicitly deleted. You can create as many backups for tables of any size, and retain those backups as long as you need them. 

With AWS Backup integration, you can also copy on-demand backups cross-account and cross-Region, create cost allocation tagging for backups, and transition backups to cold storage. You can perform backup and restore operations with a single click in the AWS Management Console or a single API call, and fully automate creation, retention, restoration, and deletion of backups via APIs.
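
The two API calls behind on-demand backup and restore are `CreateBackup` and `RestoreTableFromBackup`. A sketch of their parameters (names are hypothetical):

```python
def backup_request(table, backup_name):
    # Parameters for dynamodb.create_backup(); the response includes a
    # BackupArn identifying the new backup.
    return {"TableName": table, "BackupName": backup_name}

def restore_request(backup_arn, new_table):
    # Parameters for dynamodb.restore_table_from_backup(); restores
    # always go to a new table, never in place.
    return {"BackupArn": backup_arn, "TargetTableName": new_table}
```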

DynamoDB global tables provide active-active replication of your data across your choice of AWS Regions with 99.999% availability. Global tables are multi-active, meaning you can read from and write to any replica, and your globally distributed applications can access data locally in the selected Regions to get single-digit millisecond read and write performance.

Also, global tables automatically scale capacity to accommodate your multi-Region workloads. Global tables improve your application’s multi-Region resiliency and should be considered as part of your organization’s business continuity strategy.
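
With the current (2019.11.21) version of global tables, adding a replica Region is an `UpdateTable` call. A sketch of the parameters, assuming a hypothetical table:

```python
def add_replica_request(table, region):
    # Parameters for dynamodb.update_table() that add a replica of the
    # table in another Region; DynamoDB then backfills and keeps the
    # replica in sync automatically.
    return {
        "TableName": table,
        "ReplicaUpdates": [{"Create": {"RegionName": region}}],
    }
```

A `{"Delete": {"RegionName": ...}}` entry removes a replica the same way.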

DynamoDB is built for mission-critical workloads, including support for atomicity, consistency, isolation, and durability (ACID) transactions for applications that require complex business logic. DynamoDB provides native, server-side support for transactions, simplifying the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables.

DynamoDB supports 100 actions per transaction, improving developer productivity. With support for transactions, developers can extend the scale, performance, and enterprise benefits of DynamoDB to a broader set of mission-critical workloads.
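
A classic all-or-nothing example is moving a balance between two items. The sketch below builds a `TransactWriteItems` request (table and attribute names hypothetical); the `ConditionExpression` on the debit makes the entire transaction fail if funds are insufficient, so neither update is applied.

```python
def build_transfer(table, from_id, to_id, amount):
    # Parameters for dynamodb.transact_write_items(); both updates
    # succeed together or neither is applied.
    amt = {"N": str(amount)}
    return {"TransactItems": [
        {"Update": {
            "TableName": table,
            "Key": {"AccountId": {"S": from_id}},
            "UpdateExpression": "SET Balance = Balance - :a",
            "ConditionExpression": "Balance >= :a",  # guard against overdraft
            "ExpressionAttributeValues": {":a": amt},
        }},
        {"Update": {
            "TableName": table,
            "Key": {"AccountId": {"S": to_id}},
            "UpdateExpression": "SET Balance = Balance + :a",
            "ExpressionAttributeValues": {":a": amt},
        }},
    ]}
```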

Cost-effectiveness

For tables using on-demand capacity mode, DynamoDB instantly accommodates your workloads as they ramp up or down to any previously reached traffic level. You only pay for the reads and writes made by your application. If a workload’s traffic level hits a new peak, DynamoDB adapts rapidly to accommodate the workload.

You also can optionally configure maximum read or write (or both) throughput for individual on-demand tables and associated secondary indexes, making it easy to balance costs and performance. You can use on-demand capacity mode for both new and existing tables, and you can continue using the existing DynamoDB APIs without changing code.
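
The optional cap described above is set through the `OnDemandThroughput` parameter of `UpdateTable`. A sketch, with illustrative values:

```python
def on_demand_cap_request(table, max_rru, max_wru):
    # Parameters for dynamodb.update_table() capping an on-demand
    # table's throughput; a value of -1 removes a cap. Requests beyond
    # the cap are throttled, which bounds the table's cost.
    return {
        "TableName": table,
        "OnDemandThroughput": {
            "MaxReadRequestUnits": max_rru,
            "MaxWriteRequestUnits": max_wru,
        },
    }
```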

For data that is infrequently accessed, the DynamoDB Standard-IA table class reduces storage costs by 60% compared to existing Standard tables while delivering the same performance, durability, and scaling capabilities. It is the most cost-effective option for tables where storage is the dominant cost. The lower storage cost of DynamoDB Standard-IA makes it well suited for long-term storage of infrequently accessed data, such as application logs, ecommerce history, historical gaming data, and old social media posts.

DynamoDB Standard is your default table class and the most cost-effective option for the vast majority of workloads as it offers lower throughput costs than the DynamoDB Standard-IA table class. You can switch between DynamoDB Standard and DynamoDB Standard-IA table classes with no impact on table performance, durability, or availability and without changing your application code. It's easy to manage table classes using the AWS Management Console, AWS CloudFormation, or the AWS CLI/SDK. To learn more about DynamoDB Standard-IA pricing, see the DynamoDB pricing page.
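
Switching table class is an in-place `UpdateTable` call. A minimal sketch:

```python
def table_class_request(table, infrequent=True):
    # Parameters for dynamodb.update_table(); switching table class
    # does not affect performance, durability, or availability.
    return {
        "TableName": table,
        "TableClass": "STANDARD_INFREQUENT_ACCESS" if infrequent else "STANDARD",
    }
```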

DynamoDB provides capacity modes for each table: on demand and provisioned.

  • For workloads that are less predictable and for which you are unsure whether you'll have high utilization, on-demand capacity mode takes care of managing capacity for you, and you pay only for what you consume. On-demand scales to zero when resources are not used.
  • Tables using provisioned capacity mode require you to set read and write capacity. Provisioned capacity mode is more cost-effective when you’re confident you’ll have decent utilization of the provisioned capacity you specify.  You can also reserve capacity by committing to a 1- or 3-year term and receive a significant discount on the cost of provisioned throughput, saving up to 77%. 

For tables using provisioned capacity, DynamoDB auto scaling adjusts throughput within the minimum and maximum capacity you set, based on the observed usage of your application; storage always scales automatically in either capacity mode.

  • If your application traffic grows, DynamoDB increases throughput to accommodate the load.
  • If your application traffic shrinks, DynamoDB scales down so that you pay less for unused capacity.
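
Provisioned-table auto scaling is configured through the Application Auto Scaling service (boto3 client `"application-autoscaling"`). The sketch below builds the two requests for scaling read capacity; minimum, maximum, and target-utilization values are illustrative, and write capacity takes an analogous pair with the write dimension.

```python
def scaling_target(table, min_rcu=5, max_rcu=500):
    # Parameters for autoscaling.register_scalable_target(): the range
    # within which DynamoDB may adjust the table's read capacity.
    return {
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table}",
        "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
        "MinCapacity": min_rcu,
        "MaxCapacity": max_rcu,
    }

def scaling_policy(table, target_utilization=70.0):
    # Parameters for autoscaling.put_scaling_policy(): target tracking
    # keeps consumed capacity near the chosen utilization percentage.
    return {
        "PolicyName": f"{table}-read-scaling",  # hypothetical name
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table}",
        "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_utilization,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    }
```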

Integrations with AWS services

Amazon DynamoDB bulk import and export capabilities provide a simple and efficient way to move data between Amazon S3 and DynamoDB tables without writing any code. DynamoDB import and export capabilities help you easily move, transform, and copy DynamoDB table data across applications, AWS accounts, and AWS Regions with a few clicks in the AWS Management Console or a few API calls. They do not consume any read or write capacity, and do not require you to develop custom solutions or manage additional infrastructure to perform imports and exports. The process is fully managed by DynamoDB, and you can check the status of imports and exports via the AWS console or API calls.

You can import data directly into new DynamoDB tables to help you migrate data from other systems, import test data to help you build new applications, facilitate data sharing between tables and accounts, and simplify your disaster recovery and business continuity plans. Bulk imports from Amazon S3 allow you to import data at any scale, from megabytes to terabytes, using supported formats including CSV, DynamoDB JSON, and Amazon Ion. With bulk imports from Amazon S3, customers can save up to 66% compared to client-based writes using provisioned capacity.

With bulk exports to Amazon S3, you can export data from tables with PITR enabled for any point in time in the last 35 days with a per-second granularity. You can export data to S3 in either DynamoDB JSON or Amazon Ion format. Once you export data from DynamoDB to Amazon S3, you can use other AWS services, such as Amazon Athena and Amazon SageMaker, to analyze your data and extract actionable insights. 
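
A sketch of an export request via `ExportTableToPointInTime` (bucket name hypothetical); omitting `ExportTime` exports the table's current state:

```python
def export_request(table_arn, bucket, export_time=None):
    # Parameters for dynamodb.export_table_to_point_in_time(); the
    # table must have PITR enabled.
    params = {
        "TableArn": table_arn,
        "S3Bucket": bucket,
        "ExportFormat": "DYNAMODB_JSON",  # or "ION" for Amazon Ion
    }
    if export_time is not None:
        # Any second within the table's PITR window (up to 35 days)
        params["ExportTime"] = export_time
    return params
```

Imports go the other way through `dynamodb.import_table()`, which creates a new table from files in S3.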

DynamoDB Streams is a change data capture capability. Whenever an application creates, updates, or deletes items in a table, DynamoDB Streams records a time-ordered sequence of every item-level change in near real time, making it ideal for event-driven applications to consume and act on the changes. All changes are deduplicated and stored for 24 hours.

Applications can also access this log and view the data items as they appeared before and after they were modified in near real time. DynamoDB Streams ensures that each stream record appears exactly once in the stream and, for each modified item, the stream records appear in the same sequence as the actual modifications to the item.
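
A common way to consume this log is an AWS Lambda function triggered by the stream. The sketch below shows the shape of the event a handler receives: each record carries the change type and, depending on the stream's view type (here assumed to be NEW_AND_OLD_IMAGES), the item as it appeared before and after the modification.

```python
def handler(event, context):
    # Lambda handler for a DynamoDB Streams event source mapping.
    changes = []
    for record in event.get("Records", []):
        changes.append({
            "action": record["eventName"],  # INSERT | MODIFY | REMOVE
            "old": record["dynamodb"].get("OldImage"),  # before the change
            "new": record["dynamodb"].get("NewImage"),  # after the change
        })
    return changes
```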

Amazon Kinesis Data Streams for DynamoDB captures item-level changes in your DynamoDB tables to power live dashboards, generate metrics, and deliver data into data lakes. Kinesis Data Streams enables you to build advanced streaming applications such as real-time log aggregation, real-time business analytics, and IoT data capture.

Through Kinesis Data Streams, you also can use Amazon Kinesis Data Firehose to automatically deliver DynamoDB data to other AWS services such as Amazon S3, Amazon OpenSearch Service, and Amazon Redshift.
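
Streaming to Kinesis is enabled per table with one call, `EnableKinesisStreamingDestination`. A minimal sketch of its parameters (the stream ARN would be an existing Kinesis data stream of yours):

```python
def kinesis_destination_request(table, stream_arn):
    # Parameters for dynamodb.enable_kinesis_streaming_destination();
    # item-level changes then flow to the named Kinesis data stream.
    return {"TableName": table, "StreamArn": stream_arn}
```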

To help you monitor database performance, DynamoDB is integrated with Amazon CloudWatch, which collects and processes raw database performance data. You can use CloudWatch to create customized views and dashboards of metrics and alarms for your DynamoDB databases. This monitoring capability is enabled by default at no additional charge. You can also create alarms that notify you automatically when a metric crosses a threshold you define.
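
As one example of such an alarm, the sketch below builds a `PutMetricAlarm` request (boto3 CloudWatch client) that fires on sustained read throttling; the alarm name, SNS topic, threshold, and periods are illustrative choices, not recommendations.

```python
def throttle_alarm_request(table, sns_topic_arn):
    # Parameters for cloudwatch.put_metric_alarm(): alert if the table
    # records read throttle events in five consecutive one-minute periods.
    return {
        "AlarmName": f"{table}-read-throttles",  # hypothetical name
        "Namespace": "AWS/DynamoDB",
        "MetricName": "ReadThrottleEvents",
        "Dimensions": [{"Name": "TableName", "Value": table}],
        "Statistic": "Sum",
        "Period": 60,
        "EvaluationPeriods": 5,
        "Threshold": 1.0,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [sns_topic_arn],  # e.g., notify an SNS topic
    }
```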

Amazon CloudWatch Contributor Insights helps you to quickly identify who or what is impacting your databases and application performance. This capability makes it easier to quickly isolate, diagnose, and remediate issues during an operational event.