AWS Database Blog

Motivations for migration to Amazon DynamoDB

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database for single-digit millisecond performance at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, 99.999% availability SLA, and data import and export tools.

DynamoDB was built by working backward from the needs of external customers and of teams at Amazon.com to overcome the scale and operational shortcomings of relational databases and to break free from punitive and expensive old-guard relational database licenses. DynamoDB was purpose-built and optimized for operational workloads that require consistent performance at any scale. This means, for example, that DynamoDB delivers the same consistent single-digit millisecond performance for a shopping cart use case with 10 users as it does with 100 million users. That is why DynamoDB could power Zoom during the pandemic as they scaled from 10 million daily users to 300 million daily users. For customers like Zoom, that is a 30-times increase in scale without the need to re-architect or over-provision hardware.

Eleven years after its first launch, DynamoDB is still helping customers free themselves of relational databases while reducing cost and improving performance at scale. DynamoDB has hundreds of thousands of customers, continues to grow at an incredible pace, and is serving customers of virtually every size, industry, and geography.

While relational and NoSQL migrations to DynamoDB have been consistent since 2012, the current economic trend has accelerated such migrations. The key motivation for migrating to DynamoDB is to reduce costs while simplifying operations and improving performance at scale. DynamoDB enables you to free up resources to contribute to growing top-line revenue and value for your customers. DynamoDB’s serverless architecture enables you to save costs with capabilities like pay-per-request pricing, scale-to-zero, and no up-front costs. On the operational side, not having to deal with scaling resources, maintenance windows, or major version upgrades saves significant operational hours and removes undifferentiated heavy lifting. Overall, DynamoDB provides a cost-optimized approach to build innovative and disruptive solutions that provide differentiated customer experiences.

In this post, we discuss common types of migrations to DynamoDB, how such migrations to DynamoDB have helped customers save time and money, how DynamoDB helps reduce costs and operations, and the approaches for migrating to DynamoDB.

Common types of migrations to Amazon DynamoDB

Over the years, many customers have migrated their mission-critical, legacy applications to DynamoDB. Common sources of migrations to DynamoDB include the following:

  • Traditional relational databases – Migrations from a relational database to DynamoDB are often motivated by the need for more consistent high performance at scale than what a relational database management system (RDBMS) can provide. In addition, you can break free from legacy databases to save money on licenses, over-provisioned resources, and operating costs.
  • NoSQL databases – Migrations from other NoSQL databases to DynamoDB are motivated by the desire to take advantage of its cloud-native capabilities, such as its fully managed, serverless architecture, global tables, automatic scaling, no-impact backups, and integration with other AWS services. We also see customers over-provision their self-managed NoSQL databases to account for application growth and data compaction. DynamoDB’s usage-based pricing and scale-to-zero capabilities enable these customers to lower their total cost of ownership (TCO).
  • On-premises databases – Migrations from on-premises databases (relational or NoSQL) to DynamoDB are motivated by the desire to move to a cloud-based serverless architecture for improved agility, scalability, and cost-efficiency. You can take advantage of the fully managed capabilities of DynamoDB and eliminate the need for on-premises hardware and infrastructure, reducing costs.

The following are additional benefits of migrating to DynamoDB:

  • Improved performance and scalability – DynamoDB handles large volumes of data and high traffic loads with consistent low latency, making it an excellent choice for applications that require fast, reliable access to data.
  • Reduced costs – As a fully managed service, DynamoDB eliminates the need for you to manage your own hardware and infrastructure, reducing costs and simplifying management.
  • Increased agility – By moving to DynamoDB, you can take advantage of the AWS Cloud’s scalability and flexibility, enabling you to quickly and easily adapt to changing business needs. For example, customers like Branch were able to scale to 40 times their previous capacity within minutes without rearchitecting their application. With DynamoDB, you don’t need to spend time determining instance types, instance sizes, maintenance windows, or database versions.

Migration efforts require careful planning and implementation to ensure a successful outcome. Migrations to DynamoDB involve evaluating existing data models, understanding access patterns, designing an appropriate schema that’s optimized for performance, and determining scale and cost requirements. The solution design should also factor in data security, compliance, and regulatory requirements, and should use the native capabilities of DynamoDB to reduce time-to-market.

Customer stories for migrations to DynamoDB

Over the years, there have been multiple examples of customers reducing costs by moving to DynamoDB. We present a few in this section.

SmugMug

SmugMug migrated from a traditional MySQL database to DynamoDB to improve the scalability and performance for their photo-sharing platform. SmugMug initially redesigned their MySQL database to a key-value store, but was struggling to achieve predictable response time at scale. SmugMug was experiencing significant growth and needed a more scalable solution to handle their increasing data volumes and traffic loads.

After migrating to DynamoDB, SmugMug saw significant improvements in performance and predictable query response time regardless of their storage volume. The agility of DynamoDB allowed SmugMug to migrate Flickr (acquired by SmugMug) from Yahoo data centers to the AWS Cloud within 1 year. SmugMug migrated Flickr’s hundreds of petabytes of data, tens of billions of photos, and over 100 million users to DynamoDB and other AWS services. If you are viewing a photo on Flickr, you are interacting with DynamoDB. To hear the full story, refer to A Decade of Innovation with Amazon DynamoDB.

Experian

Experian adopted a cloud-first approach and migrated from Microsoft SQL Server to DynamoDB to build a microservices-driven architecture for achieving scalability, flexibility, and security. Experian scaled its customer services platform to support 50–75% growth in data volume every year, handling multiple terabytes of storage with no significant overhead or latency.

Additionally, with its flexible cost model, DynamoDB helped Experian avoid significant capital investments in hardware, software, networking, and storage. The simplicity of development with DynamoDB allowed Experian to reduce their deployment time from days to hours. Read the full case study to learn more.

Samsung Cloud

Samsung Cloud migrated from Cassandra to DynamoDB to improve operational efficiency and reduce costs for Samsung Cloud services that support the backup and restore capability for Samsung Galaxy smart phones. Samsung Cloud was experiencing significant operational overhead and costs associated with managing their Cassandra clusters, and needed a more efficient and cost-effective solution.

After migrating to DynamoDB, Samsung Cloud saw significant improvements in operational efficiency, with reduced costs and complexity. They also experienced improved scalability, consistent performance at tens of millions of operations per day, and a 40% cost savings. To learn more, refer to Moving a Galaxy into the Cloud: Best Practices from Samsung on Migrating to Amazon DynamoDB.

Dropbox

Dropbox migrated a quarter of its metadata store from Edgestore, an on-premises distributed database built using sharded MySQL clusters, to Alki, a production-ready cloud metadata store, on DynamoDB and Amazon Simple Storage Service (Amazon S3). The rapid growth in metadata required a highly scalable yet cost-effective solution that Dropbox could implement within 2 years and with just two resources.

They migrated the hot metadata to DynamoDB, supporting ingestion of up to 6,000 writes per second per table and storing up to 80 GB of data per day. The scalability of DynamoDB allowed Dropbox to complete the migration of 300 TB of data in less than 2 weeks by ingesting data at 600,000 queries per second during migration. Dropbox reduced cost by 5.5 times per user-gigabyte per year by migrating to DynamoDB, in addition to saving millions of dollars in expansion costs for their on-premises data centers. Read the full case study to learn more.

The Pokémon Company

The Pokémon Company migrated from a third-party NoSQL document store to DynamoDB and Amazon Aurora. With users exceeding 300 million in 2 years, their existing NoSQL database required over 300 servers and had operational and reliability challenges. To remove undifferentiated heavy lifting, Pokémon migrated to AWS fully managed services.

They migrated global configuration and time-to-live (TTL) data to DynamoDB to achieve single-digit millisecond performance at scale. The built-in TTL settings allowed The Pokémon Company to track users exceeding maximum login attempts and deny entry, resulting in a 90% reduction in bot-login attempts, thereby freeing up system resources for legitimate users and avoiding the need to over scale. Read the full case study to learn more.

Branch

Branch faced challenges with a third-party NoSQL key-value store in scaling its infrastructure beyond 60 large Amazon Elastic Compute Cloud (Amazon EC2) instances to address the 100-fold growth in traffic. To address scaling needs and minimize cost of the infrastructure, Branch migrated its mission-critical links system, hosting 40 billion records, to DynamoDB. Their primary requirements were to scale storage and throughput, increase reliability and durability, reduce expenses, and have a predictable cost model.

Migrating to DynamoDB increased availability by 33%, improved scaled throughput up to 40 times within minutes, and allowed them to predict cost for 10 times the growth while reducing costs by 66%. To learn more about their migration journey, refer to From Zero to 40 Billion Links: Our Journey Migrating to DynamoDB.

Trustpilot

Trustpilot, an online review platform with over 100 million reviews, migrated from MongoDB to DynamoDB to accommodate access patterns with high scalability requirements. Trustpilot’s migration is a classic example of using AWS purpose-built databases. Over time, DynamoDB became their database of choice, with over 144 production tables supporting a wide range of access patterns.

By using appropriate data modeling techniques, Trustpilot could achieve high scalability, improved performance, and lower cost. To learn more, refer to Amazon DynamoDB: Untold stories of databases in a serverless world.

Snapchat

Snapchat migrated their critical storage use cases from Google Cloud Platform to DynamoDB. The team’s goal was to reduce costs and strategically expand to cloud providers other than GCP for scalability. Story inboxes, a collection of story posts, was one of the critical but challenging features that Snapchat migrated from GCP to DynamoDB to support millions of writes per second while achieving significant cost savings with a resilient architecture.

Snapchat migrated 100% of their users to DynamoDB, and the scalability offered by the service allowed them to support peak traffic during New Year’s Eve with no operator intervention. With this migration, Snapchat eliminated most of the storage and throughput costs incurred in GCP, saving millions of dollars per year. To learn more, refer to Snapchat Stories on Amazon DynamoDB.

Amazon.com

Amazon.com migrated Wallet, a service enabling customers to store and pay for their orders, from Oracle to DynamoDB. Reaching 5 billion transactions per day, Amazon needed to simplify scaling and storage management while keeping costs in check and performance consistently high. To support its scale needs, Amazon scaled its Oracle database vertically and horizontally, but achieving such scale with no downtime was time-consuming and required 6 months of work every year from a team of skilled engineers and DBAs.

Amazon reduced the need for skilled DBAs by using the fully managed capabilities of DynamoDB. The Wallet team migrated 10 billion records from eight Oracle tables to six DynamoDB tables with no downtime. This migration reduced the average latency by 50%, increased the throughput by 40%, and reduced infrastructure cost significantly. The Wallet team also saved 90% of the time they previously invested in scaling and managing Oracle instances. Read the full case study to learn more.

McAfee

McAfee modernized its campaign management system by migrating from a legacy on-premises commercial database to DynamoDB. The campaign management system automated message delivery with personalized content for hundreds of millions of impressions per month and was reaching the limits of scaling its legacy commercial database.

By migrating to DynamoDB, McAfee scaled to handle 8 billion read operations per month and 2 billion write operations per month, and they continue to support double-digit year-over-year subscription growth to its current millions of subscribers. Additionally, McAfee achieved 40% monthly cost savings in their infrastructure costs while reducing latency and improving scalability and reliability. McAfee’s engineers now spend time on developing and launching new products by using the fully managed service to eliminate the undifferentiated heavy lifting. Read the full case study to learn more.

These are a few public examples; we hear about many more customers benefiting from the simplicity of DynamoDB on a daily basis.

How DynamoDB helps reduce costs

In this section, we discuss some of the built-in features in DynamoDB that enable you to reduce costs.

Pay-per-request

The DynamoDB pay-per-request pricing model allows you to pay for only the read and write requests you actually make to your tables. With this pricing model, you don’t have to worry about idle capacity because you only pay for the capacity you actually use.

Pay-per-request pricing is beneficial for applications with spiky workloads, where traditional pricing models would require you to provision capacity for peak usage, resulting in unused capacity during periods of low usage. For customers with steady-state workloads, provisioned capacity with auto scaling provides the option to further optimize for cost. The auto scaling feature in DynamoDB allows you to automatically scale capacity up or down to match the real-time demand of your application. Auto scaling enables you to be hands-off because you don’t have to manually manage and adjust capacity, which can be time-consuming and costly.
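As an illustration, the following is a minimal sketch, using the AWS SDK for Python (Boto3), of creating a table in on-demand (pay-per-request) capacity mode; the table and attribute names are hypothetical examples.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a table in on-demand (pay-per-request) capacity mode:
# no capacity planning, and you pay only for the requests you make.
dynamodb.create_table(
    TableName="ShoppingCart",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "CustomerId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "CustomerId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```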

Pay-per-GB storage

With pay-per-GB storage pricing, you only pay for the storage that you actually use, avoiding the need to overprovision storage. You start with a small amount of storage and then scale up as your data grows. Automatic scaling of storage helps you avoid paying for storage that you don’t need, which can be a significant cost savings.

The pay-per-GB pricing model of DynamoDB offers low storage costs, especially for large volumes of data, enabling you to reduce your overall storage costs while still maintaining the performance and reliability of your application. DynamoDB automatically manages the storage and scales it as needed, helping you save time and resources needed for managing and provisioning storage. With its Time-to-Live (TTL) feature, DynamoDB allows customers to delete data beyond the desired retention period at no cost. DynamoDB’s pay-per-GB pricing model is a flexible, cost-effective database solution for customers looking to reduce storage costs and optimize their database costs.
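As a sketch of how TTL works (again with Boto3; the table and attribute names are examples), you designate a numeric attribute that holds an expiration timestamp in epoch seconds, and DynamoDB deletes expired items for you.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiration timestamp (epoch seconds).
dynamodb.update_time_to_live(
    TableName="SessionData",  # hypothetical table
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Items are deleted automatically after the expires_at timestamp passes.
dynamodb.put_item(
    TableName="SessionData",
    Item={
        "SessionId": {"S": "session-123"},
        "expires_at": {"N": str(int(time.time()) + 7 * 24 * 3600)},  # keep for 7 days
    },
)
```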

Separation of reads, writes, and storage

DynamoDB scales read throughput, write throughput, and storage independently. You can control the amount of read and write capacity to provision and optimize your workloads based on access patterns. It’s common for DynamoDB customers to increase capacity during periods of high usage and then scale it back down during periods of low usage to optimize costs. Separating storage costs helps you optimize cost for applications with large amounts of data that may not require frequent reads and writes. DynamoDB offers the Standard-Infrequent Access (Standard-IA) table class to help you save costs for such workloads.
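The following Boto3 sketch illustrates this separation under assumed table names: one call moves a storage-heavy, rarely accessed table to the Standard-IA table class, and another adjusts read and write throughput on a provisioned-capacity table without touching storage.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Move a storage-heavy, rarely read table to the Standard-IA table class
# to lower per-GB storage cost.
dynamodb.update_table(
    TableName="AuditLog",  # hypothetical table
    TableClass="STANDARD_INFREQUENT_ACCESS",
)

# Adjust read and write throughput independently of storage on a
# provisioned-capacity table.
dynamodb.update_table(
    TableName="Orders",  # hypothetical table
    ProvisionedThroughput={"ReadCapacityUnits": 200, "WriteCapacityUnits": 50},
)
```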

Scale-to-zero

As a true serverless database, DynamoDB scales to zero. That means if no requests are being issued against a DynamoDB table using on-demand capacity mode, you don’t pay for the throughput. Scaling to zero is useful for workloads with periodic or spiky traffic because it allows you to reduce costs during periods of low usage. When traffic to the application increases, DynamoDB automatically adjusts the capacity to meet the demand, ensuring that the application responds to requests. DynamoDB’s ability to scale to zero capacity helps you reduce costs by ensuring you pay only for the resources you consume while still providing the scalability and responsiveness needed to handle varying levels of traffic.

Higher utilization

DynamoDB allows you to achieve higher utilization rates of your capacity, resulting in cost savings. Earlier in 2023, we helped a customer save 99% on their database costs by moving from a self-provisioned caching cluster scaled for peak load to DynamoDB on-demand pricing.

Because DynamoDB can scale in real time to accommodate load and to zero when the table is no longer accessed, you can lower overall costs. Its on-demand capacity allows you to pay only for the read and write capacity you consume instead of provisioning for peak capacity up front. This capability helps significantly lower costs for workloads with unpredictable access patterns because you only pay for the capacity you use, resulting in higher utilization rates and lower costs. By using efficient data modeling techniques, customers like Trustpilot and many others achieve higher utilization rates by designing tables that are optimized for access patterns to minimize the number of read and write operations.

Features to optimize cost

DynamoDB provides several cost-optimization options, like reserved capacity for stable provisioned capacity workloads to receive up to 77% additional savings, the Standard-Infrequent Access (Standard-IA) table class to reduce storage costs by up to 60% for tables with data that is infrequently accessed, and on-demand backup and restore to create backups without having to provision backup capacity. TTL reduces storage costs by automatically deleting data that is no longer needed, at no additional cost for in-Region deletes. Features like auto scaling and provisioned capacity are designed to optimize costs for workloads with predictable access patterns, and on-demand capacity mode is designed for spiky workloads. By allowing you to seamlessly switch between these modes, DynamoDB enables you to achieve maximum cost savings for any workload.
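For example, switching a table between capacity modes is a single API call; the following Boto3 sketch uses a hypothetical table name and example capacity values.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Steady-state workload: run on provisioned capacity (optionally with
# auto scaling, and with reserved capacity applied to the provisioned units).
dynamodb.update_table(
    TableName="Orders",  # hypothetical table
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
)

# Spiky or unpredictable workload: switch the same table to on-demand.
# (DynamoDB limits how frequently a table can switch capacity modes.)
dynamodb.update_table(
    TableName="Orders",
    BillingMode="PAY_PER_REQUEST",
)
```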

How DynamoDB helps reduce operations

Our customers often choose DynamoDB to accelerate time-to-market by improving developer agility. Reducing the time spent on mundane operations is a common requirement for these customers. The following are some of the built-in features that help you reduce operations and overall TCO:

  • No instances – The serverless architecture of DynamoDB removes the need for you to manage instances. You can simply create tables and start using them without having to provision instances. You don’t have to worry about managing the underlying infrastructure, operating system, or software stack, which saves significant time and money. To learn how to create and query DynamoDB in less than 10 minutes, refer to Create and Query a NoSQL Table with Amazon DynamoDB.
  • No maintenance windows – Traditional databases require maintenance windows to perform tasks such as software updates, hardware upgrades, and database maintenance, which often result in downtime and operational overhead. DynamoDB automatically applies updates, including operating system and database software, without requiring downtime or maintenance windows. This not only reduces operational overhead for software updates, but also ensures that the database is always up to date and secure.
  • No version upgrades – Version upgrades with traditional databases are time consuming and require significant testing and validation before production rollout. As a fully managed service, DynamoDB removes the need for you to perform any version upgrades. DynamoDB manages version upgrades and handles upgrading the database software, requiring no manual intervention on your part. DynamoDB performs continuous maintenance and monitoring to ensure smooth and efficient operation of the database and reduce the risk of downtime.
  • No query processor – Unlike many databases, DynamoDB doesn’t have a complex query processor and doesn’t create query plans. This simplifies the service usage and improves performance because DynamoDB doesn’t spend the processing time that other databases typically spend in planning and optimizing phases. During debugging, developers typically spend significant time in understanding the query plan selected (or not) and determining why query performance degrades at scale. DynamoDB removes this task for developers and DBAs and saves you time and cost.
  • Automatic scaling – Auto scaling automatically adjusts the provisioned capacity in response to changing application traffic patterns, reducing the need for manual capacity planning and scaling, which is often time consuming and error-prone. Because DynamoDB adjusts capacity dynamically based on application spikes and drops, you can maintain consistent performance for your application while reducing the need for manual intervention or tuning. You can set up auto scaling in a few seconds by defining scaling policies based on target utilization, and DynamoDB automatically adjusts capacity based on these policies; a minimal setup sketch follows this list.
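The following sketch shows one way to configure auto scaling for a provisioned table through the Application Auto Scaling API with Boto3; the table name, capacity bounds, and 70% target utilization are example values.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target with min/max bounds.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",  # hypothetical table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Target-tracking policy: DynamoDB scales read capacity to hold ~70% utilization.
autoscaling.put_scaling_policy(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyName="OrdersReadScaling",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```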

By reducing time spent on operations, you can focus on developing your applications and delivering value to your customers without having to worry about managing the underlying infrastructure.

Migrate to DynamoDB

After identifying the workload to migrate to DynamoDB, it’s essential to understand the phases involved in migrations. There are five phases involved in migrations to DynamoDB.

Phase 1: Develop the mental model

DynamoDB is a NoSQL database built for delivering consistent performance at any scale. When migrating from other relational or non-relational databases, it’s critical to understand how DynamoDB works and develop a mental model required to implement an existing workload on DynamoDB. For example, migrating all tables from a relational database with a one-to-one mapping to DynamoDB tables is an anti-pattern. It’s important to think about existing access patterns and identify opportunities to optimize them when migrating to DynamoDB. To help relational developers understand how to develop with DynamoDB, check out the AWS Twitch show on Debunking Amazon DynamoDB Myths.

Phase 2: Design the data model

Data modeling with DynamoDB is an area where customers spend most of their time during migration (rightly so!). Efficient data modeling helps you achieve optimal performance at low cost. Read and write access patterns, frequency of access patterns, data cardinality, partition and sort key design, and evaluating the need for Global Secondary Indexes (GSIs) are some of the common topics to understand during data modeling. To learn more about data modeling with DynamoDB, check out Data Modeling with DynamoDB Workshop. You can also find some sample data models and source code to get started on GitHub.
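To make the key concepts concrete, here is a minimal Boto3 sketch of a table designed around two assumed access patterns: fetching a product's reviews newest-first (partition key plus sort key) and fetching a customer's reviews (a GSI). The table, attribute, and index names are illustrative, not a recommended model for your workload.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Primary access pattern: reviews by product, ordered by date.
# Secondary access pattern: reviews by customer, served by a GSI.
dynamodb.create_table(
    TableName="Reviews",
    AttributeDefinitions=[
        {"AttributeName": "ProductId", "AttributeType": "S"},
        {"AttributeName": "ReviewDate", "AttributeType": "S"},
        {"AttributeName": "CustomerId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "ProductId", "KeyType": "HASH"},    # partition key
        {"AttributeName": "ReviewDate", "KeyType": "RANGE"},  # sort key
    ],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "ByCustomer",
            "KeySchema": [
                {"AttributeName": "CustomerId", "KeyType": "HASH"},
                {"AttributeName": "ReviewDate", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
```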

Phase 3: Migrate the data

After you design the schema, it’s time to determine approaches for migrating data from your source database to DynamoDB. It’s important to understand requirements around downtime to define a solution for data migration.
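Common options include AWS DMS (which supports DynamoDB as a migration target) and DynamoDB import from Amazon S3. For a custom bulk load, the following is a minimal Boto3 sketch under assumed names: it takes rows already extracted from the source database and writes them to the target table with a batch writer.

```python
import boto3

table = boto3.resource("dynamodb").Table("Reviews")  # hypothetical target table

def load_rows(rows):
    """Bulk-load rows exported from the source database into DynamoDB.

    `rows` is any iterable of dicts produced by your export job; the
    column-to-attribute mapping below is only an example.
    """
    with table.batch_writer() as batch:  # batches and retries writes for you
        for row in rows:
            batch.put_item(
                Item={
                    "ProductId": row["product_id"],
                    "ReviewDate": row["review_date"],
                    "CustomerId": row["customer_id"],
                    "Rating": row["rating"],
                }
            )
```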

Phase 4: Migrate apps

No real app migration is involved because customers typically develop new applications to interact with DynamoDB. Most application modernization efforts use this phase as an opportunity to develop microservices with AWS Lambda and orchestrate these microservices with Step Functions, making your entire software stack serverless. In this phase, you can use the DynamoDB SDKs to interact with DynamoDB tables. This phase is relatively straightforward and involves using CRUD operations from the application tier to the database. Get started with the DynamoDB SDKs by following our hands-on tutorials.
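The CRUD calls themselves are simple; here is a short Boto3 sketch against the hypothetical Reviews table from the data modeling example.

```python
import boto3

table = boto3.resource("dynamodb").Table("Reviews")  # hypothetical table

# Create
table.put_item(Item={"ProductId": "P100", "ReviewDate": "2023-05-01", "Rating": 5})

# Read
item = table.get_item(
    Key={"ProductId": "P100", "ReviewDate": "2023-05-01"}
).get("Item")

# Update
table.update_item(
    Key={"ProductId": "P100", "ReviewDate": "2023-05-01"},
    UpdateExpression="SET Rating = :r",
    ExpressionAttributeValues={":r": 4},
)

# Delete
table.delete_item(Key={"ProductId": "P100", "ReviewDate": "2023-05-01"})
```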

Phase 5: Operationalize the workload

The goal of this phase is to enable you to operate DynamoDB at scale. Understanding Amazon CloudWatch metrics to monitor, determining pricing, estimating reserved capacity needs, and identifying approaches to optimize for cost are examples of tasks in this phase.
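For example, you can pull consumed-capacity metrics from CloudWatch to inform capacity and cost decisions; the following Boto3 sketch reads one common metric for a hypothetical table.

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# Sum of read capacity units consumed over the last hour, in 5-minute buckets;
# useful input for right-sizing provisioned capacity or estimating cost.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ConsumedReadCapacityUnits",
    Dimensions=[{"Name": "TableName", "Value": "Orders"}],  # hypothetical table
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```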

As you design your data model and develop your application, it’s a best practice to ensure that your workload is architected according to the AWS Well-Architected Framework. The DynamoDB Well-Architected Lens can help you review your workload and identify opportunities for improvement.

Summary

In this post, we discussed common types of migrations to DynamoDB and how migrating to DynamoDB from relational and non-relational databases can help you save cost and reduce operations, as shown through some customer examples. We also discussed the approach to migration and provided resources to help you get started with your migration to DynamoDB.

Are you ready to upgrade your application to achieve consistent performance at scale while significantly lowering cost? Get started now.


About the Authors

Karthik Vijayraghavan is a Senior Manager of DynamoDB Specialist Solutions Architects at AWS. He has been helping customers modernize their applications using NoSQL databases. He enjoys solving customer problems and is passionate about providing cost-effective solutions that perform at scale. Karthik started his career as a developer building web and REST services with a strong focus on integration with relational databases and can relate to customers that are in the process of migrating to NoSQL.

Joseph Idziorek is currently a Director of Product Management at Amazon Web Services. Joseph has over a decade of experience working in both relational and non-relational database services and holds a PhD in Computer Engineering from Iowa State University. At AWS, Joseph leads product management for DynamoDB and Keyspaces and previously led Amazon DocumentDB as well as many other purpose-built database initiatives.