AWS Database Blog
How to use Amazon DynamoDB global tables to power multiregion architectures
More and more, AWS customers want to make their applications available to globally dispersed users by deploying their applications in multiple AWS Regions. These global users expect fast application performance.
In this post, I describe how to use Amazon DynamoDB to power the database of a global backend deployed in multiple AWS Regions. I use DynamoDB global tables, which provide a fully managed, multiregion, and multimaster database so that you can deliver low-latency data access to your users no matter where they are located on the globe.
Why use a multiregion architecture?
AWS customers typically want a multiregion architecture for two reasons:
- To provide low latency and improve their app experience.
- To facilitate disaster recovery.
1. Improve your users’ experience with low-latency apps
Network latency and network throughput are the key factors that define network performance and quality. Latency is the time a data packet takes to travel back and forth between entities, and throughput is the quantity of data transmitted between entities during a specific period. Web applications’ performance and success are directly associated with users’ perception of that network performance and quality.
Content providers across the globe use content delivery networks such as Amazon CloudFront to get content to users faster, especially when the content is static (such as images, videos, and JavaScript libraries). Using a globally distributed network of caching servers, static content is served as if it is local to users, improving the delivery of that static content. In other words, the closer your backend origin is to users, the better their experience of content is likely to be.
However, even if CloudFront solves the problem of delivering static content, some dynamic calls still need to be made to the backend, and the backend servers could be far away, adding precious milliseconds to requests. For example, if you have users in Europe, but your backend is in the United States or Australia, the added latency is approximately 140 milliseconds and 300 milliseconds, respectively. Those delays are unacceptable for many popular games, ecommerce platforms, and other interactive applications. Latency affects customers’ behavior, with lower latency generating more user engagement.
As technology improves—especially with the advent of augmented reality, virtual reality, and mixed reality—and requires even more immersive and lifelike experiences, developers need to produce applications with stringent latency requirements. Therefore, having locally available applications and content is more important than ever.
2. Facilitate disaster recovery
Let’s say you deploy your application in an AWS Region and the application is composed of multiple services. If one of these services is critical to your application and is experiencing issues, you might want to shift the traffic to a healthy region to prevent customer dissatisfaction. Failures will happen sometimes, and when they do, it is important to mitigate the severity of their impact. Using a multiregion architecture can facilitate disaster recovery.
Building a multiregion, active-active architecture
The two most commonly used multiregion architecture configurations are active-passive and active-active. An active-passive configuration typically comprises at least two regions. However, not all regions serve traffic simultaneously, hence the name “active-passive.” When the configuration includes two regions, one region actively serves traffic while the second region is passive, ready to support a failover if the active region experiences issues.
An active-active configuration comprises at least two regions. However, unlike an active-passive configuration where one region is actively serving traffic while the other is passive, in an active-active configuration, all regions are actively running the same kind of service and serving traffic simultaneously. The main purpose of an active-active configuration is to achieve load balancing between regions, often using latency-based routing that routes service traffic to the region that provides the fastest experience.
To have a multiregion, active-active architecture configuration, you have to fulfill a few requirements:
- Services must be stateless.
- You should be able to both read and write data from any region within your active-active configuration.
- Data replication between regions must be fast and reliable.
Though the first requirement is fairly straightforward—don’t maintain local state in the application—the second and third requirements have traditionally been difficult. This is because you have to write code to replicate data changes asynchronously and resolve conflicts among these regions, which involves time-consuming and labor-intensive effort.
For distributed data stores such as DynamoDB global tables, asynchronous replication decouples the primary node from its replicas. Changes performed on the primary node are replicated to its replicas within a couple of seconds. This type of replication is called eventual consistency. When a system achieves eventual consistency, it has achieved replica convergence.
To achieve replica convergence, a system must reconcile the differences between multiple copies of distributed data. The most common approach to reconciliation is called last writer wins. With this conflict resolution mechanism, all of the replicas agree on the latest update and converge toward a state in which they all have identical data.
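As a rough illustration only, and not DynamoDB’s internal implementation, last writer wins amounts to keeping the copy of an item with the most recent update timestamp. Here is a minimal Python sketch with hypothetical item shapes:

```python
def last_writer_wins(copies):
    """Reconcile conflicting copies of one item by keeping the copy
    with the most recent update timestamp."""
    return max(copies, key=lambda copy: copy["updated_at"])

# Two regions wrote different values for the same item concurrently.
copies = [
    {"value": "written-in-eu-west-1", "updated_at": 1519983700.1},
    {"value": "written-in-eu-central-1", "updated_at": 1519983700.9},
]
print(last_writer_wins(copies)["value"])  # written-in-eu-central-1
```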
A few years ago, when deploying a multiregion architecture, it was standard practice to set up secured VPN connections between regions to replicate the data asynchronously. Though deploying and managing those connections have become easier, the connections still go over the internet and are subject to sudden changes in routing and latency, making it difficult to maintain consistently good replication.
To overcome this problem, James Hamilton, vice president and distinguished engineer at AWS, announced that AWS now provides a high-bandwidth, global network infrastructure powered by redundant 100 gigabit Ethernet (100 GbE) links circling the globe.
As a result, AWS Regions are now connected to a private global network backbone that provides lower cost and more consistent cross-region network latency when compared with the public internet. The benefits are clear:
- Improved latency, reduced packet loss, and better overall quality.
- Avoidance of network interconnect capacity conflicts.
- Greater operational control.
DynamoDB global tables use this global network backbone to enable you to build globally distributed applications for globally dispersed users. Global tables eliminate the difficult work of replicating data between regions and resolving update conflicts, enabling you to focus on your application’s business logic. A global table consists of multiple replica tables (one per region that you choose) that DynamoDB treats as a single unit.
How to create a global table
When you create a DynamoDB table, in addition to the table name, you must specify the primary key of the table. The primary key uniquely identifies each item in the table so that no two items can have the same key. In a global table, every replica table shares the same table name and the same primary key. Because a global table is a multimaster database, applications can write data to any of the replica tables. DynamoDB automatically propagates these writes to the other replica tables in the AWS Regions you choose.
To create a global table, open the DynamoDB console and create a table with a primary key. The primary key can be simple (a partition key only) or composite (a partition key combined with a sort key).
In the console, I create a table called MyGlobalTable with item_id as the primary key, and then choose Create. This table serves as the first replica table in the global table.
To create the global table, I do the following in the AWS Management Console, as shown in the following screenshot:
- Choose the Global Tables tab from the AWS Management Console.
- Choose Enable streams.
Global tables use DynamoDB Streams to propagate changes between replicas. A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table. Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified.
Note: As shown in the following screenshot, you might see a pop-up that mentions the view type of the stream that is being used: New and old images. This simply means that both the new and old images of the item in the table will be written to the stream whenever data in the table is modified. This stream is used to replicate the data across regions.
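For illustration, here is an abridged sketch of what a single stream record with the New and old images view type might look like after an item update (the field values are hypothetical):

```json
{
    "eventName": "MODIFY",
    "eventSource": "aws:dynamodb",
    "awsRegion": "eu-west-1",
    "dynamodb": {
        "Keys": { "item_id": { "S": "foobar" } },
        "OldImage": { "item_id": { "S": "foobar" }, "price": { "N": "1" } },
        "NewImage": { "item_id": { "S": "foobar" }, "price": { "N": "2" } },
        "StreamViewType": "NEW_AND_OLD_IMAGES"
    }
}
```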
After you enable streams, choose Add region to add new regions to your global table (see the following screenshot). Choose the AWS Regions where you want to deploy replica tables and then choose Continue.
In my example, because I am located in Europe, I want to replicate my data within the EU only, so I choose EU (Frankfurt) and EU (Ireland) as the regions forming my global table.
Note: Always replicate data between regions in compliance with the law. Specific compliance requirements might dictate that you cannot replicate data between continents (say Europe and North America) or even regions.
Adding regions starts the table creation process in the regions you chose. After a few seconds, you should be able to see the different regions forming your newly created global table. Remember that a global table consists of multiple replica tables (one per region of your choice) that DynamoDB treats as a single unit. Therefore, every replica has the same table name and the same primary key schema.
Previously, I created the global table in two regions: EU (Frankfurt) and EU (Ireland). I created the global table by using the AWS Management Console, but you can do the same thing with the AWS Command Line Interface (CLI). The AWS CLI allows for automation and repeatability, which are always good things.
To use the AWS CLI to create the global table in two regions, I do the following:
- Using the AWS CLI in a terminal, I create the initial table with streams enabled. Remember, global tables use streams to propagate changes between replicas (I’m creating the table in the EU [Ireland] Region in this example).
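A command along these lines does it (the provisioned throughput values here are only placeholders):

```bash
aws dynamodb create-table \
    --table-name MyGlobalTable \
    --attribute-definitions AttributeName=item_id,AttributeType=S \
    --key-schema AttributeName=item_id,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
    --region eu-west-1
```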
The output of this command should look something like the following (replace xxxxxxxxxxxx with your AWS account number).
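An abridged sketch of the create-table response, with nonessential fields omitted:

```json
{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:eu-west-1:xxxxxxxxxxxx:table/MyGlobalTable",
        "TableName": "MyGlobalTable",
        "TableStatus": "CREATING",
        "KeySchema": [
            { "AttributeName": "item_id", "KeyType": "HASH" }
        ],
        "StreamSpecification": {
            "StreamEnabled": true,
            "StreamViewType": "NEW_AND_OLD_IMAGES"
        }
    }
}
```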
- I create an identical table with streams enabled in another region (I’m creating the identical table in the EU [Frankfurt] Region).
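The same command again, pointed at eu-central-1:

```bash
aws dynamodb create-table \
    --table-name MyGlobalTable \
    --attribute-definitions AttributeName=item_id,AttributeType=S \
    --key-schema AttributeName=item_id,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
    --region eu-central-1
```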
The preceding command returns the following.
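The response mirrors the first one, this time with the Frankfurt ARN (again abridged):

```json
{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:eu-central-1:xxxxxxxxxxxx:table/MyGlobalTable",
        "TableName": "MyGlobalTable",
        "TableStatus": "CREATING"
    }
}
```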
- I then create a global table consisting of the replica tables previously created. I run the following command in my terminal.
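The create-global-table operation stitches the two replica tables together; for example:

```bash
aws dynamodb create-global-table \
    --global-table-name MyGlobalTable \
    --replication-group RegionName=eu-west-1 RegionName=eu-central-1 \
    --region eu-west-1
```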
The output of the preceding command should look similar to the following (replace xxxxxxxxxxxx with your AWS account number).
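An abridged sketch of that response:

```json
{
    "GlobalTableDescription": {
        "GlobalTableArn": "arn:aws:dynamodb::xxxxxxxxxxxx:global-table/MyGlobalTable",
        "GlobalTableName": "MyGlobalTable",
        "GlobalTableStatus": "CREATING",
        "ReplicationGroup": [
            { "RegionName": "eu-west-1" },
            { "RegionName": "eu-central-1" }
        ]
    }
}
```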
You can verify that the global table has been created successfully by using the AWS Management Console.
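If you prefer the CLI, describe-global-table performs the same check:

```bash
aws dynamodb describe-global-table \
    --global-table-name MyGlobalTable \
    --region eu-west-1
```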
Let’s talk capacity
I have talked so far in this post about creating global tables, but an important part of scaling your applications is managing capacity at the database level. Many application workloads are either unpredictable (such as flash sales and social networking with viral content) or cyclical.
To maintain a high-quality customer experience, your database must be able to scale regardless of its traffic patterns, and it must not require manual intervention. DynamoDB auto scaling uses AWS Application Auto Scaling to adjust provisioned throughput capacity dynamically on your behalf in response to actual traffic patterns. Auto scaling allows a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic. When traffic decreases, auto scaling decreases the throughput so that you don’t pay for unused provisioned capacity.
We recommend that you use auto scaling to manage throughput capacity settings for global tables. If you prefer to manage throughput capacity for global tables manually, follow the DynamoDB best practices to avoid data replication issues.
To enable auto scaling on your global table, navigate to the DynamoDB console. On the global table page:
- Choose the Capacity tab as shown in the following screenshot.
- Choose Consistent settings across all regions in the auto scaling section and choose Save to apply the changes.
When modifying global tables (as I’ve just done), use the DynamoDB console or the UpdateGlobalTableSettings call to apply changes automatically to all replica tables and matching secondary indexes in the global table. This ensures that provisioned write capacity settings are consistent across the replica tables and secondary indexes in your global table.
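For example, here is a minimal CLI sketch that sets the same provisioned write capacity on every replica (the capacity value is illustrative):

```bash
aws dynamodb update-global-table-settings \
    --global-table-name MyGlobalTable \
    --global-table-provisioned-write-capacity-units 10 \
    --region eu-west-1
```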
We do not recommend using the UpdateTable, RegisterScalableTarget, or PutScalingPolicy calls because they target only a specific replica table, and changes are not applied to other replica tables automatically. You would have to make the same change to all your replica tables manually. Inconsistent write capacity settings across your replica tables can lead to data replication issues.
For more information, see Managing Throughput Capacity Automatically with DynamoDB Auto Scaling.
Accessing DynamoDB global tables
To access and use DynamoDB global tables, you can use the AWS Management Console and the AWS CLI. However, I suggest using them for testing or small scripts only.
For larger projects and to get the most out of DynamoDB, you should write application code using DynamoDB SDKs. These SDKs provide support for Java, JavaScript in the browser, .NET, Node.js, PHP, Python, Ruby, C++, Go, Android, and iOS.
Let’s take a look at a simple example using the AWS Management Console and another using the AWS CLI.
Using the AWS Management Console to update global tables
To add an item to a global table using the AWS Management Console, I do the following:
- I select a region where my DynamoDB global table is replicated (for this example, I choose the table in the EU [Frankfurt] Region).
- I choose the Items tab on the MyGlobalTable table.
- I choose Create Item.
I can now add an item to the global table. I choose to add item_id: (String) foobar, which simply means I am adding the string foobar as a value for item_id.
- To save the item in the table, choose Save.
- I can verify that the item has been saved in the table by choosing the Items tab.
- I also want to confirm that the item is replicated in the EU (Ireland) Region. I choose the Global Tables tab and choose the second region: EU (Ireland).
- This opens the global table in the EU (Ireland) console. I then choose the Items tab to verify that the item_id: foobar has been successfully replicated.
As you can see, the item_id: foobar has been successfully replicated, and the origin of the item is eu-central-1, which is the code for the EU (Frankfurt) Region.
Have you noticed the new fields created by the DynamoDB global table? The cross-region replication process adds the aws:rep:deleting, aws:rep:updateregion, and aws:rep:updatetime attributes so that you can track the origin of items created in the table. Your application can use these attributes, but you should not modify them because global tables use them to keep your table data in sync between regions.
Using the AWS CLI to update global tables
To add an item to a DynamoDB global table using the AWS CLI, I run the following command in my terminal. I decide to store the item from the EU (Ireland) Region.
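The put-item command looks like this:

```bash
aws dynamodb put-item \
    --table-name MyGlobalTable \
    --item '{"item_id": {"S": "foobarcli"}}' \
    --region eu-west-1
```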
I then test to confirm that the preceding command created an item in the table by fetching the item from the EU (Frankfurt) Region.
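A get-item against the Frankfurt replica does that check:

```bash
aws dynamodb get-item \
    --table-name MyGlobalTable \
    --key '{"item_id": {"S": "foobarcli"}}' \
    --region eu-central-1
```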
The output of the preceding command should look like the following.
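Here is a representative sketch (the aws:rep:updatetime value is illustrative):

```json
{
    "Item": {
        "item_id": { "S": "foobarcli" },
        "aws:rep:deleting": { "BOOL": false },
        "aws:rep:updateregion": { "S": "eu-west-1" },
        "aws:rep:updatetime": { "N": "1519983700.802001" }
    }
}
```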
As you can see, the item foobarcli has been replicated successfully in the global table.
Using the Python SDK to update global tables
To add an item to a DynamoDB global table by using the AWS Python SDK, you can use the following generic Python code as the starting point in your Lambda application.
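Here is a minimal sketch. The TABLE_NAME environment variable name is an assumption (see the note at the end of this section); AWS_REGION is provided by the Lambda runtime.

```python
import os

import boto3

# Both values come from the Lambda environment.
# TABLE_NAME is an assumed variable name; AWS_REGION is set by the runtime.
TABLE_NAME = os.environ["TABLE_NAME"]
AWS_REGION = os.environ["AWS_REGION"]

dynamodb = boto3.resource("dynamodb", region_name=AWS_REGION)
table = dynamodb.Table(TABLE_NAME)


def put_to_dynamo(event, context):
    """Write the item_id passed in the invocation event to the global table."""
    item_id = event["item_id"]
    table.put_item(Item={"item_id": item_id})
    return {"statusCode": 200, "body": "stored " + item_id}
```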
You can call this Lambda function using the handler lambda_function.put_to_dynamo and test it with the following test event.
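For example (the value is arbitrary):

```json
{
    "item_id": "foobarpython"
}
```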
Similarly, to read an item from a DynamoDB global table by using the AWS Python SDK, you can use the following generic Python code as the starting point in your Lambda application.
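A matching sketch under the same assumptions:

```python
import os

import boto3

TABLE_NAME = os.environ["TABLE_NAME"]  # assumed variable name
AWS_REGION = os.environ["AWS_REGION"]  # set by the Lambda runtime

dynamodb = boto3.resource("dynamodb", region_name=AWS_REGION)
table = dynamodb.Table(TABLE_NAME)


def get_from_dynamo(event, context):
    """Fetch the item with the given item_id from the global table."""
    response = table.get_item(Key={"item_id": event["item_id"]})
    # The Item key is absent from the response if the item does not exist.
    return response.get("Item")
```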
You can call this Lambda function using the handler lambda_function.get_from_dynamo and test it with the following test event.
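The same test event works here:

```json
{
    "item_id": "foobarpython"
}
```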
Note: Both of the previous examples using the Python AWS SDK in Lambda functions assume you will configure the name of the DynamoDB table and the AWS Region as environment variables.
Wrapping up
I hope this blog post inspires you to use DynamoDB global tables at the center of your multiregion architectures. Global tables give you a fully managed, multiregion, and multimaster database. Your applications remain high performing and available even in the unlikely event of isolation or degradation of an entire region. And you don’t need to worry about writing and maintaining code for replicating databases between regions and resolving update conflicts.
About the Author
Adrian Hornsby is a technical evangelist at AWS. When not helping customers understand AWS services, Adrian is climbing rocks, throwing frisbees, and practicing Muay Thai.