Building globally distributed MySQL applications using write forwarding in Amazon Aurora Global Database
AWS released Amazon Aurora Global Database in 2018. Aurora Global Database enables two primary use cases. The first is supporting a disaster recovery solution that can handle a full regional failure with a low recovery point objective (RPO) and a low recovery time objective (RTO), while minimizing performance impact on the database cluster being protected. With Aurora Global Database, you can typically achieve an RPO of less than 5 seconds and an RTO of less than 1 minute. Even with a large write workload, the performance impact on both the source and target clusters is negligible.
The second major use case for Aurora Global Database is providing read-only copies of an Amazon Aurora cluster in up to five remote regions to serve users close to those regions. This gives users in remote regions lower-latency reads than connecting to the more distant primary region.
The following graph represents an example of using MySQL logical replication between two regions. As the number of queries increases in a stepwise manner, the replication time lag observed on the target cluster increases exponentially. Additionally, the number of queries per second that the tested configuration could handle peaked at around 35,000.
In contrast, the following graph shows the same workload and instance sizing using Aurora Global Database, where the replication time lag remains under one second and the queries per second peaked around 200,000, roughly 5.7 times the throughput of logical replication.
Read replica write forwarding
In addition to serving low-latency reads closer to users in multiple regions, applications running in the remote region may also need to write to the database. To do this, the application must do the following:
- Establish connectivity from each remote region to the primary region
- Split read and write traffic in the application code such that reads are sent to the cluster local to the region and writes are sent to the primary region
- Manage the consistency between writes and subsequent reads because Aurora Global Database replication is asynchronous and, while low, has a replication time lag. If this is not done correctly, a read against the local cluster may not observe a previous write against the primary region performed by the application.
With the new read replica write forwarding feature of Aurora Global Database, performing writes in a remote region becomes much easier. Write forwarding enables the application to send writes to the local read-only cluster, which then handles the previously mentioned steps transparently for the application. This enables the application to send writes to any Aurora Global Database remote cluster, simplifying application development. Some of the key benefits of write forwarding include:
- Managed solution – Writes issued at a remote cluster are transparently forwarded to the primary cluster.
- No replication conflicts – Because all writes are applied by the primary cluster, replication related update conflicts do not occur.
- Simple – You can issue writes to the remote cluster followed by reads that will observe the previous writes.
- Flexibility – You can choose among several read consistency levels to balance consistency and performance.
To simplify application development and enable you to write to remote Aurora Global Database clusters, write forwarding has the following features:
- Connectivity between the remote cluster and the primary cluster is managed automatically using a secure connection across the Amazon backbone.
- You can issue reads and writes to the same instance in a remote cluster. You don’t need to split read and write traffic or manage separate connections, sessions, or transactions for reads and writes.
- Multiple consistency modes are provided, allowing you to balance consistency and performance.
Aurora Global Database write forwarding works by accepting a write statement from an application at an instance in a remote cluster and forwarding that statement with the necessary context to the primary cluster, where the write statement is then executed on the primary instance. Any results from executing the statement, including warnings and errors, are returned to the remote instance, and then to the application. This entire process is transparent to the application. You only need to enable write forwarding for the cluster and set the consistency mode for each session where you want to perform a write.
To use write forwarding, note the following:
- You need to enable write forwarding in your session by setting the `aurora_replica_read_consistency` parameter.
- DML statements such as `INSERT`, `UPDATE`, and `DELETE` are all supported, except those that modify permanent tables based on the results from temporary tables.
- Locking reads can be used (`SELECT … FOR UPDATE`, `SELECT … LOCK IN SHARE MODE`).
- Prepared statements using the `EXECUTE` syntax are supported.
- Stored procedures are not supported and need to be executed on the primary cluster.
- DDL statements are not supported and need to be executed on the primary cluster.
For more information, see Working with Amazon Aurora Global Database.
When an application executes a write statement against a remote cluster, the result from that statement is returned to the application immediately following execution on the primary cluster. This means that the transaction is durable independent of the consistency mode used. However, after the change is applied on the primary cluster, that change takes time to replicate back to the remote cluster to serve remote reads. Depending on the specific transaction in your application, you may or may not require read-after-write consistency. You may want to wait for the replication to complete and make sure that you can read the previous write, or you may want to improve performance and continue with the next statement without waiting for the replication to complete. To that end, write forwarding supports a configurable read consistency mode.
The consistency mode is configured at the session level and is controlled by the `aurora_replica_read_consistency` parameter. By default, this parameter is set to an empty value, and you must set it to one of the supported values before write forwarding can be used:
- `session` makes sure that read queries following a write in the same session wait for replication to catch up to that previous write. This makes sure that the session sees its own changes, but it is not guaranteed to see changes issued by other sessions.
- `global` makes sure that read queries wait for replication to catch up to the point in time when the read started. This means that the read from the remote cluster sees all changes committed to the primary cluster up to the point when the read query was started on the remote cluster. Although this mode provides the strongest read-after-write consistency, it does so at the expense of performance: the wait time on queries is at least as long as the replication lag.
- `eventual` results in read queries being subject to the replication time lag. This reduces latency because the remote replica does not wait for replication to complete, but trades that off against not guaranteeing that subsequent reads can see the previous write. Note that this does not mean that writes can be lost. If the query continues, it means that the primary cluster has applied the write and acknowledged it back to the remote cluster. However, the resulting data changes generated by that write may not have been replicated back to the remote cluster when the read is executed.
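To illustrate, a session picks exactly one of these modes before issuing writes through a remote cluster. The following sketch shows the three settings (only one would be used per session):

```sql
-- 'session': reads in this session wait until this session's own
-- writes have replicated back to the remote cluster.
SET SESSION aurora_replica_read_consistency = 'session';

-- 'global': reads wait for all primary-cluster commits up to the
-- point in time when the read started; strongest, but slowest.
SET SESSION aurora_replica_read_consistency = 'global';

-- 'eventual': reads do not wait and may lag the primary cluster
-- by the replication delay.
SET SESSION aurora_replica_read_consistency = 'eventual';
```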
Setting up Aurora Global Database with write forwarding
Given a pre-existing cluster, the first step to enabling Aurora Global Database with write forwarding is to create a global cluster. Complete the following steps:
- On the Amazon RDS console, choose Databases.
- Choose your source cluster.
- From the Actions drop-down menu, choose Add region.
- On the Add an AWS Region page, for Global database identifier, enter a name for your global database.
This is the name of the global cluster that contains both the writer and reader Regions.
- For Secondary Region, choose your target Region.
This post uses EU (Ireland).
- For the remaining settings on this page, use the same settings that you use to create an Aurora DB cluster, with the following exception:
- For Read replica write forwarding, select Enable read replica write forwarding.
When the global cluster creation is complete, the view on the console looks similar to the following screenshot.
At this point, both the writer and reader clusters are online and ready to accept traffic.
Writing between Regions
To perform a write in a reader Region, complete the following steps:
- Connect to the cluster in the writer Region and create a new schema with the following code:
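The original listing is not reproduced here; a minimal sketch, assuming a schema named `demo`, could look like the following:

```sql
-- Run against the writer (primary) cluster endpoint.
-- The schema name 'demo' is illustrative.
CREATE DATABASE demo;
```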
Write forwarding only forwards DML commands. This means that you must run any DDL commands, such as `CREATE TABLE`, in the writer Region. See the following code:
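A sketch of the table creation, with an illustrative `demo.messages` table standing in for the original listing:

```sql
-- DDL must run in the writer Region; this table definition is illustrative.
USE demo;
CREATE TABLE messages (
  id INT AUTO_INCREMENT PRIMARY KEY,
  body VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```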
- After you create a new schema with a new table, insert a single row into that table in the writer Region and select out the results. See the following code:
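Continuing the sketch with the same illustrative `demo.messages` table:

```sql
-- Still connected to the writer Region.
INSERT INTO demo.messages (body) VALUES ('hello from the writer Region');
SELECT id, body, created_at FROM demo.messages;
```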
A single schema now exists with a single table containing a single row in the writer Region.
- Connect to the reader Region to confirm that all these items exist in the reader Region. See the following code:
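Against the reader Region, the same checks might look like the following (schema and table names are the illustrative ones used above):

```sql
-- Connected to the reader (secondary) Region cluster endpoint.
SHOW DATABASES;
SELECT id, body, created_at FROM demo.messages;
```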
- Set the read consistency mode, insert a row, and check the current table contents. See the following code:
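A sketch of this step, again using the illustrative `demo.messages` table:

```sql
-- On the reader Region: required before any forwarded write.
SET SESSION aurora_replica_read_consistency = 'session';

-- This INSERT is transparently forwarded to the primary cluster.
INSERT INTO demo.messages (body) VALUES ('hello from the reader Region');

-- With 'session' consistency, this read waits until the forwarded
-- write has replicated back, so the session sees its own change.
SELECT id, body FROM demo.messages;
```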
You must set the `@@aurora_replica_read_consistency` mode at the session level before executing any supported statements. If you don't set this parameter, an error is returned.
Amazon Aurora Global Database allows you to create globally distributed applications that can serve local reads in remote regions. You can maintain a disaster recovery solution with minimal RPO and RTO, and can provide low-latency reads to regions across the world. With write forwarding, you can now also enable your global applications to perform writes in remote regions with minimal code changes.
Get started with Amazon Aurora Global Database and write forwarding today!
About the Author
Steve Abraham is a Principal Solutions Architect for Amazon Web Services. He works with our customers to provide guidance and technical assistance on database projects, helping them improve the value of their solutions when using AWS.