AWS Database Blog

Migrate data from Amazon Aurora PostgreSQL to Amazon MemoryDB for Redis using AWS DMS

A common challenge customers face as their business grows is providing the same level of service to their end-users. Most often, databases become bottlenecks as usage outgrows capacity. Caching strategies may help improve performance by offloading frequently used data to a cache like Redis. This requires additional overhead in keeping your cache up to date. If you plan on modernizing your application, consider using purpose-built databases to serve the underlying business need. You can take advantage of a durable in-memory database such as Amazon MemoryDB for Redis to store frequently accessed data such as reference data or product catalogs. This offloads the burden from your transactional database, keeps data in a single location, and provides durability, while maintaining the high performance that your end-users expect.

You can modernize your applications using AWS database services, such as Amazon Relational Database Service (Amazon RDS), Amazon Aurora, and MemoryDB for Redis, and overcome the reliability and operational challenges associated with heavy and demanding workloads. In this post, we show you how to migrate data from Amazon Aurora PostgreSQL-Compatible Edition to Amazon MemoryDB using AWS Database Migration Service (AWS DMS). We consider a use case of a retail website that stores the transactional data in Aurora PostgreSQL and caches the product catalog in MemoryDB.

Solution overview

Before we dive deep into the solution, let’s review the concepts of some of the key components used in this solution:

  • AWS Database Migration Service (AWS DMS) is a service that migrates data between source and target data stores. The source and target data stores can use the same database engine or different engines, and can reside either on premises or in the AWS Cloud; the only requirement is that at least one of the data stores must be in the AWS Cloud.
  • Amazon Aurora PostgreSQL-Compatible Edition is a fully managed PostgreSQL-compatible relational database engine that combines the speed, reliability, and manageability of Amazon Aurora and cost-effectiveness of an open-source PostgreSQL database.
  • Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service that delivers ultra-fast performance. It’s purpose-built for modern applications created with microservices architectures. MemoryDB is compatible with Redis, a popular open-source data store, enabling you to quickly build applications using the same flexible and friendly Redis data structures, APIs, and commands that you already use. With MemoryDB, all your data is stored in memory, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput.

This solution aims to improve an application’s performance by offloading the frequently accessed product catalog data from Amazon Aurora PostgreSQL-Compatible Edition and storing it durably in an in-memory MemoryDB database. In this solution, we use AWS DMS to perform a one-time migration of the product catalog data from Amazon Aurora PostgreSQL-Compatible Edition to MemoryDB. After the product catalog data is migrated, the application reads and writes transactional data in Amazon Aurora PostgreSQL-Compatible Edition and product catalog data in MemoryDB. The following diagram illustrates this architecture.
Solution Architecture

Prerequisites

Make sure you complete the following prerequisite steps:

  1. Set up the AWS Command Line Interface (AWS CLI) to run commands for interacting with your AWS resources.
  2. Have the appropriate permissions to interact with resources in your AWS account.

Create resources with AWS CloudFormation

The AWS CloudFormation template for this solution deploys the following key resources:

  • An Amazon Aurora PostgreSQL-Compatible Edition cluster that serves as the migration source
  • An Amazon MemoryDB for Redis cluster that serves as the migration target
  • AWS DMS resources, including a replication instance, source and target endpoints, and the replicate-products migration task
  • An AWS Cloud9 environment (PostgreSQLInstance) for running the client tools

Use the AWS Pricing Calculator to estimate the cost before you run this solution. The resources deployed are not eligible for the Free Tier, but if you choose the stack defaults, as of February 2023, this solution has an hourly cost of $3.00 in the us-east-1 Region.

To create the resources, complete the following steps:

  1. Clone the GitHub project by running the following commands from your terminal:
    git clone https://github.com/aws-samples/aws-dms-postgresql-to-memorydb-migration.git
    
    cd aws-dms-postgresql-to-memorydb-migration
  2. Deploy AWS CloudFormation resources with the following code:
    aws cloudformation create-stack \
    --stack-name DMSPostgreSQLMemoryDB \
    --template-body \
    file://DMSPostgreSQLMemoryDBRedis.yaml \
    --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
    --region us-east-1

Provisioning the resources takes approximately 15–20 minutes to complete. If you plan to use a different stack name, replace DMSPostgreSQLMemoryDB in the install-db-tools.sh script with your stack name.

  3. You can ensure successful stack deployment by going to the AWS CloudFormation console and verifying that the status is CREATE_COMPLETE.

CF_Stacks

Migrate product catalog data

We want to migrate the product catalog data from the transactional database (Amazon Aurora PostgreSQL-Compatible Edition) to MemoryDB. To mimic this scenario, we first stage the data in Amazon Aurora PostgreSQL-Compatible Edition by importing it from an Amazon Simple Storage Service (Amazon S3) bucket. Then we migrate the data to MemoryDB using AWS DMS.

Stage data in Amazon Aurora PostgreSQL-Compatible Edition

Let’s complete the following steps to stage the data in Amazon Aurora PostgreSQL-Compatible Edition:

  1. On the AWS Cloud9 console, under My environments, select the environment PostgreSQLInstance.
  2. Choose Open in Cloud9 to access the AWS Cloud9 IDE.
    Cloud9_Environments
  3. In your AWS Cloud9 terminal, run the following command to clone the repository and install the required tools:
    git clone https://github.com/aws-samples/aws-dms-postgresql-to-memorydb-migration.git
  4. Navigate to the aws-dms-postgresql-to-memorydb-migration/scripts folder to install the client tools to access Amazon Aurora PostgreSQL-Compatible Edition and MemoryDB:
    cd aws-dms-postgresql-to-memorydb-migration/scripts
    sh install-db-tools.sh

The script takes approximately 5 minutes to install all the necessary tools. After the installation, your terminal window should look like the following screenshot.

cloud9_cli

  5. Initialize the environment variables by running the following command:
    source ~/.bashrc

Next, let’s stage the frequently used product catalog data in Amazon Aurora PostgreSQL-Compatible Edition. Remember, we are in the aws-dms-postgresql-to-memorydb-migration/scripts folder.

  6. Run the following script to stage the data:
    sh stage_data_in_aurora.sh
    This script downloads the data from an S3 bucket, creates a product_catalog table, and stages the data in Amazon Aurora PostgreSQL-Compatible Edition. The following screenshot shows the output of a successful run of the script.
    cloud9_cli2
  7. Connect to the Aurora PostgreSQL database to validate that the data has been staged in the product_catalog table by running the following command:
    sh connect_to_aurora_postgresql.sh
  8. After successfully connecting to the database, run the following SQL to make sure that the records were copied successfully:
    select count(*) from product_catalog;

The output on your terminal window should look like the following screenshot. After checking the count of records, exit from the SQL command prompt using the command \q.

cloud9_cli3

Migrate data to MemoryDB

In this section, we go through the steps to migrate the data from Amazon Aurora PostgreSQL-Compatible Edition to MemoryDB.
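The replicate-products task that we start in the steps below selects every table in the public schema through its table mappings. A representative selection rule looks like the following sketch (the actual rules deployed by the stack may differ slightly; you can review them on the task's Table mappings tab):

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-public-schema",
      "object-locator": {
        "schema-name": "public",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```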

  1. On the AWS DMS console, choose Database migration tasks in the navigation pane.
    dms_dashboard
  2. Select the task replicate-products and on the Actions menu, choose Restart/Resume. This AWS DMS task performs a full load of data from Aurora PostgreSQL to MemoryDB. It includes table mappings to copy all the tables and data within the public schema.
    dms_tasks
    Note: We encourage you to review the replication task and endpoint configurations to learn more.
  3. The data migration starts, and the AWS DMS task status is displayed as Running.
  4. While the data is getting copied, you can validate the data migration in MemoryDB by connecting to it using the following command:
    sh connect_to_memorydb.sh
  5. Once you are connected to MemoryDB, you can spot check the migration by retrieving a specific record from MemoryDB using the following command:
    hgetall public.product_catalog.30

The output on your terminal should be something like the following screenshot.
Final Output
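As the hgetall command above suggests, each migrated row lands in MemoryDB as a Redis hash whose key follows the schema.table.primary-key pattern, with one hash field per column. A minimal sketch of that naming convention (the column names below are illustrative, not taken from the sample dataset):

```python
def redis_key(schema: str, table: str, pk) -> str:
    """Build the hash key a migrated row lands under:
    <schema>.<table>.<primary-key value>."""
    return f"{schema}.{table}.{pk}"

# One migrated row becomes one Redis hash: one field per column.
row = {"product_id": "30", "product_name": "example-widget", "price": "19.99"}
print(redis_key("public", "product_catalog", 30))  # public.product_catalog.30
```

This is why the spot check retrieves the row with primary key 30 using the key public.product_catalog.30.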

Next steps

After you have migrated the frequently used product catalog data to MemoryDB, you need to modify your application code to read and write transactional data in Amazon Aurora PostgreSQL-Compatible Edition and product catalog data in MemoryDB. The frequently accessed product catalog data can now be read in a few microseconds, providing faster application performance and improving the overall end-user experience.
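The catalog read path then becomes a straight hash fetch from MemoryDB. The following sketch shows the shape of that lookup; in a real application you would create the client with redis-py against your MemoryDB cluster endpoint (with TLS enabled), but here a tiny in-memory stand-in keeps the example self-contained:

```python
class FakeRedis:
    """In-memory stand-in for a Redis client, used only for this sketch."""
    def __init__(self):
        self._store = {}

    def hset(self, key, mapping):
        self._store.setdefault(key, {}).update(mapping)

    def hgetall(self, key):
        # Like Redis, return an empty mapping for a missing key.
        return dict(self._store.get(key, {}))

def get_product(client, product_id):
    """Fetch a product-catalog entry by primary key, using the
    schema.table.pk key convention seen in the migrated data."""
    return client.hgetall(f"public.product_catalog.{product_id}")

# In production: client = redis.Redis(host="<cluster endpoint>", port=6379, ssl=True)
client = FakeRedis()
client.hset("public.product_catalog.30", mapping={"product_name": "example-widget"})
print(get_product(client, 30))  # {'product_name': 'example-widget'}
```

The field names here are hypothetical; your hashes will contain whatever columns the product_catalog table holds.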

Clean up

To avoid incurring ongoing charges, clean up your infrastructure by deleting the DMSPostgreSQLMemoryDB stack from the AWS CloudFormation console. Alternatively, you can use the following command to delete the CloudFormation stack:

aws cloudformation delete-stack --stack-name DMSPostgreSQLMemoryDB

Conclusion

A relational database alone may not scale well for every enterprise application. Applications that need microsecond read and single-digit millisecond write latency can benefit from using MemoryDB. In this post, we demonstrated how you can migrate data from Amazon Aurora PostgreSQL-Compatible Edition to Amazon MemoryDB for Redis using AWS DMS. Frequently looked-up data can be stored in MemoryDB and transactional data in Amazon Aurora PostgreSQL-Compatible Edition. To learn more about MemoryDB and its use cases, refer to Amazon MemoryDB for Redis.


About the authors

Kishore Dhamodaran is a Senior Solutions Architect at AWS. Kishore helps strategic customers with their cloud enterprise strategy and migration journey, leveraging his years of industry and cloud experience.

Prathap Thoguru is a Technical Leader and an Enterprise Solutions Architect at AWS. He’s an AWS certified professional in nine areas and specializes in data and analytics. He helps customers get started on and migrate their on-premises workloads to the AWS Cloud. He holds a Master’s degree in Information Technology from the University of Newcastle, Australia.

Kishore Vinjam is a Partner Solutions Architect focusing on AWS Service Catalog, AWS Control Tower, and AWS Marketplace. He is passionate about working in cloud technologies, working with customers, and building solutions for them. When not working, he likes to spend time with his family, hike, and play volleyball and ping-pong.

Sandeep Kashyap is a Principal Tech Business Development Manager with AWS marketplace. In his role, Sandeep works with customers to help them adopt cloud management best practices such as multi-account frameworks using AWS Services and partner solutions from AWS Marketplace. Sandeep also works with partners to develop Independent Software Vendor Solutions with AWS Services in the management and tools category.