Easily migrate large data workloads using AWS migration features
This Guidance demonstrates best practices for migrating large Oracle databases to Amazon Aurora PostgreSQL, helping you modernize your data infrastructure cost-effectively. It uses AWS Database Migration Service (AWS DMS) to streamline the migration process, reducing operational overhead and minimizing downtime. With this Guidance, you can optimize costs, enhance scalability, and set yourself up for long-term growth and adaptability while benefiting from the robust security and compliance capabilities of AWS.
Note: See the Disclaimer section at the end of this page.
Architecture Diagram
[Architecture diagram description]
Step 1
Download the AWS CloudFormation template from the GitHub repository and deploy the CloudFormation stack.
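If you prefer to deploy from the command line instead of the console, the stack can be created with the AWS SDK for Python (Boto3). This is a minimal sketch; the template file name and stack name are placeholders, and the actual names come from the GitHub repository.

import boto3

cloudformation = boto3.client("cloudformation")

# Read the template downloaded from the GitHub repository
# (file name is a placeholder; use the template name from the repo).
with open("oracle-to-aurora-dms.yaml") as f:
    template_body = f.read()

# Create the stack. The capability is typically required because a stack
# like this one is assumed to create IAM roles for AWS DMS and Amazon EC2.
cloudformation.create_stack(
    StackName="dms-oracle-to-aurora",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)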
Step 2
The CloudFormation stack deploys an Amazon Elastic Compute Cloud (Amazon EC2) instance, an Amazon Relational Database Service (Amazon RDS) for Oracle instance, an Amazon Aurora PostgreSQL cluster, AWS Secrets Manager secrets for the database credentials, and the AWS Database Migration Service (AWS DMS) replication infrastructure and tasks.
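Once the stack is launched, you can wait for creation to finish and list the resources it created. A short Boto3 sketch, assuming the placeholder stack name from the previous step:

import boto3

cloudformation = boto3.client("cloudformation")

# Block until the stack reaches CREATE_COMPLETE.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="dms-oracle-to-aurora")

# List the provisioned resources (EC2 instance, RDS for Oracle instance,
# Aurora PostgreSQL cluster, Secrets Manager secrets, and DMS resources).
resources = cloudformation.describe_stack_resources(StackName="dms-oracle-to-aurora")
for resource in resources["StackResources"]:
    print(resource["ResourceType"], resource["PhysicalResourceId"])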
Step 3
Connect to the Amazon EC2 bastion host using AWS Systems Manager. Clone the GitHub repository to the host and run the data load procedures.
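If you prefer not to open an interactive session, the same commands can be sent to the bastion host through Systems Manager Run Command. In this sketch the instance ID, repository URL, and load script name are placeholders; substitute the values from your deployment and the repository's README.

import boto3

ssm = boto3.client("ssm")

# Clone the repository and run the data load on the bastion host.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunShellScript",
    Parameters={
        "commands": [
            "git clone https://github.com/aws-solutions-library-samples/<repo-name>.git",
            "cd <repo-name> && ./run_data_load.sh",
        ]
    },
)
print(response["Command"]["CommandId"])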
Step 4
Start the AWS DMS full-load task. AWS DMS is preconfigured with sample tasks that use the auto-parallelism feature for partitioned tables to improve full-load performance.
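The preconfigured full-load task can be started from the console or through the AWS DMS API. A minimal Boto3 sketch, assuming you substitute the task ARN created by the CloudFormation stack:

import boto3

dms = boto3.client("dms")

# Start the preconfigured full-load task (ARN is a placeholder).
dms.start_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:111122223333:task:EXAMPLEFULLLOAD",
    StartReplicationTaskType="start-replication",
)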
Step 5
Monitor the full-load task in the AWS DMS section of the AWS Management Console. Once it completes, proceed with the change data capture (CDC) task.
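Task status and full-load progress can also be polled programmatically, for example with Boto3 (the task ARN is a placeholder):

import boto3

dms = boto3.client("dms")

# Look up the full-load task by ARN and report its progress.
tasks = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn",
              "Values": ["arn:aws:dms:us-east-1:111122223333:task:EXAMPLEFULLLOAD"]}]
)
task = tasks["ReplicationTasks"][0]
stats = task.get("ReplicationTaskStats", {})
print(task["Status"], stats.get("FullLoadProgressPercent"), stats.get("TablesLoaded"))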
Step 6
Start the AWS DMS CDC task, noting the non-default task settings used for higher performance.
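The exact settings used by this Guidance are defined in its CloudFormation templates, but non-default CDC tuning typically involves documented AWS DMS task settings such as batch apply. The values below are illustrative only, not the Guidance's actual configuration; the task ARN is a placeholder, and the task must be stopped before its settings are modified.

import json
import boto3

dms = boto3.client("dms")

# Example non-default CDC task settings (illustrative values only).
cdc_settings = {
    "TargetMetadata": {
        "BatchApplyEnabled": True,
    },
    "ChangeProcessingTuning": {
        "BatchApplyTimeoutMin": 1,
        "BatchApplyTimeoutMax": 30,
        "BatchApplyMemoryLimit": 500,
    },
}

dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:111122223333:task:EXAMPLECDC",
    ReplicationTaskSettings=json.dumps(cdc_settings),
)

# Start the CDC task.
dms.start_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:111122223333:task:EXAMPLECDC",
    StartReplicationTaskType="start-replication",
)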
Step 7
Monitor the CDC task in the AWS DMS section of the AWS Management Console.
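In addition to the console, CDC progress can be tracked through the CDCLatencySource and CDCLatencyTarget metrics that AWS DMS publishes to Amazon CloudWatch. A sketch with placeholder replication instance and task identifiers:

from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

# Retrieve target CDC latency (in seconds) for the last hour.
now = datetime.now(timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/DMS",
    MetricName="CDCLatencyTarget",
    Dimensions=[
        {"Name": "ReplicationInstanceIdentifier", "Value": "dms-replication-instance"},
        {"Name": "ReplicationTaskIdentifier", "Value": "EXAMPLECDC"},
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])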
Get Started
Deploy this Guidance
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
CloudFormation provisions the AWS resources needed for the migration, simplifying provisioning and operations management. Aurora emits performance and utilization metrics to Amazon CloudWatch, and AWS DMS provides metrics for monitoring the migration tasks, including performance statistics for the tasks, the source and target databases, and the replication instance. These tools help you proactively monitor and optimize the migration.
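As an example of proactive monitoring, you could create a CloudWatch alarm on a DMS task metric such as CDCLatencyTarget so that sustained replication lag is surfaced automatically. The identifiers and threshold below are placeholders; adjust them to your workload.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when target CDC latency stays above 5 minutes for 15 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="dms-cdc-latency-target-high",
    Namespace="AWS/DMS",
    MetricName="CDCLatencyTarget",
    Dimensions=[
        {"Name": "ReplicationInstanceIdentifier", "Value": "dms-replication-instance"},
        {"Name": "ReplicationTaskIdentifier", "Value": "EXAMPLECDC"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=300,
    ComparisonOperator="GreaterThanThreshold",
)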
Security
Amazon Virtual Private Cloud (Amazon VPC), Amazon EC2 security groups, AWS Identity and Access Management (IAM), and AWS Key Management Service (AWS KMS) work collectively to enhance security in this Guidance. Using security groups and IAM policies, access is granted to services based on the principle of least privilege. For example, only the required ports are allowed for AWS DMS replication between the source (Oracle) and target (Aurora PostgreSQL) instances. Secrets Manager stores the database credentials, which are retrieved programmatically, so passwords are never hard coded.
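For example, a migration script or application can fetch the database credentials from Secrets Manager at run time instead of embedding them. The secret name and JSON keys below are placeholders for the secrets created by the CloudFormation stack.

import json
import boto3

secretsmanager = boto3.client("secretsmanager")

# Fetch credentials at run time; nothing is hard coded in the application.
secret = secretsmanager.get_secret_value(SecretId="dms-guidance/aurora-postgresql")
credentials = json.loads(secret["SecretString"])
username = credentials["username"]
password = credentials["password"]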
Reliability
Amazon RDS for Oracle, Aurora PostgreSQL, and AWS DMS support multi-Availability Zone (AZ) configurations for high availability and automatic failover, minimizing interruptions due to AZ failures. AWS DMS tasks are configured to use the Amazon RDS and Aurora cluster endpoints for seamless failover support in case of any issues with the primary database instance. Aurora replicates your data across three AZs in a Region, providing high durability. Both Amazon RDS and Aurora support database backups through manual and automatic snapshots for protection against accidental deletion and logical corruption.
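For instance, the AWS DMS target endpoint should reference the Aurora cluster (writer) endpoint rather than an individual instance endpoint, so replication follows the cluster through a failover. A short check with a placeholder cluster identifier:

import boto3

rds = boto3.client("rds")

# Confirm the cluster (writer) endpoint that the AWS DMS target endpoint
# should reference, along with the reader endpoint and Multi-AZ status.
cluster = rds.describe_db_clusters(
    DBClusterIdentifier="aurora-postgresql-target")["DBClusters"][0]
print("Writer endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])
print("Multi-AZ:", cluster["MultiAZ"])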
Performance Efficiency
This Guidance provides best practices for sizing the AWS DMS replication instances based on the source database's transactional load, data size, and number of database objects. The AWS DMS task configurations, including the number of tasks, tasks per replication instance, threads, parallelism, and transformation rules, are optimized to achieve maximum throughput for both full-load and CDC migrations. Aurora supports Graviton3-based R7g instances with 50 percent more memory bandwidth than the previous generation, enabling high-speed access to data in memory.
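As an illustration, parallel load of a partitioned table is enabled through a table-settings rule in the task's table mappings, and full-load concurrency is controlled by settings such as MaxFullLoadSubTasks. The schema, table name, and values below are hedged examples and may differ from those in the Guidance's templates.

import json

# Example table mappings: include the SALES schema and load the
# partitions of SALES.ORDERS in parallel.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sales",
            "object-locator": {"schema-name": "SALES", "table-name": "%"},
            "rule-action": "include",
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "orders-parallel-load",
            "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
            "parallel-load": {"type": "partitions-auto"},
        },
    ]
}

# Example full-load setting: raise the number of tables loaded in parallel.
full_load_settings = {"FullLoadSettings": {"MaxFullLoadSubTasks": 16}}

# Both structures are passed to the task as JSON strings, for example via
# the TableMappings and ReplicationTaskSettings parameters of the DMS API.
print(json.dumps(table_mappings, indent=2))
print(json.dumps(full_load_settings, indent=2))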
Cost Optimization
Amazon RDS and Aurora database instances can be scaled up or down to meet migration goals. The Aurora Standard configuration delivers cost-effective pricing for applications with moderate I/O usage, while the Aurora I/O-Optimized configuration provides predictable pricing for I/O-intensive workloads, supporting optimal performance without overspending. Amazon RDS for Oracle supports a range of instance types to match your performance and cost needs.
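For example, the target Aurora writer instance can be scaled to a larger class for the bulk load and scaled back down afterwards, and the cluster's storage configuration can be switched between Aurora Standard and Aurora I/O-Optimized. The identifiers and instance class below are placeholders.

import boto3

rds = boto3.client("rds")

# Scale the Aurora writer instance up for the full load; scale it back
# down the same way once the migration completes.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-postgresql-writer",
    DBInstanceClass="db.r7g.2xlarge",
    ApplyImmediately=True,
)

# Switch the cluster to Aurora I/O-Optimized for an I/O-heavy migration
# window ("aurora" is the Aurora Standard storage type).
rds.modify_db_cluster(
    DBClusterIdentifier="aurora-postgresql-target",
    StorageType="aurora-iopt1",
)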
Sustainability
Aurora PostgreSQL, AWS DMS, and Amazon EC2 can scale up or down to match load, supporting sustainable resource utilization. Furthermore, this Guidance supports energy-efficient instance types based on AWS Graviton processors. Using Graviton-based instances for Amazon EC2 and Aurora can improve workload performance while using fewer resources, decreasing your overall resource footprint.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.