This Guidance demonstrates how you can scale your web or mobile applications using a "read local, write global" approach to build a resilient, self-healing system that spans multiple AWS Regions. Within each Region, your application automatically scales on AWS-managed compute instances to meet fluctuating demand. A proxy service maintains your database connections, splitting reads and writes to optimize performance. If a Region fails, the system can quickly shift to a backup, with event-driven automation monitoring for issues and syncing your configuration.
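The "read local, write global" split described above can be sketched as follows. This is a minimal illustration with hypothetical endpoint names, not the proxy configuration itself; in practice, a proxy service holds the actual database connections.

```python
# Minimal sketch of "read local, write global" routing for an Aurora Global
# Database. Endpoint names below are hypothetical placeholders.

# All writes go to the single writer endpoint in the primary Region.
WRITER_ENDPOINT = "global-db.cluster-primary.us-east-1.rds.amazonaws.com"

# Reads are served by the reader endpoint local to the application's Region.
LOCAL_READER_ENDPOINTS = {
    "us-east-1": "global-db.cluster-ro.us-east-1.rds.amazonaws.com",
    "eu-west-1": "global-db.cluster-ro.eu-west-1.rds.amazonaws.com",
}

def route_statement(sql: str, app_region: str) -> str:
    """Send writes to the global writer; serve reads from the local replica."""
    is_read = sql.lstrip().lower().startswith(("select", "show", "explain"))
    if is_read:
        # Fall back to the writer if no local replica exists in this Region.
        return LOCAL_READER_ENDPOINTS.get(app_region, WRITER_ENDPOINT)
    return WRITER_ENDPOINT
```

A real proxy would also handle connection pooling, failover, and transactions that mix reads and writes; the sketch only shows the routing decision.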


Architecture Diagram

[Architecture diagram description]

Download the architecture diagram PDF 

Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many Well-Architected best practices as possible.

  • Operational excellence in a multi-Region context involves ensuring that your infrastructure operates smoothly across Regions. Amazon CloudWatch monitors key signals, including the performance of the Aurora Global Database, the Amazon EKS cluster (such as CPU and memory usage), and incoming requests routed through the Application Load Balancer. Performance Insights in Aurora provides deeper observation of database performance. This approach also coordinates Amazon EKS, Application Load Balancer, Aurora Global Database, and Global Accelerator to isolate faults to individual partitions, enhancing scalability and resilience and mitigating the rare but possible failure of an Availability Zone (AZ) or Region.

    Read the Operational Excellence whitepaper 
  • This Guidance uses various AWS services to protect resources and data. Amazon EKS employs Kubernetes Role-Based Access Control (RBAC) to manage access to cluster resources so that only authorized entities can interact with sensitive components. Network security is strengthened through AWS Transit Gateway and Global Accelerator, which provide secure communication channels between Regions and help mitigate distributed denial of service (DDoS) attacks. Aurora Global Database enhances data security with encryption at rest and in transit, safeguarding sensitive information from unauthorized access. Additionally, AWS Identity and Access Management (IAM) manages user permissions and access policies so that only authenticated and authorized users can interact with AWS resources.

    Read the Security whitepaper 
  • This Guidance incorporates redundancy and fault tolerance across multiple layers of the architecture. Amazon EKS uses multiple AZs to distribute workloads for high availability and fault tolerance. The use of Aurora Global Database enhances database reliability by replicating data across Regions, minimizing the risk of data loss, and helping to ensure continuity of operations in the event of a Regional outage. Additionally, Global Accelerator routes traffic to healthy endpoints, automatically rerouting traffic away from unhealthy or degraded resources to maintain service availability. Automated scaling mechanisms within Amazon EKS and Application Load Balancer help manage fluctuations in workload demand so that resources are dynamically allocated to meet performance requirements without manual intervention.

    Read the Reliability whitepaper 
  • This Guidance uses various AWS services and features to streamline resource utilization, enhance scalability, and minimize latency. Amazon EKS employs auto scaling to dynamically adjust compute resources based on workload demands. Add-ons such as the Horizontal Pod Autoscaler and Karpenter, together with Application Load Balancer, provide automatic, elastic scaling of applications and worker nodes as well as efficient traffic distribution across healthy targets.

    Aurora facilitates scaling database reads across Regions and positioning applications near users. Additionally, Aurora Optimized Reads for Aurora PostgreSQL improves query latency by up to 8x and reduces costs by up to 30% compared with instances that don't use it.

    Global Accelerator further improves network performance by up to 60% by routing application traffic through the AWS global network infrastructure, simplifying the management of multi-Region deployments with two static anycast IP addresses served from AWS's globally distributed edge locations.

    Read the Performance Efficiency whitepaper 
  • In this Guidance, cost optimization is achieved through the strategic use of AWS services and features, with a focus on maximizing efficiency while minimizing expenses. With Amazon EKS, the Horizontal Pod Autoscaler and Karpenter enable automatic scaling of applications and worker nodes, optimizing resource allocation to match varying demand.

    Aurora plays a pivotal role in cost optimization by offering two storage configurations tailored to specific workload requirements. The Aurora Standard configuration delivers cost-effective pricing for applications with moderate I/O usage, while the Aurora I/O-Optimized configuration provides predictable pricing for I/O-intensive workloads, supporting optimal performance without overspending. Aurora Auto Scaling dynamically adjusts the number of read replicas in response to workload fluctuations, using resources efficiently and avoiding unnecessary expense.

    Furthermore, using AWS Graviton processors, specifically Graviton3-based instances, on Aurora and Amazon EKS optimizes price-to-performance, offering significant cost savings while maintaining high performance.

    Read the Cost Optimization whitepaper 
  • This Guidance deploys and integrates an Amazon EKS cluster and an Aurora Global Database in the AWS Cloud, so there is no need to procure any physical hardware. Capacity providers keep virtual infrastructure provisioning to a minimum, scaling automatically only when workload demand requires it.

    Every pod running on the Amazon EKS cluster, as well as the Aurora Global Database, consumes memory, CPU, I/O, and other resources.

    Furthermore, by supporting energy-efficient processor instance types, such as AWS Graviton processors, this Guidance improves sustainability. Using Graviton with Amazon EC2 and Aurora can deliver the same workload performance with fewer resources, decreasing your overall resource footprint.

    Read the Sustainability whitepaper 
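The Horizontal Pod Autoscaler mentioned under the performance and cost pillars above scales replicas using a simple, documented formula. The sketch below reproduces that calculation in Python; the replica bounds in the example call are illustrative assumptions.

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float,
                         min_replicas: int = 1,
                         max_replicas: int = 10) -> int:
    """Kubernetes HPA core formula:
    desired = ceil(currentReplicas * currentMetric / targetMetric),
    clamped to the configured min/max replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# Example: 3 replicas averaging 90% CPU against a 60% target scale out to 5.
print(hpa_desired_replicas(3, 90.0, 60.0))  # → 5
```

The real controller adds tolerances and stabilization windows around this formula, but the proportional scaling behavior is the same.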
Blog

Scale applications using multi-Region Amazon EKS and Amazon Aurora Global Database: Part 1

Part 1 of this blog post series demonstrates the architecture patterns and design attributes of a multi-Region application.
Blog

Scale applications using multi-Region Amazon EKS and Amazon Aurora Global Database: Part 2

Part 2 showcases how to implement the solution for a retail website using microservices running on Amazon EKS clusters in multiple Regions, with Aurora Global Database (PostgreSQL-compatible edition) for transactional data persistence and low-latency local reads.
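As a rough illustration of the health-based routing described under the Reliability pillar above, the endpoint selection that Global Accelerator performs can be approximated like this. The Region names and health data are hypothetical, and the real service operates at the network layer rather than in application code.

```python
# Hedged sketch of health-based endpoint selection: prefer the nearest
# Regional endpoint, fail over to the next healthy one.

def pick_endpoint(preferred_order: list[str], health: dict[str, bool]) -> str:
    """Return the first healthy endpoint in preference order (nearest first);
    fall back to the last endpoint if none report healthy."""
    for endpoint in preferred_order:
        if health.get(endpoint, False):
            return endpoint
    return preferred_order[-1]

# Example: the primary Region is unhealthy, so traffic shifts to the secondary.
print(pick_endpoint(["us-east-1", "eu-west-1"],
                    {"us-east-1": False, "eu-west-1": True}))  # → eu-west-1
```

Traffic returns to the preferred endpoint automatically once its health checks pass again, which is the self-healing behavior the Guidance describes.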

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.
