AWS Database Blog

Scale applications using multi-Region Amazon EKS and Amazon Aurora Global Database: Part 2

This is the second in a two-part series about scaling applications globally using multi-Region Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Aurora Global Database. In Part 1, you learned the architecture patterns and foundational pillars of a multi-Region application design. In this post, we use the read local and write global design pattern to scale your multi-Region applications and build in resiliency and automatic failover using Amazon EKS, AWS Global Accelerator, and Aurora Global Database. This solution can benefit many industry verticals that are expanding globally, including retail and financial services such as banking, capital markets, fintech, insurance, and payments. This post provides a template for how organizations can modernize and scale their applications into multiple Regions while providing a resiliency and disaster recovery strategy.

Overview of solution

The following architecture diagram shows the components used for this solution.

We configure both Regions using the read local and write global design pattern. We start by creating an Amazon EKS cluster and an Amazon Aurora global database with PostgreSQL compatibility in the us-east-2 and us-west-2 Regions. We use PgBouncer, an open-source connection pooler, for database connection pooling and for handling planned Amazon Aurora global database failovers. We then deploy the application stack, which consists of stateless containerized applications, on the EKS clusters in both Regions and expose the application endpoint through an Application Load Balancer in each Region. Finally, we configure AWS Global Accelerator with the load balancers as endpoints.
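
To make the read local and write global pattern concrete, here is a minimal sketch of the [databases] section that PgBouncer ends up with in the secondary Region (us-west-2) once the solution is deployed; the host names below are placeholders, and the actual pgbouncer.ini files are generated by the deployment scripts later in this post. Writes always go through the Aurora cluster (writer) endpoint in the primary Region, while reads use the reader endpoint of the Aurora cluster local to the application:

[databases]
; writes: Aurora cluster (writer) endpoint in the primary Region (placeholder host)
gdbdemo = host=<primary-cluster-endpoint>.us-east-2.rds.amazonaws.com port=5432 dbname=eksgdbdemo
; reads: Aurora reader endpoint in the application's own Region (placeholder host)
gdbdemo-ro = host=<local-cluster-ro-endpoint>.us-west-2.rds.amazonaws.com port=5432 dbname=eksgdbdemo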

Prerequisites

To follow along with this tutorial, you should have the following prerequisites:

Deploy the solution

To deploy the solution, complete the following steps:

  1. First, launch AWS CloudFormation stacks from AWS CloudShell to set up the following resources in the us-east-2 and us-west-2 Regions:
    • An Amazon Aurora database cluster with a writer node in us-east-2.
    • An Amazon Aurora global database with the secondary Region in us-west-2.
    • VPC peering between us-east-2 and us-west-2 so our applications can securely connect to the Amazon Aurora PostgreSQL-Compatible Edition database across Regions. You can also connect applications over a private network using AWS Transit Gateway inter-Region peering; review Building a global network using AWS Transit Gateway Inter-Region peering for additional details. We use VPC peering to keep this solution simple (an illustrative sketch of the equivalent AWS CLI calls follows the script below).
    Run the following from AWS CloudShell in the us-east-2 Region to set your AWS access credentials, clone the Git repository, and launch the AWS CloudFormation stacks. This step takes approximately 30 minutes.
      # Replace AWS Access Key ID and Access Key
      # Set REGION1 to source Region
      # Set REGION2 to target Region
      export AWS_ACCESS_KEY_ID=<key id> 
      export AWS_SECRET_ACCESS_KEY=<access key> 
      export AWS_DEFAULT_REGION=us-east-2 
      export REGION1=us-east-2
      export REGION2=us-west-2
      git clone https://github.com/aws-samples/eks-aurora-global-database.git 
      cd eks-aurora-global-database 
      ./auroraglobaldb_eks.sh setup_env
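    For reference, the cross-Region VPC peering that the stacks create is conceptually equivalent to the following AWS CLI calls; the VPC IDs, route table IDs, and CIDR blocks are placeholders, and the CloudFormation templates handle this for you:
      # Request peering from the us-east-2 VPC to the us-west-2 VPC (placeholder IDs)
      aws ec2 create-vpc-peering-connection --region us-east-2 \
          --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222 --peer-region us-west-2
      # Accept the peering request in us-west-2 (placeholder peering connection ID)
      aws ec2 accept-vpc-peering-connection --region us-west-2 \
          --vpc-peering-connection-id pcx-33333333
      # Add routes in each Region's route tables for the peer VPC CIDR (placeholder values)
      aws ec2 create-route --region us-east-2 --route-table-id rtb-aaaaaaaa \
          --destination-cidr-block 10.50.0.0/16 --vpc-peering-connection-id pcx-33333333
      aws ec2 create-route --region us-west-2 --route-table-id rtb-bbbbbbbb \
          --destination-cidr-block 10.40.0.0/16 --vpc-peering-connection-id pcx-33333333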
  2. Next, run the following to get the AWS Cloud9 URL from the CloudFormation stack outputs in both Regions, us-east-2 and us-west-2:
    for region in "${REGION1}" "${REGION2}"; do 
    cft=$(aws cloudformation describe-stacks --stack-name EKSGDB1 --region $region --query 'Stacks[].Outputs[?(OutputKey == `Cloud9IDEURL`)][].{OutputValue:OutputValue}' --output text) 
    echo $region:  $cft 
    done
  3. Connect to the AWS Cloud9 terminal through a web browser for each Region using the preceding URL from Step 2.
  4. Choose Settings, AWS Settings, and disable AWS managed temporary credentials.
  5. Run the following code from both AWS Cloud9 terminals in both Regions, one at a time, to deploy the following resources:
    • Amazon EKS clusters in both Regions
    • Cluster Autoscaler, Horizontal Pod Autoscaler, and AWS Load Balancer Controller on Amazon EKS in both Regions.
      # Set REGION1 to source Region
      # Set REGION2 to target Region
      cd ~/environment/eks-aurora-global-database
      export REGION1=us-east-2
      export REGION2=us-west-2
      ./auroraglobaldb_eks.sh
  6. Next, we deploy the PgBouncer database connection pooler and a ClusterIP service for PgBouncer on Amazon EKS, and create a sample database schema on the Amazon Aurora PostgreSQL cluster. The same step provisions the retail application microservices to the EKS clusters in us-east-2 and us-west-2; the microservices in each Region read from the readers of their local Regional Aurora database cluster for better performance. Specifically, this step does the following:
    • Set up the PgBouncer database connection pooler deployment on Amazon EKS in both Regions.
    • Set up retail application deployments on Amazon EKS in both Regions.
    • Set up ClusterIP service for PgBouncer on Amazon EKS in both Regions.
    • Set up Ingress for retail application Pods on Amazon EKS in both Regions.
    • Set up an AWS Lambda function AuroraGDBPgbouncerUpdate on both Regions to synchronize PgBouncer configuration with the respective Aurora writer endpoint and reader endpoints.
    • Set up an Amazon EventBridge event rule AuroraGDBPgBouncerUpdate on the default event bus in both Regions for the event category global-failover (event ID RDS-EVENT-0185). The event rule targets the Lambda function AuroraGDBPgbouncerUpdate to synchronize the PgBouncer configuration when the Amazon Aurora global database is failed over across Regions; an illustrative sketch of such a rule appears at the end of this procedure. For more on scaling applications with event-driven architecture, review the post Building an event-driven application with Amazon EventBridge.
    Run the following from the AWS Cloud9 terminals in Regions us-east-2 and us-west-2, one at a time:
      cd retailapp; make
      cd ..
      # Set REGION1 to source Region
      # Set REGION2 to target Region
      export REGION1=us-east-2
      export REGION2=us-west-2
      ./auroraglobaldb_eks.sh configure-retailapp

      The retail application deployment manifest consists of various microservices such as webapp, product, order, user, and kart. It deploys ClusterIP Kubernetes services for the internal microservices, and a NodePort service plus an Ingress for the external, website-facing microservice. The Ingress creates an internet-facing Application Load Balancer for the retail website through the AWS Load Balancer Controller; an illustrative sketch of such an Ingress follows.
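
The following is a minimal sketch of an Ingress of this kind; the annotations shown are typical AWS Load Balancer Controller settings, the repository's actual manifest may differ, and the --dry-run=client flag only validates the manifest locally without changing the cluster:

# Illustrative only: the deployment step above already applies the real manifest
cat <<'EOF' | kubectl apply --dry-run=client -n retailapp -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webappnp
            port:
              number: 80
EOF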

  7. Next, run the following code from the AWS Cloud9 terminals in both Regions, us-east-2 and us-west-2:
    kubectl get all -n retailapp
    kubectl get ingress -n retailapp
    
    # Output should look like below
    NAME                                        READY   STATUS    RESTARTS   AGE
    pod/kart-deployment-fdd4564fd-fs29k         1/1     Running   0          8h
    pod/kart-deployment-fdd4564fd-nrgrp         1/1     Running   0          8h
    pod/order-deployment-6b649877ff-vrvsf       1/1     Running   0          8h
    pod/order-deployment-6b649877ff-vs24r       1/1     Running   0          8h
    pod/pgbouncer-deployment-868cf88754-4kbs9   1/1     Running   0          8h
    pod/pgbouncer-deployment-868cf88754-92xft   1/1     Running   0          8h
    pod/product-deployment-8656cfbf8d-dt84z     1/1     Running   0          8h
    pod/product-deployment-8656cfbf8d-r5dlj     1/1     Running   0          8h
    pod/user-deployment-6d9dc5d45b-65z9n        1/1     Running   0          8h
    pod/user-deployment-6d9dc5d45b-n7kt6        1/1     Running   0          8h
    pod/webapp-544fffd888-c8hfx                 1/1     Running   0          8h
    pod/webapp-544fffd888-dq2zt                 1/1     Running   0          8h
    NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    service/kart            ClusterIP   172.20.252.177   <none>        8445/TCP       8h
    service/order           ClusterIP   172.20.88.53     <none>        8448/TCP       8h
    service/product         ClusterIP   172.20.66.101    <none>        8444/TCP       8h
    service/retailapp-pgb   ClusterIP   172.20.113.156   <none>        6432/TCP       8h
    service/user            ClusterIP   172.20.173.124   <none>        8446/TCP       8h
    service/webappnp        NodePort    172.20.2.152     <none>        80:32681/TCP   8h
    NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/kart-deployment        2/2     2            2           8h
    deployment.apps/order-deployment       2/2     2            2           8h
    deployment.apps/pgbouncer-deployment   2/2     2            2           8h
    deployment.apps/product-deployment     2/2     2            2           8h
    deployment.apps/user-deployment        2/2     2            2           8h
    deployment.apps/webapp                 2/2     2            2           8h
    NAME                                              DESIRED   CURRENT   READY   AGE
    replicaset.apps/kart-deployment-fdd4564fd         2         2         2       8h
    replicaset.apps/order-deployment-6b649877ff       2         2         2       8h
    replicaset.apps/pgbouncer-deployment-868cf88754   2         2         2       8h
    replicaset.apps/product-deployment-8656cfbf8d     2         2         2       8h
    replicaset.apps/user-deployment-6d9dc5d45b        2         2         2       8h
    replicaset.apps/webapp-544fffd888                 2         2         2       8h
    NAME     CLASS    HOSTS   ADDRESS                                                                PORTS   AGE
    webapp   <none>   *       k8s-retailap-webapp-e44963c64a-459733736.us-east-2.elb.amazonaws.com   80      8h
    
  8. Run the following in both Regions, us-east-2 and us-west-2, to verify the API health check using the /healthcheck call:
    url=$(kubectl get ingress webapp -n retailapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    curl $url/healthcheck
    
    # Output should look like below
    {
       "status": "success"
     }
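
For reference, the EventBridge rule created in Step 6 to react to Aurora global database failover events is conceptually similar to the following AWS CLI calls. The account ID and ARNs are placeholders, the event pattern is illustrative, and the deployment script creates the actual rule and permissions for you:

# Match Aurora global database failover events (category global-failover) on the default event bus
aws events put-rule --name AuroraGDBPgBouncerUpdate \
    --event-pattern '{"source": ["aws.rds"], "detail-type": ["RDS DB Cluster Event"], "detail": {"EventCategories": ["global-failover"]}}'
# Invoke the PgBouncer synchronization Lambda function when the rule matches (placeholder ARN)
aws events put-targets --rule AuroraGDBPgBouncerUpdate \
    --targets 'Id=1,Arn=arn:aws:lambda:us-east-2:111122223333:function:AuroraGDBPgbouncerUpdate'
# Allow EventBridge to invoke the function (placeholder ARNs)
aws lambda add-permission --function-name AuroraGDBPgbouncerUpdate \
    --statement-id AuroraGDBPgBouncerUpdate --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-2:111122223333:rule/AuroraGDBPgBouncerUpdate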

Configure AWS Global Accelerator

Now that our retail application service works in both Regions, we need to direct Internet traffic to the application. Using DNS is one way to do this, but DNS can be problematic during failover events due to propagation times and client-side caching. For this solution, we chose to use AWS Global Accelerator, which can switch traffic routes without requiring DNS changes.

AWS Global Accelerator is a networking service that sends application traffic through AWS’s global network infrastructure, improving network performance by up to 60%. It also makes it easier to operate multi-Region deployments by providing two static IP addresses that are anycast from AWS’s globally distributed edge locations, giving you a single entry point to your application regardless of how many Regions it is deployed in.
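
The script in the next step creates the accelerator, a TCP listener on port 80, and one endpoint group per Region that points at the respective Application Load Balancer. For reference, a minimal sketch of the equivalent AWS CLI calls looks like the following; the accelerator name and ARNs are placeholders, and the Global Accelerator API is always called in us-west-2 regardless of where your workloads run:

# Create the accelerator (the Global Accelerator API is served from us-west-2)
aws globalaccelerator create-accelerator --name eks-aurora-gdb \
    --ip-address-type IPV4 --enabled --region us-west-2
# Add a TCP listener on port 80
aws globalaccelerator create-listener --accelerator-arn <accelerator-arn> \
    --protocol TCP --port-ranges FromPort=80,ToPort=80 --region us-west-2
# Register the Application Load Balancer in each Region as an endpoint group (placeholder ALB ARNs)
aws globalaccelerator create-endpoint-group --listener-arn <listener-arn> \
    --endpoint-group-region us-east-2 \
    --endpoint-configurations EndpointId=<us-east-2-alb-arn>,Weight=100 --region us-west-2
aws globalaccelerator create-endpoint-group --listener-arn <listener-arn> \
    --endpoint-group-region us-west-2 \
    --endpoint-configurations EndpointId=<us-west-2-alb-arn>,Weight=100 --region us-west-2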

  1. Run the following script in your AWS Cloud9 terminal in the us-west-2 Region. The script configures AWS Global Accelerator and returns the DNS name of your accelerator:
    # Set REGION1 to source Region
    # Set REGION2 to target Region
    export REGION1=us-east-2
    export REGION2=us-west-2
    ./auroraglobaldb_eks.sh global-accelerator
    # Output should look like below
    Global Accelerator ARN : arn:aws:globalaccelerator::xxxxx:accelerator/4e0ef161-05dc-46d5-b0ef-fe40e7e96e56
    Global Accelerator Listener ARN : arn:aws:globalaccelerator::xxxxx:accelerator/4e0ef161-05dc-46d5-b0ef-fe40e7e96e56/listener/01881098
    Global Accelerator DNS Name: xxyyzz.awsglobalaccelerator.com
    Checking deployment status
    Global Accelerator deployment status IN_PROGRESS
    ....
    Global Accelerator deployment completed. DNS Name: xxyyzz.awsglobalaccelerator.com
  2. Next, run the apiproduct API call from the AWS Cloud9 terminal in both Regions, us-east-2 and us-west-2, using the AWS Global Accelerator DNS name from the previous step (Step 1). The API call returns the IP address of the Aurora writer node, the IP address of the reader node, and the application locality, and is routed to the retail application webapp microservice in the closest Region. The application uses the PgBouncer database connection pooler for database connection scaling. PgBouncer is configured with two databases: one for the writer, which connects to the Aurora cluster endpoint in the primary Region us-east-2, and one for the reader, which connects to the Aurora reader endpoint in the Region local to the application. Run the following API call in your AWS Cloud9 terminal in us-east-2; the call is routed to the retail application webapp microservice in us-east-2:
    # Use AWS Global Accelerator DNS name from Step 1
    curl xxyyzz.awsglobalaccelerator.com/apiproduct
    
    # Output should look like below
    {
      "Aurora": {
         "reader": {
            "inet_server_addr": "10.40.30.164" <---- Aurora Reader in region us-east-2
             },
         "writer": {
            "inet_server_addr": "10.40.30.164" <---- Aurora Writer in region us-east-2
            }
       },
      "Lab": "Amazon EKS & Aurora Global Database Workshop",
      "instanceId": "i-0c067696fdbe33e9a",
      "region": "us-east-2"                  <----- EKS/Container in region us-east-2
    }
  3. Run the following API call from the AWS Cloud9 terminal in the us-west-2 Region. The call is routed to the retail application webapp microservice in us-west-2:
    # Use AWS Global Accelerator DNS name from Step 1
    curl xxyyzz.awsglobalaccelerator.com/apiproduct
    
    # Output should look like below
    {
    "Aurora": {
       "reader": {
          "inet_server_addr": "10.50.40.11"  <---- Aurora Reader in region us-west-2
        },
       "writer": {
          "inet_server_addr": "10.40.30.164" <---- Aurora Writer in region us-east-2
        }
      },
      "Lab": "Amazon EKS & Aurora Global Database Workshop",
      "instanceId": "i-023a7f839a2c302c7",
      "region": "us-west-2"                  <----- EKS/Container in region us-west-2
    }

     In this example, the API call returns the application Region and the Amazon Aurora writer and local reader endpoints. You should see a response originating from the Region closest to you, because AWS Global Accelerator sends traffic to the Amazon EKS cluster in the nearest AWS Region.

  4. Run the following API calls from the AWS Cloud9 terminal in us-east-2 and us-west-2 to confirm that the application works and can retrieve product information and create new orders:
    kubectl exec -ti deployment/product-deployment -n retailapp -- curl http://product.retailapp.svc.cluster.local:8444/products/view?id=2
    
    # Output should look like below
    {
       "product_items": [
       {
         "description": ..
         .....
       }
    
    kubectl exec -ti deployment/webapp -n retailapp -- bash
    curl --request POST --header "Content-type: application/json" --data '{"email": "test1@test1.com", "items": [{"order_id":100, "item_id": 2, "qty": 1, "unit_price": 42.95}]}' http://order.retailapp.svc.cluster.local:8448/order/add
    exit
    
    # Output should look like below
    {
      "order_details": {
      "email": "test1@test1.com",
      "items": [
        {
          "item_id": 2,
          "order_id": 100,
          "qty": 1,
          "unit_price": 42.95
         }
       ],
      "order_id": 4
      },
      "status_code": 200,
      "title": "Orders"
    }
    
  5. Use a web browser such as Chrome and open the retail application interface using the AWS Global Accelerator DNS name (from Step 1).
    The retail application has been deployed to both Regions and is fully functional. We now perform scalability and disaster recovery tests.

Application scalability test

The Kubernetes autoscaling mechanism offers node-based scaling through the Cluster Autoscaler and pod-based scaling through the Horizontal Pod Autoscaler (HPA). This example focuses on pod-based scaling using HPA.

To test application scalability, we set up the Horizontal Pod Autoscaler (HPA) for the retail application webapp deployment on Amazon EKS and generate synthetic load on the retail application using the Apache HTTP server benchmarking utility (ab). The benchmark sends requests from 50 concurrent clients for up to 300 seconds against our retail application website in us-east-2. This CPU-intensive stress test causes HPA to scale out the application when CPU utilization crosses the 50% threshold.
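
If you prefer declarative configuration, the kubectl autoscale command in Step 1 below is equivalent to applying a HorizontalPodAutoscaler manifest like the following minimal sketch. It assumes Kubernetes 1.23 or later (for the autoscaling/v2 API), a running metrics server, and CPU requests defined on the webapp deployment; use either this manifest or the kubectl autoscale command, not both:

# Declarative equivalent of: kubectl autoscale deployment webapp --cpu-percent=50 --min=2 --max=20 -n retailapp
cat <<'EOF' | kubectl apply -n retailapp -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
EOF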

  1. Run the following in your AWS Cloud9 terminal in both us-east-2 and us-west-2 to create an autoscaler for the webapp deployment, with target CPU utilization set to 50% and the number of replicas between 2 and 20:
    kubectl autoscale deployment webapp --cpu-percent=50 --min=2 --max=20 -n retailapp
    
    # Output should look like below
    horizontalpodautoscaler.autoscaling/webapp autoscaled
  2. Next, run the following in an AWS Cloud9 terminal in us-east-2 to perform the stress test. Use two terminal windows in your AWS Cloud9 environment in us-east-2: one to watch HPA and another to generate load. HPA scales out the webapp pods when CPU utilization crosses the 50% threshold, as configured, and automatically scales the pods back in when the load subsides.
    # Run the following from first terminal
    # Replace AWS Global Accelerator DNS name
    ab -c 50 -n 100 -t 300 xxyyzz.awsglobalaccelerator.com/products/fashion/
    
    # Run the following from second terminal
    kubectl get hpa -n retailapp -w
    # Press Ctrl+C to exit (after 5 minutes)
    
    # Output should look like below from first terminal
    This is ApacheBench, Version 2.3 <$Revision: 1901567 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
     Licensed to The Apache Software Foundation, http://www.apache.org/
     Benchmarking a9d3278dd00f99a35.awsglobalaccelerator.com (be patient)
    Completed 5000 requests
    Completed 10000 requests
    ....
    
    # Output should look like below from second terminal
    NAME     REFERENCE           TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    webapp   Deployment/webapp   2%/50%    2         20        2          21m
    webapp   Deployment/webapp   63%/50%   2         20        2          23m
    webapp   Deployment/webapp   305%/50%   2         20        3          23m
    webapp   Deployment/webapp   294%/50%   2         20        6          23m
    webapp   Deployment/webapp   25%/50%    2         20        12         23m
    webapp   Deployment/webapp   89%/50%    2         20        13         24m
    webapp   Deployment/webapp   171%/50%   2         20        13         24m
    ...
    webapp   Deployment/webapp   2%/50%     2         20        13         29m
    webapp   Deployment/webapp   2%/50%     2         20        2          29m

     In this example, HPA scaled the webapp pods out from 2 to 13 as load on the retail application increased, and automatically scaled them back in to 2 when the load subsided. To automatically scale your database for high-load scenarios, you can also use Aurora features such as Amazon Aurora Serverless v2 for on-demand, automatic vertical scaling, and Aurora replica Auto Scaling to scale out Amazon Aurora reader nodes as application load on the database increases.
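
As an example, Aurora replica Auto Scaling can be configured through the Application Auto Scaling API. The following is a minimal sketch that uses this post's cluster identifier adbtest, an illustrative replica range of 1 to 4, and an illustrative 60% average reader CPU target; run it in the Region of the cluster you want to scale:

# Register the cluster's replica count as a scalable target (replica range 1-4 is illustrative)
aws application-autoscaling register-scalable-target \
    --service-namespace rds \
    --resource-id cluster:adbtest \
    --scalable-dimension rds:cluster:ReadReplicaCount \
    --min-capacity 1 --max-capacity 4
# Add or remove Aurora replicas to keep average reader CPU utilization near 60%
aws application-autoscaling put-scaling-policy \
    --service-namespace rds \
    --resource-id cluster:adbtest \
    --scalable-dimension rds:cluster:ReadReplicaCount \
    --policy-name aurora-reader-cpu-target \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration \
    '{"TargetValue": 60.0, "PredefinedMetricSpecification": {"PredefinedMetricType": "RDSReaderAverageCPUUtilization"}}'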

Database cluster cross-Region failover test

Next, we perform a managed planned failover of the Amazon Aurora global database to fail over the database cluster from us-east-2 to the us-west-2 Region. Following the failover, the application should continue to work and be able to create and retrieve new orders.

  1. Run the following in your AWS Cloud9 terminal in us-east-2 to perform an Amazon Aurora global database failover:
     # Identify the ARN of the secondary (non-writer) cluster; this is the failover target
     FAILARN=$(aws rds describe-global-clusters --query 'GlobalClusters[?(GlobalClusterIdentifier == `agdbtest`)].GlobalClusterMembers[]' | jq '.[] | select(.IsWriter == false) | .DBClusterArn'| sed -e 's/"//g')
     # Identify the ARN of the current primary (writer) cluster and extract its Region
     PRIMARN=$(aws rds describe-global-clusters --query 'GlobalClusters[?(GlobalClusterIdentifier == `agdbtest`)].GlobalClusterMembers[]' | jq '.[] | select(.IsWriter == true) | .DBClusterArn'| sed -e 's/"//g')
     PRIMREGION=`echo $PRIMARN | awk -F: '{print $4}'`
     # Start the managed planned failover of the global database agdbtest to the secondary cluster
     aws rds failover-global-cluster \
          --region $PRIMREGION \
          --global-cluster-identifier agdbtest \
          --target-db-cluster-identifier ${FAILARN}
    
    # Output should look like below
    {
       "GlobalCluster": {
       "GlobalClusterIdentifier": "agdbtest",
       "GlobalClusterResourceId": "cluster-433c4f5b8c6d365b",
       "GlobalClusterArn": "arn:aws:rds::xxxxxxx:global-cluster:agdbtest",
       "Status": "failing-over",
       "Engine": "aurora-postgresql",
       .....
       "FailoverState": {
          "Status": "pending",
          "FromDbClusterArn": "arn:aws:rds:us-east-2:xxxxxx:cluster:adbtest",
          "ToDbClusterArn": "arn:aws:rds:us-west-2:xxxxxx:cluster:adbtest"
         }
       }
    }

     Next, make sure that the global database failover has completed successfully by checking the status.

  2. Run the following in your AWS Cloud9 terminal in us-east-2 to check the database events:
    aws rds describe-events --source-identifier adbtest --source-type db-cluster --query 'Events[?(EventCategories == [`global-failover`])]'
    
    # Output should look like below
    [
      {
       "SourceIdentifier": "adbtest",
       "SourceType": "db-cluster",
       "Message": "Global failover to DB cluster adbtest in Region us-west-2 started.",
       "EventCategories": [
         "global-failover"
        ],
       "SourceArn": "arn:aws:rds:us-east-2:xxxxxx:cluster:adbtest"
       },
       ........
      {
       "SourceIdentifier": "adbtest",
       "SourceType": "db-cluster",
       "Message": "Global failover to DB cluster adbtest in Region us-west-2 finished.",
       "EventCategories": [
         "global-failover"
        ],
       "SourceArn": "arn:aws:rds:us-east-2:xxxxxxx:cluster:adbtest"
      }
    ]
  3. Next, confirm that the PgBouncer configuration in Amazon EKS in both Regions has been synchronized by the event rule and Lambda function. You should see the Amazon Aurora cluster endpoint from the us-west-2 Region as the host entry for the gdbdemo database in pgbouncer.ini. Run the following in your AWS Cloud9 terminal in us-east-2 to get the current cluster endpoint (writer node):
    aws rds describe-global-clusters --query 'GlobalClusters[?(GlobalClusterIdentifier == `agdbtest`)].GlobalClusterMembers[]' | jq '.[] | select(.IsWriter == true) | .DBClusterArn'
    
    # Output should look like below
    "arn:aws:rds:us-west-2:xxxxxx:cluster:adbtest"
  4. Run the following in your AWS Cloud9 terminal in both us-east-2 and us-west-2 to check the PgBouncer configuration (use the AWS Global Accelerator DNS name from Step 1 for the /apiproduct API call):
    # on region us-east-2, us-west-2
    # Replace AWS Global Accelerator DNS name
    kubectl exec -ti deployment/pgbouncer-deployment -n retailapp -- egrep ^gdbdemo /etc/pgbouncer/pgbouncer.ini
    curl xxyyzz.awsglobalaccelerator.com/apiproduct
    
    # Output should look like below on region us-east-2
    gdbdemo = host=adbtest.cluster-cg2psgfrllkh.us-west-2.rds.amazonaws.com port=5432 dbname=eksgdbdemo
    gdbdemo-ro = host=adbtest.cluster-ro-cqy9igkqggyn.us-east-2.rds.amazonaws.com port=5432 dbname=eksgdbdemo
    {
       "Aurora": {
         "reader": {
            "inet_server_addr": "10.40.30.164"
           },
         "writer": {
           "inet_server_addr": "10.50.40.11" <---- Aurora Writer in region us-west-2
          }
       },
       "Lab": "DAT312 Workshop",
       "instanceId": "i-077460000094a5578",
       "region": "us-east-2"
    }
    # Output should look like below on region us-west-2
    gdbdemo = host=adbtest.cluster-cg2psgfrllkh.us-west-2.rds.amazonaws.com port=5432 dbname=eksgdbdemo
    gdbdemo-ro = host=adbtest.cluster-ro-cg2psgfrllkh.us-west-2.rds.amazonaws.com port=5432 dbname=eksgdbdemo
    {
      "Aurora": {
        "reader": {
           "inet_server_addr": "10.50.40.11"
         },
         "writer": {
            "inet_server_addr": "10.50.40.11" <---- Aurora Writer in region us-west-2
         }
       },
       "Lab": "DAT312 Workshop",
       "instanceId": "i-074b9e517a55e9cd4",
       "region": "us-west-2"
    }
  5. Run the following in your AWS Cloud9 terminal in both Regions to confirm that the application continues to work following the role transition and can retrieve product information and create new orders:
    kubectl exec -ti deployment/product-deployment -n retailapp -- curl http://product.retailapp.svc.cluster.local:8444/products/view?id=2
    kubectl exec -ti deployment/webapp -n retailapp -- bash
    curl --request POST --header "Content-type: application/json" --data '{"email": "test1@test1.com", "items": [{"order_id":100, "item_id": 2, "qty": 1, "unit_price": 42.95}]}' http://order.retailapp.svc.cluster.local:8448/order/add
    exit
    
    # Output should look like below
    {
      "order_details": {
      "email": "test1@test1.com",
      "items": [
        {
        "item_id": 2,
        "order_id": 100,
        "qty": 1,
        "unit_price": 42.95
        }
       ],
       "order_id": 34
       },
      "status_code": 200,
      "title": "Orders"
    }

     In this example, the PgBouncer configuration was automatically synchronized with the new Amazon Aurora primary cluster endpoint following the Aurora global database failover. The database cluster role transition was transparent to our application, and all API calls to the retail application continue to work after the failover.
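
Conceptually, the synchronization performed by the AuroraGDBPgbouncerUpdate function after a failover boils down to the following sketch. This is not the actual function code; it only illustrates how the new writer endpoint can be resolved before the gdbdemo host entry in pgbouncer.ini is rewritten and PgBouncer is reloaded:

# Find the cluster that is now the writer of the global database agdbtest
WRITER_ARN=$(aws rds describe-global-clusters \
    --query 'GlobalClusters[?(GlobalClusterIdentifier == `agdbtest`)].GlobalClusterMembers[]' \
    | jq -r '.[] | select(.IsWriter == true) | .DBClusterArn')
WRITER_REGION=$(echo "$WRITER_ARN" | awk -F: '{print $4}')
# Resolve that cluster's writer endpoint in its home Region
WRITER_ENDPOINT=$(aws rds describe-db-clusters --region "$WRITER_REGION" \
    --db-cluster-identifier adbtest \
    --query 'DBClusters[0].Endpoint' --output text)
echo "gdbdemo should now point at: ${WRITER_ENDPOINT}"
# The Lambda function then rewrites the gdbdemo host entry in pgbouncer.ini with this
# endpoint and reloads PgBouncer so that writes flow to the new primary Region.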

Cleanup

To clean up your resources, run the following in your AWS Cloud9 terminal in the us-east-2 and us-west-2 Regions:

kubectl delete ingress,services,deployments,statefulsets -n retailapp --all
kubectl delete ns retailapp --cascade=background
export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
eksctl delete iamserviceaccount --name aws-load-balancer-controller --cluster eksclu --namespace kube-system --region ${AWS_REGION}
eksctl delete cluster --name eksclu -r ${AWS_REGION} --force -w

Run the following from AWS CloudShell in the us-west-2 Region:

export AWS_ACCESS_KEY_ID=<key id> 
export AWS_SECRET_ACCESS_KEY=<access key> 
export AWS_DEFAULT_REGION=us-west-2 
# Set REGION1 to source Region
# Set REGION2 to target Region
export REGION1=us-east-2
export REGION2=us-west-2
bash ./cleanup.sh

Conclusion

In Part 1 of this series, you learned the architecture patterns and foundational pillars of a multi-Region application design. In this post, you learned how to do the following:

  • Run and scale your applications in multiple Regions using Amazon EKS clusters and Aurora Global Database
  • Improve multi-Region application resiliency by using AWS Global Accelerator health checks to route traffic to the Region closest to your end users, and to detect failures and automatically route traffic to a failover Region
  • Implement automatic configuration synchronization of a PgBouncer database connection pooler using Amazon EventBridge and event rules for Amazon Aurora global database failover events
  • Ensure your application can transparently and automatically handle a managed planned Aurora global database cross-Region failover

We welcome your feedback; leave your comments or questions in the comments section.

About the Authors

Krishna Sarabu is a Senior Database Specialist Solutions Architect with Amazon Web Services. He works with the Amazon RDS team, focusing on the open-source database engines Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL. He has over 20 years of experience managing commercial and open-source database solutions in the financial industry. He enjoys working with customers to help design, deploy, and optimize relational database workloads on AWS.

Chirag Dave is a Senior Database Specialist Solutions Architect with Amazon Web Services, focusing on managed PostgreSQL. He maintains technical relationships with customers, making recommendations on security, cost, performance, reliability, operational efficiency, and best practice architectures.

Raj Jayakrishnan is a Senior Database Specialist Solutions Architect with Amazon Web Services, helping customers reinvent their business through the use of purpose-built database cloud solutions. He has over 20 years of experience architecting commercial and open-source database solutions in the financial and logistics industries.