
Effortless application modernization: migrate to Amazon EKS with existing NLB setup

This post was co-authored by Henrique Santana, Container Specialist, AWS and Luis Felipe, Principal Solutions Architect, AWS.

Introduction

Many organizations have built their infrastructure using Amazon Elastic Compute Cloud (Amazon EC2) and Network Load Balancer (NLB), often with security policies built around the NLB’s static IP addresses. As these organizations adopt containerization and move to Amazon Elastic Kubernetes Service (Amazon EKS) for their modern applications, they face a significant challenge with preserving their existing endpoint configurations. This can make the modernization complex and risky, because changing the load balancer setup may disrupt client connections or necessitate major DNS changes.

The good news is that this transition doesn’t have to be an all-or-nothing endeavor. In this post, we explore a hybrid deployment pattern that provides a smooth, low-risk migration path from Amazon EC2 to Amazon EKS. Using TargetGroupBinding from the AWS Load Balancer Controller allows organizations to gradually shift traffic from EC2 instances to Amazon EKS pods while maintaining their existing NLB static IP addresses. This approach not only preserves the static IP requirement but also provides a pragmatic path to modernization, letting organizations validate their containerized workloads without disrupting existing client connections or necessitating DNS changes.

In this post, we dive into how you can implement this pattern and modernize your infrastructure while maintaining business continuity.

Hybrid deployments

Similar to blue/green deployment patterns, our hybrid deployment approach supports running applications simultaneously on Amazon EKS and Amazon EC2 during the migration process. The key to this strategy is the ability to route traffic through your existing load balancer to both deployments using TargetGroupBinding, resulting in a controlled migration process.

TargetGroupBinding uses Kubernetes Custom Resources (CR) to automatically manage the relationship between your containerized workloads and target groups. Creating a new target group for your containerized workload allows you to maintain independent control over traffic distribution while using your existing NLB infrastructure.

The following figure shows this hybrid architecture. The green pathway represents the current traffic flow to EC2 instances or Amazon Elastic Container Service (Amazon ECS), while the blue pathway shows how traffic can be directed to the new Amazon EKS workloads through the same NLB:

Figure 1: Traffic flow diagram showing migration from Amazon EC2 (green) to Amazon EKS (blue)


This post focuses on Amazon EC2 to Amazon EKS migration. However, this pattern is versatile. The same approach can be applied when migrating from other deployment architectures, including non-containerized, on-premises, or Amazon ECS. Moreover, this pattern proves valuable when upgrading existing Kubernetes clusters to newer versions, offering the same controlled migration benefits.

Advantages of using the hybrid deployment approach

The hybrid deployment approach offers the following advantages:

  • Controlled migration: Route traffic gradually to the Amazon EKS workloads while maintaining service through your existing Amazon EC2 infrastructure, significantly reducing migration risk.
  • Simple rollback: Quickly redirect traffic back to EC2 instances if issues arise, providing a reliable rollback during migration.
  • A/B testing: Compare performance between Amazon EC2 and Amazon EKS deployments in real-world conditions, enabling data-driven decisions about the most effective configuration and resource allocation.
  • Flexibility: Use the strengths of both deployment environments during transition, helping optimize workload placement based on specific requirements.
  • Minimized service interruption: Reduce the risk of downtime by operating both environments simultaneously during migration.
  • Risk mitigation: Validate containerized deployments with real traffic while maintaining the fallback option, providing business continuity.

Prerequisites for the migration

This post assumes you have a basic understanding of Docker containerization and Kubernetes concepts (pods, events, namespaces, deployments, and so on).

Before beginning the migration process, make sure you have the following components ready:

  • An existing NLB serving traffic to EC2 instances through a target group
  • An Amazon EKS cluster with the AWS Load Balancer Controller installed and configured
  • The AWS CLI and kubectl configured with access to your AWS account and cluster
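One quick way to verify the controller prerequisite is to query its deployment. A minimal sketch, assuming the controller was installed under its default name in the kube-system namespace (the helper name is ours):

```shell
# Hypothetical pre-flight check: prints the AWS Load Balancer Controller
# deployment line if it exists; empty output means it isn't installed.
check_lbc_installed() {
    kubectl get deployment -n kube-system aws-load-balancer-controller \
        --no-headers 2>/dev/null
}
```

If the function prints nothing, install and configure the controller before continuing.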

Migrating your application using hybrid deployments with NLB

We have an application running on EC2 instances managed by an Amazon EC2 Auto Scaling group. Our target environment is an Amazon EKS cluster where we’ve already installed and configured the AWS Load Balancer Controller. To migrate, we take a slightly different approach from the conventional pattern: instead of migrating using new load balancers or making significant infrastructure changes, we implement a controlled migration that maintains the existing NLB while gradually shifting traffic to the Amazon EKS cluster. This approach preserves current client connections, minimizes service disruption, and maintains high availability throughout the transition.

To achieve this, rather than using the standard Kubernetes LoadBalancer Service type, we take a more controlled approach using manually created target groups and TargetGroupBinding, a CR provided by the AWS Load Balancer Controller. This allows you to manage target groups independently and gives finer-grained control over how traffic is routed to the applications, which is valuable when integrating with existing infrastructure. It offers the flexibility needed for complex migration scenarios where maintaining specific load balancer configurations is crucial: you get precise control over target group configurations while keeping the existing load balancer resources in place for a seamless migration.

A crucial detail when working with NLB is connection handling during target group changes. When modifying target groups, new connections are routed to the new targets while existing connections remain with their original targets. To illustrate this, consider it similar to a restaurant changing shifts: new customers (connections) are directed to the new team, but existing customers (established connections) continue with their original servers until they naturally complete their session. This behavior persists even if Connection termination on deregistration is enabled on the target group, because the ModifyListener operation doesn’t invoke the DeregisterTargets operation. Understanding this behavior is essential for planning smooth application transitions, and it is particularly important for long-lived TCP connections or UDP workloads.
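One way to observe this draining behavior is to watch the NLB’s ActiveFlowCount CloudWatch metric while targets change. A hedged sketch (the load balancer dimension value, the net/&lt;name&gt;/&lt;id&gt; suffix of the NLB ARN, is a placeholder, and the date flags assume GNU coreutils):

```shell
# Sketch: print recent ActiveFlowCount datapoints for an NLB so you can watch
# established flows drain after a target-group change. GNU date assumed.
active_flows() {
    local lb_dimension=$1   # e.g. net/my-nlb/1234567890abcdef (placeholder)
    aws cloudwatch get-metric-statistics \
        --namespace AWS/NetworkELB \
        --metric-name ActiveFlowCount \
        --dimensions Name=LoadBalancer,Value="$lb_dimension" \
        --start-time "$(date -u -d '10 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
        --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
        --period 60 --statistics Average \
        --query 'Datapoints[*].Average' --output text
}
```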

Step-by-step migration

The first step is to create a new target group:

TARGET_GROUP_ARN=$(aws elbv2 create-target-group \
    --name eks-green \
    --protocol TCP \
    --port 80 \
    --target-type ip \
    --vpc-id <VPCID> \
    --query 'TargetGroups[0].TargetGroupArn' \
    --output text)
Bash

The target group Amazon Resource Name (ARN) is saved into the TARGET_GROUP_ARN variable. Then, verify that both target groups (Amazon EC2 and Amazon EKS) have Connection termination on deregistration set to true.

aws elbv2 describe-target-group-attributes \
    --target-group-arn $TARGET_GROUP_ARN \
    --query 'Attributes[*]' --output text | \
    grep deregistration_delay.connection_termination.enabled
Bash

Observe the following output:

deregistration_delay.connection_termination.enabled true
Bash

If the command returns false, the attribute isn’t enabled. Enable it with the following command:

aws elbv2 modify-target-group-attributes \
    --target-group-arn $TARGET_GROUP_ARN \
    --attributes 'Key=deregistration_delay.connection_termination.enabled,Value=true' \
    --query 'Attributes[*]' --output text | \
    grep deregistration_delay.connection_termination.enabled
Bash

At this stage, the traffic is still directed to the application running on Amazon EC2. You can now deploy the containerized application on the Amazon EKS cluster:

kubectl create deployment myapp \
    --image public.ecr.aws/aws-containers/retail-store-sample-ui:0.8.1 \
    --replicas 2 --port=8080
Bash
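Before exposing the application, you can confirm the deployment has finished rolling out. A small sketch (the helper name is ours):

```shell
# Hypothetical helper: block until the named deployment's rollout completes,
# giving up after two minutes.
wait_for_rollout() {
    kubectl rollout status "deployment/$1" --timeout=120s
}

# Example:
# wait_for_rollout myapp
```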

We can expose the application by creating a service. Here’s the YAML file for the service:

cat << EOF > service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: ui-service
  namespace: default
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: myapp
  type: ClusterIP
EOF
Bash

We can apply it:

kubectl apply -f service.yaml
Bash
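Before binding this Service to a target group, it’s worth confirming that its selector actually matches ready pods. A minimal sketch (the helper name is ours):

```shell
# Hypothetical helper: print the pod IPs backing a Service; empty output means
# the selector matched no ready pods.
service_endpoints() {
    kubectl get endpoints "$1" -n default \
        -o jsonpath='{.subsets[*].addresses[*].ip}'
}

# Example:
# service_endpoints ui-service
```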

With the application running and exposed as a Kubernetes service, create a new listener for the existing NLB:

aws elbv2 create-listener \
    --load-balancer-arn <NLB-ARN> \
    --protocol TCP --port 81 \
    --default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN
Bash

You currently have a listener on port 80, which continues forwarding traffic to the target group linked with the EC2 instances. The newly created listener forwards traffic to the target group linked with Amazon EKS. That target group has no targets until you use a TargetGroupBinding to bind the service to it. Create the binding as follows:

cat << EOF > tg-binding.yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: ui-tgbinding
spec:
  serviceRef:
    name: ui-service # Service name
    port: 80 # Service port
  targetGroupARN: $TARGET_GROUP_ARN
EOF
Bash

Apply the manifest:

kubectl apply -f tg-binding.yaml
Bash

You can verify that the targets are properly registered to the Amazon EKS target group. Give the targets sufficient time to pass health checks, then verify that the new targets can successfully serve traffic. If the health checks fail, then verify that the Amazon EKS node Security Group allows the necessary traffic. To confirm proper functionality, try accessing the application through the new listener to make sure it’s being served as intended before proceeding further.
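The health-check verification above can be scripted with a JMESPath filter on describe-target-health. A sketch (the helper name is ours; pass the target group ARN saved earlier):

```shell
# Sketch: count how many targets in a target group are currently healthy.
healthy_count() {
    aws elbv2 describe-target-health \
        --target-group-arn "$1" \
        --query "length(TargetHealthDescriptions[?TargetHealth.State=='healthy'])" \
        --output text
}

# Example:
# healthy_count "$TARGET_GROUP_ARN"
```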

Before starting the migration, you need to create another listener pointing to the existing Amazon EC2 target group:

aws elbv2 create-listener \
    --load-balancer-arn <NLB-ARN> \
    --protocol TCP --port 82 \
    --default-actions Type=forward,TargetGroupArn=<EC2-TARGET-GROUP-ARN>
Bash

After this step, the NLB has three listeners (ports 80, 81, and 82); you created the two additional listeners to support a smooth traffic migration. The port numbers used in this post are examples, and you can choose port numbers that reflect your application needs. We recommend testing that traffic to the existing target groups is served correctly through the new listeners. This confirms that the NLB configuration is ready for the next steps.
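That listener test can be scripted with a simple probe. A minimal sketch (the NLB DNS name in the example is a placeholder, and it assumes the application speaks HTTP):

```shell
# Sketch: print the HTTP status code returned through a given NLB listener.
listener_status() {
    local host=$1 port=$2
    # -s: silent; -o /dev/null: discard body; -w: print only the status code
    curl -s -o /dev/null -w '%{http_code}' "http://${host}:${port}/"
}

# Example:
# listener_status my-nlb-1234.elb.us-east-1.amazonaws.com 81   # EKS targets
# listener_status my-nlb-1234.elb.us-east-1.amazonaws.com 82   # EC2 targets
```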

Migrating traffic from Amazon EC2 to the Amazon EKS target group

Both target groups are healthy, and all listeners are ready to process traffic, so it’s time to migrate the traffic. First, configure the existing listener on port 80 to send traffic to the eks-green target group.

aws elbv2 modify-listener \
    --listener-arn <NLB-Listen-80> \
    --default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN
Bash

This configuration change makes sure that all new flows are directed to the new target group. Because the targets already passed their health checks while associated with the port 81 listener, the cutover completes quickly.

This listener change can take a few minutes to propagate, so we suggest monitoring the traffic going to the targets in the new target group before proceeding to the next steps. During this time, it’s important to monitor your application’s behavior and performance metrics. You can monitor the target health status using the following command:

aws elbv2 describe-target-health \
    --target-group-arn $TARGET_GROUP_ARN
Bash

Although existing connections remain unaffected, new connections stop being routed to the old target group after the traffic migration completes. After confirming that the new targets are successfully handling traffic and the previously existing targets are no longer serving traffic, retrieve the list of targets from the old Amazon EC2 target group to prepare for deregistration:

aws elbv2 describe-target-health \
    --target-group-arn <EC2-TARGET-GROUP-ARN> \
    --query 'TargetHealthDescriptions[*].Target.Id' \
    --output text
Bash

This command outputs a list of instance IDs. For each instance ID, run the deregister-targets command:

aws elbv2 deregister-targets \
    --target-group-arn <EC2-TARGET-GROUP-ARN> \
    --targets Id=<InstanceID>
Bash
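The describe and deregister steps above can be combined into a single loop. A sketch (the function name is ours; the target group ARN is a placeholder argument):

```shell
# Sketch: deregister every instance still registered in the old EC2 target group.
deregister_all_targets() {
    local tg_arn=$1
    for id in $(aws elbv2 describe-target-health \
            --target-group-arn "$tg_arn" \
            --query 'TargetHealthDescriptions[*].Target.Id' \
            --output text); do
        echo "Deregistering ${id}"
        aws elbv2 deregister-targets \
            --target-group-arn "$tg_arn" \
            --targets "Id=${id}"
    done
}
```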

The deregistration step is important because the Amazon EC2 target group remains associated with the NLB through the port 82 listener. The deregister-targets call enforces Connection termination on deregistration, which terminates existing connections to the old targets. When clients reconnect, they are routed to the targets in the new Amazon EKS target group.

Rollback procedure to the original target group (if needed)

If you need to roll back to the original Amazon EC2 target group, follow these steps:

  1. Register all original targets again to the Amazon EC2 target group:
aws elbv2 register-targets \
    --target-group-arn <EC2-TARGET-GROUP-ARN> \
    --targets Id=<InstanceID1> Id=<InstanceID2>
Bash
  2. Wait for the targets to pass health checks. Monitor their health status:
aws elbv2 describe-target-health \
    --target-group-arn <EC2-TARGET-GROUP-ARN>
Bash
  3. Reconfigure the listener on port 80 on the NLB to send traffic to the Amazon EC2 target group:

aws elbv2 modify-listener \
    --listener-arn <NLB-Listen-80> \
    --default-actions Type=forward,TargetGroupArn=<EC2-TARGET-GROUP-ARN>
Bash

Post-migration cleanup

If the migration has been successful (no rollback needed), then you can proceed with cleanup operations. This includes removing the two additional listeners that you created on ports 81 and 82, because they were only needed for the migration process. Finally, you can safely delete the Amazon EC2 target group, because it no longer receives any traffic.

aws elbv2 delete-listener \
     --listener-arn <Listen-Port81-ARN>

aws elbv2 delete-listener \
     --listener-arn <Listen-Port82-ARN>

aws elbv2 delete-target-group \
    --target-group-arn <EC2-TG-ARN>
Bash

Conclusion

The hybrid deployment pattern using NLBs and TargetGroupBinding offers a practical, low-risk approach to migrating applications to Amazon EKS. Maintaining the existing NLB configuration while gradually shifting traffic to containerized workloads allows this method to support seamless transitions and provides built-in rollback capabilities. Although we’ve focused on Amazon EC2 to Amazon EKS migrations, the pattern’s versatility extends to other scenarios, including transitions from on-premises infrastructure or other container orchestration solutions.

Recent enhancements to the AWS Load Balancer Controller, particularly the introduction of MultiCluster target groups, further expand these capabilities. Organizations can now manage workloads across multiple Kubernetes clusters and integrate with non-cluster resources, facilitating more sophisticated migration strategies and distributed application architectures. This hybrid approach serves as a reliable blueprint for modernization, providing the tools needed to maintain business continuity and minimize risk while offering flexibility to adapt to evolving infrastructure requirements.
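The MultiCluster capability mentioned above is configured on the TargetGroupBinding resource itself. A hedged sketch (the multiClusterTargetGroup field name reflects recent controller releases; verify it against the version you have installed):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: ui-tgbinding
spec:
  serviceRef:
    name: ui-service
    port: 80
  targetGroupARN: <TARGET-GROUP-ARN>
  multiClusterTargetGroup: true # share this target group across clusters
```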

To further your migration journey, we recommend reading our companion post: Migrating from self-managed Kubernetes to Amazon EKS: Here are some key considerations. This post provides further insights and best practices that complement the hybrid deployment strategy discussed here, especially if you’re moving from a self-managed Kubernetes environment to Amazon EKS.