AWS for Industries
Deployment patterns: AWS Network Load Balancer with Amazon EKS for Telco workloads
As Communication Service Providers (CSPs) increasingly adopt cloud-native architectures, performant, reliable, and secure network load balancing becomes essential. Amazon Web Services (AWS) offers services and capabilities to address these challenges. AWS Network Load Balancer (NLB) stands out with its ability to handle high throughput and low latency requirements, and its architecture and design provide robust high-availability features. This post walks you through deploying services in Amazon Elastic Kubernetes Service (Amazon EKS) using NLB. Whether you are upgrading legacy applications such as Session Initiation Protocol (SIP) applications, Online Charging Systems (OCS), and Network Management Systems (NMS), or creating new microservices, NLB can handle low latency and fluctuating traffic patterns with ease.
We explore four deployment patterns that use the AWS Load Balancer Controller and advanced features such as TargetGroupBinding. These patterns optimize service exposure, reduce infrastructure complexity, and lower operational costs. Furthermore, we discuss how network segmentation and VPC Endpoints powered by AWS PrivateLink can be used to maintain traffic separation to enhance security and flexibility in telco cloud environments.
In this guide we demonstrate how to implement these strategies so that your telco workloads on Amazon EKS are performant, reliable, and secure. Whether you modernize legacy applications or build new microservices, these four patterns can help you achieve a streamlined and efficient network architecture.
Getting started
You need a command line environment to create the AWS resources. We recommend AWS CloudShell, or you can use the AWS Command Line Interface (AWS CLI) locally. You need the AWS CLI, eksctl, and kubectl. To install the AWS CLI, follow this user guide. To install eksctl and kubectl, follow these instructions.
Use the following command to create an EKS cluster. This creates a VPC with two private and two public subnets, Amazon Elastic Compute Cloud (Amazon EC2) instances (worker nodes), and an EKS cluster. Resource creation takes a couple of minutes.
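As a minimal sketch (the cluster name, Region, node type, and node count below are illustrative placeholders, not values from the original post), the cluster creation could look like:

```shell
# Create an EKS cluster. eksctl provisions a dedicated VPC with two public
# and two private subnets, a managed node group of EC2 worker nodes, and the
# cluster control plane. All names and sizes here are assumptions.
eksctl create cluster \
  --name nlb-demo-cluster \
  --region us-east-1 \
  --nodes 2 \
  --node-type t3.medium
```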
With the EKS cluster in place, we can explore the four patterns for deploying an NLB with an EKS cluster.
Pattern 1: One NLB instance per Kubernetes service
The following figure shows the regular pattern with NLB and Amazon EKS. The AWS Load Balancer Controller associates an NLB to a Kubernetes service in an EKS cluster. The controller automatically provisions the NLB with a listener and matching targetgroup with target type “instance.”
Figure 1: NLB with single listener port/service
Deployment
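A sketch of this deployment, assuming a simple nginx workload (the names, image, and ports are placeholders): a Service of type LoadBalancer carries annotations that instruct the AWS Load Balancer Controller to provision an NLB with instance targets.

```shell
# Deploy a sample app and expose it through an NLB managed by the
# AWS Load Balancer Controller (assumed to be installed in the cluster).
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-p1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-p1
  template:
    metadata:
      labels:
        app: nginx-p1
    spec:
      containers:
      - name: nginx
        image: public.ecr.aws/nginx/nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-p1-svc
  annotations:
    # These annotations hand the Service to the AWS Load Balancer Controller,
    # which provisions an NLB with a listener and an instance-type targetgroup.
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: nginx-p1
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
EOF

# The NLB FQDN appears under EXTERNAL-IP once provisioning completes.
kubectl get svc nginx-p1-svc
```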
Expected behavior
- DNS resolution on the NLB Fully Qualified Domain Name (FQDN) shows the NLB Elastic Network Interface (ENI) IP addresses (the number depends on how many subnets/Availability Zones (AZs) you attached the NLB to).
- By default, the source IP addresses hitting the pods are the NLB ENI IP addresses.
- Enabling the “Preserve client IP addresses” attribute on a targetgroup means that the source IP address hitting the pods is the actual client IP address instead of an NLB ENI IP address. Enable this when the application needs to observe actual client IP addresses.
- On the client side, the response IP address is always an NLB ENI IP address.
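The client IP preservation attribute described above can be toggled per targetgroup. A sketch with a placeholder ARN:

```shell
# Enable client IP preservation on a targetgroup (the ARN is a placeholder;
# substitute the ARN of the targetgroup the controller created).
aws elbv2 modify-target-group-attributes \
  --target-group-arn "$TG_ARN" \
  --attributes Key=preserve_client_ip.enabled,Value=true
```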
Pattern 2: One NLB instance for multiple Kubernetes services (using NodePort Type and instance target type)
With Pattern 1, each service needs a dedicated NLB. With Pattern 2, multiple Kubernetes services can share a single NLB, as shown in the following figure. This pattern minimizes the number of NLBs needed to expose multiple services in an application. The approach reduces costs and operational overhead. The pattern uses the AWS Load Balancer Controller TargetGroupBinding feature. TargetGroupBinding is a custom resource that exposes Kubernetes pods through an NLB that already exists or isn’t managed by Kubernetes annotations. With this pattern, the NLB is deployed independently of the Kubernetes cluster and multiple targetgroups are created under the NLB. Each targetgroup then “binds” to a service.
Figure 2: NLB with multiple listener ports/service
Deployment
1. Deploy sample applications
example nginx app:
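A minimal sketch of such an app (the name, labels, and image are placeholder assumptions):

```shell
# Deploy a sample nginx app that the NodePort services will select.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: public.ecr.aws/nginx/nginx:latest
        ports:
        - containerPort: 80
EOF
```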
2. Create multiple services with type NodePort
Important: When you deploy multiple Kubernetes services, the port and NodePort must be unique per service. The listener port of the NLB corresponds to the service port (for example, 8081 and 8082). Verify the service:
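A sketch of two NodePort services selecting the sample app (service names, ports, and NodePorts are illustrative assumptions):

```shell
# Two NodePort services with unique port/NodePort pairs (8081/30081, 8082/30082).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: svc1
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - port: 8081
    targetPort: 80
    nodePort: 30081
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: svc2
spec:
  type: NodePort
  selector:
    app: nginx-app
  ports:
  - port: 8082
    targetPort: 80
    nodePort: 30082
    protocol: TCP
EOF

# Confirm the assigned ports and NodePorts.
kubectl get svc svc1 svc2
```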
3. Create an NLB outside of Amazon EKS. Choose two subnets on the VPC to create the NLB. This VPC is the same VPC where you have the EKS cluster. Unlike the previous pattern, here we are using the manual creation of NLB.
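A sketch of the manual NLB creation (the name, scheme, and subnet IDs are placeholders; use two subnets from the EKS cluster's VPC):

```shell
# Create an NLB manually in the same VPC as the EKS cluster and capture its ARN.
NLB_ARN=$(aws elbv2 create-load-balancer \
  --name pattern2-nlb \
  --type network \
  --scheme internal \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)
```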
4. Create a listener and a targetgroup per service. Note the port number (listener port), which should match the Kubernetes service details that you created previously.
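The targetgroup creation might look like the following (targetgroup names and the VPC ID are placeholders; the ports match the service ports defined earlier):

```shell
# One targetgroup per service, with target type "instance" for NodePort services.
aws elbv2 create-target-group --name tg-svc1 --protocol TCP --port 8081 \
  --vpc-id vpc-0123456789abcdef0 --target-type instance
aws elbv2 create-target-group --name tg-svc2 --protocol TCP --port 8082 \
  --vpc-id vpc-0123456789abcdef0 --target-type instance
```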
Get the Amazon Resource Name (ARN) of each targetgroup:
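Assuming the placeholder targetgroup names above, the ARNs can be captured like this:

```shell
TG1_ARN=$(aws elbv2 describe-target-groups --names tg-svc1 \
  --query 'TargetGroups[0].TargetGroupArn' --output text)
TG2_ARN=$(aws elbv2 describe-target-groups --names tg-svc2 \
  --query 'TargetGroups[0].TargetGroupArn' --output text)
```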
Register the targets (Amazon EC2 workers) with the corresponding instance IDs of the workers. You need to get the instance IDs of the workers.
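A sketch of looking up the worker instance IDs and registering them (the tag filter assumes an eksctl-created cluster; instance IDs and NodePorts are placeholders matching the services above):

```shell
# List worker node instance IDs; adjust the tag filter to your cluster's tags.
aws ec2 describe-instances \
  --filters "Name=tag:alpha.eksctl.io/cluster-name,Values=nlb-demo-cluster" \
  --query 'Reservations[].Instances[].InstanceId' --output text

# Register each worker on the NodePort of the corresponding service.
aws elbv2 register-targets --target-group-arn "$TG1_ARN" \
  --targets Id=i-0aaa1111,Port=30081 Id=i-0bbb2222,Port=30081
aws elbv2 register-targets --target-group-arn "$TG2_ARN" \
  --targets Id=i-0aaa1111,Port=30082 Id=i-0bbb2222,Port=30082
```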
Create a listener for each targetgroup. Note the listener port which should match the port that you defined on the targetgroup.
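The listener creation could look like the following, with each listener port matching the corresponding service port:

```shell
aws elbv2 create-listener --load-balancer-arn "$NLB_ARN" \
  --protocol TCP --port 8081 \
  --default-actions Type=forward,TargetGroupArn="$TG1_ARN"
aws elbv2 create-listener --load-balancer-arn "$NLB_ARN" \
  --protocol TCP --port 8082 \
  --default-actions Type=forward,TargetGroupArn="$TG2_ARN"
```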
5. Create and verify TargetGroupBinding CRD.
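The CRD ships with the AWS Load Balancer Controller, so verification (assuming the controller is already installed) is a simple lookup:

```shell
# Confirm the TargetGroupBinding custom resource definition is present.
kubectl get crd targetgroupbindings.elbv2.k8s.aws
```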
6. Apply TargetGroupBinding for each targetgroup. This binds the NLB targetgroup with the service that you created. Update the targetGroupARN accordingly.
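A sketch of the bindings (resource names are placeholders; the TG1_ARN/TG2_ARN variables are assumed to hold the targetgroup ARNs captured earlier):

```shell
# Bind each targetgroup to its service. The unquoted heredoc expands the
# $TG1_ARN/$TG2_ARN environment variables into the manifests.
cat <<EOF | kubectl apply -f -
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: tgb-svc1
spec:
  serviceRef:
    name: svc1
    port: 8081
  targetGroupARN: "$TG1_ARN"
  targetType: instance
---
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: tgb-svc2
spec:
  serviceRef:
    name: svc2
    port: 8082
  targetGroupARN: "$TG2_ARN"
  targetType: instance
EOF

kubectl get targetgroupbindings
```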
7. To make sure that when an Amazon EC2 worker is replaced by the Auto Scaling group, the new EC2 target is recognized by the targetgroup without redoing the TargetGroupBinding, you must attach the targetgroups to the Auto Scaling group. Get the value of your Auto Scaling group name and set it as the asg1 variable, as shown in the following.
Note: Verify that the Security Group is configured to allow traffic from your source or client IP and to allow NLB IPs for successful health checks.
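The attach step might look like the following (the name filter assumes an eksctl-managed nodegroup; adjust it to your environment):

```shell
# Look up the Auto Scaling group backing the worker nodes (filter is an assumption).
asg1=$(aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[?contains(AutoScalingGroupName, `nlb-demo-cluster`)].AutoScalingGroupName' \
  --output text)

# Attach the targetgroups so replacement instances are registered automatically.
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name "$asg1" \
  --target-group-arns "$TG1_ARN" "$TG2_ARN"
```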
Expected behavior
- The same behavior as Pattern 1. The difference is that a single NLB has multiple listeners, and each listener forwards the traffic to its own targetgroup or service.
- When a target EC2 instance is replaced by the Auto Scaling group, the TargetGroupBinding is retained and no manual configuration is needed.
Pattern 3: One NLB per Kubernetes service with VPC Endpoint
Most CSPs implement Virtual Routing and Forwarding (VRF) to create isolated network segments. VRFs enable traffic separation in a CSP network, such as signaling, diameter, management, or billing traffic. In AWS, an on-premises VRF segment can be extended as a “VRF VPC,” as described in Pattern 5 of our networking architecture series. On-premises communication to AWS for a VRF segment travels through the “VRF VPC” network.
Pattern 3 shows the use of a VPC endpoint located on a “VRF VPC” to expose an NLB created on a different VPC, as shown in the following figure. The VPC Endpoint and the NLB are connected by AWS PrivateLink, where no VPC peering or AWS Transit Gateway routing is needed. This pattern targets use cases where customers need to expose an NLB on a VRF segment for traffic segregation. Beyond the concept of “VRF VPC”, this pattern applies to applications that must expose the NLB on a VPC outside of their account, for example use cases such as Platform as a Service (PaaS).
Figure 3: NLB with VPC Endpoint
Deployment
1. Get the NLB ARN created with Pattern 1. Reuse the same deployed sample application and service.
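A sketch of the lookup (this simply grabs the first network load balancer in the account/Region; in practice, filter by the name or tags of the NLB that the controller created for your service):

```shell
NLB_ARN=$(aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[?Type==`network`] | [0].LoadBalancerArn' --output text)
```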
2. Create VPC Endpoint service for the NLB.
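The endpoint service creation could look like this (auto-acceptance is an assumption for simplicity; in production you may want acceptance required plus allowed principals):

```shell
# Create a VPC Endpoint service backed by the NLB and capture its service name.
SERVICE_NAME=$(aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns "$NLB_ARN" \
  --no-acceptance-required \
  --query 'ServiceConfiguration.ServiceName' --output text)
```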
3. Create VPC Endpoint for the endpoint service.
Get the VPC ID and subnets to create the VPC Endpoints on the “VRF VPC” (not the VPC where Amazon EKS is running). Then, create the endpoints.
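A sketch of the endpoint creation (the VPC, subnet, and security group IDs are placeholders for resources in the VRF VPC):

```shell
# Create an Interface endpoint in the VRF VPC pointing at the endpoint service.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0vrf0123456789abc \
  --vpc-endpoint-type Interface \
  --service-name "$SERVICE_NAME" \
  --subnet-ids subnet-0vrfaaaa subnet-0vrfbbbb \
  --security-group-ids sg-0123456789abcdef0
```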
Test it by invoking the p3endpoint-service-nlb DNS address.
Expected behavior
- The source address observed by the application pods is the NLB Elastic Network Interface (ENI) IPs, regardless of whether the client IP preservation attribute is enabled.
- The response IP address observed by the client application is the VPC Endpoint ENI IPs.
- AWS PrivateLink is established in the background, and no VPC Peering or Transit Gateway routing are needed between VPCs.
Pattern 4: NLB multi-segment through VPC endpoints and TargetGroupBinding (combines Pattern 2 and Pattern 3)
This pattern expands the previous pattern to a multiple VRF VPC approach. The VPC endpoint on each “VRF VPC” ties to an NLB listener port. This requires multiple listeners, using TargetGroupBinding as described in Pattern 2. This pattern suits applications that expose an NLB across multiple VRF segments while maintaining traffic separation across them, as shown in the following figure. Think of an application that is now accessible from different segments, such as signaling, diameter, and O&M, while keeping the application VPC isolated from these networks.
Figure 4: NLB with multiple VPC Endpoints
Deployment
1. Reuse the application and service deployment from Step 1 and Step 2 of Pattern 2.
2. Get the NLB ARN from Pattern 2 and use the existing target group configurations.
3. Create the VPC Endpoint service for the NLB.
4. Create the VPC Endpoint for the endpoint service.
Get the VPC ID and subnets of the “VRF VPCs” and create the endpoints. In this example we have two VRF VPCs.
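A sketch for the two VRF VPCs (all IDs are placeholders; the SERVICE_NAME variable is assumed to hold the endpoint service name created earlier):

```shell
# Create one Interface endpoint per VRF VPC, each in its own subnet.
for VPC_SUBNET in "vpc-0vrf1aaaa:subnet-0vrf1aaaa" "vpc-0vrf2bbbb:subnet-0vrf2bbbb"; do
  aws ec2 create-vpc-endpoint \
    --vpc-id "${VPC_SUBNET%%:*}" \
    --vpc-endpoint-type Interface \
    --service-name "$SERVICE_NAME" \
    --subnet-ids "${VPC_SUBNET##*:}" \
    --security-group-ids sg-0123456789abcdef0
done
```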
Note
Define the VPC Endpoint security group to ensure that only the desired inbound addresses are allowed by the endpoints.
Expected behavior
- The traffic behavior is like Pattern 3: the source IP address observed by the application pods is the NLB IP addresses.
- The response IP address seen by the client application is the VPC Endpoint ENI IPs.
Cleaning up
This section provides cleanup commands to remove all AWS resources created for the patterns. It deletes Kubernetes resources (target group bindings and sample pods), removes NLB listeners and target groups, deletes VPC endpoints across multiple VRF VPCs, removes VPC endpoint service configurations, deletes the load balancer itself, and finally deletes the entire EKS cluster.
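A cleanup sketch following that order (the resource names, ARNs, and IDs are the placeholders used in the sketches above; substitute your own values):

```shell
# Kubernetes resources: bindings, services, sample apps.
kubectl delete targetgroupbindings tgb-svc1 tgb-svc2 --ignore-not-found
kubectl delete svc svc1 svc2 nginx-p1-svc --ignore-not-found
kubectl delete deploy nginx-app nginx-p1 --ignore-not-found

# VPC endpoints and endpoint service configurations (IDs are placeholders).
aws ec2 delete-vpc-endpoints --vpc-endpoint-ids vpce-0aaa1111 vpce-0bbb2222
aws ec2 delete-vpc-endpoint-service-configurations --service-ids vpce-svc-0ccc3333

# The manually created NLB (deleting it removes its listeners) and targetgroups.
aws elbv2 delete-load-balancer --load-balancer-arn "$NLB_ARN"
aws elbv2 delete-target-group --target-group-arn "$TG1_ARN"
aws elbv2 delete-target-group --target-group-arn "$TG2_ARN"

# Finally, the EKS cluster and its VPC.
eksctl delete cluster --name nlb-demo-cluster
```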
Conclusion
Deploying services in Amazon EKS with the NLB necessitates a strategic approach to network load balancing. The combination of NLB and the AWS Load Balancer Controller offers a robust and adaptable solution for service exposure. Telco applications can use the TargetGroupBinding feature to achieve more granular and cost-effective service deployments, reducing the need for multiple load balancers and streamlining infrastructure management. This approach not only decreases operational overhead and costs but also provides a streamlined and efficient network architecture.
Moreover, the deployment patterns discussed enable organizations to maintain critical security boundaries through network segmentation while facilitating essential service communications. The integration of VPC Endpoints powered by AWS PrivateLink further enhances the flexibility and security of these deployments. Overall, CSPs can use these strategies to optimize their telco workloads on Amazon EKS, providing both high performance and stringent security in their cloud environments.

