How to route UDP traffic into Kubernetes

Since its release, Amazon Elastic Kubernetes Service (Amazon EKS) has been helping customers to run their applications reliably and at scale. UDP, or User Datagram Protocol, is a low-latency protocol that is ideal for workloads such as real-time streaming, online gaming, and IoT. The Network Load Balancer (NLB) is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency. Together, UDP, Kubernetes, and Network Load Balancers give customers the ability to improve their agility while also meeting their low latency requirements.

While the configuration of a UDP-based LoadBalancer Kubernetes Service is similar to that of a TCP-based one, there are a few things to keep in mind, such as configuring Network Load Balancer health checks. In this blog, we will demonstrate how to deploy a UDP-based game server, as well as a pattern for enabling health checks over a protocol other than UDP.


In Kubernetes, Pods are the smallest deployable units of computing that you can create. Each Pod consists of one or more containers that share network resources, including a single IP address. Pods are ephemeral: they are created and destroyed dynamically, and each replacement Pod receives a new IP address. This makes it difficult for applications to discover and identify which Pods to connect to, for example, when a front-end application attempts to connect to backend workloads.

A Kubernetes Service enables network access to a set of Pods. Kubernetes Services connect a set of Pods to an abstracted Service name and IP address. This abstraction allows other applications to reach the application by simply referring to the service’s name. This means that other applications no longer need to know the IP addresses assigned to the individual Pods. External applications and end users can also access Kubernetes Services, assuming that they are exposed outside of the cluster.

A Kubernetes Service exposes an interface to a set of Pods, enabling network access either from within the cluster or from external processes, through service types such as ClusterIP, NodePort, LoadBalancer, and ExternalName. Kubernetes Services support the TCP (default), UDP, and SCTP protocols.

One of the most popular ways to use Kubernetes Services in AWS is with the LoadBalancer type. In the AWS Cloud, you can use the AWS Load Balancer Controller to configure a Network Load Balancer to route TCP and UDP traffic from the internet to services running in your cluster. Settings such as instance or IP targets can be configured using annotations.

Key considerations for deploying UDP-based applications on EKS

Deploying a UDP-based application on Kubernetes is nearly identical to setting up a TCP-based one.

Here’s an example of a TCP-based service:

apiVersion: v1
kind: Service
metadata:
  name: sample-tcp-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: sample-tcp-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Additionally, the AWS Load Balancer Controller uses the same process to configure a Network Load Balancer for UDP-based services as it does for TCP-based services. However, there is a key distinction to keep in mind:

Health checks

Health checks cannot be performed over UDP. They can, however, be performed over another protocol (TCP, HTTP, or HTTPS) and on any port that the target exposes. If your application only supports UDP communication, you can use a sidecar container to expose a TCP port. The sidecar lets you concentrate on the application container without changing its behavior to support TCP-based health checks. For example, the sample game server referred to in this blog post runs NGINX as a sidecar that listens on port 80.

Because the TCP health check is handled by the sidecar rather than built into the UDP application container, the sidecar is not aware of failures in the application container. It is therefore recommended to configure a liveness probe to increase the accuracy of the game server health checks. The script used for the liveness probe monitors the status of the UDP game server and kills the NGINX server when the UDP health check fails. When the target group's unhealthy threshold count is reached, traffic is redirected to another healthy Pod. The interval between liveness probes should be shorter than the interval between Network Load Balancer health checks, and you can lower the load balancer's unhealthy threshold count (three by default) for faster failover.

A UDP-based service with a TCP health check, for example:

apiVersion: v1
kind: Service
metadata:
  name: sample-udp-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: TCP
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  selector:
    app: sample-udp-app
  ports:
    - protocol: UDP
      port: 8081
      targetPort: 8081
  type: LoadBalancer
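On the Pod side, the sidecar-plus-liveness-probe pattern can be sketched as follows. This is an illustrative manifest, not the one shipped in the repo: the container names, image URIs, and probe script path are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-udp-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-udp-app
  template:
    metadata:
      labels:
        app: sample-udp-app
    spec:
      # Share one process namespace so the probe can signal the NGINX process
      shareProcessNamespace: true
      containers:
        - name: game-server                  # illustrative name
          image: <account>.dkr.ecr.<region>.amazonaws.com/game-server:latest
          ports:
            - containerPort: 8081
              protocol: UDP
          livenessProbe:
            exec:
              # The script checks the UDP port and kills NGINX on failure
              command: ["/udp-health-probe"]
            periodSeconds: 5                 # shorter than the NLB check interval
        - name: nginx-sidecar                # answers the NLB's TCP health checks
          image: <account>.dkr.ecr.<region>.amazonaws.com/nginx-static:latest
          ports:
            - containerPort: 80
              protocol: TCP
```

With this layout, the Network Load Balancer checks TCP port 80 on the sidecar, while the liveness probe ties the sidecar's fate to the game server's actual UDP health.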

Let’s have a look at the architectural components involved in running a UDP-based game server on Kubernetes. Additionally, we will demonstrate how you can route multiplayer game server UDP traffic into an EKS cluster.


The architecture used to deploy a sample connectionless UDP-based game server comprises the following components:

  • Amazon EKS cluster and a node group of Amazon EC2 C6g instances powered by Arm-based Amazon Web Services Graviton2 processors.
  • AWS Load Balancer Controller that manages AWS Elastic Load Balancers for a Kubernetes cluster.
  • A UDP game server deployed to EKS and exposed through a Service of type LoadBalancer.
  • An NGINX container deployed as a sidecar alongside the game server that exposes port 80.
  • The game server also includes udp-health-probe, used as a liveness probe.
  • A Network Load Balancer provisioned through the AWS Load Balancer Controller with a UDP listener that is associated with a single target group. This target group is configured to register its targets by IP and to perform health checks on them using the TCP protocol on port 80.

Getting started


  • An AWS account with admin privileges.
  • Command-line tools. Install the latest versions of the AWS CLI, kubectl, eksctl, and git on your workstation (macOS/Linux).
  • To get started with the game server install, clone the containerized-game-servers GitHub repository on your local workstation.

Publish application images

As part of this step, you will build and publish the game server and sidecar NGINX container images to Amazon Elastic Container Registry (Amazon ECR). Locate the udp-nlb-sample directory in the folder into which you cloned the containerized-game-servers repo.

cd containerized-game-servers/udp-nlb-sample

Enter the following commands to set the AWS Region and AWS account ID environment variables.

export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
export AWS_REGION=us-west-2

Build and publish game server image

The next step is to build a Docker image of a game server and publish it to the ECR repository. This step creates an Arm-based image.

The Buildx CLI plug-in for Docker simplifies the process of creating multi-arch images. If you’re running Docker Desktop >= 2.1.0, for example, on macOS or Windows, it comes configured with Buildx and all the necessary functionality for cross-platform image building. Docker Linux packages also include Buildx when installed using the DEB or RPM packages.

cd containerized-game-servers/udp-nlb-sample/stk
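The exact build commands live in the repo's scripts; a typical Buildx build-and-push sequence for an Arm64 image looks like the following sketch (the repository name `stk` and the tag are assumptions):

```shell
# Create the ECR repository (ignore the error if it already exists) and log in
aws ecr create-repository --repository-name stk --region $AWS_REGION || true
aws ecr get-login-password --region $AWS_REGION | \
  docker login --username AWS --password-stdin \
  $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

# Build an Arm64 image with Buildx and push it straight to ECR
docker buildx build --platform linux/arm64 \
  -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/stk:latest \
  --push .
```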

Build and publish sidecar image

We are going to run NGINX as a sidecar container alongside the game server to support the TCP target health checks performed by the Network Load Balancer. First, we need to publish an Arm-based NGINX image to the ECR repository.

cd containerized-game-servers/udp-nlb-sample/nginx-static-sidecar/
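As with the game server image, the repo's scripts are authoritative; a Buildx sequence along these lines builds and pushes the sidecar image (the repository name `nginx-static` is an assumption):

```shell
# Build an Arm64 NGINX sidecar image and push it to ECR
aws ecr create-repository --repository-name nginx-static --region $AWS_REGION || true
docker buildx build --platform linux/arm64 \
  -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/nginx-static:latest \
  --push .
```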

Create EKS cluster

Use eksctl to create a cluster. Make sure you are using the latest version of eksctl for this example. The following command also creates a managed node group of instances powered by Arm-based AWS Graviton2 processors.

cd containerized-game-servers/udp-nlb-sample
eksctl create cluster -f eks-arm64-cluster-spec.yaml
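For reference, the cluster spec takes roughly the following shape; the file in the repo is authoritative, and the node group name and sizes below are assumptions:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: arm-us-west-2
  region: us-west-2
managedNodeGroups:
  - name: arm-ng              # illustrative name
    instanceType: c6g.xlarge  # Arm-based Graviton2 instances
    desiredCapacity: 2
```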

Deploy AWS Load Balancer Controller

AWS Load Balancer Controller is responsible for the management of AWS Elastic Load Balancers in a Kubernetes cluster. To deploy an AWS Load Balancer Controller, follow the steps outlined in the EKS user guide.
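One common installation path, summarized from the EKS user guide, uses Helm; the IAM policy and service account setup described in the guide must be completed first:

```shell
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=arm-us-west-2 \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```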

Deploy game server

The sample game server is configured as a LoadBalancer service type. When deployed, the AWS Load Balancer Controller will provision an external-facing Network Load Balancer with a target type “IP” and a “UDP” listener protocol. In this demonstration, we are using AWS VPC CNI for Pod networking. The VPC CNI supports direct access to a Pod IP via a secondary IP address on an ENI of a node. If you are using an alternative CNI, ensure that it supports directly routable Pod IPs.

For the purpose of this demonstration, we will use NGINX as a sidecar. The script included in the demo is configured as a liveness probe for the game server. The liveness probe executes the script at periodic intervals to verify the game server's health and sends a SIGKILL signal to the NGINX process if the UDP port becomes unavailable. For the probe to signal the NGINX process, the game server and sidecar containers must share a process namespace. Once the Network Load Balancer's target health check fails, incoming UDP traffic is redirected to a healthy Pod.

Make sure the AWS_REGION and AWS_ACCOUNT_ID variables are set correctly.

cd containerized-game-servers/udp-nlb-sample/nginx-static-sidecar/
cat stknlb-static-tcphealth-sidecar.yaml | envsubst | kubectl apply -f -

Wait for all the pods to be in running state.

kubectl get pods --selector=app=stknlb --watch
NAME                     READY   STATUS    RESTARTS   AGE
stknlb-8c59f46d8-ln558   2/2     Running   0          3m33s
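Record the Network Load Balancer's DNS name from the service's EXTERNAL-IP column; it is what clients connect to. The service name `stknlb` below is assumed from the manifest; adjust it if yours differs.

```shell
# The EXTERNAL-IP column shows the NLB DNS name once provisioning completes
kubectl get svc
# Or extract it directly:
kubectl get svc stknlb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```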

Test game server

You can test the game server on your local workstation by downloading the SuperTuxKart game client. Use online mode in SuperTuxKart and connect to the Network Load Balancer URL recorded in the previous step.

Next, navigate to the game lobby by entering the EXTERNAL-IP URL with port 8081. Wait for other players to join, then select Start race to begin the game.


To avoid incurring future charges, delete all resources created during this exercise. The following command deletes the cluster along with its node group.

eksctl delete cluster --name arm-us-west-2


In this blog post, we explained how you can scale a connectionless UDP-based application behind a Network Load Balancer to meet low-latency needs. We've shown how to use a sidecar to enable TCP target health checks with a Network Load Balancer that uses UDP listeners.

Please refer to the containerized-game-servers GitHub repo to learn more deployment patterns that use mutating webhooks. As of today, the AWS Network Load Balancer supports UDP for IPv4 targets only. We will continue to publish examples demonstrating how to route UDP traffic into IPv6 Kubernetes clusters as Network Load Balancer adds support for UDP protocol with dual-stack address types. Please visit AWS Containers Roadmap to provide feedback, suggest new features, and review our roadmaps.

Sheetal Joshi

Sheetal Joshi is a Principal Developer Advocate on the Amazon EKS team. Sheetal worked for several software vendors before joining AWS, including HP, McAfee, Cisco, Riverbed, and Moogsoft. For about 20 years, she has specialized in building enterprise-scale, distributed software systems, virtualization technologies, and cloud architectures. At the moment, she is working on making it easier to get started with, adopt, and run Kubernetes clusters in the cloud, on-premises, and at the edge.

Yahav Biran

Yahav Biran is a Principal Solutions Architect in AWS, focused on AI frameworks and applications. Yahav enjoys contributing to open source projects and publishing on the AWS blog and in academic journals. He currently contributes to the K8s Helm community, AWS databases and compute blogs, and the Journal of Systems Engineering. He delivers technical presentations at technology events and works with customers to design their applications in the Cloud. He received his Ph.D. (Systems Engineering) from Colorado State University.