The journey to IPv6 on Amazon EKS: Implementation patterns (Part 2)

Introduction

In Part 1 of this blog series, we covered the foundations of Amazon Elastic Kubernetes Service (Amazon EKS) IPv6 clusters and their deep integration with the underlying Amazon Virtual Private Cloud (Amazon VPC) dual-stack IP mode.

As customers evaluate their migration strategies to IPv6 to harness the benefits of scale and simplicity, they need to devise comprehensive implementation strategies that include operating in dual-stack environments. Based on our recent interactions, IPv6 support for common AWS services and the ability of Amazon EKS in IPv6 mode to operate in mixed settings are still key considerations for customers migrating to IPv6. In this part of the series, we focus on some of the common implementation patterns and address related concerns. As mentioned in Part 1, we won’t cover the deep technical aspects of IPv6 on AWS, but rather focus on Amazon EKS IPv6 clusters and how they interact with Amazon EKS IPv4 clusters as well as with AWS core networking services. For a comprehensive guide, refer to our IPv6 on AWS whitepaper.

By now, you should have an introductory understanding of a dual-stack VPC and an Amazon EKS cluster in IPv6 mode. There are, however, other questions you might have. For example: How do Pods communicate in hybrid environments or with internet endpoints that are IPv4-only? How would IPv4 endpoints outside the cluster boundary communicate with them?

Let’s look into a few communication patterns that will help answer those questions.

Solution overview

Communication patterns

In Cluster

In an Amazon EKS/IPv6 cluster, Pod to Pod communication is IPv6 native (Figure 1).

Figure 1: Amazon EKS/IPv6 Pod to Pod communication

In an Amazon EKS/IPv6 cluster, Pod to Kubernetes service (i.e., ClusterIP-based) communication is IPv6 native (Figure 2).

Figure 2: EKS/IPv6 Pod to Kubernetes Service communication

Note: The assigned IPv6 ClusterIP is carved from a Unique Local IPv6 Unicast Addresses (ULA) range, a non-globally-reachable IPv6 address range allocated by the Amazon EKS control plane upon cluster creation. This range is immutable (see Part 1 of this blog series for more information).
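For reference, creating an IPv6 cluster (and therefore triggering this service range allocation) can be expressed declaratively. The following is a minimal eksctl ClusterConfig sketch; the cluster name, Region, Kubernetes version, and node group settings are placeholders, so treat it as illustrative rather than prescriptive:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ipv6-demo            # placeholder cluster name
  region: us-west-2          # placeholder Region
  version: "1.27"            # placeholder Kubernetes version
kubernetesNetworkConfig:
  ipFamily: IPv6             # runs the cluster (including the ClusterIP range) in IPv6 mode
addons:                      # eksctl expects these add-ons to be listed for IPv6 clusters
  - name: vpc-cni
  - name: coredns
  - name: kube-proxy
iam:
  withOIDC: true
managedNodeGroups:
  - name: ng-ipv6            # placeholder node group
    instanceType: m5.large
    desiredCapacity: 2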

Now, let’s dive into how the Amazon VPC CNI’s built-in interoperability layer enables Amazon EKS/IPv6 Pods to communicate with IPv4 endpoints.

Outbound traffic

For IPv6 endpoints to successfully connect (i.e., egress) to IPv4 endpoints, two well-known approaches come to mind:

  • Turn the IPv6-only endpoint into a dual-stack endpoint, where the endpoint is assigned both a valid routable IPv4 address and a valid routable IPv6 address. This is the Kubernetes upstream approach to IPv6; however, as noted in Part 1, this doesn’t solve the IPv4 exhaustion challenge. It also creates a long-term dependency on IPv4, which impacts the ultimate goal of migrating to IPv6-only networks.
  • Implement an interoperability layer that translates IPv6 to IPv4. NAT64 and DNS64 are the well-known core constructs that form this layer (a sketch of these constructs follows this list). Dual-stack load balancer services – Application Load Balancer (ALB) and Network Load Balancer (NLB) – can also act as an interoperability layer, but aren’t architecturally aimed at egress traffic patterns.
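To make the second approach concrete, the NAT64/DNS64 constructs are configured at the VPC level. The following is a minimal CloudFormation sketch; the VPC, route table, and NAT gateway references, as well as the subnet CIDRs, are placeholders:

# Illustrative only: enable DNS64 on a dual-stack subnet and send the
# well-known NAT64 prefix (64:ff9b::/96) to a NAT gateway.
DualStackPrivateSubnet:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref Vpc                         # placeholder VPC
    CidrBlock: 10.0.1.0/24                  # placeholder IPv4 CIDR
    Ipv6CidrBlock: 2001:db8:1234:1a00::/64  # placeholder; use a /64 from your VPC's IPv6 range
    AssignIpv6AddressOnCreation: true
    EnableDns64: true                       # DNS64: synthesize AAAA answers for IPv4-only names
Nat64Route:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable    # placeholder route table
    DestinationIpv6CidrBlock: 64:ff9b::/96  # traffic to synthesized addresses
    NatGatewayId: !Ref NatGateway           # placeholder NAT gateway performing NAT64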

The Amazon VPC CNI plugin for Amazon EKS clusters has a built-in capability designed to merge those methods by creating an opinionated egress-only interoperability layer. In simpler words, no user action is needed for Pods to egress to IPv4 endpoints. Let’s look at Figure 3 to learn more about the flow and mechanism.

Figure 3: EKS/IPv6 Pod connecting to an IPv4 endpoint located outside the cluster boundary

The Pod is dual-stacked, meaning it is assigned both an IPv6 and an IPv4 address. However, the assigned IPv4 address is non-routable, and only the worker node’s IPv4 address is visible outside the node. The Pod’s IPv4 egress therefore utilizes the dual-stack-capable Amazon Elastic Compute Cloud (Amazon EC2) worker node’s primary IPv4 address, as explained in steps 1 to 4 of Figure 3. This is the default behavior in IPv6-enabled Amazon EKS clusters, and you don’t need to enable additional options. You can check here to learn more about how it is implemented.
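As a rough sketch of where this behavior lives, the VPC CNI exposes it through settings on the aws-node DaemonSet. The variable names and defaults below are assumptions based on the VPC CNI documentation at the time of writing; verify them against your CNI version:

# Illustrative excerpt of the aws-node DaemonSet (kube-system namespace).
containers:
  - name: aws-node
    env:
      - name: ENABLE_IPv6        # assumed: set to "true" when the cluster runs in IPv6 mode
        value: "true"
      - name: ENABLE_V4_EGRESS   # assumed: attaches the host-local IPv4 egress interface (default)
        value: "true"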

Next, let’s explore how a Pod egressing over IPv4 is able to connect to an internet-based IPv4 endpoint:

At this point, you will not be surprised to learn that once the Pod egresses over IPv4 using the dual-stack Amazon EC2 worker node’s routable IPv4 address, we simply need a Network Address Translation (NAT) gateway to reach the internet-based IPv4 endpoint (Figure 4).

Figure 4: EKS/IPv6 Pod connecting to an internet-based IPv4 endpoint

The flow is described in steps 1 to 5 of Figure 4. Steps 4 and 5 follow the standard practice of source network address translation (SNAT) through a public IPv4 NAT gateway.

Note: This flow depends on the Amazon EKS worker node being deployed into a dual-stack subnet that is part of a dual-stack VPC.
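For reference, the IPv4 leg of this path can be expressed with a minimal CloudFormation sketch like the following; the logical IDs and the subnet/route table references are placeholders:

# Illustrative only: public NAT gateway plus an IPv4 default route for the
# dual-stack subnet hosting the worker nodes.
NatEip:
  Type: AWS::EC2::EIP
  Properties:
    Domain: vpc
NatGateway:
  Type: AWS::EC2::NatGateway
  Properties:
    AllocationId: !GetAtt NatEip.AllocationId
    SubnetId: !Ref PublicSubnet             # placeholder public subnet
WorkerIpv4DefaultRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable    # placeholder route table of the worker-node subnet
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway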

The tradeoff of this approach is that each dual-stack Amazon EC2 worker node consumes a valid IPv4 address (in addition to its IPv6 address). In the vast majority of use cases, the IPv4 address space usage will likely remain minimal due to the high ratio of Pods to Amazon EC2 worker nodes.

Next, let’s explore how internet-based IPv4 endpoints are able to connect to load-balanced IPv6 Pods in Amazon EKS clusters.

Inbound traffic

In contrast to the in-cluster and outbound communication patterns mentioned previously, traffic from outside the cluster to IPv6 Pods isn’t subject to the Amazon VPC CNI’s built-in egress-only interoperability implementation.

The IPv4 to IPv6 ingress flows are made possible by:

  1. Including specific annotations in the Ingress or Kubernetes Service definitions, which are detected and consumed by the AWS Load Balancer Controller (LBC) to deploy a dual-stack ALB or NLB, and
  2. Exposing the Amazon EKS services using an ALB (via a standard Kubernetes Ingress definition) or an NLB (via a standard Kubernetes Service definition).

Note that the AWS LBC needs to be installed separately, as it is not one of the default add-ons of an Amazon EKS cluster at the time of writing this blog.

The results of this flow are depicted in Figure 5:

Figure 5: Internet-based IPv4 endpoint connecting to load-balanced EKS/IPv6 Pods

At step a of the mechanism, a standard Kubernetes Ingress definition with the following annotations is created and consumed by the LBC:

alb.ingress.kubernetes.io/ip-address-type: "dualstack"
alb.ingress.kubernetes.io/target-type: "ip"
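Put together, a minimal Ingress sketch using these annotations might look like the following; the Ingress name, backend Service, port, and the internet-facing scheme annotation are illustrative additions rather than requirements of this flow:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                                       # placeholder name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing     # public ALB, as in the Figure 5 flow
    alb.ingress.kubernetes.io/ip-address-type: dualstack  # dual-stack ALB
    alb.ingress.kubernetes.io/target-type: ip             # register Pod IPs as targets
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service                         # placeholder backend Service
                port:
                  number: 80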

The LBC then creates a dual-stack ALB (step b).

In steps 1 to 4 of Figure 5, the internet-based IPv4 endpoint connects to the public IPv4 endpoint of the ALB. Then, in steps 4 and 5, the dual-stack ALB uses its IPv6 capabilities to natively load balance to the IPv6-based Pods. In simpler words, the ALB acts as the interoperability layer in this case, bridging between IPv4 and IPv6 addresses.

The NLB mechanism and flows are exactly the same as depicted in Figure 5, with the exception of the annotations, which are set on the Kubernetes Service definition:

service.beta.kubernetes.io/aws-load-balancer-ip-address-type: "dualstack"
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
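Similarly, a minimal Service sketch for the NLB path might look like the following; the Service name, selector, ports, and the scheme/type annotations are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: app-nlb                                                        # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external        # hand the Service to the LBC
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip   # register Pod IPs as targets
spec:
  type: LoadBalancer
  selector:
    app: my-app                                                        # placeholder Pod selector
  ports:
    - port: 80
      targetPort: 8080                                                 # placeholder container port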

It is worth mentioning that internal-facing load balancers support the same IPv4 to IPv6 bridging flows for internal endpoints, which implies that ALBs and NLBs can be used as interoperability endpoints across VPC network boundaries.

Note: As of the time of writing this post, ALB and NLB only support the IP target type (i.e., load balancing directly to Pod IP addresses) in IPv6 Amazon EKS clusters.

Conclusion

In this part of the blog series, we showed you how Pods in IPv6 Amazon EKS clusters interact and interoperate with IPv4 networks and services. The built-in interoperability layer of the Amazon VPC CNI allows you to start migrating your workloads to IPv6-based Amazon EKS clusters in a gradual manner, even before IPv6 support is available for all the other services they depend on. In the next post, we will dive deep into how IPv6-based Amazon EKS clusters interoperate in dual-stack networks and highlight additional considerations for connecting them with other AWS services, especially when those services are still using IPv4 address space.