The journey to IPv6 on Amazon EKS: Interoperability scenarios (Part 3)

Introduction

So far, in Part 1 and Part 2 of this blog series, we covered the foundational aspects of Amazon Elastic Kubernetes Service (Amazon EKS) IPv6 clusters and highlighted key patterns for implementing IPv6 to future-proof your networks. Beyond configuring your IPv6 Amazon EKS clusters, migrating to IPv6 involves careful infrastructure planning and validation of other networking components' and applications' support for IPv6. This includes identifying dependencies on IPv4 and understanding the scope of the migration, in order to minimize service disruptions.

Solution overview

AWS services and IPv6

In many use cases, applications deployed into Amazon EKS Pods require constant access to services outside the Amazon EKS cluster boundary. One such example is access to managed database services, such as Amazon Relational Database Service (Amazon RDS), or to Amazon Simple Storage Service (Amazon S3) buckets from Pods in Amazon EKS clusters. The vast majority of key AWS-managed services support IPv6, either as IPv6-only or, more commonly, dual-stack. Refer to this documentation for the latest list of AWS services that support IPv6.

You may be wondering about services that do not yet support IPv6, such as Amazon Elastic Container Registry (Amazon ECR) or Amazon DynamoDB. For Pods in IPv6 Amazon EKS clusters, this is transparent: access to the IPv4-only Amazon DynamoDB or Amazon ECR endpoints is handled by the egress-only IPv4 interoperability layer built into the Amazon VPC CNI. This requires neither user action nor any specific configuration, as described in Part 2 of this series.
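To illustrate this transparency, here is a minimal sketch of how the destination's address family determines which path a Pod's egress traffic takes. The helper name and decision logic are illustrative assumptions, not part of the VPC CNI itself:

```python
import ipaddress

def egress_family(resolved_addresses):
    """Illustrative only: given the addresses resolved for an external
    service, report whether a Pod in an IPv6 EKS cluster reaches it
    natively over IPv6 or via the CNI's egress-only IPv4 interop path."""
    families = {ipaddress.ip_address(a).version for a in resolved_addresses}
    if 6 in families:
        return "native IPv6"      # AAAA record present: direct IPv6 path
    return "egress-only IPv4"     # A records only: CNI IPv4 interop path

# An IPv4-only endpoint (A records only) is handled by the interop layer:
print(egress_family(["52.94.5.1"]))                   # egress-only IPv4
# A dual-stack endpoint is reached natively over IPv6:
print(egress_family(["52.94.5.1", "2600:9000::1"]))   # native IPv6
```

The point of the sketch is that the application never branches on address family itself; the CNI handles IPv4-only destinations on the Pod's behalf.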

AWS interconnect networking services and IPv6

In the past, the customers we worked with focused on deploying a single Amazon EKS cluster for all purposes. Running multiple purpose-built Amazon EKS clusters was considered an operational overhead (rightfully, in most cases). In recent years, however, Infrastructure-as-Code (IaC) tool chains and GitOps methodologies have democratized automated cluster creation and lifecycle management of infrastructure as well as applications. With multiple Amazon EKS clusters becoming the new norm, connecting services across Amazon EKS clusters' network boundaries remains a key challenge. The good news is that when Pod networking is implemented by the Amazon VPC CNI, a Pod uses first-class IP constructs within the Amazon VPC, so any VPC interconnect service or flow is agnostic to Pod traffic.

Private network connectivity across Amazon EKS clusters, which reside in distinct VPCs, is implemented in the vast majority of use-cases using the following three AWS services:

  • VPC Peering: VPC peering is the simplest method for VPC-to-VPC connectivity. VPC peering supports dual-stack, which practically means that dual-stack VPCs can be peered using both their IPv4 and IPv6 Classless Inter-Domain Routing (CIDR) blocks. It is also possible to opt in to IPv6-only peering routes across the VPCs, which can be useful in use cases where egress IPv4 across those VPCs is not required. See this detailed information and diagram.
  • AWS Transit Gateway (TGW): This is a scalable, highly available way to establish network connectivity between multiple VPCs. Similar to VPC peering, TGW is dual-stack and can connect dual-stack VPCs. It is also possible to opt in to IPv6-only TGW routes across the VPCs, which can be useful in use cases where egress IPv4 across those VPCs is not required. See the detailed information and diagram.
  • AWS PrivateLink: PrivateLink provides private connectivity between exposed services across VPCs without an Internet Gateway (IGW) and without exposing traffic to the public internet. Similar to VPC peering and TGW, AWS PrivateLink supports dual-stack, allowing service consumers and service providers to communicate over both IPv4 and IPv6. In fact, IPv6-based service providers can allow both IPv4 and IPv6 service consumers to connect to and consume the exposed service, enabling interoperability.

In summary, you can start interconnecting Amazon EKS cluster network boundaries using the standard purpose-built AWS interconnect services with no specific changes to your current environment. At the same time, the Amazon VPC CNI egress-only interop layer (for egress traffic) and the dual-stack load balancers (for ingress traffic) allow seamless network flows. You can enable existing Transit Gateway, VPC peering, or Cloud WAN attachments to route using IPv6. All you need to do is associate an IPv6 prefix with the attachment subnets and edit the attachment to enable it for IPv6. No downtime is expected with this operation.
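To make the prefix association concrete, here is a minimal sketch, using Python's standard `ipaddress` module and a documentation prefix (not a real allocation), of how a /56 IPv6 block assigned to a VPC breaks into the per-subnet /64s you would associate with attachment subnets:

```python
import ipaddress

# Example only: VPCs typically receive a /56 IPv6 block, and each subnet
# is assigned its own /64 carved from that block.
vpc_v6 = ipaddress.ip_network("2001:db8:1234:5600::/56")
subnets = list(vpc_v6.subnets(new_prefix=64))

print(len(subnets))    # 256 possible /64 subnets in a /56
print(subnets[0])      # 2001:db8:1234:5600::/64
print(subnets[1])      # 2001:db8:1234:5601::/64
```

A single /64 already provides vastly more addresses than any subnet needs, which is why per-subnet prefix assignment is a one-time, non-disruptive operation.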

Private connectivity scenarios

In this section, we explore advanced architectural use cases for IPv6 Amazon EKS clusters and their interoperability with IPv4-based applications.

Egress connectivity to IPv4 endpoints

During the migration to IPv6, you will likely need to operate services in mixed IPv4 and IPv6 environments. Take, for instance, a Pod in an IPv6 EKS cluster connecting to an existing IPv4 service, as shown in the following architecture (Figure 6):

Figure 6: EKS/IPv6 Pods privately connecting to IPv4 web application in remote VPC

VPC peering is dual-stack and can peer IPv6 and IPv4 VPCs, provided that the IPv4 CIDRs do not overlap. The Amazon EKS/IPv6 egress-only interop layer can then create an IPv4-only egress connection to the single-stack Network Load Balancer.
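A simple pre-flight check for the non-overlap requirement can be sketched with Python's standard `ipaddress` module (the function name is illustrative):

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """Illustrative pre-flight check: VPC peering requires that the
    IPv4 CIDR ranges of the two VPCs do not overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))    # True: disjoint ranges
print(can_peer("10.0.0.0/16", "10.0.128.0/20"))  # False: ranges overlap
```

Running such a check during planning avoids discovering an overlap only when the peering connection's routes fail to propagate.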

Note: In this pattern, VPC peering can be replaced with AWS Transit Gateway, and the IPv4 service might instead be located on an on-premises network.

Egress connectivity to IPv6 endpoints

Connectivity from Amazon EKS/IPv4 Pods to EKS/IPv6 services can be achieved using the dual-stack nature of Application and Network Load Balancers. That said, certain use cases mandate direct, private Pod-to-Pod communication. These include applications that use client-side load balancing, service discovery, or service-mesh implementations that require direct sidecar-to-sidecar Pod and container network communication.

It is likely that during an IPv6 migration phase, Amazon EKS/IPv4 clusters will need to co-exist with EKS/IPv6 clusters. Amazon EKS supports an IPv6 egress-only mechanism built into the Amazon VPC CNI, which allows EKS/IPv4 Pods to privately communicate with EKS/IPv6 Pods without the need to implement yet another interop layer.

The following diagram depicts the problem statement:

Figure 7: EKS/IPv4 Pod is required to establish a direct, private connectivity to an EKS/IPv6 Pod

For an Amazon EKS/IPv4 Pod to successfully connect (that is, egress) to an EKS/IPv6 Pod, two approaches come to mind:

  • Recreate the Amazon EKS/IPv4 cluster to support IPv6, which practically means:
    1. Converting the single-stack IPv4 VPC into a dual-stack VPC
    2. Recreating the EKS cluster with the IPv6 family type
  • Implement an interoperability layer that translates IPv4 to IPv6.

Both methods mentioned above can be very disruptive and possibly result in high implementation overhead.

Instead, the IPv6 egress-only mechanism built into the Amazon VPC CNI involves two main steps:

  1. Convert IPv4 single-stack VPC into a dual-stack VPC
  2. Enable the IPv6 egress configuration option in the Amazon VPC CNI
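As a sketch of what the second step amounts to: the option is an environment variable on the aws-node DaemonSet. The snippet below is a hypothetical helper, assuming the `ENABLE_V6_EGRESS` flag exposed by recent Amazon VPC CNI releases and the env-list shape of a Kubernetes container spec:

```python
def v6_egress_enabled(container_env):
    """Check a list of {'name': ..., 'value': ...} env entries (as found
    on the aws-node DaemonSet's container spec) for the IPv6 egress flag.
    Assumption for illustration: the ENABLE_V6_EGRESS flag of recent
    Amazon VPC CNI releases."""
    for entry in container_env:
        if entry.get("name") == "ENABLE_V6_EGRESS":
            return entry.get("value", "").lower() == "true"
    return False  # flag absent: the feature is disabled by default

env = [{"name": "ENABLE_IPv6", "value": "false"},
       {"name": "ENABLE_V6_EGRESS", "value": "true"}]
print(v6_egress_enabled(env))  # True
```

In practice you would set the variable with your usual tooling (kubectl, Helm values, or the EKS add-on configuration) rather than inspect it programmatically; the helper only illustrates where the setting lives.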

The following diagram depicts the process in a step-by-step manner:

Figure 7a: EKS/IPv4 VPC is dual-stack, VPC-CNI SNAT with ULA IPv6

The above diagram depicts the dual-stack VPC opt-in, which adds IPv6 capabilities to the existing single-stack IPv4 VPC. The opt-in is phased and requires a reboot of all worker nodes at the end of the process. The process is explained in detail (steps A to E) in Figure 7a.

You must validate the setup after the worker nodes have rebooted. For instance, exec into any Pod and list the network interfaces: you should observe two addresses, one with a VPC primary IPv4 address and another with a non-internet-routable Unique Local Address (ULA) based IPv6 address.
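A quick way to sanity-check the addressing is shown below: a minimal sketch using Python's standard `ipaddress` module, with illustrative sample addresses:

```python
import ipaddress

ULA = ipaddress.ip_network("fc00::/7")  # RFC 4193 Unique Local Addresses

def is_ula(addr):
    """True when an address observed on the Pod's second interface is a
    non-internet-routable Unique Local Address (ULA)."""
    ip = ipaddress.ip_address(addr)
    return ip.version == 6 and ip in ULA

# Sample addresses for illustration only:
print(is_ula("fd00:ec2::1"))   # True: a ULA such as the CNI assigns
print(is_ula("2600:1f13::1"))  # False: globally routable IPv6
print(is_ula("10.0.1.5"))      # False: the VPC primary IPv4 address
```

Seeing a ULA on the Pod confirms the egress-only design: the IPv6 address can initiate outbound connections but is never reachable from the internet.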

The following diagram (Figure 7b) depicts the full network flows of an Amazon EKS/IPv4 Pod connecting directly to an EKS/IPv6 Pod across distinct VPCs (connected with VPC peering):

Figure 7b: EKS/IPv4 VPC is dual-stack, VPC-CNI SNAT with ULA IPv6

Figure 7b flow pattern considerations:

  • The service discovery implementation must support standard DNS-based resolution.
  • VPC peering can be replaced with any VPC interconnect method, such as AWS Transit Gateway.
  • IPv4 cross-VPC routes were not defined in the route tables, because there is no requirement to route between IPv4 endpoints across the VPCs.

Looking ahead

Today, Amazon EKS in IPv6 mode supports only a dual-stack foundation architecture, which comprises a dual-stack VPC and Amazon EKS nodes deployed into dual-stack subnets. Some of our customers have asked us to support the deployment of Amazon EKS clusters into IPv6-only subnets, where Amazon EC2 worker nodes use IPv6-only address space, relying on DNS64 and NAT64 constructs as a centralized interoperability layer. Stay tuned as we continue to evolve our features to enable different deployment modes, and share your feedback on our public roadmap.
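The DNS64 and NAT64 mechanism mentioned above can be illustrated with a short sketch: DNS64 synthesizes AAAA records for IPv4-only destinations by embedding the IPv4 address into the NAT64 well-known prefix 64:ff9b::/96 (RFC 6052), and a NAT64 gateway later extracts it to translate the traffic:

```python
import ipaddress

# NAT64 well-known prefix (RFC 6052); the IPv4 address occupies the
# low-order 32 bits of the synthesized IPv6 address.
NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")

def synthesize_aaaa(ipv4_addr):
    """Compute the IPv6 address a DNS64 resolver would synthesize for an
    IPv4-only destination."""
    v4 = int(ipaddress.IPv4Address(ipv4_addr))
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | v4)

# 192.0.2.1 (a documentation address) embedded in the well-known prefix:
print(synthesize_aaaa("192.0.2.1"))  # 64:ff9b::c000:201
```

This is what would let IPv6-only worker nodes reach IPv4-only services without any per-cluster interop configuration: the translation happens centrally at the resolver and the NAT64 gateway.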

Conclusion

In this three-part series, we showed you how Amazon EKS clusters in the IPv6 address space interact and interoperate with IPv4 networks as well as AWS services. The built-in dual-stack interoperability layer of the Amazon VPC CNI lets you migrate your workloads to IPv6 gradually, while IPv6 support rolls out across the remaining services. The future belongs to IPv6, and we highly encourage you to start planning your migration to Amazon EKS in IPv6 mode now! Visit the Amazon EKS best practices guide for recommendations and additional IPv6 considerations.