Networking & Content Delivery

Dual-stack IPv6 architectures for AWS and hybrid networks – Part 2

In part one of our series on IPv6 for AWS and hybrid network architectures, we explored some of the most common dual stack designs: dual stack Amazon Virtual Private Cloud (Amazon VPC) and Amazon Elastic Compute Cloud (Amazon EC2) instances, Internet connectivity, Internet-facing Network Load Balancer and Application Load Balancer deployments, as well as VPC and hybrid connectivity. We recommend reading it first before diving into this second part.

One of the main reasons for adopting IPv6 in internal networks – in AWS and on-premises – is the increasing strain on the private IPv4 space, driven by large multi-region deployments and container adoption at scale. In November 2021, we launched support for IPv6-only subnets in a dual stack Amazon VPC, together with the ability to launch IPv6-only Nitro EC2 instances in these subnets.

This second part of our IPv6 architectures journey explores some of the IPv6 architectures that you can leverage today for AWS and hybrid networks driven by IPv6-only subnets, internal dual stack ELB support and IPv6-only targets, DNS64, NAT64, Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Relational Database Service (Amazon RDS) and AWS PrivateLink IPv6 support. For detailed information on the full extent of the configuration options for each referenced service, we recommend that you follow the documentation links in each section.

The following IPv6 and dual-stack architectures are in the scope of this post:

  1. IPv6-only subnets and EC2 instances in an IPv6-enabled Amazon VPC.
  2. Backwards compatibility with IPv4-only services and workloads using NAT64 and DNS64 in your dual stack AWS network.
  3. Dual-stack application endpoints and IPv6 target groups with Application and Network Load Balancers (NLB).
  4. Amazon EKS IPv6 support in a dual stack Amazon VPC.
  5. Amazon RDS and AWS PrivateLink IPv6 support in a dual stack Amazon VPC.

We assume that you’re familiar with the Amazon VPC constructs and the dual stack functionality and configuration options for the services mentioned. Furthermore, you should be aware of the IPv6 protocol definition, the types of addresses, and the configuration mechanisms. We’ll be discussing architecture implementations that use both IPv4 and IPv6, focusing on interoperability and best practices.

1. IPv6-only subnets and EC2 instances in an IPv6-enabled Amazon VPC

The support for IPv6-only subnets and EC2 instances in a dual stack VPC means that you can scale up the workloads that consume a large number of IP addresses. Additionally, you can meet government-mandated requirements for the adoption of IPv6-only network environments, and minimize the need for translation software or systems, thereby creating a simplified, cost-effective, and performance-driven architecture.

Your VPCs will continue to be dual stack, and you can have a mix of IPv4-only subnets, dual-stack subnets, and IPv6-only subnets within the same VPC. This lets you create your IPv6 native workloads and have them co-exist with your IPv4-only, or dual stack workloads. Each IPv6-only or dual stack subnet in the VPC has a fixed /64 IPv6 prefix length. Refer to Introducing IPv6-only subnets and EC2 instances and the documentation for details on how to create IPv6-only subnets.

It’s worth discussing some of the principles that govern the IPv6 addressing plan. In IPv6, the question “How many hosts am I planning to have in this subnet?”, which is the starting point for any IPv4 address planning, has no relevance. When considering the IPv6 addressing plan, the primary concern instead becomes “How many subnets are needed, and how can I allocate those subnets, build a logical and scalable address plan, and more efficiently operate my network?”
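To make the subnet-first mindset concrete, here is a small sketch using Python's standard `ipaddress` module. The `2001:db8::/32` prefix is the IPv6 documentation range and stands in for the CIDR that AWS would actually assign to your VPC (from Amazon's pool or your BYOIP range); the /56-to-/64 split is illustrative.

```python
import ipaddress

# Illustrative only: AWS assigns the VPC IPv6 CIDR from Amazon's pool (or
# your BYOIP range); 2001:db8::/32 is the documentation prefix used here.
vpc_block = ipaddress.IPv6Network("2001:db8:1234:ff00::/56")

# Every IPv6-enabled VPC subnet (dual stack or IPv6-only) uses a fixed /64
# prefix length, so planning starts from "how many subnets", not "how many hosts".
subnets = list(vpc_block.subnets(new_prefix=64))
print(len(subnets))      # 256 /64 subnets carved from one /56
print(subnets[0])        # 2001:db8:1234:ff00::/64
print(2 ** (128 - 64))   # addresses per /64 - host counts stop being a constraint
```

Because a /56 yields 256 /64 subnets, the planning exercise becomes allocating those subnets logically (per Availability Zone, per tier, per environment) rather than sizing each one.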

Let’s consider the VPC design again. Now that you have created IPv6-only subnets, let’s look at the traffic patterns within the VPC and how connectivity outside of the VPC is ensured.

a. VPC internal flows

The following diagram shows a sample VPC with two dual stack subnets, as well as two IPv6-only subnets:

Amazon VPC IPv6-only internal flows

Figure 1: Amazon VPC IPv6-only internal flows

The first traffic flow highlighted in the diagram above generically maps to IPv6-only communication inside of the VPC. IPv6-only resources in the VPC can communicate with other IPv6-only resources using the IPv6 stack. For the second flow, communication between the IPv6-only and the dual stack resources in the VPC is possible using the IPv6 protocol stack. Since both flows are within the VPC, the public or private nature of the subnets has no influence – provided that the VPC security controls (Security Groups, Network ACLs) allow for communication.

Next, let’s look at Internet connectivity for the IPv6-only workloads.

b. Internet connectivity

The IPv6-enabled VPC lets you maintain the same security controls and enforcement mechanisms across all three types of subnets: IPv4-only, dual stack, and IPv6-only. The security posture of your workloads in the VPC dictates their placement across private and public subnets.

Public IPv6-only subnets

Internet connectivity for public IPv6-only subnets follows the same mechanism as dual-stack subnets, using the Internet Gateway (IGW) for egress and ingress. Refer to part one of this series for details on dual stack connectivity. The following diagram shows the traffic flows for public IPv6 connectivity for IPv6-only public resources, as well as the corresponding route table:

Public IPv6-only subnets Internet connectivity

Figure 2: Public IPv6-only subnets Internet connectivity

Bidirectional connectivity for the public IPv6-only resource is achieved using the IPv6 address from the Subnet CIDR, without the need for an Elastic IP and the 1:1 Network Address Translation (NAT) that happens at the IGW level for the IPv4 traffic. Any route table that you create in your VPC contains both the IPv4 and the IPv6 CIDR blocks of the VPC as local routes, regardless of the IPv6-only subnets that are associated with it.
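The route table in Figure 2 can be modeled as a longest-prefix-match lookup: the VPC IPv6 CIDR stays local, and everything else is sent to the IGW. The following sketch uses hypothetical prefixes and a made-up `igw-` ID purely to illustrate how VPC routing selects the most specific route.

```python
import ipaddress

# Hypothetical route table for a public IPv6-only subnet (as in Figure 2).
# The prefixes and the igw- ID are illustrative, not real resources.
routes = {
    "2001:db8:1234:ff00::/56": "local",       # VPC IPv6 CIDR (local route)
    "::/0": "igw-0123456789abcdef0",          # default route to the IGW
}

def next_hop(destination: str) -> str:
    """Pick the most specific matching route, as VPC routing does."""
    dst = ipaddress.IPv6Address(destination)
    matches = [ipaddress.IPv6Network(p) for p in routes
               if dst in ipaddress.IPv6Network(p)]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[str(best)]

print(next_hop("2001:db8:1234:ff00::10"))  # local: stays inside the VPC
print(next_hop("2600:9000::1"))            # igw-...: Internet-bound traffic
```

Note that, unlike IPv4, the Internet-bound IPv6 flow keeps the instance's own address end to end; there's no Elastic IP or NAT step at the IGW.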

Private IPv6-only subnets

Internet connectivity for private subnets is strictly bound to a client-server communication model. The routing configuration, for both IPv4 and IPv6, allows the workloads deployed inside of the private subnets to open connections to public endpoints, and to receive the response traffic. For IPv6-only subnets, the Egress-only IGW is the VPC component that enforces this communication flow type, preventing the Internet from initiating IPv6 connections to your private IPv6-only instances and allowing only return traffic, without relying on address or port translation.
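The Egress-only IGW's behavior can be summarized as stateful flow tracking: instance-initiated flows and their return traffic pass, while unsolicited inbound flows are dropped. The sketch below models only these semantics with hypothetical addresses; it is in no way how the service is implemented.

```python
# Minimal model of Egress-only IGW semantics: outbound-initiated IPv6 flows
# (and their return traffic) are admitted; unsolicited inbound flows are not.
# Addresses are illustrative documentation-range values.
allowed_flows = set()  # connection-tracking table of (src, dst) pairs

def egress(src: str, dst: str) -> bool:
    """Instance opens a connection to a public endpoint: always forwarded."""
    allowed_flows.add((src, dst))
    return True

def ingress(src: str, dst: str) -> bool:
    """Inbound packet: admitted only as return traffic for a tracked flow."""
    return (dst, src) in allowed_flows

egress("2001:db8::a", "2600:9000::1")           # private instance -> Internet
print(ingress("2600:9000::1", "2001:db8::a"))   # True: return traffic
print(ingress("2600:9000::99", "2001:db8::a"))  # False: unsolicited inbound
```

The key difference from an IPv4 NAT gateway is that no address or port is rewritten; only the direction of connection establishment is enforced.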

Private IPv6-only subnets Internet connectivity

Figure 3: Private IPv6-only subnets Internet connectivity

Next, let’s consider that, in IPv6, there’s no need for network address translation (NAT) as it’s being used in IPv4 networks. Some consider NAT44 to be a security mechanism, although the security community has tried to dispel that myth over the years. In today’s environment of escalating attacks, we must understand why NAT44 (or NAT66) is insufficient. This starts with acknowledging that there are many different types of security services at the ingress/egress points, in addition to the simple denial of inbound traffic. Best practices dictate the need to implement DDoS protection, port and protocol filtering, and web application filtering, with the help of services such as AWS Shield and Shield Advanced, AWS Network Firewall, AWS Web Application Firewall (AWS WAF), Security Groups, and Network ACLs, to name just a few.

2. Backwards compatibility with IPv4-only services and workloads using NAT64 and DNS64 in your dual stack AWS network

You can understand the different protocol versions as languages that the participants in the network can speak and understand. There’s no direct compatibility between IPv4 and IPv6, so hosts that speak only one version cannot understand the other. The migration to IPv6-only workloads brings about the need to maintain backwards compatibility with IPv4-only endpoints and services, both on the Internet and inside your network.

Network communication is usually based on DNS (Domain Name System) resolution, which enables clients to resolve service names to IP addresses that they can then use to send traffic. Your IPv6-only workloads running in VPCs can send and receive only IPv6 network packets. Therefore, they need DNS resolution to provide them with IPv6 addresses in response to queries. By default, the workloads in your VPC use the Amazon Route 53 Resolver for DNS resolution, irrespective of the IP protocol version that they’re configured for: IPv4 or IPv6. Without DNS64, a DNS query from an IPv6-only instance for an IPv4-only service would return an IPv4 address, which is unusable.

To bridge the communication gap, you can use the subnet-level settings to enable DNS64. With DNS64, the Route 53 Resolver can identify the IPv6-only parts of the network, and reply to DNS queries with IPv6 addresses, even if the original record was an IPv4 one. Once the Route 53 Resolver receives a query from an IPv6-only instance in an IPv6-only subnet with DNS64 enabled (1), it looks up the record and returns either the original IPv6 address (if the record contains an IPv6 address), or a synthesized IPv6 address created by prepending the well-known 64:ff9b::/96 prefix to the IPv4 address (2).

DNS64 resolution from IPv6-only subnet

Figure 4: DNS64 resolution from IPv6-only subnet

In our diagrams, we’ve chosen to depict the IPv4 address in dotted decimal format, with the 64:ff9b::/96 prefix prepended to it, only for illustration purposes. The address received and used by the IPv6-only EC2 instance is a valid IPv6 address in hexadecimal format. For example, 64:ff9b::10.1.1.20 is 64:ff9b::a01:114 in hexadecimal format. Let’s look at an example of how DNS64 works:

[ec2-user@i-01f432eb0e6efecb7~]$ dig AAAA ip-10-1-1-20.ec2.internal

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> AAAA ip-10-1-1-20.ec2.internal
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14364
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ip-10-1-1-20.ec2.internal.	IN	AAAA

;; ANSWER SECTION:
ip-10-1-1-20.ec2.internal. 60	IN	AAAA	64:ff9b::a01:114
;; Query time: 12 msec
;; SERVER: fd00:ec2::253#53(fd00:ec2::253)
;; WHEN: Sat Mar 19 23:43:46 UTC 2022
;; MSG SIZE  rcvd: 82
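The synthesized address in the dig output above follows the IPv4-embedded IPv6 mapping of RFC 6052: the IPv4 address occupies the last 32 bits of the 64:ff9b::/96 well-known prefix. The following sketch reproduces that mapping with Python's `ipaddress` module; it illustrates the arithmetic, not the Route 53 Resolver or NAT gateway implementation.

```python
import ipaddress

# DNS64/NAT64 well-known prefix (RFC 6052): the IPv4 address is embedded
# in the last 32 bits of 64:ff9b::/96.
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(ipv4: str) -> str:
    """What DNS64 returns: the IPv4 bits OR'd into the well-known prefix."""
    v4 = ipaddress.IPv4Address(ipv4)
    return str(ipaddress.IPv6Address(int(WKP.network_address) | int(v4)))

def extract(ipv6: str) -> str:
    """What NAT64 recovers: the low 32 bits as the original IPv4 address."""
    v6 = ipaddress.IPv6Address(ipv6)
    return str(ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF))

print(synthesize("10.1.1.20"))      # 64:ff9b::a01:114, as in the dig answer
print(extract("64:ff9b::a01:114"))  # 10.1.1.20, recovered at the NAT gateway
```

This also shows why 64:ff9b::10.1.1.20 and 64:ff9b::a01:114 are the same address: 10.1.1.20 is 0x0a010114, which is a01:114 in the last two hextets.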

Now that the IPv6-only instance has an IPv6 address to send traffic to, it does so. However, the destination lacks the capability to understand IPv6 packets. Therefore, NAT64 is needed to translate the IPv6 packet that originates from the IPv6-only instance (3) to an IPv4 packet that can be understood by the IPv4-only destination (4). The IPv6-only subnet needs a route in its route table for the well-known prefix 64:ff9b::/96 with the NAT Gateway as target.

DNS64 and NAT64

Figure 5: DNS64 and NAT64

The destination IPv4-only service or endpoint can reside in the same VPC, on the Internet, or in the AWS and hybrid private network (4). Also, you can use a centralized NAT64 Internet Egress VPC for IPv6 to IPv4 backwards compatibility and NAT44 for the IPv4 flows, with AWS Transit Gateway. For simplicity, the following figure shows a subset of all of the possible paths to an IPv4-only destination:

NAT64 in a hybrid network

Figure 6: NAT64 in a hybrid network

You may have noticed above the different names that our instances have: i-01f432eb0e6efecb7.ec2.internal for the IPv6-only instance and ip-10-1-1-20.ec2.internal for the IPv4-only one. Therefore, let’s clarify how VPC DNS and resource naming work. DNS resolution that relies on the Route 53 Resolver in the VPC follows a lookup hierarchy: first, Private DNS (namely Route 53 Private Hosted Zones associated with the VPC); second, VPC DNS (defined through IP-based or Resource-based naming mechanisms); and third, Public DNS (namely Public Hosted Zones or public DNS). Taking a closer look at the VPC DNS configuration, Resource-based naming became available with the launch of IPv6-only subnets and EC2 instances. This is an instance-level setting that’s enabled by default for IPv6-only instances and is optional for dual stack instances. The following table summarizes the available options for VPC DNS naming, depending on the instance IP type:

Amazon VPC DNS naming

Table 1: Amazon VPC DNS naming

The VPC DNS record type is an instance-level setting, and both IP-based and Resource-based naming can coexist in the same VPC and in the same subnets. It’s a matter of how the application utilizes them to obtain the necessary IP addresses for communication. For more information on IP-based and Resource-based naming, check out the IPv6-only subnets launch post.
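The two naming schemes can be sketched as simple string templates. This is a hedged illustration of the hostname shapes seen in this post, not an AWS API: the `ec2.internal` suffix shown applies to us-east-1, and other Regions use a `<region>.compute.internal` suffix.

```python
# Hypothetical helpers reproducing the two VPC DNS naming shapes from Table 1.
# The ec2.internal suffix is the us-east-1 form; other Regions differ.

def ip_based_name(ipv4: str, suffix: str = "ec2.internal") -> str:
    # IP-based naming embeds the private IPv4 address, so it cannot
    # apply to IPv6-only instances (they have no IPv4 address).
    return f"ip-{ipv4.replace('.', '-')}.{suffix}"

def resource_based_name(instance_id: str, suffix: str = "ec2.internal") -> str:
    # Resource-based naming embeds the instance ID; it is the default
    # for IPv6-only instances and optional for dual stack ones.
    return f"{instance_id}.{suffix}"

print(ip_based_name("10.1.1.20"))                  # ip-10-1-1-20.ec2.internal
print(resource_based_name("i-01f432eb0e6efecb7"))  # i-01f432eb0e6efecb7.ec2.internal
```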

It’s important to remember that DNS64 and NAT64 are meant to be enabled for IPv6-only subnets. If you enable DNS64 on a dual stack subnet, then your dual stack instances will receive both A and AAAA records for the same IPv4 destination address. The OS-level settings will choose the preferred protocol, which in most cases is IPv6. Therefore, the communication will flow through the NAT64 gateway instead of using the IPv4 protocol directly.
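The effect described above comes from OS-level address selection: given both an A record and a synthesized AAAA record, most resolvers prefer IPv6 per the RFC 6724 default policy table. A heavily simplified sketch of that preference, using the record values from the DNS64 example:

```python
# Why DNS64 on a dual stack subnet diverts IPv4-capable traffic: the
# resolver returns both answers, and the OS typically tries IPv6 first
# (RFC 6724 default policy). This one-line sort is a simplification of
# that selection logic, using the addresses from the earlier example.
answers = [
    ("A", "10.1.1.20"),
    ("AAAA", "64:ff9b::a01:114"),  # synthesized by DNS64
]

ordered = sorted(answers, key=lambda rec: rec[0] != "AAAA")  # IPv6 first
print(ordered[0])  # the AAAA answer wins, so traffic flows via NAT64
```

Because the synthesized AAAA answer wins, a dual stack instance ends up traversing the NAT64 gateway even though it could have reached the destination natively over IPv4 – which is why DNS64 belongs on IPv6-only subnets.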

3. Dual-stack application endpoints and IPv6 target groups with Application and Network Load Balancers

To enable you to scale your applications and workloads on AWS beyond the limits imposed by IPv4 addressing, Application and Network Load Balancers support both internal and external dual stack operation, as well as end-to-end IPv6 support with IPv6 targets.

When you create your load balancer, you can choose the scheme, internal or Internet-facing, as well as the IP address type, IPv4 or dual stack. For dual stack Network Load Balancers, you can either manually specify IPv4 and IPv6 addresses from the load balancer subnets’ IPv4 and IPv6 ranges, or AWS will dynamically choose them. Due to their scaling mechanism, Application Load Balancers are automatically assigned IPv4 and IPv6 addresses from the load balancer subnets’ CIDRs. For more configuration details, refer to the Application Load Balancer and Network Load Balancer documentation.

When you create a target group, you can select the IP address type of your target group. This controls the IP version used to communicate with targets and check their health status. Both NLBs and ALBs support IPv6 targets in specific IPv6 target groups, and IPv6 target groups can only be associated with dual stack ALBs, or with dual stack NLBs that use TCP or TLS listeners. The following diagram depicts an internal ALB deployment that uses IPv6-only subnets within a VPC.
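These association constraints can be captured as a small rule check. The following is a hedged, simplified encoding of the rules just described (not an ELB API; consult the documentation for the authoritative validation):

```python
# Simplified encoding of the IPv6 target group constraints from this section:
# IPv6 target groups attach only to dual stack load balancers, and for NLBs
# only behind TCP or TLS listeners. Illustrative only - not an AWS API.

def can_use_ipv6_targets(lb_type: str, ip_address_type: str, listener: str) -> bool:
    if ip_address_type != "dualstack":
        return False                       # IPv4-only LBs cannot use IPv6 targets
    if lb_type == "network":
        return listener in ("TCP", "TLS")  # NLB listener constraint
    return lb_type == "application"        # dual stack ALBs support IPv6 targets

print(can_use_ipv6_targets("network", "dualstack", "TCP"))   # True
print(can_use_ipv6_targets("network", "dualstack", "UDP"))   # False
print(can_use_ipv6_targets("application", "ipv4", "HTTPS"))  # False
```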

Application and Network Load Balancer with IPv6 targets

Figure 7: Application and Network Load Balancer with IPv6 targets

Security group settings for Application Load Balancers and for Elastic Load Balancer targets must now include client IPv6 CIDRs and/or the VPC IPv4 and IPv6 CIDRs. For detailed recommendations, check the Application Load Balancer security groups recommendations and the Network Load Balancer targets security group recommendations. We’ve also looked at them closely in part one of this series.

An important consideration for dual stack Load Balancers with IPv6 targets is the native Client IP Address Preservation feature. For a Network Load Balancer with IPv6 targets, client IP address preservation applies to IPv6 connections to the dual stack load balancer. Client IP address preservation has no effect on traffic converted from IPv6 to IPv4 or from IPv4 to IPv6; the source IP address of this traffic type is always the IP address of the NLB. Therefore, you must configure Proxy Protocol v2 (PPv2) to transmit the client IP address to the targets. The following table summarizes the protocol – IPv4 or IPv6 – used by the Network Load Balancer clients and for target connectivity, and the Client IP address visibility the targets have, based on Client IP Address Preservation settings:
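To see what PPv2 actually carries to the targets, here is a hedged sketch of an encoder/decoder for a PPv2 header describing an IPv6 TCP connection, following the published PROXY protocol v2 specification. The addresses and ports are illustrative; a real target application would read this header at the start of each accepted connection.

```python
import ipaddress
import struct

# Proxy Protocol v2 header for an IPv6/TCP connection, per the PROXY
# protocol v2 specification: 12-byte signature, version/command byte,
# family/protocol byte, 2-byte length, then src/dst addresses and ports.
SIG = b"\x0d\x0a\x0d\x0a\x00\x0d\x0a\x51\x55\x49\x54\x0a"

def encode_ppv2_ipv6(src: str, sport: int, dst: str, dport: int) -> bytes:
    header = SIG + bytes([0x21, 0x21])  # 0x21: v2/PROXY; 0x21: AF_INET6/STREAM
    addrs = (ipaddress.IPv6Address(src).packed
             + ipaddress.IPv6Address(dst).packed
             + struct.pack("!HH", sport, dport))
    return header + struct.pack("!H", len(addrs)) + addrs

def decode_ppv2_ipv6(data: bytes):
    assert data[:12] == SIG and data[12] == 0x21 and data[13] == 0x21
    (length,) = struct.unpack("!H", data[14:16])
    body = data[16:16 + length]
    src = ipaddress.IPv6Address(body[:16])
    dst = ipaddress.IPv6Address(body[16:32])
    sport, dport = struct.unpack("!HH", body[32:36])
    return str(src), sport, str(dst), dport

hdr = encode_ppv2_ipv6("2001:db8::a", 54321, "2001:db8::b", 443)
print(decode_ppv2_ipv6(hdr))  # ('2001:db8::a', 54321, '2001:db8::b', 443)
```

This is why PPv2 recovers the client identity even when the NLB translates between address families: the original addresses travel in-band, ahead of the application payload.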

Client IP Address Preservation

Table 2: Client IP Address Preservation

4. Amazon EKS IPv6 support in a dual stack Amazon VPC

Kubernetes has gained a lot of popularity, and it’s quickly becoming the standard for deploying containerized applications. Amazon EKS is a managed container service that you can use to run Kubernetes-based applications inside of an Amazon VPC or on-premises. You can deploy your IPv6-enabled EKS clusters in dual stack Amazon VPCs and subnets, and make sure that only IPv6 addresses are assigned to pods. From the networking perspective, container adoption in IPv6-enabled environments smoothly interoperates with any existing workload in your VPCs. Refer to this post that discusses various IPv6 traffic patterns for Amazon EKS.

5. Amazon RDS and AWS PrivateLink IPv6 support in a dual stack Amazon VPC

You can expand your dual stack IPv6 deployments by enabling IPv6 for new and existing Amazon Relational Database Service (Amazon RDS) instances in your dual stack Amazon VPC. RDS supports IPv6 for RDS MariaDB, RDS MySQL, RDS PostgreSQL, Microsoft SQL Server, and Oracle engines, in all AWS Regions. To create a DB subnet group that supports dual-stack mode, make sure that each subnet that you add to the DB subnet group has an Internet Protocol version 6 (IPv6) CIDR block associated with it. Also, note that you cannot have a mix of IPv4-only and dual stack subnets in a subnet group destined for dual stack databases. For more details, check the Amazon RDS detailed database addressing documentation.

AWS PrivateLink now supports IPv6 for services and endpoints, allowing you to expedite your IPv6 adoption, and scale your environment using IPv6-only subnets for both the client and the service provider deployment. For more details on the deployment and configuration of IPv6 for AWS PrivateLink, check this blog post and the documentation.
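The DB subnet group rule described in this section reduces to a single invariant: every subnet in a dual-stack group must carry an IPv6 CIDR. Below is a hedged sketch of that validation with illustrative subnet data; it is not the RDS API, which performs this check for you at subnet group creation time.

```python
# Illustrative check of the dual-stack DB subnet group rule: every subnet
# must have an IPv6 CIDR (no mixing IPv4-only and dual stack subnets).
# Subnet records below are made-up examples, not real resources.

def valid_dual_stack_group(subnets) -> bool:
    return all(s.get("ipv6_cidr") for s in subnets)

group_ok = [
    {"id": "subnet-a", "ipv4_cidr": "10.0.0.0/24", "ipv6_cidr": "2001:db8::/64"},
    {"id": "subnet-b", "ipv4_cidr": "10.0.1.0/24", "ipv6_cidr": "2001:db8:0:1::/64"},
]
group_mixed = group_ok + [{"id": "subnet-c", "ipv4_cidr": "10.0.2.0/24"}]

print(valid_dual_stack_group(group_ok))     # True: all subnets are dual stack
print(valid_dual_stack_group(group_mixed))  # False: subnet-c is IPv4-only
```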

Conclusion

This post concludes our series on the IPv6-only and dual stack network architectures that you can currently create on AWS. Although this series doesn’t provide an exhaustive list of all of the possible architectures, we hope that it facilitates your IPv6 adoption journey on AWS. If you have questions about this post, then start a new thread on the Amazon Networking and Content Delivery Forum, or contact AWS Support.

About the Authors

Alexandra Huides

Alexandra Huides is a Senior Networking Specialist Solutions Architect supporting Strategic Accounts at Amazon Web Services. She focuses on helping customers with building and developing networking architectures for highly scalable and resilient AWS environments. Outside work, she likes traveling, discovering new cultures and experiences, hiking, and reading.

Ankit Chadha

Ankit Chadha is a Networking Specialist Solutions Architect supporting Global Accounts at AWS. He has over 13 years of experience with designing and building various Networking solutions like MPLS backbones, overlay/underlay based data-centers and campus networks. In his spare time, Ankit enjoys playing cricket, earning his cat’s trust, and reading biographies.