AWS for Industries

Automated Application Failover Across Availability Zones with Floating/Virtual IP on Amazon EKS

Reliability and high availability are key design principles for any application deployment. Although an Availability Zone (AZ) comprises one or more data centers, you should still design your applications to handle the rare event of an AZ failure.

Some applications, such as telco and networking applications, rely on highly available floating/virtual static IP addresses to provide resiliency and fast failover. They don't use the Domain Name System (DNS), either because of design constraints or because DNS-based failover isn't fast enough for their functionality. For public traffic use cases, you can use an AWS Elastic IP address (EIP) and move it across AZs by re-associating it with another elastic network interface (ENI). However, if you must use private IP addresses, then you can't move Amazon Virtual Private Cloud (Amazon VPC) IP addresses across AZs. This is because VPC subnets, and the IP addresses assigned from them, are associated with a single AZ.
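
For the public-traffic case, moving an EIP is a single API call. As a minimal sketch, assuming boto3 and placeholder allocation and ENI IDs, the re-association looks like this:

import boto3

ec2 = boto3.client("ec2")

# Re-associate the Elastic IP (identified by its allocation ID) with a
# standby ENI in another AZ; AllowReassociation lets the EIP move even if
# it is currently associated elsewhere. Both IDs below are placeholders.
ec2.associate_address(
    AllocationId="eipalloc-0123456789abcdef0",
    NetworkInterfaceId="eni-0123456789abcdef0",
    AllowReassociation=True,
)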

If an AZ fails, then you can't reuse these IP addresses in another AZ. To provide high availability, applications must be deployed in multiple AZs with different IP subnets/addresses. Consequently, clients are forced to establish mesh connectivity and implement their own failover mechanism.

In this post, we introduce design patterns that let you fail over your Amazon EKS-based applications to another AZ seamlessly and in an automated way, while keeping the same IP addresses and with no changes needed in the application code.

Deployment architecture

The following sample deployment has an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with worker node groups in two AZs. You can use more than two AZs if needed. Each worker node group sits behind an Auto Scaling group, one per AZ, providing resiliency against node failures in that AZ. The worker node groups use the same nodeSelector labels, allowing the application to be scheduled on the workers regardless of their AZ.

We use the Multus meta CNI plugin along with the ipvlan CNI plugin to create and attach secondary interfaces to the pod, which uses the floating IP addresses. In the following sample deployment, the VPC CNI manages the primary interface (eth0), and the ipvlan Multus plugin manages the secondary interfaces (eth1 and eth2). The pod is assigned the floating IP addresses via the Multus network-attachment-definition and communicates with other applications/clients over these interfaces.
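
To illustrate what such a secondary network definition can look like, here is a sketch that creates an ipvlan network-attachment-definition with the Kubernetes Python client. The name, master interface, and static IPAM values are illustrative assumptions, not the repo's actual definitions; refer to the GitHub repo below for the real samples.

from kubernetes import client, config

config.load_incluster_config()  # use config.load_kube_config() outside the cluster

# Illustrative ipvlan network-attachment-definition: pods referencing it
# get a secondary interface on top of the node's eth1 with a static address.
nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "ipvlan-floating-1", "namespace": "default"},
    "spec": {
        "config": """{
            "cniVersion": "0.3.1",
            "type": "ipvlan",
            "master": "eth1",
            "mode": "l2",
            "ipam": {
                "type": "static",
                "addresses": [{"address": "192.168.0.2/24"}]
            }
        }"""
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="k8s.cni.cncf.io",
    version="v1",
    namespace="default",
    plural="network-attachment-definitions",
    body=nad,
)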

Refer to the GitHub repo Automated Floating/Virtual IP MultiAZ solution on EKS for a sample node group AWS CloudFormation template and Multus network-attachment-definitions.

To achieve private floating IP failover across a multi-AZ deployment, you can't use VPC-assigned IP addresses with local VPC routing, since IP addresses assigned from a VPC subnet are associated with that subnet's AZ. Instead, you can use either of the following two design patterns.

In the first pattern, you assign the floating IP address from a non-VPC CIDR address space to the application pod's secondary interfaces via the Multus network-attachment-definition. Then, you define static routes in the VPC route table with the non-VPC floating IP address (/32) as the destination and the worker node ENI as the target.
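
A minimal sketch of that route definition with boto3 (the route table and ENI IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Route the /32 non-VPC floating IP to the worker node ENI that backs the
# pod's secondary interface. Both IDs are placeholders.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="192.168.0.2/32",
    NetworkInterfaceId="eni-0123456789abcdef0",
)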

In the second pattern, you create dedicated subnets within your VPC and assign one of their IP addresses as a floating IP address to the application pod's secondary interfaces via the Multus network-attachment-definition. Then, you define a static route in the VPC route table with the whole subnet CIDR as the destination and the application worker node ENI (eth1 or eth2) as the target. We explain both patterns in detail in the following sections.

Automated solution using init/sidecar container

You can use a separate container, alongside the application's business-logic containers, to automate the pod routing configuration in the VPC without changing your application images. This container reads the IP addresses of the pod's secondary interfaces (the floating IPs), along with the worker node networking details, from within the pod network namespace. Based on the defined configuration, such as peering hosts/networks, it creates routes in the pod network namespace that point to the VPC subnet default gateway. Furthermore, it updates the VPC route tables with the floating IPs as the destination and the relevant worker node ENI as the target.
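
The automation container from the GitHub repo implements this logic; the condensed sketch below shows the idea under a few stated assumptions: the pod image includes iproute2, the pod's ipvlan interfaces inherit the MAC address of the worker node's parent ENI (so the instance metadata service can map a MAC to an ENI ID), the target route table ID is injected as an environment variable, and the pod has IAM permissions to modify routes.

import json
import os
import subprocess
import urllib.request

import boto3
from botocore.exceptions import ClientError

IMDS = "http://169.254.169.254/latest"

def imds_token() -> str:
    # Fetch an IMDSv2 session token.
    req = urllib.request.Request(
        IMDS + "/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
    return urllib.request.urlopen(req).read().decode()

def imds(path: str, token: str) -> str:
    req = urllib.request.Request(
        IMDS + path, headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req).read().decode()

def pod_interface(dev: str) -> tuple:
    # Read (IP, MAC) of a pod interface via iproute2's JSON output.
    data = json.loads(subprocess.check_output(["ip", "-j", "addr", "show", "dev", dev]))
    return data[0]["addr_info"][0]["local"], data[0]["address"]

def upsert_route(ec2, rtb: str, cidr: str, eni: str) -> None:
    # Create the route, or repoint it if it already exists.
    try:
        ec2.create_route(RouteTableId=rtb, DestinationCidrBlock=cidr,
                         NetworkInterfaceId=eni)
    except ClientError:
        ec2.replace_route(RouteTableId=rtb, DestinationCidrBlock=cidr,
                          NetworkInterfaceId=eni)

if __name__ == "__main__":
    ec2 = boto3.client("ec2")
    token = imds_token()
    rtb = os.environ["VPC_ROUTE_TABLE_ID"]  # assumed to be injected into the pod
    for dev in ("eth1", "eth2"):
        ip, mac = pod_interface(dev)
        # ipvlan interfaces share the parent ENI's MAC, so IMDS can
        # resolve the MAC to the worker node ENI ID.
        eni = imds(f"/meta-data/network/interfaces/macs/{mac}/interface-id", token)
        upsert_route(ec2, rtb, f"{ip}/32", eni)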

You can add this container to your Deployment/StatefulSet either as an init container or as a sidecar container, based on your application design requirements. If it's used as an init container, it runs first, while the pod is in the Init state, and terminates after performing these tasks, after which the application containers start. If it's added as a sidecar container, it runs as an additional container that constantly monitors the pod's Multus interfaces for new or changed IP addresses and updates the VPC route tables accordingly. This is helpful for pods that implement custom floating IP handling in their active/standby internal logic. A minimal sidecar-style polling loop is sketched below.
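
As a sketch of the sidecar variant, this loop polls the Multus interfaces and re-applies the route updates when the addresses change; update_vpc_routes is a hypothetical stand-in for the route-update logic shown in the previous sketch.

import json
import subprocess
import time

def floating_ips() -> dict:
    # Snapshot the current addresses on the pod's Multus interfaces.
    snapshot = {}
    for dev in ("eth1", "eth2"):
        data = json.loads(subprocess.check_output(["ip", "-j", "addr", "show", "dev", dev]))
        snapshot[dev] = data[0]["addr_info"][0]["local"]
    return snapshot

last = {}
while True:
    current = floating_ips()
    if current != last:
        update_vpc_routes(current)  # hypothetical: re-applies the VPC route updates
        last = current
    time.sleep(5)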

Next, we walk through the patterns for this automation container in detail.

Pattern 1: Using non-VPC floating IP addresses

In this pattern, a sample pod uses two non-VPC IP addresses as the floating IPs, assigned to its secondary interfaces via Multus and ipvlan: 192.168.0.2 on eth1 and 192.168.1.2 on eth2. A sample of this pod with the automated init/sidecar container can be found in the GitHub repo Using non-VPC Floating IP Addresses.

For simplicity, we use a single pod, which runs on a worker node in AZ1. The init/sidecar container updates the VPC route table with the ENI IDs of the worker node's eth1 and eth2 interfaces (shown as ENI2 and ENI3) as targets for the destination floating IPs (192.168.0.2 and 192.168.1.2).

Note that for ingress traffic from other VPCs and from on-premises networks, use AWS Transit Gateway with static routes for the non-VPC floating IP addresses in the Transit Gateway route tables.

Failover

If the AZ fails, Kubernetes schedules the replacement pod onto a worker node in a different AZ. The newly scheduled pod assumes the floating IP addresses 192.168.0.2 and 192.168.1.2, and the init/sidecar container updates the VPC route table with the new ENIs (shown as ENI5 and ENI6) as targets for the destination floating IPs (192.168.0.2 and 192.168.1.2).
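
A minimal sketch of that failover-time update with boto3, using placeholder IDs for the route table and for the new worker node's ENIs (standing in for ENI5 and ENI6 above):

import boto3

ec2 = boto3.client("ec2")

# Repoint each /32 floating IP at the new worker node's ENIs after failover.
for cidr, new_eni in [("192.168.0.2/32", "eni-0000000000000aaaa"),
                      ("192.168.1.2/32", "eni-0000000000000bbbb")]:
    ec2.replace_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock=cidr,
        NetworkInterfaceId=new_eni,
    )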

Refer to the GitHub repo Using non-VPC Floating IP Addresses for further details and samples.

Pattern 2: Using VPC floating IP addresses

The non-VPC floating IP solution is usually preferred because it doesn't mix with VPC IP addresses and VPC routing, and it provides a clear separation between VPC and non-VPC IP addresses. In some cases, though, you might not want to manage, configure, and automate a separate non-VPC IP address space. If you prefer to use VPC IP addresses as the floating IP addresses across AZs, you can achieve the same routing results with a few additional steps.

Here are the steps:

  1. Create dummy floating IP subnets in any AZ, for example 10.10.254.0/28 and 10.10.254.16/28 (/28 is the smallest subnet size).
  2. Pick any IP address (other than the network and broadcast addresses) from each subnet as your floating IP (for example, 10.10.254.2 and 10.10.254.18).
  3. Don't use these dummy subnets for any instance/ENI creation, because we define more specific routes for these subnets in the VPC. To avoid accidental DHCP assignment, you can also use a subnet CIDR reservation to reserve the whole subnet CIDR, as shown in the sketch after this list.
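
A minimal sketch of step 3's reservation with boto3 (the subnet IDs are placeholders); an explicit subnet CIDR reservation stops the addresses from being auto-assigned while still allowing explicit use:

import boto3

ec2 = boto3.client("ec2")

# Reserve each dummy subnet's full CIDR so its addresses are never
# auto-assigned to ENIs. Subnet IDs are placeholders.
for subnet_id, cidr in [("subnet-0123456789abcdef0", "10.10.254.0/28"),
                        ("subnet-0abcdef0123456789", "10.10.254.16/28")]:
    ec2.create_subnet_cidr_reservation(
        SubnetId=subnet_id,
        Cidr=cidr,
        ReservationType="explicit",
        Description="floating IP range - do not auto-assign",
    )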

In this pattern, a sample pod uses two VPC IP addresses from the previous subnets as the floating IPs, assigned to its secondary interfaces via Multus and ipvlan: 10.10.254.2 on eth1 and 10.10.254.18 on eth2. A sample of this pod with the automated init/sidecar container can be found in the GitHub repo Using VPC Floating IP Addresses.

For simplicity in this example, we use a single pod, which runs on a worker node in AZ1. In this case, the init/sidecar container updates the VPC route table with the ENI IDs of the worker node's eth1 and eth2 interfaces (shown as ENI2 and ENI3) as targets for the whole subnet CIDRs 10.10.254.0/28 and 10.10.254.16/28, not for /32 addresses as in pattern 1.
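
The only difference from pattern 1's route update is the destination, as this sketch shows (all IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Pattern 2 routes the whole dummy subnet CIDR, not a /32 host route,
# to the worker node ENIs. All IDs are placeholders.
for cidr, eni in [("10.10.254.0/28", "eni-0123456789abcdef0"),    # worker eth1 (ENI2)
                  ("10.10.254.16/28", "eni-0abcdef0123456789")]:  # worker eth2 (ENI3)
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock=cidr,
        NetworkInterfaceId=eni,
    )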

Note that for ingress traffic from other VPCs and from on-premises networks, use AWS Transit Gateway. You don't need to define static routes in the Transit Gateway route tables, because VPC CIDRs are propagated automatically.

Failover

If the AZ fails, Kubernetes schedules the replacement pod onto a worker node in a different AZ. The newly scheduled pod assumes the floating IP addresses 10.10.254.2 and 10.10.254.18, and the init/sidecar container updates the VPC route table with the new ENIs (shown as ENI5 and ENI6) as targets for the destination floating CIDRs 10.10.254.0/28 and 10.10.254.16/28.

Refer to the GitHub repo Using VPC Floating IP Addresses for further details and samples.

Conclusion

In this post, we presented two patterns to achieve failover of your Kubernetes-based applications across AZs, and we demonstrated a single pod failing over across AZs using floating IPs. You can apply this solution to more pods/replicas with the applicable floating/virtual IP ranges in the Multus network-attachment-definitions. We showcased fast failover here; however, depending on your requirements, the second AZ's node can be implemented as a stopped node or warm pool, trading longer failover time for cost efficiency.

Furthermore, you can extend these patterns to your existing applications, running as active/active or active/standby in separate AZs, to fail over seamlessly across AZs. Try the sample application from the GitHub repo and leave us a comment; we would love to hear your feedback. Reach out to your AWS account teams and Partner Solutions Architects to learn more about 5G and telecommunications on AWS.

Raghvendra Singh

Raghvendra Singh is a Principal Portfolio Manager and Telco Network Transformation specialist at AWS. He specializes in AWS infrastructure, containerization, and networking, helping users accelerate their modernization journey on AWS.

Neb Miljanovic

Neb Miljanovic is an AWS Telco Partner Solutions Architect supporting the migration of telecommunications vendors into the public cloud space. He has extensive experience in 4G/5G/IMS core architecture, and his mission is to apply that experience to the migration of 4G/5G/IMS network functions to AWS using cloud-native principles.

Dr. Young Jung

Dr. Young Jung is a Principal Solutions Architect in the AWS Worldwide Telecom Business Unit. As a specialist in the telco domain, his primary focus and mission are to help telco Core/RAN partners and customers design and build cloud-native NFV solutions on the AWS environment. He has deep expertise in leveraging AWS services and technologies to enable telco network transformation, particularly in the areas of AWS Outposts for telco edge service implementation. Dr. Jung works closely with telco industry leaders to architect and deploy innovative cloud-based solutions that drive efficiency, agility, and innovation in the telecommunications sector.