How do I use multiple CIDR ranges with Amazon EKS?

Last updated: 2020-02-05

I want to use multiple CIDR ranges with Amazon Elastic Kubernetes Service (Amazon EKS) to address issues with my pods. For example, how do I run pods with different CIDR ranges added to my Amazon Virtual Private Cloud (Amazon VPC)? Also, how can I add more IP addresses to my subnet when it runs out of IP addresses? Finally, how can I be sure that pods running on worker nodes have different IP ranges?

Short Description

Before you complete the steps in the Resolution section, review the following notes:

Note: If you run your pods on different CIDR ranges, then you get more available IP addresses for pods managed by Amazon EKS and more flexibility for your networking architectures. If you add secondary CIDR blocks to a VPC from the 100.64.0.0/10 and 198.19.0.0/16 ranges and use CNI custom networking, then your pods won't consume any RFC 1918 IP addresses in your VPC.

Note: In carrier-grade network address translation (NAT) scenarios, 100.64.0.0/10 is the shared address space used for communication between a service provider and its subscribers. You must configure a NAT gateway in the route table for pods to communicate with the internet.
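To sanity-check that a candidate secondary block actually falls inside the 100.64.0.0/10 shared range, you can do the arithmetic in the shell. The helper functions below are illustrative, not part of any AWS tooling:

```shell
#!/bin/sh
# Hypothetical helper: convert a dotted-quad IPv4 address to an integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Check whether a CIDR block's network address lies inside 100.64.0.0/10,
# which spans 100.64.0.0 - 100.127.255.255.
in_cgn_range() {
  net=$(ip_to_int "${1%/*}")
  lo=$(ip_to_int 100.64.0.0)
  hi=$(( lo + (1 << (32 - 10)) - 1 ))
  [ "$net" -ge "$lo" ] && [ "$net" -le "$hi" ]
}

in_cgn_range 100.64.0.0/16 && echo "100.64.0.0/16 is inside 100.64.0.0/10"
in_cgn_range 10.0.0.0/16   || echo "10.0.0.0/16 is outside 100.64.0.0/10"
```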

Resolution

In the following resolution, you set up your VPC and configure the CNI plugin to use a new CIDR range.

Add additional CIDR ranges to expand your VPC network

1.    Find your VPCs.

If your VPC has a Name tag, then run the following command to find your VPC:

VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=yourVPCName | jq -r '.Vpcs[].VpcId')

If your VPCs don't have a tag, then run the following command to list all VPCs in the AWS Region:

aws ec2 describe-vpcs | jq -r '.Vpcs[].VpcId'

2.    To store your VPC ID in the VPC_ID variable, run the following command, replacing vpc-xxxxxxxxxxxx with your VPC ID:

export VPC_ID=vpc-xxxxxxxxxxxx

3.    To associate an additional CIDR block with the range 100.64.0.0/16 to the VPC, run the following command:

aws ec2 associate-vpc-cidr-block --vpc-id $VPC_ID --cidr-block 100.64.0.0/16

Create subnets with a new CIDR range

1.    To list all the Availability Zones in your AWS Region, run the following command:

aws ec2 describe-availability-zones --region us-east-1 --query 'AvailabilityZones[*].ZoneName'

Note: Replace us-east-1 with your AWS Region.

2.    Choose the Availability Zones where you want to add subnets, and then assign those Availability Zones to variables. See the following examples:

export AZ1=us-east-1a
export AZ2=us-east-1b
export AZ3=us-east-1c

Note: You can add more Availability Zones by creating more variables.

3.    To create new subnets under the VPC with the new CIDR range, run the following commands:

CUST_SNET1=$(aws ec2 create-subnet --cidr-block 100.64.0.0/19 --vpc-id $VPC_ID --availability-zone $AZ1 | jq -r .Subnet.SubnetId)
CUST_SNET2=$(aws ec2 create-subnet --cidr-block 100.64.32.0/19 --vpc-id $VPC_ID --availability-zone $AZ2 | jq -r .Subnet.SubnetId)
CUST_SNET3=$(aws ec2 create-subnet --cidr-block 100.64.64.0/19 --vpc-id $VPC_ID --availability-zone $AZ3 | jq -r .Subnet.SubnetId)
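Each /19 block holds 2^13 = 8,192 addresses (AWS reserves five addresses per subnet, leaving 8,187 usable), and the three blocks above are consecutive slices of 100.64.0.0/16. A quick sketch of the arithmetic:

```shell
# A /19 covers 2^(32-19) = 8192 addresses, so consecutive /19 subnets of
# 100.64.0.0/16 start every 8192 addresses: the third octet steps by 32.
step=$(( 1 << (32 - 19) ))          # 8192 addresses per /19
for i in 0 1 2; do
  third=$(( i * step / 256 ))       # offset folded into the third octet
  echo "100.64.${third}.0/19"
done
```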

Tag the new subnets

You must tag all subnets so that Amazon EKS can discover the subnets.

1.    (Optional) Add a name tag for your subnets by setting a key-value pair. See the following examples:

aws ec2 create-tags --resources $CUST_SNET1 --tags Key=Name,Value=SubnetA
aws ec2 create-tags --resources $CUST_SNET2 --tags Key=Name,Value=SubnetB
aws ec2 create-tags --resources $CUST_SNET3 --tags Key=Name,Value=SubnetC

2.    Tag the subnet for discovery by Amazon EKS. See the following examples:

aws ec2 create-tags --resources $CUST_SNET1 --tags Key=kubernetes.io/cluster/yourClusterName,Value=shared
aws ec2 create-tags --resources $CUST_SNET2 --tags Key=kubernetes.io/cluster/yourClusterName,Value=shared
aws ec2 create-tags --resources $CUST_SNET3 --tags Key=kubernetes.io/cluster/yourClusterName,Value=shared

Replace yourClusterName with the name of your Amazon EKS cluster.

Note: If you're planning to use Elastic Load Balancing, consider adding additional tags. For more information, see Cluster VPC Considerations.
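As an optional shortcut, the two tagging steps can be combined into one loop. This sketch only echoes the commands (a dry run); the subnet defaults are placeholders for the variables set earlier, and you can drop "echo" to apply the tags for real:

```shell
# Dry run: print the create-tags command for each subnet before executing it.
# Replace yourClusterName with the name of your Amazon EKS cluster.
CLUSTER=yourClusterName
for subnet in "${CUST_SNET1:-subnet-aaa}" \
              "${CUST_SNET2:-subnet-bbb}" \
              "${CUST_SNET3:-subnet-ccc}"; do
  echo aws ec2 create-tags --resources "$subnet" \
    --tags "Key=kubernetes.io/cluster/${CLUSTER},Value=shared"
done
```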

Associate your new subnet to a route table

1.    To list the route tables in your VPC, run the following command:

aws ec2 describe-route-tables --filters Name=vpc-id,Values=$VPC_ID | jq -r '.RouteTables[].RouteTableId'

2.    To export the ID of the route table that you want to associate with your subnets, run the following command, replacing rtb-xxxxxxxxx with one of the values from step 1:

export RTASSOC_ID=rtb-xxxxxxxxx

3.    Associate the route table to all new subnets. See the following examples:

aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET1
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET2
aws ec2 associate-route-table --route-table-id $RTASSOC_ID --subnet-id $CUST_SNET3

For more information, see Routing.

Configure the CNI plugin to use the new CIDR range

1.    To verify that you have the latest version of the CNI plugin, run the following command:

kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2

If your version of the CNI plugin is earlier than 1.5.3, then run the following command to update it:

kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.5/aws-k8s-cni.yaml
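As a sketch, you can compare the reported image tag against the 1.5.3 minimum with sort -V (assumes GNU coreutils). The VERSION value below is an illustrative stand-in for the output of the kubectl command above:

```shell
# Hypothetical check: in practice, VERSION would come from parsing the image
# tag in the "kubectl describe daemonset aws-node" output shown above.
VERSION=v1.5.1                      # example value for illustration
MIN=v1.5.3
# sort -V orders version strings numerically; if VERSION sorts first and the
# two differ, the installed plugin is older than the minimum.
if [ "$VERSION" != "$MIN" ] && \
   [ "$(printf '%s\n' "$VERSION" "$MIN" | sort -V | head -n1)" = "$VERSION" ]; then
  echo "CNI plugin $VERSION is older than $MIN; update recommended"
else
  echo "CNI plugin $VERSION is at least $MIN"
fi
```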

2.    To enable custom network configuration for the CNI plugin, run the following command:

kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true

3.    To add the ENIConfig label for identifying your worker nodes, run the following command:

kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=failure-domain.beta.kubernetes.io/zone

4.    To install the ENIConfig custom resource definition, run the following command:

cat << EOF | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: eniconfigs.crd.k8s.amazonaws.com
spec:
  scope: Cluster
  group: crd.k8s.amazonaws.com
  version: v1alpha1
  names:
    plural: eniconfigs
    singular: eniconfig
    kind: ENIConfig
EOF

5.    To create an ENIConfig custom resource for all subnets and Availability Zones, run the following commands:

cat <<EOF  | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ1
spec:
  subnet: $CUST_SNET1
EOF

cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ2
spec:
  subnet: $CUST_SNET2
EOF

cat <<EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: $AZ3
spec:
  subnet: $CUST_SNET3
EOF

Note: The name of each ENIConfig must match the Availability Zone of your worker nodes.
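The three heredocs above can also be generated in a single loop. This sketch only prints the manifests; pipe its output to kubectl apply -f - to create them. The Availability Zone and subnet defaults are placeholders for the variables set earlier:

```shell
# Print one ENIConfig manifest per AZ/subnet pair. The "az:subnet" pairs
# below fall back to illustrative placeholder values when unset.
set -- "${AZ1:-us-east-1a}:${CUST_SNET1:-subnet-aaa}" \
       "${AZ2:-us-east-1b}:${CUST_SNET2:-subnet-bbb}" \
       "${AZ3:-us-east-1c}:${CUST_SNET3:-subnet-ccc}"
for pair in "$@"; do
  az=${pair%%:*}                    # text before the first colon
  subnet=${pair#*:}                 # text after the first colon
  cat <<EOF
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: ${az}
spec:
  subnet: ${subnet}
---
EOF
done
```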

6.    Launch new worker nodes and terminate the old worker nodes.

Note: This allows the CNI plugin (ipamd) to allocate IP addresses from the new CIDR range on the new worker nodes.

7.    To test the configuration by launching pods, run the following commands:

kubectl run nginx --image nginx --replicas 10
kubectl get pods -o wide

You should see 10 new pods added with the new CIDR range scheduled on new worker nodes.
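To confirm that the pods landed in the new range, you can filter the IP column of the kubectl get pods -o wide output. This sketch runs against sample output so that the filter itself is shown end to end; in practice, pipe the live command's output through the same awk filter:

```shell
# Sample stand-in for "kubectl get pods -o wide" output (columns:
# NAME READY STATUS RESTARTS AGE IP NODE). Pods in the new 100.64.0.0/16
# range are selected by matching the sixth (IP) column.
sample='nginx-1  1/1  Running  0  1m  100.64.1.10   ip-10-0-1-5
nginx-2  1/1  Running  0  1m  100.64.33.7   ip-10-0-2-6
kube-dns 1/1  Running  0  5m  10.0.1.25     ip-10-0-1-5'

printf '%s\n' "$sample" | awk '$6 ~ /^100\.64\./ { print $1, $6 }'
```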

