AWS Open Source Blog

Exploring the Networking Foundation for EKS: amazon-vpc-cni-k8s + Calico


At AWS re:Invent, Amazon announced Elastic Container Service for Kubernetes (EKS), and revealed details of how container networking would work — and be secured — on this exciting new platform. In particular, EKS leverages a new AWS Container Network Interface (amazon-vpc-cni-k8s) plug-in, together with Project Calico for enforcing network policies.

In this post, we’ll look under the hood of how this integration works, and also show how you can try it out today in your own Kubernetes clusters running in EC2. Think of it as a “sneak peek” while you’re still waiting for access to EKS itself, or as something you might consider deploying in your own AWS-based Kubernetes deployment — but bear in mind that amazon-vpc-cni-k8s is still officially an alpha release, not for production workloads.

VPC Meets the Container

The virtual private cloud (VPC) networking concept has been embraced by users of AWS, but was initially built around the assumption that an instance would have a single IP address. Hence, most container environments deployed in AWS would use an overlay — Calico being notable for leveraging EC2’s layer 2 capabilities to avoid overlays within a single availability zone (AZ). But even Calico has to use an overlay to traverse AZs.

While very popular, Calico’s approach to networking in AWS does not give containers a “real IP,” i.e. an IP address that looks to the underlying VPC just like a host instance’s IP. A better solution is to allocate each container an IP from the underlying VPC and let the underlying Amazon networking layer take care of routing.

This is exactly what the AWS team has done with the amazon-vpc-cni-k8s plug-in. It leverages EC2’s ability to provision multiple elastic network interfaces (ENIs) on a host instance, each with multiple secondary IP addresses, to get multiple IPs allocated from the VPC pool. It then hands out these IPs to pods on the host, connects each pod’s virtual ethernet device (veth) to an ENI, and lets the Linux kernel take care of the rest.
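The allocation flow above can be sketched as a toy warm pool of VPC addresses. This is a simplified illustration only — the plugin’s real local IPAM daemon is far more involved — and the `WarmPool` class and its method names are invented for this example:

```python
# Toy model of the plugin's warm pool of VPC secondary IPs.
# All class and method names here are illustrative, not the plugin's API.

class WarmPool:
    def __init__(self):
        self.free = []          # secondary IPs not yet handed to a pod
        self.assigned = {}      # pod name -> IP

    def add_eni(self, secondary_ips):
        """Model attaching an ENI: its secondary IPs join the free pool."""
        self.free.extend(secondary_ips)

    def assign(self, pod):
        """Hand a free VPC IP to a pod, as a CNI ADD call would."""
        if not self.free:
            raise RuntimeError("pool exhausted: attach another ENI")
        ip = self.free.pop(0)
        self.assigned[pod] = ip
        return ip

    def release(self, pod):
        """Return a pod's IP to the pool on CNI DEL."""
        self.free.append(self.assigned.pop(pod))


pool = WarmPool()
# e.g. one ENI contributing three secondary IPs from the VPC subnet
pool.add_eni(["10.0.1.10", "10.0.1.11", "10.0.1.12"])
print(pool.assign("pod-a"))   # 10.0.1.10
print(pool.assign("pod-b"))   # 10.0.1.11
pool.release("pod-a")
print(pool.assign("pod-c"))   # 10.0.1.12 (pod-a's IP is back in the pool for later reuse)
```

The key point is that every address handed out is a genuine VPC address, so the VPC route tables and security infrastructure see pod traffic exactly as they see instance traffic.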

Policy Meets the veth

By now you’re probably asking yourself: if the amazon-vpc-cni-k8s plugin is taking care of IP assignment and container networking, how can Calico enable network policy?

It’s actually very straightforward: because Calico is designed in a modular fashion, its networking, IPAM, and policy capabilities can be deployed independently via appropriate settings. In the case of EKS, the amazon-vpc-cni-k8s plugin is configured as the CNI plugin for Kubernetes, but we also deploy the ‘calico-node’ agent on each node, as a Kubernetes daemonset.

By aligning the amazon-vpc-cni-k8s plug-in’s veth naming convention with Calico’s, the calico-node agent on each host knows which veth belongs to which container, and hence can create policy rules on each container’s interface in the Linux kernel (using iptables/ipsets) as usual. And because the amazon-vpc-cni-k8s plugin has plumbed the veths in just the right way, we know that all traffic out of the pod will have those rules applied.
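To see why a shared naming convention matters, consider a toy scheme in which the host-side veth name is derived deterministically from the pod’s identity. The hashing scheme and `eni` prefix below are invented for illustration (the real plugin and Calico use their own conventions); the point is that any agent that knows the pod can independently compute the interface name and scope rules to it, while staying under the Linux interface-name limit:

```python
import hashlib

IFNAMSIZ = 16  # Linux limit: interface names are at most 15 chars + NUL

def veth_name(namespace, pod, prefix="eni"):
    """Derive a deterministic host-side veth name for a pod.

    Illustrative only: the real amazon-vpc-cni-k8s scheme differs, but
    the key property is the same -- both the CNI plugin and calico-node
    can compute the same name independently, so the policy agent knows
    which interface carries which pod's traffic.
    """
    digest = hashlib.sha1(f"{namespace}/{pod}".encode()).hexdigest()
    name = prefix + digest[:11]          # 3 + 11 = 14 chars, within the limit
    assert len(name) < IFNAMSIZ          # the kernel rejects longer names
    return name

iface = veth_name("default", "nginx-7cdbd8cdc9-abcde")
# A policy agent could now attach rules scoped to just this pod's
# interface -- the moral equivalent of something like:
print(f"iptables -A FORWARD -i {iface} -j POD_POLICY_CHAIN")
```

Because all of a pod’s traffic traverses its veth, per-interface rules like this are sufficient to enforce policy on everything the pod sends or receives.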

Really, all you need to know is: use amazon-vpc-cni-k8s as the CNI plugin, apply a simple manifest to deploy Calico as a daemonset, and Bob’s your uncle.

Theory Meets Your Own EC2 Console

If you’ve read this far, you are probably itching to give it a try. Eventually, of course, this will be available as standard for anyone using EKS. While EKS is still in limited preview, however, you can try it out yourself with a regular Kubernetes deployment on EC2:

github.com/aws-samples/aws-kube-cni

Whither Calico Networking for AWS?

The advantages of the amazon-vpc-cni-k8s plugin are pretty clear: you get a real IP address in the VPC, with the same performance as EC2 host networking and no additional overlay required to route anywhere between AZs within the VPC. So would you ever want to use Calico’s networking capabilities in AWS?

It turns out there are a couple of reasons why you might still want to use Calico’s networking in preference to the amazon-vpc-cni-k8s plugin:

  • With the amazon-vpc-cni-k8s plugin, the total number of pods per host instance (Kubernetes node) is limited to the number of ENIs multiplied by the number of secondary IPs per ENI – which varies with the size of the instance (see this table). For small instances this can be quite a low number – for example, on a c1.medium you will only be able to launch 10 pods. In contrast, Calico imposes no limit on the number of pods per node.
  • The amazon-vpc-cni-k8s plug-in allocates its maximum number of IPs to a given node up front, when the node starts, which may reduce the efficiency of address use. In contrast, Calico allows the full IP pool space to be used across all nodes.
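The first limitation above is easy to quantify: one address on each ENI is the ENI’s primary IP and is used by the host, so only the secondary addresses are available for pods. A quick back-of-the-envelope helper (the c1.medium figures match the EC2 ENI limits table referenced above; check that table for your own instance type):

```python
def max_pods(num_enis, ips_per_eni):
    """Upper bound on pods that can get a VPC IP on one node.

    Each ENI's first address is its primary IP, reserved for the host,
    so pods can only use the remaining (secondary) addresses.
    """
    return num_enis * (ips_per_eni - 1)

# c1.medium supports 2 ENIs with 6 IPv4 addresses each, which
# yields the 10-pod figure quoted above:
print(max_pods(2, 6))   # 10
```

For larger instance types the ceiling is much higher, but for small nodes it is worth checking this number against your expected pod density before choosing the plugin.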

Our recommendation is: if these are not hard blockers for your deployment, you should use the amazon-vpc-cni-k8s plugin as the simplest and best-performing native networking solution for AWS — and, of course, it is what will come by default with EKS. Whichever networking approach you choose, you can rest assured that you have industry-standard container network security courtesy of Calico.


Thanks to Andy Randall at Tigera, who co-authored this post! You can find him @andrew_randall


Omar Lari

Omar Lari is a Business Development Manager at Amazon Web Services. His focus is helping customers adopt AWS container services, including Amazon ECS, Amazon EKS, and Amazon ECR. Prior to joining AWS, Omar held roles at various enterprise and startup organizations, including McKesson, ON24, Informatica, and Slalom. You can follow him on Twitter @omarlari