Getting visibility into your Amazon EKS Cross-AZ pod to pod network bytes


Many customers use Amazon Elastic Kubernetes Service (Amazon EKS) to host their mission-critical applications. As a best practice, we recommend that customers spread their applications across multiple distinct Availability Zones (AZs).

“Everything fails, all the time.” – Werner Vogels, CTO, Amazon

To achieve high availability, customers deploy Amazon EKS worker nodes (Amazon EC2 instances) across multiple distinct AZs. To complement this approach, we recommend that customers implement Kubernetes primitives, such as pod topology spread constraints, to achieve pod-level high availability as well as efficient resource utilization.

Often, customers run multiple applications within a single Amazon EKS cluster. Those applications represent large numbers of pods that are scattered across worker nodes and multiple distinct AZs. It’s natural for those applications to communicate in patterns such as application programming interface (API) to API.

It also becomes inevitable for pods to communicate with other pods across multiple distinct AZs and generate cross-availability zone (cross-AZ) data transfer.

One key challenge that customers face is reasoning about cross-AZ pod-to-pod network communication patterns and their associated sum of network bytes.

In this post, we show you a solution based on querying and joining two data sources: Amazon Virtual Private Cloud (Amazon VPC) flow logs and an extracted Amazon EKS cluster pod metadata list. The query creates a cross-AZ pod-to-pod network flow table view (including a sum of the egress network bytes).

This post contains a detailed step-by-step walkthrough that builds the solution’s environment.

The following questions are answered with this solution:

  • How many network bytes did pod A send to pod B (egress)? (Explicit)
  • Which cross-AZ pod-to-pod flows does my application (labeled: Key=app) perform? (Implicit)

Let’s look deeper into the solution and its constructs.

Solution overview

Our solution is based on two boilerplates:

  • An Amazon VPC and an Amazon EKS cluster, deployed with the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes. (Currently, this is the only networking CNI plugin supported by this solution.)
  • A Python-based AWS Cloud Development Kit (AWS CDK) stack that implements an AWS Lambda function, Amazon Athena tables and queries, and all other required resources and configurations.

The following diagram depicts how the extract, transform, store, and query process occurs.

(This flow represents an interactive console user who manually executes steps 1, 3, and 4.)

  1. The Pod Metadata Extractor AWS Lambda function connects to the Amazon EKS cluster API endpoint.

(It authenticates and authorizes using a designated, attached AWS Identity and Access Management (IAM) role mapped to a Kubernetes RBAC identity.) We follow the least privilege paradigm, allowing only the get and list API verbs. The extracted data is then transformed and stored on an Amazon S3 bucket in CSV format.

  2. VPC Flow Logs are enabled at the VPC level; records are aggregated and stored on an Amazon S3 bucket. (This flow is continuous and independent of flow 1 above.)
  3. Execution of the Amazon Athena named query joins both data sources. The query then transforms and aggregates the enriched result, and stores it in Parquet format on an Amazon S3 bucket.
  4. Lastly, the user executes a simple SELECT query, which returns the cross-AZ pod-to-pod data records and their corresponding sum of transferred network bytes (egress) column. Results are displayed on screen and saved into a designated, user-configured Amazon S3 bucket. Amazon EventBridge uses a scheduled rule (i.e., hourly) to execute a Step Functions state machine workflow that automates steps 1 and 3 and stores the results on a pre-created Amazon S3 bucket.
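To make the join concrete, here is a minimal, hypothetical Python sketch of the enrichment logic the Athena named query performs: flow-log-style records are matched to extracted pod metadata by IP address, and egress bytes are summed only for pod pairs whose AZs differ. All IPs, pod names, and field layouts here are illustrative, not the solution's actual schema.

```python
# Extracted pod metadata: pod IP -> (pod name, app label, AZ).
# Values are hypothetical examples.
pod_metadata = {
    "10.0.1.10": ("client-pod", "client", "us-east-2a"),
    "10.0.2.20": ("server-pod", "server", "us-east-2b"),
}

# Flow-log-like records: (source IP, destination IP, bytes).
flow_records = [
    ("10.0.1.10", "10.0.2.20", 1200),
    ("10.0.1.10", "10.0.2.20", 800),
    ("10.0.2.20", "10.0.1.10", 300),
]

def cross_az_egress(flows, pods):
    """Sum egress bytes per (src pod, dst pod) pair whose AZs differ."""
    totals = {}
    for src_ip, dst_ip, nbytes in flows:
        src, dst = pods.get(src_ip), pods.get(dst_ip)
        if src is None or dst is None:
            continue  # flow does not involve two known pods
        if src[2] == dst[2]:
            continue  # same AZ; not cross-AZ traffic
        key = (src[0], dst[0])
        totals[key] = totals.get(key, 0) + nbytes
    return totals

print(cross_az_egress(flow_records, pod_metadata))
# {('client-pod', 'server-pod'): 2000, ('server-pod', 'client-pod'): 300}
```

In the actual solution this join and aggregation run in Amazon Athena over the S3-stored data sets, but the shape of the result is the same: one row per cross-AZ pod pair with its summed egress bytes.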

Diagram showing how the extract, transform, store, and query process occurs


The walkthrough consists of three main steps:

Step 1: Deployment of the Amazon EKS cluster

Step 2: Deployment of the AWS CDK stack

Step 3: Execution and Query (AWS Lambda and Amazon Athena queries)


Step 1: Deploy an Amazon EKS cluster

Set the environment

aws configure set region us-east-2
export AWS_REGION=$(aws configure get region) && echo "Your region was set to: $AWS_REGION"

Generate the ClusterConfig

cat >cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cross-az
  region: ${AWS_REGION}
managedNodeGroups:
  - name: ng-1
    desiredCapacity: 2
EOF

Deploy the cluster

eksctl create cluster -f cluster.yaml 

Get the worker nodes and their topology zone data

kubectl get nodes --label-columns topology.kubernetes.io/zone

Example output:

NAME        STATUS   ROLES    AGE   VERSION               ZONE
<node-1>    Ready    <none>   20m   v1.22.9-eks-810597c   us-east-2b
<node-2>    Ready    <none>   20m   v1.22.9-eks-810597c   us-east-2a

Clone the application repo

cd ~
git clone https://github.com/aws-samples/amazon-eks-inter-az-traffic-visibility.git
cd amazon-eks-inter-az-traffic-visibility

Deploy the demo application

cd kubernetes/demoapp/
kubectl apply -f .

Explore the demoapp YAMLs. The application consists of a single pod (i.e., an HTTP client) that runs a curl HTTP loop on start. The target is a Kubernetes service wired into two NGINX server pods (endpoints). The server-dep Kubernetes deployment implements pod topology spread constraints, spreading the pods across the distinct AZs.

Validate the demo application

kubectl get deployment

Example output:

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
client-dep   1/1     1            1           14s
server-dep   2/2     2            2           14s

Validate that the server pods are spread across nodes and AZs

kubectl get pods -l=app=server --sort-by="{.spec.nodeName}" -o wide

Example output:

NAME                         READY   STATUS    RESTARTS   AGE   IP          NODE        NOMINATED NODE   READINESS GATES
server-dep-797d7b54f-b9jf8   1/1     Running   0          61s   <elided>    <elided>    <none>           <none>
server-dep-797d7b54f-8m6hx   1/1     Running   0          61s   <elided>    <elided>    <none>           <none>

Step 2: Deploy the AWS CDK stack

Create a Python virtual environment and install the dependencies.

cd ~/amazon-eks-inter-az-traffic-visibility
python3 -m venv .venv
source .venv/bin/activate
python3 -m pip install -r requirements.txt

Our AWS CDK stack requires the VPC ID and the Amazon EKS cluster name:

export CLUSTERNAME="cross-az"
export VPCID=$(aws eks describe-cluster --name $CLUSTERNAME --query cluster.resourcesVpcConfig.vpcId | sed -e 's/^"//' -e 's/"$//')

Deploy the stack

npx cdk bootstrap
npx cdk deploy CdkEksInterAzVisibility --parameters eksClusterName=$CLUSTERNAME --parameters eksVpcId=$VPCID

Authorize the AWS Lambda function (k8s client)

Let’s get the Pod Metadata Extractor IAM role, which the AWS Lambda function uses to authenticate and authorize when connecting to the Amazon EKS cluster API.

export POD_METADATA_EXTRACTOR_IAM_ROLE=$(aws cloudformation describe-stacks --stack-name "CdkEksInterAzVisibility" --output json --query "Stacks[0].Outputs[0].OutputValue" | sed -e 's/^"//' -e 's/"$//')

Create a ClusterRole and binding for the Pod Metadata Extractor AWS Lambda function.

kubectl apply -f kubernetes/pod-metadata-extractor-clusterrole.yaml

Append a role mapping to ConfigMap/aws-auth

⚠ We recommend using eksctl, or another tool, to edit the ConfigMap. For information about other tools you can use, see Use tools to make changes to the aws-auth ConfigMap in the Amazon EKS best practices guides. An improperly formatted aws-auth ConfigMap can cause you to lose access to your cluster.

eksctl create iamidentitymapping \
  --cluster ${CLUSTERNAME} \
  --arn ${POD_METADATA_EXTRACTOR_IAM_ROLE} \
  --username "eks-inter-az-visibility-binding" \
  --group "eks-inter-az-visibility-group"


eksctl get iamidentitymapping --cluster ${CLUSTERNAME}

Expected output:

ARN                                                                                             USERNAME                                GROUPS                                  ACCOUNT
arn:aws:iam::555555555555:role/eksctl-cross-az-nodegroup-ng-1-NodeInstanceRole-IPHG3L5AXR3      system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
arn:aws:iam::555555555555:role/pod-metadata-extractor-role                                      eks-inter-az-visibility-binding         eks-inter-az-visibility-group 

At this point, wait a few minutes to allow the VPC Flow Logs to be published, then continue to Step 3.

Step 3: Execute, query, and review results

The Step Functions workflow invokes the AWS Lambda function and, if successful, runs the Amazon Athena named query.

For this post’s walkthrough, we trigger a manual, interactive execution.

Head over to the Step Functions section of the AWS console and:

  • Select the pod-metadata-extractor-orchestrator state machine
  • On the Executions pane, choose Start execution, accept the defaults, and choose Start execution again
  • After a few seconds, the Graph inspector should appear similar to the following diagram:

  • Inspect the output results stored on the pre-created Amazon S3 bucket. (You can get the bucket name by inspecting the Definition tab of the pod-metadata-extractor-orchestrator state machine.)

Example output:

"ResultConfiguration": {
    "OutputLocation": "s3://cdkeksinterazvisibility-athenaanalyzerathenaresul-4444444444444/query_results/"
}
The Step Functions state machine lets you implement a batch workflow that queries the results and visualizes or analyzes them for multiple use cases. In the next section, we run the entire process manually and interactively to view the query results on the Amazon Athena console.

Viewing the process and results interactively

  • Head over to the Amazon Athena section. (A query results bucket should have been set; see the Prerequisites section. This should be a transient, in-Region Amazon S3 bucket used to view the results interactively.)
  • On the Amazon Athena query pane, start a new query (+ sign) and run the following query:
SELECT * FROM "athena-results-table" ORDER BY "timestamp" DESC, "bytes_transfered";

Expected output:

Screenshot of Query 2 Completed

Examine the results!


Considerations

  • Cost: While the blueprints use minimal resources, deploying them incurs cost.
  • The Pod Metadata Extractor AWS Lambda function retrieves all pods (labeled: app) across all namespaces, which adds extra load on the API servers. Enable and observe control plane metrics to optimize the interval and times at which the workflow is executed. In large-scale, busy clusters, consider scoping the function to get pods in specific namespaces.
  • In Amazon EKS clusters where pod churn is high, results may be inconsistent. In this case, consider running the workflow more frequently.
  • All S3 buckets ship server access logs to a designated S3 logs bucket. As a best practice, the logs bucket has no read permissions by default.
  • The AWS CDK stack (by default) generates IAM managed policies that do not restrict resource scope. It also creates IAM entities that contain wildcard permissions. This default approach is too permissive and is used for demonstration purposes only.
  • If you implement this solution (or part of it) in production, we highly recommend following IAM best practices and adhering to the least privilege principle.
  • The solution was designed neither for chargeback nor for any other billing purpose.
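To illustrate the extraction/transform step (and why namespace scoping keeps it cheap), the following sketch turns a pod list, shaped like what a namespace-scoped, label-filtered list-pods call might return, into the CSV the solution stores on Amazon S3. The field names, the client pod name, and the column layout are hypothetical, not the Lambda function's actual output schema.

```python
import csv
import io

# Hypothetical pod records, as a namespace-scoped list-pods call
# might return them (the client pod name and fields are illustrative).
pods = [
    {"name": "server-dep-797d7b54f-b9jf8", "ip": "10.0.2.20",
     "zone": "us-east-2b", "app": "server"},
    {"name": "client-dep-6f9f6c7d4b-xk2lp", "ip": "10.0.1.10",
     "zone": "us-east-2a", "app": "client"},
]

def pods_to_csv(pods):
    """Serialize pod metadata into CSV rows an Athena table could read."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["pod_name", "pod_ip", "az", "app"])  # header row
    for pod in pods:
        writer.writerow([pod["name"], pod["ip"], pod["zone"], pod["app"]])
    return buf.getvalue()

print(pods_to_csv(pods))
```

Scoping the list-pods call to specific namespaces shrinks both the API server load and the CSV written per run, which is why we suggest it for large clusters.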


Destroy the AWS CDK stack

cd ~/amazon-eks-inter-az-traffic-visibility
source .venv/bin/activate
npx cdk destroy CdkEksInterAzVisibility
aws cloudformation delete-stack --stack-name CDKToolkit

If no longer needed, delete the solution's S3 buckets.

Destroy the Amazon EKS cluster

eksctl delete cluster --name=${CLUSTERNAME}


Conclusion

In this post, we showed you a solution that provides cross-AZ pod-to-pod network bytes visibility inside an Amazon EKS cluster that uses the Amazon VPC CNI plugin for Kubernetes. We built this solution after speaking with many AWS customers. Based on their feedback, a core design tenet was to introduce a solution that doesn’t require customers to deploy any operational Kubernetes constructs. Such constructs (DaemonSets, deployments) often mandate privileged access to the underlying nodes and their network namespaces.

We can’t wait to see how the community responds to this solution, and we would love to review your pull requests!

Kobi Biton


Kobi Biton is a Senior Specialist Solutions Architect on the AWS Worldwide Specialist Organization (WWSO) team. Kobi brings about 20 years of experience, specializing in solution architecture, container networking, and distributed systems. Over the past few years, he has worked closely with strategic AWS technology partners (ISVs), helping them grow, scale, and succeed in their journey on the AWS platform.

Dor Fibert


Dor is a Solutions Architect at AWS. He works with AWS customers of varying domains and sizes, teaching them about the inner workings of cloud services and creating innovative solutions and development methodologies. When not at his computer, you’ll find him experimenting in the kitchen while listening to fantasy audiobooks.

Yazan Khalaf


Yazan Khalaf is an AWS Solutions Architect. He engages with a wide variety of AWS customers, assisting them in creating innovative solutions to the challenges they face in the cloud and diving deep into security-related topics. In his spare time, he enjoys researching Formula 1 car aerodynamics and kart racing.