Containers

Navigating enterprise networking challenges with Amazon EKS Auto Mode

Enterprise Kubernetes deployments face challenges in Container Network Interface (CNI) configuration, IP address management, and security policy implementation. As organizations scale clusters across multiple teams and environments, misconfigured CNI plugins, subnet IP exhaustion, fragmented IP planning, and inconsistent or overly permissive network policies become leading causes of networking incidents, failed pod scheduling, and security gaps. Common misconfigurations include overlapping pod CIDRs and incorrect routing rules.

Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode automates infrastructure provisioning and maintenance, including networking components such as the Amazon Virtual Private Cloud (Amazon VPC) CNI, load balancers, and DNS. EKS Auto Mode provides an opinionated networking stack that reduces operational work while preserving controls for security and scale.

This post covers how EKS Auto Mode handles VPC CNI optimization, pod density scaling, network security implementation, and hybrid connectivity.

EKS Auto Mode networking fundamentals

EKS Auto Mode provides an automated, opinionated networking stack that removes many configuration decisions while maintaining performance and security controls for enterprise deployments.

  • Pod and node networking: EKS Auto Mode includes an integrated networking capability that handles node and pod networking, which you can configure by creating a NodeClass Kubernetes object.
  • VPC CNI: EKS Auto Mode includes the Amazon VPC CNI as a fully managed component, providing pods with native VPC IP addresses for optimal performance and streamlined network troubleshooting. This removes overlay network complexity while ensuring seamless integration with existing Amazon Web Services (AWS) networking services and VPC endpoints.
  • NodeClass configuration: You can use the NodeClass resource to customize networking aspects including security group selection, subnet selection for nodes and pods, SNAT policy configuration, and Kubernetes network policies.
  • Load balancing: EKS Auto Mode streamlines load balancing by integrating with Elastic Load Balancing (ELB), and automates the provisioning and configuration of load balancers for Kubernetes Services and Ingress resources. It supports advanced features for both Application Load Balancers (ALBs) and Network Load Balancers (NLBs), manages their lifecycle, and scales them to match cluster demands. An ALB is requested by creating an Ingress with an IngressClass that uses the controller eks.amazonaws.com/alb, and an NLB is requested by creating a Service of type LoadBalancer with the class eks.amazonaws.com/nlb. The behavior is configured with annotations on those resources, and no separate load balancer controller is needed.
  • DNS: EKS Auto Mode includes the cluster DNS as a core managed component and automatically supports caching DNS queries on the node for enhanced performance and reduced latency.

VPC CNI: The foundation of EKS Auto Mode networking

The Amazon VPC CNI gives each pod an IP address from your VPC subnet. This removes the need for overlay networks, which reduces latency and makes troubleshooting clearer. EKS Auto Mode sets up the VPC CNI automatically with optimal settings for performance and security.

This approach works well with existing AWS networking services. Pods can communicate directly with AWS services through VPC endpoints, and network traffic follows regular VPC subnet route tables. Because pods get IP addresses from VPC subnets, monitoring becomes easier: network tools can observe pod traffic just like any other VPC resource. This helps network administrators manage container networking using tools they already know.

VPC CNI: Lifecycle management and upgrades

Operational excellence requires understanding how EKS Auto Mode handles CNI upgrades and migrations. One of the significant advantages of EKS Auto Mode is its automatic CNI lifecycle management, which removes many of the operational challenges associated with maintaining CNI versions and configurations. EKS Auto Mode automatically manages CNI upgrades as part of the cluster maintenance cycle. When new CNI versions become available, EKS Auto Mode evaluates the compatibility with your current workloads and applies upgrades during maintenance windows with minimal disruption. This automated approach ensures that your clusters always run supported CNI versions with the latest security patches and performance improvements. The upgrade process includes several safety mechanisms:

  • Compatibility validation ensures that new CNI versions are compatible with your existing workloads.
  • Automatic rollback capabilities revert changes if issues are detected during the upgrade process.
  • Service availability during upgrades is maintained because CNI updates are applied by rolling out new nodes rather than updating existing nodes in place. Deploy multiple replicas across multiple Availability Zones (AZs) and use Pod Disruption Budgets to make sure that a minimum number of pods remain available during voluntary disruptions.
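
The availability guidance in the last bullet can be sketched as a standard PodDisruptionBudget. The workload name, label, and replica minimum below are illustrative assumptions, not values from EKS Auto Mode itself:

```yaml
# Keep at least 2 pods of an example Deployment available while
# Auto Mode replaces nodes during a CNI or node rollout.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2          # minimum pods that must stay up during voluntary disruptions
  selector:
    matchLabels:
      app: web             # assumed label on a multi-replica Deployment
```

Pair this with topology spread constraints or anti-affinity so the replicas actually land in different AZs.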

Load balancing with ALBs and NLBs

EKS Auto Mode provisions and manages AWS load balancers based on Kubernetes Service and Ingress resources, eliminating the need to install and operate a separate load balancer controller.

For HTTP and HTTPS applications, create an Ingress (with IngressClass controller eks.amazonaws.com/alb) to provision an ALB. The ALB handles layer 7 routing, TLS termination, and path or host-based rules and can integrate with AWS WAF for application-level protection. The behavior is configured through annotations on the Ingress:

  • alb.ingress.kubernetes.io/scheme: internal or internet-facing exposure
  • alb.ingress.kubernetes.io/certificate-arn: TLS certificates
  • alb.ingress.kubernetes.io/target-type: Route to instances or pod IPs directly
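
Putting the pieces together, a minimal sketch looks like the following: an IngressClass pointing at the Auto Mode controller, plus an Ingress carrying the annotations above. Resource names, the certificate ARN, and the hostname are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: eks.amazonaws.com/alb    # EKS Auto Mode provisions the ALB
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip            # route directly to pod IPs
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:111122223333:certificate/EXAMPLE
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com            # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web              # assumed backing Service
                port:
                  number: 80
```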

For TCP and UDP workloads requiring high throughput, declare a Service of type LoadBalancer to provision an NLB. The NLB operates at layer 4, preserving source IP addresses and supporting millions of requests per second. Control behavior through annotations:

  • service.beta.kubernetes.io/aws-load-balancer-scheme: internal or external exposure
  • service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: cross-zone load balancing
  • service.beta.kubernetes.io/aws-load-balancer-security-groups: attach security groups
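
As a sketch, an NLB-backed Service combines the load balancer class with the annotations above. The application name, labels, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  loadBalancerClass: eks.amazonaws.com/nlb   # EKS Auto Mode provisions the NLB
  selector:
    app: tcp-app         # assumed pod label
  ports:
    - port: 443
      targetPort: 8443
      protocol: TCP
```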

For more information on configuration options, see the IngressClassParams reference.

EKS Auto Mode handles load balancer lifecycle including creation, updates, health check configuration, and deletion when services are removed. Security groups are attached according to your NodeClass configuration, and you can layer AWS Shield Advanced for DDoS protection.

Scaling considerations: Pod density and prefix delegation

When EKS clusters scale, pod density per node limitations become a bottleneck because each pod needs an IP address from the VPC CIDR block. Traditional secondary IP mode (/32) assigns one VPC address per pod from the node’s ENI, which limits how many pods can run per node.

Prefix delegation addresses this by letting the Amazon VPC CNI assign IP prefixes (/28, providing 16 IP addresses each) to network interfaces instead of individual secondary IP addresses. This reduces Amazon EC2 API calls and increases the maximum pod density. For example, a c5.xlarge with four ENIs supports 110 pods with prefix delegation as opposed to 58 with secondary IP mode.

EKS Auto Mode uses prefix delegation by default. When subnet fragmentation prevents prefix assignment, it falls back to the secondary IP mode and recalculates the maximum pods per node based on the available secondary IPs and instance ENI limits.

For large-scale deployments, higher pod densities on fewer nodes reduce both compute costs and management overhead. Instance type selection affects capacity because ENI count and maximum IPs per ENI vary by family—compute-optimized or general-purpose families with higher network interface limits preserve headroom during traffic spikes.

Implementing fine-grained network security

Security-focused customers often prioritize network policy implementation as a critical requirement for container deployments. EKS Auto Mode addresses this need through integrated support for Kubernetes Network Policies using the Amazon VPC CNI. Network policies are essential for zero trust architecture because they enable the implementation of the principle of least privilege by ensuring that only authorized pods can communicate with each other. This provides granular control that isolates sensitive workloads from unauthorized access.
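
The least-privilege pattern described above maps to a standard Kubernetes NetworkPolicy. The namespace, labels, and port in this sketch are assumptions for illustration:

```yaml
# Only pods labeled app=frontend may reach the backend pods on
# TCP 8080; all other ingress to the backend is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: prod          # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Because the policy lists Ingress in policyTypes, selected pods drop any inbound traffic not explicitly matched by the from clause.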

The network policy implementation in VPC CNI uses eBPF (extended Berkeley Packet Filter) technology to provide high-performance, kernel-level traffic filtering. This approach offers several advantages over traditional iptables-based solutions, including better performance at scale, reduced CPU overhead, and more granular control over network traffic.

Admin network policies provide cluster administrators with centralized control over network security across all Kubernetes workloads, regardless of namespace. These policies operate in two tiers: Admin Tier policies that can’t be overridden by developers, and Baseline Tier policies that establish default connectivity rules that can be overridden by namespace-level Network Policies when needed. This hierarchical approach enables platform and security teams to enforce organization-wide security requirements while still allowing application teams flexibility within those boundaries.

Cluster Network Policy and DNS Network Policy capabilities extend traditional Kubernetes network controls in two important ways. Cluster Network Policies allow administrators to set cluster-wide security rules that apply across all namespaces, such as isolating sensitive workloads or ensuring monitoring access to all applications. DNS Network Policies (available in EKS Auto Mode) enable filtering of outbound traffic using Fully Qualified Domain Names (FQDNs) instead of IP addresses, which is particularly valuable for controlling access to external software as a service (SaaS) applications or on-premises applications where IP addresses frequently change. This DNS-based filtering operates at layer 7 of the OSI model, allowing pods to connect only to explicitly permitted domain names such as “*.example.com” or “internal-api.company.com”.

Pod network isolation

Security requirements often drive the need for network separation between infrastructure and application layers. For security reasons, your pods might need to use a different subnet or security groups than the node’s primary network interface. This separation empowers organizations to implement more granular security policies and network access controls.

EKS Auto Mode addresses these networking requirements through the NodeClass resource, which you can use to customize certain aspects of the networking capability. The podSubnetSelectorTerms and podSecurityGroupSelectorTerms fields enable advanced networking configurations by allowing Pods to run in different subnets than their nodes.
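
A minimal NodeClass sketch of this separation follows. The field names come from the EKS Auto Mode NodeClass schema, while the tag keys and values used by the selectors are assumptions you would replace with your own tagging convention:

```yaml
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: isolated-pods
spec:
  # Nodes keep their own subnets and security groups; pods are placed
  # into dedicated subnets and security groups selected by tag.
  podSubnetSelectorTerms:
    - tags:
        kubernetes.io/role/pods: "1"      # assumed tag on dedicated pod subnets
  podSecurityGroupSelectorTerms:
    - tags:
        Name: pod-security-group          # assumed security group tag value
```

Reference the NodeClass from a NodePool so that nodes launched for those workloads apply the pod-level subnet and security group selection.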

Egress traffic control and SNAT policy

Enterprises often need predictable egress behavior for compliance, traffic engineering, or on-premises integration. In EKS Auto Mode, the managed Amazon VPC CNI supports configurable Source Network Address Translation (SNAT) so that you can control how pod traffic appears to external destinations. By default, the node SNAT policy is Random, and the Amazon VPC CNI plugin translates a pod's IPv4 address to the primary private IPv4 address of the primary elastic network interface of the node on which the pod is running. This streamlines hub-and-spoke routing and works well with NAT gateways. In EKS Auto Mode, SNAT is controlled only through the NodeClass. When on-premises firewalls must identify individual pods, or when compliance requires traceability to the workload, set the NodeClass snatPolicy property to Disabled. Furthermore, enable VPC Flow Logs and network flow monitoring for production deployments to support security operations and compliance requirements.
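
Disabling SNAT is a one-line NodeClass setting; this sketch shows only that property, with the resource name as a placeholder:

```yaml
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: no-snat
spec:
  snatPolicy: Disabled   # pods keep their own IPs on egress; external
                         # routes must be able to reach pod subnets
```

With SNAT disabled, return traffic must be routable to the pod CIDR, so verify route tables and on-premises firewall rules before applying this in production.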

Hybrid networking and on-premises integration

EKS Auto Mode clusters connect to on-premises resources using standard AWS networking patterns such as AWS Site-to-Site VPN or AWS Direct Connect. Because EKS Auto Mode uses the Amazon VPC CNI with native VPC integration, pods receive IP addresses directly from VPC subnets, which makes them fully routable through AWS networking services.

For complex multi-VPC and hybrid architectures, AWS Transit Gateway integration enables EKS Auto Mode clusters to participate in large-scale networks spanning multiple VPCs and on-premises locations.

DNS integration through Amazon Route 53 Resolver rules allows workloads to resolve on-premises DNS names, enabling seamless service discovery across hybrid environments. Consider using the ExternalDNS project, which is offered as an Amazon EKS community add-on, to automate DNS zone configuration for your Kubernetes workloads in Amazon Route 53.
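
With ExternalDNS installed, a hostname annotation on a Service is enough to drive record creation. In this sketch the Service name, labels, ports, and domain are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  annotations:
    # ExternalDNS watches this annotation and creates a matching
    # record in the configured Route 53 hosted zone.
    external-dns.alpha.kubernetes.io/hostname: api.example.com   # placeholder domain
spec:
  type: LoadBalancer
  loadBalancerClass: eks.amazonaws.com/nlb   # NLB provisioned by Auto Mode
  selector:
    app: api             # assumed pod label
  ports:
    - port: 80
      targetPort: 8080
```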

Subnet planning and non-routable subnets

Effective subnet planning requires dedicated pod subnets sized for expected growth. Many organizations add a secondary CIDR block from RFC 1918 ranges such as 10.0.0.0/8 or 172.16.0.0/12, or the shared address space 100.64.0.0/10 from RFC 6598, for pod addressing. This keeps the primary VPC space for nodes and load balancers. Configure pod subnets through the NodeClass resource using podSubnetSelectorTerms to direct pods to these dedicated subnets. A /20 per Availability Zone provides approximately 4,000 addresses per zone, which accommodates rolling node replacements and surge capacity during deployments. Separating pod and node subnets streamlines route table management and enables distinct security group policies.
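
The subnet-planning pattern above reduces to a tag-based selector in the NodeClass. The tag key here is a hypothetical convention you would define when creating the /20 pod subnets from the secondary CIDR:

```yaml
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: secondary-cidr-pods
spec:
  # Direct pods to dedicated subnets carved from a secondary CIDR
  # (for example, /20s from 100.64.0.0/10 in each AZ).
  podSubnetSelectorTerms:
    - tags:
        example.com/pod-subnet: "true"   # assumed tag on the pod subnets
```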

Conclusion

Amazon EKS Auto Mode manages Kubernetes networking components including the VPC CNI, load balancers, and DNS while preserving enterprise controls for security and scale. Native VPC addressing, automatic prefix delegation, and eBPF-backed network policies provide performance without overlay complexity. Managed lifecycle for CNI upgrades, compatibility validation, and automated rollbacks reduce operational overhead. Flexible SNAT policies and standard AWS networking patterns enable hybrid connectivity across VPCs and on-premises networks. Platform teams can use these capabilities to focus on application requirements while maintaining resilient network architectures.

Ready to get started with EKS Auto Mode? You can deploy a new EKS Auto Mode cluster or enable EKS Auto Mode on an existing cluster using eksctl, the AWS Command Line Interface (AWS CLI), the AWS Management Console, EKS APIs, or your preferred infrastructure-as-code (IaC) tools. Try our hands-on workshop that guides you through deploying workloads and exploring the EKS Auto Mode capabilities. You can run this in your own AWS account or register for an AWS-hosted event.


About the authors


Sai Charan Teja Gopaluni

Sai Charan Teja Gopaluni is a Senior Specialist Solutions Architect at Amazon Web Services, specializing in container technologies, self-managed AI infrastructure, and agentic AI. He helps customers design and deploy modern, scalable, and secure workloads that accelerate their cloud transformation and AI initiatives. Outside of work, Sai enjoys playing tennis and staying current with emerging technologies.


Hari Charan Ayada

Hari Charan Ayada is a Senior Solutions Architect at AWS with 13+ years of experience in software development and cloud architecture. He specializes in containerized workloads and cloud-native architectures for Financial Services and Media & Entertainment customers across the Nordics. Hari advises large enterprises on modernizing their platforms using container orchestration, microservices, and scalable system design on AWS.