Containers
AWS at KubeCon EU 2026: Open Source Leadership Meets Production Innovation

Kubernetes has rapidly evolved from a container orchestrator into the de facto operating system for modern applications, powering internal developer platforms, cutting-edge AI training and inference pipelines, large-scale data processing, and much more. According to the 2025 CNCF Annual Cloud Native Survey, 82% of container users now run Kubernetes in production environments, with 66% using it to host GenAI workloads. These shifts have established Kubernetes as the universal foundation for a majority of today’s mainstream enterprise applications and the next generation of AI/ML workloads.
However, this widespread adoption has brought new considerations as Kubernetes matures. AI workloads are pushing the boundaries of cluster scaling, where provisioning thousands of GPU nodes efficiently becomes critical for emerging AI-powered applications. Security models are evolving to address the ephemeral nature of container environments, requiring new approaches to visibility and policy enforcement. At AWS, we operate Kubernetes for customers at extraordinary scale, supporting tens of millions of clusters across diverse industries, use cases and deployment patterns. This provides direct insight into what breaks, what scales, and what organizations need to run production workloads reliably, ultimately shaping how we contribute to the Kubernetes ecosystem. As both operators and contributors, the Amazon EKS team advances Kubernetes through open collaboration, purpose-built open source projects, upstream contributions, and community investment, all focused on making Kubernetes reliable, scalable, and ultimately invisible. Reinforcing this commitment, AWS pledged $3 million in cloud credits to the CNCF for 2026, sustaining the open source infrastructure that powers the Kubernetes community.
In the following sections, we explore how AWS continuously innovates with Kubernetes and how you can experience these innovations firsthand at KubeCon + CloudNativeCon Europe 2026.
Kubernetes for AI: Democratizing innovation at all scales
AI workloads demand infrastructure that scales dynamically while maintaining consistent performance. Last summer, we announced that Amazon EKS supports up to 100K worker nodes in a single cluster, the equivalent of ~1.6 million AWS Trainium accelerators or 800K NVIDIA GPUs. To achieve this, we made significant enhancements across the stack, including architectural changes to core Kubernetes components that AWS engineers contributed to the upstream community. With a reimagined etcd storage layer for efficient state management and an optimized control plane that handles millions of scheduling, discovery, and repair operations, Amazon EKS delivers the scale, performance, and reliability required for the most advanced AI/ML workloads while preserving Kubernetes conformance. To make this enterprise-grade infrastructure accessible at all scales, we announced EKS Provisioned Control Plane at re:Invent last year, which lets customers pre-provision Kubernetes control plane capacity from a set of high-performance capacity tiers, ensuring predictable performance at peak demand for demanding emerging workloads like HPC, agentic AI, and multimodal inference. We also announced Seekable OCI (SOCI) Parallel Pull mode last August, which transforms container startup performance by parallelizing both image download and unpacking operations, achieving nearly 60% acceleration for AI containers with large model files that can reach tens of GBs, directly addressing the cold-start latency problem for AI inference. To extend this benefit broadly, the parallel unpacking capability was subsequently contributed to containerd, making it a built-in feature available to all container users out of the box.
Beyond infrastructure scale and performance enhancements, we’ve also made it easier for customers leveraging AI to build with Kubernetes. The fully managed EKS MCP Server (in preview) enables AI code assistants to interact with and manage Kubernetes clusters through natural language, providing real-time contextual guidance. The AWS DevOps Agent (in preview) brings agentic reasoning to Kubernetes operations, helping teams diagnose issues, optimize configurations, and automate routine tasks through natural language interactions.
As AI workloads increasingly move to Kubernetes, ensuring interoperability and portability across platforms becomes critical. AWS helped establish the foundation for the Certified Kubernetes AI Conformance specification and achieved CNCF Certified Kubernetes AI Conformance certification for Amazon EKS at KubeCon North America 2025. The certification validates comprehensive capabilities, including GPU resource management, distributed workload scheduling, intelligent accelerator scaling, and integrated infrastructure monitoring, and demonstrates our commitment to providing customers with a verified, standardized platform for running AI workloads on Kubernetes.
Automate and Simplify Kubernetes: Eliminating the operational burden
Recognizing that customers face persistent challenges operating Kubernetes at scale, particularly around cluster upgrades, node lifecycle management, and maintaining add-ons, AWS has invested in both open source community contributions and Amazon EKS innovations that address these core friction points. AWS engineers identified a gap in the open source ecosystem for intelligent, workload-aware autoscaling and developed Karpenter, a node lifecycle manager that provisions compute based on real-time workload requirements, eliminating manual capacity planning and enabling efficient scaling, and subsequently donated it to the CNCF. Karpenter has since become the industry-leading tool for autoscaling in Kubernetes.
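To make the workload-aware model concrete, here is a minimal sketch of a Karpenter v1 NodePool; the name, limits, and consolidation settings are illustrative, not a recommended configuration:

```yaml
# Illustrative Karpenter NodePool: Karpenter watches unschedulable pods and
# launches nodes that satisfy their requirements within these constraints,
# then consolidates underutilized capacity automatically.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default                  # hypothetical name
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:              # AWS-specific node settings live in an EC2NodeClass
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "1000"                  # cap total vCPU this pool may provision
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
```

Rather than scaling pre-defined node groups, Karpenter provisions just-enough compute for pending pods, which is what eliminates manual capacity planning.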
Built on the principles of Karpenter, EKS Auto Mode (launched in December ’24) extends automation across the entire cluster lifecycle, automatically provisioning, scaling, patching, and upgrading compute resources. Through 2025, we’ve delivered continuous enhancements for Auto Mode, including optimized node lifecycle management for AI/ML workloads, Amazon EC2 capacity reservation support for guaranteed GPU access, intelligent capacity management that balances diversity preferences with availability, and comprehensive AWS KMS encryption for both ephemeral and root storage volumes. Most recently, in Feb ’26, AWS open sourced the EKS Node Monitoring Agent, which provides visibility into infrastructure health by automatically surfacing node-level system, storage, networking, and accelerator issues as Kubernetes node conditions. This enables automatic node repair and faster troubleshooting of infrastructure problems across production environments.
GitOps & Platform Engineering Strategy: Building Self-Service Without Chaos
Platform engineering has emerged as the operating model for scaling Kubernetes across diverse workloads, multiple clusters and multiple environments. Platform teams need to enable developer self-service while maintaining governance, reliability, and operational consistency across clusters. As organizations grow, balancing developer autonomy with operational control becomes increasingly complex.
At re:Invent 2025, we announced Amazon EKS Capabilities to dramatically simplify how platform teams build and deploy applications on Kubernetes. EKS Capabilities delivers fully managed platform features, including continuous deployment with Argo CD, AWS resource management through AWS Controllers for Kubernetes (ACK), and dynamic resource orchestration using Kube Resource Orchestrator (kro). AWS handles auto-scaling, patching, and upgrading, enabling platform teams to offer GitOps workflows and infrastructure-as-code patterns without the operational burden. These features are built on a foundation of open source innovation. kro, contributed by AWS in 2024 and now a subproject of Kubernetes SIG Cloud Provider, has gained significant traction and provides a cloud-agnostic way to define reusable infrastructure patterns through APIs. Platform engineers package complex application stacks into building blocks that deploy consistently across on-premises, hybrid, and multi-cloud environments, allowing developers to consume infrastructure through simple APIs rather than wrestling with YAML configurations. ACK extends this by enabling declarative AWS resource management directly through Kubernetes-native APIs, so teams can provision and manage AWS services using familiar kubectl commands. Together, kro and ACK enable GitOps workflows where infrastructure and application deployments are version-controlled, auditable, and repeatable. Cedar, an open-source authorization policy language and evaluation engine, complements this platform engineering foundation by enabling teams to define fine-grained, context-aware permissions as code that is automatically validated and auditable, making Zero Trust architectures practical across clusters and services without sacrificing the declarative approach GitOps demands.
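As a sketch of the pattern kro enables, the following ResourceGraphDefinition exposes a hypothetical WebApp API whose fields fan out into underlying Kubernetes resources; the names and schema fields are illustrative:

```yaml
# Illustrative kro ResourceGraphDefinition: defines a simple "WebApp" API so
# developers create one small WebApp object instead of raw Deployments.
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webapp                        # hypothetical name
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:                             # the fields developers fill in
      name: string
      image: string
      replicas: integer | default=2
  resources:
    - id: deployment                  # one node of the resource graph
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: ${schema.spec.image}
```

A developer then applies a short `kind: WebApp` manifest with just a name, image, and replica count, and kro reconciles the full graph behind that API.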
Other recent Amazon EKS innovations
Beyond the innovations above, AWS continues to enhance Amazon EKS and Amazon ECR with new features that strengthen security, observability, and operational resilience:
- Amazon ECR Managed Signing: Establishes trust at the source by enabling container image signing with just a few clicks in the ECR Console or a single API call, eliminating the complex process of setting up signing infrastructure and ensuring only verified images enter clusters.
- Amazon ECR Archival: Optimizes supply chain costs by reducing storage expenses for rarely accessed images while maintaining compliance requirements, allowing teams to retain historical container images without incurring unnecessary storage costs.
- Enhanced Container Network Observability: Provides granular network metrics, anomaly detection, and visualization directly within the AWS console, enabling teams to identify top talkers, flows causing retransmissions, and timeout issues without deploying additional monitoring infrastructure.
- Advanced Network Security Policies: Extends defense-in-depth by centrally enforcing network access filters across entire clusters and leveraging DNS-based policies to secure egress traffic to cluster-external destinations.
- AWS Backup for Amazon EKS: Provides centralized, fully managed protection for both cluster state and persistent application data with automated scheduling, retention management, immutable vaults, and cross-Region copies, enabling teams to restore entire clusters, specific namespaces, or individual volumes.
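To ground the network security item above: cluster-wide enforcement builds on the standard Kubernetes NetworkPolicy API. A minimal sketch (namespace, labels, and selectors are hypothetical) that restricts a pod’s egress to DNS plus one internal service might look like:

```yaml
# Illustrative Kubernetes NetworkPolicy: default-deny egress for "checkout"
# pods, except DNS lookups and calls to the internal payments API.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: checkout-egress          # hypothetical name
  namespace: shop                # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: checkout
  policyTypes:
    - Egress                     # selecting Egress denies all egress not listed below
  egress:
    - to:                        # allow DNS resolution via cluster DNS
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
    - to:                        # allow traffic to payments API pods in this namespace
        - podSelector:
            matchLabels:
              app: payments-api
```

The DNS-based egress policies for cluster-external destinations mentioned above extend beyond this core API, which can only match pods, namespaces, and IP blocks.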
Join Us at KubeCon + CloudNativeCon Europe 2026
AWS is bringing open source leadership and Kubernetes production innovation to Amsterdam on March 23rd – 26th. Whether you’re scaling AI workloads, automating cluster operations, building self-service platforms, or strengthening operational resilience, you’ll find hands-on experiences that accelerate your Kubernetes journey.
Keynote (March 24th)
- Jesse Butler, Principal Product Manager for Amazon EKS, delivers a keynote “From Complexity to Clarity – Engineering an Invisible Kubernetes,” exploring how three community-driven upstream innovations – Karpenter, kro, and Cedar – are helping shape Kubernetes for its next chapter, where it becomes invisible. Learn more
Day 0 Workshops (March 23)
We’re conducting two Day 0 hands-on workshops on March 23, covering building and deploying production platforms and scaling GenAI inference on Kubernetes:
- Morning (9:00 AM-12:00 PM): “Accelerate Platform Engineering on Amazon EKS” – hands-on experience building production platforms. Register here.
- Afternoon (1:00-4:00 PM): “Building and Scaling GenAI Workloads with Amazon EKS” – practical experience deploying production-ready inference workloads. Register here.
Booth #700 (March 23-26):
- 20+ interactive lightning talks in mini-theater featuring technical presentations and live product demonstrations covering a range of topics. Full list here.
- Four dedicated demo stations covering ‘Kubernetes for AI’, ‘Automate your Kubernetes’, ‘GitOps and Platform Strategy’, and ‘Kubernetes Operations Simplified’ for hands-on exploration and detailed conversations with AWS subject matter experts.
Demo Theatre (March 25th):
- Exclusive demo to see how we’re simplifying GitOps deployments on Kubernetes with Amazon EKS Capabilities. More details here.
Special Guest (March 25):
- Nana Janashia, Co-founder of TechWorld with Nana, facilitates panel interviews at AWS covering Kubernetes community building and enterprise operational challenges. This is your chance to engage directly with one of the Kubernetes community’s most influential voices and gain invaluable insights from industry experts!
Virtual participation
Can’t make it to Amsterdam? Join one of our virtual AWS-led hands-on workshops with Kubernetes and Amazon EKS experts by registering here.