AWS at KubeCon + CloudNativeCon Europe Netherlands, 2026
AWS is offering 2 FREE Hands-On Workshops to build and deploy production platforms and scale GenAI inference on Kubernetes on March 23 (KubeCon Pass not required)
Meet AWS Team at Booth #700
Join us at KubeCon Amsterdam at Booth #700 to discover AWS’s latest innovations in cloud-native technologies and Kubernetes. Experience live product demonstrations covering ‘Kubernetes for AI’, ‘Automate your Kubernetes’, ‘GitOps and Platform Strategy’, and ‘Kubernetes Operations Simplified’. Attend 25-minute interactive mini-theater lightning talks featuring technical presentations, or visit our demo stations for hands-on exploration and detailed conversations with AWS subject matter experts. Connect with our team to discuss your specific use cases and discover how to optimize your EKS deployments for performance, security, and cost efficiency.
AWS Keynote - “From Complexity to Clarity - Engineering an Invisible Kubernetes”
Time: Tuesday, March 24, 9:42 AM to 9:47 AM CET
Location: Hall 12, RAI
Speaker: Jesse Butler, Principal Product Manager and Technologist, Amazon Elastic Kubernetes Service (EKS)
Kubernetes has become the ubiquitous control plane for some of the most demanding distributed workloads ever built: AI training and inference at scale, heterogeneous and accelerated compute, and complex multi-tenant platforms. Yet even as it becomes the standard, developers still reason about infrastructure details while platform teams translate application intent into cluster configuration, autoscaling, and resource tuning across thousands of services. If Kubernetes is to truly fade into the stack, the model must evolve. Infrastructure must respond dynamically to workload signals in real time. Higher-level abstractions must reduce direct exposure to low-level primitives without sacrificing control. Governance must be programmatic and consistent across distributed environments. This keynote traces that evolution through three community-driven upstream innovations, Karpenter, kro, and Cedar, and the engineering choices shaping Kubernetes for its next chapter.
Join Us
Don't miss this exclusive opportunity! Meet Nana Janashia, Co-founder of TechWorld with Nana, at the AWS booth on March 25. Nana will be conducting panel interviews covering topics from Kubernetes community building to enterprise operational challenges. This is your chance to engage directly with one of the Kubernetes community's most influential voices and gain invaluable insights from industry experts. Mark your calendar and be part of these dynamic conversations!
Featured Lightning Talks | AWS Booth #700 | Mar 24 - Mar 26
March 24th | 10:50 AM to 6:10 PM CET
| Session | Description |
|---|---|
| From Models to Agents: Running LLM-Powered AI Applications on Amazon EKS Auto Mode | As AI systems evolve from simple inference endpoints to agentic applications, operating large language models on Kubernetes introduces new infrastructure challenges. This demo-driven session shows how Amazon EKS Auto Mode simplifies deploying and operating LLM-powered agentic AI workloads at scale. Watch a live deployment of a large open-source LLM for inference, followed by an agentic AI application coordinating model reasoning and external actions. Learn how EKS Auto Mode automates cluster and compute management while supporting GPU workloads, with seamless integration of Ray and vLLM. Discover optimizations including SOCI image pulls and emerging capabilities like Agentic AI on EKS, the EKS MCP server, and Kiro integration. |
| GitOps Account Factory: Multi-Tenant AWS Infrastructure with ACK and kro | SaaS vendors require isolated workloads across multiple AWS accounts for tenant separation. This demo showcases an account factory combining AWS Controllers for Kubernetes (ACK) with kro to declaratively provision AWS Organizations accounts, VPCs, and EKS clusters through simple manifests. Watch us create a fully configured customer account with networking and compute in minutes using kubectl commands. Learn how enterprises streamline tenant onboarding, enforce infrastructure standards, and maintain consistency across hundreds of accounts through Kubernetes-native GitOps practices. This session demonstrates practical patterns for automating multi-tenant infrastructure at scale using ACK and kro. |
| Scale, optimize, and upgrade your Kubernetes cluster with Karpenter and EKS Auto Mode | Dive deep into EKS scaling patterns with Karpenter and EKS Auto Mode. This talk explores Karpenter's scaling algorithms and sophisticated node provisioning strategies alongside EKS Auto Mode features. Learn how to implement complex scheduling constraints, optimize node selection using custom provisioners, and architect automated node upgrade strategies. Examine real-world use cases including multi-architecture workloads, spot instance handling, and performance tuning. Discover how these technologies work together to deliver efficient, automated cluster management. Ideal for Kubernetes operators and platform engineers seeking to optimize their EKS environments with advanced scaling and automation techniques. |
| Elevating EKS Security: Admin and Advanced Network Policies for Multi-Layer Protection | Discover how Amazon EKS Network Policies deliver comprehensive security through two powerful capabilities. Admin Network Policies enable platform teams to enforce cluster-wide baseline security policies that cannot be overridden, establishing foundational guardrails across all namespaces and workloads. Advanced Network Policies introduce Layer 7 controls with DNS-based access control, allowing you to define policies using fully qualified domain names instead of IP addresses. This demo showcases implementing default-deny architectures with Admin policies and securing external service access with DNS-based rules. Learn practical patterns for implementing defense-in-depth strategies that strengthen security posture and simplify policy management. |
| Accelerate Incident Response on EKS with the AWS DevOps Agent | Discover how the AWS DevOps Agent transforms incident response for Amazon EKS clusters. This frontier agent autonomously investigates production issues by correlating metrics, logs, and deployment data across observability tools and CI/CD pipelines. Experience how the agent identifies root causes, generates mitigation plans, and provides actionable recommendations for microservices applications. Walk through real-world EKS troubleshooting scenarios including pod failures, networking issues, and resource constraints. See how the agent integrates with CloudWatch, infrastructure as code, GitHub Actions, and collaboration tools like Slack to streamline incident coordination and accelerate recovery times. |
| Infrastructure for AI Agents: kro Templates for Agentic Workloads | Agentic AI applications have unique infrastructure needs including long-running processes, tool access, and unpredictable resource usage. This session shows how to build reusable kro templates specifically for AI agent deployments on EKS. Create a "deploy an agent" golden path that handles GPU allocation, vector database connections, and LLM API credentials through a single custom resource definition. Learn how to abstract infrastructure complexity, enabling developers to focus on agent logic rather than infrastructure management. Perfect for AI startups and platform teams building self-service capabilities for AI agent deployments on Kubernetes. |
| How to optimize your platform cost on EKS by adopting Graviton | This session demonstrates how organizations significantly reduce Amazon EKS platform costs while improving performance by adopting AWS Graviton processors. Graviton-powered instances deliver up to 20% cost reduction, 40% better price-performance ratio, and 60% lower energy consumption compared to x86-based instances. Learn from a real-world case study where a platform engineering team planned Graviton adoption by implementing Karpenter and creating a multi-architecture container strategy. By preparing the platform and providing tools to help application teams migrate workloads from x86 to ARM-based processors, Graviton adoption grew from less than 2% to 43%, with the platform optimized to prioritize Graviton instances, resulting in $5M annual cost savings. |
| Edge-to-Cloud Smart Factory: EKS Hybrid Nodes on Raspberry Pi | See EKS Hybrid Nodes in action running a smart factory on a Raspberry Pi. Factory machines and MQTT broker run on-premises while telemetry processors burst to cloud via EKS Auto Mode when demand spikes. Watch KEDA scale processors based on real-time MQTT message rates, Auto Mode provision cloud compute automatically, and Kyverno policies ensure hybrid pods are preserved during scale-down. This live demo showcases practical edge-to-cloud patterns: workload placement with node selectors, event-driven autoscaling, intelligent cloud bursting, and cost-optimized scale-down, all orchestrated from a single EKS control plane. |
| Expedite Model Loading for Training & Inference | Large language models cause significant inference latency on EKS with GPUs due to slow loading. The Storage→Network→CPU→PCIe→GPU data path bottlenecks loading latency. Learn strategies like decoupling model artifacts and using optimal storage options to retrieve weights and biases during inference. Discover how to use Seekable OCI (SOCI) to lazy load images, expediting inference startup from minutes to seconds. Learn to diagnose bottlenecks, select optimal storage solutions, and implement advanced strategies for rapid LLM inference on EKS. |
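The "Expedite Model Loading" session frames load latency as a pipeline problem: a streamed load is gated by the slowest stage on the Storage→Network→CPU→PCIe→GPU path. A minimal back-of-the-envelope sketch of that reasoning, where every bandwidth figure is an illustrative assumption rather than a measurement of any real instance type:

```python
# Back-of-the-envelope estimate of model-load time for a staged data path.
# A streamed load is gated by the slowest stage, so
# total time ~= model size / min(stage bandwidths).
# All bandwidth numbers below are illustrative assumptions.

def load_time_seconds(model_gb: float, bandwidths_gbps: dict) -> tuple[str, float]:
    """Return the bottleneck stage name and the time it imposes."""
    stage, bw = min(bandwidths_gbps.items(), key=lambda kv: kv[1])
    return stage, model_gb / bw

# Hypothetical pipeline for a 140 GB set of model weights.
stages = {
    "storage": 1.0,   # GB/s from untuned object storage (assumed)
    "network": 12.5,  # GB/s on a 100 Gbps link (assumed)
    "pcie":    32.0,  # GB/s over PCIe Gen4 x16 (assumed)
}
stage, secs = load_time_seconds(140, stages)
print(f"bottleneck={stage}, load takes ~{secs:.0f}s")  # bottleneck=storage, ~140s
```

This is why the session's remedies (faster storage options, SOCI lazy loading) target the storage stage first: raising any other stage's bandwidth leaves the total unchanged.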
March 25th | 10:30 AM to 5:00 PM CET
| Session | Description |
|---|---|
| Auto Mode Cost Optimization: Spot, Consolidation, and Zero Config | Cost optimization in Kubernetes requires Spot instance expertise, node group tuning, and continuous capacity planning. EKS Auto Mode eliminates this operational overhead through automated optimization. This demo shows EKS Auto Mode handling production workloads with automatic Spot instance integration and graceful interruption handling, intelligent instance type selection across compute and memory-optimized families, continuous consolidation moving workloads to fewer nodes during low traffic, and sub-minute scaling provisioning additional capacity during bursts. Watch Karpenter make real-time decisions selecting optimal instances. See how EKS Auto Mode delivers cost efficiency without manual configuration or Spot expertise. |
| NVIDIA Dynamo on EKS | Deploy NVIDIA Dynamo on EKS with production-ready inference platform patterns. This session demonstrates strategies including prefill/decode disaggregation and KV cache offloading. Experience the power of Elastic Fabric Adapter (EFA) using the NVIDIA NIXL inference library for high-performance model serving. Learn architectural patterns and best practices for running advanced NVIDIA inference solutions on Kubernetes, enabling efficient large language model deployments with improved throughput and reduced latency. |
| AI Platform: From Zero to Production LLMs with EKS, kro, and GitOps | Deploying production AI workloads on Kubernetes remains complex, requiring expertise across GPU scheduling, model serving, and infrastructure provisioning. This demo showcases a fully automated, GitOps-driven platform leveraging EKS, ArgoCD, kro (Kube Resource Orchestrator), and ACK to deploy complete AI stacks—including vLLM, LiteLLM, OpenWebUI, and Langfuse—with a single custom resource. |
| Autoscaling Isn't Set-and-Forget: Tuning KEDA & Karpenter for Real Workloads | You've deployed KEDA and Karpenter—now what? This session dives into advanced configurations, failure modes, and optimization patterns that separate demo clusters from production workloads. On the Karpenter side: Node Disruption Budgets to control voluntary disruptions, surviving Spot interruptions gracefully, and bootstrap optimizations that reduce node startup times. On KEDA: scaling modifiers combining multiple metrics into smarter decisions, cooldown tuning to prevent thrashing, and shifting from reactive scaling to proactive workload sizing. Learn how EKS Auto Mode changes the equation, what you get out of the box, what requires configuration, and how KEDA fits into managed Karpenter environments. |
| Building Production-Ready Platforms on Amazon EKS with EKS Capabilities | Discover how Amazon EKS Capabilities revolutionizes platform engineering by eliminating operational overhead through fully managed Kubernetes-native solutions. This demo showcases a production-ready platform built using three integrated managed capabilities: Argo CD for GitOps-based continuous deployment, AWS Controllers for Kubernetes (ACK) for declarative AWS resource management, and kro for creating reusable, self-service building blocks. Experience advanced deployment strategies with Argo Rollouts for blue/green deployments ensuring zero-downtime releases, while Kargo orchestrates application promotion across environments. Built with AWS Identity Center integration and GitOps principles, this solution provides actionable patterns for modern platform engineering supporting diverse workloads. |
| How VPC Lattice simplifies secure multi-cluster networking on EKS | As organizations scale to multiple EKS clusters, networking becomes a complex problem. How do you securely connect services across clusters, migrate workloads without disrupting users, and observe traffic flowing across cluster boundaries? This session builds a multi-cluster EKS networking layer using Amazon VPC Lattice—no sidecars, no service mesh complexity. Learn how to establish cross-cluster service connectivity using Kubernetes-native Gateway API resources, enforce zero-trust security with IAM-based authentication using SigV4, execute canary application migrations across clusters with weighted routing and session stickiness, and gain end-to-end visibility using EKS enhanced networking observability. |
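The KEDA tuning session above centers on two levers: the scaling recommendation itself and the stabilization behavior that prevents thrashing. KEDA-driven scaling ultimately feeds the Kubernetes HPA formula, desired = ceil(current × metric / target). A conceptual sketch of both levers; the window-based down-scale rule mirrors HPA stabilization in spirit, but the specific window and bounds here are illustrative assumptions:

```python
import math

# Sketch of the HPA recommendation formula that KEDA-driven scaling relies on:
#   desired = ceil(current_replicas * current_metric / target_metric)
# plus a simple stabilization rule: when scaling down, take the highest
# recommendation from a recent window so a brief dip doesn't evict capacity.
# min/max bounds and the window are illustrative assumptions.

def desired_replicas(current: int, metric: float, target: float,
                     lo: int = 1, hi: int = 100) -> int:
    want = math.ceil(current * metric / target)
    return max(lo, min(hi, want))

def stabilized(recent_recommendations: list[int]) -> int:
    # Down-scaling honors the max of the window to avoid thrashing.
    return max(recent_recommendations)

print(desired_replicas(current=4, metric=900, target=200))  # burst: 18
print(stabilized([18, 9, 5]))  # dip after a burst still holds 18 replicas
```

Cooldown tuning in the session is essentially choosing how long that window is: too short and replicas flap with the metric, too long and you pay for capacity the burst no longer needs.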
March 26th | 10:30 AM to 2:00 PM CET
| Session | Description |
|---|---|
| KV Cache Optimization: From Working to Efficient LLM Inference on Kubernetes | We've moved past getting LLMs running—now it's time to optimize. As inference workloads scale, KV cache becomes your biggest GPU memory bottleneck and cost driver. This demo-driven session shows practical KV cache optimization strategies that unlock better resource utilization. Through live comparisons, we'll demonstrate measurable improvements in GPU utilization and throughput. Learn how to identify optimization opportunities and implementation patterns on Kubernetes, and walk away with actionable strategies to maximize your inference infrastructure efficiency without sacrificing performance. |
| Shift-Left with AI: Automated Security, Cost, and Best Practice Reviews for Kubernetes Manifests | In this session, we'll build a complete AI-powered code review system for Kubernetes manifests that integrates seamlessly into your GitOps workflow. Using Claude on AWS Bedrock with ArgoCD and GitHub Actions, we'll create an intelligent reviewer that analyzes every pull request for security vulnerabilities, cost implications, reliability risks, and best practice violations—then explains its findings in plain English. |
| Building Multi-Tenant SaaS Architecture with Amazon EKS Capabilities and Auto Mode | Discover how Amazon EKS Capabilities revolutionizes multi-tenant SaaS development with fully managed Kubernetes-native solutions. This demo showcases building production-ready platforms using EKS Capabilities for Argo CD, ACK, and kro, plus EKS Auto Mode. Learn how these integrated capabilities automate tenant onboarding in minutes, enforce namespace-level isolation, and orchestrate AWS infrastructure. See how platform teams eliminate operational bottlenecks to scale across hundreds of tenants while developers ship faster with modern GitOps patterns. This session provides actionable patterns for building scalable, secure multi-tenant architectures on Amazon EKS. |
| Simplify backup for stateful Amazon EKS workloads | Protecting persistent data in EKS environments presents unique challenges. This session explores implementing the native AWS Backup service for EKS workloads, eliminating dependence on third-party solutions. Learn practical configuration steps for protecting S3, EBS, and EFS storage used by containerized applications. Discover how to integrate EKS clusters into AWS Backup policies, automate backup and copy schedules, and establish efficient restoration workflows. The demonstrated approach shows how organizations achieve lower Recovery Time Objectives while maintaining fully AWS-native infrastructure. Ideal for teams seeking to simplify data protection for stateful workloads on Amazon EKS. |
| Cost Allocation Simplified with Split Cost Allocation Data | Allocating costs for workloads running on shared Amazon EKS clusters is challenging because traditional instance-level billing is too broad for environments where multiple teams share underlying resources. AWS Split Cost Allocation Data (SCAD) addresses this by providing granular, pod-level cost breakdowns along with aggregations on common K8s constructs and labels, in the AWS Cost and Usage Report (CUR), calculating costs based on requested/consumed resources and distributing idle capacity costs proportionally. In this demo, we'll show sample CUR queries and the SCAD Containers Cost Allocation QuickSight dashboard. |
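The SCAD session's core idea, charging each pod for what it requests or consumes and spreading an instance's idle cost proportionally, can be sketched in a few lines. This is a conceptual toy over a single vCPU dimension with hypothetical pod names; real SCAD also weighs memory and reports the split through the CUR:

```python
# Toy illustration of split cost allocation: each pod is charged for
# max(request, usage) vCPU, and the instance's unallocated (idle) cost is
# redistributed in proportion to those shares. Single resource dimension
# only; the numbers and pod names are hypothetical.

def split_costs(instance_cost: float, instance_vcpu: float,
                pods: dict[str, tuple[float, float]]) -> dict[str, float]:
    """pods maps name -> (vcpu_request, vcpu_usage); returns cost per pod."""
    shares = {name: max(req, used) for name, (req, used) in pods.items()}
    allocated_vcpu = sum(shares.values())
    unit = instance_cost / instance_vcpu            # cost per vCPU-hour
    direct = {n: s * unit for n, s in shares.items()}
    idle = instance_cost - sum(direct.values())     # cost of unused capacity
    return {n: round(direct[n] + idle * shares[n] / allocated_vcpu, 4)
            for n in pods}

# An 8-vCPU instance costing $1/hour shared by two pods: one over-requested,
# one bursting past its request.
costs = split_costs(instance_cost=1.0, instance_vcpu=8,
                    pods={"api": (2.0, 1.5), "worker": (1.0, 2.0)})
print(costs)  # both pods end up with a 2-vCPU share, so the $1 splits evenly
```

Note the invariant that makes this useful for chargeback: the per-pod costs always sum back to the full instance cost, so nothing is left unattributed.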
Join AWS for an exclusive demo to see how we're simplifying GitOps deployments on Kubernetes
Time: Wednesday, March 25, 11:35 AM to 12:05 PM CET
Location: Demo Theatre Located in Solutions Showcase
Speakers: Sebastien Allamand, Sr. Containers Specialist Solutions Architect; Shani Adadi Kazaz, Sr. Container Specialist
This demo session will dive into the challenges teams face during application deployments, ranging from managing upgrades and security patches to scaling across multiple clusters. We'll demonstrate how Amazon EKS Capabilities for continuous deployment using Argo CD eliminates these operational burdens while enabling automated application deployment across development, staging, and production environments. This fully managed capability streamlines continuous deployment by automatically synchronizing desired application state from Git repositories to multiple clusters. It provides native AWS integrations with AWS IAM Identity Center for single sign-on authentication, AWS Secrets Manager for secure credential management, and AWS CodeConnections for streamlined repository access. AWS manages all operational aspects, including security patches, upgrades, and scaling, allowing you to focus on application delivery rather than maintaining deployment tools.
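The synchronization step described above, continuously converging live cluster state toward the desired state declared in Git, reduces to a diff-and-reconcile loop. A minimal conceptual sketch, with manifests modeled as plain dicts and all resource names hypothetical (this illustrates the GitOps idea, not Argo CD's internal implementation):

```python
# Minimal illustration of a GitOps reconcile step: compare desired state
# (as declared in Git) against live cluster state and compute the operations
# needed to converge. Resources are modeled as dicts keyed by name; the
# names and fields below are hypothetical.

def diff(desired: dict[str, dict], live: dict[str, dict]) -> dict[str, list[str]]:
    ops: dict[str, list[str]] = {"create": [], "update": [], "delete": []}
    for name, manifest in desired.items():
        if name not in live:
            ops["create"].append(name)      # declared but not yet deployed
        elif live[name] != manifest:
            ops["update"].append(name)      # deployed but drifted from Git
    ops["delete"] = [n for n in live if n not in desired]  # prune removed resources
    return ops

desired = {"web": {"image": "web:v2", "replicas": 3},
           "db":  {"image": "db:v1", "replicas": 1}}
live    = {"web": {"image": "web:v1", "replicas": 3},
           "old-job": {"image": "job:v1"}}
print(diff(desired, live))  # create db, update web, delete old-job
```

Running this loop continuously is what keeps every target cluster in sync with the repository, which is why the managed capability can fan the same desired state out across development, staging, and production.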
Create It. Transform It. Take It. Come see what AWS is building.
AI meets Amazon EKS in a whole new way.
See it. Touch it. Leave with something unforgettable.
Any guesses?
AWS Booth Experience at Previous KubeCons
At our AWS booth during previous KubeCons, we showcased cutting-edge solutions and innovations in Kubernetes and cloud-native technologies. Attendees had the opportunity to engage with experts and participate in hands-on demos.