AWS

AWS at KubeCon + CloudNativeCon North America 2024

Nov 12-15 | Salt Lake City, Utah

Meet us at Booth F1

Join us at KubeCon Salt Lake City at Booth F1, where you can learn about Kubernetes best practices, strategies, and our latest innovations from AWS experts. The AWS booth will feature live, interactive product demonstrations focused on cost optimization, observability, security, governance, data and AI/ML, and platform strategy. Don’t miss this opportunity to engage with the Kubernetes community and connect with AWS experts.

AWS is coming to KubeCon + CloudNativeCon NA 2024

Reserve a time to meet with the EKS service team and specialists

Talk to EKS experts to dive into topics like cost optimization and developer experience, learn more about the EKS roadmap, or tell us what you'd like to discuss.

See Our Solutions in Action: Chat with Our Team at the AWS Booth

Rajdeep Saha
Principal Solution Architect
Sai Vennam
Principal Specialist Solution Architect
Farrah Campbell
Head of Developer Advisory Program
Carlos Santana
Worldwide Solution Architect
Sandhya Job
Containers Worldwide Specialist
Praseeda Sathaye
Principal SA, Containers & OSS
Vara Bonthu
Principal, OSS Specialist Solution Architect
Nirmal Mehta
Principal Worldwide Solution Architect
Carlos Rueda
Worldwide Solution Architect
Apoorva Kulkarni
Containers Solution Architect

Visit the AWS Booth F1 to join our demos and learn from experts how to address key Kubernetes challenges, including cost optimization, scaling AI/ML workloads, building internal developer platforms (IDPs), and ensuring security at scale. This is a unique opportunity to discover best practices that can help your team navigate the complexities of Kubernetes management. Gain valuable insights to enhance operational efficiency, drive innovation, and transform your Kubernetes journey.

  • Discover Karpenter, an open-source CNCF project that provides a flexible, high-performance Kubernetes cluster autoscaler built on AWS. Karpenter enhances application availability and cluster efficiency by automatically launching right-sized compute resources in response to changing load, all while optimizing costs and respecting scheduling constraints.

  • In this session, we’ll demonstrate how to achieve breakthrough performance in Large Language Model (LLM) inference by integrating NVIDIA Triton Inference Server with vLLM on Amazon EKS. Attendees will learn to reduce latency and improve throughput through Triton’s versatile model-serving and vLLM’s optimized memory management, along with scalable orchestration for various inference workloads.

  • Building an Internal Developer Platform (IDP) for enterprise environments demands strategic planning and the right technologies to ensure scalability and reliability. In this hands-on workshop, you’ll explore tools and practices from the Cloud Native Operational Excellence (CNOE) initiative, helping you optimize costs and accelerate the benefits of your IDP.

  • This presentation explores how Karpenter optimizes compute usage for Kubernetes workloads on AWS by maximizing efficiency through strategic instance selection and continuous optimization. As a node lifecycle manager and autoscaler, Karpenter integrates efficiency boosters like AWS Graviton processors and Spot Instances, ensuring high availability while optimizing price/performance and supporting sustainability goals for organizations.
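To make the instance-selection ideas above concrete, here is a minimal Karpenter NodePool sketch (not from the session; the names and the CPU limit are illustrative, and it assumes a matching EC2NodeClass named `default` exists) that lets Karpenter choose among Graviton (arm64) Spot and On-Demand capacity:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default              # illustrative name
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default        # assumes this EC2NodeClass exists
      requirements:
        # Allow Spot with On-Demand fallback
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        # Allow Graviton (arm64) instances for better price/performance
        - key: kubernetes.io/arch
          operator: In
          values: ["arm64"]
  limits:
    cpu: "1000"              # cap total CPU this NodePool may provision
```

Broader requirements generally give Karpenter more room to pick the cheapest instance that satisfies pending pods' scheduling constraints.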

  • Modern applications are composed of diverse design patterns, such as event-driven architectures, microservices, and data on Kubernetes, among others. Due to the unique nature of these applications, they require scaling based on metrics beyond the traditional CPU and memory usage. In this session, you will learn how to leverage CNCF Karpenter (part of Kubernetes Autoscaling-SIG) and CNCF KEDA to scale your application from zero to (near) infinity and back to zero, ensuring performance meets the desired SLOs while considering cost optimization.
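As a hedged sketch of the event-driven scaling described above, a KEDA ScaledObject can scale a hypothetical queue-consumer Deployment on SQS queue depth, down to zero replicas when the queue is empty (the Deployment name, queue URL, and thresholds are placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer         # illustrative name
spec:
  scaleTargetRef:
    name: queue-consumer       # hypothetical Deployment to scale
  minReplicaCount: 0           # scale to zero when the queue is empty
  maxReplicaCount: 50
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-west-2.amazonaws.com/123456789012/my-queue  # placeholder
        queueLength: "5"       # target messages per replica
        awsRegion: us-west-2
```

KEDA handles scaling the Deployment's replicas, while Karpenter provisions or removes the nodes those replicas need.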

  • AWS Controllers for Kubernetes (ACK) is an open-source project that extends the Kubernetes API to manage AWS resources. In this session, you'll learn how to deploy an application alongside its AWS dependencies—like RDS databases, S3 buckets, and SQS queues—using a single, unified YAML interface.
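To illustrate the unified interface described above, a minimal ACK manifest that declares an S3 bucket as a Kubernetes object might look like this (a sketch, not from the session; bucket names are placeholders, and it assumes the ACK S3 controller is installed in the cluster):

```yaml
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: my-app-assets                  # Kubernetes object name (illustrative)
spec:
  name: my-app-assets-123456789012     # S3 bucket name (placeholder; must be globally unique)
```

Because the bucket is an ordinary Kubernetes resource, it can live in the same Git repository and sync pipeline as the application manifests that depend on it.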

  • Unlock the power of large language models (LLMs) with Ray Serve and vLLM on Amazon EKS. This session demonstrates how to achieve scalability and performance in ML inference by combining Ray Serve’s distribution with vLLM’s memory efficiency. Learn to optimize for latency and throughput in fast, cost-effective deployments across your Kubernetes cluster powered by Karpenter and Amazon EC2. Discover how this combination accelerates AI-driven applications for responsive LLM inference at any scale.

  • Join us to discover the powerful capabilities of Karpenter in automating upgrades for your Kubernetes cluster worker plane. Learn how Karpenter can effortlessly upgrade your worker nodes when AWS releases a new AMI, or how you can pin to specific or custom AMIs for security compliance. In this session, we'll demonstrate how Karpenter’s Drift features optimize deployment costs while minimizing operational overhead, ensuring a seamless upgrade experience for your cluster!
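As a hedged sketch of the AMI pinning described above, a Karpenter EC2NodeClass can select a specific AMI by ID (all names, tags, and the AMI ID below are placeholders; the role and discovery tags are assumptions about your environment). When the pinned ID changes, Karpenter's drift detection rolls the worker nodes:

```yaml
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: pinned-ami                       # illustrative name
spec:
  amiFamily: AL2023
  amiSelectorTerms:
    - id: ami-0123456789abcdef0          # placeholder; pin a vetted AMI for compliance
  role: KarpenterNodeRole                # assumes this IAM role exists
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster   # placeholder discovery tag
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster
```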

  • The control plane is a critical part of the Kubernetes infrastructure. In this talk, we review the metrics associated with the control plane and explain how to use them to understand the health of your cluster.

  • This session explores the integration of Karpenter and Kubecost, two open-source tools for optimizing Kubernetes cluster management and cost efficiency. Karpenter, a CNCF project, is a high-performance, flexible Kubernetes cluster autoscaler for AWS that enhances application availability by dynamically provisioning resources. Kubecost offers real-time cost visibility for Kubernetes clusters. Participants will learn how Karpenter automatically scales clusters, how to implement Kubecost for cost visualization, strategies for combining these tools effectively, and best practices for optimizing cluster performance and cost efficiency. The session also demonstrates how to use AWS Split Cost Allocation Data in AWS Billing for improved Amazon EKS cost visibility, equipping attendees with practical knowledge for more efficient Kubernetes cluster management on AWS.

  • Join us to explore the EKS Upgrades accelerator, an innovative solution designed to streamline your EKS upgrade process. Discover how it systematically analyzes deprecated APIs, converts Kubernetes manifests, and suggests alternatives for end-of-life resources. You'll see how it generates organized pull requests and offers a comprehensive Amazon QuickSight dashboard for centralized visibility. Empower your organization with informed decision-making and proactive planning, ensuring a smooth transition to the latest EKS versions!

  • Run a unified Kubernetes distribution across hybrid environments with EKS Anywhere, simplifying the management of both containerized and virtual machine workloads on-premises and at the edge. Leverage the same secure and reliable Kubernetes used in Amazon EKS on your infrastructure, from hobbyist hardware to enterprise-grade servers. Join us to see how EKS Anywhere enables KubeVirt to run virtual machines alongside containerized workloads for a seamless experience across on-premises, edge, and cloud environments.

  • Whether you’re in the early stages of evaluating an IDP or already on the journey, this workshop offers a unique opportunity to learn from enterprises at the forefront of cloud-native operational excellence. Join us and gain the skills and insights needed to build and operate an enterprise-grade IDP that empowers your development teams to deliver software faster and more efficiently, at scale.

  • NVIDIA NIM helps developers and enterprises easily deploy and manage AI models at scale. Customers deploy NIM for various AI pipeline use cases, such as RAG, fine-tuning, and Digital Humans for customer service. The NIM Operator deploys and manages the lifecycle of optimized NIM models and NVIDIA NeMo Microservices AI pipelines in Kubernetes environments, including Amazon EKS, handling production-grade deployment, upgrades, scaling, and observability. In this session, we will share best practices and demonstrate how to run a production-grade NIM Operator AI RAG pipeline using Meta-Llama-3-8B-Instruct, NV-EmbedQA-E5-v5, and NV-RerankQA-Mistral4B-v3 on AWS.

  • Join our session to learn how to efficiently isolate Spark jobs in dedicated namespaces using Symphony, ArgoCD, and AWS Controllers for Kubernetes (ACK). Discover how to create on-demand namespaces with tailored Karpenter NodePools and IAM permissions for S3 access, ensuring fast data transfer.

  • Join our lightning talk to discover effective strategies for monitoring containerized applications on Amazon EKS. Learn how to leverage AWS services like Amazon CloudWatch Container Insights and CloudWatch Logs Insights to gain visibility into your workloads, ensuring reliability and quick issue resolution. Don’t miss out—reserve your spot now to transform your observability toolkit!

  • Jupyter has become the go-to environment for data science and machine learning, but managing multi-tenant setups can be complex. In this demo, we’ll showcase how JupyterHub, integrated with SSO systems like AWS Cognito, enables secure, managed environments tailored to user needs. By leveraging Karpenter for dynamic compute provisioning and the NVIDIA Device Plugin for fractional GPU usage, we optimize resource allocation and cost. Additionally, we'll highlight how Elyra enhances collaboration by allowing data scientists to create and orchestrate complex workflows directly from their Jupyter notebooks.
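One common way to enable the fractional GPU usage mentioned above is time-slicing with the NVIDIA device plugin. A hedged sketch of the plugin's sharing configuration (delivered to the plugin via a ConfigMap; the replica count is illustrative):

```yaml
# Config consumed by the NVIDIA device plugin
version: v1
sharing:
  timeSlicing:
    resources:
      - name: nvidia.com/gpu
        replicas: 4    # advertise each physical GPU as 4 schedulable GPUs
```

With this in place, four notebook pods each requesting one `nvidia.com/gpu` can share a single physical GPU, trading isolation for utilization.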

  • Learn how to implement scalable and reliable patterns for CI/CD pipelines, GitOps workflows, observability, and more, tailored to your organization’s needs.

  • Join us for a live demo using Amazon Application Recovery Controller (ARC) to move workloads from degraded Availability Zones (AZs) to healthy ones, ensuring new Kubernetes pods and nodes are launched only in healthy AZs.

  • Join us for a live demo on securing your Kubernetes environment throughout its lifecycle using open-source tools and AWS services! Learn to detect and investigate threats in Amazon EKS with Kubescape and Amazon GuardDuty, while enhancing cluster security with Open Policy Agent to manage admission controls. This session provides a comprehensive understanding of Kubernetes security, from pre-deployment checks to real-time threat detection.

  • Interest is growing in shifting EKS cluster creation from Infrastructure as Code (IaC) to Kubernetes APIs, for reasons such as isolating clusters per development team and using ephemeral clusters for data platforms. In this session, we will demonstrate creating and bootstrapping EKS clusters using ArgoCD and AWS Controllers for Kubernetes (ACK).

  • Join our session to explore how to optimize large language model training using Amazon EKS, AWS Trainium, and the Neuron stack. Discover strategies for enhancing speed and cost-effectiveness in distributed training, along with best practices for using DoEKS blueprints. Don’t miss our live demo showcasing Llama2 training on Amazon EKS.

  • Join the CNOE Community! Discover how to deploy the CNOE stack using your favorite CNCF technologies and tap into the collective expertise of the CNOE community.

  • Join us to explore how VPC Lattice enables secure sharing of EKS services across multiple components like Ingress and Transit Gateway. This session will highlight the benefits of data forwarding, load balancing, policy management, and observability in Kubernetes environments. Experience hands-on demonstrations of VPC Lattice deployments using the Gateway API controller, including IAM policy manifests, TLS passthrough, multi-cluster canary testing, and service migration. Discover how to enhance your Kubernetes service sharing with these powerful tools!
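For context on the Gateway API objects the VPC Lattice Gateway API controller consumes, here is a minimal HTTPRoute sketch for the multi-cluster canary testing mentioned above (the Gateway and Service names, ports, and weights are all illustrative assumptions):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout-route              # illustrative name
spec:
  parentRefs:
    - name: my-lattice-gateway      # hypothetical Gateway backed by VPC Lattice
  rules:
    - backendRefs:
        - name: checkout-v1         # stable Service
          kind: Service
          port: 8080
          weight: 90                # 90% of traffic
        - name: checkout-v2         # canary Service
          kind: Service
          port: 8080
          weight: 10                # 10% canary traffic
```

Shifting the weights gradually moves traffic to the canary without changing clients, since both backends sit behind the same Lattice service.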

Join Us for an Exclusive Demo with Nvidia!

Enabling enterprise AI pipelines using the NVIDIA NIM Operator on AWS

November 13: 4:30 PM - 6:00 PM MST  |  November 14: 12:00 PM - 1:30 PM MST

NVIDIA NIM helps developers and enterprises easily deploy and manage AI models at scale. Customers deploy NIM for various AI pipeline use cases, such as RAG, fine-tuning, and Digital Humans for customer service. The goal of the NIM Operator is to deploy and manage the lifecycle of optimized NIM models and NVIDIA NeMo Microservices AI pipelines in a Kubernetes environment, including Amazon Elastic Kubernetes Service (EKS). The NIM Operator enables optimized enterprise AI pipelines on AWS, handling production-grade deployment, upgrades, scaling, and observability. In this session, we will share best practices and demonstrate how to run a production-grade NIM Operator AI RAG pipeline using Meta-Llama-3-8B-Instruct, NV-EmbedQA-E5-v5, and NV-RerankQA-Mistral4B-v3 on AWS.

Whether you're new to Kubernetes or looking to deepen your expertise, these AWS sessions are designed to help you stay ahead in your Kubernetes journey, drive innovation, and enhance operational efficiency within your team. Don't miss the chance to connect, learn, and grow with the community.

AWS Booth Experience at Previous KubeCons

At our AWS booth during previous KubeCons, we showcased cutting-edge solutions and innovations in Kubernetes and cloud-native technologies. Attendees had the opportunity to engage with experts and participate in hands-on demos.