Containers

Tag: autoscaling

How Getir optimized their Amazon EKS compute using Karpenter

Getir is the pioneer of ultrafast grocery delivery. Founded in 2015, the company revolutionized last-mile delivery with its grocery-in-minutes proposition, and today it is a conglomerate incorporating nine verticals under the same brand. Getir uses Amazon Elastic Kubernetes Service (Amazon EKS) to host applications on AWS. One of the foremost challenges […]

Deploying Karpenter Nodes with Multus on Amazon EKS

Container-based telco workloads use Multus CNI primarily for traffic and network segmentation. Amazon Elastic Kubernetes Service (Amazon EKS) supports Multus CNI, enabling users to attach multiple network interfaces and apply advanced network configuration and segmentation to Kubernetes-based applications running on AWS. One of the many benefits of running applications on AWS is resource elasticity (scaling out and scaling […]
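The attachment that Multus wires up is declared as a NetworkAttachmentDefinition custom resource. As a rough sketch only (the namespace "telco", the attachment name, the ipvlan master interface, and the subnet are assumptions, not values from the post), it could be created with the Kubernetes Python client like this:

```python
from kubernetes import client, config

# Minimal sketch: register a secondary ipvlan network for Multus-attached pods.
# The names, master interface, and CIDR below are illustrative only.
config.load_kube_config()
custom = client.CustomObjectsApi()

net_attach_def = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "signaling-net", "namespace": "telco"},
    "spec": {
        "config": (
            '{"cniVersion": "0.3.1", "type": "ipvlan", "master": "eth1", '
            '"ipam": {"type": "host-local", "subnet": "10.0.4.0/24"}}'
        )
    },
}

custom.create_namespaced_custom_object(
    group="k8s.cni.cncf.io",
    version="v1",
    namespace="telco",
    plural="network-attachment-definitions",
    body=net_attach_def,
)
```

A pod then opts into the extra interface by listing the attachment in its k8s.v1.cni.cncf.io/networks annotation, which Multus reads when wiring the pod's secondary interface.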

Manage scale-to-zero scenarios with Karpenter and Serverless

March 2024: This blog has been updated for Karpenter version v0.33.1 and the v1beta1 specification. Cluster Autoscaler has been the de facto industry-standard autoscaling mechanism on Kubernetes since the platform's earliest versions. However, with the evolving complexity and number of containerized workloads, our customers running on Amazon Elastic Kubernetes Service (Amazon […]
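For the scale-to-zero behavior named in the title, a Karpenter v1beta1 NodePool can be configured to consolidate empty nodes almost immediately, so capacity disappears when no pods are pending. The sketch below is illustrative only; the NodePool name, the EC2NodeClass reference, and the consolidation timing are assumptions rather than the post's actual configuration.

```python
from kubernetes import client, config

# Minimal sketch, assuming Karpenter v1beta1 CRDs are installed: a NodePool
# that consolidates empty nodes quickly so the pool can drop to zero nodes
# when nothing is pending. The nodeClassRef name is a hypothetical example.
config.load_kube_config()
custom = client.CustomObjectsApi()

node_pool = {
    "apiVersion": "karpenter.sh/v1beta1",
    "kind": "NodePool",
    "metadata": {"name": "scale-to-zero"},
    "spec": {
        "template": {
            "spec": {
                "nodeClassRef": {"name": "default"},  # hypothetical EC2NodeClass
                "requirements": [
                    {
                        "key": "karpenter.sh/capacity-type",
                        "operator": "In",
                        "values": ["on-demand"],
                    }
                ],
            }
        },
        "disruption": {
            "consolidationPolicy": "WhenEmpty",
            "consolidateAfter": "30s",
        },
    },
}

custom.create_cluster_custom_object(
    group="karpenter.sh",
    version="v1beta1",
    plural="nodepools",
    body=node_pool,
)
```

With WhenEmpty consolidation, Karpenter removes a node once it has sat empty for the consolidateAfter window, which is what lets the pool scale all the way to zero.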

Life360’s journey to a multi-cluster Amazon EKS architecture to improve resiliency

This post was coauthored by Jesse Gonzalez, Sr. Staff Site Reliability Engineer, and Naveen Puvvula, Sr. Eng Manager, Reliability Engineering, at Life360. Life360 offers advanced driving, digital, and location safety features and location sharing for the entire family. Since its launch in 2008, it has become an essential solution for modern life around the world, […]

Eliminate Kubernetes node scaling lag with pod priority and over-provisioning

In Kubernetes, the data plane has two layers of scaling: a pod layer and a worker node layer. Pods can be autoscaled using the Horizontal Pod Autoscaler (HPA) or the Vertical Pod Autoscaler (VPA), and nodes can be autoscaled using Cluster Autoscaler (CA) or Karpenter. If worker nodes are running at full capacity and new pods […]
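The over-provisioning technique named in the title is commonly implemented with a low-priority placeholder Deployment of pause containers that reserves headroom and gets preempted the moment real pods need the space. A minimal sketch, with hypothetical names and sizes, using the Kubernetes Python client:

```python
from kubernetes import client, config

# Minimal sketch of the over-provisioning pattern: a negative-priority class
# plus a "pause" Deployment that reserves headroom. Names, replica count,
# resource requests, and the pause image tag are assumptions for illustration.
config.load_kube_config()

client.SchedulingV1Api().create_priority_class(
    client.V1PriorityClass(
        metadata=client.V1ObjectMeta(name="overprovisioning"),
        value=-1,  # lower than the default priority of 0
        global_default=False,
        description="Placeholder pods that real workloads may preempt",
    )
)

pause = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="overprovisioning", namespace="default"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "overprovisioning"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "overprovisioning"}),
            spec=client.V1PodSpec(
                priority_class_name="overprovisioning",
                containers=[
                    client.V1Container(
                        name="pause",
                        image="registry.k8s.io/pause:3.9",
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "1", "memory": "1Gi"}
                        ),
                    )
                ],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=pause)
```

When a higher-priority pod arrives, the scheduler evicts the pause pods to make room immediately, and the node autoscaler then replaces the reserved headroom in the background.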

Amazon CloudWatch Prometheus metrics now generally available

By Imaya Kumar Jagannathan, TP Kohli, and Michael Hausenblas. In Using Prometheus Metrics in Amazon CloudWatch, we showed you how to use the beta version of Amazon CloudWatch's support for ingesting Prometheus metrics. Now that this feature is generally available, we explore its benefits in greater detail and show you how to use […]

Autoscaling Amazon EKS services based on custom Prometheus metrics using CloudWatch Container Insights

In a Kubernetes cluster, the Horizontal Pod Autoscaler can automatically scale the number of Pods in a Deployment based on observed CPU utilization and memory usage. The autoscaler depends on the Kubernetes metrics server, which collects resource metrics from Kubelets and exposes them in the Kubernetes API server through the Metrics API. The metrics server has […]
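To illustrate the resource-metrics path the excerpt describes (the post itself goes on to drive scaling from custom Prometheus metrics), here is a minimal HPA sketch with the Kubernetes Python client; the Deployment name "orders-api", the namespace, and the thresholds are assumptions:

```python
from kubernetes import client, config

# Minimal sketch: an autoscaling/v1 HPA that scales a hypothetical Deployment
# named "orders-api" on the CPU utilization reported by the metrics server.
config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="orders-api", namespace="default"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="orders-api"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=60,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

The HPA controller polls the Metrics API backed by the metrics server and adjusts the Deployment's replica count to hold average CPU utilization near the target.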

Autoscaling EKS on Fargate with custom metrics

NOTICE: October 04, 2024 – This post no longer reflects the best guidance for configuring a service mesh with Amazon EKS, and its examples no longer work as shown. Please refer to newer content on Amazon VPC Lattice. This is a guest post by Stefan Prodan of Weaveworks. Autoscaling is an approach to automatically scale […]