Containers
Amazon ECS announces IPv6-only support
This post announces Amazon ECS support for IPv6-only workloads, which lets you run containerized applications in IPv6-only environments without IPv4 dependencies while maintaining compatibility with existing applications and AWS services. The new capability helps organizations address IPv4 address exhaustion, streamline network architecture, improve their security posture, and meet compliance requirements for IPv6 adoption.
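As a rough sketch of what this can look like in practice, the boto3 calls below create an IPv6-native subnet and launch an ECS service into it; the VPC, security group, cluster, and task definition identifiers are placeholders, and the post itself may use different tooling.

```python
# Illustrative sketch (not from the post): provision an IPv6-native subnet with boto3
# and point an awsvpc-mode ECS service at it. IDs and names are placeholders.
import boto3

ec2 = boto3.client("ec2")
ecs = boto3.client("ecs")

# Create an IPv6-only (IPv6-native) subnet in an existing VPC that has an IPv6 CIDR.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",                # assumed VPC with an associated IPv6 block
    Ipv6Native=True,                              # no IPv4 addresses in this subnet
    Ipv6CidrBlock="2600:1f18:aaaa:bb00::/64",     # carved from the VPC's IPv6 block
    AvailabilityZone="us-east-1a",
)["Subnet"]

# Run an existing task definition as a service inside the IPv6-only subnet.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="ipv6-only-web",
    taskDefinition="web-app:1",                   # assumed task definition family:revision
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": [subnet["SubnetId"]],
            "securityGroups": ["sg-0123456789abcdef0"],
        }
    },
)
```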
Implementing granular failover in multi-Region Amazon EKS
In this post, we demonstrate how to configure Amazon Route 53 to enable unique failover behavior for each application within multi-tenant Amazon EKS environments across AWS Regions. This solution allows organizations to maintain the cost benefits of shared infrastructure while meeting diverse availability requirements by implementing application-specific health checks that provide granular control over failover scenarios.
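The core of such a setup is a health check plus a PRIMARY/SECONDARY record pair per application, so each app in the shared clusters fails over independently. The boto3 sketch below illustrates that idea with assumed hosted zone, domain, and endpoint values; it is not the post's exact configuration.

```python
# Illustrative sketch (assumed values): a per-application Route 53 health check plus
# PRIMARY/SECONDARY failover records for one app served from two Regions.
import uuid
import boto3

r53 = boto3.client("route53")

# Health check against this application's own endpoint in the primary Region.
hc = r53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "app1.us-east-1.example.com",
        "ResourcePath": "/healthz",
        "Port": 443,
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)["HealthCheck"]

def failover_record(role, target, health_check_id=None):
    record = {
        "Name": "app1.example.com",
        "Type": "CNAME",
        "SetIdentifier": f"app1-{role.lower()}",
        "Failover": role,                      # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",       # assumed hosted zone
    ChangeBatch={"Changes": [
        failover_record("PRIMARY", "app1.us-east-1.example.com", hc["Id"]),
        failover_record("SECONDARY", "app1.us-west-2.example.com"),
    ]},
)
```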
Use Raspberry Pi 5 as Amazon EKS Hybrid Nodes for edge workloads
In this post, we demonstrate how to use a Raspberry Pi 5 as an Amazon EKS hybrid node to process edge workloads while maintaining cloud connectivity. We show how to set up an EKS cluster that connects cloud and edge infrastructure, secure connectivity using WireGuard VPN, enable container networking with Cilium, and implement a real-world IoT application using an ultrasonic sensor that demonstrates edge-cloud integration.
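As a hedged illustration of the cloud side of that setup, the boto3 sketch below creates an EKS cluster that accepts hybrid nodes from an on-premises network; the CIDRs, IAM role, and subnet IDs are assumptions, and the remoteNetworkConfig parameter requires a recent boto3 release.

```python
# Illustrative sketch (assumed CIDRs and names): create an EKS cluster whose control plane
# accepts hybrid nodes from an on-premises network, the way a Raspberry Pi behind a
# WireGuard tunnel would join.
import boto3

eks = boto3.client("eks")

eks.create_cluster(
    name="edge-demo",
    version="1.31",
    roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",   # assumed IAM role
    resourcesVpcConfig={
        "subnetIds": ["subnet-0aaa", "subnet-0bbb"],
        "securityGroupIds": ["sg-0ccc"],
    },
    accessConfig={"authenticationMode": "API_AND_CONFIG_MAP"},
    # Tell the control plane which on-premises CIDRs hybrid nodes and their pods use,
    # e.g. the ranges routed over WireGuard and used by Cilium on the Raspberry Pi.
    remoteNetworkConfig={
        "remoteNodeNetworks": [{"cidrs": ["10.200.0.0/24"]}],
        "remotePodNetworks": [{"cidrs": ["10.201.0.0/16"]}],
    },
)
```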
Migrating from AWS CodeDeploy to Amazon ECS for blue/green deployments
In this post, we explore the migration path from AWS CodeDeploy to Amazon ECS for blue/green deployments, discussing key architectural differences and implementation considerations. We examine three different migration approaches – in-place update, new service with existing load balancer, and new service with new load balancer – along with their respective trade-offs in terms of complexity, risk, downtime, and cost.
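To make the second approach concrete, the sketch below (with assumed names and ARNs) adds a new target group and a new ECS-managed service behind the existing load balancer; shifting listener traffic and retiring the CodeDeploy-managed service are separate steps not shown here.

```python
# Illustrative sketch of the "new service with existing load balancer" approach:
# stand up a second target group and a new ECS service behind the same ALB.
import boto3

elbv2 = boto3.client("elbv2")
ecs = boto3.client("ecs")

# A fresh target group in the existing load balancer's VPC for the new ("green") service.
green_tg = elbv2.create_target_group(
    Name="web-green",
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
    HealthCheckPath="/healthz",
)["TargetGroups"][0]

# New service using the built-in ECS deployment controller, registered against the green
# target group; the old CodeDeploy-managed service keeps serving traffic until the ALB
# listener is repointed.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="web-green",
    taskDefinition="web-app:42",
    desiredCount=3,
    launchType="FARGATE",
    deploymentController={"type": "ECS"},
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0aaa", "subnet-0bbb"],
            "securityGroups": ["sg-0ccc"],
        }
    },
    loadBalancers=[{
        "targetGroupArn": green_tg["TargetGroupArn"],
        "containerName": "web",
        "containerPort": 8080,
    }],
)
```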
Kubernetes right-sizing with metrics-driven GitOps automation
In this post, we introduce an automated, GitOps-driven approach to resource optimization in Amazon EKS using AWS services such as Amazon Managed Service for Prometheus and Amazon Bedrock. The solution helps optimize Kubernetes resource allocation through metrics-driven analysis, pattern-aware optimization strategies, and automated pull request generation while maintaining GitOps principles of collaboration, version control, and auditability.
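A minimal sketch of the metrics-driven step might look like the following; the Prometheus endpoint, query window, and headroom policy are assumptions for illustration, not the post's exact pipeline.

```python
# Hypothetical sketch: pull p95 container CPU usage from a Prometheus-compatible API and
# emit a resource patch that an automation bot could commit and open as a pull request.
import requests

PROM_URL = "https://prometheus.example.com/api/v1/query"   # assumed query endpoint/proxy

QUERY = (
    'quantile_over_time(0.95, '
    'rate(container_cpu_usage_seconds_total{namespace="shop", container="checkout"}[5m])'
    '[7d:5m])'
)

resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=30)
resp.raise_for_status()
p95_cores = float(resp.json()["data"]["result"][0]["value"][1])

# Round up to a sensible request and render a patch for the GitOps repository.
recommended_millicores = max(50, int(p95_cores * 1000 * 1.2))   # 20% headroom (assumed policy)
patch = f"""apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  namespace: shop
spec:
  template:
    spec:
      containers:
        - name: checkout
          resources:
            requests:
              cpu: {recommended_millicores}m
"""
print(patch)   # a real pipeline would commit this to a branch and open a pull request
```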
How to build highly available Kubernetes applications with Amazon EKS Auto Mode
In this post, we explore how to build highly available Kubernetes applications using Amazon EKS Auto Mode by implementing critical features like Pod Disruption Budgets, Pod Readiness Gates, and Topology Spread Constraints. Through various test scenarios including pod failures, node failures, AZ failures, and cluster upgrades, we demonstrate how these implementations maintain service continuity and maximize uptime in EKS Auto Mode environments.
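For reference, a Pod Disruption Budget and a topology spread constraint expressed with the Kubernetes Python client could look like this; the application name and namespace are placeholders.

```python
# Illustrative sketch (assumed app name/namespace): create a PodDisruptionBudget; the
# topology spread constraint belongs in the same workload's pod template.
from kubernetes import client, config

config.load_kube_config()   # or load_incluster_config() inside the cluster

pdb = client.V1PodDisruptionBudget(
    metadata=client.V1ObjectMeta(name="web-pdb", namespace="default"),
    spec=client.V1PodDisruptionBudgetSpec(
        min_available="50%",    # keep at least half the replicas during voluntary disruptions
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
    ),
)
client.PolicyV1Api().create_namespaced_pod_disruption_budget(namespace="default", body=pdb)

# Topology spread constraint for the same workload: spread replicas evenly across
# Availability Zones so a single-AZ failure leaves capacity elsewhere.
spread = client.V1TopologySpreadConstraint(
    max_skew=1,
    topology_key="topology.kubernetes.io/zone",
    when_unsatisfiable="DoNotSchedule",
    label_selector=client.V1LabelSelector(match_labels={"app": "web"}),
)
```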
How to run AI model inference with GPUs on Amazon EKS Auto Mode
In this post, we show you how to quickly deploy inference workloads on EKS Auto Mode and demonstrate key features that streamline GPU management. We walk through a practical example of deploying open-weight models from OpenAI with vLLM, along with best practices for model deployment and operational efficiency.
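A hedged sketch of such a deployment is shown below, using the Kubernetes Python client, the vLLM OpenAI-compatible server image, and an assumed open-weight model identifier; depending on the cluster's NodePool setup, a node selector for accelerated instances may also be needed.

```python
# Illustrative sketch (model name, namespace, and sizing are assumptions): a Deployment that
# serves an open-weight model with vLLM and requests one GPU, letting EKS Auto Mode
# provision a suitable accelerated node.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="vllm",
    image="vllm/vllm-openai:latest",
    args=["--model", "openai/gpt-oss-20b"],        # assumed open-weight model identifier
    ports=[client.V1ContainerPort(container_port=8000)],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1", "memory": "32Gi"},
        requests={"cpu": "4", "memory": "32Gi"},
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="vllm-inference", namespace="default"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "vllm-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "vllm-inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```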
Dynamic Kubernetes request right sizing with Kubecost
In this post, we demonstrate how to use the Kubecost Amazon EKS add-on to reduce infrastructure costs and improve Kubernetes efficiency through Container Request Right Sizing, which identifies and fixes inefficient container resource configurations. We explore how to review Kubecost's right-sizing recommendations and apply them through either one-time updates or scheduled automated resizing within Amazon EKS environments for continuous resource optimization.
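Applying a recommendation ultimately comes down to patching the workload's requests. The sketch below shows what a one-time update could look like with the Kubernetes Python client; the workload names and values are illustrative, not actual Kubecost output.

```python
# Illustrative sketch: apply a right-sizing recommendation as a strategic-merge patch of
# the workload's container requests; a scheduled job could run this automatically.
from kubernetes import client, config

config.load_kube_config()

# Example recommendation shape; in practice these values would come from Kubecost's report.
recommendation = {"namespace": "shop", "deployment": "checkout",
                  "container": "checkout", "cpu": "150m", "memory": "256Mi"}

patch = {
    "spec": {"template": {"spec": {"containers": [{
        "name": recommendation["container"],       # merge key: container name
        "resources": {"requests": {
            "cpu": recommendation["cpu"],
            "memory": recommendation["memory"],
        }},
    }]}}}
}

client.AppsV1Api().patch_namespaced_deployment(
    name=recommendation["deployment"],
    namespace=recommendation["namespace"],
    body=patch,
)
```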
Introducing Seekable OCI Parallel Pull mode for Amazon EKS
In this post, we explore how SOCI Parallel Pull Mode transforms container image pulls through configurable parallelization strategies, addressing performance bottlenecks in both download and unpacking phases. The solution demonstrates significant improvements in pull times, showing nearly 60% acceleration when tested with a 10GB Deep Learning Container image, making it particularly valuable for AI/ML workloads with large, complex images.
Migrate to Amazon EKS: Data plane cost modeling with Karpenter and KWOK
In this post, we demonstrate how to use Karpenter and KWOK to simulate Kubernetes migrations to Amazon EKS, enabling organizations to estimate compute costs before actual migration. The solution involves creating a test environment, backing it up with Velero, restoring it in a new EKS cluster, and analyzing Karpenter’s node provisioning decisions to build accurate cost estimates.
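Once Karpenter has placed the simulated nodes, a rough estimate can be tallied from node labels, as in the sketch below; the price map is a placeholder, and a real estimate would use the AWS Pricing API.

```python
# Illustrative sketch (prices are made-up placeholders): after Karpenter has "provisioned"
# KWOK-simulated nodes for the restored workloads, tally instance types from node labels
# and multiply by on-demand prices for a rough monthly data plane estimate.
from collections import Counter
from kubernetes import client, config

# Placeholder hourly on-demand prices; real estimates should come from the AWS Pricing API.
HOURLY_PRICE = {"m5.xlarge": 0.192, "c5.2xlarge": 0.34, "r5.large": 0.126}

config.load_kube_config()
nodes = client.CoreV1Api().list_node(label_selector="karpenter.sh/nodepool").items

counts = Counter(
    node.metadata.labels.get("node.kubernetes.io/instance-type", "unknown")
    for node in nodes
)

monthly = sum(HOURLY_PRICE.get(itype, 0.0) * count * 730 for itype, count in counts.items())
print(counts)
print(f"Estimated data plane cost: ~${monthly:,.2f}/month")
```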