As the adoption of Amazon EKS for running Kubernetes clusters grows, customers are seeking ways to better understand and control their costs. To address these customer needs and shed more light on EKS costs, AWS has collaborated with Anodot, an AWS Partner that uses machine learning to autonomously monitor and alert on costs in near real time, and provides recommendations to help customers quickly resolve cost anomalies.
The AWS Machine Learning Visionaries Partners Report is a quarterly series that tracks, selects, collates, and distributes horizontal technology capabilities enabled by machine learning in areas that AWS expects to be transformative in 1-3 years. The series’ purpose is to share our insights with AWS Partners and to gather their interest, expertise, and insights for co-building along these prioritized themes. The reports include updates on series topics as those areas change, and new topics will be added over time.
This post explores an asynchronous pattern for ingesting data from legacy systems, collecting it into projections, and aggregating it into single views. The purpose of this solution is to decouple the source systems where data is stored from the external channels that request data—both offloading the source systems and making data available to channels 24/7 in near real time. The proposed solution is a simplification of the high-level architecture of Mia-Platform Fast Data.
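As a minimal sketch of the pattern described above (an illustration only, not Mia-Platform's actual implementation), each source system emits change events asynchronously into a per-source projection, and a read-optimized single view is aggregated from those projections so channels never query the source systems directly. All names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical sketch of the projection / single-view pattern: one
# projection per source system, keyed by entity ID, aggregated on read.
projections = defaultdict(dict)  # source name -> {entity_id: record}

def ingest(source, event):
    """Apply a change event from a legacy source to its projection."""
    projections[source][event["id"]] = event["data"]

def single_view(entity_id):
    """Aggregate all projections into one read-optimized view."""
    view = {"id": entity_id}
    for records in projections.values():
        view.update(records.get(entity_id, {}))
    return view

# Channels read the single view; the source systems are offloaded.
ingest("crm", {"id": "42", "data": {"name": "Alice"}})
ingest("billing", {"id": "42", "data": {"plan": "pro"}})
print(single_view("42"))  # {'id': '42', 'name': 'Alice', 'plan': 'pro'}
```

In a production design the projections would be persisted (e.g., in a database updated by an event stream) rather than held in memory, which is what makes the single view available even when a source system is down.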
Every SaaS architecture must introduce mechanisms and policies that prevent noisy neighbor conditions. Getting these policies right is essential to building a robust SaaS solution that delivers a consistent experience to customers. This post looks at the different strategies that can be used to introduce the throttles (transaction rate) and quotas (transaction volume) that manage each tenant’s activity, exploring the various AWS services that can be used to bring these concepts to life.
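To make the throttle/quota distinction above concrete, here is a minimal per-tenant sketch (an assumption-laden illustration, not any specific AWS service's implementation): a token bucket caps each tenant's transaction *rate*, while a running counter caps total transaction *volume* for the period. The class and parameter names are hypothetical:

```python
import time

class TenantLimiter:
    """Per-tenant throttle (token bucket) plus quota (volume counter)."""

    def __init__(self, rate, burst, quota):
        self.rate, self.burst, self.quota = rate, burst, quota
        self.tenants = {}  # tenant_id -> (tokens, last_refill, used)

    def allow(self, tenant_id, now=None):
        now = time.monotonic() if now is None else now
        tokens, last, used = self.tenants.get(tenant_id, (self.burst, now, 0))
        # Refill tokens for the time elapsed since the last request.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if used >= self.quota:   # quota exhausted: volume cap reached
            return False
        if tokens < 1:           # throttled: rate cap reached
            self.tenants[tenant_id] = (tokens, now, used)
            return False
        self.tenants[tenant_id] = (tokens - 1, now, used + 1)
        return True

limiter = TenantLimiter(rate=2.0, burst=2, quota=100)
# A burst of 2 is allowed at t=0; the third call is throttled.
print([limiter.allow("tenant-a", now=0.0) for _ in range(3)])
```

Because every tenant gets its own bucket and counter, one noisy tenant exhausts only its own allowance, which is the isolation property the post is after.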
AWS provides a secure, reliable, and scalable environment for customers to run their container workloads. Customers running containers on premises are looking to move to AWS to gain agility benefits and reduce the technical debt of managing their own infrastructure. Learn how Tech Mahindra transitioned a customer from an on-premises self-managed Kubernetes environment to a managed Amazon EKS platform with centralized self-service deployment options using AWS Service Catalog.
Today, many organizations trust Amazon Web Services (AWS) to host their business’s applications and infrastructure. As they continue to innovate, their applications and environments become increasingly complex. This post explores how AWS customers can leverage Gremlin to improve the resiliency and reliability of their applications. Learn how to apply chaos engineering principles to your Amazon EKS environment to increase uptime, reduce incidents, and build more resilient applications, systems, and services.
Amazon EKS and Calico Cloud’s combined solution provides proof of security compliance to meet organizational regulatory requirements. Building and running cloud-native applications in EKS, however, requires communication with other AWS and external third-party services. Learn how you can apply zero-trust workload access controls along with identity-aware microsegmentation for workloads on EKS, and explore what implementing these controls means for your organization.
Customers have different reasons to run multiple Red Hat OpenShift clusters, including having separate clusters per geographical location, setting a cluster-level boundary around mission-critical applications, meeting data residency requirements, and reducing latency for end users. This post explores Red Hat Advanced Cluster Management for Kubernetes and how it extends the value of Red Hat OpenShift for hybrid environments. It also explores different scenarios where having a multi-cluster environment is beneficial.
Deploying machine learning (ML) models as a packaged container with hardware-optimized acceleration, without compromising accuracy and while remaining financially feasible, can be challenging. As machine learning models become the brains of modern applications, developers need a simpler way to deploy trained ML models to live endpoints for inference. This post explores how an ML engineer can take a trained model, optimize and containerize it using OctoML CLI, and deploy it to Amazon EKS.
The F5 BIG-IP Virtual Edition (VE) load balancer deployment adds new Layer 4 application capabilities and additional visibility into those applications inside an Amazon EKS cluster, helping ensure a successful deployment in a containerized environment. This post presents a step-by-step guide for using the F5 BIG-IP VE on AWS as a load balancer for EKS clusters by using additional components, including the F5 Container Ingress Service (CIS) and F5 IPAM Controller (FIC).