[SEO Subhead]
This Guidance demonstrates how to build a real-time machine learning (ML) inferencing solution on AWS that can serve millions of requests per second. By hosting your solution’s ML model on Amazon Elastic Container Service (Amazon ECS) and routing requests to the ML server using Network Load Balancer, you can achieve low latency and support high-throughput inference requirements commonly found in real-time and programmatic advertising. This Guidance provides an example of applying ML for ad request filtering and demonstrates how to build a client application that can simulate high-throughput OpenRTB-based requests to send to the ML inference server.
Please note: [Disclaimer]
Architecture Diagram
In this architecture diagram, Steps A-B refer to the data scientist workflow; Steps 1-4 refer to the publisher workflow.
Step A
Data scientists use Amazon SageMaker to experiment with, build, and train their ML model. Once the model is ready, it is saved in Amazon Simple Storage Service (Amazon S3).
Step B
The trained model is read and loaded by the Amazon Elastic Container Service (Amazon ECS) model inference task. The model is hosted as a Thrift endpoint. Incoming requests, in OpenRTB format (for real-time bidding), are used for inference.
Step 1
A publisher issues requests to a supply-side platform (SSP) auction server for an ad placement.
Step 2
The auction server (a client application) is hosted as an Amazon ECS application within the SSP's virtual private cloud (VPC). For each auction request, the auction server issues a bid request based on the OpenRTB format.
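As a minimal sketch of the bid request the client application might construct, the following builds an OpenRTB 2.x-style banner bid request. The top-level field names (`id`, `imp`, `banner`, `site`, `device`, `at`) follow the OpenRTB 2.x specification; the helper name and all sample values are illustrative assumptions, not part of this Guidance's sample code.

```python
import json
import uuid

def build_bid_request(site_domain, width, height):
    """Build a minimal OpenRTB 2.x bid request for one banner impression.

    Field names follow the OpenRTB 2.x spec; the helper name and the
    sample values below are illustrative placeholders.
    """
    return {
        "id": str(uuid.uuid4()),           # unique auction ID
        "imp": [{                           # the impression being offered
            "id": "1",
            "banner": {"w": width, "h": height},
        }],
        "site": {"domain": site_domain},
        "device": {"ua": "Mozilla/5.0", "ip": "203.0.113.10"},
        "at": 2,                            # second-price auction
    }

# Serialized request body a client could POST to the inference endpoint
request_body = json.dumps(build_bid_request("news.example.com", 300, 250))
```

A load-generating client would build many such requests concurrently to simulate high-throughput traffic.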
Step 3
Network Load Balancer distributes the incoming requests to an Amazon Elastic Compute Cloud (Amazon EC2)-based Amazon ECS cluster that hosts the ad-filtering ML server. The ad-filtering ML server infers the likelihood of a bid for every bid request, so the auction server can send bid requests only to the demand partners likely to respond, optimizing the cost per bid.
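The filtering decision described above can be sketched as a threshold over per-partner bid probabilities. In this sketch the scoring function, the partner names, and the 0.2 threshold are all illustrative placeholders standing in for the real ML model hosted behind the Thrift endpoint.

```python
# Sketch of the ad-filtering decision: keep only the demand partners whose
# predicted bid probability clears a threshold, so the auction server does
# not fan the bid request out to partners unlikely to bid.

BID_THRESHOLD = 0.2  # illustrative cutoff, tuned in practice per workload

def score_partner(bid_request, partner):
    # Placeholder for the real model inference call over the Thrift
    # endpoint; a fixed lookup keeps this example self-contained.
    fake_scores = {"dsp-a": 0.85, "dsp-b": 0.05, "dsp-c": 0.40}
    return fake_scores.get(partner, 0.0)

def filter_partners(bid_request, partners, threshold=BID_THRESHOLD):
    """Return only the partners predicted likely to bid on this request."""
    return [p for p in partners if score_partner(bid_request, p) >= threshold]

selected = filter_partners({"id": "req-1"}, ["dsp-a", "dsp-b", "dsp-c"])
```

The cost saving comes from the calls that are never made: every partner filtered out is one fewer outbound bid request per auction.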
Step 4
The ad-filtering ML server is hosted as a container within an Amazon EC2-based Amazon ECS cluster. An Amazon EC2 Auto Scaling group maintains the desired number of Amazon EC2 instances running across multiple Availability Zones (AZs) to maintain high availability.
Amazon ECS deploys and maintains the desired capacity of the Amazon ECS tasks hosting the ML container. Each task loads the ad-filtering model from an Amazon S3 bucket and serves it as a Thrift protocol–based endpoint. The Thrift protocol supports low-latency communication, and running multiple instances of the task supports a high number of concurrent requests.
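An ECS task definition along these lines could host the inference container. This JSON is a hedged sketch, not a deployable artifact: the family name, image URI, `MODEL_S3_URI` environment variable, CPU/memory sizes, and port 9090 (a conventional Thrift port) are all illustrative placeholders.

```json
{
  "family": "ad-filtering-inference",
  "requiresCompatibilities": ["EC2"],
  "cpu": "2048",
  "memory": "4096",
  "containerDefinitions": [
    {
      "name": "ml-inference",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/ad-filtering:latest",
      "portMappings": [{"containerPort": 9090, "protocol": "tcp"}],
      "environment": [
        {"name": "MODEL_S3_URI", "value": "s3://<model-bucket>/model.tar.gz"}
      ],
      "essential": true
    }
  ]
}
```

On startup, the container entrypoint would read `MODEL_S3_URI`, download the model artifact, and begin serving the Thrift endpoint on the mapped port, which Network Load Balancer registers as a target.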
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Amazon CloudWatch monitors the performance of the Amazon ECS cluster (including CPU and memory) along with the incoming requests sent through Network Load Balancer. Your CloudWatch dashboard—created as part of an AWS CloudFormation script—provides a comprehensive view of the number of incoming requests and their associated latency. By using CloudWatch to visualize and analyze performance and latency, you can better identify any bottlenecks in your application.
Security
By scoping down all AWS Identity and Access Management (IAM) policies to the minimum permissions required for the services to function properly, you can limit unauthorized access to resources.
Reliability
The Amazon ECS cluster runs a service definition that maintains a desired capacity of EC2 instances. If one of the instances becomes unavailable, a new instance will automatically launch and be registered with the Amazon ECS cluster as a healthy target to receive incoming requests routed by Network Load Balancer.
Performance Efficiency
Network Load Balancer, which communicates with Amazon ECS, delivers the millisecond-level latency and high throughput that this use case requires.
Cost Optimization
Amazon EC2 Auto Scaling groups let you run your application at the desired capacity while providing dynamic support for scaling based on the load. Automatic scaling grows or reduces the infrastructure based on load and your scaling policy. This helps you control the costs associated with running your application.
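One way to express the scaling behavior described above is a target-tracking policy on the Auto Scaling group's average CPU utilization, as in the following configuration for the `aws autoscaling put-scaling-policy` command. The 60 percent target is an illustrative value, not a recommendation from this Guidance.

```json
{
  "TargetTrackingConfiguration": {
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 60.0
  }
}
```

With target tracking, EC2 Auto Scaling adds instances when average CPU rises above the target under load and removes them when traffic subsides, so you pay only for the capacity the current request rate requires.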
Sustainability
The Amazon EC2-based Amazon ECS cluster lets you choose appropriate hardware types and configurations for specific workloads so that they run efficiently. As a result, you can maximize utilization and avoid overprovisioning resources. Because this Guidance is designed for low-latency, high-performance model inference workloads, the recommended EC2 instance types are powered by AWS Graviton3. These instances use up to 60 percent less energy for the same performance as comparable EC2 instances, helping you reduce your carbon footprint.
Implementation Resources
A detailed guide is provided for you to experiment with and use this Guidance within your AWS account. It walks through each stage, including deployment, usage, and cleanup.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.