This Guidance helps you assess ad opportunities at scale using real-time bidding (RTB). It uses NoSQL database tables to decide whether to bid on an ad opportunity, then stores and analyzes that information to improve future reporting and decision-making. This Guidance also uses a microservices approach to support scalability, security mechanisms to protect data, and a deployment pipeline that implements changes in minutes.
Architecture Diagram

Step 1
The supply-side platform (SSP) receives an ad request from a publisher, creates a real-time auction, and sends a bid request to a demand-side public endpoint configured on an Elastic Load Balancer.
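Bid requests from SSPs commonly follow the IAB OpenRTB specification. The sketch below parses a minimal OpenRTB-style request; the field names (`id`, `imp`, `bidfloor`, `tmax`) follow OpenRTB 2.x conventions, but the exact payload your SSP sends is an assumption, not part of this Guidance.

```python
import json

# Minimal OpenRTB-style bid request (illustrative; real requests
# carry many more fields, such as device, user, and site objects).
raw_request = json.dumps({
    "id": "auction-1234",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}, "bidfloor": 0.50}],
    "tmax": 100,  # SSP timeout budget in milliseconds
})

def parse_bid_request(payload: str) -> dict:
    """Extract the fields the bidder needs from a raw bid request."""
    request = json.loads(payload)
    imp = request["imp"][0]  # single-impression request for simplicity
    return {
        "auction_id": request["id"],
        "imp_id": imp["id"],
        "bid_floor": imp.get("bidfloor", 0.0),
        "timeout_ms": request.get("tmax", 120),
    }

parsed = parse_bid_request(raw_request)
```

The `tmax` value matters operationally: the bidder must look up its tables, price the bid, and respond within that window, or the SSP drops the bid.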
Step 2
Requests are routed to “bidder” pods hosted on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The bidder uses Amazon Virtual Private Cloud (Amazon VPC) endpoints to access NoSQL database tables that hold information about audience segments, campaigns, and budgets, and uses this information to process bids.
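If DynamoDB backs these tables, the bidder can fetch the audience, campaign, and budget records in one round trip with `BatchGetItem`. The sketch below only builds the request dictionary you would pass to boto3's `batch_get_item`; the table names and key schemas are illustrative assumptions.

```python
def build_batch_lookup(user_id: str, campaign_ids: list) -> dict:
    """Build a DynamoDB BatchGetItem request fetching everything the
    bidder needs to price one opportunity (table and key names are
    hypothetical; adjust to your schema)."""
    campaign_keys = [{"campaign_id": {"S": c}} for c in campaign_ids]
    return {
        "RequestItems": {
            "audience_segments": {
                "Keys": [{"user_id": {"S": user_id}}],
            },
            "campaigns": {"Keys": campaign_keys},
            "budgets": {"Keys": campaign_keys},
        }
    }

# One request covers all three tables, keeping lookup latency inside
# the SSP's response window.
request = build_batch_lookup("u-42", ["cmp-1", "cmp-2"])
```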
Step 3
Based on the data from the NoSQL database (such as Aerospike or Amazon DynamoDB), the bidder decides whether to bid. If a sent bid wins, the bidder updates the budget and campaign database tables. Note: Aerospike runs within the VPC and does not require a VPC endpoint; configure Aerospike's rack-aware feature for better performance.
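The bid/no-bid decision can be sketched as a pure function over the campaign and budget records read from the NoSQL tables. All field names here (`active`, `max_cpm`, `remaining`) are hypothetical; in production, the budget decrement should be a conditional write so that concurrent bidders cannot overspend.

```python
def decide_bid(campaign: dict, budget: dict, bid_floor: float):
    """Return a bid price, or None to pass on the opportunity.

    campaign and budget are records read from the campaign and budget
    NoSQL tables (field names are hypothetical).
    """
    if not campaign.get("active"):
        return None  # campaign paused or ended
    if budget["remaining"] < campaign["max_cpm"]:
        return None  # cannot afford a win at the campaign's max price
    price = campaign["max_cpm"]
    if price <= bid_floor:
        return None  # bidding at or below the floor cannot win
    return price

def settle_win(budget: dict, clearing_price: float) -> dict:
    """If the sent bid wins, decrement the budget record before it is
    written back to the budget table."""
    if budget["remaining"] < clearing_price:
        raise ValueError("budget exhausted between bid and win")
    budget["remaining"] -= clearing_price
    return budget
```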
Step 4
The bidder transactions are sent to Amazon Kinesis Data Streams through Kinesis VPC endpoints in compressed micro-batches of 25 KB PUTs. Amazon Kinesis Data Firehose then sends this data to Amazon Simple Storage Service (Amazon S3) for downstream analytics and reporting. A data stream enables the bidder to respond faster and helps scale each component independently.
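The compressed micro-batches might be assembled as below. The ~25 KB PUT target comes from the Guidance; gzip and the newline-delimited JSON record format are assumptions. Each yielded batch would be handed to the Kinesis `PutRecord` API (for example, via boto3).

```python
import gzip
import json

TARGET_BATCH_BYTES = 25 * 1024  # ~25 KB per PUT, per the Guidance

def micro_batches(transactions):
    """Yield gzip-compressed batches of bidder transactions, each
    holding at most ~25 KB of newline-delimited JSON records."""
    buffer, size = [], 0
    for txn in transactions:
        line = json.dumps(txn).encode() + b"\n"
        # Flush before the uncompressed buffer would overshoot the
        # target; the compressed payload will be smaller still.
        if buffer and size + len(line) > TARGET_BATCH_BYTES:
            yield gzip.compress(b"".join(buffer))
            buffer, size = [], 0
        buffer.append(line)
        size += len(line)
    if buffer:
        yield gzip.compress(b"".join(buffer))

# Example: thousands of small transactions collapse into a handful
# of compressed PUTs, reducing per-record Kinesis overhead.
txns = [{"auction_id": i, "price": 1.25, "win": i % 7 == 0} for i in range(10000)]
batches = list(micro_batches(txns))
```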
Consideration A
Use AWS Graviton-based instances for bidder nodes. For additional cost optimization, implement auto scaling and Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances.
Consideration B
Pre-install bidder container images with dependent libraries and binaries to minimize boot time. Upload the images to a container registry like Amazon Elastic Container Registry (Amazon ECR).
Consideration C
Encrypt data at rest and in transit across DynamoDB, Kinesis, Amazon EKS, and Amazon S3 using AWS Key Management Service (AWS KMS). Grant least-privilege access using AWS Identity and Access Management (IAM) to scope permissions for users, roles, and services.
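A least-privilege IAM policy for the bidder role might look like the following: read-only access to one DynamoDB table and write-only access to one Kinesis stream. The account ID, Region, table, and stream names are placeholder assumptions; attach the policy to the pod's IAM role rather than embedding credentials.

```python
# Illustrative least-privilege policy for the bidder role. The ARNs
# below are placeholders, not values from this Guidance.
bidder_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadCampaignTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:BatchGetItem",
                "dynamodb:Query",
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/campaigns",
        },
        {
            "Sid": "WriteBidStream",
            "Effect": "Allow",
            "Action": ["kinesis:PutRecord", "kinesis:PutRecords"],
            "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/bid-transactions",
        },
    ],
}
```

Note that the bidder gets no delete or administrative actions: if a pod is compromised, the blast radius is limited to reading one table and writing one stream.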
Consideration D
Automate the deployment of the RTB platform using AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline to reduce time-consuming, manual processes.
Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
The architecture uses a microservices-based approach so that components operate independently from one another. This allows you to deploy, update, and scale components individually to meet demand for specific functions.
-
Security
IAM policies grant least privilege access to data, meaning that users only have the permissions required to perform a specific task. AWS KMS encrypts data at rest and in transit as an additional layer of protection against unauthorized use.
-
Reliability
Scalable services and features in this architecture, such as autoscaling for EKS and Kinesis, help adapt to the changing requirements of dynamic workloads. The deployment pipeline implements and logs configuration changes, allowing users to roll back to a previous state in the case of a disaster.
-
Performance Efficiency
The services selected for this architecture, such as Graviton, EKS, and DynamoDB, are purpose-built for high-throughput, low-latency applications such as RTB. These services can process millions of transactions per second.
-
Cost Optimization
EC2 Spot Instances offer scale and cost savings at up to a 90% discount compared to On-Demand Instances. EKS and DynamoDB scale based on demand, so you pay only for the resources you actually use.
-
Sustainability
S3 lifecycle policies offer effective storage management by defining when data should be archived or expired. Graviton-based instances deliver better price performance, allowing workloads to run on fewer or smaller instances.
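The lifecycle policy mentioned above could be expressed as the following configuration, applied with boto3's `put_bucket_lifecycle_configuration`. The prefix, the 90-day Glacier transition, and the one-year expiration are illustrative assumptions; tune them to your retention requirements.

```python
# Illustrative S3 lifecycle configuration: archive bid transaction
# logs to S3 Glacier after 90 days and expire them after one year.
# The prefix and day counts are assumptions, not from the Guidance.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire-bid-logs",
            "Filter": {"Prefix": "bid-transactions/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}
```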
Implementation Resources

The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content

Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.