Guidance for Building a Real Time Bidder for Advertising on AWS

Overview

This Guidance shows how to assess ad opportunities at scale using real-time bidding (RTB) technology and NoSQL database tables. With a microservices approach, this offering enables rapid scalability, implements robust security mechanisms to protect data, and uses a deployment pipeline that allows for quick modifications. By capturing and analyzing bidding information in real-time, advertisers have a powerful tool to optimize their advertising strategies and react quickly to market dynamics.

How it works

The architecture diagram below illustrates how to use this solution effectively. It shows the key components and their interactions, providing a step-by-step overview of the architecture's structure and functionality.

Well-Architected Pillars

The architecture diagram above is an example of a solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of those best practices as possible in your own deployment.

This Guidance is designed using a stateless, microservices-based architecture where changes can be made independently on each component using a deployment pipeline. It can be used to optimize the cost of high-throughput, low-latency workloads, allowing you to process a greater number of bid requests at a reduced cost.

Read the Operational Excellence whitepaper

IAM policies grant least-privilege access to data, so each component has only the minimum permissions required for its specific tasks. Additionally, AWS KMS is used to encrypt data at rest and in transit, providing an additional layer of protection against unauthorized access. Lastly, access to the Amazon S3 bucket is secured through bucket policies and by blocking public access, and data is routed between services through Amazon VPC endpoints.
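As an illustration of the least-privilege pattern described above, the sketch below builds an IAM policy document granting read-only access to a single bucket and DynamoDB table. The function name and resource ARNs are hypothetical placeholders; in practice the JSON would be attached through IAM or your infrastructure-as-code tooling.

```python
import json

def read_only_bidder_policy(bucket: str, table_arn: str) -> str:
    """Build a least-privilege, read-only policy document as JSON."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # S3: read objects only from the one bucket the bidder needs
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
            {   # DynamoDB: read-only access to the bidder's table
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:Query"],
                "Resource": [table_arn],
            },
        ],
    }
    return json.dumps(policy)
```

Note that neither statement grants write or delete actions, so a compromised bidder pod cannot modify stored data.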

Read the Security whitepaper

By enabling autoscaling for the Amazon EKS cluster, as well as provisioned throughput for DynamoDB and Amazon Kinesis Data Streams, the services configured in this Guidance will scale to meet your demand. Consider exploring Kinesis auto scaling to adjust the number of shards, the base throughput units of a stream, based on demand.
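The shard-sizing arithmetic behind that scaling decision can be sketched as follows, using the documented per-shard ingest limits for Kinesis Data Streams (1 MB per second and 1,000 records per second per shard); the function name and inputs are illustrative.

```python
import math

def required_shards(records_per_sec: float, avg_record_kb: float) -> int:
    """Shards needed to absorb the given ingest rate, taking the
    stricter of the two per-shard limits (1 MB/s, 1,000 records/s)."""
    by_throughput = math.ceil(records_per_sec * avg_record_kb / 1024)
    by_records = math.ceil(records_per_sec / 1000)
    return max(1, by_throughput, by_records)
```

For example, 5,000 one-kilobyte bid records per second needs five shards, constrained equally by both limits; an auto scaling policy would recompute this target as observed traffic changes.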

Read the Reliability whitepaper

The services selected for this architecture, such as AWS Graviton processors, Amazon EKS, and DynamoDB, are purpose-built for high-throughput, low-latency applications such as RTB, and can process millions of transactions per second. AWS Graviton processors provide up to 40 percent better price performance compared to x86-based instances, processing more bids or transactions per second.

Read the Performance Efficiency whitepaper

This Guidance uses AWS Graviton processors, Amazon EC2 Spot Instances, and managed services to optimize costs. Specifically, Amazon EC2 Spot Instances achieve scale and cost savings of up to 90 percent compared to On-Demand Instances. Moreover, Amazon EKS and DynamoDB are designed to scale based on demand, so you only pay for the resources used. The Amazon EKS cluster is configured with an autoscaler to scale bidder pods and nodes. Lastly, this Guidance employs compression to reduce the volume of data transferred, and Amazon VPC endpoints to route traffic over the AWS backbone network, avoiding data transfer out costs.
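The Spot savings claim above can be made concrete with a back-of-the-envelope estimate. This sketch blends On-Demand and Spot capacity; the function name, the 730-hour month, and the discount input are illustrative assumptions, since actual Spot pricing varies by instance type, Region, and time.

```python
def estimated_monthly_cost(instance_count: int, on_demand_hourly: float,
                           spot_fraction: float, spot_discount: float,
                           hours: float = 730.0) -> float:
    """Blended monthly compute cost. spot_fraction is the share of the
    fleet on Spot; spot_discount is the fractional savings versus
    On-Demand (e.g. 0.9 for a 90 percent discount)."""
    spot_instances = instance_count * spot_fraction
    od_instances = instance_count - spot_instances
    spot_hourly = on_demand_hourly * (1 - spot_discount)
    return hours * (od_instances * on_demand_hourly
                    + spot_instances * spot_hourly)
```

At a hypothetical $1.00/hour On-Demand rate, a ten-instance fleet costs $7,300 per month fully On-Demand, but only $730 if the whole fleet runs on Spot at a 90 percent discount.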

Read the Cost Optimization whitepaper

Amazon S3 lifecycle policies provide effective storage management by defining the appropriate data archival or expiration timelines. Also, by using Amazon EC2 Graviton-based instances, you can optimize performance with fewer or smaller instances hosting the bidder and Aerospike clusters. These Graviton-based instances demonstrate up to a 60 percent reduction in power consumption compared to similar-sized x86 CPU-based instances for the same workload.
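The lifecycle pattern described above can be sketched as a lifecycle configuration in the shape Amazon S3 expects. The prefix, rule ID, and day counts here are illustrative assumptions, not values from this Guidance; the dict would be applied with boto3's `put_bucket_lifecycle_configuration` or your IaC tool.

```python
def bid_log_lifecycle(prefix: str = "bid-logs/") -> dict:
    """Lifecycle config: transition objects under the prefix to
    Glacier after 90 days, then expire them after 365 days."""
    return {
        "Rules": [
            {
                "ID": "archive-then-expire-bid-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    }
```

Moving cold bid logs out of S3 Standard and deleting them when no longer needed reduces both storage cost and the footprint of retained data.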

Read the Sustainability whitepaper

Implementation resources

The sample code is a starting point. It is industry-validated and prescriptive, but not definitive; treat it as a look under the hood to help you begin.
Open sample code on GitHub

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.