Overview
The Prebid Server Deployment on AWS solution allows AWS customers with ad-supported websites to maximize advertising revenue through a community of over 180 advertising platforms. With this AWS Solution, customers can easily deploy and operate Prebid Server, an open-source solution for real-time ad monetization, within their own AWS environment. Customers retain full control over bidding decision logic and transaction data, and can reduce Prebid Server implementation time from months to days. The solution also offers enterprise-grade scalability to handle fluctuating request volumes and enhances data protection using the robust security capabilities of the AWS Cloud.
Benefits
Fully integrated infrastructure for Prebid Server with high availability, scalability, and low latency.
Operational metrics, logs, business insights, and cost visibility through Amazon QuickSight and AWS Systems Manager integration.
Extract, transform, and load (ETL) of Prebid Server metrics into the AWS Glue Data Catalog for access by a variety of clients.
Technical details
You can automatically deploy this architecture using the implementation guide and the accompanying AWS CloudFormation template.
Deploying this solution with the default parameters provisions the following components in your AWS account.
Step 1
A user browses to a page on a website that hosts ads.
Step 2
The publisher's website responds with the page source and one or more script modules (also called wrappers) to the browser. These wrappers facilitate real-time bidding by enabling ad requests and responses based on criteria like ad dimensions, types, topics, and other parameters.
Step 3
Bid requests from the browser are received at the Amazon CloudFront endpoint, which is integrated with AWS WAF. This step filters out malicious requests, such as penetration attempts or distributed denial-of-service (DDoS) attacks, ensuring that only legitimate traffic enters the solution. Requests can be received over HTTP or HTTPS.
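For illustration, the kind of bid request a wrapper sends can be approximated with a direct POST to Prebid Server's /openrtb2/auction endpoint through the CloudFront distribution. The following Python sketch uses a hypothetical CloudFront domain and bidder parameters; the exact shape of the ext payload depends on the Prebid Server version and the bidders you configure.

```python
import json
import urllib.request

# Hypothetical CloudFront distribution domain for the deployed solution.
ENDPOINT = "https://d1234example.cloudfront.net/openrtb2/auction"

# A minimal OpenRTB 2.x bid request: one 300x250 banner impression.
# The imp.ext structure is illustrative; real bidder parameters come from
# the wrapper and the bidders configured in Prebid Server.
bid_request = {
    "id": "test-request-1",
    "imp": [
        {
            "id": "imp-1",
            "banner": {"format": [{"w": 300, "h": 250}]},
            "ext": {"prebid": {"bidder": {"examplebidder": {"placementId": 12345}}}},
        }
    ],
    "site": {"page": "https://www.example.com/article"},
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(bid_request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=2) as resp:
    print(json.loads(resp.read()))
```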
Step 4
The request is forwarded to the Application Load Balancer (ALB), which routes it to the least-utilized Prebid Server container in the cluster. The ALB has a public network interface and private interfaces in each subnet hosting the containers within the Amazon Virtual Private Cloud (Amazon VPC).
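If you want to see the container pool the ALB is routing across, you can inspect the target group's health with the AWS SDK. The sketch below assumes a hypothetical target group ARN created by the solution's CloudFormation stack.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical target group ARN created by the solution's stack.
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
    "targetgroup/prebid-server/0123456789abcdef"
)

# List each registered Prebid Server container and its current health state,
# which is what the ALB consults when routing incoming bid requests.
health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
for desc in health["TargetHealthDescriptions"]:
    target = desc["Target"]
    print(target["Id"], target.get("Port"), desc["TargetHealth"]["State"])
```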
Step 5
The request arrives at an Amazon Elastic Container Service (Amazon ECS) container, where it is parsed and validated. The container then sends concurrent bid requests to the various bidding services over the internet, through the NAT and internet gateways described in step 6.
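Prebid Server (a Java or Go application) performs this fan-out internally; the Python sketch below only illustrates the concurrent request pattern, with hypothetical bidder endpoints and a per-bidder timeout standing in for the auction timeout.

```python
import json
import concurrent.futures
import urllib.request

# Hypothetical bidder endpoints; real adapters and URLs come from Prebid
# Server's bidder configuration.
BIDDER_ENDPOINTS = {
    "bidderA": "https://bids.bidder-a.example/openrtb2",
    "bidderB": "https://bids.bidder-b.example/openrtb2",
}

def call_bidder(name, url, request_body):
    """Send one bid request and return the bidder's parsed response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(request_body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=0.5) as resp:  # per-bidder timeout
        return name, json.loads(resp.read())

def run_auction(request_body):
    # Fan the validated request out to all bidders concurrently and collect
    # whatever responses arrive before each per-bidder timeout expires.
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [
            pool.submit(call_bidder, name, url, request_body)
            for name, url in BIDDER_ENDPOINTS.items()
        ]
        for future in concurrent.futures.as_completed(futures):
            try:
                name, response = future.result()
                results[name] = response
            except Exception:
                pass  # a slow or failing bidder simply drops out of the auction
    return results
```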
Step 6
The NAT gateway and internet gateway enable Prebid Server containers to initiate outbound requests to bidding services and receive responses, facilitating the ad auction process.
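The CloudFormation template creates this routing for you; a hand-rolled equivalent would add a default route to the NAT gateway in the private subnets' route table and a default route to the internet gateway in the public subnet's route table, as in the following sketch (all resource IDs hypothetical).

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs; in the solution these resources are created by the
# CloudFormation template.
PRIVATE_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
PUBLIC_ROUTE_TABLE_ID = "rtb-0fedcba9876543210"
NAT_GATEWAY_ID = "nat-0123456789abcdef0"
INTERNET_GATEWAY_ID = "igw-0123456789abcdef0"

# Private subnets (the Prebid Server containers) reach the internet via NAT...
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=NAT_GATEWAY_ID,
)

# ...and the NAT gateway's public subnet egresses through the internet gateway.
ec2.create_route(
    RouteTableId=PUBLIC_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=INTERNET_GATEWAY_ID,
)
```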
Step 7
Bidders receive one or more bid requests over the internet from a Prebid Server container. Bidders respond with zero or more bids for the various requests. The response, including the winning creative(s), is sent back to the browser.
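Reduced to its essentials, a bidder's reply follows the OpenRTB 2.x BidResponse object; the hypothetical example below shows the fields that matter most: the impression being bid on, the price, and the creative markup.

```python
# A pared-down OpenRTB 2.x style bid response, with hypothetical values.
bid_response = {
    "id": "test-request-1",  # echoes the bid request ID
    "cur": "USD",
    "seatbid": [
        {
            "seat": "examplebidder",
            "bid": [
                {
                    "id": "bid-1",
                    "impid": "imp-1",  # which impression this bid is for
                    "price": 1.25,     # CPM bid
                    "adm": "<div>creative markup</div>",  # the creative
                    "w": 300,
                    "h": 250,
                }
            ],
        }
    ],
}
```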
Step 8
Amazon CloudWatch collects metrics from the resources handling requests and responses. CloudWatch alarms trigger scaling of the container cluster up or down as load changes.
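The solution configures this scaling for you; conceptually it resembles an Application Auto Scaling target tracking policy on the ECS service, as sketched below with hypothetical names and thresholds (the solution's actual metrics and limits may differ).

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical cluster and service names.
RESOURCE_ID = "service/prebid-cluster/prebid-server-service"

# Allow the service to scale between 2 and 20 containers.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking keeps average CPU near 60%, adding or removing containers
# as the CloudWatch metric moves; the underlying alarms are created for you.
autoscaling.put_scaling_policy(
    PolicyName="prebid-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```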
Step 9
The Prebid Server ECS service, running on AWS Fargate, tracks cluster health, scales the number of containers up and down, and manages the pool of containers available to the ALB.
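You can observe this behavior by comparing the service's desired and running task counts; the sketch below uses hypothetical cluster and service names.

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical names for the solution's cluster and service.
response = ecs.describe_services(
    cluster="prebid-cluster", services=["prebid-server-service"]
)
service = response["services"][0]

# Desired vs. running counts show whether Fargate is still converging on
# the capacity most recently requested by the scaling policy.
print("desired:", service["desiredCount"])
print("running:", service["runningCount"])
print("pending:", service["pendingCount"])
for deployment in service["deployments"]:
    print(deployment["status"], deployment.get("rolloutState"))
```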
Step 10
Metrics log files for each container are written to a shared Amazon Elastic File System (Amazon EFS) file system using the Network File System (NFS) protocol. This file system is mounted to each Prebid Server container during start-up.
A given metrics log file is written to for a limited time and is then closed and rotated so that it can be included in the next stage of processing. Amazon EFS serves only as a temporary location: as log data is generated, it is moved to longer-term storage on Amazon Simple Storage Service (Amazon S3) and into AWS Glue.
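The EFS mount is declared in the ECS task definition; a pared-down version, with hypothetical IDs and paths, looks roughly like this.

```python
import boto3

ecs = boto3.client("ecs")

# A pared-down task definition showing how the shared file system is mounted
# into each container. The real task definition also sets an execution role,
# logging, and port mappings; all IDs and names here are hypothetical.
ecs.register_task_definition(
    family="prebid-server",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    containerDefinitions=[
        {
            "name": "prebid-server",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/prebid-server:latest",
            "mountPoints": [
                {"sourceVolume": "metrics-logs", "containerPath": "/mnt/efs"}
            ],
        }
    ],
    volumes=[
        {
            "name": "metrics-logs",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",
                "transitEncryption": "ENABLED",
            },
        }
    ],
)
```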
Step 11
AWS DataSync replicates rotated log files from Amazon EFS to Amazon S3 on a recurring schedule. DataSync verifies each transferred file and provides a report of the completed work to an AWS Lambda function.
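The solution schedules these transfers automatically; to trigger or inspect a run yourself, the DataSync API can be called directly, as in this sketch with a hypothetical task ARN.

```python
import boto3

datasync = boto3.client("datasync")

# Hypothetical task ARN; the solution schedules this task on a recurring basis.
TASK_ARN = "arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0"

# Kick off one EFS-to-S3 transfer run and check its status.
execution = datasync.start_task_execution(TaskArn=TASK_ARN)
status = datasync.describe_task_execution(
    TaskExecutionArn=execution["TaskExecutionArn"]
)
print(status["Status"], status.get("FilesTransferred"))
```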
Step 12
The DataSyncLogsBucket S3 bucket receives the replicated log files from Amazon EFS, preserving the same folder structure.
Step 13
The clean-up Lambda function runs after the DataSync process completes in step 12 and removes transferred and verified log file data from EFS.
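A minimal sketch of such a clean-up handler is shown below. It assumes the function receives the verified files' paths relative to the EFS mount; the solution derives that list from the DataSync report, and the mount path and event shape here are hypothetical.

```python
import os

# Hypothetical mount point configured for the Lambda function's EFS access.
EFS_MOUNT = "/mnt/efs"

def handler(event, context):
    """Delete log files that DataSync has already transferred and verified.

    A minimal sketch: the event is assumed to carry the verified files'
    paths relative to the EFS mount.
    """
    deleted = []
    for relative_path in event.get("verified_files", []):
        path = os.path.join(EFS_MOUNT, relative_path)
        if os.path.isfile(path):
            os.remove(path)
            deleted.append(relative_path)
    return {"deleted_count": len(deleted)}
```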
Step 14
An AWS Glue job performs an ETL operation on the metrics data in the DataSyncLogsBucket S3 bucket. The ETL operation structures the metric data into a single database with several tables, partitions the physical data, and writes it to the MetricsEtlBucket S3 bucket.
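In outline, the job resembles the following AWS Glue (PySpark) sketch, which reads the raw JSON logs and writes partitioned Parquet. Bucket names and partition keys are hypothetical (the partition columns are assumed to exist in the log records), and the solution's actual job also maintains the catalog tables described in step 16.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw rotated log files that DataSync placed in the landing bucket...
frame = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://datasynclogsbucket-example/"]},
    format="json",
)

# ...and write them back out as partitioned Parquet for efficient queries.
glue_context.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="s3",
    connection_options={
        "path": "s3://metricsetlbucket-example/metrics/",
        "partitionKeys": ["year", "month", "day"],
    },
    format="parquet",
)
job.commit()
```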
Step 15
The MetricsEtlBucket S3 bucket contains the metric log data transformed and partitioned through ETL. The data in this bucket is made available to AWS Glue clients for queries.
Step 16
Many different types of clients use the AWS Glue Data Catalog to access the Prebid Server metric data.
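For example, Amazon Athena can query the catalog tables directly with standard SQL. The sketch below uses hypothetical database, table, and output-bucket names.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical database, table, and result-bucket names; the real names are
# created by the solution's AWS Glue job.
query = athena.start_query_execution(
    QueryString="SELECT * FROM prebid_metrics.counters LIMIT 10",
    QueryExecutionContext={"Database": "prebid_metrics"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-example/"},
)
print("Query execution ID:", query["QueryExecutionId"])
```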