The Prebid Server Deployment on AWS solution allows AWS customers with ad-supported websites to maximize advertising revenue through a community of over 180 advertising platforms. With this AWS Solution, customers can easily deploy and operate Prebid Server, an open-source solution for real-time ad monetization, within their own AWS environment. Customers retain full control over bidding decision logic and transaction data, and can reduce Prebid Server implementation time from months to days. The solution also offers enterprise-grade scalability to handle varying request volumes and strengthens data protection using the robust security capabilities of the AWS Cloud.
Benefits
Scalable, cost-efficient deployment
Fully integrated infrastructure for Prebid Server with high availability, scalability, and low latency.
Observable infrastructure
Operational metrics, logs, business insights, and cost visibility through Amazon QuickSight and AWS Systems Manager integration.
Data ownership
Prebid Server metrics are extracted, transformed, and loaded (ETL) into the AWS Glue Data Catalog, where they are available to various clients.
Technical details
Deploying this solution with the default parameters provisions the following components in your AWS account.
Step 1 A user browses to a page on a website that hosts ads.
Step 2 The publisher's website responds to the browser with the page source and one or more script modules (also called wrappers). These wrappers facilitate real-time bidding by enabling ad requests and responses based on criteria like ad dimensions, types, topics, and other parameters.
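To illustrate how a publisher's wrapper might direct auctions to this deployment, the following is a minimal Prebid.js server-to-server configuration sketch. The CloudFront domain, account ID, timeout, and bidder codes are placeholder assumptions for illustration, not values defined by this solution.

```typescript
// Minimal sketch: point the Prebid.js wrapper at the deployed Prebid Server endpoint.
// The endpoint domain, accountId, and bidder codes below are hypothetical placeholders.
declare const pbjs: any; // Prebid.js global provided by the wrapper script

pbjs.que.push(() => {
  pbjs.setConfig({
    s2sConfig: [{
      accountId: '1',                  // placeholder account ID
      enabled: true,
      bidders: ['bidderA', 'bidderB'], // hypothetical bidder codes
      timeout: 1000,                   // ms budget for the server-side auction
      adapter: 'prebidServer',
      endpoint: 'https://dXXXXXXXX.cloudfront.net/openrtb2/auction',
      syncEndpoint: 'https://dXXXXXXXX.cloudfront.net/cookie_sync',
    }],
  });
});
```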
Step 3 Bid requests from the browser are received at the Amazon CloudFront endpoint, which is integrated with AWS WAF. This step filters out malicious requests, such as penetration attempts or distributed denial-of-service (DDoS) attacks, so that only legitimate traffic enters the solution. Requests can be received over HTTP or HTTPS.
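As a rough sketch of this pattern, the AWS CDK snippet below associates a CloudFront distribution with an AWS WAF web ACL in front of the solution's ALB. The managed rule set, construct names, and behavior settings are illustrative assumptions and may differ from the solution's actual configuration.

```typescript
import { Stack } from 'aws-cdk-lib';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as wafv2 from 'aws-cdk-lib/aws-wafv2';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

// Sketch only: `stack` is an existing CDK stack and `alb` is the solution's ALB.
declare const stack: Stack;
declare const alb: elbv2.ApplicationLoadBalancer;

// A CLOUDFRONT-scoped web ACL with one AWS managed rule group (example choice).
// Note: CLOUDFRONT-scoped web ACLs must be created in the us-east-1 Region.
const webAcl = new wafv2.CfnWebACL(stack, 'PrebidWebAcl', {
  scope: 'CLOUDFRONT',
  defaultAction: { allow: {} },
  visibilityConfig: {
    cloudWatchMetricsEnabled: true,
    metricName: 'PrebidWebAcl',
    sampledRequestsEnabled: true,
  },
  rules: [{
    name: 'CommonRuleSet',
    priority: 0,
    overrideAction: { none: {} },
    statement: {
      managedRuleGroupStatement: { vendorName: 'AWS', name: 'AWSManagedRulesCommonRuleSet' },
    },
    visibilityConfig: {
      cloudWatchMetricsEnabled: true,
      metricName: 'CommonRuleSet',
      sampledRequestsEnabled: true,
    },
  }],
});

// CloudFront distribution in front of the ALB, with the web ACL attached.
new cloudfront.Distribution(stack, 'PrebidDistribution', {
  defaultBehavior: {
    origin: new origins.LoadBalancerV2Origin(alb),
    allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL, // auction requests are POSTs
  },
  webAclId: webAcl.attrArn,
});
```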
Step 4 The request is forwarded to the Application Load Balancer (ALB), which routes it to the least-utilized Prebid Server container in the cluster. The ALB has a public network interface and private interfaces in each subnet hosting the containers within the Amazon Virtual Private Cloud (Amazon VPC).
Step 5 The request arrives at an Amazon Elastic Container Service (Amazon ECS) container, where it is parsed and validated. Concurrent requests are then sent to various bidding services over the internet through the default internet gateway.
Step 6 The NAT gateway and internet gateway enable Prebid Server containers to initiate outbound requests to bidding services and receive responses, facilitating the ad auction process.
Step 7 Bidders receive one or more bid requests over the internet from a Prebid Server container. Bidders respond with zero or more bids for the various requests. The response, including the winning creative(s), is sent back to the browser.
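For orientation, bid responses returned to Prebid Server generally follow the OpenRTB 2.x BidResponse shape. The TypeScript interfaces below are a trimmed, illustrative subset, not the full specification.

```typescript
// Trimmed, illustrative subset of an OpenRTB 2.x bid response.
// The field selection is an assumption for illustration; see the OpenRTB spec for the full schema.
interface Bid {
  id: string;      // bidder-assigned bid ID
  impid: string;   // ID of the impression (ad slot) this bid is for
  price: number;   // bid price, typically expressed as CPM
  adm?: string;    // ad markup (the creative), when returned inline
  crid?: string;   // creative ID
  w?: number;      // creative width in pixels
  h?: number;      // creative height in pixels
}

interface SeatBid {
  seat?: string;   // bidder seat identifier
  bid: Bid[];
}

interface BidResponse {
  id: string;        // echoes the bid request ID
  cur?: string;      // bid currency, for example "USD"
  seatbid?: SeatBid[];
}
```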
Step 8 Amazon CloudWatch collects metrics from the resources handling requests and responses. CloudWatch alarms trigger scaling of the container cluster up or down as load changes.
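One way to express this kind of metric-driven scaling in the AWS CDK is target tracking on the Fargate service. The capacity bounds, CPU target, and cooldowns below are illustrative assumptions, not the solution's actual policy.

```typescript
import { Duration } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';

// Sketch only: `prebidService` stands in for the solution's Fargate service.
declare const prebidService: ecs.FargateService;

// Illustrative capacity bounds; the deployed solution's values may differ.
const scaling = prebidService.autoScaleTaskCount({
  minCapacity: 2,
  maxCapacity: 20,
});

// Add or remove tasks to keep average CPU utilization near the target.
scaling.scaleOnCpuUtilization('PrebidCpuScaling', {
  targetUtilizationPercent: 60,
  scaleInCooldown: Duration.minutes(5),
  scaleOutCooldown: Duration.minutes(1),
});
```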
Step 9 The Prebid ECS service, running on AWS Fargate, tracks cluster health, scales containers up and down, and manages the pool of containers available to the ALB.
Step 10 Metrics log files for each container are stored on a shared Amazon Elastic File System (Amazon EFS) file system using the Network File System (NFS) protocol. The file system is mounted to each Prebid Server container at startup.
Each metrics log file is written for a limited time, then closed and rotated so that it can be included in the next stage of processing. Amazon EFS serves as a temporary staging location; log data is moved to longer-term storage in Amazon Simple Storage Service (Amazon S3) and into AWS Glue.
Step 11 AWS DataSync replicates rotated log files from Amazon EFS to Amazon S3 on a recurring schedule. DataSync verifies each transferred file and provides a report of the completed work to an AWS Lambda function.
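A hedged CDK sketch of a scheduled DataSync task between EFS and S3 follows. The schedule, verification mode, construct name, and location ARNs are illustrative assumptions rather than the solution's actual configuration.

```typescript
import { Stack } from 'aws-cdk-lib';
import * as datasync from 'aws-cdk-lib/aws-datasync';

// Sketch only: the location ARNs would point at the solution's EFS log directory and S3 bucket.
declare const stack: Stack;
declare const efsLocationArn: string; // DataSync location for the EFS log directory
declare const s3LocationArn: string;  // DataSync location for the DataSyncLogsBucket

new datasync.CfnTask(stack, 'PrebidLogSyncTask', {
  sourceLocationArn: efsLocationArn,
  destinationLocationArn: s3LocationArn,
  // Illustrative hourly schedule; the solution's actual cadence may differ.
  schedule: { scheduleExpression: 'rate(1 hour)' },
  // Verify only the files transferred in this run, matching the per-transfer verification described above.
  options: { verifyMode: 'ONLY_FILES_TRANSFERRED' },
});
```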
Step 12 The DataSyncLogsBucket S3 bucket receives the log files replicated from EFS by the DataSync task, preserving the same folder structure.
Step 13 The clean-up Lambda function runs after the DataSync task in step 12 completes and removes the transferred and verified log file data from EFS.
Step 14 An AWS Glue job performs an ETL operation on the metrics data in the DataSyncLogsBucket S3 bucket. The ETL operation structures the metric data into a single database with several tables, partitions the physical data, and writes it to an S3 bucket.
Step 15 The MetricsEtlBucket S3 bucket contains the metric log data transformed and partitioned through ETL. The data in this bucket is made available to AWS Glue clients for queries.
Step 16 Many different types of clients use the AWS Glue Data Catalog to access the Prebid Server metric data.
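For example, a client such as Amazon Athena can query the cataloged metrics through the Glue Data Catalog. The sketch below uses the AWS SDK for JavaScript v3; the database name, table name, and results location are placeholders rather than names defined by this solution.

```typescript
import {
  AthenaClient,
  StartQueryExecutionCommand,
  GetQueryExecutionCommand,
  GetQueryResultsCommand,
} from '@aws-sdk/client-athena';

// Placeholders: the database, table, and results bucket are assumptions for illustration.
const DATABASE = 'prebid_metrics_db';
const QUERY = 'SELECT * FROM prebid_metrics LIMIT 10';
const OUTPUT = 's3://my-athena-results-bucket/prebid/';

async function queryPrebidMetrics(): Promise<void> {
  const athena = new AthenaClient({});

  // Start the query against the Glue Data Catalog database.
  const start = await athena.send(new StartQueryExecutionCommand({
    QueryString: QUERY,
    QueryExecutionContext: { Database: DATABASE },
    ResultConfiguration: { OutputLocation: OUTPUT },
  }));
  const queryExecutionId = start.QueryExecutionId;

  // Poll until the query finishes (simplified; production code should bound retries).
  let state = 'RUNNING';
  while (state === 'RUNNING' || state === 'QUEUED') {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    const status = await athena.send(
      new GetQueryExecutionCommand({ QueryExecutionId: queryExecutionId }),
    );
    state = status.QueryExecution?.Status?.State ?? 'FAILED';
  }

  if (state !== 'SUCCEEDED') {
    throw new Error(`Athena query ended in state ${state}`);
  }

  // Print the first page of results.
  const results = await athena.send(
    new GetQueryResultsCommand({ QueryExecutionId: queryExecutionId }),
  );
  console.log(JSON.stringify(results.ResultSet?.Rows, null, 2));
}

queryPrebidMetrics().catch(console.error);
```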