This Guidance shows how to activate publisher first-party data from Software as a Service (SaaS) environments that support Seller Defined Audiences (SDA). It uses page content without Personally Identifiable Information (PII) to automatically map to industry standard taxonomies, returning the associated SDA identifiers for activation through Real-Time Bidding (RTB).
Please note: [Disclaimer]
Follow the steps in this diagram to deploy this Guidance.
A visitor's browser, mobile client, or Connected TV (CTV) device accesses publisher content containing ad impressions. A header bidding library implementing OpenRTB, such as Prebid.js, is loaded with the page and invokes the on-page data assembler.
The publisher's web tier routes the SDA data request to the internal Application Load Balancer (ALB) private endpoint.
The internal ALB routes the SDA data request to the SDA service fleet on Amazon Elastic Kubernetes Service (Amazon EKS) for processing.
Aerospike runs within the VPC and does not require a VPC endpoint. Configure Aerospike's rack-aware feature for better performance.
The SDA data containing page context and audience taxonomy segment data is returned to the caller through CloudFront. The returned SDA data does not contain a unique ID of the user nor does it reveal a user's identity.
The on-page data assembler sets the fetched page context classification attributes in the site.content top-level object. The audience related data is set within the user.data top-level object. Both of these objects are configured on Prebid.js.
The segtax identifier extension, introduced within these objects for SDA support, identifies which taxonomy the provided segments belong to. In the case of site content, this identifier can reference a custom taxonomy or the standardized IAB Tech Lab Content Taxonomy.
Custom taxonomy types must be registered with IAB Tech Lab to be assigned a number. Prebid.js then submits the bid request to the Supply-Side Platform (SSP).
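As a sketch of how the on-page data assembler might populate these objects, the ortb2 first-party data passed to Prebid.js could look like the following. The provider name, segment IDs, and segtax values here are illustrative examples, not values from this Guidance.

```typescript
// Hypothetical SDA payload as the on-page assembler might set it for Prebid.js.
// The "sda-provider.com" name, segment IDs, and segtax numbers are illustrative.
const ortb2 = {
  site: {
    content: {
      data: [
        {
          name: "sda-provider.com",   // source of the contextual segments
          ext: { segtax: 6 },         // example: an IAB Tech Lab Content Taxonomy ID
          segment: [{ id: "432" }, { id: "771" }],
        },
      ],
    },
  },
  user: {
    data: [
      {
        name: "sda-provider.com",     // source of the audience segments
        ext: { segtax: 4 },           // example: an IAB Tech Lab Audience Taxonomy ID
        segment: [{ id: "88" }],
      },
    ],
  },
};

// In the page, this object would then be handed to Prebid.js before
// requesting bids, for example via: pbjs.setConfig({ ortb2 });
```

Because the payload carries only taxonomy and segment identifiers, no user identity leaves the page, consistent with the privacy property described above.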
The SSP parses the incoming request, resolves the data from the Prebid.js ortb2 object, injects it into the bid stream using the same ortb2 fields, and passes the bid request to its demand sources.
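On the SSP side, resolving this data amounts to filtering the ortb2 data arrays by taxonomy. A minimal sketch, with the object shape assumed from the OpenRTB 2.x Data and Segment objects and hypothetical segment values:

```typescript
// Shapes assumed from the OpenRTB 2.x Data / Segment objects.
interface Segment { id: string }
interface DataObj { name?: string; ext?: { segtax?: number }; segment?: Segment[] }

// Collect the segment IDs announced under a given taxonomy (segtax) number.
function segmentsForTaxonomy(data: DataObj[] | undefined, segtax: number): string[] {
  const ids: string[] = [];
  for (const d of data ?? []) {
    if (d.ext?.segtax === segtax) {
      for (const s of d.segment ?? []) ids.push(s.id);
    }
  }
  return ids;
}

// Example: pull audience segments tagged with a hypothetical taxonomy 4.
const userData: DataObj[] = [
  { name: "sda-provider.com", ext: { segtax: 4 }, segment: [{ id: "88" }, { id: "90" }] },
  { name: "other.com", ext: { segtax: 6 }, segment: [{ id: "432" }] },
];
const audienceIds = segmentsForTaxonomy(userData, 4); // ["88", "90"]
```

Filtering on segtax rather than on the provider name lets the SSP forward segments from any registered taxonomy without per-provider logic.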
Consider the following key components when deploying this Guidance.
The audience data in the NoSQL database is updated by an audience mapping service. This data flow occurs out of band from the RTB process. The publisher could leverage a partner to implement this service.
The contextual data in the NoSQL database is updated by a contextual mapping service. This data flow occurs out of band from the RTB process.
The publisher could utilize the Guidance for Contextual Intelligence for Advertising on AWS to build a contextual mapping service, or leverage a partner to provide this service.
DynamoDB, Aerospike, or any other NoSQL database can be considered for storing audience and contextual data. When using DynamoDB, you can boost query performance with Amazon DynamoDB Accelerator (DAX), which provides in-memory acceleration for DynamoDB tables.
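DAX is a managed, DynamoDB API-compatible cache, so no application changes are needed beyond pointing the client at the DAX endpoint. The read-through pattern it implements can be sketched generically; here `fetchFromTable` is a stand-in for a real DynamoDB GetItem call, and the key and value formats are hypothetical:

```typescript
// Generic read-through cache sketch illustrating the pattern DAX provides for
// DynamoDB: check the cache first, fall back to the table on a miss, then
// populate the cache. `fetchFromTable` stands in for a DynamoDB GetItem call.
type Fetcher = (key: string) => Promise<string | undefined>;

class ReadThroughCache {
  private cache: Record<string, string> = {};

  constructor(private fetchFromTable: Fetcher) {}

  async get(key: string): Promise<string | undefined> {
    if (key in this.cache) return this.cache[key]; // served from memory, no table read
    const value = await this.fetchFromTable(key);  // cache miss: read the table
    if (value !== undefined) this.cache[key] = value;
    return value;
  }
}
```

Since SDA lookups are read-heavy on the RTB hot path, serving repeat page-context keys from memory keeps latency predictable under load.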
Use AWS Graviton processor-based instances for bidder nodes. For additional cost optimization, implement auto scaling.
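On Amazon EKS, pod-level auto scaling for the SDA service fleet might look like the following HorizontalPodAutoscaler. The deployment name, replica bounds, and CPU target are placeholders to tune against your observed request rate and latency goals:

```yaml
# Hypothetical HorizontalPodAutoscaler for the SDA service fleet on Amazon EKS.
# The deployment name, replica counts, and utilization target are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sda-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sda-service
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```

Pairing pod auto scaling with a node-level autoscaler on Graviton instance types scales both compute layers with demand.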
To minimize boot time, pre-install SDA service container images with dependent libraries and binaries. Upload the images to a container registry like Amazon Elastic Container Registry (Amazon ECR).
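For example, dependencies can be baked into the image at build time so containers start without downloading anything at boot. The base image, file names, and commands below are placeholders for illustration:

```dockerfile
# Hypothetical SDA service image: dependent libraries are installed at build
# time so containers start without fetching dependencies at boot.
FROM public.ecr.aws/docker/library/node:20-slim
WORKDIR /app

# Install dependencies first so this layer is cached across code changes.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

The built image would then be tagged and pushed to an Amazon ECR repository (after authenticating with `aws ecr get-login-password`) so EKS nodes can pull it directly.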
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
This Guidance uses a microservices-based approach so that components operate independently of one another, allowing you to easily integrate and deploy changes.
AWS Identity and Access Management (IAM) and AWS Key Management Service (AWS KMS) are AWS services that you can deploy with this Guidance to protect your resources and data. IAM policies grant least-privilege access to data, so that users only have the permissions required to perform a specific task. AWS KMS encrypts data at rest as an additional layer of protection against unauthorized use.
Scalable services and features included in this Guidance, such as autoscaling for Amazon EKS, help you adapt to changes inherent in dynamic workloads. And the deployment pipeline implements and logs configuration changes, allowing you to roll back to a previous state in the case of a disaster.
This Guidance allows you to deploy, update, and scale components individually to meet demand for specific functions, allowing you to experiment with this Guidance and optimize it based on your data.
We recommend using AWS pricing models to help reduce cost. For example, Amazon EC2 Spot Instances offer discounts of up to 90% compared to Amazon EC2 On-Demand Instance pricing. And Amazon EKS and DynamoDB scale based on demand, so you only pay for the resources you actually use.
By extensively using serverless services, you maximize overall resource utilization because compute is used only as needed. This also reduces the overall energy required to operate your workloads. And to minimize the amount of hardware needed to provision this Guidance, AWS Graviton processors deliver high performance with greater energy efficiency.
A detailed guide is provided for you to experiment with this Guidance in your AWS account. It walks through each stage, including deployment, usage, and cleanup.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.