This Guidance helps publishers monetize their assets effectively and create a foundation for broader internal and external data collaboration. The architecture diagram shows how to build a data product that ingests data from advertising log sources, consolidates it in a data lake, and follows a data lifecycle management process. This data product supports yield optimization, ad-hoc queries, and reporting.
Please note: [Disclaimer]
Architecture Diagram
[text]
Step 1
Stream clickstream web and mobile engagement data, along with internally hosted pre-bid server data, into the AWS analytical infrastructure through Amazon Kinesis Data Streams and Amazon API Gateway.
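As a minimal sketch of this ingestion path, a producer could write engagement events to a Kinesis data stream with boto3; the stream name and event payload below are hypothetical.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def put_engagement_event(event: dict) -> None:
    """Write a single clickstream/engagement event to the ingestion stream."""
    kinesis.put_record(
        StreamName="publisher-engagement-stream",   # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["session_id"],           # spreads load across shards
    )

put_engagement_event({
    "session_id": "abc-123",
    "page": "/sports/article-42",
    "ad_slot": "leaderboard",
    "event_type": "impression",
    "ts": "2024-01-01T00:00:00Z",
})
```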
Step 2
Collect the ad log data generated by Publisher Adtech applications through the batch ingestion process. Use AWS Transfer Family to transfer files stored on SFTP servers. Use AWS DataSync to copy files from external cloud data stores.
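As one hedged example, a scheduled job could trigger a previously configured AWS DataSync task to copy ad log files into the ingestion location; the task ARN below is a placeholder.

```python
import boto3

datasync = boto3.client("datasync")

# Start an existing DataSync task that copies ad log files from an
# external cloud data store into the data lake ingestion location.
response = datasync.start_task_execution(
    TaskArn="arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0"  # placeholder
)
print("Task execution:", response["TaskExecutionArn"])
```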
Step 3
Store near real-time streaming data in a hot analytical storage layer that supports low-latency ingestion and concurrent querying. Use analytical databases, such as Apache Pinot or Apache Druid, and host them in Amazon Elastic Kubernetes Service (Amazon EKS) clusters. The Time To Live (TTL) for events ranges from 7 to 30 days. The virtual private cloud (VPC) endpoint securely transfers data from AWS-managed services to VPC-hosted services.
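For instance, if Apache Pinot is the hot store, operational queries can be issued against the Pinot broker's SQL endpoint from inside the VPC; the broker address, table, and column names below are hypothetical.

```python
import requests

PINOT_BROKER = "http://pinot-broker.internal:8099"  # hypothetical in-VPC broker endpoint

# Low-latency aggregation over the last hour of engagement events.
sql = """
SELECT ad_slot, COUNT(*) AS impressions
FROM ad_engagement_events
WHERE event_ts > ago('PT1H')
GROUP BY ad_slot
ORDER BY impressions DESC
LIMIT 20
"""

resp = requests.post(f"{PINOT_BROKER}/query/sql", json={"sql": sql}, timeout=10)
resp.raise_for_status()
for row in resp.json()["resultTable"]["rows"]:
    print(row)
```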
Step 4
Stream and batch load both external and internal data sets into the Amazon Simple Storage Service (Amazon S3) "raw" zone.
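For batch-loaded files, a simple landing convention keyed by source and ingestion date keeps the raw zone organized; the bucket name and key layout below are illustrative assumptions.

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# Land a batch ad log file in the raw zone, keyed by source and ingestion date.
now = datetime.now(timezone.utc)
s3.upload_file(
    Filename="/tmp/adserver_log_20240101.csv.gz",   # illustrative local staging file
    Bucket="publisher-datalake-raw",                # hypothetical raw zone bucket
    Key=f"adlogs/source=adserver/ingest_date={now:%Y-%m-%d}/adserver_log_20240101.csv.gz",
)
```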
Step 5
Use AWS Glue Apache Spark distributed computing jobs to process large volumes of data. Use AWS Step Functions to orchestrate data processing workflows that include AWS Glue and other AWS services.
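One way to express this orchestration is a Step Functions state machine that runs a Glue job synchronously; the sketch below defines a minimal state machine in Amazon States Language, with the Glue job name and IAM role as placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Minimal state machine: run one Glue job synchronously (.sync), then finish.
definition = {
    "StartAt": "RunCleanZoneJob",
    "States": {
        "RunCleanZoneJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "raw-to-clean-adlogs"},   # placeholder Glue job name
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="adlog-processing-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsGlueRole",  # placeholder role
)
```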
Step 6
The logical "clean" zone in Amazon S3 stores the data in a source-like schema and in an analytical, read-optimized format. Use the Apache Parquet format and apply appropriate partitioning and compression techniques. Use the logical "curated" zone to integrate data from various sources and store it in a normalized schema.
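A minimal AWS Glue PySpark job along these lines might read raw JSON ad logs and write partitioned, compressed Parquet to the clean zone; the bucket paths and partition columns are assumptions.

```python
# Sketch of an AWS Glue PySpark job: raw JSON ad logs -> partitioned Parquet in the clean zone.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

raw = spark.read.json("s3://publisher-datalake-raw/adlogs/")       # hypothetical raw zone path

(raw.write
    .mode("append")
    .partitionBy("event_date", "ad_server")                        # assumed partition columns
    .option("compression", "snappy")                               # read-optimized, compressed
    .parquet("s3://publisher-datalake-clean/adlogs/"))             # hypothetical clean zone path

job.commit()
```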
Step 7
Optionally, use Amazon Redshift to build a data warehouse that hosts curated or modeled data. This warehouse will handle repeated analytical queries and dashboards that can benefit from massively parallel processing (MPP) architecture and indexes.
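Curated Parquet data can be loaded into Redshift with a COPY statement; the sketch below issues it through the Redshift Data API, with the cluster, database, schema, and IAM role names as placeholders.

```python
import boto3

redshift_data = boto3.client("redshift-data")

copy_sql = """
COPY adtech.fact_impressions
FROM 's3://publisher-datalake-curated/fact_impressions/'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
FORMAT AS PARQUET;
"""

redshift_data.execute_statement(
    ClusterIdentifier="publisher-adtech-dw",   # placeholder cluster
    Database="analytics",                      # placeholder database
    DbUser="etl_user",                         # placeholder database user
    Sql=copy_sql,
)
```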
Step 8
Use Amazon Athena to perform ad-hoc data discovery and analysis queries.
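For ad hoc discovery, a query can be submitted through the Athena API as in this sketch; the database, table, and results bucket are hypothetical.

```python
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="""
        SELECT ad_server, SUM(revenue) AS revenue
        FROM fact_impressions
        WHERE event_date >= date '2024-01-01'
        GROUP BY ad_server
        ORDER BY revenue DESC
    """,
    QueryExecutionContext={"Database": "adlogs_curated"},                       # placeholder database
    ResultConfiguration={"OutputLocation": "s3://publisher-athena-results/"},   # placeholder bucket
)
print("Query execution ID:", response["QueryExecutionId"])
```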
Step 9
Connect the short-term analytical storage to Amazon QuickSight to build operational reporting dashboards. Use a QuickSight VPC connection to secure the network connection. Embed these dashboards in the web applications that the Ad operations team uses. Access historical data sets in Amazon S3 and Amazon Redshift from QuickSight to build business intelligence (BI) dashboards.
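Dashboard embedding for the Ad operations web application can use the QuickSight embedding API, roughly as sketched below; the account ID, user ARN, and dashboard ID are placeholders.

```python
import boto3

quicksight = boto3.client("quicksight")

response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId="111122223333",                                                   # placeholder account
    UserArn="arn:aws:quicksight:us-east-1:111122223333:user/default/ad-ops-analyst",  # placeholder user
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "f1e2d3c4-0000-1111-2222-333344445555"}    # placeholder dashboard
    },
    SessionLifetimeInMinutes=60,
)
embed_url = response["EmbedUrl"]  # render this URL in an iframe in the ad-ops web application
```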
Step 10
Access the data lake using Amazon SageMaker to train, test, and deploy machine learning (ML) models. Deploy these ML models for use cases such as yield optimization, supply forecasting, and traffic shaping.
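As an illustrative sketch, a yield-prediction model could be trained on curated data with the SageMaker Python SDK and the built-in XGBoost container; the S3 paths and execution role are assumptions.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"   # placeholder execution role

# Built-in XGBoost container trained on curated features exported from the data lake.
xgb = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://publisher-datalake-ml/models/yield/",       # hypothetical model artifact path
    sagemaker_session=session,
)

xgb.fit({"train": TrainingInput("s3://publisher-datalake-curated/yield-training/",
                                content_type="text/csv")})        # hypothetical training data path

# Deploy a real-time endpoint for yield optimization inference.
predictor = xgb.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```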
Step 11
Use AWS Clean Rooms to collaborate with advertisers and measurement providers. This enhances privacy and allows combined analysis of the curated datasets without exposing raw data.
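Within a Clean Rooms collaboration, an analysis runs as a protected query against the configured tables; the sketch below is illustrative, with the membership ID, table and column names, and results bucket as placeholders.

```python
import boto3

cleanrooms = boto3.client("cleanrooms")

response = cleanrooms.start_protected_query(
    type="SQL",
    membershipIdentifier="membership-0123456789abcdef",   # placeholder collaboration membership
    sqlParameters={
        "queryString": """
            SELECT advertiser_campaign, COUNT(DISTINCT hashed_user_id) AS matched_reach
            FROM publisher_impressions p
            JOIN advertiser_conversions a ON p.hashed_user_id = a.hashed_user_id
            GROUP BY advertiser_campaign
        """
    },
    resultConfiguration={
        "outputConfiguration": {
            "s3": {
                "resultFormat": "CSV",
                "bucket": "publisher-cleanrooms-results",  # placeholder results bucket
                "keyPrefix": "reach/",
            }
        }
    },
)
print("Protected query ID:", response["protectedQuery"]["id"])
```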
Step 12
Selectively load curated datasets into Amazon DynamoDB using AWS Glue. Build APIs using AWS Lambda and API Gateway to share data with stakeholders for use cases such as monetization.
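A Lambda function behind API Gateway could serve a curated record from DynamoDB along the lines of this sketch; the table name, key schema, and route are hypothetical.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("curated_ad_revenue")          # hypothetical curated dataset table

def lambda_handler(event, context):
    """API Gateway proxy integration: GET /revenue/{placement_id}."""
    placement_id = event["pathParameters"]["placement_id"]
    result = table.get_item(Key={"placement_id": placement_id})

    if "Item" not in result:
        return {"statusCode": 404, "body": json.dumps({"message": "not found"})}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result["Item"], default=str),  # default=str handles DynamoDB Decimal values
    }
```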
Step 13
Data in the Publisher Advertising data lake supports use cases such as yield optimization and acts as a source of truth for advertising revenue data in enterprise reporting.
Step 14
Use AWS Lake Formation to define fine-grained access controls on AWS Glue Data Catalog tables, columns, and rows in the data lake. AWS Identity and Access Management (IAM) securely manages identities and access to AWS services and resources.
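Column-level access can be granted through the Lake Formation API as in this sketch; the principal ARN, database, table, and column names are assumptions.

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Grant an analyst role SELECT on only non-sensitive columns of a curated table.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/AdOpsAnalyst"},  # placeholder
    Resource={
        "TableWithColumns": {
            "DatabaseName": "adlogs_curated",              # placeholder Glue database
            "Name": "fact_impressions",                    # placeholder Glue table
            "ColumnNames": ["event_date", "ad_server", "placement_id", "revenue"],
        }
    },
    Permissions=["SELECT"],
)
```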
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
To support observability, every service in this Guidance publishes metrics to Amazon CloudWatch, through which you can configure dashboards and alarms. We also recommend that you establish “lessons learned” sessions, retrospectives, and a feedback process in your organization to analyze and resolve potential issues.
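For example, an alarm on a published metric can be configured with a few lines of boto3; the alarm name, stream name, and threshold below are illustrative.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if the ingestion stream starts throttling writes (names and threshold are illustrative).
cloudwatch.put_metric_alarm(
    AlarmName="engagement-stream-write-throttled",
    Namespace="AWS/Kinesis",
    MetricName="WriteProvisionedThroughputExceeded",
    Dimensions=[{"Name": "StreamName", "Value": "publisher-engagement-stream"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
)
```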
Security
IAM policies use least-privilege access, so each policy is restricted to the specific resources and operations it needs. Data at rest in the Amazon S3 buckets is encrypted using AWS Key Management Service (AWS KMS) keys. Data in transit is encrypted and transferred over HTTPS.
All of the Amazon S3 buckets are blocked from public access. AWS managed services access the short-term analytical data storage hosted in Amazon EKS through a VPC endpoint. This prevents traffic from traversing the open internet and being exposed to that environment.
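A hedged sketch of how these bucket-level controls can be applied with boto3; the bucket name and KMS key ARN are placeholders, and the same settings would be repeated for each data lake bucket.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "publisher-datalake-raw"   # illustrative bucket name

# Block all public access on the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Encrypt data at rest with a customer managed AWS KMS key (key ARN is a placeholder).
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            }
        }]
    },
)
```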
Reliability
You can back up Kinesis Data Streams to Amazon S3 and store static content in Amazon S3. Amazon Redshift periodically takes snapshots of the cluster. By default, Amazon Redshift takes a snapshot about every eight hours or following every 5 GB per node of data changes (whichever comes first). For the short-term analytical storage, Amazon EKS on AWS Fargate offers the easiest path to a resilient data plane. Fargate runs each pod in an isolated compute environment. Each pod running on Fargate gets its own worker node. Fargate automatically scales the data plane as Kubernetes scales pods. You can scale both the data plane and your workload by using the horizontal pod autoscaler.
Performance Efficiency
Using serverless technologies, you only provision the exact resources you use. The serverless architecture reduces the amount of underlying infrastructure you need to manage, allowing you to focus on solving your business needs. Each microservice can be scaled according to its own transactions per second (TPS) requirements.
Cost Optimization
You should scope real-time data ingestion to use Kinesis Data Streams provisioned capacity mode. Provisioned capacity mode is best suited for predictable application traffic, applications with traffic that is consistent or ramps gradually, or applications where you can forecast capacity requirements to control costs.
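For example, a stream can be created in provisioned mode with a shard count sized to the forecast throughput; the stream name and shard count here are illustrative.

```python
import boto3

kinesis = boto3.client("kinesis")

# Provisioned capacity: pay for a fixed number of shards sized to forecast traffic
# (each shard ingests up to 1 MB/s or 1,000 records/s).
kinesis.create_stream(
    StreamName="publisher-engagement-stream",      # illustrative stream name
    ShardCount=4,                                  # sized from forecast peak throughput
    StreamModeDetails={"StreamMode": "PROVISIONED"},
)
```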
When AWS Glue performs data transformations, you pay only for infrastructure during the time that processing occurs. Additionally, you can use a tenant isolation model and resource tagging to automate cost usage alerts and measure costs specific to each tenant, application module, and service.
Sustainability
This Guidance uses purpose-built data stores for specific workloads, which minimizes the amount of provisioned resources. For example, the low-latency analytical data store keeps only the latest information needed for operational queries, while Amazon S3 provides storage for the historical data lake.
Implementation Resources
A detailed guide is provided to experiment and use within your AWS account. Each stage of building the Guidance, including deployment, usage, and cleanup, is examined to prepare it for deployment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.