This Guidance helps you centralize operations and set up seamless workflows among agency, advertiser, and publisher teams. It addresses key topics, including audience analysis, identity resolution, campaign insights visualization, personalized customer experiences, and media attribution. By following the best practices outlined, you can drive greater return on ad spend through improved collaboration between media planning, buying, analytics, and creative execution teams. The Guidance showcases how to leverage AWS for audience enrichment, data hygiene, democratizing performance data, and hyper-personalization.
Architecture Diagram
Data Flow
This diagram shows the data flow process for advertising agency planning management. For a detailed reference architecture diagram, open the other tab.
Step 1
The advertiser creates the media plan brief that describes the objective of the advertising campaigns, tied directly to the overarching goals of the business. This can be a prospecting upper-funnel campaign or a specific retargeting lower-funnel campaign.
Step 2
Using the advertiser's in-house customer data solutions, define the target person(s) that represent the ideal customer profile. The output of this step is either a target audience list or a specific target persona (for example, existing customers with CLTV > X).
Step 3
The advertiser asset library provides sample creatives or full campaign creatives to be used during campaign creative configuration.
Step 4
Advertising agency planning management incorporates the media plan brief and target audience to optimize campaign structure and budget across channels.
Step 5
Advertising agency planning management uses purpose-built large language models (LLMs) to create campaign creative artifacts, including text, images, audio, and video.
Step 6
Advertising agency planning management creates the campaign target audience using a custom audience uploaded from the advertiser's in-house customer data solutions or from the campaign brief (for example, demographic settings).
Step 7
Cross-channel advertising campaign execution and delivery happen through publisher platforms and tools.
Step 8
Advertising agency planning management collects campaign and creative performance metrics and makes them available to the advertiser using business intelligence (BI) services.
Step 9
Advertising agency planning management digests the campaign and creative feedback to optimize the next cycle of media plan analysis.
Detailed Architecture Diagram
This architecture diagram shows how to modernize advertising planning management in detail. For the data flow, open the other tab.
Step 1
The advertiser develops the media plan brief artifacts and stores them in Amazon Simple Storage Service (Amazon S3).
Step 2
The advertiser stores branded media creatives in Amazon S3 for advertising agency planning management to consume. Alternatively, advertising agency planning management consumes the creatives from a content management system (CMS) through API integration.
Step 3
Once the media plan brief is uploaded to Amazon S3, an AWS Step Functions workflow containing Amazon Bedrock is initiated. This workflow invokes purpose-built LLMs available in Amazon Bedrock to analyze the media plan brief and extract campaign structure, budget, and targeted channels. Similarly, Amazon Bedrock is used to generate text and image campaign creatives for the new campaigns. The generated assets are stored in Amazon S3.
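For illustration, a minimal sketch of the Amazon Bedrock call behind this step, using the AWS SDK for Python (Boto3). The model ID, bucket, and object key are placeholder assumptions, not values prescribed by this Guidance:

```python
import boto3

s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime")

def analyze_media_plan_brief(bucket: str, key: str) -> str:
    """Read a media plan brief from Amazon S3 and ask an LLM to extract
    campaign structure, budget, and targeted channels."""
    brief = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[{
            "role": "user",
            "content": [{
                "text": (
                    "Extract the campaign structure, budget allocation, and "
                    "targeted channels from this media plan brief as JSON:\n\n"
                    + brief
                )
            }],
        }],
    )
    return response["output"]["message"]["content"][0]["text"]
```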
Step 4
The advertiser collects and stores first-party customer engagement data in the AWS Customer Engagement operational data store.
Step 5
The advertiser collects consumer interaction data in an Amazon S3 data lake through AWS data connector solutions and AWS Partner solutions.
Step 6
The advertiser unifies first-party customer engagement data stored in Amazon S3 using AWS Entity Resolution and onboards it to advertising agency planning management using AWS Clean Rooms. This allows the advertiser to protect sensitive raw data while sharing other dimensions.
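A sketch of creating the AWS Clean Rooms collaboration programmatically; the collaboration name, member account ID, and member abilities below are hypothetical:

```python
import boto3

cleanrooms = boto3.client("cleanrooms")

# Hypothetical accounts: the advertiser creates the collaboration and
# invites the agency planning management account as a member.
response = cleanrooms.create_collaboration(
    name="advertiser-agency-audience-collab",
    description="Share unified audience dimensions without exposing raw data",
    creatorDisplayName="Advertiser",
    creatorMemberAbilities=[],
    members=[{
        "accountId": "111122223333",  # agency planning management account
        "displayName": "Agency",
        "memberAbilities": ["CAN_QUERY", "CAN_RECEIVE_RESULTS"],
    }],
    queryLogStatus="ENABLED",
)
collaboration_id = response["collaboration"]["id"]
```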
Step 7
The AWS Entity Resolution workflow reads the consumer identity data and generates a common ID for consumer records across multiple sources. The output of the AWS Entity Resolution workflow is stored as customer matched records in Amazon S3.
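As a rough sketch, the matching workflow could be started and polled with Boto3; the workflow name is a placeholder assumption:

```python
import time
import boto3

entityres = boto3.client("entityresolution")

WORKFLOW = "consumer-identity-matching"  # hypothetical workflow name

# Start the matching job and poll until the matched records land in S3.
job_id = entityres.start_matching_job(workflowName=WORKFLOW)["jobId"]
while True:
    status = entityres.get_matching_job(workflowName=WORKFLOW, jobId=job_id)["status"]
    if status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(30)
```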
Step 8
AWS Glue extract, transform, load (ETL) jobs consume the AWS Entity Resolution workflow output and generate the unified consumer profile data tables, which are used for campaign audience building. The Step Functions workflow orchestrates the customer data processing between AWS Entity Resolution and AWS Glue. Optionally, a web user interface front end can be developed to control the AWS Entity Resolution workflow and AWS Clean Rooms collaboration creation.
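A minimal sketch of kicking off that orchestration, assuming a Step Functions state machine is already wired to AWS Entity Resolution and AWS Glue; the ARN and names are hypothetical:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical state machine that sequences the Entity Resolution matching
# job and the Glue ETL job that builds the unified profile tables.
sfn.start_execution(
    stateMachineArn=(
        "arn:aws:states:us-east-1:111122223333:"
        "stateMachine:CustomerDataProcessing"
    ),
    input=json.dumps({
        "entityResolutionWorkflow": "consumer-identity-matching",
        "glueJobName": "build-unified-consumer-profiles",
    }),
)
```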
Step 9
Using AWS Clean Rooms machine learning (ML), lookalike audiences are created. In the AWS Clean Rooms collaboration, ad platforms and publishers use their audience interaction data to train the pre-built AWS Clean Rooms lookalike model, and advertisers bring the seed data for audience expansion.
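A tentative sketch of generating the lookalike audience with the AWS Clean Rooms ML API, assuming the publisher has already trained and shared a configured audience model; the ARNs and seed data location are placeholder assumptions:

```python
import boto3

crml = boto3.client("cleanroomsml")

# Placeholder ARNs and S3 location: the advertiser supplies seed records,
# and the shared lookalike model expands them into a larger audience.
job = crml.start_audience_generation_job(
    name="lookalike-prospecting-audience",
    configuredAudienceModelArn=(
        "arn:aws:cleanrooms-ml:us-east-1:111122223333:"
        "configured-audience-model/example"
    ),
    seedAudience={
        "dataSource": {"s3Uri": "s3://advertiser-bucket/seed-audience/"},
        "roleArn": "arn:aws:iam::111122223333:role/CleanRoomsMlAccess",
    },
)
```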
Step 10
The campaign management module is a solution built using AWS services, or consumed as an AWS Partner solution, that manages the advertising campaign components. This module combines the campaign audience and the creatives, carrying out campaigns across multiple channels using file and API integrations.
Step 11
Once the campaigns are live, the target audience starts to engage with the ad placements and generates new customer engagement data through the customer experience layer, such as web and app. The generated customer engagement data is stored in customer engagement storage to be consumed in campaign audience creation.
Step 12
An AWS Clean Rooms data collaboration brings together advertiser first-party data and publisher campaign data to perform campaign performance analysis.
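For illustration, a protected query in the collaboration might look like the following sketch; the membership ID, table names, and output bucket are hypothetical:

```python
import boto3

cleanrooms = boto3.client("cleanrooms")

# Hypothetical membership ID and table names inside the collaboration.
cleanrooms.start_protected_query(
    type="SQL",
    membershipIdentifier="membership-id-example",
    sqlParameters={
        "queryString": (
            "SELECT p.campaign_id, COUNT(DISTINCT a.match_id) AS converters "
            "FROM publisher_impressions p "
            "JOIN advertiser_conversions a ON p.match_id = a.match_id "
            "GROUP BY p.campaign_id"
        )
    },
    resultConfiguration={
        "outputConfiguration": {
            "s3": {
                "resultFormat": "CSV",
                "bucket": "campaign-analysis-results",  # placeholder bucket
                "keyPrefix": "clean-rooms/",
            }
        }
    },
)
```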
Step 13
The campaign performance and creative metrics, such as impressions and clicks, are delivered to Amazon S3 from the campaign management module. The output is consolidated and queried using Amazon Athena, or loaded into Amazon Redshift, and visualized using Amazon QuickSight.
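A minimal sketch of querying the consolidated metrics with Amazon Athena; the database, table, and output location are placeholder assumptions:

```python
import boto3

athena = boto3.client("athena")

# Placeholder database, table, and output location for the metrics
# delivered to Amazon S3 by the campaign management module.
athena.start_query_execution(
    QueryString=(
        "SELECT campaign_id, SUM(impressions) AS impressions, "
        "SUM(clicks) AS clicks, SUM(clicks) * 1.0 / SUM(impressions) AS ctr "
        "FROM campaign_metrics GROUP BY campaign_id"
    ),
    QueryExecutionContext={"Database": "campaign_analytics"},
    ResultConfiguration={"OutputLocation": "s3://campaign-analysis-results/athena/"},
)
```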
Step 14
This data is used by AWS Lambda to engineer prompts for Amazon Bedrock LLMs. The variables being optimized include campaign budget adjustments and the creatives themselves.
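A hypothetical sketch of such a Lambda handler; the event shape and model ID are assumptions for illustration:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def handler(event, context):
    """Turn campaign performance metrics into a prompt and ask a Bedrock
    LLM for budget and creative recommendations for the next cycle."""
    metrics = event["campaignMetrics"]  # assumed event shape
    prompt = (
        "Given these campaign performance metrics, recommend budget "
        "adjustments and creative variations for the next media plan cycle:\n"
        + json.dumps(metrics)
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return {"recommendations": response["output"]["message"]["content"][0]["text"]}
```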
Step 15
Amazon DataZone and AWS Lake Formation define granular access controls on AWS Glue Data Catalog tables in the data lake. AWS Identity and Access Management (IAM) securely manages identities and access to AWS services and resources.
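For illustration, a Lake Formation grant scoping an agency role to read-only access on a profile table might look like this sketch; the role, database, and table names are hypothetical:

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Placeholder role, database, and table: grant read-only access to the
# unified consumer profile table in the AWS Glue Data Catalog.
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/AgencyAnalyst"
    },
    Resource={
        "Table": {
            "DatabaseName": "consumer_profiles",
            "Name": "unified_profiles",
        }
    },
    Permissions=["SELECT"],
)
```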
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
The Guidance uses Step Functions for workflow orchestration, enabling failure anticipation, source identification, and mitigation. The core services (Amazon Bedrock, AWS Clean Rooms, and AWS Entity Resolution) are fully managed, reducing operational burden. Step Functions interacts with these services using direct integration to perform business operations, monitor data flow, and anticipate failures.
Security
Amazon DataZone streamlines data discovery and sharing while maintaining appropriate access levels. This service creates and manages IAM roles between data producers and consumers, granting or revoking Lake Formation permissions for data sharing. By using IAM, you can help ensure that policies have minimum required permissions to limit resource access, reducing unauthorized access risks.
Reliability
Step Functions orchestrates workflows by monitoring AWS Entity Resolution workflow status and through direct service integration with Amazon Bedrock. It automatically handles errors and exceptions with built-in try/catch and retry, and it automatically scales the underlying compute to run the steps of the workflow in response to increases in requests.
Performance Efficiency
You can achieve near real-time business use cases by invoking LLMs through API calls to Amazon Bedrock. AWS Entity Resolution allows for record matching using rule-based or machine learning (ML) models, on demand or automatically. As fully managed services, Amazon Bedrock and AWS Entity Resolution reduce the overhead of managing underlying resources, enhancing performance efficiency.
Cost Optimization
S3 buckets for the campaign analytics module use the S3 Intelligent-Tiering storage class, which automatically reduces storage costs based on data access patterns.
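A minimal sketch of applying S3 Intelligent-Tiering through a lifecycle rule; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket: transition campaign analytics objects to the
# S3 Intelligent-Tiering storage class immediately after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="campaign-analytics-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)
```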
You can review QuickSight author and reader account activity to identify and remove inactive accounts. Removing inactive QuickSight accounts minimizes the number of required subscriptions, further optimizing costs.
Sustainability
Athena's query result reuse feature returns cached results for identical SQL queries run within a specified time period instead of rescanning large datasets. This minimizes redundant compute resource usage, supporting sustainability efforts.
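A minimal sketch of enabling result reuse on an Athena query; the query, database, and output location are placeholder assumptions:

```python
import boto3

athena = boto3.client("athena")

# Reuse the results of an identical query run within the last 60 minutes
# instead of rescanning the dataset (names are placeholders).
athena.start_query_execution(
    QueryString=(
        "SELECT campaign_id, SUM(clicks) AS clicks "
        "FROM campaign_metrics GROUP BY campaign_id"
    ),
    QueryExecutionContext={"Database": "campaign_analytics"},
    ResultConfiguration={"OutputLocation": "s3://campaign-analysis-results/athena/"},
    ResultReuseConfiguration={
        "ResultReuseByAgeConfiguration": {"Enabled": True, "MaxAgeInMinutes": 60}
    },
)
```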
Implementation Resources
A detailed implementation guide is provided to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.