
Guidance for Implementing the Google Privacy Sandbox Aggregation Service on AWS

Overview

This Guidance demonstrates how to deploy the Google Privacy Sandbox “Aggregation Service” within a trusted execution environment (TEE) using AWS services. The Aggregation Service produces event-level or aggregate campaign measurement data from reports generated through the Privacy Sandbox Attribution Reporting API (ARA) or the Private Aggregation API. This Guidance includes several features that help streamline deployment for AWS customers, including:

  1. An overview of end-to-end collection, batching, and orchestration of aggregation jobs by the service
  2. Example implementations of how to perform Avro conversion on records before they are processed by the service (see the sketch after this list)
  3. Example implementations that prepare report batches and enrich data before processing by the service (planned for future releases)
  4. Example implementations of a collection service that exposes endpoints for collecting event-level and summary reports
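
Item 2 refers to converting collected reports into the Avro format expected by the Aggregation Service. The sketch below shows one minimal way to do this in Python with the fastavro library, assuming newline-delimited JSON reports and a simplified three-field report schema (payload, key_id, shared_info); verify the field names and schema against those shipped with the sample code before relying on it.

```python
# Hedged sketch: convert collected JSON aggregatable reports to an Avro batch
# file before submitting it to the Aggregation Service. The schema below is a
# simplified, commonly described report layout and may differ from the one in
# the sample code.
import base64
import json

from fastavro import parse_schema, writer

REPORT_SCHEMA = parse_schema({
    "type": "record",
    "name": "AvroAggregatableReport",
    "fields": [
        {"name": "payload", "type": "bytes"},
        {"name": "key_id", "type": "string"},
        {"name": "shared_info", "type": "string"},
    ],
})


def json_reports_to_avro(json_lines_path: str, avro_path: str) -> None:
    """Read newline-delimited JSON reports and write a single Avro batch file."""
    records = []
    with open(json_lines_path) as src:
        for line in src:
            report = json.loads(line)
            for payload in report["aggregation_service_payloads"]:
                records.append({
                    # Payloads arrive base64-encoded in the JSON report body.
                    "payload": base64.b64decode(payload["payload"]),
                    "key_id": payload["key_id"],
                    "shared_info": report["shared_info"],
                })
    with open(avro_path, "wb") as dst:
        writer(dst, REPORT_SCHEMA, records)
```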

How it works

This architecture diagram shows how AWS customers can deploy the Google Privacy Sandbox Aggregation Service on AWS. It also illustrates the necessary infrastructure for collecting and processing reports generated by the Private Aggregation and Attribution Reporting APIs.
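
As one illustration of the collection path in the diagram, the sketch below shows a minimal HTTP handler that accepts aggregatable reports posted by the browser to the ARA well-known endpoint and forwards them to a Kinesis data stream for downstream batching. The route, stream name, and use of Flask are assumptions for illustration; the collection service in the sample code may be structured differently.

```python
# Hedged sketch of a report collection endpoint: accept aggregatable reports
# posted by the browser and forward them to Kinesis for downstream batching.
# The stream name and route are illustrative placeholders.
import json

import boto3
from flask import Flask, request

app = Flask(__name__)
kinesis = boto3.client("kinesis")
STREAM_NAME = "aggregatable-reports"  # placeholder


@app.post("/.well-known/attribution-reporting/report-aggregate-attribution")
def collect_report():
    report = request.get_json(force=True)
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(report).encode("utf-8"),
        # Partition on shared_info so payloads from the same report stay together.
        PartitionKey=report.get("shared_info", "default"),
    )
    return "", 200
```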

Well-Architected Pillars

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

By harnessing the capabilities of Amazon CloudWatch and Amazon ECS, users can optimize their operational processes, reduce manual interventions, and maintain a high-performing, resilient infrastructure. Specifically, CloudWatch provides comprehensive logging and insights so users can monitor the performance and health of their services running on Amazon ECS. Users can also easily scale their workloads with Amazon ECS and adapt this Guidance to meet their changing demands.
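
A minimal sketch of the scaling side of this, assuming an ECS service for the collection endpoints already exists: register the service with Application Auto Scaling and attach a CPU target-tracking policy, so the CloudWatch metric drives scaling decisions. Cluster, service, and threshold values below are illustrative placeholders.

```python
# Hedged sketch: wire an ECS service into Application Auto Scaling with a
# CPU target-tracking policy, so CloudWatch metrics drive scaling decisions.
# Cluster, service, and threshold values are illustrative placeholders.
import boto3

autoscaling = boto3.client("application-autoscaling")
RESOURCE_ID = "service/aggregation-cluster/collector-service"  # placeholder

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

autoscaling.put_scaling_policy(
    PolicyName="collector-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # keep average CPU around 60%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```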

Read the Operational Excellence whitepaper 

The comprehensive security services of AWS Identity and Access Management (IAM), AWS WAF, and AWS Key Management Service (AWS KMS) work together to fortify workloads. Specifically, IAM policies grant the minimum required access, adhering to the principle of least privilege. AWS KMS protects user data at rest by allowing users to easily manage cryptographic keys. Finally, public-facing endpoints are protected with AWS WAF, shielding workloads from malicious attacks and Distributed Denial of Service (DDoS) threats.
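
As a small illustration of the encryption-at-rest point: when writing report batches to Amazon S3, server-side encryption under a customer managed KMS key can be requested per object. The bucket name, object key, and KMS key alias below are placeholders.

```python
# Hedged sketch: upload a report batch to S3 with server-side encryption under
# a customer managed KMS key. Bucket name, object key, and key alias are placeholders.
import boto3

s3 = boto3.client("s3")

with open("reports_batch.avro", "rb") as body:
    s3.put_object(
        Bucket="aggregation-report-batches",          # placeholder
        Key="2024/05/01/reports_batch.avro",
        Body=body,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/aggregation-reports-key",  # placeholder alias
    )
```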

Read the Security whitepaper 

Several AWS services facilitate workload recovery from failures or disruptions. Amazon S3 provides durable data storage with versioning and replication capabilities, safeguarding data against accidental deletions or application failures. Amazon ECS and Amazon Elastic Compute Cloud (Amazon EC2) distribute workloads across multiple Availability Zones for high availability. Amazon Kinesis Data Streams durably stores and retains data for a configurable retention period so that data is not lost in the event of failures or disruptions. Elastic Load Balancing efficiently distributes traffic across resources, ensuring workloads can handle increased demand during disruptions or spikes.
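
A minimal sketch of two of these settings, assuming the placeholder bucket and stream names used earlier: enable S3 versioning on the report bucket and extend the Kinesis retention period so records survive transient consumer failures.

```python
# Hedged sketch: enable S3 versioning on the report bucket and extend the
# Kinesis retention period so data survives transient consumer failures.
# Bucket and stream names are placeholders.
import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")

s3.put_bucket_versioning(
    Bucket="aggregation-report-batches",  # placeholder
    VersioningConfiguration={"Status": "Enabled"},
)

kinesis.increase_stream_retention_period(
    StreamName="aggregatable-reports",    # placeholder
    RetentionPeriodHours=72,              # default retention is 24 hours
)
```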

Read the Reliability whitepaper 

Optimize the performance of this Guidance with Amazon ECS and AWS Graviton processors. Amazon ECS simplifies the scaling of containerized workloads, allowing users to dynamically adjust their compute resources to meet fluctuating demands. AWS Graviton processors are custom silicon designed for improved price performance, delivering higher throughput and lower latency for requests.
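
One way to target Graviton from ECS is to register a Fargate task definition for the ARM64 architecture, as sketched below. The family name, container image, and sizes are placeholders, and the actual task definitions in the sample code may differ.

```python
# Hedged sketch: register a Fargate task definition targeting ARM64 so the
# containers run on AWS Graviton. Family, image, and sizes are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="report-collector",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    runtimePlatform={
        "cpuArchitecture": "ARM64",       # Graviton
        "operatingSystemFamily": "LINUX",
    },
    containerDefinitions=[{
        "name": "collector",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/collector:latest",  # placeholder
        "essential": True,
    }],
)
```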

Read the Performance Efficiency whitepaper 

Amazon S3, Amazon ECS, and AWS Glue work in tandem to deliver business value at the lowest possible cost while avoiding unnecessary expenses. Amazon S3 allows users to store and retrieve data at scale, paying only for the storage they use without the need to provision and manage physical infrastructure. Amazon ECS dynamically scales compute resources, helping to ensure users only pay for the resources consumed. AWS Glue simplifies extract, transform, and load (ETL) workloads, automatically provisioning the necessary resources and reducing maintenance overhead.
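
A small, hedged example of the storage-cost side: an S3 lifecycle rule that moves already-processed report batches to a cheaper storage class and later expires them. The bucket name, prefix, and timings are illustrative placeholders.

```python
# Hedged sketch: an S3 lifecycle rule that tiers processed report batches to
# cheaper storage and expires them once they are no longer needed.
# Bucket name, prefix, and timings are illustrative placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="aggregation-report-batches",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-and-expire-processed-batches",
            "Status": "Enabled",
            "Filter": {"Prefix": "processed/"},
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 180},
        }]
    },
)
```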

Read the Cost Optimization whitepaper 

By using AWS Graviton-based instances for this Guidance, users can optimize their workloads for environmental efficiency and reduce their carbon footprint. AWS Graviton processors use up to 60% less energy for the same performance than comparable Amazon EC2 instances, helping users contribute to a more sustainable cloud infrastructure.

Read the Sustainability whitepaper 

Deploy with confidence

Ready to deploy? Review the sample code on GitHub for detailed deployment instructions, and deploy it as-is or customize it to fit your needs.

Go to sample code

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.