Guidance for Building a Customer 360 Data Product in a Data Mesh on AWS

Overview

This Guidance shows how to implement a Customer 360 Data Product using a data mesh for a decentralized cloud architecture. With a data mesh framework, you can combine and link data with centrally governed guidelines, helping business teams build and share core data products with their wider organization.

How it works

The architecture diagram below illustrates how to use this solution effectively, showing the key components and their interactions to provide a step-by-step overview of the architecture's structure and functionality.

Well-Architected Pillars

The architecture diagram above is an example of a solution designed with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of these best practices as possible.

A customer 360 data product in a data mesh helps you to collect, transform, govern, and analyze customer data, all from a central location. You can build customer profiles, as well as analyze customer journeys and interactions to create a personalized user experience. The decentralized nature of a data mesh distinguishes it from traditional data warehouses and data lakes. Data products are owned by department leaders and shared across the organization through a data product library built on the AWS Glue Data Catalog. Central governance is established using Lake Formation features, such as tag-based security, access policies, and approval workflows. Changes in data products are captured automatically using AWS Glue crawlers to keep metadata consistent across the data mesh.
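Lake Formation tag-based access control can be sketched with boto3. The sketch below builds the request payload for granting table permissions on any resource that carries a given LF-Tag; the tag key, tag values, and principal role ARN are illustrative assumptions, not names from this Guidance.

```python
# Sketch of Lake Formation tag-based access control (LF-Tags).
# The tag key/values and the principal ARN below are illustrative assumptions.

def build_lf_tag_grant(principal_arn, tag_key, tag_values, permissions):
    """Build the request payload for lakeformation.grant_permissions,
    granting table permissions on any resource carrying the LF-Tag."""
    return {
        "Principal": {"DataLakePrincipalIdentifier": principal_arn},
        "Resource": {
            "LFTagPolicy": {
                "ResourceType": "TABLE",
                "Expression": [{"TagKey": tag_key, "TagValues": tag_values}],
            }
        },
        "Permissions": permissions,
    }

grant = build_lf_tag_grant(
    principal_arn="arn:aws:iam::123456789012:role/MarketingAnalyst",  # assumed role
    tag_key="domain",                                                  # assumed LF-Tag
    tag_values=["customer360"],
    permissions=["SELECT", "DESCRIBE"],
)

# With AWS credentials configured, the grant would be applied like this:
# import boto3
# boto3.client("lakeformation").grant_permissions(**grant)
```

Because the grant is expressed against a tag rather than individual tables, any new data product table carrying the `domain = customer360` tag is covered automatically.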

Read the Operational Excellence whitepaper

When configuring this Guidance, all the data in Amazon S3 is encrypted at rest using AWS Key Management Service (AWS KMS). Also, SageMaker can only access the data through virtual private cloud (VPC) endpoints, meaning the data does not travel through the public internet. Finally, AWS Identity and Access Management (IAM) and Lake Formation are used to control access to the data.
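The encryption-at-rest requirement above can be enforced as a default bucket encryption rule. The sketch below builds the configuration for `s3.put_bucket_encryption` with a customer-managed AWS KMS key; the bucket name and key ARN are assumptions for illustration.

```python
# Sketch of enforcing SSE-KMS default encryption on the data product bucket.
# The bucket name and KMS key ARN are illustrative assumptions.

def build_bucket_encryption(kms_key_arn):
    """Build the ServerSideEncryptionConfiguration payload for
    s3.put_bucket_encryption, with S3 Bucket Keys enabled to reduce
    KMS request costs."""
    return {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_arn,
                },
                "BucketKeyEnabled": True,
            }
        ]
    }

config = build_bucket_encryption(
    "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"
)

# Applied with boto3 (requires credentials):
# import boto3
# boto3.client("s3").put_bucket_encryption(
#     Bucket="customer360-data-products",  # assumed bucket name
#     ServerSideEncryptionConfiguration=config,
# )
```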

Read the Security whitepaper

When configuring this Guidance, data is stored in Amazon S3, an object storage service that offers 99.999999999% durability, providing high durability and fault tolerance cost-effectively. Amazon Athena, QuickSight, and AWS Glue are serverless and help you to query and visualize the data at scale without you needing to worry about provisioning infrastructure. Also, SageMaker offers a broad set of machine learning (ML) services, putting ML in the hands of every developer and data scientist.
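Querying the data product with Athena requires no provisioned infrastructure, only a SQL statement and a result location. The sketch below builds a `start_query_execution` request; the database, table, and output bucket names are assumptions for illustration.

```python
# Sketch of a serverless Athena query over the customer 360 data product.
# Database, table, and output location names are illustrative assumptions.

QUERY = """
SELECT customer_id,
       COUNT(*) AS interaction_count
FROM customer360.interactions
GROUP BY customer_id
ORDER BY interaction_count DESC
LIMIT 10
"""

def build_athena_request(query, database, output_s3_uri):
    """Build the request payload for athena.start_query_execution."""
    return {
        "QueryString": query,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3_uri},
    }

request = build_athena_request(
    QUERY, "customer360", "s3://customer360-athena-results/"
)

# With credentials configured, the query would run like this:
# import boto3
# execution_id = boto3.client("athena").start_query_execution(**request)["QueryExecutionId"]
```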

Read the Reliability whitepaper

Lambda is a serverless compute service that automatically scales up and down with demand, while AWS Glue, Athena, and QuickSight query and visualize the results, helping you monitor performance and maintain efficiency as your business needs evolve.

Read the Performance Efficiency whitepaper

Unlike a centralized data warehouse that replicates and combines data from various sources, data in a data mesh is managed in a federated, decentralized way. Configuring a data mesh helps to minimize both data movement and redundant storage, in several ways. First, Lambda processes data and exposes the data mesh as APIs; because Lambda runs on demand, resources are consumed only for the duration of each invocation. Also, AWS Glue jobs are used to extract, transform, and load (ETL) a batch of users rather than individual records. Moreover, Athena and QuickSight are used to query and visualize the insights in a cost-efficient way. Finally, SageMaker batch inference jobs are used to create predictive insights.
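The "Lambda exposes the data mesh as APIs" step can be sketched as a minimal handler behind an API such as Amazon API Gateway. The profile lookup below is stubbed with an in-memory dictionary; a real handler would query the data product (for example, via Athena or DynamoDB), and all names here are illustrative assumptions.

```python
# Minimal sketch of a Lambda handler exposing a customer 360 profile lookup.
# The in-memory profile store stands in for a query against the data product;
# all identifiers and field names are illustrative assumptions.
import json

_PROFILES = {
    "c-1001": {"customer_id": "c-1001", "segment": "loyal", "ltv": 2450.0},
}

def lambda_handler(event, context):
    """Handle an API Gateway proxy request for /customers/{customer_id}."""
    customer_id = (event.get("pathParameters") or {}).get("customer_id")
    profile = _PROFILES.get(customer_id)
    if profile is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(profile)}

# Example invocation shape (API Gateway proxy integration):
event = {"pathParameters": {"customer_id": "c-1001"}}
response = lambda_handler(event, None)
```

Because the handler only runs per request, no compute is consumed between lookups, which is the on-demand behavior the pillar text describes.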

Read the Cost Optimization whitepaper

Lambda, AWS Glue, Athena, and QuickSight are all serverless services that work on-demand, maximizing the performance and utilization of resources. In addition, SageMaker batch inference jobs run on appropriately sized instances to ensure optimal resource utilization while remaining cost-efficient.

By extensively using serverless services, you maximize overall resource utilization as compute is only used as needed. The efficient use of serverless resources reduces the overall energy required to operate the workload. You can also use the customer carbon footprint tool in the AWS Billing console to calculate and track the environmental impact of the workload over time at an account, Region, and service level.

Read the Sustainability whitepaper

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.