Guidance for Identifying and Resolving Duplicate Customer Records on AWS
Overview
How it works
This section includes an architecture diagram illustrating how to use this solution effectively. The diagram shows the key components and their interactions, walking through the architecture's structure and functionality step by step.
Well-Architected Pillars
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many Well-Architected best practices as possible.
Operational Excellence
AWS Glue integrates with multiple sources and helps ensure data quality by preventing low-quality data from entering downstream systems. By integrating with Lambda and Amazon EventBridge, you can set up an event-driven architecture. If job failures occur, AWS Glue offers automatic retries and workflow features. Amazon CloudWatch Logs makes it easier to monitor job runtimes and debug failures by centralizing logs so you can identify and troubleshoot potential issues.
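As a minimal sketch of the event-driven failure handling described above: EventBridge emits a "Glue Job State Change" event when a job run changes state, and a rule matching failed runs can invoke a Lambda function to retry or alert. The job name below is an illustrative assumption; the `matches` helper only mimics EventBridge's pattern matching for demonstration.

```python
import json

# EventBridge event pattern matching failed or timed-out AWS Glue job runs.
GLUE_FAILURE_PATTERN = {
    "source": ["aws.glue"],
    "detail-type": ["Glue Job State Change"],
    "detail": {
        "state": ["FAILED", "TIMEOUT"],
    },
}


def matches(pattern: dict, event: dict) -> bool:
    """Simplified check that an event satisfies the pattern's fields,
    mimicking how EventBridge would evaluate the rule."""
    if event.get("source") not in pattern["source"]:
        return False
    if event.get("detail-type") not in pattern["detail-type"]:
        return False
    return event.get("detail", {}).get("state") in pattern["detail"]["state"]


if __name__ == "__main__":
    # Sample event shaped like the ones Glue publishes; "dedupe-job" is a
    # hypothetical job name.
    sample_event = {
        "source": "aws.glue",
        "detail-type": "Glue Job State Change",
        "detail": {"jobName": "dedupe-job", "state": "FAILED"},
    }
    print(json.dumps(GLUE_FAILURE_PATTERN, indent=2))
    print("matched:", matches(GLUE_FAILURE_PATTERN, sample_event))
```

In practice you would attach this pattern to an EventBridge rule whose target is the Lambda function that restarts the job or raises an alarm.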
Read the Operational Excellence whitepaper
Security
This Guidance handles sensitive data, such as personally identifiable information (PII). As such, it is crucial that you authorize and grant access to AWS resources using AWS Identity and Access Management (IAM). IAM secures resources by granting identities only the minimum permissions they need, preventing unauthorized access. Additionally, AWS Key Management Service (AWS KMS) can encrypt data on Amazon Simple Storage Service (Amazon S3) and Neptune to protect data at rest.
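A minimal sketch of what a least-privilege IAM policy for the pipeline's Glue job role might look like, built as a plain Python dictionary. The bucket name, prefix, and KMS key ARN are illustrative assumptions; scope the actions and resources down further to match your actual pipeline.

```python
import json

BUCKET = "example-customer-records"  # hypothetical bucket name
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example"  # hypothetical key


def glue_job_policy(bucket: str, kms_key_arn: str) -> dict:
    """Build a least-privilege policy document for the Glue job role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Read/write only the pipeline's prefix, not the whole bucket
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/pipeline/*",
            },
            {
                # Allow use of only the single KMS key backing S3 SSE-KMS
                "Effect": "Allow",
                "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
                "Resource": kms_key_arn,
            },
        ],
    }


if __name__ == "__main__":
    print(json.dumps(glue_job_policy(BUCKET, KMS_KEY_ARN), indent=2))
```

Attaching a policy like this to the job role, rather than a broad managed policy, keeps PII access limited to the resources the pipeline actually touches.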
Read the Security whitepaper
Reliability
AWS Glue and Neptune are serverless and operate across multiple Availability Zones (AZs) in a resilient manner. AWS Glue allows for data replication across AWS Regions, and you can also easily port AWS Glue jobs for ETL or ML into different Regions. AWS Glue ML transforms can be trained in multiple Regions, and you can run the transform jobs in those Regions for high availability. Additionally, Neptune supports multi-AZ deployment by allowing you to specify multi-AZ when creating a database cluster.
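As a sketch of the multi-AZ Neptune deployment mentioned above, the following builds the parameter dictionaries you might pass to the boto3 Neptune client's `create_db_cluster` and `create_db_instance` calls. Cluster identifier, AZ names, and instance class are illustrative assumptions; Neptune gains AZ-level resilience from a writer plus read replicas placed in different AZs.

```python
CLUSTER_ID = "customer-dedupe-graph"  # hypothetical cluster identifier
AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]  # hypothetical AZ names


def cluster_params(cluster_id: str, azs: list) -> dict:
    """Parameters for neptune_client.create_db_cluster(**...)."""
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "neptune",
        "AvailabilityZones": azs,   # spread cluster storage across AZs
        "StorageEncrypted": True,   # encrypt at rest with AWS KMS
    }


def instance_params(cluster_id: str, index: int) -> dict:
    """Parameters for neptune_client.create_db_instance(**...).

    Call once per instance (one writer plus replicas) so the cluster
    survives the loss of a single AZ.
    """
    return {
        "DBInstanceIdentifier": f"{cluster_id}-{index}",
        "DBInstanceClass": "db.r5.large",  # assumption; size to your workload
        "Engine": "neptune",
        "DBClusterIdentifier": cluster_id,
    }
```

You would pass these dictionaries to the real boto3 calls in a provisioning script; they are shown as plain data here so the shape of the request is easy to inspect.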
Read the Reliability whitepaper
Performance Efficiency
A typical data pipeline will run the initial load on a large volume of data, followed by delta loads with a relatively small volume of data. AWS Glue record matching supports incremental matching, where only a small set of workers is needed to process the incremental data. In this Guidance, we've chosen Parquet, an efficient columnar format, to store the data on Amazon S3, which is well optimized for working with AWS Glue. Additionally, with AWS Glue, you don't need to plan ahead for the number of workers you use—you can start with a small number of workers and scale automatically when you need more compute.
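A minimal sketch of how the incremental pattern above might translate into `start_job_run` parameters: enabling Glue job bookmarks makes each run process only data added since the last run, so a small worker count suffices after the initial full load. The job name and the `--output_format` argument name are illustrative assumptions (the latter is a user-defined job argument, not a built-in Glue flag).

```python
def incremental_job_args(job_name: str, workers: int = 2) -> dict:
    """Parameters for glue_client.start_job_run(**...).

    Job bookmarks let Glue skip already-processed data, so delta loads
    run with a handful of small workers instead of the initial fleet.
    """
    return {
        "JobName": job_name,
        "WorkerType": "G.1X",       # small workers for incremental runs
        "NumberOfWorkers": workers,
        "Arguments": {
            # Built-in Glue option enabling incremental processing
            "--job-bookmark-option": "job-bookmark-enable",
            # Hypothetical user argument the job script reads to pick its
            # sink format; Parquet is the format chosen in this Guidance
            "--output_format": "parquet",
        },
    }
```

For the initial bulk load you would pass a larger `workers` value, then drop back to the default for the recurring delta runs.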
Read the Performance Efficiency whitepaper
Cost Optimization
AWS Glue jobs for ETL offer cost savings through a pay-as-you-go pricing model, which means you only pay for the resources you use. Additionally, AWS Glue allows you to run workloads on spare AWS capacity, such as the AWS Glue Flex execution option. You can choose Standard or Flex worker types based on the time sensitivity of your workloads. To further optimize costs, you can choose the right data format and compression technique for data stored on Amazon S3. Additionally, you can start with Neptune Serverless to avoid capacity calculations, and then use historical patterns to identify the right instance size for your needs.
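The Standard-versus-Flex choice above can be sketched as a small helper that picks the `ExecutionClass` value for `start_job_run` based on how time-sensitive the workload is: Flex runs on spare AWS capacity at lower cost but may wait for resources, while Standard starts promptly.

```python
def execution_class(time_sensitive: bool) -> dict:
    """Pick the Glue ExecutionClass for glue_client.start_job_run(**...).

    FLEX uses spare capacity at reduced cost (fine for non-urgent batch
    dedup runs); STANDARD provisions resources promptly for urgent jobs.
    """
    return {"ExecutionClass": "STANDARD" if time_sensitive else "FLEX"}
```

For a nightly duplicate-matching run with a generous completion window, `execution_class(time_sensitive=False)` would keep costs down without affecting results.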
Read the Cost Optimization whitepaper
Sustainability
Amazon S3 and AWS Glue are managed services that scale to meet peak workloads. This helps you avoid overprovisioning resources, reducing waste across operations.
Read the Sustainability whitepaper