[SEO Subhead]
This Guidance illustrates how to use an AWS Glue machine learning (ML) transform, AWS Lake Formation FindMatches, to harmonize, or de-duplicate, customer data from different sources. In today’s digital world, data is generated by a large number of disparate sources and is growing at an exponential rate. Companies face the daunting task of ingesting all of this data, cleansing it, and using it to generate customer insights. This Guidance provides an ML-based, probabilistic approach to help you build a complete customer profile and deliver a better customer experience.
Please note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
Using an AWS Glue crawler, catalog the raw property and auto insurance data as tables in the AWS Glue Data Catalog.
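The following is a minimal sketch of this step using boto3; the bucket paths, database, crawler, and role names are illustrative placeholders rather than values from this Guidance’s sample code.

# Create and start a crawler over the raw property and auto insurance prefixes.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="raw-insurance-crawler",                           # hypothetical crawler name
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",  # placeholder IAM role
    DatabaseName="insurance_raw",                           # target Data Catalog database
    Targets={
        "S3Targets": [
            {"Path": "s3://example-bucket/raw/property/"},
            {"Path": "s3://example-bucket/raw/auto/"},
        ]
    },
)
glue.start_crawler(Name="raw-insurance-crawler")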
Step 2
Using an AWS Glue extract, transform, and load (ETL) job, transform the raw insurance data into a CSV format that the Amazon Neptune Bulk Loader can accept.
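As a rough sketch of this transformation, the following Glue ETL (PySpark) snippet reads a cataloged table and writes vertex rows in the Gremlin CSV layout (~id, ~label, property columns) that the Bulk Loader expects; the table, column, and bucket names are assumptions for illustration.

# Reshape raw auto policy records into Neptune Bulk Loader vertex CSV files.
from awsglue.context import GlueContext
from pyspark.context import SparkContext
from pyspark.sql import functions as F

glue_context = GlueContext(SparkContext.getOrCreate())

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="insurance_raw", table_name="auto_policies"
)
df = dyf.toDF()

vertices = df.select(
    F.col("policy_id").alias("~id"),        # unique vertex id
    F.lit("auto_policy").alias("~label"),   # vertex label
    F.col("customer_name"),
    F.col("premium"),
)

vertices.coalesce(1).write.mode("overwrite").option("header", "true").csv(
    "s3://example-bucket/neptune/vertices/auto_policy/"
)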
Step 3
When the data is in CSV format, use an Amazon SageMaker Jupyter notebook to run a PySpark script that loads the raw data into Neptune and visualizes it in the notebook.
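One way to start the bulk load from a notebook cell is to call the Neptune loader endpoint directly, as in this sketch; the cluster endpoint, S3 source, and IAM role ARN are placeholders, and if IAM database authentication is enabled the request must also be SigV4-signed.

# Trigger the Neptune Bulk Loader and print the returned load ID.
import requests

loader_endpoint = (
    "https://my-neptune-cluster.cluster-example.us-east-1.neptune.amazonaws.com:8182/loader"
)

response = requests.post(
    loader_endpoint,
    json={
        "source": "s3://example-bucket/neptune/vertices/",
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::111122223333:role/NeptuneLoadFromS3",
        "region": "us-east-1",
        "failOnError": "TRUE",
    },
)
print(response.json())  # poll GET <loader_endpoint>/<loadId> to check load status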
Step 4
Run an AWS Glue ETL job to merge the raw property and auto insurance data into one dataset, and catalog the merged dataset. This dataset still contains duplicates, and no relationships are built between the auto and property insurance data at this point.
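A simplified version of the merge step might look like the following PySpark sketch, which stacks the two raw tables and registers the result in the Data Catalog; the database, table, and path names are illustrative assumptions.

# Merge raw auto and property insurance data into one cataloged dataset.
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

auto = glue_context.create_dynamic_frame.from_catalog(
    database="insurance_raw", table_name="auto_policies").toDF()
prop = glue_context.create_dynamic_frame.from_catalog(
    database="insurance_raw", table_name="property_policies").toDF()

# Align on column names and stack the records; duplicates remain by design.
merged = auto.unionByName(prop, allowMissingColumns=True)

# Write the merged data to S3 and create/update the catalog table in one step.
sink = glue_context.getSink(
    connection_type="s3",
    path="s3://example-bucket/merged/",
    enableUpdateCatalog=True,
    updateBehavior="UPDATE_IN_DATABASE",
)
sink.setCatalogInfo(catalogDatabase="insurance_raw", catalogTableName="merged_policies")
sink.setFormat("glueparquet")
sink.writeFrame(DynamicFrame.fromDF(merged, glue_context, "merged"))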
Step 5
Create and train an AWS Glue ML transform (FindMatches) to harmonize the merged data, remove duplicates, and build relationships between related records.
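The transform can be created and prepared for training with boto3, as in this sketch; the transform name, role, primary key column, and tuning values are assumptions, and the generated labeling file still has to be labeled by hand and imported before the transform is trained.

# Create a FindMatches ML transform over the merged table and start labeling-set generation.
import boto3

glue = boto3.client("glue")

transform = glue.create_ml_transform(
    Name="insurance-findmatches",
    Role="arn:aws:iam::111122223333:role/GlueMLTransformRole",
    InputRecordTables=[{"DatabaseName": "insurance_raw", "TableName": "merged_policies"}],
    Parameters={
        "TransformType": "FIND_MATCHES",
        "FindMatchesParameters": {
            "PrimaryKeyColumnName": "record_id",      # hypothetical primary key column
            "PrecisionRecallTradeoff": 0.9,
            "AccuracyCostTradeoff": 0.9,
            "EnforceProvidedLabels": False,
        },
    },
    WorkerType="G.1X",
    NumberOfWorkers=10,
)

# Produce a labeling file in S3; label it, then import the labels to train the transform.
glue.start_ml_labeling_set_generation_task_run(
    TransformId=transform["TransformId"],
    OutputS3Path="s3://example-bucket/findmatches/labeling/",
)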
Step 6
Run the AWS Glue ML transform job. The job also catalogs the harmonized data in the Data Catalog and transforms it into a CSV format that the Neptune Bulk Loader can accept.
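Inside the ETL job, the trained transform is applied with the FindMatches class from the AWS Glue ML library, roughly as sketched below; the transform ID, table name, and output path are placeholders.

# Apply the trained FindMatches transform and write Neptune-ready CSV output.
from awsglue.context import GlueContext
from awsglueml.transforms import FindMatches
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

merged = glue_context.create_dynamic_frame.from_catalog(
    database="insurance_raw", table_name="merged_policies"
)

# FindMatches adds a match_id column that groups records referring to the same customer.
harmonized = FindMatches.apply(frame=merged, transformId="tfm-0123456789abcdef")

glue_context.write_dynamic_frame.from_options(
    frame=harmonized,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/neptune/harmonized/"},
    format="csv",
    format_options={"writeHeader": True},
)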
Step 7
When the data is in CSV format, use a Jupyter notebook to run a PySpark script that loads the harmonized data into Neptune, and then visualize the data in the notebook.
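If the notebook has the graph-notebook extension installed (as Neptune workbench notebooks do), a cell like the following, using a hypothetical vertex label, pulls a small sample of the harmonized graph; the Graph tab of the cell output renders it visually.

%%gremlin -p v,oute,inv
g.V().hasLabel('customer').limit(10).outE().inV().path()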
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
AWS Glue integrates with multiple sources and helps ensure data quality by preventing low-quality data from entering downstream systems. By integrating with AWS Lambda and Amazon EventBridge, you can set up an event-driven architecture. If job failures occur, AWS Glue offers multiple retries and workflow features. Amazon CloudWatch Logs makes it easier to monitor job runtimes and debug failures by centralizing the logs you need to troubleshoot.
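As one hedged example of that event-driven pattern, the following boto3 sketch creates an EventBridge rule that invokes a hypothetical AWS Lambda function whenever a Glue job fails or times out; the rule name, function ARN, and account values are placeholders.

# Route Glue job failure events to a notification Lambda function.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="glue-job-failure-rule",
    EventPattern=json.dumps({
        "source": ["aws.glue"],
        "detail-type": ["Glue Job State Change"],
        "detail": {"state": ["FAILED", "TIMEOUT"]},
    }),
)
events.put_targets(
    Rule="glue-job-failure-rule",
    Targets=[{
        "Id": "notify-on-failure",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:notify-on-failure",
    }],
)
# The target function also needs a resource-based permission allowing events.amazonaws.com to invoke it.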
Security
This Guidance handles sensitive data, such as personally identifiable information (PII). As such, it is crucial that you authorize and grant access to AWS resources using AWS Identity and Access Management (IAM). IAM secures resources by granting identities only the minimum level of permissions they need, preventing unauthorized access. Additionally, AWS Key Management Service (AWS KMS) can encrypt data on Amazon Simple Storage Service (Amazon S3) and Neptune to protect data at rest.
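For example, encryption of AWS Glue job output can be enforced with a security configuration that references a customer managed AWS KMS key, as in this sketch; the key ARN and configuration name are placeholders.

# Encrypt Glue job output on S3, CloudWatch logs, and job bookmarks with KMS.
import boto3

glue = boto3.client("glue")
kms_key_arn = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

glue.create_security_configuration(
    Name="insurance-etl-encryption",
    EncryptionConfiguration={
        "S3Encryption": [{"S3EncryptionMode": "SSE-KMS", "KmsKeyArn": kms_key_arn}],
        "CloudWatchEncryption": {"CloudWatchEncryptionMode": "SSE-KMS", "KmsKeyArn": kms_key_arn},
        "JobBookmarksEncryption": {"JobBookmarksEncryptionMode": "CSE-KMS", "KmsKeyArn": kms_key_arn},
    },
)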
Reliability
AWS Glue and Neptune are serverless and operate across multiple Availability Zones (AZs) in a resilient manner. AWS Glue allows for data replication across AWS Regions, and you can easily port AWS Glue ETL or ML jobs to different Regions. AWS Glue ML transforms can be trained in multiple Regions so that the transform jobs remain highly available. Additionally, Neptune supports multi-AZ deployment: specify Multi-AZ when creating a database cluster.
Performance Efficiency
A typical data pipeline runs an initial load over a large volume of data, followed by delta loads with relatively small volumes. AWS Glue record matching supports incremental matching, where only a small set of workers is needed to process the incremental data. In this Guidance, we’ve chosen an efficient data format, Apache Parquet, to store the data on Amazon S3, which provides an optimized storage format for working with AWS Glue. Additionally, with AWS Glue you don’t need to plan the number of workers ahead of time: you can start with a small number of workers and scale automatically when you need more compute.
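A minimal sketch of the Parquet choice, with illustrative paths, is converting an intermediate CSV dataset into compressed Parquet so that downstream jobs scan less data:

# Convert intermediate CSV data to Snappy-compressed Parquet on Amazon S3.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.option("header", "true").csv("s3://example-bucket/merged/")
df.write.mode("overwrite").option("compression", "snappy").parquet(
    "s3://example-bucket/merged-parquet/"
)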
Cost Optimization
AWS Glue ETL jobs offer cost savings through a pay-as-you-go pricing model, which means you only pay for the resources you use. Additionally, AWS Glue lets you run workloads on spare AWS capacity through the AWS Glue Flex execution option; you can choose the Standard or Flex execution class based on the time sensitivity of your workloads. To further optimize costs, choose the right data format and compression technique for data stored on Amazon S3. You can also start with Neptune Serverless to avoid capacity calculations, and then use historical usage patterns to identify the right instance size for your needs.
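As a sketch of the Flex option, a non-time-sensitive job can be registered on the FLEX execution class through boto3; the job name, script location, and role below are placeholders.

# Register an ETL job that runs on spare capacity at a lower price.
import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="merge-insurance-data-flex",
    Role="arn:aws:iam::111122223333:role/GlueJobRole",
    Command={"Name": "glueetl", "ScriptLocation": "s3://example-bucket/scripts/merge_job.py"},
    GlueVersion="3.0",
    WorkerType="G.1X",
    NumberOfWorkers=10,
    ExecutionClass="FLEX",  # Flex is available for Glue 3.0+ Spark jobs
)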
Sustainability
Amazon S3 and AWS Glue are managed services that scale to meet peak workloads. This helps you avoid overprovisioning resources, reducing waste across your operations.
Implementation Resources
A detailed guide is provided for you to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup, to prepare it for deployment in your environment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.