[SEO Subhead]
This Guidance demonstrates how to build a modern, serverless data lake on AWS tailored for the insurance industry. It enables you to collect data from disparate core systems and third parties, set up self-service data access, and lay the foundation for business intelligence (BI) and machine learning (ML) capabilities that drive informed decision-making. The data lake architectural pattern in this Guidance helps you get started on the cloud quickly and reduces the time it takes to extract value from your data.
Please note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
Business analysts define the data pipeline operations using low-code configuration files stored in an Amazon Simple Storage Service (Amazon S3) bucket. Data sources upload source data files, such as policies and claims, to the Collect S3 bucket.
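As an illustration of what a low-code pipeline specification might look like, the sketch below reads a hypothetical JSON transform specification from the configuration bucket. The bucket name, key layout, and field names are assumptions for illustration, not the Guidance's actual schema.

```python
# A minimal sketch of loading a hypothetical low-code pipeline configuration
# from an S3 bucket. Bucket, key, and field names are illustrative only.
import json
import boto3

s3 = boto3.client("s3")

def load_pipeline_config(bucket: str, key: str) -> dict:
    """Fetch and parse a JSON transform specification stored in Amazon S3."""
    response = s3.get_object(Bucket=bucket, Key=key)
    return json.loads(response["Body"].read())

config = load_pipeline_config(
    bucket="example-etl-config-bucket",   # assumed configuration bucket
    key="transform-spec/policies.json",   # assumed key layout
)
print(config.get("transforms", []))       # e.g. lookup, hash, currency cast
```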
Step 2
An ObjectCreated event invokes an AWS Lambda function that reads metadata from the incoming source data, logs all actions, and starts the AWS Step Functions workflow.
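A minimal sketch of this trigger function is shown below. It assumes the standard S3 event notification payload and a STATE_MACHINE_ARN environment variable; the Guidance's actual function also records audit entries, which are omitted here.

```python
# Sketch of an S3 ObjectCreated-triggered Lambda handler that starts the
# Step Functions workflow for each new source object.
import json
import os
from urllib.parse import unquote_plus

import boto3

sfn = boto3.client("stepfunctions")

def lambda_handler(event, context):
    """Start the ETL state machine for each newly created source object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # keys arrive URL-encoded
        sfn.start_execution(
            stateMachineArn=os.environ["STATE_MACHINE_ARN"],
            input=json.dumps({"source_bucket": bucket, "source_key": key}),
        )
    return {"statusCode": 200}
```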
Step 3
Step Functions calls AWS Glue jobs that map the data to your predefined data dictionary. These jobs then perform the transformations and data quality checks for both the Cleanse and Consume layers.
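The sketch below shows the shape of such a Cleanse-layer AWS Glue job, with an illustrative hard-coded mapping; in the Guidance, the mappings and data quality rules come from the low-code configuration files rather than the job script.

```python
# Sketch of a Cleanse-layer AWS Glue job that renames and casts columns
# according to a data dictionary. Column names and job arguments are assumed.
import sys

from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Read raw policy data from the Collect bucket.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": [args["source_path"]]},
    format="csv",
    format_options={"withHeader": True},
)

# Map raw column names and types to the data dictionary (assumed fields).
cleansed = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("POLICY_NO", "string", "policy_number", "string"),
        ("EFF_DT", "string", "effective_date", "string"),
        ("PREMIUM", "string", "written_premium", "double"),
    ],
)
```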
Step 4
Amazon DynamoDB stores the lookup values used by the lookup and multi-lookup transforms, along with extract, transform, and load (ETL) metadata such as job audit logs, data lineage output logs, and data quality results.
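For example, a job step might record its audit entry with a sketch like the following; the table name and attribute names are illustrative assumptions, not the Guidance's actual schema.

```python
# Sketch of writing a job audit record to DynamoDB after an ETL step completes.
import datetime

import boto3

dynamodb = boto3.resource("dynamodb")
audit_table = dynamodb.Table("etl-job-audit")  # assumed table name

def write_job_audit(execution_id: str, dataset: str, status: str, rows: int) -> None:
    """Record one pipeline step outcome for operational dashboards."""
    audit_table.put_item(
        Item={
            "execution_id": execution_id,
            "dataset": dataset,
            "status": status,
            "row_count": rows,
            "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
    )

write_job_audit("exec-0001", "claims", "SUCCEEDED", 12458)
```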
Step 5
AWS Glue jobs store cleansed and curated data in Amazon S3 as compressed, partitioned Apache Parquet files. AWS Glue jobs also create and update the AWS Glue Data Catalog databases and tables.
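Continuing the Step 3 sketch, the snippet below shows one way a job can write partitioned Parquet while keeping the Data Catalog in sync, using getSink with enableUpdateCatalog. The database, table, and partition names are assumptions.

```python
# Continues the Step 3 sketch: `glue_context`, `args`, and the `cleansed`
# DynamicFrame are assumed from that example.
sink = glue_context.getSink(
    connection_type="s3",
    path=args["target_path"],                 # Cleanse-layer S3 prefix
    enableUpdateCatalog=True,                 # keep the Data Catalog table in sync
    updateBehavior="UPDATE_IN_DATABASE",
    partitionKeys=["year", "month"],          # assumed partition scheme
    compression="snappy",                     # compressed Parquet output
)
sink.setCatalogInfo(
    catalogDatabase="insurancelake_cleanse",  # assumed database name
    catalogTableName="policies",              # assumed table name
)
sink.setFormat("glueparquet")
sink.writeFrame(cleansed)
```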
Step 6
AWS Glue jobs store source data file validation failures in an Amazon S3 Quarantine folder and a corresponding Data Catalog table, which can populate an exception queue dashboard for human review and corrective action.
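A sketch of this quarantine step is shown below. It continues the `glue_context` and `cleansed` names from the Step 3 sketch and assumes a boolean `dq_failed` column added by earlier data quality checks; the bucket and prefix names are illustrative.

```python
# Continues the Step 3 sketch; assumes earlier checks flagged failing rows
# with a boolean `dq_failed` column. Bucket and prefix names are illustrative.
from awsglue.transforms import Filter

quarantined = Filter.apply(frame=cleansed, f=lambda row: row["dq_failed"])

glue_context.write_dynamic_frame.from_options(
    frame=quarantined,
    connection_type="s3",
    connection_options={
        "path": "s3://example-cleanse-bucket/quarantine/policies/"
    },
    format="glueparquet",
)
```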
Step 7
Amazon Athena runs SQL queries using the Data Catalog databases and tables.
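As an example of ad hoc access, the sketch below starts an Athena query against the Data Catalog using boto3; the database, table, columns, workgroup, and result location are illustrative assumptions.

```python
# Sketch of running a SQL query against the Consume-layer catalog with Athena.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString=(
        "SELECT policy_number, written_premium "
        "FROM insurancelake_consume.policies "
        "WHERE year = '2024' LIMIT 10"
    ),
    QueryExecutionContext={"Database": "insurancelake_consume"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    WorkGroup="primary",
)
print(response["QueryExecutionId"])
```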
Step 8
Amazon QuickSight dashboards and reports pull data from the data lake on a near real-time or scheduled basis.
Step 9
AWS CodePipeline manages the full DevSecOps cycle for the infrastructure, application, and pipeline configuration.
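A minimal sketch of this deployment pipeline using CDK Pipelines in Python is shown below. It assumes a Git repository connected through AWS CodeConnections; the repository, branch, and connection ARN are placeholders, not the Guidance's actual values.

```python
# Sketch of a self-mutating CDK pipeline that builds and deploys the data lake stacks.
from aws_cdk import App, Stack, pipelines
from constructs import Construct

class DataLakePipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        pipelines.CodePipeline(
            self,
            "InfrastructurePipeline",
            synth=pipelines.ShellStep(
                "Synth",
                input=pipelines.CodePipelineSource.connection(
                    "example-org/insurance-data-lake",  # assumed repository
                    "main",
                    connection_arn=(
                        "arn:aws:codestar-connections:us-east-1:"
                        "111111111111:connection/example-id"  # placeholder ARN
                    ),
                ),
                commands=[
                    "pip install -r requirements.txt",
                    "npx cdk synth",
                ],
            ),
        )

app = App()
DataLakePipelineStack(app, "InsuranceDataLakePipeline")
app.synth()
```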
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Lambda functions, Step Functions state machines, and AWS Glue jobs write diagnostic and status information to Amazon CloudWatch Logs. Data lineage, job audit data, and data quality results stored in DynamoDB enable publishing metrics and audit data to operational dashboards. Automated deployment of data lake environments through CodePipeline, along with consistent tagging of infrastructure and ETL resources across stacks, facilitates centralized customization in the AWS Cloud Development Kit (AWS CDK). CloudWatch logs and metrics for Step Functions, Lambda, and AWS Glue provide near real-time visibility into data pipeline progress and performance.
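As a small illustration of that consistent tagging, the sketch below applies tags centrally with AWS CDK; the stack, tag keys, and values are illustrative.

```python
# Sketch of centrally tagging a data lake stack with AWS CDK.
from aws_cdk import App, Stack, Tags

app = App()
etl_stack = Stack(app, "InsuranceDataLakeEtl")  # stands in for any Guidance stack

Tags.of(etl_stack).add("environment", "dev")
Tags.of(etl_stack).add("application", "insurance-data-lake")

app.synth()
```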
Security
Public access to S3 buckets is blocked, encryption in transit is required, and server-side encryption with AWS Key Management Service (AWS KMS) secures data at rest. Access to all S3 buckets is logged in a dedicated access log bucket to support permission review and maintenance. Built-in data masking and hashing transforms in AWS Glue jobs protect sensitive data, and regularly scheduled, automated execution of data pipelines reduces the risk of manual errors and unauthorized access.
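The sketch below expresses those S3 controls with AWS CDK; the bucket logical names are illustrative.

```python
# Sketch of S3 buckets with public access blocked, AWS KMS encryption,
# TLS enforced, and centralized server access logging.
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3

app = App()
stack = Stack(app, "DataLakeStorage")

access_logs = s3.Bucket(
    stack,
    "AccessLogBucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
    encryption=s3.BucketEncryption.KMS_MANAGED,
    enforce_ssl=True,                        # reject requests without TLS
)

s3.Bucket(
    stack,
    "CollectBucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
    encryption=s3.BucketEncryption.KMS,      # server-side encryption with AWS KMS
    enforce_ssl=True,
    server_access_logs_bucket=access_logs,   # centralize access logging
)

app.synth()
```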
Reliability
The inherent durability and availability of Amazon S3, which stores data across multiple Availability Zones, and DynamoDB, which automatically replicates data across three Availability Zones, enhance reliability. Amazon S3 versioning preserves, retrieves, and restores every version of objects, while DynamoDB deletion protection safeguards production environments. Additionally, CodePipeline and infrastructure as code enable easy resource replication across multiple Regions and accounts.
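A minimal sketch of these reliability settings in AWS CDK is shown below: versioning on a data bucket and deletion protection on a DynamoDB table. Resource names and keys are illustrative.

```python
# Sketch of S3 versioning and DynamoDB deletion protection with AWS CDK.
from aws_cdk import App, Stack
from aws_cdk import aws_dynamodb as dynamodb
from aws_cdk import aws_s3 as s3

app = App()
stack = Stack(app, "DataLakeReliability")

s3.Bucket(stack, "CleanseBucket", versioned=True)  # preserve and restore object versions

dynamodb.Table(
    stack,
    "EtlJobAuditTable",
    partition_key=dynamodb.Attribute(
        name="execution_id", type=dynamodb.AttributeType.STRING
    ),
    billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
    deletion_protection=True,                      # guard production metadata tables
)

app.synth()
```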
Performance Efficiency
Optimized AWS Glue jobs minimize data processing units (DPU) hours consumed, and efficient Amazon S3 storage for Cleanse and Consume layers enables faster data scans and queries. DynamoDB efficiently stores data lineage, data quality results, job audit data, lookup transform data, and tokenized source data, and provides scalability and low-latency performance. The serverless nature of Athena and AWS Glue provides efficient data access without data movement.
Cost Optimization
Amazon S3 lifecycle policies automatically transition data to Amazon S3 Glacier storage classes, and DynamoDB tables can use on-demand capacity mode and the Standard-Infrequent Access table class as needed. DynamoDB Time to Live (TTL) automatically deletes expired items, and AWS Glue Auto Scaling and Flex execution right-size compute (DPU) usage. Fully managed, serverless services like Amazon S3, AWS Glue, and DynamoDB optimize costs by charging only for consumed resources, without infrastructure maintenance overhead.
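The sketch below shows these cost controls in AWS CDK; the retention period, attribute names, and resource names are illustrative assumptions.

```python
# Sketch of an S3 lifecycle transition to Glacier plus a DynamoDB table using
# on-demand capacity, the Standard-IA table class, and TTL.
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_dynamodb as dynamodb
from aws_cdk import aws_s3 as s3

app = App()
stack = Stack(app, "DataLakeCostControls")

s3.Bucket(
    stack,
    "CollectBucket",
    lifecycle_rules=[
        s3.LifecycleRule(
            transitions=[
                s3.Transition(
                    storage_class=s3.StorageClass.GLACIER,
                    transition_after=Duration.days(90),  # assumed retention window
                )
            ]
        )
    ],
)

dynamodb.Table(
    stack,
    "DataQualityResultsTable",
    partition_key=dynamodb.Attribute(
        name="execution_id", type=dynamodb.AttributeType.STRING
    ),
    billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,           # on-demand capacity mode
    table_class=dynamodb.TableClass.STANDARD_INFREQUENT_ACCESS,  # Standard-IA table class
    time_to_live_attribute="expire_at",                          # assumed TTL attribute
)

app.synth()
```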
Sustainability
The efficient Parquet file format for data storage in the Cleanse and Consume S3 buckets reduces the energy impact of querying data. The serverless design and on-demand capacity mode of DynamoDB minimize the carbon footprint compared to on-premises or provisioned database servers. Lambda functions using AWS Graviton processors are more energy efficient than comparable workloads on x86-based processors. Fully managed, serverless services help ensure the data lake only consumes resources when needed, minimizing environmental impact.
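A minimal sketch of running a trigger function on Graviton (ARM) with AWS CDK follows; the runtime, handler, and asset path are assumptions.

```python
# Sketch of a Lambda function configured for the ARM_64 (Graviton) architecture.
from aws_cdk import App, Stack
from aws_cdk import aws_lambda as lambda_

app = App()
stack = Stack(app, "DataLakeTriggers")

lambda_.Function(
    stack,
    "EtlTriggerFunction",
    runtime=lambda_.Runtime.PYTHON_3_12,
    architecture=lambda_.Architecture.ARM_64,             # Graviton-based
    handler="index.lambda_handler",
    code=lambda_.Code.from_asset("lambda/etl_trigger"),   # assumed local asset path
)

app.synth()
```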
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.