Guidance for Data Lakes on AWS
Overview
How it works
These technical details include an architecture diagram that illustrates how to use this solution effectively, showing the key components and their interactions and walking through the architecture's structure and functionality step by step.
Deploy with confidence
Ready to deploy? Review the sample code on GitHub for detailed deployment instructions, then deploy as-is or customize it to fit your needs.
Well-Architected Pillars
The architecture diagram above is an example of a solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of these best practices as possible.
Operational Excellence
Amazon CloudWatch provides comprehensive insight into the performance and health of the architecture through operational logging from every component. Use Amazon S3 server access logging to keep detailed records of requests made to your data lake, so you can conduct security and access audits and understand your Amazon S3 billing. Amazon DynamoDB tracks the status of your data lake pipeline jobs, enabling you to swiftly identify and resolve any errors that arise.
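As a minimal sketch of the logging setup, the boto3 snippet below enables S3 server access logging on a data lake bucket. The bucket names are hypothetical placeholders, and the target log bucket must already grant Amazon S3's log delivery service permission to write to it.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names; substitute the data lake bucket and a
# separate log-destination bucket from your own deployment.
DATA_LAKE_BUCKET = "my-data-lake-bucket"
ACCESS_LOG_BUCKET = "my-data-lake-access-logs"

# Enable S3 server access logging so every request to the data lake
# bucket is recorded under the given prefix in the log bucket.
s3.put_bucket_logging(
    Bucket=DATA_LAKE_BUCKET,
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": ACCESS_LOG_BUCKET,
            "TargetPrefix": "access-logs/",
        }
    },
)
```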
Security
AWS Key Management Service (AWS KMS) safeguards your data lake by encrypting all data at rest with customer managed keys. Data in transit is protected with TLS 1.2 encryption. AWS Identity and Access Management (IAM) enables you to manage identities and access to your AWS services and resources with precision, following the principle of least privilege.
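The following sketch shows, under assumed bucket and key names, how these two protections might be configured with boto3: default encryption at rest with a customer managed KMS key, and a bucket policy that denies any request not made over TLS.

```python
import json

import boto3

s3 = boto3.client("s3")

BUCKET = "my-data-lake-bucket"  # hypothetical bucket name
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"  # placeholder

# Encrypt all new objects at rest with a customer managed KMS key.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)

# Deny any request that does not arrive over TLS, protecting data in transit.
s3.put_bucket_policy(
    Bucket=BUCKET,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }),
)
```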
Reliability
Amazon S3 serves as the highly durable and available storage layer. Data pipelines are triggered through Amazon EventBridge, which sends messages to Amazon SQS to initiate pipeline jobs. Failed messages are moved to a dead-letter queue for debugging and reprocessing. In case of a Regional failure, the Guidance can be redeployed to another AWS Region or account, ensuring flexibility and resilience.
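A minimal boto3 sketch of the dead-letter queue pattern, using hypothetical queue names: messages that fail processing three times are redirected to the dead-letter queue, where they can be inspected and reprocessed.

```python
import json

import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue first and look up its ARN.
dlq_url = sqs.create_queue(QueueName="pipeline-jobs-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: after 3 failed receives, a message is moved to the
# dead-letter queue for debugging and reprocessing.
sqs.create_queue(
    QueueName="pipeline-jobs",
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "3",
        })
    },
)
```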
Performance Efficiency
This Guidance optimizes performance by using AWS Lambda for lightweight tasks and AWS Glue for heavy data transformations. AWS Glue, a serverless data integration service, simplifies and accelerates data preparation while reducing costs; it runs transformation jobs on Apache Spark for scalable, distributed execution. AWS Step Functions orchestrates the AWS Glue jobs, coordinating this distributed processing to enhance the data pipeline's performance.
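The sketch below, with a hypothetical Glue job name and IAM role ARN, shows one way such orchestration can be expressed: an Amazon States Language definition whose `glue:startJobRun.sync` task starts a Glue job and waits for it to finish before the workflow continues.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Amazon States Language definition: run a Glue job and wait for it to
# complete (.sync); route any error to an explicit failure state.
definition = {
    "StartAt": "RunGlueTransform",
    "States": {
        "RunGlueTransform": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "data-lake-transform"},  # hypothetical job
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "JobFailed"}],
            "End": True,
        },
        "JobFailed": {"Type": "Fail", "Cause": "Glue job failed"},
    },
}

sfn.create_state_machine(
    name="data-lake-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsGlueRole",  # placeholder
)
```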
Cost Optimization
This Guidance uses serverless AWS services, reducing the total cost of ownership and enabling scaling based on demand. Amazon S3 serves as the storage layer, offering cost-efficient storage classes with automated lifecycle management for diverse data access patterns. By shifting infrastructure management to AWS, the serverless approach lets developers focus on code, further lowering costs and improving efficiency.
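As an illustration with placeholder names, the boto3 call below applies a lifecycle configuration that tiers raw data into cheaper storage classes over time, one plausible shape for the automated lifecycle management described above.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical lifecycle: move raw data to Infrequent Access after 30
# days, archive it to Glacier after 90, and expire it after one year.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-data-lake-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-raw-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```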
Sustainability
Serverless services in this Guidance scale with demand, maximizing energy efficiency and minimizing idle compute resources. Amazon S3 applies data lifecycle policies and stores ingested data in Parquet format. This compressed, columnar format reduces the amount of data scanned per query, further decreasing the compute resources the workload needs. Together, the serverless architecture and efficient data storage optimize overall performance and resource utilization.
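A small sketch of this storage pattern, with a toy dataset and a placeholder bucket name: records are written as snappy-compressed Parquet before being uploaded to the data lake, so downstream query engines read only the columns each query needs.

```python
import boto3
import pandas as pd

# Toy dataset standing in for ingested records.
df = pd.DataFrame({
    "event_id": range(1000),
    "event_type": ["click"] * 1000,
    "value": [round(i * 0.1, 2) for i in range(1000)],
})

# Columnar, compressed Parquet reduces the data scanned per query and
# the compute consumed by downstream engines.
df.to_parquet("events.parquet", engine="pyarrow", compression="snappy")

# Upload to the data lake bucket (name is a placeholder).
boto3.client("s3").upload_file(
    "events.parquet", "my-data-lake-bucket", "curated/events.parquet"
)
```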