Guidance for Patient Outcome Prediction on AWS
Overview
How it works
These technical details include an architecture diagram that illustrates how this Guidance works: it shows the key components and their interactions, walking through the architecture's structure and functionality step by step.
Well-Architected Pillars
The architecture diagram above is an example of a Guidance designed with Well-Architected best practices in mind. To keep your own deployment Well-Architected, apply as many of the framework's best practices as possible.
Operational Excellence
HealthLake extracts meaning from unstructured data with NLP and supports interoperable standards such as the Fast Healthcare Interoperability Resources (FHIR) format. This provides broad extensibility across data sources that are relevant for healthcare and life science users.
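As a minimal sketch of working with HealthLake's FHIR interoperability from code, the example below checks a data store's status and starts a bulk import of FHIR R4 resources staged in S3 using boto3. The data store ID, S3 URIs, IAM role, and KMS key are placeholders, not values from this Guidance's deployment.

```python
# Minimal sketch: loading FHIR R4 resources into an Amazon HealthLake data store
# with boto3. The data store ID, S3 URIs, KMS key, and IAM role ARN below are
# placeholders -- substitute the values from your own deployment.
import boto3

healthlake = boto3.client("healthlake")

# Confirm the data store is active before importing.
datastore = healthlake.describe_fhir_datastore(DatastoreId="<datastore-id>")
print(datastore["DatastoreProperties"]["DatastoreStatus"])

# Import newline-delimited JSON (NDJSON) FHIR resources staged in S3.
job = healthlake.start_fhir_import_job(
    JobName="patient-outcome-import",
    DatastoreId="<datastore-id>",
    InputDataConfig={"S3Uri": "s3://<input-bucket>/fhir/"},
    JobOutputDataConfig={
        "S3Configuration": {
            "S3Uri": "s3://<output-bucket>/import-logs/",
            "KmsKeyId": "<kms-key-arn>",
        }
    },
    DataAccessRoleArn="arn:aws:iam::<account-id>:role/<healthlake-import-role>",
)
print(job["JobId"], job["JobStatus"])
```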
Security
CloudFront and AWS WAF help ensure secure access to the web app by permitting only allow-listed IP sets. Macie automates the discovery of potentially sensitive data to help protect data privacy and security. All roles are defined with least-privilege access, and all communications between services stay within the customer account. The infrastructure is based in a VPC where API and ML workloads run in private subnets to reduce the risk of intrusion. All S3 buckets encrypt data, are private, and block public access. The data catalog in AWS Glue has encryption enabled, and all data written to Amazon S3 from SageMaker is encrypted.
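A minimal sketch of the S3 posture described above, applying default encryption and blocking all public access with boto3; the bucket name and KMS key are placeholder assumptions.

```python
# Minimal sketch: enforce default encryption and block all public access on an
# S3 bucket. The bucket name and KMS key ARN are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "<guidance-data-bucket>"

# Block every form of public access at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Encrypt all objects at rest by default with a customer-managed KMS key.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "<kms-key-arn>",
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```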
Reliability
Multiple services help enable a reliable architecture for this Guidance. For example, CloudWatch alarms track API events in CloudTrail, and backend Lambda functions log errors to log streams. These services help you stay aware of potential issues so you can fix them as they arise. Additionally, SageMaker endpoints can be configured to scale for increased workload demand.
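The sketch below shows one way to configure that endpoint scaling with Application Auto Scaling, tracking invocations per instance so capacity follows prediction traffic. The endpoint and variant names, capacity bounds, and target value are placeholder assumptions rather than settings from this Guidance.

```python
# Minimal sketch: scale a SageMaker endpoint variant with Application Auto
# Scaling. Endpoint/variant names, capacity bounds, and the target value are
# placeholders to adjust for your workload.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "endpoint/<endpoint-name>/variant/<variant-name>"

# Register the production variant's instance count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Track invocations per instance so the endpoint scales with demand.
autoscaling.put_scaling_policy(
    PolicyName="patient-outcome-endpoint-scaling",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)
```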
Performance Efficiency
By using serverless technologies, you provision only the resources you actually use. Each AWS Glue job provisions a Spark cluster on demand to transform data and releases those resources when the job completes.
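As a minimal sketch of that on-demand pattern, the example below starts a Glue job run and polls its status with boto3; capacity exists only for the duration of the run. The job name and worker settings are placeholder assumptions.

```python
# Minimal sketch: run an AWS Glue job on demand. The Spark cluster is
# provisioned for this run and released when it completes. The job name and
# worker settings are placeholders.
import boto3

glue = boto3.client("glue")
job_name = "<transform-fhir-to-features>"

run = glue.start_job_run(
    JobName=job_name,
    WorkerType="G.1X",    # capacity is allocated only for this run
    NumberOfWorkers=2,
)

# Check the run state; billing stops once the run reaches a terminal state.
status = glue.get_job_run(JobName=job_name, RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])
```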
Cost Optimization
Lambda automatically de-provisions resources when you no longer need them, so you don't pay for idle infrastructure. For model experimentation, you can start and stop SageMaker notebook environments on an as-needed basis.
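A minimal sketch of that start/stop workflow for a SageMaker notebook instance follows; the instance name is a placeholder.

```python
# Minimal sketch: start a SageMaker notebook instance for an experimentation
# session and stop it afterward so you only pay while it runs. The instance
# name is a placeholder.
import boto3

sagemaker = boto3.client("sagemaker")
notebook = "<experimentation-notebook>"

# Start the notebook at the beginning of an experimentation session...
sagemaker.start_notebook_instance(NotebookInstanceName=notebook)

# ...and stop it when the session ends to avoid paying for idle compute.
sagemaker.stop_notebook_instance(NotebookInstanceName=notebook)
```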
Sustainability
You can minimize the environmental impact of backend services by using fully managed services, dynamic scaling within all serverless services, and Lambda for custom functionality. The only components in this architecture that you need to maintain and monitor manually are SageMaker notebooks, which you start and stop during model experimentation. Aside from those notebooks, all other components in the architecture can be automated, reducing the number of resources you need.