Guidance for Patient Outcome Prediction on AWS

Overview

This Guidance helps life sciences customers gain a comprehensive understanding of the patient care journey through a Patient Outcome Predictor (POP) application that applies artificial intelligence and machine learning (AI/ML) to de-identified, longitudinal patient data. POP uncovers patterns in target patients' medical histories and surfaces insights about patient outcomes, such as disease progression, to support early identification of patients eligible for treatments, data-driven care management decisions, and timely interventions. Patient journey predictions made by the algorithm can be linked back to providers, so life sciences organizations can use POP to improve their customer segmentation and targeting with real-world patient journey insights that enable successful product commercialization.

How it works

This section features an architecture diagram that illustrates how to use this Guidance effectively. The diagram shows the key components and their interactions, providing a step-by-step overview of the architecture's structure and functionality.

Well-Architected Pillars

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many of those best practices as possible.

HealthLake extracts meaning from unstructured data with natural language processing (NLP) and supports interoperable standards such as the Fast Healthcare Interoperability Resources (FHIR) format. This provides broad extensibility across the data sources that are relevant for healthcare and life sciences users.
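As a minimal sketch of this part of the workflow, the following example creates a FHIR R4 data store and bulk-imports de-identified records from Amazon S3 using the boto3 HealthLake client. The data store name, S3 URIs, KMS key, and IAM role below are hypothetical placeholders, not values defined by this Guidance.

```python
import boto3

healthlake = boto3.client("healthlake", region_name="us-east-1")

# Create a FHIR R4 data store to hold the longitudinal patient records.
response = healthlake.create_fhir_datastore(
    DatastoreName="pop-patient-datastore",  # hypothetical name
    DatastoreTypeVersion="R4",
)
datastore_id = response["DatastoreId"]

# Wait until the data store reaches ACTIVE status before importing
# (polling with describe_fhir_datastore is omitted for brevity).

# Bulk-import de-identified FHIR resources from S3 into the data store.
healthlake.start_fhir_import_job(
    JobName="pop-initial-load",
    InputDataConfig={"S3Uri": "s3://example-bucket/fhir-input/"},
    JobOutputDataConfig={
        "S3Configuration": {
            "S3Uri": "s3://example-bucket/import-output/",
            "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
        }
    },
    DatastoreId=datastore_id,
    DataAccessRoleArn="arn:aws:iam::111122223333:role/HealthLakeImportRole",
)
```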

Read the Operational Excellence whitepaper 

CloudFront and AWS WAF help ensure secure access to the web app by admitting only allow-listed IP sets. Macie automates the discovery of potentially sensitive data to help maintain data privacy and security. All roles are defined with least-privilege access, and all communications between services stay within the customer account. The infrastructure is based in a VPC where API and ML workloads run in private subnets to reduce the risk of intrusion. All S3 buckets encrypt data, are private, and block public access. The data catalog in AWS Glue has encryption enabled, and all data written to Amazon S3 from SageMaker is encrypted.
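The S3 posture described above can be enforced programmatically. Here is a minimal sketch, assuming a hypothetical bucket name, that blocks all public access and enables default server-side encryption; both are standard S3 API calls.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-pop-data-bucket"  # hypothetical bucket name

# Block all forms of public access on the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enforce default server-side encryption for all new objects.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)
```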

Read the Security whitepaper 

Multiple services help to enable a reliable architecture for this Guidance. For example, CloudWatch alarms track API events in CloudTrail, and backend Lambda functions log errors to log streams. These services help you stay aware of potential issues so you can fix them as they arise. Additionally, SageMaker endpoints can be configured to scale for increased workload demand.
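As a sketch of the scaling configuration, the following example registers a SageMaker endpoint variant with Application Auto Scaling and adds a target-tracking policy on invocations per instance. The endpoint name, variant name, capacity bounds, and target value are hypothetical; tune them to your workload.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical endpoint and variant names.
resource_id = "endpoint/pop-prediction-endpoint/variant/AllTraffic"

# Register the endpoint variant's instance count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Track ~100 invocations per instance, scaling out as demand increases.
autoscaling.put_scaling_policy(
    PolicyName="pop-invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
    },
)
```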

Read the Reliability whitepaper 

By using serverless technologies, you provision only the resources you actually use. Each AWS Glue job provisions a Spark cluster on demand to transform data and de-provisions those resources when the job completes, as sketched below.
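The following sketch starts a Glue job run and checks its state; the job name, arguments, and worker settings are hypothetical, and the job definition itself would be created elsewhere (for example, in the Glue console or through infrastructure as code).

```python
import boto3

glue = boto3.client("glue")

# Start a run; Glue provisions Spark capacity for this run only.
run = glue.start_job_run(
    JobName="pop-patient-data-transform",  # hypothetical job name
    Arguments={
        "--input_path": "s3://example-bucket/raw/",
        "--output_path": "s3://example-bucket/curated/",
    },
    WorkerType="G.1X",
    NumberOfWorkers=2,
)

# Once the run reaches a terminal state, the capacity is released
# and billing for it stops.
status = glue.get_job_run(
    JobName="pop-patient-data-transform",
    RunId=run["JobRunId"],
)
print(status["JobRun"]["JobRunState"])
```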

Read the Performance Efficiency whitepaper 

Lambda will automatically de-provision resources when you no longer need them, so that you don’t pay for idle infrastructure. From a model experimentation perspective, you can start and stop SageMaker notebook environments on an as-needed basis. 
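For example, a notebook instance can be started for a working session and stopped afterward so you are not billed for idle compute. This sketch assumes a hypothetical notebook instance name and uses the standard SageMaker start/stop APIs.

```python
import boto3

sagemaker = boto3.client("sagemaker")
notebook = "pop-experimentation-notebook"  # hypothetical instance name

# Start the notebook instance for an experimentation session...
sagemaker.start_notebook_instance(NotebookInstanceName=notebook)

# ...and stop it when the session ends to avoid paying for idle compute.
sagemaker.stop_notebook_instance(NotebookInstanceName=notebook)
```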

Read the Cost Optimization whitepaper 

You can minimize the environmental impact of backend services by using fully managed services, dynamic scaling within all serverless services, and Lambda for custom functionality. The only components in this architecture that you need to maintain and monitor manually are SageMaker notebooks, which you must start and stop during model experimentation. Aside from the notebooks, all other components in the architecture can be automated, reducing the number of resources you need to manage.

Read the Sustainability whitepaper 

Disclaimer

The sample code, software libraries, command line tools, proofs of concept, templates, and other related technology (including any of the foregoing that are provided by our personnel) are provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production-grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.