This Guidance helps payors offer a centralized medical records system to patients, allowing patients to manage their healthcare information in a consolidated repository. Patients control access to their medical records (including diagnostic activities, such as lab tests and procedures) and can choose which providers are authorized to view their data and for how long. Providers gain a 360-degree view of the patient's health and medical history, leading to improved diagnoses and a reduced need for redundant tests or lab procedures. This lowers costs for both payors and patients. With the patient's authorization, the payor can access patient population data to generate personalized wellness tips for patients and business insights.
Please note: [Disclaimer]
Architecture Diagram

Step 1
Patients will use mobile devices to manage their healthcare data. Patients will input data manually, by uploading photos of medical records, and, in the future, through wearable devices. All data interactions will occur through Amazon API Gateway and AWS Lambda.
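As a minimal sketch of this interaction, the Lambda function below handles an API Gateway proxy-integration event for a hypothetical record-upload request. The event shape follows the standard API Gateway proxy format; the field names (`patientId`, `status`) and the response body are illustrative assumptions, not part of the Guidance's actual API.

```python
import json

def lambda_handler(event, context):
    """Hypothetical Lambda handler behind Amazon API Gateway that accepts
    a patient's record-upload request (proxy-integration event)."""
    body = json.loads(event.get("body") or "{}")
    patient_id = body.get("patientId")
    if not patient_id:
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "patientId is required"}),
        }
    # In the full Guidance, the record would be persisted (for example, the
    # image to Amazon S3 and the metadata to Amazon Neptune); here we simply
    # acknowledge receipt.
    return {
        "statusCode": 200,
        "body": json.dumps({"patientId": patient_id, "status": "received"}),
    }
```

In a deployed stack, API Gateway would invoke this handler for each mobile-app request, with Amazon Cognito (described under the Security pillar) supplying the caller's identity.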
Step 2
Health data from multiple sources will be added to Amazon Neptune.
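Neptune supports the openCypher and Gremlin query languages. As one hedged illustration of how a health record might be linked to a patient in the graph, the helper below builds an openCypher statement; the node labels (`Patient`, `Record`) and edge name (`HAS_RECORD`) are assumptions for this sketch, and a production implementation would pass values as query parameters rather than inlining them.

```python
def patient_record_statement(patient_id, record_id, record_type):
    """Build an illustrative openCypher statement that links a health
    record node to a patient node in Amazon Neptune. Values are inlined
    for readability; real code should use parameterized queries."""
    return (
        f"MERGE (p:Patient {{id: '{patient_id}'}}) "
        f"MERGE (r:Record {{id: '{record_id}', type: '{record_type}'}}) "
        f"MERGE (p)-[:HAS_RECORD]->(r)"
    )
```

The resulting statement could then be submitted to the Neptune cluster's HTTPS openCypher endpoint from the Lambda function that receives the data.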
Step 3
When patients upload pictures of medical records, such as a lab report or diagnosis, the text will be extracted from the image using Amazon Artificial Intelligence (AI) services, and data will be sent to Neptune.
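One of the AI services that fits this step is Amazon Textract, whose `detect_document_text` API returns the detected text as a list of blocks. The helper below collects the LINE-level text from such a response; the sample assumes the standard Textract response shape, while the surrounding call (for example, `boto3.client("textract").detect_document_text(Document={"Bytes": image_bytes})`) is omitted so the sketch runs without AWS credentials.

```python
def extract_lines(textract_response):
    """Collect the text of LINE blocks from an Amazon Textract
    detect_document_text response dictionary."""
    return [
        block["Text"]
        for block in textract_response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    ]
```

The extracted lines would then be parsed into structured fields (test name, result, date) before being written to Neptune.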
Step 4
Patients will grant providers view access to their medical records for a set duration (such as 90 days). A token managed in Amazon DynamoDB will represent the provider's authorization and its expiry.
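A natural way to expire such tokens is DynamoDB's Time to Live (TTL) feature, which deletes items once an epoch-timestamp attribute passes. The sketch below builds an item of that shape and checks validity; the key schema (`patientId`/`providerId`) and attribute names are assumptions for illustration, not the Guidance's actual table design.

```python
import time
import uuid

def build_authorization_item(patient_id, provider_id, days_valid=90):
    """Build a hypothetical DynamoDB item representing a provider's
    time-limited view access. `expiresAt` is an epoch timestamp suitable
    for use as the table's TTL attribute."""
    now = int(time.time())
    return {
        "patientId": patient_id,      # assumed partition key
        "providerId": provider_id,    # assumed sort key
        "token": str(uuid.uuid4()),
        "grantedAt": now,
        "expiresAt": now + days_valid * 24 * 60 * 60,
    }

def is_authorized(item, at_time=None):
    """Check whether an authorization item is still valid at a given time."""
    at_time = int(time.time()) if at_time is None else at_time
    return at_time < item["expiresAt"]
```

Because TTL deletion is asynchronous and can lag, the application should still check `expiresAt` on every read, as `is_authorized` does, rather than relying on the item's absence alone.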
Step 5
The payor will enrich the patient data from other sources. The data may come from the payor’s source systems or from Amazon HealthLake if the payor has one set up.
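Amazon HealthLake stores data as FHIR resources, so enrichment amounts to merging FHIR-style resources from payor sources into the patient's consolidated record. The helper below is a minimal sketch of that merge, de-duplicating by `(resourceType, id)` and preferring the record already in the repository; the conflict-resolution policy is an assumption, and a real integration would query HealthLake through its FHIR REST API.

```python
def enrich_patient_record(base_records, payor_records):
    """Merge FHIR-style resources from payor source systems (or an Amazon
    HealthLake data store) into a patient's record set, de-duplicating by
    (resourceType, id) and keeping the existing record on conflict."""
    merged = {(r["resourceType"], r["id"]): r for r in base_records}
    for r in payor_records:
        merged.setdefault((r["resourceType"], r["id"]), r)
    return list(merged.values())
```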
Step 6
The provider will have access to a secure portal for viewing the patient's authorized 360-degree medical record.
Step 7
Providers will also have the ability to add medical episodes and other relevant information to the patient's data.
Step 8
Data captured in Neptune will be added to the payor’s data lake and used for generating personalized medical tips for the patient.
Step 9
The payor will have access to patient population data in the data lake to generate business insights. Dashboards may be created using Amazon QuickSight.
Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
All services in this architecture emit events and metrics to Amazon CloudWatch that can be used to monitor the individual components of the design. Changes to the infrastructure may be managed using AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), or tools like Terraform. Logs captured by various components may be consolidated into existing services, such as Splunk or Datadog.
-
Security
Neptune can only be created in a virtual private cloud (VPC) and can only be accessed from subnets within the VPC. As a best practice, ensure that the data in Neptune is encrypted at rest. Amazon Cognito supports authentication and authorization for patients and providers.
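Cognito issues JSON Web Tokens (JWTs) whose claims identify the caller. The sketch below decodes a JWT's payload segment so those claims can be inspected; note, loudly, that it performs no signature verification, and production code must verify the token against the user pool's published JWKS before trusting any claim.

```python
import base64
import json

def decode_jwt_claims(token):
    """Decode the payload segment of a JWT (such as an Amazon Cognito ID
    token) WITHOUT verifying its signature. Signature verification against
    the user pool's JWKS is required before trusting these claims."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

With API Gateway, this inspection is usually unnecessary in application code: a Cognito authorizer attached to the API validates the token and exposes the claims in the request context.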
-
Reliability
Neptune offers high durability due to the separation of compute and storage. Six copies of data are stored across three Availability Zones. Compute instances are created across multiple Availability Zones to ensure high availability. The read replicas in the Neptune cluster are candidates for failover—if the primary compute instance fails, one of the replicas takes over the role of the primary instance. Neptune and DynamoDB backups may be automated using AWS Backup.
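As a hedged sketch of automating those backups, the helper below builds the `BackupPlan` input for the AWS Backup `create_backup_plan` call (boto3 `backup` client). The plan name, rule name, vault name, schedule, and retention period are illustrative assumptions; resources would then be attached to the plan with a backup selection.

```python
def patient_records_backup_plan(vault_name="records-vault"):
    """Build the BackupPlan input for AWS Backup's create_backup_plan
    call, with an illustrative daily schedule and 35-day retention."""
    return {
        "BackupPlanName": "patient-records-daily",
        "Rules": [
            {
                "RuleName": "daily-0500-utc",
                "TargetBackupVaultName": vault_name,
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
```

The dictionary would be passed as `boto3.client("backup").create_backup_plan(BackupPlan=patient_records_backup_plan())`, with the Neptune cluster and DynamoDB table ARNs added via `create_backup_selection`.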
-
Performance Efficiency
Neptune decouples compute and storage, allowing customers to select an appropriately-sized compute instance based on the workload. Customers can create up to 15 read replicas and split the read/write load across the primary and replicas, leading to better performance. Neptune has an auto-scaling feature that can increase the database instances in the cluster under heavy load, helping achieve consistent performance from the cluster.
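Neptune read-replica auto-scaling is configured through Application Auto Scaling. The helper below builds the input for its `register_scalable_target` call using the `neptune` service namespace and the `neptune:cluster:ReadReplicaCount` dimension; the cluster identifier and replica bounds are placeholders for this sketch.

```python
def neptune_autoscaling_target(cluster_id, min_replicas=1, max_replicas=15):
    """Build the input for Application Auto Scaling's
    register_scalable_target call to scale Neptune read replicas."""
    return {
        "ServiceNamespace": "neptune",
        "ResourceId": f"cluster:{cluster_id}",
        "ScalableDimension": "neptune:cluster:ReadReplicaCount",
        "MinCapacity": min_replicas,
        "MaxCapacity": max_replicas,
    }
```

A scaling policy (for example, target tracking on CPU utilization) would then be attached to this target so the cluster adds replicas under heavy read load.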
-
Cost Optimization
With serverless technologies, you provision only the resources you use. Neptune automatically scales storage, so customers pay only for the storage they use and do not need to maintain excess storage for future growth. We recommend that customers size the compute instances for the databases to meet their workload's service level agreement (SLA) and avoid paying for excess capacity.
-
Sustainability
Serverless services in this architecture lead to an optimal use of resources. The managed services in the architecture support on-demand scaling, which maximizes resource utilization and reduces the energy needed to run workloads.
Implementation Resources

A detailed guide is provided to experiment with and use within your AWS account. It walks through each stage of building the Guidance, including deployment, usage, and cleanup.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content

[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.