AWS for Industries

Simplify Prior Authorization in Healthcare with AWS and HL7 FHIR

Prior authorization is a process for obtaining approval from a health insurer or plan, which may be required before you receive a health care service, treatment plan, prescription drug, or durable medical equipment in order for it to be covered by your plan. Your health insurance company uses a prior authorization requirement as a way of keeping health care cost-effective, safe, necessary, and appropriate for each patient. However, the process of requesting and receiving prior authorizations can be slow and inefficient, often leading to treatment delays and standing as an obstacle between patients and the care they need. Prior authorizations are often submitted by fax or through payor-specific portals, and usually require manual entry of the relevant information. This in turn requires manual transcription on the payor side, potentially adding significant time and cost before a decision is made.

Healthcare data interoperability can offer frictionless data exchange among the various stakeholders in the healthcare community, such as payors, providers, and vendors, and can be a mechanism to address challenges related to prior authorization. Direct submission of prior authorization requests from an electronic health record (EHR) not only reduces costs for both providers and payors but also results in faster prior authorization decisions. The Da Vinci Project created an implementation guide to address this challenge. Da Vinci is working to accelerate the adoption of HL7 Fast Healthcare Interoperability Resources (HL7® FHIR®) as the standard to support and integrate value-based care (VBC) data exchange across communities. But this healthcare data interoperability creates an additional challenge: connecting and coordinating the exchange of data among different information systems, whether devices or applications, within and across organizational boundaries. AWS provides a cloud-enabled platform with modern, scalable architectures and microservices to enable true healthcare data portability. With a broad offering of container orchestrators, AWS lets you run your containers regardless of your choice of tools or APIs.

In this blog, we demonstrate how to use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate, a serverless compute engine for containers, to deploy part of the Da Vinci implementation for healthcare data interoperability.

In this two-part series, we share a sample architecture for deploying Da Vinci on AWS as a scalable, secure platform. In part two, we will also share ideas on how to drive innovation from healthcare data using AWS machine learning and analytics services.

Prerequisites

To get the most out of this blog post, you will need an AWS account and a Docker build environment. Before diving into the solution, we also recommend familiarizing yourself with the technologies and standards used throughout this post, such as HL7 FHIR, the Da Vinci implementation guides, Amazon ECS, and AWS Fargate.

Solution overview

The Da Vinci community has developed reference implementations for FHIR interoperability, and in this blog post, we discuss one such use case: prior authorization with Coverage Requirements Discovery (CRD). The prior authorization implementation enables direct submission of prior authorization requests to payors, resulting in lower costs and faster processing of decisions. CRD defines a workflow that allows payors to provide information about coverage requirements to healthcare providers. Combining prior authorization with CRD further increases efficiency and improves patient outcomes.

Figure 1 – High-level architecture of communication between CRD and prior authorization

In this blog post, we focus on deploying the CRD server, which acts as a healthcare payor information system, by leveraging the AWS container platform. The following diagram shows how Amazon ECR, Amazon ECS with AWS Fargate, Amazon S3, Amazon DynamoDB, and Amazon CloudWatch work together to deliver a resilient, secure architecture.

Architecture

Figure 2 – Architecture diagram

The architecture diagram showcases the various components of the solution:

  1. The container image for CRD is stored in Amazon ECR.
  2. Amazon ECS with AWS Fargate is used as serverless compute for the CRD container.
  3. Required IAM roles with access permissions are created so the server can write to the Amazon DynamoDB table and Amazon S3.
  4. Container logs are streamed to Amazon CloudWatch for logging and monitoring.
  5. The CRD server is fronted with an Application Load Balancer for secure load distribution, offering a scalable, secure way to communicate with the prior-auth service.
  6. Optionally, Amazon API Gateway can be used to serve the API requests.

Instructions to build the solution

We are building Docker container images from the Da Vinci CRD reference implementation, to which we add the code changes required to write to Amazon S3 and an Amazon DynamoDB table.

To integrate with the AWS environment, we modified build.gradle to add the required AWS SDK dependencies. AWS offers SDKs for many languages; we are using the AWS SDK for Java here.

To write FHIR resources to the DynamoDB table, we modified the code as shown in the sample below. You can use an AWS SDK to integrate with other AWS services as your requirements dictate. For more information, see the AWS SDK for Java Documentation.

  1. Add code changes to store data on Amazon DynamoDB
    Sample code changes:
import java.util.HashMap;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;

// Write the FHIR resource path (url) to the "davinci" table as the cdsconnecturl attribute
HashMap<String, AttributeValue> item_values = new HashMap<String, AttributeValue>();
item_values.put("cdsconnecturl", new AttributeValue(url));
AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();
ddb.putItem("davinci", item_values);
  2. Build container image
    After the code changes are in place, use the docker build command to build the container image. The Dockerfile can be found in the Da Vinci Git repository. Sample command:
sudo docker build -t crd:latest .
  3. Push to Amazon ECR
    Sample command:
sudo docker tag <Image ID> <AWS accountID>.dkr.ecr.us-east-1.amazonaws.com/crd
sudo docker push <AWS accountID>.dkr.ecr.us-east-1.amazonaws.com/crd
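
Before the push will succeed, the Docker CLI must be authenticated to your private Amazon ECR registry and the target repository must exist. A minimal sketch, assuming the AWS CLI is configured for us-east-1 and a repository named crd:

# Authenticate the Docker CLI to your private ECR registry (us-east-1 assumed)
aws ecr get-login-password --region us-east-1 | sudo docker login --username AWS --password-stdin <AWS accountID>.dkr.ecr.us-east-1.amazonaws.com

# Create the target repository if it does not already exist
aws ecr create-repository --repository-name crd --region us-east-1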

Instructions for deployment

Once the CRD container image is pushed to Amazon ECR, we will run our Amazon ECS cluster using AWS Fargate. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

  1. Create an Amazon ECS cluster
    To spin up an Amazon ECS cluster, a logical grouping of tasks and services, follow the instructions in Amazon ECS clusters. Below is the output of our Amazon ECS cluster for the CRD container. Sample AWS CLI commands for deployment steps 1 through 3 are shown after figure 5.
  2. Create a task definition for our CRD container
    A task definition is required to run Docker containers in Amazon ECS. Because we are defining one CRD container in our task definition, we will include our Amazon ECR container URI in the parameter list. We define the port mapping in our task definition because we plan to expose our CRD container on port 8090. For further instructions, see Amazon ECS task definitions. Snapshots of our CRD container task definition and container definition are shown in figures 3 and 4.

    Figure 3 – Task definition

    Figure 4 – Container definition

    Create the Amazon DynamoDB table referenced by the code change in step 1 of the “Instructions to build the solution” section. Amazon DynamoDB provides the data representation and query patterns necessary for a FHIR data repository. Create an Amazon S3 bucket to serve as a clinical data store for your patient information. Required IAM roles are created so our Amazon ECS cluster can access the Amazon DynamoDB table and Amazon S3 bucket (see the CLI sketch after figure 5).

  3. Create an Amazon ECS service
    An Amazon ECS service enables you to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster. If any of your tasks fail or stop for any reason, the Amazon ECS service scheduler launches another instance of your task definition to replace it and maintain the desired number of tasks in the service. We are running one CRD task for our service, but to run it at scale, you can specify the number of tasks in the service configuration. We have also fronted the service with an Application Load Balancer in the configuration. If your use case needs metering and throttling, you can enable Amazon API Gateway by following the instructions in the blog Access Private applications on AWS Fargate using Amazon API Gateway PrivateLink. Our CRD container is now ready to serve traffic on port 8090, and the Amazon ECS container logs are streamed to Amazon CloudWatch. The sample logs are shown below. You can follow steps 1 through 3 in “Instructions for deployment” to deploy the prior authorization Docker container with its configuration settings.

    Figure 5 – CloudWatch log
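
If you prefer to script the deployment instead of using the console, the steps above can also be run with the AWS CLI. The following is a minimal sketch, not the exact configuration used in this post: the cluster name, Region, log group, task sizes, and IAM role names are illustrative assumptions, and you must substitute your own account ID and image URI.

# Step 1: create the Amazon ECS cluster
aws ecs create-cluster --cluster-name crd-cluster --region us-east-1

# Step 2: register a Fargate task definition that exposes the CRD container on port 8090
aws logs create-log-group --log-group-name /ecs/crd --region us-east-1
cat > crd-task-def.json <<'EOF'
{
  "family": "crd",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "1024",
  "memory": "2048",
  "executionRoleArn": "arn:aws:iam::<AWS accountID>:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::<AWS accountID>:role/crd-task-role",
  "containerDefinitions": [
    {
      "name": "crd",
      "image": "<AWS accountID>.dkr.ecr.us-east-1.amazonaws.com/crd:latest",
      "portMappings": [{ "containerPort": 8090, "protocol": "tcp" }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/crd",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "crd"
        }
      }
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://crd-task-def.json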
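
The data stores described in step 2 can be created the same way. This sketch assumes the table name davinci and partition key cdsconnecturl, matching the code change in the build section; the bucket name is a placeholder and must be globally unique.

# Create the DynamoDB table that stores the FHIR resource pointers
aws dynamodb create-table \
  --table-name davinci \
  --attribute-definitions AttributeName=cdsconnecturl,AttributeType=S \
  --key-schema AttributeName=cdsconnecturl,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1

# Create the S3 bucket used as the clinical data store
aws s3 mb s3://<your-clinical-data-store-bucket> --region us-east-1

# Note: the task role (crd-task-role above) needs permissions such as
# dynamodb:PutItem on the table and s3:PutObject on the bucket.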
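
Finally, the Amazon ECS service from step 3 can be created as follows. The subnet, security group, and target group values are placeholders; this sketch assumes an Application Load Balancer target group forwarding to port 8090 already exists.

# Step 3: create the service on Fargate behind the Application Load Balancer
aws ecs create-service \
  --cluster crd-cluster \
  --service-name crd-service \
  --task-definition crd \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[<subnet-id>],securityGroups=[<security-group-id>],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=crd,containerPort=8090" \
  --region us-east-1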

Result

Access CRD Page

After deployment, you can now access the CRD page with the IP address and port 8090, as illustrated in figure 6.

Figure 6 – CRD page

Click the endpoints to retrieve sample data from the repository. Sample output for the FHIR R4 endpoint is shown in figure 7.


Figure 7 – Sample output for FHIR R4 endpoint
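
You can also query the server from the command line instead of the browser. The discovery path below is an assumption based on the CDS Hooks convention of exposing services at {baseURL}/cds-services; check the CRD reference implementation documentation for the exact paths your build exposes.

# Load the CRD landing page
curl http://<CRD server IP>:8090/

# List the advertised CDS services (path is an assumption based on the CDS Hooks discovery convention)
curl http://<CRD server IP>:8090/r4/cds-services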

Contents stored in Amazon DynamoDB

As discussed earlier, we have extended the CRD reference architecture to store the FHIR resources in Amazon S3 and the resource path as a unique ID in Amazon DynamoDB. You can see sample output in figure 8.

Figure 8 – Sample output for contents stored in Amazon DynamoDB
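
If you prefer the command line to the console, a quick way to verify the writes is to scan the table. This assumes the table name davinci used in the code change above.

# List the items written by the CRD server (scan reads the whole table, so use it only on small tables)
aws dynamodb scan --table-name davinci --region us-east-1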

Prior authorization request/response

Now let’s run some sample prior authorization requests.

Submit a claim

curl -X POST -H "Content-Type: application/json" -d @src/test/resources/bundle-prior-auth.json http://<service prior auth IP>:9000/fhir/Claim/\$submit

Figure 9 – Sample output of prior authorization request: submitting a claim

Find all claims submitted by a patient
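
The prior authorization service exposes a FHIR endpoint, so a standard FHIR search can be used to list claims. The search parameter below is an illustrative assumption; consult the reference implementation documentation for the parameters it actually supports.

# Hypothetical FHIR search for claims that reference a given patient
curl -X GET "http://<service prior auth IP>:9000/fhir/Claim?patient=Patient/<patient id>"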

Figure 10 – Sample output of prior authorization request: find all claims submitted by a patient

Summary

In this post, we demonstrated how to deploy the Da Vinci reference implementation for Coverage Requirements Discovery and a prior authorization use case by leveraging the AWS container platform. We used Amazon ECS with AWS Fargate and Amazon ECR to deploy the microservices architecture. Amazon DynamoDB and Amazon S3 served as the data repository, and you can modify this architecture to build your own reference implementation on the AWS global infrastructure.

Stay tuned for part two of this two-part blog series, where we will extend Amazon S3 to offer a clinical data store and leverage AWS analytics and machine learning services to derive insights from clinical data.

Sonali Sahu

Sonali Sahu is an AI/ML Solutions Architect at Amazon Web Services. She is a passionate technophile and enjoys working with enterprise and healthcare customers to solve complex problems using innovation. Her core areas of focus are artificial intelligence and machine learning for intelligent document processing.

Wilson To

Wilson To obtained his PhD in Pathology from the University of California, Davis, where he led a number of scientific investigations and published discoveries in microcirculatory systems related to vascular diseases using computer-assisted intravital microscopy. He has led teams across startup and corporate environments, receiving international recognition for his global health efforts. Wilson joined Amazon Web Services in October 2016 to lead product management and strategic initiatives, and currently leads business development efforts across the AWS worldwide healthcare practice.