
Accelerated model training and AI assisted annotation of medical images with the NVIDIA Clara Train application development framework on AWS

In May 2020, we released an AWS Quick Start that you can use to deploy a medical imaging model development environment, built around the NVIDIA Clara Train application framework, on the AWS Cloud. Numerous healthcare and life sciences customers, such as Philips and Cerner, trust AWS for their sensitive healthcare workloads. Secure, scalable cloud infrastructure enables data scientists, architects, and medical researchers to focus on building machine learning (ML) workflows and pipelines to assist the medical community.

FDA-approved AI is already helping physicians provide timely, high-quality care in hospitals around the world. Several commercial solutions have established production medical imaging workloads on AWS, including solutions developed by GE Healthcare, Zebra, Arterys, and Heartflow. Despite these successes, we are in the early phases of a revolution in healthcare, in which data and ML pipelines will increasingly help clinicians diagnose, optimize workflows, enhance medical devices, and improve clinical quality.

In the sections below, I introduce the NVIDIA Clara Train application framework, and then demonstrate how to deploy it to address the real-world use case of semantic segmentation of a spleen in a CT scan. The Quick Start deploys the Clara Train SDK using Amazon Elastic Container Service (Amazon ECS) and NVIDIA V100 Tensor Core GPUs with the Amazon EC2 p3.2xlarge instance type.

This blog post is a hands-on companion to the Quick Start that demonstrates the deployment steps, setting up an AWS Cloud9 integrated development environment (IDE), loading a pre-trained model from NVIDIA NGC (a hub for GPU-optimized software), and connecting the Clara Train server to Slicer (an open source medical image informatics tool). I will also demonstrate how to run a model training job with the Clara Train command line tools.

NVIDIA Clara Overview

NVIDIA Clara is a set of tools for accelerating both medical imaging analysis and genomics profiling. This post focuses on medical imaging. The Clara application framework for medical imaging is a containerized solution that can be divided into workloads with specific goals in mind. The first is Clara Train, which provides capabilities for AI-assisted annotation (AIAA) and for training AI models with techniques like AutoML, transfer learning, and federated learning. The second is Clara Deploy, which provides a framework for inference using operators and pipelines on top of the Triton Inference Server. This Quick Start deploys the Clara Train application framework.

Using Clara Train on AWS provides the following benefits:

  1. Ready access to AWS storage services, like Amazon S3, that can be used to durably store massive medical imaging datasets and to facilitate secure collaboration between teams.
  2. Minimal permanent compute infrastructure. The deployment is entirely ephemeral and provides scalable access to GPU compute as it is needed, with pay-as-you-go pricing.
  3. Because Amazon ECS is a fully managed container orchestration service, the Clara Train Docker containers and underlying GPU compute are managed for you. Service-level health checks ensure uptime of the Clara Train services, with automatic failover to a second AWS Availability Zone in the case of a failure.
  4. Clara Train’s AIAA service, deployed in a highly available architecture, can dramatically reduce the labor and cost associated with annotating medical image datasets.
  5. Adopting Clara Train’s advanced features, such as AutoML, federated learning, and transfer learning, can accelerate the development of sophisticated ML pipelines.

In short, leveraging Clara Train with managed services like Amazon ECS, Amazon Elastic File System (EFS), and Amazon S3 can help you focus on the AI development that will differentiate your organization’s research or products rather than the undifferentiated heavy lifting associated with managing the underlying platform.

Prerequisites

Deploy the NVIDIA Clara Train SDK Quick Start by following the steps in the deployment guide. By default, the Quick Start encrypts network connections end to end throughout the environment, using a certificate managed in AWS Certificate Manager. Keep the defaults, which require encryption, for deployments focused on model training and federated learning.

If you wish to integrate with Slicer, you need to override the default (and best practice) and permit unencrypted HTTP connections to the load balancer and annotation server. Set UseHTTPS to HTTP to deploy the architecture with this less secure setting, and take care to use this configuration only with de-identified medical imaging data.
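
If you script the deployment rather than launching it from the console, the override is a single template parameter. The following AWS CLI sketch is illustrative only: the stack name and template URL are placeholders, and the Quick Start's other required parameters are omitted for brevity.

# Sketch only: placeholder stack name and template URL; supply the Quick Start's other required parameters as well
aws cloudformation create-stack \
    --stack-name clara-train \
    --template-url https://<quickstart-template-url>/clara-train.template.yaml \
    --parameters ParameterKey=UseHTTPS,ParameterValue=HTTP \
    --capabilities CAPABILITY_IAM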

When the deployment is complete, you will see the Clara AIAA API and Amazon EFS DNS names in the AWS CloudFormation outputs section.
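
If you prefer the command line, you can read the same outputs with the AWS CLI. The stack name below is a placeholder for the name you chose at deployment; the output keys come from the Quick Start.

# Placeholder stack name; ClaraServiceUrl and ElasticFileSystemDnsName are the Quick Start output keys
aws cloudformation describe-stacks \
    --stack-name clara-train \
    --query "Stacks[0].Outputs[?OutputKey=='ClaraServiceUrl' || OutputKey=='ElasticFileSystemDnsName'].[OutputKey,OutputValue]" \
    --output table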

Paste the ClaraServiceUrl output into your browser. The API documentation page served by the Clara AIAA server confirms that the deployment was successful.

Download the Task09_Spleen.tar dataset from the Medical Segmentation Decathlon to your local machine. Copy the archive to an S3 bucket, and extract the contents of the archive to a directory on your local machine.
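
For example, with the AWS CLI configured on your local machine (the bucket name below is a placeholder for a bucket you own):

# Placeholder bucket name; upload the archive to S3, then extract it locally
aws s3 cp ./Task09_Spleen.tar s3://my-clara-data-bucket/Task09_Spleen.tar
tar -xf ./Task09_Spleen.tar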

IDE and Environment Setup

To explore the SDK functionality, you will set up both a pre-trained model and some training data. The setup is partially automated by scripts included with the Quick Start. You’ll use an AWS Cloud9 IDE environment to run the setup scripts and for editing the model configuration files. The AWS Cloud9 EC2 instance will mount the Amazon EFS file system deployed with the Quick Start, providing access to the data and configurations used by the Clara Train container.
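
You don't need to mount the file system by hand; the clara_bootstrap.sh script that you configure below takes the EFS DNS name as an input. Conceptually, though, the mount amounts to an NFS mount of the file system at /mnt/efs, along these lines (the file system DNS name shown is illustrative and corresponds to the ElasticFileSystemDnsName output):

# Illustrative only; the setup script handles this step
EFSDNS=fs-0123456789abcdef0.efs.us-west-2.amazonaws.com
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 "$EFSDNS":/ /mnt/efs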

From the AWS Management Console, launch a new AWS Cloud9 environment. Accept the default configurations except for Network settings (advanced). Launch the AWS Cloud9 EC2 instance into the same VPC that contains the ECS cluster. For the Subnet, select a public subnet in the same VPC, such as the subnet containing the bastion host.

From the bash shell in the lower pane of the AWS Cloud9 IDE, clone the Quick Start repo to the local directory.

git clone https://github.com/aws-quickstart/quickstart-nvidia-imaging-clara-train.git

With the AWS Cloud9 editor, open the clara_bootstrap.sh script in the samples subdirectory of the cloned repo. Set CLOUD9SG to the security group ID used by the AWS Cloud9 EC2 instance. Set EFSSG to the security group ID used by the Amazon EFS mount target deployed by the Quick Start. Set AIAA and EFSDNS to the AWS CloudFormation stack outputs ClaraServiceUrl and ElasticFileSystemDnsName, respectively. Set DATABUCKET to the S3 bucket where you uploaded the example data archive, Task09_Spleen.tar.
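
The exact values depend on your account and stack, but once edited, the assignments in clara_bootstrap.sh look something like the following. All values shown are placeholders.

# All values are placeholders; substitute your own IDs, DNS names, and bucket
CLOUD9SG=sg-0123456789abcdef0                                    # security group of the AWS Cloud9 EC2 instance
EFSSG=sg-0fedcba9876543210                                       # security group of the EFS mount target
AIAA=clara-LoadB-EXAMPLE-123456789.us-west-2.elb.amazonaws.com   # ClaraServiceUrl output
EFSDNS=fs-0123456789abcdef0.efs.us-west-2.amazonaws.com          # ElasticFileSystemDnsName output
DATABUCKET=my-clara-data-bucket                                  # S3 bucket holding Task09_Spleen.tar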

Save the file, and run it in the AWS Cloud9 bash shell. When the script exits, confirm the model was loaded to the AIAA server with the following curl command, adding /v1/models to the AIAA DNS name.

curl http://clara-LoadB-ABLH1BWGGPQ8-799966118.us-west-2.elb.amazonaws.com/v1/models

The API will respond with a summary of the model that you just loaded.

[{
  "name": "clara_ct_seg_spleen_amp",
  "labels": ["spleen"],
  "description": "A pre-trained model for volumetric (3D) segmentation of the spleen from CT image",
  "version": "2",
  "type": "segmentation"
}]

Set Up Slicer and Perform Automatic Segmentation

Install and launch the Slicer application. From the Extensions manager menu, install the NvidiaAIAssistedAnnotation extension. Use the Add Data command, at top left, to import an image file from the imagesTr subdirectory of the Task09_Spleen.tar archive that you extracted previously.

From the Application Settings menu, select the NVIDIA plugin and set the server address to match the AIAA DNS name, available from the Quick Start AWS CloudFormation output ClaraServiceUrl noted above.

Select the Segment Editor from the Slicer module dropdown, then select Nvidia AIAA from Effects. The Auto-segmentation dropdown should display the clara_ct_seg_spleen_amp model that you downloaded in the configuration above. Click the Start button to begin the automatic segmentation job.

All slices of the CT images are sent through the load balancer to the backend Amazon ECS container instance for inferencing. The segmentation results are then returned to Slicer, and the spleen volume is displayed with blue shading.

Using the Clara model training tools

The NVIDIA Clara Train application framework also includes a powerful set of command line model development tools. The Quick Start deploys the Amazon ECS container instance in a private subnet for improved security. You access the Clara Train container through an SSH connection to the bastion host deployed by the Quick Start. Use the Amazon EC2 console to find the public IP of the bastion host and the private IP of the Amazon ECS container instance. Then start an SSH authentication agent, SSH to the bastion host’s public IP, and from there, SSH to the Amazon ECS container instance’s private IP.

# Add the private key to the SSH agent
ssh-add -K Clara2.pem
# Connect to the bastion host's public IP with agent forwarding enabled
ssh -A ec2-user@35.155.36.51
# From the bastion host, connect to the Amazon ECS container instance's private IP
ssh -A ec2-user@10.180.31.225
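
If your OpenSSH client supports ProxyJump, you can also combine the two hops into a single command once the key is loaded into the agent:

ssh -J ec2-user@35.155.36.51 ec2-user@10.180.31.225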

From the Amazon ECS container instance, run docker ps to find the ID of the Clara Train container. Then launch an interactive bash shell on that container.

docker exec -it b50d10477dfc /bin/bash

From the container command line, download the pre-trained spleen segmentation model from the NVIDIA NGC registry.

MODEL_NAME=clara_ct_seg_spleen_amp
VERSION=1
ngc registry model download-version nvidia/med/$MODEL_NAME:$VERSION --dest /workspace/

This is the same pre-trained model that you used for the segmentation task with the AIAA server, but in this case it will be used for a model training job. Extract the training dataset that the setup scripts downloaded to the data directory.

cd /workspace/data/
tar -xf ./Task09_Spleen.tar
gunzip ./Task09_Spleen/imagesTr/*.nii.gz
gunzip ./Task09_Spleen/imagesTs/*.nii.gz
gunzip ./Task09_Spleen/labelsTr/*.nii.gz

Return to the AWS Cloud9 IDE environment. From the environment directory, create a symbolic link to make the Amazon EFS mount point more convenient to reference, and change the owner of the model directory that you just downloaded.

ln -s /mnt/efs/ ./workspace
sudo chown -R ec2-user ./workspace/clara_ct_seg_spleen_amp_v1

With the AWS Cloud9 editor, open the environment.json file that came with the model, located at ./workspace/clara_ct_seg_spleen_amp_v1/config/environment.json. Set DATA_ROOT to point to the extracted training data directory. Update DATASET_JSON to point to the dataset_0.json file in the model config directory.
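
Assuming the paths used earlier in this walkthrough (data extracted under /workspace/data and the model under /workspace/clara_ct_seg_spleen_amp_v1), the two entries look roughly like this; leave the other keys in environment.json unchanged.

"DATA_ROOT": "/workspace/data/Task09_Spleen",
"DATASET_JSON": "/workspace/clara_ct_seg_spleen_amp_v1/config/dataset_0.json",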

Now you are ready to launch the segmentation model training job with the train.sh script.

cd /workspace/clara_ct_seg_spleen_amp_v1/
chmod +x commands/train.sh
commands/train.sh

When the training job completes, performance metrics are reported so you can evaluate how well the model trained.

Conclusion

In this blog post, I’ve shown you the end-to-end process for deploying the Clara Train Application Development Framework on AWS. I demonstrated how to use pre-trained models to perform semantic segmentation of CT scans, and how to train a segmentation model yourself. I hope this post helps you start developing your own AI models, whether by leveraging AWS services with the Clara Train Application Development Framework, or with Amazon SageMaker, TensorFlow, PyTorch, or Apache MXNet. If you work in healthcare and want to know more, please do not hesitate to contact your AWS account team.

Andy Schuetz

Andy is a Sr. Partner Solutions Architect, and he focuses on helping partners use the AWS Cloud to deliver solutions for Healthcare and Life Science customers.