Using Amazon Augmented AI with AWS Marketplace machine learning models
Pre-trained machine learning (ML) models available in AWS Marketplace take care of the heavy lifting, helping you deliver Artificial Intelligence (AI)- and ML-powered features faster and at a lower cost. However, as with any ML model, some predictions are simply not confident enough, and you want a pair of human eyes to confirm them; I refer to this as “human-in-loop.” Furthermore, training a model can be complicated, time-consuming, and expensive. This is where AWS Marketplace and Amazon Augmented AI (Amazon A2I) come in. By combining a pre-trained ML model from AWS Marketplace with Amazon A2I, you can quickly reap the benefits of a pre-trained model while validating and augmenting its accuracy with human intelligence.
If you would like to exercise this solution in an automated manner, you can try out the sample Jupyter notebook.
Background
- AWS Marketplace contains over 400 pre-trained ML models. Some models are general purpose. For example, the GluonCV SSD Object Detector can detect objects in an image and place bounding boxes around the objects.
- Amazon A2I provides a human-in-loop workflow to review ML predictions. Its configurable human-review workflow solution and customizable user-review console enable you to focus on ML tasks and increase the accuracy of the predictions with human input.
For this blog post, I assume that your application accepts an image and returns bounding boxes for all vehicles or bikes that are completely visible. I also assume you are using a generic object detector model, such as GluonCV SSD Object Detector, augmented by a human who manually reviews the image whenever the model identifies objects with low confidence. In the following sections, I show you how to use GluonCV SSD Object Detector, a pre-trained ML model from AWS Marketplace, with Amazon A2I.
Prerequisites
For this walkthrough, you must meet the following prerequisites:
- Subscription to an AWS Marketplace machine learning model
- AWS Identity and Access Management (IAM) permissions to run Amazon A2I human review workflows
Solution overview
This end-to-end solution augments low-confidence predictions with manual reviews via Amazon A2I. Here is what it enables:
- Use the AWS console to select an ML model from AWS Marketplace.
- Deploy the ML model to an Amazon SageMaker endpoint using the AWS Command Line Interface (AWS CLI). Refer to the following diagram.
- Use the AWS CLI to simulate a client application submitting an inference request against the SageMaker endpoint.
- Prediction results that are above the threshold criterion require no further processing.
- For prediction results that are below the threshold criterion, the private workforce reviews and modifies them using the worker console in Amazon A2I. The modified results are sent to Amazon Simple Storage Service (Amazon S3) for storage and subsequent retrieval by the client application. Refer to the following diagram.
The solution described in this blog post detects objects and triggers a human-in-loop workflow to review, update, and add labeled objects to an individual image. You can follow these steps to configure a human-in-loop workflow, subscribe to and then deploy a model from AWS Marketplace, and then trigger the workflow for low-confidence predictions. Finally, the results are written back to the configured S3 bucket.
The first step of this workflow is to configure Amazon A2I.
Step 1: Configure Amazon A2I
In this three-step process, you first create a private workforce in Amazon SageMaker Ground Truth. You then create a worker task template. You finally create a human review workflow in Amazon A2I.
Step 1.1 Creating a private workforce in Ground Truth
To create a private workforce, perform the following steps:
- In the Amazon SageMaker console, in the left sidebar under the Ground Truth heading, choose Labeling workforces.
- Choose Private, and then choose Create private team.
- For team name, enter MyTeam. Choose Create private team.
- After you create your private team, a new screen appears. In the Workers section at the bottom of the page, select Invite new workers.
- Paste or type a list of up to 50 email addresses, separated by commas, into the email addresses box. If you are following this blog post, specify an email account that you have access to. The system sends an invitation email, which allows users to authenticate and set up their profile for performing human-in-loop review.
- On the Private tab of the Labeling workforces page, open your private workforce's summary page. In the Workers section, choose Add workers to team.
- Select and add the worker or workers you invited earlier.
With these steps, you have created a workforce of people who validate and correct low-confidence predictions for you.
The next step is to create a task template, which is the user interface that the worker sees when they’re asked to review model predictions.
Step 1.2 Creating the worker task template
AWS makes it easier to create a worker task template by providing over 60 sample user interface (UI) templates to choose from in the amazon-a2i-sample-task-uis repository. Because this blog post performs object detection, I am using the bounding-box.liquid.html template. To customize it, follow these steps:
- Download the template and open the file using a text editor.
- To display the bounding boxes and labels that the model predicts during the evaluation process, just after the sixth line, which contains `src="{{ task.input.taskObject | grant_read_access }}"`, add `initial-value="{{ task.input.initialValue }}"`.
- Modify the labels entry to `labels="{{ task.input.labels }}"`.
- Copy the content of your template to your clipboard.
- In the Amazon SageMaker console left sidebar under the Augmented AI heading, select Worker task templates, and then choose Create template.
- For Template name, enter object-detection-template.
- Paste the template content from your clipboard into the template editor. Choose Create.
The final step in configuring Amazon A2I is creating the human review workflow.
Step 1.3 Creating the human review workflow
To create a human review workflow:
- In the Amazon SageMaker console left sidebar under the Augmented AI heading, open Human review workflows and then choose Create human review workflow.
- In Workflow settings, do the following:
  - For Name, enter a unique workflow name.
  - For S3 location for output, enter the S3 bucket where you want to store the human review results.
  - For IAM role, choose Create a new role, and then select Any S3 bucket.
- For Task type, choose Custom.
- In the Worker task template section, configure and specify your worker task template:
  - Select the custom task template that you just created.
  - Enter a Task description that briefly describes the task for your workers.
- For Workers, choose Private as the workforce type.
- For Private Team, choose the team you created in Step 1.1.
The configuration of Amazon A2I is now complete.
Step 2: Subscribe to and deploy an AWS Marketplace model
In this step, I subscribe to the model in AWS Marketplace and deploy it to perform an inference.
Step 2.1 Select and subscribe to the AWS Marketplace model
- After logging into your AWS account, open GluonCV SSD Object Detector.
- Choose the Continue to Subscribe button.
- To subscribe to the ML model, choose Accept offer.
- On the Configure your Software page, you should see a model package ARN.
Note that in step 2.2, I will deploy the model in the us-east-1 Region. If you choose to deploy in another Region, specify the model package ARN corresponding to that Region in the create-model CLI command in step 2.2.2.
Step 2.2 Deploy the model to the SageMaker endpoint
To deploy the model for performing real-time inference using AWS CLI, follow these steps:
- Create an IAM role that includes the AmazonSageMakerFullAccess managed policy. Copy the ARN of the IAM role from the Summary screen of the newly created IAM role, which you use in the next step. For detailed instructions for creating this role, see Creating a role for an AWS service (console).
- Create a model from the model package with the create-model operation, replacing <IAM_ROLE_ARN> with the ARN you copied in step 2.2.1. See the sketch after this list.
- You are now ready to create a SageMaker endpoint from the AWS Marketplace model. To do that, create an endpoint configuration and then the endpoint itself, also shown in the sketch after this list.
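The following is a minimal sketch of those three deployment calls using boto3, the AWS SDK for Python. The model, endpoint configuration, and endpoint names are illustrative placeholders of my own choosing, as is the instance type; substitute the model package ARN from the Configure your Software page, the IAM role ARN from step 2.2.1, and an instance type that the model package supports.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Placeholders: the model package ARN from the "Configure your Software"
# page and the IAM role ARN from step 2.2.1.
MODEL_PACKAGE_ARN = "<MODEL_PACKAGE_ARN>"
IAM_ROLE_ARN = "<IAM_ROLE_ARN>"

# Create a SageMaker model from the AWS Marketplace model package.
# Marketplace model packages run with network isolation enabled.
sm.create_model(
    ModelName="gluoncv-ssd-object-detector",
    PrimaryContainer={"ModelPackageName": MODEL_PACKAGE_ARN},
    ExecutionRoleArn=IAM_ROLE_ARN,
    EnableNetworkIsolation=True,
)

# Create an endpoint configuration hosting the model on a single instance;
# choose an instance type that the model package supports.
sm.create_endpoint_config(
    EndpointConfigName="gluoncv-ssd-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "gluoncv-ssd-object-detector",
        "InitialInstanceCount": 1,
        "InstanceType": "ml.m4.xlarge",
    }],
)

# Create the real-time endpoint; it takes several minutes to become InService.
sm.create_endpoint(
    EndpointName="gluoncv-object-detector",
    EndpointConfigName="gluoncv-ssd-config",
)
```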
Once the model has been deployed, verify that the Status of the endpoint changed to InService. To do that, navigate to the Amazon SageMaker console. In the left sidebar, under the Inference heading, select Endpoints. The Status of the endpoint should be InService.
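If you prefer to verify programmatically, here is a small sketch using boto3's built-in waiter, assuming the endpoint name from the earlier sketch:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Block until the endpoint finishes creating, then print its status.
sm.get_waiter("endpoint_in_service").wait(EndpointName="gluoncv-object-detector")
print(sm.describe_endpoint(EndpointName="gluoncv-object-detector")["EndpointStatus"])
```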
Step 3: Perform inference to detect objects
Now you can perform an inference on the endpoint you just deployed.
- For this blog post, I downloaded this image and used it to perform an inference. Make sure to select the Small version of the image from the Free Download drop-down. When you download the image, note the image path on your system, for example, /tmp/download/image.jpg.
- To perform an inference on the image, run the invoke-endpoint command from your shell, replacing <IMAGE_PATH> with the full filesystem path where you saved the image file. An equivalent sketch follows this list.
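Here is a minimal boto3 sketch of that inference call, assuming the endpoint name used in step 2.2; it sends the raw image bytes and saves the JSON predictions to result.json:

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

# Send the raw JPEG bytes to the endpoint deployed in step 2.2.
with open("<IMAGE_PATH>", "rb") as f:
    response = runtime.invoke_endpoint(
        EndpointName="gluoncv-object-detector",
        ContentType="image/jpeg",
        Body=f.read(),
    )

# Save the predictions for the human-in-loop step that follows.
result = json.loads(response["Body"].read())
with open("result.json", "w") as f:
    json.dump(result, f, indent=2)
```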
You can see the inference results in the result.json file, which contains the bounding box coordinates for each identified object along with a confidence score. The model is confident about many objects, but it returned a lower score for some. These low-confidence inferences require a human-in-loop review.
The next step is to trigger and perform a human-in-loop review.
Step 4: Invoke the human-in-loop review
Step 4.1 Triggering a human-in-loop review
Typically, human-in-loop review would be triggered by your application. However, for this blog post, I trigger it via code, which can be run from your Jupyter notebook hosted on SageMaker.
Modify <FLOW_DEFINITION_ARN> and <IMAGE_FILE_NAME>, and ensure that you upload both the image and the inference result files to your hosted notebook instance, then execute the trigger code. Note that a threshold of 0.9 is set in the code; you may adjust the threshold based on your experience with the model you have chosen.
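Here is a sketch of what that trigger code can look like. The detection schema (a score field per detection), the label list, and the S3 URI are illustrative assumptions; adapt them to your model's actual output and to the fields your worker task template expects.

```python
import json
import uuid
import boto3

a2i = boto3.client("sagemaker-a2i-runtime", region_name="us-east-1")

FLOW_DEFINITION_ARN = "<FLOW_DEFINITION_ARN>"
# S3 URI of the uploaded image; the bucket must be readable by the
# human review workflow's IAM role.
TASK_OBJECT = "s3://<YOUR_BUCKET>/<IMAGE_FILE_NAME>"
SCORE_THRESHOLD = 0.9

# Load the predictions saved in step 3.
with open("result.json") as f:
    detections = json.load(f)

# Illustrative assumption: each detection carries a "score" field.
# Adapt this check (and the box format below) to your model's output
# schema and to the initial-value format your worker template expects.
needs_review = any(d["score"] < SCORE_THRESHOLD for d in detections)

if needs_review:
    # Pre-draw only the confident detections in the worker console.
    confident = [d for d in detections if d["score"] >= SCORE_THRESHOLD]
    a2i.start_human_loop(
        HumanLoopName=f"review-{uuid.uuid4()}",
        FlowDefinitionArn=FLOW_DEFINITION_ARN,
        HumanLoopInput={
            "InputContent": json.dumps({
                "taskObject": TASK_OBJECT,   # image shown to the worker
                "initialValue": confident,   # pre-drawn bounding boxes
                "labels": ["car", "bicycle", "person"],
            })
        },
    )
```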
Step 4.2 Update the predictions as a worker
Now that the human-in-loop workflow has been triggered, you can log into the worker console to review and update the model’s predictions.
To access the URL for the worker console, follow these steps.
- In the Amazon SageMaker console, in the left sidebar under the Ground Truth heading, choose Labeling workforces.
- To view the private workforce that you created, choose Private.
- In Private workforce summary, under Labeling portal sign-in URL, choose the link. A new tab opens in your web browser.
- Enter the username and password that you created for your worker in Step 1.1.
- You should see a worker console with one task waiting for you. To begin, choose the task and then choose Start working. The following screenshot shows the Jobs section of a worker console with one task and an orange Start working button.
When you begin working on a task, you see the image that you sent into the model for prediction. You also see a number of bounding boxes with labels. These objects were detected by the model with high confidence. The following image shows a city street with cars and a bicycle rider in the rain. Three of the cars have been correctly identified by the model and have orange bounding boxes around them and the label car; the cyclist has a red bounding box with the label person.
Note that no box appears around the bicycle in the foreground because you set the threshold to 0.9, which filtered out all object detections with a confidence score lower than 0.9.
As a human in charge of reviewing this inference, you can place a bounding box around the bike and include the label. To do so, in the right pane under Labels, choose bicycle. Then use your cursor to make a box around the bicycle as shown in the following image.
In the lower right corner, choose Submit.
Step 4.3 View human-in-loop annotations
You can open the workflow you created and then open the review from the Human loops section. In the human loop details, you can find the S3 location of the output, which contains the review results. The annotatedResult object in that output includes the bounding box identified by the worker as well.
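As a sketch, you can fetch and inspect that output with boto3; the bucket and key below are placeholders for the S3 location shown in the console.

```python
import json
import boto3

s3 = boto3.client("s3")

# Placeholders: copy the actual output location from the human loop's
# details page in the Amazon A2I console.
obj = s3.get_object(Bucket="<YOUR_BUCKET>", Key="<HUMAN_LOOP_OUTPUT_KEY>/output.json")
output = json.loads(obj["Body"].read())

# humanAnswers holds one entry per worker submission; the annotatedResult
# inside answerContent carries the worker-drawn bounding boxes.
for answer in output["humanAnswers"]:
    print(json.dumps(answer["answerContent"], indent=2))
```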
Cleaning up
To avoid incurring future charges, delete the resources you created in this walkthrough: the SageMaker endpoint, endpoint configuration, and model; the human review workflow and worker task template; and any objects written to your S3 bucket. If you no longer need the model, also cancel your subscription to it in AWS Marketplace.
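Here is a boto3 sketch of the main deletions, assuming the resource names used in the earlier sketches:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Deleting the endpoint stops the hourly instance and model usage charges.
sm.delete_endpoint(EndpointName="gluoncv-object-detector")
sm.delete_endpoint_config(EndpointConfigName="gluoncv-ssd-config")
sm.delete_model(ModelName="gluoncv-ssd-object-detector")

# Remove the human review workflow; replace with the name from step 1.3.
sm.delete_flow_definition(FlowDefinitionName="<YOUR_FLOW_DEFINITION_NAME>")
```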
Additional resources
Here are some additional resources I recommend checking out.
- To see tutorial videos, watch this AWS Marketplace for Machine Learning video playlist.
- For inspiration, look over the machine learning project submissions from the AWS Marketplace ML Challenge virtual hackathon, conducted in spring 2020. There are also detailed architecture diagrams and project demonstration videos.
- In addition to third-party model packages, AWS Marketplace also contains algorithms. These can be used to train a custom ML model. For more information on algorithms, see Amazon SageMaker Resources in AWS Marketplace and Using AWS Marketplace for machine learning workloads.
- If you are interested in selling an ML algorithm or a pre-trained model package, see Sell Amazon SageMaker Algorithms and Model Packages.
Conclusion
In this blog post, I showed you how to use Amazon A2I with pre-trained ML models available in AWS Marketplace. Using these models with Amazon A2I enables you to incorporate human-in-loop reviews of model predictions and obtain highly accurate results. For more information on available models, see AWS Marketplace – Machine Learning.
About the author
Prashanth Rao is a senior technical account manager at AWS. He has over 20 years of experience in IT, management, and consulting. Outside work, he’s striving to have a backhand like Roger Federer and is always on the lookout for the best Vietnamese pho in every new city that he visits.