
Machine Learning at the Edge: Using and Retraining Image Classification Models with AWS IoT Greengrass (Part 1)

With the introduction of the AWS IoT Greengrass Image Classification connector at this year’s re:Invent, it has become easier than ever to use image classification at the edge through AWS IoT Greengrass. Because it is software that lives on a local device, AWS IoT Greengrass makes it possible to analyze data closer to its source (a sensor, for example). Using AWS IoT Greengrass connectors, AWS IoT Greengrass Core devices can connect to third-party applications, on-premises software, and AWS services without writing code. The AWS IoT Greengrass Image Classification connector gives an AWS IoT Greengrass Core device the ability to classify an image into one of multiple categories (for example, categorizing microchips in a factory as defective or not defective, classifying types of inventory, or determining the kind of dog you’re following on Instagram). This prediction is referred to as an inference. Together, image classification and AWS IoT Greengrass make it possible for you to perform inference even when a device is disconnected from the cloud!

Behind the scenes, the AWS IoT Greengrass Image Classification connector uses a machine learning model that has been trained using the image classification algorithm in Amazon SageMaker. When you deploy the connector, all of the Lambda functions and machine learning libraries (MXNet) required to make a prediction are pulled down and configured on the AWS IoT Greengrass Core device automatically.

In these two posts, we will walk through an end-to-end example of creating an application that uses image classification. In part 1, we will create a new image classification model in Amazon SageMaker and get you up and running with the AWS IoT Greengrass Image Classification connector. In part 2, we will collect data in the field, retrain our model, and observe changes in our inference results.

What we’re building

We’ll be tackling a real-world problem that can be addressed through the use of image classification: sorting beverage containers in a recycling facility. We will train our model to identify whether an image contains a beer mug, wine bottle, coffee mug, or soda can. We will also include a clutter category in case the image does not belong to one of these classes.

First, we will build our image classification model using the Caltech 256 dataset. Then, we will create an AWS IoT Greengrass Image Classification connector and interact with it through a Lambda function dedicated to classifying beverage containers. At the end of part 1, we will have an architecture in which a Lambda function on the Greengrass Core device captures images with the Pi camera, passes them to the Image Classification connector for local inference, and publishes the predictions back to the AWS IoT console.

Prerequisites

To follow along with the instructions in this post, you will need an AWS account and a Raspberry Pi with a Pi camera module.

Make sure that you have a Greengrass group deployed to your Raspberry Pi that is running AWS IoT Greengrass Core v1.7.0. Also make sure that your Greengrass group has a group role (an IAM role) with, at minimum, the AWSGreengrassResourceAccessRolePolicy and AWSGreengrassFullAccess policies attached; you can view or attach this role by opening the AWS IoT console and choosing Settings. For information about setting up a device with AWS IoT Greengrass, see Getting Started with AWS IoT Greengrass in the AWS IoT Greengrass Developer Guide.

To use the AWS IoT Greengrass Image Classification connector, we need to install the dependencies required by MXNet, the machine learning library the connector uses for image classification. Follow the installation steps outlined for ARMv7 in the Image Classification connector documentation.

Note: To install the dependencies on a Raspberry Pi, you must increase the swap file size. We recommend setting the size to 1000 MB. This installation can take up to one hour.
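On Raspbian, the swap file size is managed by the dphys-swapfile service. A minimal sketch of the resize, assuming the default /etc/dphys-swapfile configuration file:

$ sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=1000/' /etc/dphys-swapfile
$ sudo /etc/init.d/dphys-swapfile restart
$ free -m   # the Swap row should now report roughly 1000 MB

You can restore the original value (typically CONF_SWAPSIZE=100) after the installation completes.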

Finally, per the troubleshooting section of the Image Classification connector documentation, run the following command to prevent a Raspberry Pi/OpenCV-specific issue from occurring during deployments:

$ sudo ln /dev/null /dev/raw1394

If you have trouble performing these steps, see the troubleshooting section in the AWS IoT Greengrass documentation.

Building and testing the application

We will start by creating a Lambda function that can take pictures using the Pi camera and make predictions using an image classification model.

Create a Lambda function

Download beverageclassifier.py from GitHub into a new directory, and then download and unzip the AWS IoT Greengrass Machine Learning SDK into the same location. Compress the directory into a .zip file and use it to create a Lambda function in the AWS Lambda console. We called our Lambda function beverage_classifier. In the AWS IoT console, add this Lambda function to your group and configure it as a long-lived Lambda function with a memory limit of 128 MB and a timeout of 10 seconds, as shown in the following screenshot. For more information about creating and packaging Lambda functions, see Create and Package a Lambda Function in the AWS IoT Greengrass Developer Guide.
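To show the overall shape of such a function, here is a minimal, hypothetical sketch of a long-lived Greengrass Lambda (the actual logic ships in beverageclassifier.py; the handler below is an illustrative stand-in that uses the AWS IoT Greengrass Core SDK installed in the next step):

import json

import greengrasssdk

# In a long-lived function, module-level code runs once when the Greengrass
# Core starts the Lambda; the handler then runs for each incoming MQTT message.
iot_client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    # A message on the request topic triggers capture and classification
    # (both are sketched in the sections that follow).
    result = {"status": "received"}
    iot_client.publish(
        topic="/response/prediction/beverage_container",
        payload=json.dumps(result),
    )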

Run the following command on your Raspberry Pi to install the AWS IoT Greengrass Core SDK:

$ pip install greengrasssdk

To use the Pi camera, we need to set up the Raspberry Pi and some local resources. Follow the steps in the Configure the Raspberry Pi and Add Resources to the Greengrass Group sections of the AWS IoT Greengrass Developer Guide.
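Once the camera is configured, capturing a still from Python takes only a few lines with the picamera library. A sketch, assuming captured images are written under /home/ggc_user/raw_field_data (the volume resource we set up later in this post):

import picamera

def capture_image(path="/home/ggc_user/raw_field_data/capture.jpg"):
    # Open the camera, take a still, and return the path of the saved image.
    with picamera.PiCamera() as camera:
        camera.resolution = (224, 224)  # a common input size for image classification
        camera.capture(path)
    return path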

Notice that interaction with the Image Classification connector occurs through the AWS IoT Greengrass Machine Learning SDK.
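A sketch of that interaction, assuming the local inference service name beverage-classifier that we assign when configuring the connector later in this post (error handling omitted):

import json

import greengrass_machine_learning_sdk as ml

ml_client = ml.client("inference")

def classify(image_path):
    # Send the captured image to the connector's local inference service.
    with open(image_path, "rb") as f:
        response = ml_client.invoke_inference_service(
            AlgoType="image-classification",
            ServiceName="beverage-classifier",
            ContentType="image/jpeg",
            Body=f.read(),
        )
    # The service responds with per-class confidence scores.
    return json.loads(response["Body"].read())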

Create a model

We will use Amazon SageMaker to create and train our image classification model. In the Amazon SageMaker console, create a notebook using the sample we have provided on GitHub.

Follow the notebook’s instructions for part 1. Upon completion, you will have an Amazon SageMaker training job that can be used to configure an Image Classification connector.
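Under the hood, the notebook drives Amazon SageMaker’s built-in image classification algorithm. Conceptually, the training job it creates looks something like the following sketch (the role ARN, bucket names, and hyperparameter values are illustrative placeholders, not the notebook’s exact settings):

import sagemaker
from sagemaker import image_uris

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Container image for the built-in image classification algorithm.
training_image = image_uris.retrieve("image-classification", session.boto_region_name)

estimator = sagemaker.estimator.Estimator(
    training_image,
    role,
    instance_count=1,
    instance_type="ml.p2.xlarge",
    output_path="s3://YOUR_BUCKET/output",
    sagemaker_session=session,
)
estimator.set_hyperparameters(
    num_layers=18,
    image_shape="3,224,224",
    num_classes=5,             # beer mug, wine bottle, coffee mug, soda can, clutter
    num_training_samples=500,  # illustrative
    epochs=10,
)
estimator.fit({"train": "s3://YOUR_BUCKET/train", "validation": "s3://YOUR_BUCKET/validation"})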

Configure an Image Classification connector

Now that we have a training job, we can set up our connector. Deploying the connector to our Core device will make our image classification model ready to be used locally by the Lambda function we created in the previous step.

Begin by creating a machine learning resource in your Greengrass group. You can find your group on the Greengrass Groups page of the AWS IoT console. On the group’s page, under Resources, choose the Machine Learning tab, and then choose Add a machine learning resource. Use the values in the following screenshot to complete the fields. For SageMaker model, be sure to choose the Amazon SageMaker model we created in the previous step.

Choose Save and create a deployment.

Now we’re ready to create a connector. Navigate to your Greengrass group, choose the Connectors tab, and then choose Add a connector. We will be deploying this connector to a Raspberry Pi, so on the Select a connector page, choose the Image Classification ARMv7 connector.

On the next page, we will configure some parameters for our connector. Choose the machine learning resource you created in the previous step. For Local inference service name, enter beverage-classifier. This name will be used in our Lambda code when we call the connector through the AWS IoT Greengrass Machine Learning SDK. Use the values in this screenshot to configure the rest of your connector’s parameters.

Choose Add and then create a new deployment. Our Lambda function can now access our image classification model!

If you have trouble with any of these steps, see the troubleshooting section of the Image Classification connector documentation.

Configure subscriptions

Now that our connector and Lambda function are set up, let’s create a way to interact with our application. Using the Test page in the AWS IoT console, we will configure subscriptions between the AWS Cloud and the beverage_classifier Lambda function so that we can trigger the device to capture images and view our inference results in the console. In practice, any MQTT message can trigger the beverage_classifier Lambda function. We use the AWS IoT console to trigger events for this example because it offers easy debugging feedback, but there are other ways to trigger these events. In a production environment, you might instead send these MQTT events from other devices or Lambda functions. (It’s possible to send messages between devices and a Greengrass Core device even when the Core device is disconnected from the cloud!) Depending on your use case, AWS IoT Jobs offer another way to interact with your Greengrass Core device.

In the AWS IoT console, configure the following subscriptions for your group:

  1. AWS IoT Cloud (source) to beverage_classifier Lambda (target) on /request/classify/beverage_container (topic). Messages on this topic will trigger the Lambda code (see the sketch after this list).
  2. beverage_classifier Lambda (source) to AWS IoT Cloud (target) on /response/prediction/beverage_container (topic). These messages will appear in the AWS IoT console and report predictions.
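For example, another Greengrass Lambda function (or device) could trigger a classification without the console. A minimal sketch, with a purely illustrative payload:

import json

import greengrasssdk

client = greengrasssdk.client("iot-data")

# Any payload works; the topic alone is what triggers the beverage_classifier Lambda.
client.publish(
    topic="/request/classify/beverage_container",
    payload=json.dumps({"requested_by": "conveyor-belt-camera-1"}),  # illustrative
)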

Set up local resources

Configure a volume resource for the local directory where we will store the images we capture, /home/ggc_user/raw_field_data.

Before we deploy, we need to create the /home/ggc_user/raw_field_data directory on the device. We also need to give read and write permissions to ggc_user:

$ sudo mkdir -p /home/ggc_user/raw_field_data
$ sudo chown -R ggc_user:ggc_group /home/ggc_user/raw_field_data/

You can alternatively give permission to your own user ID/group ID by setting the Run as field in the beverage classifier AWS IoT Greengrass Lambda function configuration. For more information, see Controlling Execution of Greengrass Lambda Functions by Using Group-Specific Configuration in the AWS IoT Greengrass Developer Guide.

Create a deployment.

Test

Now that everything is set up, we can test our beverage container classifier. In the AWS IoT console, choose Test, and subscribe to the topic /response/prediction/beverage_container. Publishing to the topic /request/classify/beverage_container will capture and classify an image. Place a coffee mug, beer mug, wine bottle, or soda can in front of your Pi camera and choose Publish to topic. Your Core device will capture an image, make a prediction, and publish the result back to the AWS IoT console.
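If you prefer to script the test rather than use the console, you can publish the same request from any machine with AWS credentials. A sketch using boto3 (region and credentials are assumed to be configured in your environment):

import json

import boto3

# Publish through the AWS IoT message broker; the subscription we created
# forwards the message to the beverage_classifier Lambda on the Core device.
iot = boto3.client("iot-data")
iot.publish(
    topic="/request/classify/beverage_container",
    qos=0,
    payload=json.dumps({}),
)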

Conclusion

Testing will demonstrate the limitations of the Caltech 256 dataset. You’ll notice that many predictions are either incorrect or low in confidence. In our testing, we saw low-confidence or incorrect predictions for everything but our beer mug.

It would be great to improve the accuracy of our model. In part 2, we will show you how to extend this application to collect your own images and retrain the model to attempt to improve its performance!