AWS Machine Learning Blog

Run ML inference on AWS Snowball Edge with Amazon SageMaker Edge Manager and AWS IoT Greengrass

You can use AWS Snowball Edge devices in locations like cruise ships, oil rigs, and factory floors with limited to no network connectivity for a wide range of machine learning (ML) applications such as surveillance, facial recognition, and industrial inspection. However, given the remote and disconnected nature of these devices, deploying and managing ML models at the edge is often difficult. With AWS IoT Greengrass and Amazon SageMaker Edge Manager, you can perform ML inference on locally generated data on Snowball Edge devices using cloud-trained ML models. You not only benefit from the low latency and cost savings of running local inference, but also reduce the time and effort required to get ML models to production. You can do all this while continuously monitoring and improving model quality across your Snowball Edge device fleet.

In this post, we talk about how you can use AWS IoT Greengrass version 2.0 or higher and Edge Manager to optimize, secure, monitor, and maintain a simple TensorFlow model that classifies shipping containers (connex) and people.

Getting started

To get started, order a Snowball Edge device with an AWS IoT Greengrass validated AMI on it (for more information, see Creating an AWS Snowball Edge Job).

After you receive the device, you can use AWS OpsHub for Snow Family or the Snowball Edge client to unlock the device. You can start an Amazon Elastic Compute Cloud (Amazon EC2) instance with the latest AWS IoT Greengrass installed or use the commands on AWS OpsHub for Snow Family.

Launch an instance from an AMI that meets the following requirements, or provide an AMI reference on the AWS Snow Family console before ordering so that the device ships with all libraries and data already in the AMI:

  • The ML framework of your choice, such as TensorFlow, PyTorch, or MXNet
  • Docker (if you intend to use it)
  • AWS IoT Greengrass
  • Any other libraries you may need

Prepare the AMI at the time of ordering the Snowball Edge device on the AWS Snow Family console. For instructions, see Using Amazon EC2 Compute Instances. You also have the option to update the AMI after the Snowball Edge device is deployed to your edge location.

Install the latest AWS IoT Greengrass on Snowball Edge

To install AWS IoT Greengrass on your device, complete the following steps:

  1. Install the latest AWS IoT Greengrass on your Snowball Edge device. Make sure --deploy-dev-tools true is set so that the Greengrass CLI is installed. See the following code:
sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE \
  -jar ./MyGreengrassCore/lib/Greengrass.jar \
  --aws-region region \
  --thing-name MyGreengrassCore \
  --thing-group-name MyGreengrassCoreGroup \
  --tes-role-name GreengrassV2TokenExchangeRole \
  --tes-role-alias-name GreengrassCoreTokenExchangeRoleAlias \
  --component-default-user ggc_user:ggc_group \
  --provision true \
  --setup-system-service true \
  --deploy-dev-tools true

We reference the --thing-name you chose here when we set up Edge Manager.

  2. Run the following command to test your installation:
aws greengrassv2 help
  3. On the AWS IoT console, validate that the Snowball Edge device registered successfully with your AWS IoT Greengrass account.

Optimize ML models with Edge Manager

We use Edge Manager to deploy and manage the model on Snowball Edge.

  1. Install the Edge Manager agent on Snowball Edge using the latest AWS IoT Greengrass.
  2. Train and store your ML model.

You can train your ML model using any framework of your choice and save it to an Amazon Simple Storage Service (Amazon S3) bucket. In the following screenshot, we use TensorFlow to train a multi-label model to classify connex and people in an image. The model used here is packaged into a .tar file and then saved to an S3 bucket.
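As a minimal sketch of this packaging step, the following Python code converts a trained Keras model to TensorFlow Lite, archives it, and uploads it to the S3 path used in this post. The local model directory name is a hypothetical placeholder:

import tarfile
import boto3
import tensorflow as tf

# Assumes a trained Keras model saved locally; "connex_model" is a
# hypothetical directory name.
model = tf.keras.models.load_model("connex_model")

# Convert to TensorFlow Lite, the format we compile with SageMaker Neo later.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

# Neo expects the model artifacts packaged as a .tar.gz archive.
with tarfile.open("connexmodel.tar.gz", "w:gz") as tar:
    tar.add("model.tflite")

# Upload to the example bucket and prefix from this post.
boto3.client("s3").upload_file(
    "connexmodel.tar.gz", "feidemo", "tfconnexmodel/connexmodel.tar.gz"
)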

After the model is saved (TensorFlow Lite in this case), you can start an Amazon SageMaker Neo compilation job to optimize the ML model for Snowball Edge Compute (SBE_C).

  1. On the SageMaker console, under Inference in the navigation pane, choose Compilation jobs.
  2. Choose Create compilation job.

  3. Give your job a name and create or use an existing role.

 If you’re creating a new AWS Identity and Access Management (IAM) role, ensure that SageMaker has access to the bucket in which the model is saved.

  4. In the Input configuration section, for Location of model artifacts, enter the path to model.tar.gz where you saved the file (in this case, s3://feidemo/tfconnexmodel/connexmodel.tar.gz).
  5. For Data input configuration, enter the ML model's input layer (its name and its shape). In this case, it's called keras_layer_input and its shape is [1,224,224,3], so we enter {"keras_layer_input":[1,224,224,3]}.

  6. For Machine learning framework, choose TFLite.

  7. For Target device, choose sbe_c.
  8. Leave Compiler options blank.
  9. For S3 Output location, enter the same location where your model is saved, with the prefix (folder) output. For example, we enter s3://feidemo/tfconnexmodel/output.

  10. Choose Submit to start the compilation job.
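If you prefer to script this step, you can start the same compilation job with the AWS SDK. The following is a minimal boto3 sketch; the job name and role ARN are hypothetical, while the S3 paths and input configuration match the example in this post:

import boto3

sm = boto3.client("sagemaker")

# Hypothetical job name and role; adjust for your account.
sm.create_compilation_job(
    CompilationJobName="connex-tflite-sbe",
    RoleArn="arn:aws:iam::111122223333:role/SageMakerNeoRole",
    InputConfig={
        "S3Uri": "s3://feidemo/tfconnexmodel/connexmodel.tar.gz",
        "DataInputConfig": '{"keras_layer_input":[1,224,224,3]}',
        "Framework": "TFLITE",
    },
    OutputConfig={
        "S3OutputLocation": "s3://feidemo/tfconnexmodel/output",
        "TargetDevice": "sbe_c",
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)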

Now you create a model deployment package to be used by Edge Manager.

  1. On the SageMaker console, under Edge Manager, choose Edge packaging jobs.
  2. Choose Create Edge packaging job.
  3. In the Job properties section, enter the job details.
  4. In the Model source section, for Compilation job name, enter the name you provided for the Neo compilation job.
  5. Choose Next.

  6. In the Output configuration section, for S3 bucket URI, enter where you want to store the package in Amazon S3.
  7. For Component name, enter a name for your AWS IoT Greengrass component.

This step creates an AWS IoT Greengrass model component where the model is downloaded from Amazon S3 and uncompressed to local storage on Snowball Edge.

  8. Create a device fleet to manage a group of devices; in this case, just one (SBE).
  9. For IAM role, enter the role generated by AWS IoT Greengrass earlier (--tes-role-name).

Make sure it has the required permissions by going to the IAM console, searching for the role, and adding the required policies to it.

  10. Register the Snowball Edge device to the fleet you created.

  11. In the Device source section, enter the device name. The IoT name needs to match the name you used earlier; in this case, --thing-name MyGreengrassCore.

You can register additional Snowball devices on the SageMaker console to add them to the device fleet, which allows you to group and manage these devices together. You can also perform the fleet creation and registration steps programmatically, as shown below.
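The following is a minimal boto3 sketch of the same fleet and registration steps; the fleet name, device name, role ARN, and output bucket are assumptions for illustration, while the IoT thing name matches the one used in this post:

import boto3

sm = boto3.client("sagemaker")

# Hypothetical fleet name, role ARN, and output location.
sm.create_device_fleet(
    DeviceFleetName="snowball-edge-fleet",
    RoleArn="arn:aws:iam::111122223333:role/GreengrassV2TokenExchangeRole",
    OutputConfig={"S3OutputLocation": "s3://feidemo/edge-manager-output/"},
)

# The IoT thing name must match the --thing-name used when installing
# AWS IoT Greengrass (MyGreengrassCore in this post).
sm.register_devices(
    DeviceFleetName="snowball-edge-fleet",
    Devices=[
        {
            "DeviceName": "sbe-device-01",  # hypothetical device name
            "IotThingName": "MyGreengrassCore",
        }
    ],
)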

Deploy ML models to Snowball Edge using AWS IoT Greengrass

In the previous sections, you unlocked and configured your Snowball Edge device. The ML model is now compiled and optimized for performance on Snowball Edge. An Edge Manager package is created with the compiled model and the Snowball device is registered to a fleet. In this section, you look at the steps involved in deploying the ML model for inference to Snowball Edge with the latest AWS IoT Greengrass.

Components

AWS IoT Greengrass allows you to deploy to edge devices as a combination of components and associated artifacts. A component is defined by a JSON recipe document that contains its metadata and lifecycle: what to install, what to run, and when. The recipe also defines platform-specific behavior, such as which operating systems the component supports and which artifacts to use on each OS.
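The following is a minimal sketch of a recipe for an inference application component; the component name, artifact URI, and lifecycle commands are assumptions for illustration:

{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.example.ConnexInference",
  "ComponentVersion": "1.0.0",
  "ComponentDescription": "Runs TFLite inference against the Edge Manager agent.",
  "ComponentPublisher": "Example Corp",
  "Manifests": [
    {
      "Platform": { "os": "linux" },
      "Lifecycle": {
        "Install": "pip3 install grpcio numpy",
        "Run": "python3 {artifacts:path}/inference.py"
      },
      "Artifacts": [
        { "URI": "s3://feidemo/components/inference.py" }
      ]
    }
  ]
}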

Artifacts

Artifacts can be code files, models, or container images. For example, a component can be defined to install a pandas Python library and run a code file that will transform the data, or to install a TensorFlow library and run the model for inference. The following are example artifacts needed for an inference application deployment:

  • gRPC proto and Python stubs (this can be different based on your model and framework)
  • Python code to load the model and perform inference

These two items are uploaded to an S3 bucket.
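For example, the inference code can call the Edge Manager agent over its gRPC interface. The following is a heavily simplified Python sketch; it assumes stubs (agent_pb2, agent_pb2_grpc) generated from the agent's proto file, and the socket path, model name, and local model path are assumptions you should adapt to your setup:

import grpc
import numpy as np

# Stubs assumed to be generated from the Edge Manager agent proto file.
import agent_pb2
import agent_pb2_grpc

# The agent is assumed to listen on a local Unix socket; the path is
# configurable and may differ on your device.
channel = grpc.insecure_channel("unix:///tmp/aws.sagemaker.edgeAgentSocket")
agent = agent_pb2_grpc.AgentStub(channel)

# Load the compiled model that the model component unpacked locally
# (hypothetical path and name).
agent.LoadModel(agent_pb2.LoadModelRequest(
    url="/greengrass/v2/work/connex-model",
    name="connex-model",
))

# Build an input tensor matching the model's input layer from this post.
image = np.zeros((1, 224, 224, 3), dtype=np.float32)  # replace with real image data
request = agent_pb2.PredictRequest(
    name="connex-model",
    tensors=[agent_pb2.Tensor(
        tensor_metadata=agent_pb2.TensorMetadata(
            name=b"keras_layer_input",
            data_type=agent_pb2.FLOAT32,
            shape=[1, 224, 224, 3],
        ),
        byte_data=image.tobytes(),
    )],
)
response = agent.Predict(request)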

Deploy the components

The deployment needs the following components:

  • Edge Manager agent (available in the public components list)
  • Model
  • Application

Complete the following steps to deploy the components:

  1. On the AWS IoT console, under Greengrass, choose Components, and create the application component.
  2. Find the Edge Manager agent component in the public components list and deploy it.
  3. Deploy a model component created by Edge Manager, which is used as a dependency in the application component.
  4. Deploy the application component to the edge device by going to the list of AWS IoT Greengrass deployments and creating a new deployment.

If you have an existing deployment, you can revise it to add the application component.
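You can also create the deployment programmatically. The following is a minimal boto3 sketch; the target ARN and the application and model component names and versions are assumptions for illustration:

import boto3

gg = boto3.client("greengrassv2")

# Hypothetical thing ARN and component versions; the Edge Manager agent
# is deployed as a public component.
gg.create_deployment(
    targetArn="arn:aws:iot:us-west-2:111122223333:thing/MyGreengrassCore",
    deploymentName="connex-inference-deployment",
    components={
        "aws.greengrass.SageMakerEdgeManager": {"componentVersion": "1.0.0"},
        "com.example.ConnexModel": {"componentVersion": "1.0.0"},
        "com.example.ConnexInference": {"componentVersion": "1.0.0"},
    },
)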

Now you can test your component.

  1. In your prediction or inference code deployed with the application component, include logic that reads files from a local folder on the Snowball Edge device (for example, an incoming folder) and moves the predictions or processed files to a processed folder.
  2. Log in to the device to see if the predictions have been made.
  3. Set up the code to run on a loop, checking the incoming folder for new files, processing the files, and moving them to the processed folder, as in the sketch that follows.
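The following is a minimal sketch of such a polling loop; the folder paths and the predict_file function are hypothetical placeholders for your own paths and inference logic:

import shutil
import time
from pathlib import Path

INCOMING = Path("/data/incoming")    # hypothetical local folders
PROCESSED = Path("/data/processed")

def predict_file(path: Path) -> None:
    """Placeholder for your inference call (for example, via the Edge Manager agent)."""
    ...

while True:
    # Process any new images that have landed in the incoming folder.
    for image in INCOMING.glob("*.jpg"):
        predict_file(image)
        shutil.move(str(image), PROCESSED / image.name)  # mark as processed
    time.sleep(5)  # poll every few seconds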

The following screenshot is an example setup of files before deployment inside the Snowball Edge.

After deployment, all the test images have classes of interest and therefore are moved to the processed folder.

Clean up

To clean up everything or reimplement this solution from scratch, stop all the EC2 instances by invoking the TerminateInstances API against the EC2-compatible endpoint running on your Snowball Edge device. To return your Snowball Edge device, see Powering Off the Snowball Edge and Returning the Snowball Edge Device.
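For example, you can terminate the instances through the device's EC2-compatible endpoint with boto3. The following is a minimal sketch; the endpoint IP address and credentials are assumptions for your own device configuration:

import boto3

# The EC2-compatible endpoint is assumed to be the device's IP address
# on HTTPS port 8243; adjust for your device.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://192.168.1.100:8243",
    region_name="snow",
    aws_access_key_id="YOUR_DEVICE_ACCESS_KEY",
    aws_secret_access_key="YOUR_DEVICE_SECRET_KEY",
    verify=False,  # or the path to the device's certificate bundle
)

# Collect and terminate all instances running on the device.
instance_ids = [
    inst["InstanceId"]
    for r in ec2.describe_instances()["Reservations"]
    for inst in r["Instances"]
]
ec2.terminate_instances(InstanceIds=instance_ids)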

Conclusion

This post walked you through how to order a Snowball Edge device with an AMI of your choice. You then compiled a model for the edge using SageMaker, packaged that model using Edge Manager, and created and ran components with artifacts to perform ML inference on Snowball Edge using the latest AWS IoT Greengrass. With Edge Manager, you can deploy and update your ML models on a fleet of Snowball Edge devices, and monitor performance at the edge with saved input and prediction data on Amazon S3. You can also run these components as long-running AWS Lambda functions that can spin up a model and wait for data to do inference.

You can also combine several features of AWS IoT Greengrass to create an MQTT client and use a pub/sub model to invoke other services or microservices. The possibilities are endless.

By running ML inference on Snowball Edge with Edge Manager and AWS IoT Greengrass, you can optimize, secure, monitor, and maintain ML models on fleets of Snowball Edge devices. Thanks for reading and please do not hesitate to leave questions or comments in the comments section.

To learn more about AWS Snow Family, AWS IoT Greengrass, and Edge Manager, check out their product pages and documentation.


About the Authors

Raj Kadiyala is an AI/ML Tech Business Development Manager in AWS WWPS Partner Organization. Raj has over 12 years of experience in machine learning and likes to spend his free time exploring machine learning for practical everyday solutions and staying active in the great outdoors of Colorado.

Nida Beig is a Sr. Product Manager – Tech at Amazon Web Services where she works on the AWS Snow Family team. She is passionate about understanding customer needs, and using technology as a conductor of transformative thinking to deliver consumer products. Besides work, she enjoys traveling, hiking, and running.