AWS IoT Greengrass ML Inference Solution Accelerator
The AWS IoT Greengrass ML Inference solution accelerator demonstrates how to deploy and run machine learning models trained in Amazon SageMaker on a local edge device running AWS IoT Greengrass. The device performs inference locally on locally generated data and publishes the results to connected AWS IoT Greengrass-aware devices or to AWS.
This solution accelerator uses AWS IoT services to collect data from locally-connected devices and process that data using AWS IoT Greengrass ML Inference. The diagram below represents the architecture you can automatically deploy using the AWS CloudFormation template and accompanying source code and documentation.
This solution accelerator can address a range of use cases in which local data sources, such as a connected camera or sensor network, stream data to an AWS IoT Greengrass device, and local functions invoke inference services running on that device.
For example, an automated sorting and recycling facility operates a fleet of cameras that identify and classify waste. Each camera captures one image per second as materials are conveyed through the sorting process. The images are forwarded to a device running AWS IoT Greengrass, where inference is performed locally against each captured image and the appropriate action is taken according to rules defined by the facility.
The solution architecture is composed of three main parts. The data source, such as a connected camera or a network streaming protocol, provides raw data inputs. The AWS IoT Greengrass Core runs on local hardware that connects to the data source and to the cloud, and runs AWS Lambda functions. AWS services in the cloud, such as AWS IoT Core, Amazon SageMaker, Amazon S3, Amazon DynamoDB, and AWS Lambda, provide compute, storage, messaging, and model training and improvement.
This solution accelerator uses Lambda functions to acquire data from the source and pre-process it for inference. Inference is performed locally, using either a Lambda function or an AWS IoT Greengrass connector, to make predictions that can trigger local actions and transmit results to the cloud.
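The local pipeline described above can be sketched as a single handler that runs per captured frame. This is a minimal, hypothetical illustration: the function and topic names (`preprocess`, `RESULT_TOPIC`, the stubbed model) are assumptions rather than the accelerator's actual code, and on a real Greengrass Core the `publish` stand-in would be replaced with a call through the AWS IoT Greengrass SDK's IoT data client.

```python
import json

RESULT_TOPIC = "sorting/inference/results"  # assumed topic name

def preprocess(raw_bytes, width=224, height=224):
    """Pre-process raw camera data into the form the model expects.

    Placeholder: a real implementation would decode and resize the image.
    """
    return {"width": width, "height": height, "size": len(raw_bytes)}

def publish(topic, payload):
    """Stand-in for publishing to AWS IoT via the Greengrass SDK."""
    print(f"publish to {topic}: {json.dumps(payload)}")

def handler(event, model):
    """Invoked per captured frame: acquire -> pre-process -> infer -> publish."""
    features = preprocess(event["image_bytes"])
    label, confidence = model(features)       # local inference
    result = {"label": label, "confidence": confidence}
    publish(RESULT_TOPIC, result)             # transmit results to the cloud
    return result                             # available to trigger local actions
```

For example, `handler({"image_bytes": frame}, model=my_model)` would run one frame through the pipeline, where `my_model` is whatever locally deployed model callable returns a `(label, confidence)` pair.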
This solution accelerator is hardware agnostic and can be customized so that machine learning models can be deployed and run on almost any hardware running AWS IoT Greengrass. The AWS Partner Device Catalog provides a list of qualified devices that have been tested to run AWS IoT Greengrass and interoperate with AWS. You can also test your own hardware for AWS IoT Greengrass compatibility with AWS IoT Device Tester.
This solution accelerator complements the Extract, Transform, Load (ETL) with AWS IoT Greengrass solution accelerator and can be adapted to handle different data ingest and processing requirements, including the use of alternate machine learning models and visualization tools.