AWS Greengrass ML Inference - Now Generally Available

Posted on: Apr 4, 2018

At re:Invent 2017, we introduced a preview of AWS Greengrass ML Inference, a new feature of AWS Greengrass that allows you to run machine learning inference on IoT edge devices, even without cloud connectivity. Today, we are excited to announce the general availability of AWS Greengrass ML Inference. Since we launched the preview at re:Invent, we have added feature enhancements to improve your experience with AWS Greengrass ML Inference, making it easier for you to deploy and run machine learning models on your IoT devices. In addition to Apache MXNet, AWS Greengrass ML Inference now includes a pre-built TensorFlow package, so you don't have to build or configure the ML framework for your device from scratch. These ML packages support Intel Atom, NVIDIA Jetson TX2, and Raspberry Pi devices. For more information on Greengrass ML Inference, please visit here.
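
To illustrate the pattern Greengrass ML Inference enables — a long-lived function on the device that loads a model once and answers each request locally, with no cloud round trip — here is a minimal Python sketch. The `EdgeModel` class, its threshold, and the event shape are stand-ins invented for illustration; in a real Greengrass group you would instead load an MXNet or TensorFlow model from the local ML resource path attached to your Lambda function.

```python
# Sketch of the edge-inference pattern: load the model once at startup,
# then run every prediction on-device, even without cloud connectivity.
# EdgeModel is a stand-in for a framework model (MXNet or TensorFlow)
# that a real Greengrass deployment would load from local storage.

class EdgeModel:
    """Stand-in for a model loaded from the device's local resource path."""

    def __init__(self, threshold):
        # In practice these parameters would come from a trained model file.
        self.threshold = threshold

    def predict(self, reading):
        return "anomaly" if reading > self.threshold else "normal"


# Loaded once, outside the handler, so repeated invocations reuse it.
MODEL = EdgeModel(threshold=0.8)


def handler(event, context=None):
    # Inference happens locally on the IoT device.
    reading = event["reading"]
    return {"reading": reading, "label": MODEL.predict(reading)}


print(handler({"reading": 0.95}))  # {'reading': 0.95, 'label': 'anomaly'}
print(handler({"reading": 0.30}))  # {'reading': 0.3, 'label': 'normal'}
```

The key design point this sketches is that model loading is a one-time startup cost, while each inference call stays fast and works offline; results can then be acted on locally or reported to the cloud when connectivity returns.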

AWS Greengrass ML Inference is available in the US East (Northern Virginia) and US West (Oregon) regions.

  • Read how Yanmar is using Greengrass ML for precision agriculture here.
  • Get started with AWS Greengrass here.