AWS Greengrass is software that lets you run local compute, messaging, data caching, sync, and ML inference capabilities on connected devices in a secure way.
Machine learning uses statistical algorithms that learn from existing data, a process called training, in order to make decisions about new data, a process called inference. During training, patterns and relationships in the data are identified to build a model for decision making. This model allows a system to make intelligent decisions about data it hasn't encountered before. Training ML models requires massive computing resources, so it is a natural fit for the cloud. Inference, by contrast, takes far less computing power and is often performed in real time as new data arrives, so running it locally on connected devices keeps response latency low.
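The training-then-inference split described above can be illustrated with a minimal sketch in pure Python. The data and function names here are illustrative, not part of any AWS API: a one-variable least-squares model is fit on existing data (training), then applied to a value it has never seen (inference).

```python
# Minimal train/inference sketch using one-variable least squares.
# All data and names are illustrative, not part of any AWS service.

def train(xs, ys):
    """Training: identify the linear pattern in existing data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return {"slope": slope, "intercept": intercept}  # the trained "model"

def infer(model, x):
    """Inference: apply the trained model to data it hasn't seen."""
    return model["slope"] * x + model["intercept"]

model = train([1, 2, 3, 4], [2, 4, 6, 8])  # training on existing data
print(infer(model, 10))                    # inference on a new input -> 20.0
```

In practice the training step runs in the cloud on large datasets, while only the compact trained model (here, two numbers) needs to live on the device for inference.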
AWS Greengrass ML Inference gives you the best of both worlds. You use ML models that are built and trained in the cloud, and you deploy and run ML inference locally on connected devices. For example, you can build a predictive model in Amazon SageMaker for scene detection analysis, then run it locally on a Greengrass-enabled security camera, where there is no cloud connectivity, to detect an incoming visitor and send an alert.
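A Lambda function running on the Greengrass core might perform the local inference step along these lines. This is a hedged sketch, not the Greengrass API: `MODEL_PATH`, `load_model`, and `detect_visitor` are hypothetical stand-ins for the framework-specific loading and prediction calls (e.g., an MXNet or TensorFlow model deployed to the device as a local ML resource).

```python
# Hypothetical sketch of local inference on a Greengrass core device.
# MODEL_PATH, load_model, and detect_visitor are illustrative stand-ins
# for framework-specific calls; they are not part of any Greengrass SDK.
import json

MODEL_PATH = "/greengrass-machine-learning/model"  # assumed local resource path

def load_model(path):
    """Stand-in for loading a model trained in the cloud (e.g., SageMaker)."""
    # A real handler would load the deployed model artifact here.
    return {"path": path, "threshold": 0.8}

def detect_visitor(model, frame_score):
    """Stand-in for running scene detection on one camera frame."""
    return frame_score >= model["threshold"]

def handler(event, context=None):
    """Invoked per camera frame; raises an alert only on detection."""
    model = load_model(MODEL_PATH)
    if detect_visitor(model, event["frame_score"]):
        # When connectivity exists, the alert would be published to an
        # MQTT topic via the Greengrass SDK; here we simply return it.
        return {"alert": "visitor detected", "score": event["frame_score"]}
    return {"alert": None}

print(json.dumps(handler({"frame_score": 0.93})))
```

The key point the sketch shows is that the decision is made entirely on the device: the cloud is needed to train and deploy the model, not to run each inference.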
Easily run ML Inference on Connected Devices
Deploy Models to Your Connected Device with a Few Clicks
Accelerate Inference Performance with GPUs
How It Works
Retail and Hospitality
Predictive Industrial Maintenance
Yanmar uses AWS Greengrass ML Inference as part of its IoT precision agriculture solution, which increases the intelligence of greenhouse operations by automatically detecting and recognizing the main growth stages of vegetables.
AWS Greengrass ML Inference-enabled IoT devices allow DFDS to predict and optimize ship propulsion, ultimately reducing fuel consumption across its entire fleet.