Amazon SageMaker Edge Manager
Manage and monitor ML models efficiently across fleets of smart devices
An increasing number of applications, such as industrial automation, autonomous vehicles, and automated checkouts, require machine learning (ML) models that run on devices at the edge so predictions can be made in real time as new data becomes available. Amazon SageMaker Neo is the easiest way to optimize ML models for edge devices, enabling you to train ML models once in the cloud and run them on any device. As devices proliferate, customers may have thousands of deployed models running across their fleets. Amazon SageMaker Edge Manager allows you to optimize, secure, monitor, and maintain ML models on fleets of smart cameras, robots, personal computers, and mobile devices.
Amazon SageMaker Edge Manager provides a software agent that runs on edge devices. The agent runs models optimized with SageMaker Neo automatically, so you don’t need the Neo runtime installed on your devices to take advantage of the model optimizations. The agent also collects prediction data and sends a sample of it to the cloud for monitoring, labeling, and retraining so you can keep models accurate over time. All data can be viewed in the SageMaker Edge Manager dashboard, which reports on the operation of deployed models. And because SageMaker Edge Manager lets you manage models separately from the rest of the application, you can update the model and the application independently, reducing costly downtime and service disruptions. SageMaker Edge Manager also cryptographically signs your models so you can verify that they were not tampered with as they move from the cloud to edge devices.
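As a concrete sketch, fleets and their devices are registered through the SageMaker API. The snippet below only assembles the request parameters for the boto3 `create_device_fleet` and `register_devices` calls; the fleet name, role ARN, S3 path, and device names are placeholders, and the IoT-thing naming convention is an assumption for illustration.

```python
# Sketch: registering an edge device fleet through the SageMaker API.
# All names, ARNs, and paths below are placeholders.

def device_fleet_request(fleet_name: str, role_arn: str, s3_output: str) -> dict:
    """Build the parameters for sagemaker.create_device_fleet()."""
    return {
        "DeviceFleetName": fleet_name,
        "RoleArn": role_arn,
        # Sampled prediction data captured by the agent lands here.
        "OutputConfig": {"S3OutputLocation": s3_output},
    }

def register_devices_request(fleet_name: str, device_names: list) -> dict:
    """Build the parameters for sagemaker.register_devices()."""
    return {
        "DeviceFleetName": fleet_name,
        "Devices": [
            # Each device is paired with the AWS IoT thing that authenticates
            # it (the "-thing" suffix is an illustrative convention).
            {"DeviceName": name, "IotThingName": f"{name}-thing"}
            for name in device_names
        ],
    }

# With AWS credentials configured, the calls would look like:
#   sm = boto3.client("sagemaker")
#   sm.create_device_fleet(**device_fleet_request(
#       "demo-fleet", role_arn, "s3://bucket/capture/"))
#   sm.register_devices(**register_devices_request(
#       "demo-fleet", ["cam-001", "cam-002"]))
```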
Model management across fleets of edge devices
Optimize ML models for a wide range of devices
Amazon SageMaker Edge Manager automatically optimizes ML models for deployment on a wide variety of edge devices, including devices powered by CPUs, GPUs, and embedded ML accelerators. It compiles your trained model into an executable, discovering and applying performance optimizations that can make your model run up to 25x faster on the target hardware. SageMaker Edge Manager lets you optimize and package trained models built with frameworks such as DarkNet, Keras, MXNet, PyTorch, TensorFlow, TensorFlow Lite, ONNX, and XGBoost for inference on Android, iOS, Linux, and Windows-based machines.
Easy integration with device applications
Amazon SageMaker Edge Manager supports gRPC, an open-source remote procedure call framework, which allows you to integrate SageMaker Edge Manager with your existing edge applications through APIs in common programming languages such as Android Java, C# / .NET, Dart, Go, Java, Kotlin/JVM, Node.js, Objective-C, PHP, Python, Ruby, and Web.
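A minimal Python sketch of what such an integration might look like is below. The socket path, service, and message names are illustrative placeholders, not the agent's published interface; the real classes would come from compiling the agent's shipped `.proto` file with grpcio-tools.

```python
# Sketch of wiring an application to an on-device agent over gRPC.
# Everything named here is a placeholder, not the agent's actual API.

AGENT_TARGET = "unix:///tmp/edge_agent.sock"  # placeholder socket path

def predict_payload(model_name: str, tensor_bytes: bytes) -> dict:
    """Assemble the fields a prediction request would carry (illustrative)."""
    return {"model_name": model_name, "input": tensor_bytes}

# With stubs generated from the agent's proto, the call would look roughly like:
#   import grpc
#   channel = grpc.insecure_channel(AGENT_TARGET)
#   stub = agent_pb2_grpc.AgentStub(channel)           # hypothetical stub
#   response = stub.Predict(agent_pb2.PredictRequest(  # hypothetical message
#       **predict_payload("obstacle-detector", frame_bytes)))
```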
Continuous model monitoring
Amazon SageMaker Edge Manager collects data from edge devices and sends a sample to the cloud, where it is analyzed and visualized in SageMaker. If quality declines are detected, you can quickly spot them in the dashboard and also configure alerts through Amazon CloudWatch. Declines in model quality, or model drift, can be caused by differences between the data used to make predictions and the data used to train the model, or by changes in the real world. For example, an object detection model that was not trained on images of snowy conditions does not work well when it encounters them in the real world.
Amazon SageMaker Edge Manager provides a dashboard in the console so you can understand the performance of models running on each device across your fleet. The dashboard helps you visually assess overall fleet health and identify problematic models. When a problem is identified, you can collect model data, relabel it, retrain the model, and redeploy it.
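As a sketch of the CloudWatch alerting step, the snippet below assembles parameters for the standard boto3 `put_metric_alarm` call. The namespace, metric name, and threshold are placeholders for whatever drift metric your own monitoring pipeline publishes.

```python
# Sketch: alerting on model drift with Amazon CloudWatch. The namespace,
# metric name, and threshold are illustrative placeholders.

def drift_alarm_request(fleet_name: str, sns_topic_arn: str) -> dict:
    """Build the parameters for cloudwatch.put_metric_alarm()."""
    return {
        "AlarmName": f"{fleet_name}-model-drift",
        "Namespace": "EdgeFleet/Monitoring",   # placeholder namespace
        "MetricName": "ModelDriftScore",       # placeholder metric
        "Dimensions": [{"Name": "DeviceFleetName", "Value": fleet_name}],
        "Statistic": "Average",
        "Period": 300,               # evaluate 5-minute averages
        "EvaluationPeriods": 3,      # alarm after 15 minutes above threshold
        "Threshold": 0.2,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],  # notify an SNS topic
    }

# With AWS credentials configured, the call would be:
#   boto3.client("cloudwatch").put_metric_alarm(
#       **drift_alarm_request("demo-fleet", topic_arn))
```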
Serve multiple models on a device – coming soon
For ML applications that require hosting and running multiple models concurrently on a device, Amazon SageMaker Edge Manager will soon allow you to write simple application logic that sends one or more queries (e.g., load or unload a model, run inference) independently to multiple models and rebalances hardware resource utilization when you add or update a model. For example, a self-navigating robot needs an object detection model to detect obstacles, a classification model to recognize them, and a tree-based model to determine its next action.
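The kind of application logic described above can be sketched in plain Python. This is not the SageMaker API: the class, the memory accounting, and the first-in-first-out eviction policy are illustrative assumptions about how an application might juggle several models on one device.

```python
# Illustrative sketch: route queries to several models on one device and
# load/unload them against a memory budget. Not a SageMaker API.

class EdgeModelRouter:
    def __init__(self, memory_budget_mb: int):
        self.memory_budget_mb = memory_budget_mb
        self.loaded = {}  # model name -> (size_mb, predict_fn)

    def load(self, name, size_mb, predict_fn):
        # Evict the oldest-loaded models until the new one fits.
        while self._used() + size_mb > self.memory_budget_mb and self.loaded:
            self.unload(next(iter(self.loaded)))
        self.loaded[name] = (size_mb, predict_fn)

    def unload(self, name):
        self.loaded.pop(name, None)

    def _used(self):
        return sum(size for size, _ in self.loaded.values())

    def predict(self, name, x):
        if name not in self.loaded:
            raise KeyError(f"model {name!r} is not loaded")
        _, fn = self.loaded[name]
        return fn(x)

# The robot example from above: three models behind one router.
router = EdgeModelRouter(memory_budget_mb=100)
router.load("detector", 60, lambda frame: "obstacle ahead")
router.load("planner", 30, lambda state: "turn left")
```

First-in-first-out eviction is the simplest possible policy; a production scheduler would weigh model priority and per-accelerator placement when rebalancing.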
Model registry and model lineage – coming soon
Soon you will be able to automate the build-train-deploy workflow from cloud to edge devices in Amazon SageMaker Edge Manager, and trace the lifecycle of each model.
Lenovo™, the #1 global PC maker, recently incorporated Amazon SageMaker into its latest predictive maintenance offering.
"The new SageMaker Edge Manager will help eliminate the manual effort required to optimize, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms. SageMaker Edge Manager allows us to automatically sample data at the edge, send it securely to the cloud, and monitor the quality of each model on each device continuously after deployment. This enables us to remotely monitor, improve, and update the models on our edge devices around the world and at the same time saves us and our customers' time and costs."
Igor Bergman, Lenovo Vice President, Cloud & Software of PCs and Smart Devices.
Basler AG is a leading manufacturer of high-quality digital cameras and accessories for industry, medicine, transportation and a variety of other markets.
“Basler AG delivers intelligent computer vision solutions in a variety of industries, including manufacturing, medical, and retail applications. We are excited to extend our software offering with new features made possible by Amazon SageMaker Edge Manager. To ensure our machine learning solutions are performant and reliable, we need a scalable edge to cloud MLOps tool that allows us to continuously monitor, maintain, and improve machine learning models on edge devices. SageMaker Edge Manager allows us to automatically sample data at the edge, send it securely to the cloud, and monitor the quality of each model on each device continuously after deployment. This enables us to remotely monitor, improve, and update the models on our edge devices around the world and at the same time saves us and our customers' time and costs."
Mark Hebbel, Head of Software Solutions at Basler.