AWS Partner Network (APN) Blog

Tag: LeapMind


Using Fewer Resources to Run Deep Learning Inference on Intel FPGA Edge Devices

Inference is an important stage of machine learning pipelines: it delivers insights to end users from trained neural network models. These models are deployed to perform predictive tasks like image classification, object detection, and semantic segmentation. However, resource constraints such as limited compute, memory, and power can make implementing inference at scale on edge devices like IoT controllers and gateways challenging. Learn how to train a neural network model for image classification and convert it to an edge-optimized binary for Intel FPGA hardware.
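One common way to shrink a trained model before deploying it to a resource-constrained edge device is to quantize its weights to low-bit integers. The sketch below is illustrative only, assuming a simple symmetric linear quantization scheme; it is not the specific toolchain described in the post, and the function names are hypothetical.

```python
import numpy as np

def quantize(weights, bits=8):
    # Symmetric linear quantization: map float weights onto signed integers.
    # qmax is the largest representable magnitude at the chosen bit width.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for comparison against the originals.
    return q.astype(np.float32) * scale

# Illustrative weight tensor standing in for one trained layer.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)
# Each quantized weight occupies 1 byte instead of 4, cutting memory
# and bandwidth needs at a small accuracy cost bounded by the step size.
```

The rounding error per weight is at most half a quantization step (`s / 2`), which is why low-bit inference can retain most of the full-precision model's accuracy.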