AWS Partner Network (APN) Blog

Tag: Deep Learning


Using Fewer Resources to Run Deep Learning Inference on Intel FPGA Edge Devices

Inference is an important stage of machine learning pipelines, delivering insights from trained neural network models to end users. These models are deployed to perform predictive tasks such as image classification, object detection, and semantic segmentation. However, resource constraints can make it challenging to implement inference at scale on edge devices such as IoT controllers and gateways. Learn how to train a neural network model for image classification and convert it to an edge-optimized binary for Intel FPGA hardware.
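As a rough sketch of what that train-then-convert flow can look like, the snippet below converts a trained image-classification model into an optimized intermediate representation and runs a test inference on an edge target. The toolkit choice (Intel's OpenVINO), the CLI flags, the model file names, and the device string are all assumptions for illustration; the full post walks through the exact workflow for Intel FPGA hardware.

```python
# Hypothetical sketch: convert a trained TensorFlow image-classification model to
# OpenVINO Intermediate Representation (IR) and run inference on an edge device.
# Toolkit, paths, and device names are illustrative assumptions, not details
# taken from the post.
import subprocess
import numpy as np
from openvino.inference_engine import IECore

# 1. Convert the frozen model to IR with the Model Optimizer CLI that ships
#    with the OpenVINO toolkit (exact entry point varies by release).
subprocess.run(
    [
        "mo",
        "--input_model", "frozen_classifier.pb",  # hypothetical trained model
        "--input_shape", "[1,224,224,3]",
        "--data_type", "FP16",                     # smaller weights for edge targets
        "--output_dir", "ir_model",
    ],
    check=True,
)

# 2. Load the IR and compile it for the target device. On Intel FPGA edge
#    hardware the device string is typically "HETERO:FPGA,CPU" in releases that
#    include the FPGA plugin; "CPU" is used here so the sketch runs anywhere.
ie = IECore()
net = ie.read_network(model="ir_model/frozen_classifier.xml",
                      weights="ir_model/frozen_classifier.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# 3. Run a single inference on a placeholder image and report the top class.
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # NCHW dummy input
result = exec_net.infer({input_name: image})[output_name]
print("Predicted class index:", int(np.argmax(result)))
```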

Read More

How Deep Neural Networks Built on AWS Can Help Predict and Prevent Security Threats

Deep learning is inspired by the human brain: once a brain learns to identify an object, recognizing it becomes second nature. Similarly, as Deep Instinct’s artificial neural network learns to detect more types of cyber threats, its predictions become instinctive. As a result, malware, both known and new, can be predicted and prevented in zero time. Deep Instinct’s predictive threat prevention platform can be applied against known or unknown threats, whether the attack is file-based or fileless.

Read More