
FPGA-Accelerated Deep-Learning Inference with Binarized Neural Networks

By: Missing Link Electronics, Inc. Latest Version: 1.2

Product Overview

This demo performs image classification on the CIFAR-10 dataset using the CNV neural network. Based on a public Xilinx proof-of-concept implementation of a reduced-precision Binarized Neural Network (BNN) on FPGA, MLE developed this demo to showcase the performance benefits of deep-learning inference when running on AWS F1. The starting point was PYNQ-BNN from Xilinx [], which builds on the FINN framework [] by Y. Umuroglu et al. Several changes were made while porting to AWS F1: modifications to the PYNQ-BNN software library, including its Python code, and design-flow adjustments to support the AWS F1 SDAccel workflow.
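To illustrate the core idea behind binarized inference (this is a hedged sketch, not the MLE/Xilinx implementation): with weights and activations constrained to {+1, -1}, a dot product reduces to XNOR plus popcount, which is what makes BNN layers map so efficiently onto FPGA logic. The function names below are illustrative only.

```python
import numpy as np

def binarize(x):
    """Map real values to {+1, -1} by sign (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dot(a_bin, w_bin):
    """Dot product of two {+1, -1} vectors via the XNOR-popcount trick.

    On bit-packed operands this is: matches = popcount(~(a ^ w)),
    then dot = 2 * matches - n. Here we emulate it on unpacked
    +/-1 arrays; counting equal positions is the popcount of XNOR.
    """
    n = a_bin.size
    matches = int(np.count_nonzero(a_bin == w_bin))
    return 2 * matches - n

# Sanity check: XNOR-popcount agrees with an ordinary dot product.
rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(8))
w = binarize(rng.standard_normal(8))
assert bnn_dot(a, w) == int(a.astype(np.int64) @ w.astype(np.int64))
```

In hardware, the XNOR and popcount stages cost a handful of LUTs per bit instead of a DSP multiplier, which is the performance argument the demo makes on F1.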

This demo is intended for researchers and developers interested in FPGA-based acceleration in general, and in accelerated reduced-precision neural-network inference on FPGAs in particular.

For further information or for Amazon AWS F1 design services, please contact us at



Operating System

Linux/Unix, Ubuntu 16.04

Delivery Methods

  • Amazon Machine Image
