FPGA-Accelerated Deep-Learning Inference with Binarized Neural Networks

Latest Version: 1.2


Product Overview

Image classification of the CIFAR-10 dataset using the CNV neural network. Based on Xilinx's public proof-of-concept implementation of a reduced-precision, Binarized Neural Network (BNN) in FPGA, MLE developed this demo to showcase the performance benefits of deep-learning inference running on AWS F1. The starting point for this demo was BNN-PYNQ from Xilinx [https://github.com/Xilinx/BNN-PYNQ], which uses the FINN framework [https://arxiv.org/abs/1612.07119] by Y. Umuroglu et al. Several changes were applied while porting to AWS F1: modifications to the BNN-PYNQ software library, including its Python layer, and design-flow adjustments to support the AWS F1 SDAccel workflow.
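To illustrate why binarized networks map so well onto FPGA fabric, here is a minimal, illustrative sketch (not taken from the MLE or Xilinx code) of the core BNN trick described in the FINN paper: with weights and activations constrained to {-1, +1}, a dot product reduces to a bitwise XNOR followed by a popcount, which an FPGA implements with LUTs instead of multipliers.

```python
def binarize(x):
    """Map a real value to the BNN domain {-1, +1} via sign()."""
    return 1 if x >= 0 else -1

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of two {-1,+1} vectors encoded as bits (1 -> +1, 0 -> -1).

    XNOR is 1 where the bits match; each match contributes +1 to the dot
    product and each mismatch -1, so dot = 2*popcount(xnor) - n.
    """
    n = len(a_bits)
    matches = sum(1 for a, w in zip(a_bits, w_bits) if a == w)  # popcount of XNOR
    return 2 * matches - n

# Check against the ordinary +/-1 dot product on example values.
acts = [0.3, -1.2, 0.0, -0.5]
wts  = [-0.7, -0.1, 0.9, 0.4]
a_bin = [binarize(x) for x in acts]            # [+1, -1, +1, -1]
w_bin = [binarize(x) for x in wts]             # [-1, -1, +1, +1]
a_bits = [1 if v > 0 else 0 for v in a_bin]
w_bits = [1 if v > 0 else 0 for v in w_bin]

ref = sum(a * w for a, w in zip(a_bin, w_bin))
assert xnor_popcount_dot(a_bits, w_bits) == ref
```

In hardware, the bit vectors are packed into wide words so that one XNOR gate array plus a popcount tree evaluates an entire row of the weight matrix per clock cycle, which is the source of the throughput gains this demo showcases.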

This demo is intended for researchers and developers interested in FPGA-based acceleration in general, and in FPGA-accelerated reduced-precision neural-network inference in particular.

For further information, or for AWS F1 design services, please contact us at https://www.missinglinkelectronics.com/awsf1.



Operating System

Linux/Unix, Ubuntu 16.04

Fulfillment Methods

  • Amazon Machine Image
