Exploiting Area-Speed-Power Tradeoff of FPGA Designs on Multilayer Perceptron (MLP) Neural Network

Date

2020-12-21

Abstract

This dissertation presents four Field Programmable Gate Array (FPGA) design architectures for handwritten digit recognition, aiming to improve the hardware efficiency of the neuromorphic processor in terms of resources, power, and speed. Multipliers form the basis of each processor design. Additional methods are explored to compare hardware efficiency when floating-point adders and multipliers are implemented as register-transfer level (RTL) designs. These implementations are then instantiated in each processor design, and the results are compared against the IP-based demonstrations using Xilinx Vivado. Experimental results show that the 196-MUL design architecture achieves the highest speed but consumes a large amount of power and FPGA resources, including look-up tables (LUTs), flip-flops (FFs), and digital signal processing elements (DSPs). In contrast, the 28-MUL design architecture uses the fewest LUTs, FFs, and DSPs and dissipates the least power; however, its latency is more than 3× that of the 196-multiplier structure. The dissertation concludes that the proposed works offer different levels of hardware efficiency corresponding to different design specifications. The proposed designs allow the user to choose which aspect (area, power, or speed) is most important and to specialize the system to their needs.
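The area-speed tradeoff described above can be illustrated with a minimal first-order sketch. This is not the dissertation's model: it simply assumes a fully connected layer whose dot products are computed by reusing a fixed pool of parallel multipliers, so a neuron with N inputs needs roughly ceil(N / M) multiply-accumulate cycles when M multipliers are available. The 784-input layer (a 28×28 MNIST image) and the multiplier counts 196 and 28 are taken from the design names in the abstract; real latency also depends on pipelining, memory access, and clock frequency, which this sketch ignores.

```python
import math

def mac_cycles(n_inputs: int, n_mults: int) -> int:
    """First-order cycle count to accumulate one neuron's dot product
    when n_mults multipliers operate in parallel (ignores pipeline fill,
    memory stalls, and accumulation-tree depth)."""
    return math.ceil(n_inputs / n_mults)

# Hypothetical 784-input layer (28x28 image), comparing the two extremes:
for m in (196, 28):
    print(f"{m:>3} multipliers -> {mac_cycles(784, m):>2} cycles per neuron")
```

Under this crude model, shrinking the multiplier pool from 196 to 28 multiplies per-neuron latency by 7 while cutting multiplier area by the same factor; the smaller (roughly 3×) latency gap reported in the abstract suggests the real designs recover some of that loss through other architectural optimizations.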

Keywords

FPGA, MLP, NN