Reducing the size and power of stochastic computing neural networks through training.

Abstract

It has been demonstrated that stochastic computing (SC) can reduce the size and power requirements of artificial neural network (ANN) circuits [1]. There are two prevailing SC neuron topologies: multiplexer (MUX)-based and approximate parallel counter (APC)-based [2]. Both topologies contain an activation module with a state parameter that affects the neuron's output function as well as its size and power requirements. This thesis explores altering this state parameter and the network training process to reduce the size and power of each neuron without incurring significant accuracy loss. As part of this exploration, a stochastic artificial neural network (SANN) is created in Verilog and implemented on a Field Programmable Gate Array (FPGA). Additionally, a SANN simulator is built in MATLAB to assist in rapid prototyping. Both simulation and hardware results demonstrate that the size and power consumed by SANNs can be reduced without significant accuracy loss.
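
Although the complete MUX- and APC-based neuron designs are the subject of the thesis itself, the SC primitives they build on are simple to sketch: in unipolar stochastic computing, multiplication of two bit streams reduces to a single AND gate, and scaled addition reduces to a 2:1 MUX driven by a random select stream. The Verilog below is a minimal illustrative sketch of these primitives only; the module and port names are hypothetical and are not taken from the thesis implementation.

// Minimal illustrative sketch of unipolar SC primitives.
// Module and port names are hypothetical.
module sc_mult (
    input  wire a,   // bit stream encoding probability Pa
    input  wire b,   // bit stream encoding probability Pb
    output wire y    // bit stream encoding Pa * Pb
);
    assign y = a & b;        // AND gate multiplies unipolar streams
endmodule

module sc_scaled_add (
    input  wire a,   // bit stream encoding Pa
    input  wire b,   // bit stream encoding Pb
    input  wire sel, // random select stream with P(sel = 1) = 0.5
    output wire y    // bit stream encoding (Pa + Pb) / 2
);
    assign y = sel ? a : b;  // 2:1 MUX performs scaled addition
endmodule

In both topologies the activation module is typically realized as a finite state machine, and its number of states is the state parameter whose tuning alongside training this work investigates.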

Keywords

Stochastic computing. Machine learning. Field programmable gate array (FPGA). Multilayer perceptron (MLP). Artificial neural network (ANN).
