Related Researcher

Lee, Jongeun (이종은)
Intelligent Computing and Codesign Lab.

Detailed Information

Cost-effective stochastic MAC circuits for deep neural networks

Author(s)
Sim, Hyeonuk; Lee, Jongeun
Issued Date
2019-09
DOI
10.1016/j.neunet.2019.04.017
URI
https://scholarworks.unist.ac.kr/handle/201301/27789
Fulltext
https://www.sciencedirect.com/science/article/pii/S0893608019301236?via%3Dihub
Citation
NEURAL NETWORKS, v.117, pp.152 - 162
Abstract
Stochastic computing (SC) is a promising computing paradigm that can help address both the uncertainties of future process technology and the challenges of efficient hardware realization for deep neural networks (DNNs). However, the imprecision and long latency of SC have rendered previous SC-based DNN architectures less competitive than optimized fixed-point digital implementations, unless inference accuracy is significantly sacrificed. In this paper we propose a new SC-MAC (multiply-and-accumulate) algorithm, a key building block for SC-based DNNs, that is orders of magnitude more efficient and accurate than previous SC-MACs. We also show how our new SC-MAC can be extended to a vector version and used to accelerate both convolution and fully-connected layers of convolutional neural networks (CNNs) using the same hardware. Our experimental results using CNNs designed for the MNIST and CIFAR-10 datasets demonstrate that not only are our SC-based CNNs more accurate and 40∼490× more energy-efficient for convolution layers than conventional SC-based ones, but they can also achieve lower area–delay product and lower energy than precision-optimized fixed-point implementations without sacrificing accuracy. We also demonstrate the feasibility of our SC-based CNNs through FPGA prototypes.
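For context on the abstract: conventional stochastic computing encodes a value p ∈ [0, 1] as a random bitstream whose fraction of 1s equals p, so that multiplication reduces to a bitwise AND of two independent streams. The following is a minimal Python sketch of that conventional SC baseline only, not of the paper's proposed SC-MAC; the function names are illustrative, and it shows why SC multipliers are cheap but suffer the long-latency/imprecision trade-off the paper addresses.

```python
import random

def to_stream(p, length, rng):
    """Encode a value p in [0, 1] as a stochastic bitstream:
    each bit is independently 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(stream_a, stream_b):
    """Stochastic multiply: bitwise AND of two streams.
    For independent streams, P(a AND b) = P(a) * P(b)."""
    return [a & b for a, b in zip(stream_a, stream_b)]

def decode(stream):
    """Recover the encoded value as the fraction of 1s."""
    return sum(stream) / len(stream)

rng = random.Random(42)
n = 4096                      # stream length: accuracy improves only as ~1/sqrt(n)
a, b = 0.5, 0.6
prod = decode(sc_multiply(to_stream(a, n, rng), to_stream(b, n, rng)))
# prod approximates a * b = 0.30, with sampling noise that shrinks as n grows
```

The cost structure this illustrates is the paper's motivation: the multiplier itself is a single AND gate, but a long stream (large n) is needed for acceptable precision, which is exactly the latency/accuracy bottleneck the proposed SC-MAC improves on.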
Publisher
Elsevier Ltd
ISSN
0893-6080
Keyword (Author)
Convolutional neural network; Hardware acceleration; Low-discrepancy code; Stochastic computing; Stochastic number generator; Variable latency
Keyword
Convolution; Cost effectiveness; Energy efficiency; Neural networks; Number theory; Stochastic systems; Timing circuits; Convolutional neural network; Hardware acceleration; Low-discrepancy code; Stochastic computing; Stochastic numbers; Variable latencies; Deep neural networks


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.