File Download

There are no files associated with this item.

  • Find it @ UNIST can give you direct access to the published full text of this article. (UNISTARs only)
Related Researcher

이종은

Lee, Jongeun
Intelligent Computing and Codesign Lab.

Detailed Information

Full metadata record

DC Field Value Language
dc.citation.endPage 162 -
dc.citation.startPage 152 -
dc.citation.title NEURAL NETWORKS -
dc.citation.volume 117 -
dc.contributor.author Sim, Hyeonuk -
dc.contributor.author Lee, Jongeun -
dc.date.accessioned 2023-12-21T18:46:21Z -
dc.date.available 2023-12-21T18:46:21Z -
dc.date.created 2019-06-17 -
dc.date.issued 2019-09 -
dc.description.abstract Stochastic computing (SC) is a promising computing paradigm that can help address both the uncertainties of future process technology and the challenges of efficient hardware realization of deep neural networks (DNNs). However, the imprecision and long latency of SC have rendered previous SC-based DNN architectures less competitive against optimized fixed-point digital implementations, unless inference accuracy is significantly sacrificed. In this paper we propose a new SC-MAC (multiply-and-accumulate) algorithm, a key building block for SC-based DNNs, that is orders of magnitude more efficient and accurate than previous SC-MACs. We also show how our new SC-MAC can be extended to a vector version and used to accelerate both convolution and fully-connected layers of convolutional neural networks (CNNs) using the same hardware. Our experimental results using CNNs designed for the MNIST and CIFAR-10 datasets demonstrate that our SC-based CNNs are not only more accurate and 40∼490× more energy-efficient for convolution layers than conventional SC-based ones, but also achieve a lower area–delay product and lower energy than precision-optimized fixed-point implementations without sacrificing accuracy. We also demonstrate the feasibility of our SC-based CNNs through FPGA prototypes. (A minimal sketch of conventional stochastic multiply-and-accumulate appears after this record.) -
dc.identifier.bibliographicCitation NEURAL NETWORKS, v.117, pp.152 - 162 -
dc.identifier.doi 10.1016/j.neunet.2019.04.017 -
dc.identifier.issn 0893-6080 -
dc.identifier.scopusid 2-s2.0-85066415220 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/27789 -
dc.identifier.url https://www.sciencedirect.com/science/article/pii/S0893608019301236?via%3Dihub -
dc.identifier.wosid 000477943300009 -
dc.language English -
dc.publisher Elsevier Ltd -
dc.title Cost-effective stochastic MAC circuits for deep neural networks -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Artificial Intelligence; Neurosciences -
dc.relation.journalResearchArea Computer Science; Neurosciences & Neurology -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Convolutional neural network -
dc.subject.keywordAuthor Hardware acceleration -
dc.subject.keywordAuthor Low-discrepancy code -
dc.subject.keywordAuthor Stochastic computing -
dc.subject.keywordAuthor Stochastic number generator -
dc.subject.keywordAuthor Variable latency -
dc.subject.keywordPlus Convolution -
dc.subject.keywordPlus Cost effectiveness -
dc.subject.keywordPlus Energy efficiency -
dc.subject.keywordPlus Neural networks -
dc.subject.keywordPlus Number theory -
dc.subject.keywordPlus Stochastic systems -
dc.subject.keywordPlus Timing circuits -
dc.subject.keywordPlus Convolutional neural network -
dc.subject.keywordPlus Hardware acceleration -
dc.subject.keywordPlus Low-discrepancy code -
dc.subject.keywordPlus Stochastic computing -
dc.subject.keywordPlus Stochastic numbers -
dc.subject.keywordPlus Variable latencies -
dc.subject.keywordPlus Deep neural networks -
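
The abstract above turns on the basic arithmetic of stochastic computing: a value in [0, 1] is encoded as the fraction of 1s in a random bitstream, so a single AND gate multiplies two independent streams. The sketch below is a minimal Python illustration of this conventional unipolar multiply-and-accumulate, not the authors' proposed SC-MAC; the stream length n and the helper names (to_stream, sc_multiply, decode) are assumptions chosen for illustration. It also makes concrete the latency/precision trade-off the abstract mentions, since accuracy improves only as the stream grows longer.

```python
import random

def to_stream(x, n, rng):
    """Encode a value x in [0, 1] as a unipolar stochastic bitstream of
    length n: each bit is 1 with probability x, so the mean of the
    stream approximates x."""
    return [1 if rng.random() < x else 0 for _ in range(n)]

def sc_multiply(a_bits, b_bits):
    """Multiply two independent unipolar streams with a bitwise AND,
    since P(a AND b) = P(a) * P(b) for independent streams."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def decode(bits):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 4096  # stream length; stochastic error shrinks roughly as 1/sqrt(n)

# A toy MAC: the sum of products 0.5*0.5 + 0.25*0.75 = 0.4375
pairs = [(0.5, 0.5), (0.25, 0.75)]
acc = sum(decode(sc_multiply(to_stream(a, n, rng), to_stream(b, n, rng)))
          for a, b in pairs)
print(acc)  # close to 0.4375, converging as n grows
```

Running the sketch prints a value near 0.4375; halving n multiplies the typical stochastic error by about √2, which is why conventional SC needs long streams and why a faster, more accurate SC-MAC of the kind the paper proposes matters.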

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.