Related Researcher

Lee, Jongeun (이종은)
Intelligent Computing and Codesign Lab.

Detailed Information

Successive log quantization for cost-efficient neural networks using stochastic computing

Author(s)
Lee, Sugil; Sim, Hyeonuk; Choi, Jooyeon; Lee, Jongeun
Issued Date
2019-06-02
DOI
10.1145/3316781.3317916
URI
https://scholarworks.unist.ac.kr/handle/201301/79705
Fulltext
https://dl.acm.org/citation.cfm?doid=3316781.3317916
Citation
Design Automation Conference
Abstract
Despite the multifaceted benefits of stochastic computing (SC) such as low cost, low power, and flexible precision, SC-based deep neural networks (DNNs) still suffer from the long-latency problem, especially those with high precision requirements. While log quantization can help, it has its own accuracy-saturation problem due to uneven precision distribution. In this paper, we propose successive log quantization (SLQ), which extends log quantization with significant improvements in precision and accuracy, and apply it to state-of-the-art SC-DNNs. SLQ reuses the existing datapath of log quantization and thus retains its advantages, such as simple multiplier hardware. Our experimental results demonstrate that SLQ can significantly extend both the accuracy and efficiency of SC-DNNs over state-of-the-art solutions, including linear-quantized and log-quantized SC-DNNs, achieving less than a 1–1.5%p accuracy drop for AlexNet, SqueezeNet, and VGG-S at a mere 4–5-bit weight resolution. © 2019 Copyright held by the owner/author(s).
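
For intuition, the following is a minimal, hypothetical sketch of the idea named in the abstract: plain log quantization maps a weight to a single signed power of two, while successive log quantization is assumed here to re-quantize the remaining residual error with additional power-of-two terms. The function names, exponent range, and two-term setting are illustrative assumptions, not the paper's exact formulation or hardware datapath.

```python
import numpy as np

def log_quantize(w, min_exp=-16):
    """Quantize a weight to the nearest signed power of two.

    Hypothetical helper: the exponent range (min_exp) is an
    illustrative assumption, not the paper's configuration.
    """
    if w == 0:
        return 0.0
    sign = np.sign(w)
    exp = np.clip(np.round(np.log2(abs(w))), min_exp, 0)
    return float(sign * 2.0 ** exp)

def successive_log_quantize(w, num_terms=2, min_exp=-16):
    """Approximate a weight as a sum of signed powers of two by
    successively log-quantizing the residual error (assumed SLQ idea)."""
    approx = 0.0
    for _ in range(num_terms):
        approx += log_quantize(w - approx, min_exp)
    return approx

# Example: a weight that a single power of two represents poorly.
w = 0.30
print(log_quantize(w))             # 0.25   (one power-of-two term)
print(successive_log_quantize(w))  # 0.3125 (0.25 + 0.0625), closer to 0.30
```

Because every term is a power of two, multiplication by such a weight reduces to shift-and-add, which is consistent with the abstract's claim that SLQ keeps the simple multiplier hardware of log quantization.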
Publisher
Institute of Electrical and Electronics Engineers
ISSN
0738-100X

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.