Related Researcher

Lee, Jongeun (이종은)
Intelligent Computing and Codesign Lab.


Detailed Information


Automated Log-Scale Quantization for Low-Cost Deep Neural Networks

Author(s)
Oh, Sangyun; Sim, Hyeonuk; Lee, Sugil; Lee, Jongeun
Issued Date
2021-06-20
DOI
10.1109/cvpr46437.2021.00080
URI
https://scholarworks.unist.ac.kr/handle/201301/77268
Citation
IEEE Conference on Computer Vision and Pattern Recognition
Abstract
Quantization plays an important role in deep neural network (DNN) hardware. In particular, logarithmic quantization has multiple advantages for DNN hardware implementations, and its weakness of lower performance at high precision compared with linear quantization has recently been remedied by what we call selective two-word logarithmic quantization (STLQ). However, there is a lack of training methods designed for STLQ, or even for logarithmic quantization in general. In this paper we propose a novel STLQ-aware training method, which significantly outperforms the previous state-of-the-art training method for STLQ. Moreover, our results demonstrate that with our new training method, STLQ applied to the weight parameters of ResNet-18 can achieve the same level of performance as the state-of-the-art quantization method APoT at 3-bit precision. We also apply our method to various DNNs for image enhancement and semantic segmentation, showing competitive results.
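
The abstract refers to logarithmic (power-of-two) quantization and to selective two-word logarithmic quantization (STLQ). As a rough illustration of the underlying idea only, and not the paper's actual method or code, the NumPy sketch below rounds weights to the nearest signed power of two and, for a selected subset of weights, adds a second power-of-two term to reduce the residual error. The function names, bit-width handling, and selection rule are all hypothetical.

```python
import numpy as np

def log_quantize(w, num_bits=3):
    """Quantize each weight to the nearest signed power of two (one "word")."""
    sign = np.sign(w)
    mag = np.abs(w) + 1e-12                    # avoid log(0)
    exp = np.round(np.log2(mag))               # nearest power-of-two exponent
    min_exp = exp.max() - (2 ** num_bits - 2)  # clamp to a representable exponent range
    exp = np.clip(exp, min_exp, None)
    q = sign * np.exp2(exp)
    q[np.abs(w) < np.exp2(min_exp - 1)] = 0.0  # very small weights map to zero
    return q

def two_word_log_quantize(w, num_bits=3):
    """Represent a weight as the sum of two power-of-two terms."""
    first = log_quantize(w, num_bits)
    second = log_quantize(w - first, num_bits)  # quantize the residual
    return first + second

# Toy usage: spend the second word only on weights with large residual error,
# mimicking the "selective" idea (this selection criterion is made up).
w = np.random.randn(8).astype(np.float32)
q1 = log_quantize(w)
err = np.abs(w - q1)
selected = err > np.quantile(err, 0.75)         # hypothetical selection rule
q = np.where(selected, two_word_log_quantize(w), q1)
print(np.round(w, 3), np.round(q, 3), sep="\n")
```

The appeal of power-of-two weights for hardware is that multiplications reduce to bit shifts; the second word trades a little extra storage and logic for accuracy on the hard-to-quantize weights. How the paper actually trains networks under STLQ is described in the full text, not here.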
Publisher
IEEE
