Related Researcher

Lee, Jongeun (이종은)
Intelligent Computing and Codesign Lab.


Full metadata record

DC Field Value Language
dc.citation.endPage 106108 -
dc.citation.startPage 106097 -
dc.citation.title IEEE ACCESS -
dc.citation.volume 8 -
dc.contributor.author Oh, Sangyun -
dc.contributor.author Kim, Hye-Jin S. -
dc.contributor.author Lee, Jongeun -
dc.contributor.author Kim, Junmo -
dc.date.accessioned 2023-12-21T17:36:35Z -
dc.date.available 2023-12-21T17:36:35Z -
dc.date.created 2020-06-10 -
dc.date.issued 2020-06 -
dc.description.abstract Lightweight neural networks that employ depthwise convolution have a significant computational advantage over those that use standard convolution because they involve fewer parameters; however, they also require more time, even with graphics processing units (GPUs). We propose a Repetition-Reduction Network (RRNet) in which the number of depthwise channels is large enough to reduce computation time while simultaneously being small enough to reduce GPU latency. RRNet also reduces power consumption and memory usage, not only in the encoder but also in the residual connections to the decoder. We apply RRNet to the problem of resource-constrained depth estimation, where it proves to be significantly more efficient than other methods in terms of energy consumption, memory usage, and computation. It has two key modules: the Repetition-Reduction (RR) block, which is a set of repeated lightweight convolutions that can be used for feature extraction in the encoder, and the Condensed Decoding Connection (CDC), which can replace the skip connection, delivering features to the decoder while significantly reducing the channel depth of the decoder layers. Experimental results on the KITTI dataset show that RRNet consumes less energy and less memory than conventional schemes, and that it is faster on a commercial mobile GPU without increasing the demand on hardware resources relative to the baseline network. Furthermore, RRNet outperforms state-of-the-art lightweight models such as MobileNets, PyDNet, DiCENet, DABNet, and EfficientNet. -
dc.identifier.bibliographicCitation IEEE ACCESS, v.8, pp.106097 - 106108 -
dc.identifier.doi 10.1109/ACCESS.2020.3000773 -
dc.identifier.issn 2169-3536 -
dc.identifier.scopusid 2-s2.0-85086713728 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/32354 -
dc.identifier.url https://ieeexplore.ieee.org/document/9110910 -
dc.identifier.wosid 000541112500003 -
dc.language English -
dc.publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC -
dc.title RRNet: Repetition-Reduction Network for Energy Efficient Depth Estimation -
dc.type Article -
dc.description.isOpenAccess TRUE -
dc.relation.journalWebOfScienceCategory Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications -
dc.relation.journalResearchArea Computer Science; Engineering; Telecommunications -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Computer vision -
dc.subject.keywordAuthor deep neural network -
dc.subject.keywordAuthor depth estimation -
dc.subject.keywordAuthor encoder-decoder network -
dc.subject.keywordAuthor lightweight neural network -
dc.subject.keywordAuthor machine learning -
dc.subject.keywordAuthor mobile graphical processing unit (GPU) -
dc.subject.keywordAuthor unsupervised learning -
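The abstract's opening claim — that depthwise convolution "involves fewer parameters" than standard convolution — follows from simple counting. A minimal sketch of that arithmetic, with illustrative channel and kernel sizes chosen here for the example (they are not taken from the paper):

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameter count of a standard k x k convolution:
    every output channel has its own c_in x k x k filter."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Parameter count of a depthwise-separable convolution:
    one k x k filter per input channel (depthwise),
    followed by a 1 x 1 pointwise convolution."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Illustrative layer: 64 -> 128 channels, 3 x 3 kernel.
std = standard_conv_params(64, 128, 3)        # 73728 parameters
dws = depthwise_separable_params(64, 128, 3)  # 8768 parameters
print(std, dws, round(std / dws, 1))          # roughly 8x fewer parameters
```

This savings is why such layers dominate lightweight networks; the abstract's second point — that they can nonetheless be slower on GPUs — reflects memory-bound execution rather than parameter or FLOP counts, which is the latency trade-off RRNet's RR block is designed to balance.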


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.