File Download

There are no files associated with this item.

  • Find it @ UNIST provides direct access to the published full text of this article (UNISTARs only).
Related Researcher

Yang, Seungjoon
Signal Processing Lab.

Full metadata record

DC Field Value
dc.citation.endPage 106
dc.citation.startPage 95
dc.citation.title NEURAL NETWORKS
dc.citation.volume 134
dc.contributor.author Cho, Hyunjoong
dc.contributor.author Jang, Jinhyeok
dc.contributor.author Lee, Chanhyeok
dc.contributor.author Yang, Seungjoon
dc.date.accessioned 2023-12-21T16:16:21Z
dc.date.available 2023-12-21T16:16:21Z
dc.date.created 2021-02-23
dc.date.issued 2021-02
dc.description.abstract In this study, we present a neural network that consists of nodes with heterogeneous sensitivity. Each node in a network is assigned a variable that determines the sensitivity with which it learns to perform a given task. The network is trained via a constrained optimization that maximizes the sparsity of the sensitivity variables while ensuring optimal network performance. As a result, the network learns to perform a given task using only a few sensitive nodes. Insensitive nodes, which are nodes with zero sensitivity, can be removed from a trained network to obtain a computationally efficient network. Removing zero-sensitivity nodes has no effect on the performance of the network because the network has already been trained to perform the task without them. The regularization parameter used to solve the optimization problem is found simultaneously during the training of the networks. To validate our approach, we designed networks with computationally efficient architectures for various tasks such as autoregression, object recognition, facial expression recognition, and object detection using various datasets. In our experiments, the networks designed by our proposed method provided the same or higher performance with far less computational complexity. (C) 2020 Elsevier Ltd. All rights reserved.
dc.identifier.bibliographicCitation NEURAL NETWORKS, v.134, pp.95 - 106
dc.identifier.doi 10.1016/j.neunet.2020.10.017
dc.identifier.issn 0893-6080
dc.identifier.scopusid 2-s2.0-85097647762
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/50074
dc.identifier.url https://www.sciencedirect.com/science/article/pii/S0893608020303804?via%3Dihub
dc.identifier.wosid 000603296800009
dc.language English
dc.publisher PERGAMON-ELSEVIER SCIENCE LTD
dc.title Efficient architecture for deep neural networks with heterogeneous sensitivity
dc.type Article
dc.description.isOpenAccess FALSE
dc.relation.journalWebOfScienceCategory Computer Science, Artificial Intelligence; Neurosciences
dc.relation.journalResearchArea Computer Science; Neurosciences & Neurology
dc.type.docType Article
dc.description.journalRegisteredClass scie
dc.description.journalRegisteredClass scopus
dc.subject.keywordAuthor Deep neural networks
dc.subject.keywordAuthor Efficient architecture
dc.subject.keywordAuthor Heterogeneous sensitivity
dc.subject.keywordAuthor Constrained optimization
dc.subject.keywordAuthor Simultaneous regularization parameter selection
dc.subject.keywordPlus L-CURVE
dc.subject.keywordPlus REGULARIZATION
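
As a rough illustration of the approach described in the abstract, the sketch below gates each hidden node with a learnable per-node sensitivity variable and adds an L1 sparsity penalty so that insensitive nodes can be pruned after training. This is a minimal, assumption-laden sketch and not the authors' code: the fixed penalty weight lam stands in for the paper's constrained optimization with simultaneous regularization parameter selection, and the model, toy data, and pruning threshold are all illustrative.

# Minimal sketch (not the authors' code): per-node sensitivity gates on a
# hidden layer, trained with an L1 sparsity penalty so insensitive nodes
# can be pruned afterwards. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class GatedMLP(nn.Module):
    def __init__(self, in_dim=16, hidden=64, out_dim=1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        # One sensitivity variable per hidden node (heterogeneous
        # sensitivity); initialization to ones is an assumption.
        self.sensitivity = nn.Parameter(torch.ones(hidden))

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        return self.fc2(h * self.sensitivity)  # scale each node by its gate

model = GatedMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
lam = 1e-3  # sparsity weight; the paper selects this during training

x = torch.randn(256, 16)
y = x.sum(dim=1, keepdim=True)  # toy regression target

for step in range(500):
    opt.zero_grad()
    task_loss = loss_fn(model(x), y)
    # The L1 penalty drives many sensitivities toward zero (sparsity).
    loss = task_loss + lam * model.sensitivity.abs().sum()
    loss.backward()
    opt.step()

# Nodes whose sensitivity is (near) zero can be removed with little or
# no effect on the trained network's output.
keep = model.sensitivity.abs() > 1e-3
print(f"kept {int(keep.sum())} of {keep.numel()} hidden nodes")

Dropping the pruned rows of fc1 (and the corresponding columns of fc2) then yields a smaller network whose outputs match the trained one, which is the computational saving the abstract claims.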

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.