Related Researcher

Yang, Seungjoon (양승준)
Signal Processing Lab.

Detailed Information

Deep neural networks with a set of node-wise varying activation functions

Author(s)
Jang, Jinhyeok; Cho, Hyunjoong; Kim, Jaehong; Lee, Jaeyeon; Yang, Seungjoon
Issued Date
2020-06
DOI
10.1016/j.neunet.2020.03.004
URI
https://scholarworks.unist.ac.kr/handle/201301/31660
Fulltext
https://www.sciencedirect.com/science/article/pii/S0893608020300812
Citation
Neural Networks, v.126, pp.118 - 131
Abstract
In this study, we present deep neural networks with a set of node-wise varying activation functions. The feature-learning abilities of the nodes are affected by the selected activation functions, with nodes of smaller index becoming increasingly sensitive during training. As a result, the features learned by the nodes are sorted by the node indices in order of their importance, such that the more sensitive nodes are related to the more important features. The proposed networks learn not only the input features but also their importance. Nodes of lower importance can be pruned to reduce the complexity of the networks, and the pruned networks can be retrained without incurring performance losses. We validated the feature-sorting property of the proposed method using both shallow and deep networks, as well as deep networks transferred from existing networks. (An illustrative code sketch follows the keyword list below.)
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
ISSN
0893-6080
Keyword (Author)
Deep network; Principal component analysis; Pruning; Varying activation
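
The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of one plausible realization, in which each node of a dense layer applies a ReLU scaled by a node-dependent slope that decreases with the node index. The slope schedule, the class `NodeWiseActivation`, and the `prune_nodes` helper are illustrative assumptions, not the authors' published construction.

```python
import torch
import torch.nn as nn


class NodeWiseActivation(nn.Module):
    """Dense layer whose nodes apply activations with node-dependent slopes.

    Hypothetical sketch: node i outputs slope_i * relu(z_i), with slope_i
    decreasing in the node index i, so nodes with smaller indices receive
    larger gradients and tend to capture the more important features,
    which is the feature-sorting behavior the abstract describes.
    """

    def __init__(self, in_features: int, out_features: int, min_slope: float = 0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Slopes fall linearly from 1.0 (node 0) to min_slope (last node).
        # This particular schedule is an illustrative assumption.
        self.register_buffer("slopes", torch.linspace(1.0, min_slope, out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.slopes * torch.relu(self.linear(x))


def prune_nodes(layer: NodeWiseActivation, keep: int) -> NodeWiseActivation:
    """Keep only the first `keep` nodes, i.e. the most important ones."""
    pruned = NodeWiseActivation(layer.linear.in_features, keep)
    with torch.no_grad():
        pruned.linear.weight.copy_(layer.linear.weight[:keep])
        pruned.linear.bias.copy_(layer.linear.bias[:keep])
        pruned.slopes.copy_(layer.slopes[:keep])
    return pruned


# Usage: train with sorted features, then prune the least important nodes.
layer = NodeWiseActivation(64, 32)
out = layer(torch.randn(8, 64))   # shape (8, 32)
small = prune_nodes(layer, 16)    # retain the 16 most important nodes; retrain afterwards
```

Because the slopes shrink with the node index, earlier nodes receive larger gradients and tend to absorb the most useful features during training; pruning then amounts to keeping a leading prefix of nodes, as in `prune_nodes` above.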

