Combined group and exclusive sparsity for deep neural networks

Author(s)
Yoon, J.; Hwang, Sung Ju
Issued Date
2017-08-06
URI
https://scholarworks.unist.ac.kr/handle/201301/35110
Citation
34th International Conference on Machine Learning, ICML 2017, pp.6031 - 6039
Abstract
The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to force the network to be sparse, and at the same time removes redundancies among the features to fully utilize the capacity of the network. Specifically, we propose to use an exclusive sparsity regularization based on the (1,2)-norm, which promotes competition for features between different weights, thus forcing them to fit disjoint sets of features. We further combine the exclusive sparsity with a group sparsity based on the (2,1)-norm, to promote both sharing and competition for features in training a deep neural network. We validate our method on multiple public datasets, and the results show that it obtains more compact and efficient networks while also improving performance over the base networks with full weights, as opposed to existing sparsity regularizations, which often gain efficiency at the expense of prediction accuracy.
Publisher
International Machine Learning Society (IMLS)
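Illustrative Sketch
As an informal illustration of the regularizer described in the abstract, the sketch below combines group sparsity (the (2,1)-norm, summing the L2 norm of each group) with exclusive sparsity (the (1,2)-norm, summing the squared L1 norm of each group). The row-wise grouping, the balancing weight mu, the strength lam, and all function names are assumptions made for illustration; this is not the authors' released code.

import torch

def group_sparsity(W: torch.Tensor) -> torch.Tensor:
    # (2,1)-norm: sum of L2 norms over groups (here, rows of W).
    # Drives whole groups to zero, promoting feature sharing.
    return W.norm(p=2, dim=1).sum()

def exclusive_sparsity(W: torch.Tensor) -> torch.Tensor:
    # (1,2)-norm: sum of squared L1 norms over groups.
    # Penalizes groups that spread weight across many features,
    # so different weights compete for disjoint feature sets.
    return W.abs().sum(dim=1).pow(2).sum()

def combined_regularizer(W: torch.Tensor, mu: float = 0.5) -> torch.Tensor:
    # Convex combination balancing sharing against competition;
    # mu is a hypothetical hyperparameter, not a value from the paper.
    return (1.0 - mu) * group_sparsity(W) + mu * exclusive_sparsity(W)

# Usage: add the penalty for each layer's weights to the task loss.
W = torch.randn(64, 128, requires_grad=True)  # toy layer weight matrix
task_loss = torch.tensor(0.0)                 # stand-in for the real loss
lam = 1e-4                                    # assumed regularization strength
total = task_loss + lam * combined_regularizer(W)
total.backward()                              # penalty is differentiable a.e.

In practice the penalty would be summed over all layers; because the L1 terms are non-smooth at zero, proximal-gradient-style updates are a common alternative to plain subgradient descent for penalties of this form.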
