Learning the group structure of deep neural networks with an expectation maximization method

Author(s)
Yi, Subin; Choi, Jaesik
Issued Date
2018-11-17
DOI
10.1109/ICDMW.2018.00106
URI
https://scholarworks.unist.ac.kr/handle/201301/80402
Fulltext
https://ieeexplore.ieee.org/document/8637406
Citation
18th IEEE International Conference on Data Mining Workshops (ICDMW 2018), pp. 689-696
Abstract
Much recent deep learning research uses very deep neural networks with huge numbers of parameters. This yields strong expressive power, but it also brings issues such as overfitting to the training data, an increased memory burden, and excessive computation. In this paper, we propose an expectation maximization method that learns the group structure of deep neural networks under a group regularization principle to resolve these issues. Our method clusters the neurons in a layer based on how they are connected to the neurons in the next layer using a mixture model, and clusters the neurons in the next layer based on which group in the current layer they are most strongly connected to. Our expectation maximization method uses a Gaussian mixture model to keep the most salient connections and remove the others, producing a grouped weight matrix in block-diagonal form. We further refine the method to cluster the kernels of convolutional neural networks (CNNs): we define a representative value for each kernel and build a representative matrix; the matrix is then grouped, and kernels are pruned based on the group structure of the representative matrix. In experiments, we applied our method to fully connected networks, 1-dimensional CNNs, and 2-dimensional CNNs and compared it with baseline deep neural networks on the MNIST, CIFAR-10, and United States groundwater datasets with respect to the number of parameters and classification and regression accuracy. We show that our method reduces the number of parameters significantly without loss of accuracy and outperforms the baseline models.
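The abstract describes clustering neurons with a Gaussian mixture model and keeping only within-group connections so that the layer's weight matrix becomes block diagonal. The following is a minimal, simplified sketch of that idea, not the authors' actual algorithm: it uses scikit-learn's standard GaussianMixture (whose EM fitting stands in for the paper's expectation maximization step), and the function name `group_weight_matrix`, the group count, and the rule for assigning output neurons are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def group_weight_matrix(W, n_groups=4, seed=0):
    """Sketch of neuron grouping for one layer.

    W has shape (n_out, n_in). Input neurons are clustered by their
    outgoing-weight vectors with a Gaussian mixture; each output neuron
    is assigned to the input group it is most strongly connected to;
    cross-group weights are zeroed, leaving a block-diagonal structure
    up to a permutation of rows and columns.
    """
    n_out, n_in = W.shape

    # Cluster input neurons: each is described by its outgoing weights.
    gmm = GaussianMixture(n_components=n_groups, random_state=seed)
    in_groups = gmm.fit_predict(W.T)                    # shape (n_in,)

    # Assign each output neuron to the group with the largest total
    # absolute incoming weight (an illustrative assignment rule).
    strength = np.zeros((n_out, n_groups))
    for g in range(n_groups):
        strength[:, g] = np.abs(W[:, in_groups == g]).sum(axis=1)
    out_groups = strength.argmax(axis=1)                # shape (n_out,)

    # Keep only within-group connections.
    mask = (out_groups[:, None] == in_groups[None, :]).astype(W.dtype)
    return W * mask, in_groups, out_groups

# Example: prune a random 64x128 layer into 4 groups.
W = np.random.randn(64, 128)
W_grouped, in_g, out_g = group_weight_matrix(W, n_groups=4)
print(f"fraction of weights kept: {(W_grouped != 0).mean():.2f}")
```

With four roughly equal groups, such a mask keeps on the order of a quarter of the weights; the paper's method additionally relies on the EM responsibilities and a group regularizer to decide which connections are salient, which this sketch does not attempt to reproduce.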
Publisher
IEEE Computer Society
ISSN
2375-9232

