File Download

There are no files associated with this item.

  • The Find it @ UNIST service gives direct access to the published full text of this article (UNISTARs only).
Related Researcher

황성주 (Hwang, Sung Ju)


Full metadata record

DC Field Value Language
dc.citation.conferencePlace AU -
dc.citation.conferencePlace Sydney -
dc.citation.endPage 2962 -
dc.citation.startPage 2950 -
dc.citation.title 34th International Conference on Machine Learning, ICML 2017 -
dc.contributor.author Kim, J -
dc.contributor.author Park, Y -
dc.contributor.author Kim, G -
dc.contributor.author Hwang, Sung Ju -
dc.date.accessioned 2023-12-19T18:37:07Z -
dc.date.available 2023-12-19T18:37:07Z -
dc.date.created 2019-03-21 -
dc.date.issued 2017-08-06 -
dc.description.abstract We propose a novel deep neural network that is both lightweight and effectively structured for model parallelization. Our network, which we name as SplitNet, automatically learns to split the network weights into either a set or a hierarchy of multiple groups that use disjoint sets of features, by learning both the class-to-group and feature-to-group assignment matrices along with the network weights. This produces a tree-structured network that involves no connection between branched subtrees of semantically disparate class groups. SplitNet thus greatly reduces the number of parameters and required computations, and is also embarrassingly model-parallelizable at test time, since the evaluation for each subnetwork is completely independent except for the shared lower layer weights that can be duplicated over multiple processors, or assigned to a separate processor. We validate our method with two different deep network models (ResNet and AlexNet) on two datasets (CIFAR-100 and ILSVRC 2012) for image classification, on which our method obtains networks with significantly reduced number of parameters while achieving comparable or superior accuracies over original full deep networks, and accelerated test speed with multiple GPUs. (An illustrative sketch of the class-to-group / feature-to-group splitting idea follows this metadata record.) -
dc.identifier.bibliographicCitation 34th International Conference on Machine Learning, ICML 2017, pp.2950 - 2962 -
dc.identifier.issn 0000-0000 -
dc.identifier.scopusid 2-s2.0-85048424183 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/35109 -
dc.identifier.url http://proceedings.mlr.press/v70/kim17b.html -
dc.language English -
dc.publisher International Machine Learning Society (IMLS) -
dc.title SplitNet: Learning to semantically split deep networks for parameter reduction and model parallelization -
dc.type Conference Paper -
dc.date.conferenceDate 2017-08-06 -
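
The abstract above describes learning class-to-group and feature-to-group assignment matrices jointly with the network weights so that the weight matrix can be split into disjoint groups. The following is a minimal, hypothetical sketch of that grouping idea, not the authors' implementation: it uses a single fully connected layer and a simple regularizer (the dimensions D, C, G, the regularization weight, and the penalty form are illustrative assumptions).

```python
# Illustrative sketch only (not the SplitNet authors' code): soft group
# assignments for features and classes, plus a penalty on weights that
# connect a feature and a class assigned to different groups, which pushes
# the D x C weight matrix toward a block (tree-splittable) structure.
import torch
import torch.nn.functional as F

D, C, G = 512, 100, 4                              # feature dim, classes, groups (assumed)

W = torch.randn(D, C, requires_grad=True)          # layer weights
q_logits = torch.zeros(D, G, requires_grad=True)   # feature-to-group assignment logits
p_logits = torch.zeros(C, G, requires_grad=True)   # class-to-group assignment logits

def split_regularizer(W, q_logits, p_logits):
    Q = F.softmax(q_logits, dim=1)                 # soft feature-to-group matrix (D x G)
    P = F.softmax(p_logits, dim=1)                 # soft class-to-group matrix   (C x G)
    same_group = Q @ P.t()                         # (D x C): prob. a feature and a class share a group
    # Penalize weight magnitude only on cross-group connections.
    return ((1.0 - same_group) * W.pow(2)).sum()

# Typical use: add the regularizer to the task loss during training.
x = torch.randn(32, D)
y = torch.randint(0, C, (32,))
logits = x @ W
loss = F.cross_entropy(logits, y) + 1e-3 * split_regularizer(W, q_logits, p_logits)
loss.backward()
```

Once the assignments become (near-)disjoint, each group's features and classes can be evaluated as an independent subnetwork, which is the basis for the parameter reduction and model parallelization the abstract reports.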


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.