
Detailed Information


Full metadata record

DC Field Value Language
dc.citation.conferencePlace New Orleans, US -
dc.citation.title International Conference on Learning Representations -
dc.contributor.author Lee, Namhoon -
dc.contributor.author Ajanthan, Thalaiyasingam -
dc.contributor.author Torr, Philip H. S. -
dc.date.accessioned 2024-02-01T00:35:52Z -
dc.date.available 2024-02-01T00:35:52Z -
dc.date.created 2020-12-02 -
dc.date.issued 2019-05-06 -
dc.description.abstract Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task. © 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. -
dc.identifier.bibliographicCitation International Conference on Learning Representations -
dc.identifier.scopusid 2-s2.0-85083951596 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/79855 -
dc.language English -
dc.publisher ICLR -
dc.title.alternative International Conference on Learning Representations -
dc.title SNIP: Single-shot network pruning based on connection sensitivity -
dc.type Conference Paper -
dc.date.conferenceDate 2019-05-06 -
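The saliency criterion described in the abstract (connection sensitivity) amounts to scoring each connection by the magnitude of its loss gradient with respect to a multiplicative mask variable, which at initialization reduces to |gradient × weight|, then keeping only the top-scoring fraction before training. A minimal, framework-free sketch of that ranking step; the function name, flat parameter lists, and `keep_ratio` argument are illustrative and not taken from the paper's released code:

```python
def snip_mask(weights, grads, keep_ratio):
    """Return a 0/1 mask keeping the most sensitive connections.

    Saliency of connection j is |g_j * w_j|, i.e. the magnitude of the
    loss gradient with respect to a multiplicative mask on that weight.
    """
    saliency = [abs(w * g) for w, g in zip(weights, grads)]
    k = max(1, round(keep_ratio * len(weights)))
    # Indices of the k most salient connections (highest |g * w| first).
    kept = set(
        sorted(range(len(saliency)), key=lambda j: saliency[j], reverse=True)[:k]
    )
    return [1.0 if j in kept else 0.0 for j in range(len(weights))]


# Example: 4 connections, keep the top 50% by saliency.
mask = snip_mask([1.0, -2.0, 0.5, 3.0], [0.1, 0.1, 1.0, 0.01], 0.5)
# Saliencies are [0.1, 0.2, 0.5, 0.03], so connections 2 and 1 survive.
```

After the mask is fixed, the surviving sparse network would be trained in the standard way, which is what lets the method skip both pretraining and iterative pruning schedules.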


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.