Related Researcher

Lee, Seulki (이슬기), Embedded Artificial Intelligence Lab.

Detailed Information

Fast and scalable in-memory deep multitask learning via neural weight virtualization

Author(s)
Lee, Seulki; Nirjon, Shahriar
Issued Date
2020-06-17
DOI
10.1145/3386901.3388947
URI
https://scholarworks.unist.ac.kr/handle/201301/78484
Fulltext
https://www.youtube.com/watch?v=9-3qh1fKfCU
Citation
ACM International Conference on Mobile Systems, Applications, and Services, pp.175 - 190
Abstract
This paper introduces the concept of Neural Weight Virtualization, which enables fast and scalable in-memory multitask deep learning on memory-constrained embedded systems. The goal of neural weight virtualization is two-fold: (1) packing multiple DNNs, whose combined memory requirement exceeds the main memory, into a fixed-size main memory, and (2) enabling fast in-memory execution of those DNNs. To this end, we propose a two-phase approach: (1) virtualization of weight parameters for fine-grained parameter sharing at the level of individual weights, which scales up to multiple heterogeneous DNNs of arbitrary network architectures, and (2) an in-memory data structure and run-time execution framework for in-memory execution and context switching of DNN tasks. We implement two multitask learning systems: (1) an embedded GPU-based mobile robot and (2) a microcontroller-based IoT device. We thoroughly evaluate the proposed algorithms as well as the two systems, which involve ten state-of-the-art DNNs. Our evaluation shows that weight virtualization improves the memory efficiency, execution time, and energy efficiency of the multitask learning systems by 4.1x, 36.9x, and 4.2x, respectively.
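The core idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the class, page size, and page assignments below are all hypothetical. It only shows the general shape of the technique: several tasks share one fixed-size weight pool, each task holds a small page table into that pool, and "context switching" between DNN tasks means selecting a different view of the shared memory rather than copying whole weight sets.

```python
PAGE_SIZE = 4  # weights per page (hypothetical granularity)

class WeightVirtualizer:
    """Toy model of weight virtualization: a shared pool plus per-task page tables."""

    def __init__(self, num_pages):
        # One shared weight pool, deliberately smaller than the sum of
        # all tasks' individual weight requirements.
        self.pool = [0.0] * (num_pages * PAGE_SIZE)
        self.page_tables = {}  # task name -> list of page indices

    def register_task(self, name, pages):
        # In the paper, which weights are shared is optimized; here the
        # page assignment is simply given for illustration.
        self.page_tables[name] = pages

    def weights(self, name):
        # "Context switch" to a task: gather its view of the shared pool.
        view = []
        for p in self.page_tables[name]:
            start = p * PAGE_SIZE
            view.extend(self.pool[start:start + PAGE_SIZE])
        return view

# Two 8-weight tasks packed into a 12-weight pool: page 1 is shared,
# so the combined requirement (16) exceeds the pool size (12).
vm = WeightVirtualizer(num_pages=3)
vm.register_task("taskA", pages=[0, 1])
vm.register_task("taskB", pages=[1, 2])
vm.pool[4:8] = [0.5] * 4  # updating shared page 1 is visible to both tasks
```

Switching from taskA to taskB touches only the page table, not the weights themselves, which is the property the paper exploits for fast in-memory execution.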
Publisher
Association for Computing Machinery

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.