Detailed Information

NP-CGRA: Extending CGRAs for Efficient Processing of Light-weight Deep Neural Networks

Author(s)
Lee, Jungi; Lee, Jongeun
Issued Date
2021-02
DOI
10.23919/DATE51398.2021.9474256
URI
https://scholarworks.unist.ac.kr/handle/201301/77642
Citation
Design Automation and Test in Europe Conference, pp.1408 - 1413
Abstract
Coarse-grained reconfigurable architectures (CGRAs) can provide both high energy efficiency and flexibility, making them well-suited for machine learning applications. However, previous work on CGRAs has very limited support for deep neural networks (DNNs), especially for recent lightweight models such as depthwise separable convolution (DSC), which are an important workload for mobile environments. In this paper, we propose a set of architecture extensions and a mapping scheme to greatly enhance CGRA performance on DSC kernels. Our experimental results using MobileNets demonstrate that our proposed CGRA enhancement can deliver an 8-18x improvement in area-delay product, depending on layer type, over a baseline CGRA with a state-of-the-art CGRA compiler. Moreover, our proposed CGRA architecture can also speed up 3D convolution with similar efficiency to previous work, demonstrating the effectiveness of our architectural features beyond DSC layers.
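The abstract refers to depthwise separable convolution (DSC) as a key lightweight workload. As background (not taken from the paper itself), the sketch below counts multiply-accumulate (MAC) operations for a standard convolution versus a DSC pair, illustrating the well-known reduction factor of roughly 1/c_out + 1/k^2 that makes DSC attractive on mobile devices. The layer shape used is a hypothetical example, not a specific MobileNets layer.

```python
# Background sketch: MAC counts for standard convolution vs. depthwise
# separable convolution (DSC). Shapes below are illustrative only.

def standard_conv_macs(h, w, c_in, c_out, k):
    """MACs for a standard convolution with a k x k kernel."""
    return h * w * c_in * c_out * k * k

def dsc_macs(h, w, c_in, c_out, k):
    """MACs for a depthwise (k x k per channel) + pointwise (1x1) pair."""
    depthwise = h * w * c_in * k * k        # one k x k filter per channel
    pointwise = h * w * c_in * c_out        # 1x1 convolution mixes channels
    return depthwise + pointwise

if __name__ == "__main__":
    h, w, c_in, c_out, k = 56, 56, 128, 128, 3  # hypothetical layer shape
    std = standard_conv_macs(h, w, c_in, c_out, k)
    dsc = dsc_macs(h, w, c_in, c_out, k)
    # Reduction factor is 1/c_out + 1/k^2, roughly 1/9 for 3x3 kernels.
    print(f"standard: {std}, dsc: {dsc}, ratio: {dsc / std:.3f}")
```

For a 3x3 kernel with 128 output channels, the DSC pair needs only about 12% of the MACs of the standard convolution, which is why kernels of this form dominate lightweight mobile models and motivate the paper's CGRA extensions.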
Publisher
Institute of Electrical and Electronics Engineers Inc.
ISSN
1530-1591
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.