File Download

There are no files associated with this item.

Related Researcher

이종은

Lee, Jongeun
Intelligent Computing and Codesign Lab.

Detailed Information


Efficient FPGA Acceleration of Convolutional Neural Networks Using Logical-3D Compute Array

Author(s)
Rahman, Atul; Lee, Jongeun; Choi, Kiyoung
Issued Date
2016-03-14
URI
https://scholarworks.unist.ac.kr/handle/201301/32807
Fulltext
https://www.date-conference.com/date16
Citation
Design, Automation and Test in Europe Conference (DATE), pp. 1393-1398
Abstract
Convolutional Deep Neural Networks (DNNs) are reported to show outstanding recognition performance in many image-related machine learning tasks. DNNs have very high computational requirements, making accelerators a very attractive option. These DNNs have many convolutional layers with different parameters in terms of input/output/kernel sizes as well as input stride. Design constraints usually require a single design for all layers of a given DNN. Thus a key challenge is how to design a common architecture that can perform well for all convolutional layers of a DNN, which can be quite diverse and complex. In this paper we present a flexible yet highly efficient 3D neuron array architecture that is a natural fit for convolutional layers. We also present our technique to optimize its parameters, including on-chip buffer sizes, for a given set of resource constraints for modern FPGAs. Our experimental results targeting a Virtex-7 FPGA demonstrate that our proposed technique can generate DNN accelerators that outperform state-of-the-art solutions by 22% for 32-bit floating-point MAC implementations, and are far more scalable in terms of compute resources and DNN size.
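For illustration, the design problem described in the abstract can be sketched as a tiled convolution-layer loop nest of the kind FPGA CNN accelerators commonly implement. This is only a sketch: the layer shape (N, M, R, C, K, S) and the tile sizes (TM, TN, TR, TC) below are hypothetical stand-ins for the per-layer parameters and on-chip buffer sizing the abstract refers to, not the paper's actual Logical-3D compute array or its optimization technique.

```c
/*
 * Illustrative sketch only: a tiled convolutional-layer loop nest of the kind
 * FPGA CNN accelerators commonly implement.  The layer shape (N, M, R, C, K, S)
 * and the tile sizes (TM, TN, TR, TC) are hypothetical; they stand in for the
 * per-layer parameters and the on-chip buffer / compute-array sizing that a
 * design tool would choose under FPGA resource constraints.
 */
#include <stdio.h>

/* Hypothetical layer shape: N input maps, M output maps, R x C output size,
 * K x K kernels, stride S. */
enum { N = 8, M = 16, R = 16, C = 16, K = 3, S = 1 };

/* Hypothetical tile sizes: how many output maps / rows / cols are processed
 * per pass; these determine the on-chip buffer footprint. */
enum { TM = 4, TN = 4, TR = 8, TC = 8 };

#define IN_R (R * S + K - 1)   /* input rows needed for R output rows */
#define IN_C (C * S + K - 1)   /* input cols needed for C output cols */

static float in [N][IN_R][IN_C];
static float w  [M][N][K][K];
static float out[M][R][C];

/* One convolutional layer, blocked so that each (to, ti, ro, co) tile could be
 * mapped onto a fixed-size compute array working out of bounded on-chip buffers. */
static void conv_layer_tiled(void)
{
    for (int to = 0; to < M; to += TM)          /* output-map tiles */
    for (int ti = 0; ti < N; ti += TN)          /* input-map tiles  */
    for (int ro = 0; ro < R; ro += TR)          /* output-row tiles */
    for (int co = 0; co < C; co += TC) {        /* output-col tiles */
        /* Inner loops: the work one tile's worth of buffered data generates. */
        for (int m = to; m < to + TM && m < M; m++)
        for (int r = ro; r < ro + TR && r < R; r++)
        for (int c = co; c < co + TC && c < C; c++)
        for (int n = ti; n < ti + TN && n < N; n++)
        for (int i = 0; i < K; i++)
        for (int j = 0; j < K; j++)
            out[m][r][c] += w[m][n][i][j] * in[n][r * S + i][c * S + j];
    }
}

int main(void)
{
    /* Fill inputs and weights with a simple pattern so the run is repeatable. */
    for (int n = 0; n < N; n++)
        for (int r = 0; r < IN_R; r++)
            for (int c = 0; c < IN_C; c++)
                in[n][r][c] = (float)((n + r + c) % 7) * 0.1f;
    for (int m = 0; m < M; m++)
        for (int n = 0; n < N; n++)
            for (int i = 0; i < K; i++)
                for (int j = 0; j < K; j++)
                    w[m][n][i][j] = (float)((m + n + i + j) % 5) * 0.01f;

    conv_layer_tiled();
    printf("out[0][0][0] = %f\n", out[0][0][0]);
    return 0;
}
```

In these terms, the challenge the abstract describes is to fix a single tiling/array configuration, and the on-chip buffers it implies, that performs well across all convolutional layers of a network, even though each layer has different N, M, R, C, K, and S.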
Publisher
ACM/IEEE

Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.