Speeding-up neuromorphic computation for neural networks: Structure optimization approach

Author(s)
Park, Heechun; Kim, Taewhan
Issued Date
2022-01
DOI
10.1016/j.vlsi.2021.09.001
URI
https://scholarworks.unist.ac.kr/handle/201301/81623
Citation
INTEGRATION-THE VLSI JOURNAL, v.82, pp.104 - 114
Abstract
This work addresses the structure optimization of neuromorphic computing architectures, which theoretically enables the computation of a fully connected neural network to run twice as fast as on existing architectures. Specifically, we propose a new neuromorphic computing architecture that mixes dendritic-based and axonal-based neuromorphic cores in a way that completely eliminates the inherent non-zero waiting time between neuromorphic cores. In conjunction with the new architecture, we also propose a criterion for maximally utilizing computation units, so that the resource overhead of the total computation units can be minimized. We then extend the applicability of the proposed structure to convolutional and recurrent neural networks. Through a set of experiments, we demonstrate the effectiveness (i.e., speed and area) of the proposed architecture: an ~2x speedup with no accuracy penalty on the neuromorphic computation, or an accuracy improvement with no time penalty.
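As a rough illustration of the abstract's ~2x claim, the following minimal sketch (our own construction, not code from the paper) models core-chain latency under two hypothetical assumptions: in a chain of same-type cores, each core waits for its predecessor's complete output, while in an alternating dendritic/axonal chain, each core streams its predecessor's partial results, exposing only about half of each downstream core's latency. All core counts and cycle figures are made up for the sketch.

```python
# Illustrative timing model for chained neuromorphic cores.
# Assumption (hypothetical, not from the paper): same-type cores
# cannot overlap, while alternating dendritic/axonal cores overlap
# so that each core past the first adds only ~half its latency.

def latency_same_type(num_cores: int, cycles_per_core: int) -> int:
    """Each core waits for the previous core's full output: no overlap."""
    return num_cores * cycles_per_core

def latency_mixed(num_cores: int, cycles_per_core: int) -> int:
    """Alternating dendritic/axonal cores consume partial results as
    they arrive; only the first core contributes its full latency."""
    return cycles_per_core + (num_cores - 1) * cycles_per_core // 2

if __name__ == "__main__":
    n, t = 8, 1000  # hypothetical chain length and per-core cycle count
    base = latency_same_type(n, t)
    mixed = latency_mixed(n, t)
    print(f"same-type chain : {base} cycles")
    print(f"mixed chain     : {mixed} cycles")
    print(f"speedup         : {base / mixed:.2f}x")  # approaches 2x as n grows
```

Under these assumptions the speedup is 8000/4500 ≈ 1.78x for an 8-core chain and tends toward 2x as the chain lengthens, consistent with the paper's theoretical bound.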
Publisher
ELSEVIER
ISSN
0167-9260
Keyword (Author)
Neural network; Performance; Architecture; Optimization
