File Download

There are no files associated with this item.

  • Find it @ UNIST provides direct access to the published full text of this article (UNISTARs only).
Related Researcher

이종은

Lee, Jongeun
Intelligent Computing and Codesign Lab.


Full metadata record

DC Field Value Language
dc.citation.conferencePlace JA -
dc.citation.conferencePlace Tokyo -
dc.citation.endPage 656 -
dc.citation.startPage 651 -
dc.citation.title 24th Asia and South Pacific Design Automation Conference, ASPDAC 2019 -
dc.contributor.author Sim, Hyeonuk -
dc.contributor.author Anderson, Jason H. -
dc.contributor.author Lee, Jongeun -
dc.date.accessioned 2024-02-01T00:39:37Z -
dc.date.available 2024-02-01T00:39:37Z -
dc.date.created 2019-03-07 -
dc.date.issued 2019-01-21 -
dc.description.abstract State-of-the-art deep neural networks (DNNs) require hundreds of millions of multiply-accumulate (MAC) computations to perform inference, e.g., in image-recognition tasks. To improve performance and energy efficiency, deep learning accelerators have been proposed, realized both on FPGAs and as custom ASICs. Generally, such accelerators comprise many parallel processing elements, capable of executing large numbers of concurrent MAC operations. From an energy perspective, however, most of the consumption arises from memory accesses, both to off-chip external memory and to on-chip buffers. In this paper, we propose an on-chip DNN co-processor architecture where minimizing memory accesses is the primary design objective. Off-chip memory accesses are eliminated to the maximum possible extent, providing the lowest possible energy consumption for inference. Compared to a state-of-the-art ASIC, our architecture requires 36% fewer external memory accesses and 53% less energy consumption for low-latency image classification. -
dc.identifier.bibliographicCitation 24th Asia and South Pacific Design Automation Conference, ASPDAC 2019, pp.651 - 656 -
dc.identifier.doi 10.1145/3287624.3287713 -
dc.identifier.issn 0000-0000 -
dc.identifier.scopusid 2-s2.0-85061152294 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/80215 -
dc.identifier.url https://dl.acm.org/citation.cfm?doid=3287624.3287713 -
dc.language English -
dc.publisher Association for Computing Machinery -
dc.title XoMA: Exclusive on-chip memory architecture for energy-efficient deep learning acceleration -
dc.type Conference Paper -
dc.date.conferenceDate 2019-01-21 -
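
The abstract's central claim — that memory accesses, not arithmetic, dominate inference energy — can be illustrated with a back-of-envelope sketch. The per-operation energy figures below are assumptions for illustration (rough 45 nm numbers commonly cited in the accelerator literature), not values from this paper, and the layer dimensions are hypothetical:

```python
# Illustrative estimate of why memory access dominates DNN inference energy.
# All energy constants are ASSUMED values, not taken from the paper.
E_MAC_PJ = 3.1     # assumed energy of one multiply-accumulate (pJ)
E_DRAM_PJ = 640.0  # assumed energy of one off-chip DRAM access (pJ)

def layer_macs(out_h, out_w, out_c, in_c, k):
    """MAC count for one convolutional layer with a k x k kernel."""
    return out_h * out_w * out_c * in_c * k * k

# A single hypothetical mid-sized conv layer: 56x56x128 output,
# 128 input channels, 3x3 kernel.
macs = layer_macs(56, 56, 128, 128, 3)
print(f"MACs: {macs:,}")  # hundreds of millions, as the abstract states

# If even 1% of operands spill to off-chip DRAM, memory energy
# exceeds the energy of all the arithmetic combined.
dram_accesses = macs * 0.01
compute_energy = macs * E_MAC_PJ
dram_energy = dram_accesses * E_DRAM_PJ
print(f"compute: {compute_energy / 1e6:.1f} uJ, DRAM: {dram_energy / 1e6:.1f} uJ")
```

Under these assumed constants, the DRAM energy for a 1% spill rate already exceeds the total compute energy, which is why the paper's design objective is to eliminate off-chip accesses to the maximum possible extent.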


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.