Related Researcher

Lee, Jongeun (이종은)
Intelligent Computing and Codesign Lab.

Detailed Information

Full metadata record

DC Field Value
dc.citation.endPage 4908
dc.citation.number 12
dc.citation.startPage 4897
dc.citation.title IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
dc.citation.volume 42
dc.contributor.author Azamat, Azat
dc.contributor.author Asim, Faaiz
dc.contributor.author Kim, Jintae
dc.contributor.author Lee, Jongeun
dc.date.accessioned 2023-12-19T11:13:26Z
dc.date.available 2023-12-19T11:13:26Z
dc.date.created 2023-11-30
dc.date.issued 2023-12
dc.description.abstract While ReRAM (Resistive Random-Access Memory) crossbar arrays have the potential to significantly accelerate DNN (Deep Neural Network) training through fast and low-cost matrix-vector multiplication, peripheral circuits such as ADCs (analog-to-digital converters) create a high overhead, consuming over half of the chip power and a considerable portion of the chip cost. To address this challenge, we propose advanced quantization techniques that can significantly reduce the ADC overhead of ReRAM crossbar arrays. Our methodology interprets the ADC as a quantization mechanism, allowing us to scale the range of the ADC input optimally along with the weight parameters of a DNN, resulting in a multiple-bit reduction in ADC precision. This approach reduces ADC size and power consumption severalfold, and it is applicable to any DNN type (binarized or multi-bit) and any ReRAM crossbar array size. Additionally, we propose ways to minimize the overhead of the digital scaler, an essential component of our scheme that is sometimes required. Our experimental results using ResNet-18 on the ImageNet dataset demonstrate that our method can reduce ADC size by 32 times compared to ISAAC with only a minimal accuracy degradation of 0.24%. We also present evaluation results in the presence of ReRAM non-ideality (such as stuck-at faults).
dc.identifier.bibliographicCitation IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, v.42, no.12, pp.4897 - 4908
dc.identifier.doi 10.1109/TCAD.2023.3294461
dc.identifier.issn 0278-0070
dc.identifier.scopusid 2-s2.0-85164737622
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/66315
dc.identifier.wosid 001123254100044
dc.language English
dc.publisher Institute of Electrical and Electronics Engineers
dc.title Partial Sum Quantization for Reducing ADC Size in ReRAM-based Neural Network Accelerators
dc.type Article
dc.description.isOpenAccess FALSE
dc.relation.journalWebOfScienceCategory Computer Science, Hardware & Architecture; Computer Science, Interdisciplinary Applications; Engineering, Electrical & Electronic
dc.relation.journalResearchArea Computer Science; Engineering
dc.type.docType Article
dc.description.journalRegisteredClass scie
dc.description.journalRegisteredClass scopus
dc.subject.keywordAuthor analog-to-digital conversion (ADC)
dc.subject.keywordAuthor Artificial neural networks
dc.subject.keywordAuthor Convolution
dc.subject.keywordAuthor convolutional neural network (CNN)
dc.subject.keywordAuthor Costs
dc.subject.keywordAuthor Hardware
dc.subject.keywordAuthor In-memory computing accelerator
dc.subject.keywordAuthor memristor
dc.subject.keywordAuthor quantization
dc.subject.keywordAuthor Quantization (signal)
dc.subject.keywordAuthor Throughput
dc.subject.keywordAuthor Training
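
The abstract describes treating the ADC as a uniform quantizer whose input range is scaled jointly with the DNN weight parameters, so that each crossbar tile's analog partial sum can be digitized at low precision. As a minimal sketch of that general idea, and not the authors' actual algorithm, the Python snippet below models a tiled matrix-vector product in which each tile's partial sum passes through a low-bit uniform ADC before digital accumulation. All names, the tile size (128 rows), and the fixed clipping range `scale` are illustrative assumptions.

```python
import numpy as np

def adc_quantize(ps, n_bits, scale):
    # Model an n-bit ADC as a uniform quantizer over [-scale, +scale).
    # `scale` plays the role of the co-tuned ADC input range.
    levels = 2 ** n_bits
    step = 2.0 * scale / levels
    clipped = np.clip(ps, -scale, scale - step)
    return np.round(clipped / step) * step

def crossbar_mvm(x, W, rows_per_array=128, adc_bits=4, scale=1.0):
    # Matrix-vector product split across crossbar tiles of `rows_per_array`
    # rows each; every tile produces an analog partial sum that is digitized
    # by a low-precision ADC and then accumulated digitally.
    out = np.zeros(W.shape[1])
    for r in range(0, W.shape[0], rows_per_array):
        ps = x[r:r + rows_per_array] @ W[r:r + rows_per_array]
        out += adc_quantize(ps, adc_bits, scale)
    return out

# Example: compare against the exact product to observe quantization error.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 64)) * 0.05
x = rng.standard_normal(512)
exact = x @ W
approx = crossbar_mvm(x, W, adc_bits=4, scale=2.0)
print("max abs error:", np.max(np.abs(exact - approx)))
```

In the paper, the ADC input range is optimized together with the weight scaling (with a digital scaler to restore magnitudes), which is what enables the multi-bit reduction in ADC precision; in this sketch `scale` is simply fixed for brevity.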
