File Download

There are no files associated with this item.

  • Find it @ UNIST can give you direct access to the published full text of this article. (UNISTARs only)
Related Researcher

Lee, Kyuho Jason (이규호)
Intelligent Systems Lab.

Detailed Information

Full metadata record

DC Field Value Language
dc.citation.endPage 1743 -
dc.citation.number 5 -
dc.citation.startPage 1739 -
dc.citation.title IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS -
dc.citation.volume 70 -
dc.contributor.author Jeong, Hoichang -
dc.contributor.author Kim, Seungbin -
dc.contributor.author Park, Keonhee -
dc.contributor.author Jung, Jueun -
dc.contributor.author Lee, Kyuho Jason -
dc.date.accessioned 2023-12-21T12:38:33Z -
dc.date.available 2023-12-21T12:38:33Z -
dc.date.created 2023-07-07 -
dc.date.issued 2023-05 -
dc.description.abstract A highly energy-efficient Computing-in-Memory (CIM) processor for Ternary Neural Network (TNN) acceleration is proposed in this brief. Previous CIM processors for multi-bit precision neural networks showed low energy efficiency and throughput. Lightweight binary neural networks were accelerated with CIM processors for high energy efficiency but showed poor inference accuracy. In addition, most previous works suffered from poor linearity of analog computing and energy-consuming analog-to-digital conversion. To resolve the issues, we propose a Ternary-CIM (T-CIM) processor with 16T1C ternary bitcell for good linearity with the compact area and a charge-based partial sum adder circuit to remove analog-to-digital conversion that consumes a large portion of the system energy. Furthermore, flexible data mapping enables execution of the whole convolution layers with smaller bitcell memory capacity. Designed with 65 nm CMOS technology, the proposed T-CIM achieves 1,316 GOPS of peak performance and 823 TOPS/W of energy efficiency. -
dc.identifier.bibliographicCitation IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, v.70, no.5, pp.1739 - 1743 -
dc.identifier.doi 10.1109/TCSII.2023.3265064 -
dc.identifier.issn 1549-7747 -
dc.identifier.scopusid 2-s2.0-85153332504 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/64800 -
dc.identifier.wosid 000988497300015 -
dc.language English -
dc.publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC -
dc.title A Ternary Neural Network Computing-in-Memory Processor With 16T1C Bitcell Architecture -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Engineering, Electrical & Electronic -
dc.relation.journalResearchArea Engineering -
dc.type.docType Article; Proceedings Paper -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Computer architecture -
dc.subject.keywordAuthor Throughput -
dc.subject.keywordAuthor Neural networks -
dc.subject.keywordAuthor Linearity -
dc.subject.keywordAuthor Energy efficiency -
dc.subject.keywordAuthor Common Information Model (computing) -
dc.subject.keywordAuthor Transistors -
dc.subject.keywordAuthor SRAM -
dc.subject.keywordAuthor computing-in-memory (CIM) -
dc.subject.keywordAuthor processing-in-memory (PIM) -
dc.subject.keywordAuthor ternary neural network (TNN) -
dc.subject.keywordAuthor analog computing -
dc.subject.keywordPlus SRAM MACRO -
dc.subject.keywordPlus COMPUTATION -
dc.subject.keywordPlus BINARY -
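
The abstract above is the only technical description in this record: the core operation of a ternary CIM processor is a multiply-accumulate over operands restricted to {-1, 0, +1}. The short sketch below is only an illustrative digital reference in NumPy, not the authors' 16T1C bitcell or charge-domain partial sum adder; the quantization threshold and vector length are arbitrary assumptions.

import numpy as np

def ternary_quantize(x, threshold=0.05):
    # Map real values to {-1, 0, +1}; the threshold is an arbitrary assumption.
    q = np.zeros(x.shape, dtype=np.int8)
    q[x > threshold] = 1
    q[x < -threshold] = -1
    return q

def ternary_mac(activations, weights):
    # One partial sum: the digital equivalent of what a CIM column would
    # accumulate in the charge domain before the partial sum adder stage.
    return int(np.dot(activations.astype(np.int32), weights.astype(np.int32)))

rng = np.random.default_rng(0)
a = ternary_quantize(rng.standard_normal(16))  # ternary input activations
w = ternary_quantize(rng.standard_normal(16))  # ternary weights
print(ternary_mac(a, w))                       # partial sum lies in [-16, 16]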

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.