File Download

There are no files associated with this item.

  • Find it @ UNIST can give you direct access to the published full text of this article. (UNISTARs only)
Related Researcher

이규호

Lee, Kyuho Jason
Intelligent Systems Lab.

Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.conferencePlace Monterey, CA, USA -
dc.citation.title IEEE International Symposium on Circuits and Systems -
dc.contributor.author Hoichang Jeong -
dc.contributor.author Keonhee Park -
dc.contributor.author Seungbin Kim -
dc.contributor.author Jueun Jung -
dc.contributor.author Lee, Kyuho Jason -
dc.date.accessioned 2024-01-31T19:05:53Z -
dc.date.available 2024-01-31T19:05:53Z -
dc.date.created 2023-05-03 -
dc.date.issued 2023-05-24 -
dc.description.abstract A highly energy-efficient Computing-in-Memory (CIM) processor for Ternary Neural Network (TNN) acceleration is proposed in this paper. Previous CIM processors for multi-bit precision neural networks showed low energy efficiency and throughput, while lightweight binary neural networks accelerated with CIM processors achieved high energy efficiency but poor inference accuracy. In addition, most previous works suffered from the poor linearity of analog computing and from energy-consuming analog-to-digital conversion. To resolve these issues, we propose a Ternary-CIM (T-CIM) processor with a 16T1C ternary bitcell that provides good linearity in a compact area, and a charge-based partial-sum adder circuit that removes the analog-to-digital conversion consuming a large portion of the system energy. Furthermore, configurable data mapping enables execution of all convolution layers with a smaller bitcell memory capacity. Designed in 65 nm CMOS technology, the proposed T-CIM achieves 1,316 GOPS of peak performance and 823 TOPS/W of energy efficiency. -
dc.identifier.bibliographicCitation IEEE International Symposium on Circuits and Systems -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/74724 -
dc.publisher IEEE -
dc.title A Ternary Neural Network Computing-in-Memory Processor with 16T1C Bitcell Architecture -
dc.type Conference Paper -
dc.date.conferenceDate 2023-05-21 -
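
As a companion to the abstract above, the following is a minimal Python sketch of the ternary multiply-accumulate that a TNN compute-in-memory macro evaluates: weights are quantized to {-1, 0, +1}, and the dot product is accumulated as partial sums over groups of bitcell rows before a digital adder combines them. The quantization threshold, vector sizes, and 64-row partial-sum granularity are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch of ternary-weight multiply-accumulate (MAC).
# Shapes, the threshold rule, and the 64-row partial-sum grouping are
# assumptions for demonstration, not details from the T-CIM paper.

def ternarize(w, threshold=0.05):
    """Quantize real-valued weights to {-1, 0, +1} with a fixed threshold."""
    t = np.zeros_like(w, dtype=np.int8)
    t[w > threshold] = 1
    t[w < -threshold] = -1
    return t

def tcim_dot(activations, ternary_weights, rows_per_partial_sum=64):
    """Evaluate a dot product as a sum of partial sums, mimicking how a CIM
    array accumulates over groups of bitcell rows before a digital
    partial-sum adder combines the results."""
    acc = 0
    for start in range(0, len(activations), rows_per_partial_sum):
        a = activations[start:start + rows_per_partial_sum]
        w = ternary_weights[start:start + rows_per_partial_sum]
        acc += int(np.dot(a, w))  # one partial sum per row group
    return acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_real = rng.normal(scale=0.1, size=256)   # trained full-precision weights
    w_tern = ternarize(w_real)                 # {-1, 0, +1} weights stored in bitcells
    x = rng.integers(0, 16, size=256)          # example 4-bit input activations
    print("ternary MAC result:", tcim_dot(x, w_tern))
```

In the actual T-CIM macro the partial sums are formed in the charge domain by the 16T1C bitcells; the sketch only mirrors the arithmetic, not the circuit behavior.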


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.