File Download

There are no files associated with this item.

Related Researcher

백웅기

Baek, Woongki
Intelligent System Software Lab.


Detailed Information


Full metadata record

DC Field Value Language
dc.citation.endPage 645 -
dc.citation.number 3 -
dc.citation.startPage 630 -
dc.citation.title IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS -
dc.citation.volume 30 -
dc.contributor.author Kim, Kyu Yeun -
dc.contributor.author Park, Jinsu -
dc.contributor.author Baek, Woongki -
dc.date.accessioned 2023-12-21T19:36:53Z -
dc.date.available 2023-12-21T19:36:53Z -
dc.date.created 2018-11-19 -
dc.date.issued 2019-03 -
dc.description.abstract Hardware caches are widely employed in GPGPUs to achieve higher performance and energy efficiency. Incorporating hardware caches in GPGPUs, however, does not immediately guarantee enhanced performance and energy efficiency due to high cache contention and thrashing. To address the inefficiency of GPGPU caches, various adaptive techniques (e.g., warp limiting) have been proposed. However, relatively little work has been done in the context of creating an architectural framework that tightly integrates adaptive cache management techniques and investigating their effectiveness and interaction. To bridge this gap, we propose IACM, integrated adaptive cache management for high-performance and energy-efficient GPGPU computing. IACM integrates the state-of-the-art adaptive cache management techniques (i.e., cache indexing, bypassing, and warp limiting) in a unified architectural framework. Our quantitative evaluation demonstrates that IACM significantly improves the performance and energy efficiency of various GPGPU workloads over the baseline architecture (i.e., 98.1 and 61.9 percent on average, respectively), achieves considerably higher performance than the state-of-the-art technique (i.e., 361.4 percent at maximum and 7.7 percent on average), and delivers significant performance and energy-efficiency gains over the baseline GPGPU architecture enhanced with advanced architectural technologies. -
dc.identifier.bibliographicCitation IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, v.30, no.3, pp.630 - 645 -
dc.identifier.doi 10.1109/TPDS.2018.2868658 -
dc.identifier.issn 1045-9219 -
dc.identifier.scopusid 2-s2.0-85052826257 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/25211 -
dc.identifier.url https://ieeexplore.ieee.org/document/8454288 -
dc.identifier.wosid 000458820700011 -
dc.language English -
dc.publisher IEEE COMPUTER SOC -
dc.title Improving the Performance and Energy Efficiency of GPGPU Computing through Integrated Adaptive Cache Management -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Theory & Methods; Engineering, Electrical & Electronic -
dc.relation.journalResearchArea Computer Science; Engineering -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Integrated adaptive cache management -
dc.subject.keywordAuthor GPGPU computing -
dc.subject.keywordAuthor high performance -
dc.subject.keywordAuthor energy efficiency -
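The abstract above names warp limiting as one of the adaptive techniques IACM integrates. As a rough intuition aid only (not the paper's actual mechanism), the toy simulation below models a small fully-associative LRU cache and shows why capping the number of concurrently active warps can reduce thrashing: when all warps interleave, their combined working set exceeds the cache and every access misses, whereas a limited active group fits and mostly hits after warm-up. All names, sizes, and access patterns here are hypothetical.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal fully-associative LRU cache model (hypothetical, for intuition only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # cached line tags in LRU order
        self.hits = 0
        self.accesses = 0

    def access(self, tag):
        self.accesses += 1
        if tag in self.lines:
            self.hits += 1
            self.lines.move_to_end(tag)          # refresh LRU position
        else:
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)   # evict least recently used
            self.lines[tag] = True

    def hit_rate(self):
        return self.hits / self.accesses

def hit_rate_with_warp_limit(active_warps, total_warps=8, lines_per_warp=16,
                             cache_lines=32, passes=4):
    """Each warp repeatedly streams over its own block of cache lines.
    Only `active_warps` warps run concurrently (round-robin interleaved);
    the remaining warps wait their turn, as in warp-limiting schemes."""
    cache = LRUCache(cache_lines)
    for start in range(0, total_warps, active_warps):
        group = range(start, min(start + active_warps, total_warps))
        for _ in range(passes):
            for i in range(lines_per_warp):
                for w in group:                  # interleave the active group
                    cache.access((w, i))
    return cache.hit_rate()
```

With these toy parameters, running all 8 warps concurrently gives a 128-line combined working set that thrashes the 32-line cache (hit rate 0), while limiting execution to 2 active warps fits their 32-line working set in cache and yields hits on every pass after the first.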


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.