File Download

There are no files associated with this item.

  • Find it @ UNIST can give you direct access to the published full text of this article. (UNISTARs only)
Related Researcher

노삼혁

Noh, Sam H.

Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.conferencePlace Santa Clara University, Santa Clara -
dc.citation.title 31st Symposium on Massive Storage Systems and Technologies, MSST 2015 -
dc.contributor.author Kang, Dongwoo -
dc.contributor.author Baek, Seungjae -
dc.contributor.author Choi, Jongmoo -
dc.contributor.author Lee, Donghee -
dc.contributor.author Noh, Sam H. -
dc.contributor.author Mutlu, Onur -
dc.date.accessioned 2023-12-19T22:11:55Z -
dc.date.available 2023-12-19T22:11:55Z -
dc.date.created 2016-06-08 -
dc.date.issued 2015-06-05 -
dc.description.abstract One characteristic of non-volatile memory (NVM) is that, even though it supports non-volatility, its retention capability is limited. To handle this issue, previous studies have focused on refreshing or advanced error correction code (ECC). In this paper, we take a different approach that makes use of the limited retention capability to our advantage. Specifically, we employ NVM as a file cache and devise a new scheme called amnesic cache management (ACM). The scheme is motivated by our observation that most data in a cache are evicted within a short time period after they have been entered into the cache, implying that they can be written with the relaxed retention capability. This retention relaxation can enhance the overall cache performance in terms of latency and energy since the data retention capability is proportional to the write latency. In addition, to prevent the retention relaxation from degrading the hit ratio, we estimate the future reference intervals based on the inter-reference gap (IRG) model and manage data adaptively. Experimental results with real-world workloads show that our scheme can reduce write latency by up to 40% (30% on average) and save energy consumption by up to 49% (37% on average) compared with the conventional LRU based cache management scheme. -
dc.identifier.bibliographicCitation 31st Symposium on Massive Storage Systems and Technologies, MSST 2015 -
dc.identifier.doi 10.1109/MSST.2015.7208291 -
dc.identifier.scopusid 2-s2.0-84951989209 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/35527 -
dc.identifier.url http://ieeexplore.ieee.org/document/7208291/ -
dc.language English -
dc.publisher IEEE -
dc.title Amnesic Cache Management for Non-Volatile Memory -
dc.type Conference Paper -
dc.date.conferenceDate 2015-05-30 -
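
The abstract above describes amnesic cache management (ACM) only at a high level. The following is a minimal, hypothetical sketch of the idea, not the authors' implementation: it assumes two NVM write modes (a fast relaxed-retention write and a slower fully retentive write, modeled here by the made-up constants RELAXED_RETENTION_S and FULL_RETENTION_S) and uses a simple exponentially weighted inter-reference gap (IRG) estimate to choose a write mode per cache entry. Entries whose relaxed retention window has expired are treated as lost, since a file cache can re-fetch them from backing storage.

```python
import time
from collections import OrderedDict

# Hypothetical write-mode parameters: a relaxed write is assumed to be faster
# and cheaper, but its data is only guaranteed to survive a short window.
RELAXED_RETENTION_S = 60.0       # assumed retention window for relaxed writes
FULL_RETENTION_S = float("inf")  # fully retentive (normal) writes

class AmnesicCache:
    """Toy LRU cache that picks a per-entry write mode from a simple
    inter-reference gap (IRG) estimate, in the spirit of ACM."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # key -> (value, written_at, retention)
        self.last_ref = {}             # key -> last reference timestamp
        self.irg_est = {}              # key -> exponentially averaged IRG

    def _estimate_irg(self, key, now):
        # Exponentially weighted average of observed inter-reference gaps;
        # a stand-in for the IRG model used in the paper.
        if key in self.last_ref:
            gap = now - self.last_ref[key]
            prev = self.irg_est.get(key, gap)
            self.irg_est[key] = 0.5 * prev + 0.5 * gap
        self.last_ref[key] = now
        return self.irg_est.get(key, RELAXED_RETENTION_S)

    def put(self, key, value):
        now = time.monotonic()
        irg = self._estimate_irg(key, now)
        # If the block is expected to be re-referenced (and thus rewritten or
        # evicted) well within the relaxed window, a relaxed write suffices.
        retention = RELAXED_RETENTION_S if irg < RELAXED_RETENTION_S else FULL_RETENTION_S
        self.entries[key] = (value, now, retention)
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the LRU entry

    def get(self, key):
        now = time.monotonic()
        item = self.entries.get(key)
        if item is None:
            return None
        value, written_at, retention = item
        if now - written_at > retention:
            # Retention window expired: the entry is treated as lost
            # (amnesic) and must be re-fetched from the backing store.
            del self.entries[key]
            return None
        self._estimate_irg(key, now)
        self.entries.move_to_end(key)
        return value
```

In a real system the retention window and the latency/energy trade-off would come from the NVM device characteristics, and dirty entries would need either fully retentive writes or write-back before their window expires; those details are omitted here.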

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.