Related Researcher

전명재

Jeon, Myeongjae


Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.conferencePlace Carlsbad, CA -
dc.citation.title USENIX Annual Technical Conference -
dc.contributor.author Choi, Sangjin -
dc.contributor.author Kim, Taeksoo -
dc.contributor.author Jeong, Jinwoo -
dc.contributor.author Ausavarungnirun, Rachata -
dc.contributor.author Jeon, Myeongjae -
dc.contributor.author Kwon, Youngjin -
dc.contributor.author Ahn, Jeongseob -
dc.date.accessioned 2024-01-31T20:08:48Z -
dc.date.available 2024-01-31T20:08:48Z -
dc.date.created 2022-07-18 -
dc.date.issued 2022-07-12 -
dc.description.abstract With the ever-growing demand for GPUs, most organizations allow users to share multi-GPU servers. However, we observe that memory space across GPUs is not utilized effectively when consolidating workloads with highly varying resource demands. This is because current memory management techniques were designed for individual GPUs rather than for shared multi-GPU environments.

This study introduces hierarchical unified virtual memory (HUVM), a novel approach that gives each GPU the illusion of a larger virtual memory space by incorporating the temporarily idle memory of neighboring GPUs. Because modern GPUs are connected by a fast interconnect, accessing a neighbor GPU's memory incurs lower latency than accessing host memory over PCIe. On top of HUVM, we design a new memory manager, memHarvester, to effectively and efficiently harvest the temporarily available memory of neighboring GPUs. Across diverse consolidation scenarios with DNN training and graph analytics workloads, our experiments show up to 2.71× performance improvement over the prior approach in multi-GPU environments.
-
dc.identifier.bibliographicCitation USENIX Annual Technical Conference -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/75709 -
dc.identifier.url https://www.usenix.org/conference/atc22/presentation/choi-sangjin -
dc.language English -
dc.publisher USENIX -
dc.title Memory Harvesting in Multi-GPU Systems with Hierarchical Unified Virtual Memory -
dc.type Conference Paper -
dc.date.conferenceDate 2022-07-11 -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.