Memory Harvesting in Multi-GPU Systems with Hierarchical Unified Virtual Memory

Author(s)
Choi, Sangjin; Kim, Taeksoo; Jeong, Jinwoo; Ausavarungnirun, Rachata; Jeon, Myeongjae; Kwon, Youngjin; Ahn, Jeongseob
Issued Date
2022-07-12
URI
https://scholarworks.unist.ac.kr/handle/201301/75709
Fulltext
https://www.usenix.org/conference/atc22/presentation/choi-sangjin
Citation
USENIX Annual Technical Conference
Abstract
With the ever-growing demand for GPUs, most organizations allow users to share multi-GPU servers. However, we observe that memory space across GPUs is not utilized effectively when consolidating various workloads that exhibit highly varying resource demands. This is because current memory management techniques were designed solely for individual GPUs rather than for shared multi-GPU environments.

This study introduces a novel approach, called hierarchical unified virtual memory (HUVM), that provides an illusion of a larger virtual memory space for GPUs by incorporating the temporarily idle memory of neighboring GPUs. Since modern GPUs are connected to each other through a fast interconnect, accessing a neighbor GPU's memory incurs lower latency than accessing host memory over PCIe. On top of HUVM, we design a new memory manager, called memHarvester, to effectively and efficiently harvest the temporarily available memory of neighbor GPUs. For diverse consolidation scenarios with DNN training and graph analytics workloads, our experiments show up to 2.71× performance improvement over the prior approach in multi-GPU environments.
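The abstract's key observation — that a neighbor GPU's memory is reachable at lower latency than host memory — rests on direct GPU-to-GPU (peer) access over the interconnect. The sketch below is not the paper's HUVM or memHarvester implementation; it is only a minimal, hedged illustration of the standard CUDA peer-access primitive that such memory harvesting builds on. Device indices 0 and 1 and the buffer names are illustrative assumptions; it requires a machine with two peer-capable GPUs.

```cuda
// Illustrative sketch only (not the paper's code): one GPU reads memory
// that physically resides on a neighbor GPU, over the GPU interconnect.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void read_remote(const int *remote, int *local, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) local[i] = remote[i];  // this load travels over the GPU-to-GPU link
}

int main() {
    const int n = 1 << 20;
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, /*device=*/0, /*peerDevice=*/1);
    if (!canAccess) { printf("no peer access between GPU 0 and GPU 1\n"); return 1; }

    int *bufOnGpu1 = nullptr, *bufOnGpu0 = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&bufOnGpu1, n * sizeof(int));  // memory borrowed from the neighbor GPU

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);         // map GPU 1's memory into GPU 0's address space
    cudaMalloc(&bufOnGpu0, n * sizeof(int));
    read_remote<<<(n + 255) / 256, 256>>>(bufOnGpu1, bufOnGpu0, n);
    cudaDeviceSynchronize();

    cudaFree(bufOnGpu0);
    cudaSetDevice(1);
    cudaFree(bufOnGpu1);
    return 0;
}
```

On NVLink-connected GPUs such a remote load is typically far cheaper than paging through host memory over PCIe, which is the latency asymmetry the HUVM design exploits.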
Publisher
USENIX


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.