Related Researcher

Jeon, Myeongjae (전명재)

Detailed Information


Reliability of Large Scale GPU Clusters for Deep Learning Workloads

Author(s)
Qian, Junjie; Kim, Taeyoon; Jeon, Myeongjae
Issued Date
2021-04-19
DOI
10.1145/3442442.3452056
URI
https://scholarworks.unist.ac.kr/handle/201301/77539
Fulltext
https://dl.acm.org/doi/10.1145/3442442.3452056
Citation
International World Wide Web Conference, pp. 179-181
Abstract
Recent advances in deep learning technologies have made GPU clusters popular as training platforms. In this paper, we study reliability issues, focusing on training job failures, by analyzing logs collected from deep learning workloads running on a large-scale production GPU cluster. Based on their sources, these failures fall largely into two categories, infrastructure and user, and reveal diverse underlying causes. With insights obtained from the failure analysis, we suggest several ways to improve the stability of shared GPU clusters designed for DL training and to optimize the user experience by reducing failure occurrences.
Publisher
Association for Computing Machinery, Inc
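
To illustrate the kind of failure-source grouping the abstract describes, the following is a minimal sketch in Python. It is not the paper's actual taxonomy or log schema; the log lines, keyword lists, and category names are hypothetical assumptions used only to show how raw failure logs might be bucketed into infrastructure-side and user-side sources.

    # Illustrative sketch only: assumes a hypothetical log format and keyword
    # rules; the paper's real categorization is derived from production logs.
    from collections import Counter

    # Hypothetical failure signatures (assumed, not taken from the paper).
    INFRA_KEYWORDS = ("ecc error", "nvlink", "node unreachable", "driver crash")
    USER_KEYWORDS = ("importerror", "out of memory", "assertion failed", "permission denied")

    def classify_failure(log_line: str) -> str:
        """Assign a failure log line to a coarse source category."""
        line = log_line.lower()
        if any(k in line for k in INFRA_KEYWORDS):
            return "infrastructure"
        if any(k in line for k in USER_KEYWORDS):
            return "user"
        return "unknown"

    if __name__ == "__main__":
        sample_logs = [
            "job 1842 aborted: CUDA out of memory on worker 3",
            "job 1907 failed: node unreachable after ECC error",
            "job 1911 failed: ImportError: no module named 'apex'",
        ]
        # Tally failures per source category, mirroring the two-way grouping
        # (infrastructure vs. user) discussed in the abstract.
        print(Counter(classify_failure(line) for line in sample_logs))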
