Reliability of Large Scale GPU Clusters for Deep Learning Workloads
- Qian, Junjie; Kim, Taeyoon; Jeon, Myeongjae
- Association for Computing Machinery, Inc
- International World Wide Web Conference, pp. 179-181
- Recent advances in deep learning have made GPU clusters popular as training platforms. In this paper, we study reliability issues in such clusters, focusing on training job failures identified by analyzing logs collected from deep learning workloads running on a large-scale production GPU cluster. Based on their sources, these failures fall into two broad categories, infrastructure and user, and reveal diverse root causes. Drawing on insights from this failure analysis, we suggest several ways to improve the stability of shared GPU clusters designed for DL training and to optimize user experience by reducing failure occurrences.
- Appears in Collections:
- CSE_Conference Papers
- Files in This Item:
- There are no files associated with this item.