Detailed Information

Sibylla: To Retry or Not To Retry on Deep Learning Job Failure

Author(s)
Kim, Taeyoon; Jeong, Suyeon; Lee, Jongseop; Lee, Soobee; Jeon, Myeongjae
Issued Date
2022-07-11
URI
https://scholarworks.unist.ac.kr/handle/201301/75713
Fulltext
https://www.usenix.org/conference/atc22/presentation/kim-taeyoon
Citation
USENIX Annual Technical Conference
Abstract
GPUs are highly contended resources in shared clusters for deep learning (DL) training. However, our analysis with a real-world trace reveals that a non-negligible number of jobs running on the cluster undergo failures and are blindly retried by the job scheduler. Unfortunately, these job failures often repeat and waste GPU resources, limiting effective GPU utilization across the cluster. In this paper, we introduce Sibylla, which predicts whether an observed DL training failure will repeat upon retry. Sibylla employs an RNN-based machine learning model that trains on the stdout and stderr logs of failed jobs and can continuously update itself on new log messages without hand-constructed labels for the new training samples. With Sibylla, the job scheduler becomes learning-enhanced, retrying a failed job only when the retry is highly likely to succeed. We evaluate the effectiveness of Sibylla under a variety of scenarios using trace-driven simulations. Sibylla improves cluster utilization and reduces job completion time (JCT) by up to 15%.
Publisher
USENIX
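
To make the abstract's design concrete, here is a minimal sketch of the two pieces it describes: an RNN classifier over the tokenized stdout/stderr logs of a failed job that estimates whether a retry would succeed, and a retry gate a scheduler could consult. All names (`FailureClassifier`, `should_retry`), the tokenization scheme, and the 0.5 threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: RNN-based failure classifier + retry gate (assumed design,
# not Sibylla's actual code).
import torch
import torch.nn as nn


class FailureClassifier(nn.Module):
    """LSTM over integer-encoded tokens from a failed job's stdout/stderr."""

    def __init__(self, vocab_size: int, embed_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # scores P(retry succeeds)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer-encoded log tokens
        emb = self.embed(token_ids)
        _, (h_n, _) = self.rnn(emb)  # final hidden state summarizes the log
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)


def should_retry(model: FailureClassifier, token_ids: torch.Tensor,
                 threshold: float = 0.5) -> bool:
    """Retry gate: schedule a retry only if predicted success clears the bar."""
    model.eval()
    with torch.no_grad():
        return model(token_ids.unsqueeze(0)).item() >= threshold
```

In this sketch, a scheduler would call `should_retry` on each job failure instead of retrying blindly; jobs whose logs indicate a repeatable failure (e.g., a persistent configuration error) are dropped early, freeing GPUs for jobs that can actually make progress.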
