Related Researcher

Jeon, Myeongjae (전명재)


Full metadata record

DC Field Value Language
dc.citation.endPage 876 -
dc.citation.number 4 -
dc.citation.startPage 863 -
dc.citation.title PROCEEDINGS OF THE VLDB ENDOWMENT -
dc.citation.volume 17 -
dc.contributor.author Kim, Taeyoon -
dc.contributor.author Park, ChanHo -
dc.contributor.author Mukimbekov, Mansur -
dc.contributor.author Hong, Heelim -
dc.contributor.author Kim, Minseok -
dc.contributor.author Jin, Ze -
dc.contributor.author Kim, Changdae -
dc.contributor.author Shin, Ji-Yong -
dc.contributor.author Jeon, Myeongjae -
dc.date.accessioned 2024-06-10T17:35:08Z -
dc.date.available 2024-06-10T17:35:08Z -
dc.date.created 2024-05-16 -
dc.date.issued 2023-12 -
dc.description.abstract Data augmentation enhances the accuracy of deep learning (DL) models by diversifying training samples through a sequence of data transformations. While recent advancements in data augmentation have demonstrated remarkable efficacy, they often rely on computationally expensive and dynamic algorithms. Unfortunately, current system optimizations, which are designed primarily to leverage CPUs, cannot effectively support these methods due to their cost and limited resource availability. To address these issues, we introduce FusionFlow, a system that cooperatively utilizes both CPUs and GPUs to accelerate the data preprocessing stage of DL training, where the data augmentation algorithms run. FusionFlow orchestrates data preprocessing tasks across CPUs and GPUs while minimizing interference with GPU-based model training. In doing so, it effectively mitigates the risk of GPU memory overflow by managing task memory allocations within the GPU-wide free space. Furthermore, FusionFlow provides a dynamic scheduling strategy for tasks with varying computational demands and reallocates compute resources on the fly to enhance training throughput for both single- and multi-GPU DL jobs. Our evaluations show that FusionFlow outperforms existing CPU-based methods by 16-285% in single-machine scenarios and requires 50-60% fewer CPUs to achieve similar training speeds compared to scaling out to compute resources on external servers. -
dc.identifier.bibliographicCitation PROCEEDINGS OF THE VLDB ENDOWMENT, v.17, no.4, pp.863 - 876 -
dc.identifier.doi 10.14778/3636218.3636238 -
dc.identifier.issn 2150-8097 -
dc.identifier.scopusid 2-s2.0-85190680820 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/82944 -
dc.identifier.wosid 001206935800020 -
dc.language English -
dc.publisher ASSOC COMPUTING MACHINERY -
dc.title FusionFlow: Accelerating Data Preprocessing for Machine Learning with CPU-GPU Cooperation -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Information Systems; Computer Science, Theory & Methods -
dc.relation.journalResearchArea Computer Science -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
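
The abstract describes FusionFlow's core idea: running data augmentation cooperatively on CPUs and GPUs so that preprocessing keeps pace with model training. As an illustration of that general pattern only (a minimal sketch, not the authors' implementation; the function names and parameters below are hypothetical), the following PyTorch snippet keeps light preprocessing in CPU DataLoader workers and applies a heavier augmentation in one batched pass on the training device:

```python
# Hypothetical sketch of CPU-GPU cooperative preprocessing; this is NOT
# FusionFlow's code, only the general pattern the abstract describes.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset


def gpu_augment(batch: torch.Tensor) -> torch.Tensor:
    # Heavy augmentations applied in one batched pass on the device the
    # batch lives on (the GPU during training) instead of per-sample on CPUs.
    if torch.rand(()) < 0.5:                    # random horizontal flip
        batch = torch.flip(batch, dims=[-1])
    noise = torch.randn_like(batch) * 0.05      # stand-in for a costly transform
    return (batch + noise).clamp(0.0, 1.0)


def main() -> None:
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Stand-in dataset: 1024 random 3x32x32 "images" with 10 classes.
    images = torch.rand(1024, 3, 32, 32)
    labels = torch.randint(0, 10, (1024,))
    # CPU side of the pipeline: DataLoader workers run on CPUs as usual.
    loader = DataLoader(TensorDataset(images, labels),
                        batch_size=64, num_workers=2)

    model = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(3 * 32 * 32, 10)).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for x, y in loader:
        x = x.to(device, non_blocking=True)
        y = y.to(device, non_blocking=True)
        x = gpu_augment(x)                      # augmentation runs on the GPU
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    main()
```

The sketch only shows why batched GPU-side augmentation can relieve CPU-bound preprocessing; the paper's system additionally schedules tasks dynamically across CPUs and GPUs and bounds augmentation memory within the GPU's free space to avoid interfering with training.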
