Related Researcher

유재준

Yoo, Jaejun
Lab. of Advanced Imaging Technology


Detailed Information


Full metadata record

DC Field Value Language
dc.citation.conferencePlace SI -
dc.citation.endPage 21442 -
dc.citation.startPage 21419 -
dc.citation.title International Conference on Learning Representations -
dc.contributor.author Seo, Kyeongkook -
dc.contributor.author Han, Dong-Jun -
dc.contributor.author Yoo, Jaejun -
dc.date.accessioned 2025-12-03T14:40:20Z -
dc.date.available 2025-12-03T14:40:20Z -
dc.date.created 2025-12-03 -
dc.date.issued 2025-04-24 -
dc.description.abstract Despite recent advancements in federated learning (FL), the integration of generative models into FL has been limited due to challenges such as high communication costs and unstable training in heterogeneous data environments. To address these issues, we propose PRISM, an FL framework tailored for generative models that ensures (i) stable performance in heterogeneous data distributions and (ii) resource efficiency in terms of communication cost and final model size. The key idea of our method is to search for an optimal stochastic binary mask for a random network rather than updating the model weights, identifying a sparse subnetwork with high generative performance; i.e., a "strong lottery ticket". By communicating binary masks in a stochastic manner, PRISM minimizes communication overhead. Combined with the utilization of maximum mean discrepancy (MMD) loss and a mask-aware dynamic moving average aggregation method (MADA) on the server side, PRISM facilitates stable and strong generative capabilities by mitigating local divergence in FL scenarios. Moreover, thanks to its sparsifying characteristic, PRISM yields a lightweight model without extra pruning or quantization, making it ideal for environments such as edge devices. Experiments on MNIST, FMNIST, CelebA, and CIFAR10 demonstrate that PRISM outperforms existing methods, while maintaining privacy with minimal communication costs. PRISM is the first to successfully generate images under challenging non-IID and privacy-preserving FL environments on complex datasets, where previous methods have struggled. Our code is available at PRISM. -
dc.identifier.bibliographicCitation International Conference on Learning Representations, pp.21419 - 21442 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/88848 -
dc.language English -
dc.publisher International Conference on Learning Representations -
dc.title PRISM: PRIVACY-PRESERVING IMPROVED STOCHASTIC MASKING FOR FEDERATED GENERATIVE MODELS -
dc.type Conference Paper -
dc.date.conferenceDate 2025-04-24 -
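
The abstract's core mechanism — sampling a stochastic binary mask for a frozen random network so that only the mask, not the weights, is learned and communicated — can be illustrated with a minimal sketch. This is an assumption-laden toy (the function names, the sigmoid parameterization of mask probabilities, and the layer shapes here are illustrative, not PRISM's actual implementation):

```python
import numpy as np

def sample_binary_mask(scores, rng):
    """Sample a stochastic binary mask: each entry is Bernoulli(sigmoid(score)).

    In a mask-based FL scheme, clients would optimize these real-valued
    scores locally and communicate only binary mask samples, which is far
    cheaper than sending full-precision weight updates.
    """
    probs = 1.0 / (1.0 + np.exp(-scores))          # mask-on probabilities
    return (rng.random(scores.shape) < probs).astype(np.float32)

rng = np.random.default_rng(0)

# Frozen random weights: never updated, shared once via a common seed.
weights = rng.standard_normal((4, 4)).astype(np.float32)

# Learnable per-weight scores (hypothetical initialization).
scores = rng.standard_normal((4, 4)).astype(np.float32)

mask = sample_binary_mask(scores, rng)
effective_weights = weights * mask  # sparse subnetwork of the random network
```

Because `effective_weights` is already sparse by construction, the final model needs no extra pruning or quantization step, which is the resource-efficiency property the abstract highlights.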


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.