File Download

There are no files associated with this item.

Related Researcher

Yoo, Jaejun

Lab. of Advanced Imaging Technology


Full metadata record

DC Field Value Language
dc.citation.conferencePlace US -
dc.citation.conferencePlace Honolulu, Hawaii -
dc.citation.title IEEE International Conference on Computer Vision -
dc.contributor.author Lee, Taehwan -
dc.contributor.author Seo, Kyeongkook -
dc.contributor.author Yoo, Jaejun -
dc.contributor.author Yoon, Sung Whan -
dc.date.accessioned 2025-11-25T15:17:34Z -
dc.date.available 2025-11-25T15:17:34Z -
dc.date.created 2025-11-09 -
dc.date.issued 2025-10-21 -
dc.description.abstract Flat minima, known to enhance generalization and robustness in supervised learning, remain largely unexplored in generative models. In this work, we systematically investigate the role of loss surface flatness in generative models, both theoretically and empirically, with a particular focus on diffusion models. We establish a theoretical claim that flatter minima improve robustness against perturbations in target prior distributions, leading to benefits such as reduced exposure bias (where errors in noise estimation accumulate over iterations) and significantly improved resilience to model quantization, preserving generative performance even under strong quantization constraints. We further observe that Sharpness-Aware Minimization (SAM), which explicitly controls the degree of flatness, effectively enhances flatness in diffusion models, whereas methods that promote flatness only indirectly, such as Input Perturbation (IP), which enforces a Lipschitz condition, and ensembling-based approaches like Stochastic Weight Averaging (SWA) and Exponential Moving Average (EMA), are less effective. Through extensive experiments on CIFAR-10, LSUN Tower, and FFHQ, we demonstrate that flat minima in diffusion models indeed improve not only generative performance but also robustness. -
dc.identifier.bibliographicCitation IEEE International Conference on Computer Vision -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/88351 -
dc.identifier.url https://openaccess.thecvf.com/content/ICCV2025/html/Lee_Understanding_Flatness_in_Generative_Models_Its_Role_and_Benefits_ICCV_2025_paper.html -
dc.publisher IEEE/CVF -
dc.title Understanding Flatness in Generative Models: Its Role and Benefits -
dc.type Conference Paper -
dc.date.conferenceDate 2025-10-19 -
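
The abstract highlights Sharpness-Aware Minimization (SAM), which explicitly seeks flat minima by taking the descent gradient at an adversarially perturbed point in weight space rather than at the current weights. The sketch below illustrates that two-step update on a hypothetical toy quadratic loss standing in for a diffusion model's noise-prediction objective; the function names and the quadratic loss are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def loss(w):
    # Toy quadratic loss with minimum at w = 1 (stand-in for a real
    # training objective; purely illustrative, not the paper's model).
    return 0.5 * np.sum((w - 1.0) ** 2)

def grad(w):
    # Analytic gradient of the toy loss above.
    return w - 1.0

def sam_step(w, lr=0.1, rho=0.05):
    """One SAM update:
    1) ascend to the approximate worst-case point within an L2 ball
       of radius rho around the current weights;
    2) descend using the gradient evaluated at that perturbed point.
    """
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sam = grad(w + eps)                        # gradient at perturbed weights
    return w - lr * g_sam

w = np.array([3.0, -2.0])
for _ in range(100):
    w = sam_step(w)
```

Because the descent direction is measured after the ascent step, parameters whose loss rises sharply under small perturbations are penalized, which is how SAM biases training toward flatter regions of the loss surface.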


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.