Related Researcher

윤상웅

Yoon, Sangwoong

Full metadata record

DC Field Value Language
dc.citation.conferencePlace CN -
dc.citation.conferencePlace Vancouver -
dc.citation.title Neural Information Processing Systems -
dc.contributor.author Yoon, Sangwoong -
dc.contributor.author Hwang, Himchan -
dc.contributor.author Kwon, Dohyun -
dc.contributor.author Noh, Yung-Kyun -
dc.contributor.author Park, Frank C. -
dc.date.accessioned 2026-02-23T15:47:03Z -
dc.date.available 2026-02-23T15:47:03Z -
dc.date.created 2026-02-23 -
dc.date.issued 2024-12-12 -
dc.description.abstract We present a maximum entropy inverse reinforcement learning (IRL) approach for improving the sample quality of diffusion generative models, especially when the number of generation time steps is small. Similar to how IRL trains a policy based on the reward function learned from expert demonstrations, we train (or fine-tune) a diffusion model using the log probability density estimated from training data. Since we employ an energy-based model (EBM) to represent the log density, our approach boils down to the joint training of a diffusion model and an EBM. Our IRL formulation, named Diffusion by Maximum Entropy IRL (DxMI), is a minimax problem that reaches equilibrium when both models converge to the data distribution. The entropy maximization plays a key role in DxMI, facilitating the exploration of the diffusion model and ensuring the convergence of the EBM. We also propose Diffusion by Dynamic Programming (DxDP), a novel reinforcement learning algorithm for diffusion models, as a subroutine in DxMI. DxDP makes the diffusion model update in DxMI efficient by transforming the original problem into an optimal control formulation where value functions replace back-propagation in time. Our empirical studies show that diffusion models fine-tuned using DxMI can generate high-quality samples in as few as 4 and 10 steps. Additionally, DxMI enables the training of an EBM without MCMC, stabilizing EBM training dynamics and enhancing anomaly detection performance. -
dc.identifier.bibliographicCitation Neural Information Processing Systems -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/90537 -
dc.language English -
dc.publisher Neural Information Processing Systems Foundation (NeurIPS Foundation) -
dc.title Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models -
dc.type Conference Paper -
dc.date.conferenceDate 2024-12-10 -


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.