Related Researcher

임동영 (Lim, Dong-Young)

Detailed Information


Full metadata record

DC Field Value Language
dc.citation.title TRANSACTIONS ON MACHINE LEARNING RESEARCH (TMLR) -
dc.citation.volume 2025 -
dc.contributor.author Bruno, Stefano -
dc.contributor.author Zhang, Ying -
dc.contributor.author Lim, Dong-Young -
dc.contributor.author Akyildiz, Ömer Deniz -
dc.contributor.author Sabanis, Sotirios -
dc.date.accessioned 2025-04-25T15:09:29Z -
dc.date.available 2025-04-25T15:09:29Z -
dc.date.created 2025-03-15 -
dc.date.issued 2025-02 -
dc.description.abstract We provide full theoretical guarantees for the convergence behaviour of diffusion-based generative models under the assumption of strongly log-concave data distributions, while our approximating class of functions used for score estimation consists of Lipschitz continuous functions, avoiding any Lipschitzness assumption on the score function itself. We demonstrate the power of our approach via a motivating example: sampling from a Gaussian distribution with unknown mean. In this case, we provide explicit estimates for the associated optimization problem, i.e. score approximation, and combine them with the corresponding sampling estimates. As a result, we obtain the best known upper bounds, in terms of key quantities of interest such as the dimension and the rate of convergence, for the Wasserstein-2 distance between the data distribution (a Gaussian with unknown mean) and our sampling algorithm. Beyond the motivating example, and in order to allow for the use of a diverse range of stochastic optimizers, we present our results under an L2-accurate score estimation assumption, which, crucially, is formed under an expectation with respect to the stochastic optimizer and our novel auxiliary process that uses only known information. This approach yields the best known convergence rate for our sampling algorithm. © 2025, Transactions on Machine Learning Research. All rights reserved. -
dc.identifier.bibliographicCitation TRANSACTIONS ON MACHINE LEARNING RESEARCH (TMLR), v.2025 -
dc.identifier.doi 10.48550/arXiv.2311.13584 -
dc.identifier.issn 2835-8856 -
dc.identifier.scopusid 2-s2.0-85219551989 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/86736 -
dc.language English -
dc.publisher OpenReview.net -
dc.title On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.type.docType Article -
dc.description.journalRegisteredClass scopus -
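The abstract's motivating example — sampling from a Gaussian with unknown mean via an estimated score — can be sketched numerically. The following is an illustrative sketch only, not the paper's algorithm: it uses the fact that for N(mu, I) the true score is -(x - mu), reduces "score estimation" to estimating mu from data, and then runs an unadjusted Langevin sampler driven by the estimated score. All variable names (mu_hat, step, etc.) are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data distribution N(mu, I) with unknown mean mu (strongly log-concave).
d = 2
mu_true = np.array([3.0, -1.0])
data = mu_true + rng.standard_normal((10_000, d))

# For N(mu, I) the score is grad log p(x) = -(x - mu), so an
# L2-accurate score estimate amounts to estimating mu. The paper does
# this with a stochastic optimizer over a Lipschitz function class;
# here we simply use the sample mean as a stand-in.
mu_hat = data.mean(axis=0)

def score(x):
    """Estimated score of N(mu_hat, I)."""
    return -(x - mu_hat)

# Unadjusted Langevin sampler driven by the estimated score.
x = np.zeros(d)
step = 0.01
for _ in range(5_000):
    x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(d)

print(np.round(mu_hat, 2))  # close to mu_true
```

After enough steps, the chain's law is close (in Wasserstein-2) to N(mu_hat, I), and the overall error to the data distribution combines the score-estimation error with the sampling error, mirroring the decomposition described in the abstract.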

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.