Detailed Information


Langevin Dynamics Based Algorithm e-THεO POULA for Stochastic Optimization Problems with Discontinuous Stochastic Gradient

Author(s)
Lim, Dong-Young; Neufeld, Ariel; Sabanis, Sotirios; Zhang, Ying
Issued Date
2025-08
DOI
10.1287/moor.2022.0307
URI
https://scholarworks.unist.ac.kr/handle/201301/84016
Citation
MATHEMATICS OF OPERATIONS RESEARCH, v.50, no.3, pp.1585 - 2432
Abstract
We introduce a new Langevin dynamics based algorithm, called the extended tamed hybrid ε-order polygonal unadjusted Langevin algorithm (e-THεO POULA), to solve optimization problems with discontinuous stochastic gradients, which naturally appear in real-world applications such as quantile estimation, vector quantization, conditional value at risk (CVaR) minimization, and regularized optimization problems involving rectified linear unit (ReLU) neural networks. We demonstrate both theoretically and numerically the applicability of the e-THεO POULA algorithm. More precisely, under the conditions that the stochastic gradient is locally Lipschitz in average and satisfies a certain convexity at infinity condition, we establish nonasymptotic error bounds for e-THεO POULA in Wasserstein distances and provide a nonasymptotic estimate for the expected excess risk, which can be controlled to be arbitrarily small. Three key applications in finance and insurance are provided, namely, multiperiod portfolio optimization, transfer learning in multiperiod portfolio optimization, and insurance claim prediction, which involve neural networks with (Leaky)ReLU activation functions. Numerical experiments conducted using real-world data sets illustrate the superior empirical performance of e-THεO POULA compared with SGLD (stochastic gradient Langevin dynamics), TUSLA (tamed unadjusted stochastic Langevin algorithm), adaptive moment estimation (Adam), and Adaptive Moment Estimation with a Strongly Non-Convex Decaying Learning Rate in terms of model accuracy.
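The abstract describes combining a taming step, which bounds otherwise superlinearly growing or discontinuous stochastic gradients, with a Langevin (noisy gradient) update. As a simplified sketch only (not the authors' e-THεO POULA; the taming rule, step size, inverse temperature, and the quantile-estimation objective are illustrative assumptions), a generic tamed stochastic-gradient Langevin iteration applied to quantile estimation, one of the applications named above, might look like:

```python
import numpy as np

# Illustrative sketch: tamed stochastic-gradient Langevin dynamics for
# quantile estimation. The pinball-loss stochastic gradient is
# discontinuous in the parameter, the setting the abstract targets.
rng = np.random.default_rng(0)
q = 0.9            # target quantile level
lam = 1e-2         # step size (lambda)
beta = 1e8         # inverse temperature; large beta -> nearly deterministic
theta = 0.0        # running quantile estimate

for _ in range(50_000):
    x = rng.standard_normal()              # one sample from the data stream
    # stochastic (sub)gradient of the pinball loss: jumps at x = theta
    g = (1.0 - q) if x < theta else -q
    g_tamed = g / (1.0 + lam * abs(g))     # taming keeps each update bounded
    noise = np.sqrt(2.0 * lam / beta) * rng.standard_normal()
    theta = theta - lam * g_tamed + noise  # Langevin step: drift plus noise

print(theta)  # drifts toward the 0.9-quantile of N(0, 1), roughly 1.28
```

The taming factor `1 / (1 + lam * |g|)` is one common choice from the tamed-algorithm literature; it leaves small gradients nearly unchanged while capping large ones, which is what allows convergence guarantees without global Lipschitz assumptions.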
Publisher
INFORMS
ISSN
0364-765X
Keyword (Author)
nonconvex stochastic optimization; nonasymptotic convergence bound; Langevin dynamics based algorithm; discontinuous stochastic gradient; artificial neural networks; ReLU activation function; taming technique; superlinearly growing coefficients
Keyword
DEPENDENT DATA STREAMS; STRONG-CONVERGENCE


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.