Model-based safe reinforcement learning for nonlinear systems under uncertainty with constraints tightening approach

Author(s)
Kim, Yeonsoo; Oh, Tae Hoon
Issued Date
2024-04
DOI
10.1016/j.compchemeng.2024.108601
URI
https://scholarworks.unist.ac.kr/handle/201301/81661
Citation
COMPUTERS & CHEMICAL ENGINEERING, v.183, pp.108601
Abstract
In chemical processes, safety constraints must be satisfied despite uncertainties. Reinforcement learning (RL) learns optimal control policies through interaction with the system. Recent studies have shown that well-trained controllers can improve the performance of chemical processes, but practical application requires additional schemes to satisfy the constraints. In our previous work, we proposed a model-based safe RL in which both state and input constraints can be considered by introducing barrier functions into the objective function. This study extends that model-based safe RL to handle the constraints under model-plant mismatch and stochastic disturbances. Gaussian processes are employed to predict the expectation and variance of the errors in the constraints caused by the uncertainties, and these predictions are then used to tighten the constraints with backoffs. With these adaptive backoffs, the safe RL satisfies the chance constraints while learning the optimal control policy of the uncertain nonlinear system.
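
The constraint-tightening idea summarized in the abstract can be illustrated with a short sketch: a Gaussian process models the error in a constraint caused by model-plant mismatch, and its predicted mean and standard deviation back off the nominal constraint so that a chance constraint holds with the desired probability. This is a minimal illustration only; the function names, the scikit-learn GP, and the Gaussian quantile backoff are assumptions, not the authors' implementation.

```python
# Hedged sketch of GP-based constraint tightening (backoff) for a chance
# constraint P[g(x) <= 0] >= 1 - delta. All names here are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def fit_error_gp(states, constraint_errors):
    """Fit a GP to observed constraint errors (plant value minus model prediction)."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(states, constraint_errors)
    return gp


def tightened_constraint(gp, state, g_nominal, delta=0.05):
    """Evaluate the backed-off constraint at a state, assuming a Gaussian error model."""
    mean, std = gp.predict(state.reshape(1, -1), return_std=True)
    kappa = norm.ppf(1.0 - delta)        # quantile factor for the chance level
    backoff = kappa * std[0]             # adaptive backoff from GP uncertainty
    # The learned policy is trained against this tightened value (<= 0).
    return g_nominal(state) + mean[0] + backoff
```

Because the backoff scales with the GP's predictive standard deviation, the constraint is tightened more where the model is less certain, which is how the adaptive scheme keeps the chance constraint satisfied during learning.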
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
ISSN
0098-1354
Keyword (Author)
Reinforcement learning; Gaussian process; Sontag's formula; Chance constraint; Backoff approach
Keyword
PREDICTIVE CONTROL

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.