Related Researcher: Oh, Tae Hoon (오태훈)

Detailed Information


Full metadata record

DC Field Value Language
dc.citation.startPage 108601 -
dc.citation.title COMPUTERS & CHEMICAL ENGINEERING -
dc.citation.volume 183 -
dc.contributor.author Kim, Yeonsoo -
dc.contributor.author Oh, Tae Hoon -
dc.date.accessioned 2024-03-15T11:05:07Z -
dc.date.available 2024-03-15T11:05:07Z -
dc.date.created 2024-03-15 -
dc.date.issued 2024-04 -
dc.description.abstract In chemical processes, safety constraints must be satisfied despite uncertainties. Reinforcement learning (RL) is a class of algorithms that learn optimal control policies through interaction with the system. Recent studies have shown that well-trained controllers can improve the performance of chemical processes, but practical application requires additional schemes to satisfy the constraints. In our previous work, we proposed a model-based safe RL in which both state and input constraints can be considered by introducing barrier functions into the objective function. This study extends our previous model-based safe RL to satisfy the constraints under model-plant mismatches and stochastic disturbances. Gaussian processes are employed to predict the expectation and variance of the errors in the constraints caused by uncertainties, which are then used to tighten the constraints with backoffs. With these adaptive backoffs, the safe RL can satisfy chance constraints and learn the optimal control policy of the uncertain nonlinear system. -
dc.identifier.bibliographicCitation COMPUTERS & CHEMICAL ENGINEERING, v.183, pp.108601 -
dc.identifier.doi 10.1016/j.compchemeng.2024.108601 -
dc.identifier.issn 0098-1354 -
dc.identifier.scopusid 2-s2.0-85183625146 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/81661 -
dc.identifier.wosid 001170765800001 -
dc.language English -
dc.publisher PERGAMON-ELSEVIER SCIENCE LTD -
dc.title Model-based safe reinforcement learning for nonlinear systems under uncertainty with constraints tightening approach -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Computer Science, Interdisciplinary Applications; Engineering, Chemical -
dc.relation.journalResearchArea Computer Science; Engineering -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Reinforcement learning -
dc.subject.keywordAuthor Gaussian process -
dc.subject.keywordAuthor Sontag's formula -
dc.subject.keywordAuthor Chance constraint -
dc.subject.keywordAuthor Backoff approach -
dc.subject.keywordPlus PREDICTIVE CONTROL -
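The abstract's backoff idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: a Gaussian process models the error e(x) between the plant and model values of a constraint g(x) ≤ 0, and its predictive mean and variance tighten the constraint so that a chance constraint P(g_plant(x) ≤ 0) ≥ 1 − δ holds approximately. The kernel, length scale, and function names here are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, length=1.0):
    # Squared-exponential kernel between two 1-D input arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, e_train, x_query, noise=1e-4):
    # Standard zero-mean GP regression posterior over the constraint error.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    Kss = rbf(x_query, x_query)
    alpha = np.linalg.solve(K, e_train)
    mean = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)
    var = np.diag(Kss - Ks @ v)
    return mean, np.maximum(var, 0.0)

def tightened_constraint(g_model, x_query, x_train, e_train, delta=0.05):
    # Backoff b(x) = mu_e(x) + z_{1-delta} * sigma_e(x); the controller then
    # enforces g_model(x) + b(x) <= 0 instead of g_model(x) <= 0, so the
    # chance constraint holds approximately under the GP error model.
    mu, var = gp_predict(x_train, e_train, x_query)
    z = norm.ppf(1.0 - delta)
    backoff = mu + z * np.sqrt(var)
    return g_model(x_query) + backoff
```

Since the backoff is nonnegative wherever the predicted error mean is nonnegative, the tightened constraint is always at least as conservative as the nominal one there, which is what allows chance-constraint satisfaction despite the mismatch.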


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.