A model-based deep reinforcement learning method applied to finite-horizon optimal control of nonlinear control-affine system

Author(s)
Kim, Jong Woo; Park, Byung Jun; Yoo, Haeun; Oh, Tae Hoon; Lee, Jay H.; Lee, Jong Min
Issued Date
2020-03
DOI
10.1016/j.jprocont.2020.02.003
URI
https://scholarworks.unist.ac.kr/handle/201301/81581
Citation
JOURNAL OF PROCESS CONTROL, v.87, pp.166-178
Abstract
The Hamilton-Jacobi-Bellman (HJB) equation can be solved to obtain optimal closed-loop control policies for general nonlinear systems. As it is seldom possible to solve the HJB equation exactly for nonlinear systems, either analytically or numerically, methods that build approximate solutions through simulation-based learning have been studied under various names such as neurodynamic programming (NDP) and approximate dynamic programming (ADP). The learning aspect connects these methods to reinforcement learning (RL), which also seeks to learn optimal decision policies through trial-and-error. This study develops a model-based RL method that iteratively learns the solution to the HJB equation and its associated equations. We focus particularly on the control-affine system with a quadratic objective function and the finite-horizon optimal control (FHOC) problem with time-varying reference trajectories. The HJB solutions for such systems involve time-varying value, costate, and policy functions subject to boundary conditions. To represent the time-varying HJB solution in a high-dimensional state space in a general and efficient way, deep neural networks (DNNs) are employed. It is shown that the use of DNNs, compared to shallow neural networks (SNNs), can significantly improve the performance of the learned policy in the presence of uncertain initial states and state noise. Examples involving a batch chemical reactor and a one-dimensional diffusion-convection-reaction system are used to demonstrate this and other key aspects of the method.
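
For orientation, the FHOC problem and HJB equation referred to in the abstract take roughly the following form for a control-affine system with a quadratic control penalty; the notation here is generic and assumed for illustration, and the paper's time-varying reference-tracking terms are omitted:

\[
\min_{u(\cdot)} \; J = \phi\big(x(t_f)\big) + \int_{t}^{t_f} \big( q(x) + u^{\top} R\, u \big)\, d\tau ,
\qquad \dot{x} = f(x) + g(x)\, u ,
\]
\[
-\frac{\partial V}{\partial t} = \min_{u} \Big[\, q(x) + u^{\top} R\, u + \nabla_{x} V^{\top} \big( f(x) + g(x)\, u \big) \Big],
\qquad V\big(x, t_f\big) = \phi(x),
\]
with the minimizing control available in closed form,
\[
u^{*}(x, t) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla_{x} V(x, t).
\]

This closed-form minimizer is what makes the control-affine, quadratic-penalty case well suited to learning-based approximation of the value, costate, and policy functions.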
Publisher
ELSEVIER SCI LTD
ISSN
0959-1524
Keyword (Author)
Reinforcement learning; Approximate dynamic programming; Deep neural networks; Globalized dual heuristic programming; Finite horizon optimal control problem; Hamilton-Jacobi-Bellman equation
Keyword
APPROXIMATE OPTIMAL-CONTROL
