Full metadata record

DC Field Value Language
dc.citation.startPage 462073 -
dc.citation.title JOURNAL OF CHROMATOGRAPHY A -
dc.citation.volume 1647 -
dc.contributor.author Oh, Tae Hoon -
dc.contributor.author Kim, Jong Woo -
dc.contributor.author Son, Sang Hwan -
dc.contributor.author Kim, Hosoo -
dc.contributor.author Lee, Kyungmoo -
dc.contributor.author Lee, Jong Min -
dc.date.accessioned 2024-03-13T10:05:13Z -
dc.date.available 2024-03-13T10:05:13Z -
dc.date.created 2024-03-13 -
dc.date.issued 2021-06 -
dc.description.abstract Optimal control of a simulated moving bed (SMB) process is challenging because the system dynamics are represented as nonlinear partial differential-algebraic equations combined with discrete events. In addition, the product purity constraints are active at the optimal operating condition, which implies that these constraints can easily be violated by disturbances. Recently, artificial intelligence techniques have received significant attention for their ability to address complex problems involving a large number of state variables. In this study, a data-based deep Q-network, which is a model-free reinforcement learning method, is applied to the SMB process to train a near-optimal control policy. Using a deep Q-network, the control policy of a complex dynamic system can be trained off-line as long as a sufficient amount of data is provided. These data can be generated efficiently by performing numerical simulations in parallel on multiple machines. The on-line computation of the control input using a trained Q-network is fast enough to satisfy the computational time limit of the SMB process. However, because the Q-network does not predict the future state, state constraints cannot be imposed explicitly. Instead, the state constraints are imposed indirectly by assigning a relatively large penalty (negative reward) when the constraints are violated. Furthermore, logic-based switching control is used to limit the ranges of the extract and raffinate purities, which helps to satisfy the state constraints and reduces the region of the state space that reinforcement learning needs to explore. The simulation results demonstrate the advantages of applying deep reinforcement learning to control the SMB process. (C) 2021 Elsevier B.V. All rights reserved. -
dc.identifier.bibliographicCitation JOURNAL OF CHROMATOGRAPHY A, v.1647, pp.462073 -
dc.identifier.doi 10.1016/j.chroma.2021.462073 -
dc.identifier.issn 0021-9673 -
dc.identifier.scopusid 2-s2.0-85105308757 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/81578 -
dc.identifier.wosid 000687136700001 -
dc.language English -
dc.publisher ELSEVIER -
dc.title Automatic control of simulated moving bed process with deep Q-network -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Biochemical Research Methods; Chemistry, Analytical -
dc.relation.journalResearchArea Biochemistry & Molecular Biology; Chemistry -
dc.type.docType Article; Early Access -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Simulated moving bed -
dc.subject.keywordAuthor Reinforcement learning -
dc.subject.keywordAuthor Deep neural network -
dc.subject.keywordAuthor Optimal control -
dc.subject.keywordPlus PREDICTIVE CONTROL -
dc.subject.keywordPlus CHROMATOGRAPHIC-SEPARATION -
dc.subject.keywordPlus NEURAL-NETWORKS -
dc.subject.keywordPlus MODEL -
dc.subject.keywordPlus STRATEGIES -
dc.subject.keywordPlus DESIGN -
dc.subject.keywordPlus COUNTERCURRENT -
dc.subject.keywordPlus OPERATION -
dc.subject.keywordPlus SYSTEMS -
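
The abstract above describes two mechanisms worth spelling out for readers of this record: product purity constraints are enforced only indirectly, through a large negative reward when a violation occurs, and a logic-based switching rule keeps the extract and raffinate purities inside a band so the agent has a smaller region of the state space to explore. The Python sketch below illustrates both ideas under assumed thresholds, weights, and action names; it is an illustration consistent with the abstract, not the authors' implementation.

```python
# Minimal sketch of the penalty-based reward and logic-based switching guard
# outlined in the abstract. Thresholds, weights, and action names are
# hypothetical illustrations, not taken from the paper.

PURITY_MIN = 0.99            # assumed product purity constraint for both streams
CONSTRAINT_PENALTY = -100.0  # large negative reward when a constraint is violated


def reward(extract_purity, raffinate_purity, throughput, desorbent_use):
    """Economic reward, overridden by a large penalty on constraint violation.

    The Q-network does not predict future states, so state constraints are
    discouraged only indirectly, through this reward signal.
    """
    if extract_purity < PURITY_MIN or raffinate_purity < PURITY_MIN:
        return CONSTRAINT_PENALTY
    return throughput - 0.1 * desorbent_use  # illustrative objective weighting


def switching_guard(extract_purity, raffinate_purity, q_action,
                    safe_band=(0.985, 0.999)):
    """Logic-based switching: override the learned action outside a safe band.

    Keeping purities inside the band helps satisfy the constraints and shrinks
    the region of the state space the reinforcement-learning agent must explore.
    """
    low, high = safe_band
    if min(extract_purity, raffinate_purity) < low:
        return "slow_down_switching"   # hypothetical corrective action
    if min(extract_purity, raffinate_purity) > high:
        return "increase_feed_flow"    # hypothetical corrective action
    return q_action                    # otherwise trust the trained policy
```

In the setting the abstract describes, the trained Q-network would supply q_action on-line (trained off-line from parallel simulation data), while a guard and reward shaping of this kind encode the constraint handling.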
