Detailed Information

Integration of reinforcement learning and model predictive control to optimize semi-batch bioreactor

Author(s)
Oh, Tae Hoon; Park, Hyun Min; Kim, Jong Woo; Lee, Jong Min
Issued Date
2022-06
DOI
10.1002/aic.17658
URI
https://scholarworks.unist.ac.kr/handle/201301/81565
Citation
AICHE JOURNAL, v.68, no.6, pp.e17658
Abstract
As the digital transformation of bioprocesses progresses, several studies have proposed data-based methods to obtain a substrate feeding strategy that minimizes the operating cost of a semi-batch bioreactor. However, a negligent application of model-free reinforcement learning (RL) is likely to fail to improve the existing control policy because the amount of available data is limited. In this article, we propose an algorithm that integrates a double-deep Q-network with model predictive control. The proposed method learns the action-value function in an off-policy fashion and solves a model-based optimal control problem in which the terminal cost is assigned by the action-value function. In a simulation study, the proposed method, a model-based method, and model-free methods are applied to an industrial-scale penicillin process. The results show that the proposed method outperforms the other methods and can learn from less data than model-free RL algorithms.
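The core idea in the abstract can be illustrated with a minimal sketch: a short-horizon MPC whose terminal cost is the value implied by a learned action-value function, V(x) = min_a Q(x, a). Everything below is a hypothetical stand-in, assuming toy scalar dynamics, a discrete feed-rate set, and a hand-made quadratic in place of the trained double-deep Q-network from the article.

```python
import itertools

# Hypothetical discrete feed-rate actions (stand-in for the substrate
# feeding decisions of the semi-batch bioreactor).
ACTIONS = [0.0, 0.5, 1.0]

def step(x, u):
    # Placeholder one-step model: the state drifts toward the feed rate.
    return 0.9 * x + 0.1 * u

def stage_cost(x, u):
    # Penalize deviation from a setpoint of 1.0 plus a small feed effort.
    return (x - 1.0) ** 2 + 0.01 * u

def q_value(x, u):
    # Stand-in for the trained double-deep Q-network: a rough
    # infinite-horizon tail cost of the quadratic stage cost.
    return (step(x, u) - 1.0) ** 2 / (1.0 - 0.81)

def terminal_cost(x):
    # Terminal cost assigned by the action-value function: V(x) = min_a Q(x, a).
    return min(q_value(x, u) for u in ACTIONS)

def mpc_action(x0, horizon=3):
    """Enumerate action sequences over the horizon and return the first
    action of the sequence minimizing stage costs plus the Q-based terminal cost."""
    best_u, best_j = None, float("inf")
    for seq in itertools.product(ACTIONS, repeat=horizon):
        x, j = x0, 0.0
        for u in seq:
            j += stage_cost(x, u)
            x = step(x, u)
        j += terminal_cost(x)
        if j < best_j:
            best_j, best_u = j, seq[0]
    return best_u
```

In the actual algorithm the Q-function is learned off-policy from operating data and the model-based problem is solved with a continuous optimizer rather than by enumeration; the sketch only shows how the learned action-value function enters the MPC objective as a terminal cost.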
Publisher
WILEY
ISSN
0001-1541
Keyword (Author)
bioprocess; deep neural network; model predictive control; optimal control; reinforcement learning
Keyword
FED-BATCH FERMENTATION; PENICILLIN PRODUCTION; STRUCTURED MODEL; BIG DATA; STABILITY
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.