Related Researcher

Cho, Kyung Hwa (조경화)
Water-Environmental Informatics Lab.

Full metadata record

DC Field Value Language
dc.citation.number 2 -
dc.citation.startPage 136364 -
dc.citation.title CHEMOSPHERE -
dc.citation.volume 308 -
dc.contributor.author Park, Sanghun -
dc.contributor.author Shim, Jaegyu -
dc.contributor.author Yoon, Nakyung -
dc.contributor.author Lee, Sungman -
dc.contributor.author Kwak, Donggeun -
dc.contributor.author Lee, Seungyong -
dc.contributor.author Kim, Young Mo -
dc.contributor.author Son, Moon -
dc.contributor.author Cho, Kyung Hwa -
dc.date.accessioned 2023-12-21T13:15:34Z -
dc.date.available 2023-12-21T13:15:34Z -
dc.date.created 2022-10-27 -
dc.date.issued 2022-12 -
dc.description.abstract Enhancing engineering efficiency and reducing operating costs are perennial challenges facing engineers worldwide. To effectively improve the performance of filtration systems, it is necessary to determine optimal operating conditions beyond conventional periodic and empirical operation. This paper proposes an effective approach to finding an optimal operating strategy using deep reinforcement learning (DRL), particularly for an ultrafiltration (UF) system. A deep learning model based on long short-term memory was developed to represent the UF system and provided the environment for DRL. The DRL agent was designed to control three actions: operating pressure, cleaning time, and cleaning concentration. Ultimately, DRL led the UF system to actively change the operating pressure and cleaning conditions over time toward better water productivity and operating efficiency. DRL indicated that approximately 20.9% of specific energy consumption can be saved by increasing the average water flux (39.5 to 43.7 L m⁻² h⁻¹) and reducing the operating pressure (0.617 to 0.540 bar). Moreover, the optimal actions of DRL reasonably achieved performance beyond conventional operation. Crucially, this study demonstrated that, owing to the nature of DRL, the approach is tractable for engineering systems with structurally complex relationships among operating conditions and outcomes. -
dc.identifier.bibliographicCitation CHEMOSPHERE, v.308, no.2, pp.136364 -
dc.identifier.doi 10.1016/j.chemosphere.2022.136364 -
dc.identifier.issn 0045-6535 -
dc.identifier.scopusid 2-s2.0-85138053409 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/59906 -
dc.identifier.wosid 000864635900003 -
dc.language English -
dc.publisher PERGAMON-ELSEVIER SCIENCE LTD -
dc.title Deep reinforcement learning in an ultrafiltration system: Optimizing operating pressure and chemical cleaning conditions -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Environmental Sciences -
dc.relation.journalResearchArea Environmental Sciences & Ecology -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Deep reinforcement learning -
dc.subject.keywordAuthor Machine learning -
dc.subject.keywordAuthor Ultrafiltration -
dc.subject.keywordAuthor Chemical cleaning -
dc.subject.keywordAuthor Optimization -
dc.subject.keywordPlus WATER -
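
The abstract describes an LSTM model of the UF system serving as the training environment for a DRL agent that controls three actions: operating pressure, cleaning time, and cleaning concentration. Below is a minimal sketch of that setup, not the authors' implementation: the network sizes, the reward (flux minus a pressure penalty, as a stand-in for specific energy consumption), and the toy training loop, which backpropagates the reward through the differentiable surrogate rather than running a full DRL algorithm, are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout, not the authors' code):
# an LSTM surrogate of the UF system acts as the environment, and a small
# policy network proposes the three control actions.
import torch
import torch.nn as nn

N_ACTIONS = 3     # operating pressure, cleaning time, cleaning concentration
HIDDEN = 64       # assumed network width
HORIZON = 24      # assumed number of control steps per episode

class UFSurrogate(nn.Module):
    """LSTM mapping an action sequence to predicted water flux (the environment)."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_ACTIONS, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, 1)

    def forward(self, actions):            # actions: (batch, T, N_ACTIONS)
        h, _ = self.lstm(actions)
        return self.head(h).squeeze(-1)    # predicted flux per step: (batch, T)

class Policy(nn.Module):
    """Maps the previous action (a stand-in observation) to the next action in [0, 1]^3."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ACTIONS, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, N_ACTIONS), nn.Sigmoid())

    def forward(self, obs):
        return self.net(obs)

def reward(flux, pressure, w_energy=0.5):
    # Illustrative trade-off: reward water productivity, penalize operating
    # pressure as a rough proxy for specific energy consumption.
    return flux - w_energy * pressure

surrogate = UFSurrogate()                  # in practice, pre-trained on plant data
policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(10):                  # toy optimization loop
    obs = torch.zeros(1, N_ACTIONS)
    actions = []
    for _ in range(HORIZON):
        a = policy(obs)                    # differentiable action
        actions.append(a)
        obs = a.detach()
    seq = torch.stack(actions, dim=1)      # (1, HORIZON, N_ACTIONS)
    flux = surrogate(seq)
    loss = -reward(flux, seq[..., 0]).mean()  # maximize reward through the surrogate
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's actual workflow, the LSTM surrogate would first be fitted to operating data and a proper DRL algorithm would explore the action space; the sketch only illustrates how a learned surrogate lets a controller be optimized without experimenting on the physical system.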

