Related Researcher

이승준

Lee, Seung Jun
Nuclear Safety Assessment and Plant HMI Evolution Lab.



Full metadata record

DC Field Value Language
dc.citation.endPage 3290 -
dc.citation.number 9 -
dc.citation.startPage 3277 -
dc.citation.title NUCLEAR ENGINEERING AND TECHNOLOGY -
dc.citation.volume 55 -
dc.contributor.author Bae, Junyong -
dc.contributor.author Kim, Jae Min -
dc.contributor.author Lee, Seung Jun -
dc.date.accessioned 2023-12-21T11:45:09Z -
dc.date.available 2023-12-21T11:45:09Z -
dc.date.created 2023-08-28 -
dc.date.issued 2023-09 -
dc.description.abstract Nuclear power plant (NPP) operations involving multiple objectives and devices are still performed manually by operators despite the potential for human error. These operations could be automated to reduce the burden on operators; however, classical approaches may not be suitable for such multi-objective tasks. An alternative is deep reinforcement learning (DRL), which has succeeded in automating various complex tasks and has been applied to automate certain operations in NPPs. Despite this progress, previous studies applying DRL to NPP operations remain limited in their ability to handle complex multi-objective operations with multiple devices efficiently. This study proposes a novel DRL-based approach that addresses these limitations by employing a continuous action space and straightforward binary rewards, supported by the adoption of a soft actor-critic and hindsight experience replay. The feasibility of the proposed approach was evaluated by controlling the pressure and volume of the reactor coolant while heating the coolant during NPP startup. The results show that the proposed approach can train the agent with a proper strategy for effectively achieving multiple objectives through the control of multiple devices. Moreover, hands-on testing results demonstrate that the trained agent can handle untrained objectives, such as cooldown, with substantial success. -
dc.identifier.bibliographicCitation NUCLEAR ENGINEERING AND TECHNOLOGY, v.55, no.9, pp.3277 - 3290 -
dc.identifier.doi 10.1016/j.net.2023.06.009 -
dc.identifier.issn 1738-5733 -
dc.identifier.scopusid 2-s2.0-85164393532 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/65308 -
dc.identifier.wosid 001047926100001 -
dc.language English -
dc.publisher Korean Nuclear Society -
dc.title Deep reinforcement learning for a multi-objective operation in a nuclear power plant -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Nuclear Science & Technology -
dc.relation.journalResearchArea Nuclear Science & Technology -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.description.journalRegisteredClass kci -
dc.subject.keywordAuthor Automation -
dc.subject.keywordAuthor Deep reinforcement learning -
dc.subject.keywordAuthor Hindsight experience replay -
dc.subject.keywordAuthor Nuclear power plant -
dc.subject.keywordAuthor Soft actor-critic -
dc.subject.keywordPlus LEVEL -
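The abstract above attributes the approach to a soft actor-critic combined with hindsight experience replay (HER) and binary rewards. A minimal sketch of HER-style goal relabeling with a sparse binary reward is shown below; the tolerance, the "future" relabeling strategy, and all names are illustrative assumptions, not the paper's actual implementation:

```python
import random

TOL = 0.05  # hypothetical tolerance for counting a goal as achieved


def binary_reward(achieved, goal, tol=TOL):
    """Sparse binary reward: 1.0 when the achieved value is within
    tolerance of the desired goal, else 0.0."""
    return 1.0 if abs(achieved - goal) < tol else 0.0


def her_relabel(episode, k=4):
    """Hindsight relabeling ('future' strategy): for each transition,
    resample up to k substitute goals from states achieved later in the
    same episode and recompute the binary reward against each new goal.
    Each episode entry is (state, action, achieved_goal, desired_goal)."""
    relabeled = []
    for t, (state, action, achieved, goal) in enumerate(episode):
        # Keep the original transition with its (usually zero) reward.
        relabeled.append((state, action, achieved, goal,
                          binary_reward(achieved, goal)))
        future = episode[t:]
        for _ in range(min(k, len(future))):
            # Pretend a later achieved state was the goal all along.
            _, _, new_goal, _ = random.choice(future)
            relabeled.append((state, action, achieved, new_goal,
                              binary_reward(achieved, new_goal)))
    return relabeled


# Toy episode where the original goal (1.0) is never reached.
episode = [
    (0.0, 0.1, 0.1, 1.0),  # (state, action, achieved_goal, desired_goal)
    (0.1, 0.1, 0.2, 1.0),
]
relabeled = her_relabel(episode, k=1)
```

Relabeling turns an episode with no reward signal into one containing successful transitions, which is what lets a sparse binary reward work with an off-policy learner such as a soft actor-critic.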


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.