Related Researcher

한승열

Han, Seungyul
Machine Learning & Intelligent Control Lab.


Full metadata record

dc.citation.endPage: 129748
dc.citation.title: NEUROCOMPUTING
dc.citation.volume: 633
dc.contributor.author: Park, Giseung
dc.contributor.author: Jung, Whiyoung
dc.contributor.author: Han, Seungyul
dc.contributor.author: Choi, Sungho
dc.contributor.author: Sung, Youngchul
dc.date.accessioned: 2025-04-25T15:05:37Z
dc.date.available: 2025-04-25T15:05:37Z
dc.date.created: 2025-03-10
dc.date.issued: 2025-06
dc.description.abstract: In this paper, we address intrinsic reward generation for sparse-reward reinforcement learning, where the agent receives limited extrinsic feedback from the environment. Traditional approaches to intrinsic reward generation often rely on prediction errors from a single model, where the intrinsic reward is derived from the discrepancy between the model's predicted outputs and the actual targets. This approach exploits the observation that less-visited state–action pairs typically yield higher prediction errors. We extend this framework by incorporating multiple prediction models and propose an adaptive fusion technique specifically designed for the multi-model setting. We establish and mathematically justify key axiomatic conditions that any viable fusion method must satisfy. Our adaptive fusion approach dynamically learns the best way to combine prediction errors during training, leading to improved learning performance. Numerical experiments validate the effectiveness of our method, showing significant performance gains across various tasks compared to existing approaches.
dc.identifier.bibliographicCitation: NEUROCOMPUTING, v.633
dc.identifier.doi: 10.1016/j.neucom.2025.129748
dc.identifier.issn: 0925-2312
dc.identifier.scopusid: 2-s2.0-85218883705
dc.identifier.uri: https://scholarworks.unist.ac.kr/handle/201301/86621
dc.identifier.wosid: 001441017900001
dc.language: English
dc.publisher: ELSEVIER
dc.title: Adaptive multi-model fusion learning for sparse-reward reinforcement learning
dc.type: Article
dc.description.isOpenAccess: FALSE
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalResearchArea: Computer Science
dc.type.docType: Article
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.subject.keywordAuthor: Multiple prediction models
dc.subject.keywordAuthor: Neural network
dc.subject.keywordAuthor: Sparse-reward reinforcement learning
dc.subject.keywordAuthor: Adaptive fusion
dc.subject.keywordAuthor: Deep reinforcement learning
dc.subject.keywordAuthor: Intrinsic reward
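The abstract above describes intrinsic rewards derived from the prediction errors of multiple models, combined by an adaptive fusion rule. A minimal toy sketch of that idea is given below; it is not the authors' implementation, and the specific choices here (running-mean "predictors", softmax-weighted fusion, fixed fusion weights) are illustrative assumptions only:

```python
import math
import random

class ErrorModel:
    """Toy predictor: keeps one estimate per state and reports the
    squared prediction error, which shrinks as a state is revisited.
    This stands in for a learned prediction network."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.estimates = {}  # state -> current prediction

    def error(self, state, target):
        pred = self.estimates.get(state, self.rng.uniform(-1.0, 1.0))
        err = (target - pred) ** 2
        # Move the prediction halfway toward the target (a crude
        # stand-in for one gradient step on the prediction loss).
        self.estimates[state] = pred + 0.5 * (target - pred)
        return err

def fuse(errors, weights):
    """Fuse per-model errors with softmax-normalized weights: one
    simple combination that is non-negative and increases with each
    model's error. In a real agent the weights would be learned."""
    exps = [math.exp(w) for w in weights]
    z = sum(exps)
    return sum((x / z) * e for x, e in zip(exps, errors))

models = [ErrorModel(seed=k) for k in range(3)]
weights = [0.0, 0.0, 0.0]  # fixed here; adaptive in the paper's setting

target = 0.7  # stand-in for the true feature of a state "s0"
r_first = fuse([m.error("s0", target) for m in models], weights)
r_later = fuse([m.error("s0", target) for m in models], weights)
# Revisiting the same state yields a smaller intrinsic reward,
# which is the exploration signal the abstract describes.
print(r_later < r_first)  # True
```

Each predictor's error contracts by a factor of 0.25 on every revisit of the same state, so the fused intrinsic reward decays for familiar states while staying high for novel ones.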


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.