
Adaptive multi-model fusion learning for sparse-reward reinforcement learning

Author(s)
Park, Giseung; Jung, Whiyoung; Han, Seungyul; Choi, Sungho; Sung, Youngchul
Issued Date
2025-06
DOI
10.1016/j.neucom.2025.129748
URI
https://scholarworks.unist.ac.kr/handle/201301/86621
Citation
NEUROCOMPUTING, v.633
Abstract
In this paper, we address intrinsic reward generation for sparse-reward reinforcement learning, where the agent receives limited extrinsic feedback from the environment. Traditional approaches to intrinsic reward generation often rely on prediction errors from a single model, where the intrinsic reward is derived from the discrepancy between the model’s predicted outputs and the actual targets. This approach exploits the observation that less-visited state–action pairs typically yield higher prediction errors. We extend this framework by incorporating multiple prediction models and propose an adaptive fusion technique specifically designed for the multi-model setting. We establish and mathematically justify key axiomatic conditions that any viable fusion method must satisfy. Our adaptive fusion approach dynamically learns the best way to combine prediction errors during training, leading to improved learning performance. Numerical experiments validate the effectiveness of our method, showing significant performance gains across various tasks compared to existing approaches.
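The abstract's core idea can be sketched in code: each of several prediction models yields an error on the current state, and a learned weighting fuses those errors into a single intrinsic reward. The sketch below is a minimal, hypothetical illustration (RND-style random-feature predictors, softmax fusion weights); the names `targets`, `predictors`, and `fusion_logits` are assumptions, not the paper's actual architecture or fusion rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K fixed random "target" networks and K trainable
# "predictor" networks (RND-style). A predictor's squared error on a
# state is large for rarely visited states, so it serves as an
# intrinsic-reward signal; softmax weights over fusion_logits (adapted
# during training in the paper's method) fuse the K errors into one.
K, STATE_DIM, FEAT_DIM = 3, 4, 8
targets = [rng.normal(size=(STATE_DIM, FEAT_DIM)) for _ in range(K)]
predictors = [np.zeros((STATE_DIM, FEAT_DIM)) for _ in range(K)]
fusion_logits = np.zeros(K)  # placeholder for learned fusion parameters


def prediction_errors(state):
    """Per-model squared prediction errors for one state vector."""
    return np.array([np.mean((state @ T - state @ P) ** 2)
                     for T, P in zip(targets, predictors)])


def intrinsic_reward(state):
    """Fuse the K errors with softmax weights over fusion_logits."""
    errs = prediction_errors(state)
    w = np.exp(fusion_logits - fusion_logits.max())
    w /= w.sum()
    return float(w @ errs)


state = rng.normal(size=STATE_DIM)
r_int = intrinsic_reward(state)  # positive while predictors are untrained
```

In the paper, the fusion weights are learned adaptively during training rather than held fixed as here; this sketch only shows the structure of combining multiple prediction errors into one intrinsic reward.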
Publisher
ELSEVIER
ISSN
0925-2312
Keyword (Author)
Multiple prediction models; Neural network; Sparse-reward reinforcement learning; Adaptive fusion; Deep reinforcement learning; Intrinsic reward

