Related Researcher

Han, Seungyul (한승열)
Machine Learning & Intelligent Control Lab.

Detailed Information

Exclusively Penalized Q-learning for Offline Reinforcement Learning

Author(s)
Yeom, Junghyuk; Jo, Yonghyeon; Kim, Jeongmo; Lee, Sanghyeon; Han, Seungyul
Issued Date
2024-12-13
URI
https://scholarworks.unist.ac.kr/handle/201301/85291
Citation
Neural Information Processing Systems
Abstract
Constraint-based offline reinforcement learning (RL) constrains the policy or imposes penalties on the value function to mitigate overestimation errors caused by distributional shift. This paper focuses on a limitation of existing offline RL methods with penalized value functions: unnecessary penalties can introduce underestimation bias into the value function. To address this concern, we propose Exclusively Penalized Q-learning (EPQ), which reduces estimation bias in the value function by selectively penalizing states that are prone to inducing estimation errors. Numerical results show that our method significantly reduces underestimation bias and improves performance on various offline control tasks compared to other offline RL methods.
Publisher
Neural Information Processing Systems
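
Below is a minimal sketch (in Python/PyTorch) of the selective-penalty idea described in the abstract: the Q-target is penalized only for state-action pairs deemed likely to cause estimation errors. The names behavior_density, tau, and penalty_coef are illustrative assumptions, not the paper's actual formulation, and the true EPQ update may differ in details the abstract does not give.

    import torch

    def penalized_target(reward, next_state, next_action, done,
                         q_target, behavior_density,
                         gamma=0.99, penalty_coef=1.0, tau=0.1):
        # q_target: target critic network.
        # behavior_density: an assumed estimate of the dataset (behavior)
        # policy's density, e.g., from a fitted generative model -- this
        # component is not specified in the abstract.
        with torch.no_grad():
            q_next = q_target(next_state, next_action)
            # Penalize exclusively where data support is weak; pairs the
            # density estimate deems in-distribution keep an unpenalized
            # target, avoiding the unnecessary underestimation bias that
            # a blanket penalty would introduce.
            ood_mask = (behavior_density(next_state, next_action) < tau).float()
            return reward + gamma * (1.0 - done) * (q_next - penalty_coef * ood_mask)

Methods that penalize the value function typically apply the penalty broadly; the mask above is one simple way to make that penalty selective, in the spirit of the abstract's description.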

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.