Related Researcher
Han, Seungyul (한승열)
Machine Learning & Intelligent Control Lab.

Detailed Information

A Max-Min Entropy Framework for Reinforcement Learning

Author(s)
Han, Seungyul; Sung, Youngchul
Issued Date
2021-12-08
URI
https://scholarworks.unist.ac.kr/handle/201301/76467
Fulltext
https://papers.nips.cc/paper/2021/hash/d7b76edf790923bf7177f7ebba5978df-Abstract.html
Citation
Neural Information Processing Systems
Abstract
In this paper, we propose a max-min entropy framework for reinforcement learning (RL) to overcome a limitation of the soft actor-critic (SAC) algorithm, which implements maximum entropy RL in model-free, sample-based learning. Whereas maximum entropy RL guides policies toward states with high entropy in the future, the proposed max-min entropy framework aims to learn to visit states with low entropy and to maximize the entropy of those low-entropy states to promote better exploration. For general Markov decision processes (MDPs), an efficient algorithm is constructed under the proposed max-min entropy framework based on the disentanglement of exploration and exploitation. Numerical results show that the proposed algorithm yields drastic performance improvement over current state-of-the-art RL algorithms.
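For context, SAC optimizes the standard maximum-entropy RL objective, which augments the expected return with a policy-entropy bonus weighted by a temperature \(\alpha\). The first display below states that standard objective; the second is only an illustrative sketch of a max-min-style alternative, written here under the assumption of a clipped-entropy threshold \(\tau\), and is not the paper's actual formulation.

\[
J_{\text{maxent}}(\pi) \;=\; \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\!\left[\, r(s_t, a_t) \;+\; \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \,\right]
\]

\[
\max_{\pi} \;\; \mathbb{E}_{s \sim \rho_\pi}\!\left[\, \min\!\big( \mathcal{H}\big(\pi(\cdot \mid s)\big), \; \tau \big) \,\right]
\qquad \text{(illustrative sketch only; } \tau \text{ is an assumed entropy threshold)}
\]

The clipped form rewards raising entropy only at states where it currently falls below \(\tau\), which loosely mirrors the abstract's description of seeking out low-entropy states and increasing their entropy; the paper's actual algorithm additionally disentangles exploration from exploitation.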
Publisher
Neural Information Processing Systems

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.