Related Researcher

Han, Seungyul (한승열)
Machine Learning & Intelligent Control Lab.

Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration

Author(s)
Han, Seungyul; Sung, Youngchul
Issued Date
2021-07-20
URI
https://scholarworks.unist.ac.kr/handle/201301/77155
Fulltext
https://icml.cc/virtual/2021/spotlight/10270
Citation
International Conference on Machine Learning
Abstract
In this paper, sample-aware policy entropy regularization is proposed to enhance conventional policy entropy regularization for better exploration. Exploiting the sample distribution obtainable from the replay buffer, the proposed sample-aware entropy regularization maximizes the entropy of the weighted sum of the policy action distribution and the sample action distribution from the replay buffer, yielding sample-efficient exploration. A practical algorithm named diversity actor-critic (DAC) is developed by applying policy iteration to the objective function with the proposed sample-aware entropy regularization. Numerical results show that DAC significantly outperforms recent reinforcement learning algorithms.
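
The abstract describes the regularizer only at a high level. As a rough illustration, the sketch below computes the sample-aware entropy term for a discrete action space, assuming the replay buffer's empirical action distribution has already been estimated. The names (sample_aware_entropy, policy_probs, buffer_probs, alpha) are illustrative assumptions rather than the authors' code; the full DAC algorithm additionally handles continuous actions and applies policy iteration to the resulting objective, which this sketch omits.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete probability distribution (in nats)."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def sample_aware_entropy(policy_probs, buffer_probs, alpha=0.5):
    """Entropy of the mixture alpha * pi + (1 - alpha) * q, where pi is the
    policy action distribution and q is the empirical action distribution
    estimated from the replay buffer (names are illustrative)."""
    mixture = alpha * policy_probs + (1.0 - alpha) * buffer_probs
    return entropy(mixture)

# Example with 3 discrete actions: a policy that concentrates on actions
# already common in the buffer yields a lower mixture entropy than one
# that covers under-represented actions.
pi_common = np.array([0.7, 0.2, 0.1])   # mimics the buffer distribution
pi_novel  = np.array([0.1, 0.2, 0.7])   # favors the rare action
q_buffer  = np.array([0.6, 0.3, 0.1])   # empirical buffer distribution

print(sample_aware_entropy(pi_common, q_buffer))  # ~0.86 nats
print(sample_aware_entropy(pi_novel, q_buffer))   # ~1.08 nats
```

Maximizing this mixture entropy rewards the policy for placing probability on actions that are rare in the replay buffer, which is the sample-efficient exploration effect the abstract refers to.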
Publisher
International Conference on Machine Learning

