Related Researcher

Han, Seungyul
Machine Learning & Intelligent Control Lab.

Detailed Information

Dimension-wise importance sampling weight clipping for sample-efficient reinforcement learning

Author(s)
Han, Seungyul; Sung, Youngchul
Issued Date
2019-06-12
URI
https://scholarworks.unist.ac.kr/handle/201301/79678
Citation
International Conference on Machine Learning, pp. 4572-4584
Abstract
In importance sampling (IS)-based reinforcement learning algorithms such as Proximal Policy Optimization (PPO), IS weights are typically clipped to avoid large variance in learning. However, policy updates computed from clipped statistics induce large bias in tasks with high-dimensional action spaces, and this clipping bias makes it difficult to reuse old samples with large IS weights. In this paper, we consider PPO, a representative on-policy algorithm, and propose an improvement based on dimension-wise IS weight clipping, which separately clips the IS weight of each action dimension to avoid large bias and adaptively controls the IS weight to bound the policy update from the current policy. This technique enables efficient learning in tasks with high-dimensional action spaces and allows old samples to be reused, as in off-policy learning, to increase sample efficiency. Numerical results show that the proposed algorithm outperforms PPO and other RL algorithms on various OpenAI Gym tasks.
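
As a rough illustration of the idea described in the abstract, the sketch below contrasts PPO's single joint IS ratio with per-dimension clipping for a factorized policy. It is a minimal, hypothetical example rather than the authors' exact objective: the paper's adaptive IS-weight control is omitted, and the function and variable names are invented for illustration.

```python
import numpy as np

def ppo_objective(logp_new, logp_old, adv, eps=0.2):
    # Standard PPO: one scalar IS ratio per sample, computed from the joint
    # log-probability (sum over action dimensions for a factorized policy).
    ratio = np.exp(logp_new.sum(-1) - logp_old.sum(-1))   # shape (batch,)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.minimum(ratio * adv, clipped * adv).mean()

def dimwise_clip_objective(logp_new, logp_old, adv, eps=0.2):
    # Dimension-wise clipping (sketch): clip each action dimension's IS
    # ratio separately before recombining. With many action dimensions the
    # joint ratio easily leaves the clip range even when every per-dimension
    # ratio is close to 1, which is the bias the abstract points to.
    ratios = np.exp(logp_new - logp_old)                   # shape (batch, dim)
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps)        # clip per dimension
    joint = clipped.prod(-1)                               # recombined IS weight
    return (joint * adv).mean()
```

To see why the scalar-clipped objective saturates in high dimensions: with a 17-dimensional action space (e.g. Humanoid), per-dimension ratios of only 1.05 compound to a joint ratio of about 1.05^17 ≈ 2.3, far outside a 0.2 clip range, whereas each per-dimension ratio remains well inside it.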
Publisher
International Machine Learning Society (IMLS)

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.