Related Researcher

Han, Seungyul (한승열)
Machine Learning & Intelligent Control Lab.

Detailed Information


AMBER: Adaptive Multi-Batch Experience Replay for Continuous Action Control

Author(s)
Han, Seungyul; Sung, Youngchul
Issued Date
2019-08-11
URI
https://scholarworks.unist.ac.kr/handle/201301/79408
Fulltext
http://surl.tirl.info/?p=program&y=2019
Citation
International Joint Conference on Artificial Intelligence
Abstract
In this paper, a new adaptive multi-batch experience replay scheme is proposed for proximal policy optimization (PPO) in continuous action control. In contrast to the original PPO, which updates the policy using only the most recent batch, the proposed scheme also uses batch samples collected under past policies to compute the next policy update, and the number of past batches used is determined adaptively from the age of each batch, measured by its average importance sampling (IS) weight. The new algorithm, obtained by combining PPO with the proposed multi-batch experience replay scheme, retains the advantages of the original PPO, such as random mini-batch sampling and small bias due to low IS weights, by storing the pre-computed advantages and values and adaptively determining the mini-batch size. Numerical results show that the proposed method significantly improves the speed and stability of convergence on various continuous control tasks compared to the original PPO.
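
The batch-selection rule described in the abstract can be illustrated in a few lines. The Python below is a minimal sketch, assuming a particular data layout for stored batches, a deviation-from-1 test on the average IS weight, and a tolerance value of 0.1; these names and values are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def average_is_weight(logp_current, logp_behavior):
    # Empirical average IS weight of a stored batch:
    # mean over samples of pi_current(a_i | s_i) / pi_behavior(a_i | s_i).
    return float(np.mean(np.exp(logp_current - logp_behavior)))

def select_replay_batches(stored_batches, current_logp_fn, tolerance=0.1):
    # Replay the most recent past batches whose average IS weight stays
    # within `tolerance` of 1, and treat the first batch that fails the
    # test (and everything older) as stale. Each stored batch is assumed
    # to carry the log-probs of the policy that collected it, alongside
    # its pre-computed advantages and values as the abstract describes.
    selected = []
    for batch in stored_batches:  # ordered newest -> oldest
        w = average_is_weight(
            current_logp_fn(batch["states"], batch["actions"]),
            batch["logp_behavior"],
        )
        if abs(w - 1.0) > tolerance:
            break  # too old relative to the current policy
        selected.append(batch)
    return selected

if __name__ == "__main__":
    base = np.full(64, -1.0)  # toy log-probs under the current policy
    fresh = {"states": None, "actions": None, "logp_behavior": base.copy()}
    stale = {"states": None, "actions": None, "logp_behavior": base - 0.8}
    current_logp = lambda states, actions: base
    kept = select_replay_batches([fresh, stale], current_logp)
    print(len(kept))  # 1: only the fresh batch passes the IS-weight test

Scaling the mini-batch size with the number of replayed batches, so that the number of gradient updates per iteration stays roughly constant, would be one way to realize the adaptive mini-batch size the abstract mentions.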
Publisher
IJCAI

