Deep Reinforcement Learning in Continuous Action Spaces: a Case Study in the Game of Simulated Curling

Author(s)
Lee, Kyowoon; Kim, Sol-A; Choi, Jaesik; Lee, Seong-Whan
Issued Date
2018-07-11
URI
https://scholarworks.unist.ac.kr/handle/201301/81172
Citation
International Conference on Machine Learning (ICML)
Abstract
Many real-world applications of reinforcement learning require an agent to select optimal actions from continuous spaces. Recently, deep neural networks have successfully been applied to games with discrete action spaces. However, deep neural networks for discrete actions are not suitable for devising strategies for games where a very small change in an action can dramatically affect the outcome. In this paper, we present a new self-play reinforcement learning framework which incorporates a continuous search algorithm that enables searching in continuous action spaces with a kernel regression method. Without any hand-crafted features, our network is trained by supervised learning followed by self-play reinforcement learning with a high-fidelity simulator for the Olympic sport of curling. The program trained under our framework outperforms existing programs equipped with several hand-crafted features and won an international digital curling competition.
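The abstract's key idea is sharing value estimates across nearby continuous actions via kernel regression. As an illustration only (not the authors' implementation), the sketch below uses a Nadaraya-Watson estimator with a Gaussian kernel to estimate the value of an unsampled action from a few sampled ones; the function names, bandwidth, and toy data are all assumptions for this example.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """Gaussian similarity between two action vectors (assumed kernel choice)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return np.exp(-np.dot(d, d) / (2.0 * bandwidth ** 2))

def kernel_regression_value(query_action, sampled_actions, sampled_values, bandwidth=1.0):
    """Nadaraya-Watson estimate: weight each sampled action's observed value
    by its kernel similarity to the query action, then normalize."""
    weights = np.array([gaussian_kernel(query_action, a, bandwidth)
                        for a in sampled_actions])
    return float(np.dot(weights, sampled_values) / weights.sum())

# Toy example: values observed at three sampled actions in a 2-D action space.
actions = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
values = np.array([0.2, 0.9, 0.5])

# The estimate at an unsampled action interpolates between nearby observations.
est = kernel_regression_value(np.array([0.5, 0.0]), actions, values)
```

Because the kernel weights decay smoothly with distance, a small change in the query action produces a small change in the estimated value, which is what makes this kind of estimator suitable for search in continuous action spaces.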
Publisher
International Machine Learning Society
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.