File Download

There are no files associated with this item.

  • Find it @ UNIST provides direct access to the published full text of this article (UNISTARs only).
Related Researcher

Kim, Gi-Soo (김지수), Statistical Decision Making

Detailed Information

Contextual multi-armed bandit algorithm for semiparametric reward model

Author(s)
Kim, Gi-Soo; Paik, Myunghee Cho
Issued Date
2019-06
URI
https://scholarworks.unist.ac.kr/handle/201301/80250
Citation
36th International Conference on Machine Learning, ICML 2019, pp. 5875-5889
Abstract
Contextual multi-armed bandit (MAB) algorithms have shown promise for maximizing cumulative rewards in sequential decision tasks such as news article recommendation, web page ad placement, and mobile health. However, most proposed contextual MAB algorithms assume a linear relationship between the reward and the context of the action. This paper proposes a new contextual MAB algorithm for a relaxed, semiparametric reward model that supports nonstationarity. The proposed method is less restrictive, easier to implement, and faster than two alternative algorithms that consider the same model, while achieving a tight regret upper bound. We prove that the high-probability upper bound of the regret incurred by the proposed algorithm has the same order as that of the Thompson sampling algorithm for linear reward models. The proposed and existing algorithms are evaluated via simulation and applied to Yahoo! News article recommendation log data.
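
For readers who want a concrete picture of the setting the abstract describes, below is a minimal, illustrative Python sketch of linear Thompson sampling with an action-centering step, the general device that lets an arm-independent (possibly nonstationary) baseline drop out of the regression update in a semiparametric reward model. Everything here is an assumption for illustration: the variable names, the Monte Carlo centering, the simulated baseline and data, and the tuning constants are not taken from the paper and this is not the authors' exact algorithm or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: context dimension, number of arms, rounds, Monte Carlo samples.
# All of these, and the simulated data below, are illustrative assumptions.
d, n_arms, T, M = 5, 4, 2000, 100
v2 = 0.25  # posterior variance inflation (a tuning parameter)

mu_true = rng.normal(size=d)  # unknown linear reward parameter (simulated)

B = np.eye(d)        # posterior precision matrix
y = np.zeros(d)      # running sum of centered-context-weighted rewards
mu_hat = np.zeros(d)

for t in range(T):
    X = rng.normal(size=(n_arms, d))   # one context vector per arm
    nu_t = np.sin(0.01 * t)            # arm-independent, nonstationary baseline

    cov = v2 * np.linalg.inv(B)

    # Thompson sampling: draw one parameter sample and play the greedy arm.
    mu_tilde = rng.multivariate_normal(mu_hat, cov)
    a = int(np.argmax(X @ mu_tilde))

    # Monte Carlo estimate of the arm-selection probabilities pi(a),
    # used to center the chosen context so the baseline nu_t cancels
    # in expectation from the regression update.
    picks = np.argmax(rng.multivariate_normal(mu_hat, cov, size=M) @ X.T, axis=1)
    pi = np.bincount(picks, minlength=n_arms) / M
    x_centered = X[a] - pi @ X

    # Observe the semiparametric reward: baseline + linear part + noise.
    r = nu_t + X[a] @ mu_true + 0.1 * rng.normal()

    # Ridge-style Bayesian update with the centered context.
    B += np.outer(x_centered, x_centered)
    y += x_centered * r
    mu_hat = np.linalg.solve(B, y)
```

The point of the centering step is that the centered context has mean zero under the arm-selection distribution, so the unknown baseline contributes zero-mean noise rather than bias to the estimate of the linear parameter; a plain linear bandit update would instead absorb the baseline into the parameter estimate and drift.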
Publisher
International Machine Learning Society (IMLS)

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.