Linear Quadratic Risk-Sensitive and Robust Mean Field Games

Author(s)
Moon, Jun; Basar, Tamer
Issued Date
2017-03
DOI
10.1109/TAC.2016.2579264
URI
https://scholarworks.unist.ac.kr/handle/201301/21681
Fulltext
http://ieeexplore.ieee.org/document/7488259/
Citation
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, v.62, no.3, pp.1062 - 1077
Abstract
This paper considers two classes of large population stochastic differential games connected to optimal and robust decentralized control of large-scale multiagent systems. The first problem (P1) is one where each agent minimizes an exponentiated cost function, capturing risk-sensitive behavior, whereas in the second problem (P2) each agent minimizes a worst-case risk-neutral cost function, where the "worst case" stems from the presence of an adversary entering each agent's dynamics, characterized by a stochastic differential equation. In both problems, the individual agents are coupled through the mean field term included in each agent's cost function, which captures the average or mass behavior of the agents. We solve both P1 and P2 via mean field game theory. Specifically, we first solve a generic risk-sensitive optimal control problem and a generic stochastic zero-sum differential game, where the corresponding optimal controllers are applied by each agent to construct the mean field systems of P1 and P2. We then characterize an approximated mass behavior effect on an individual agent via a fixed-point analysis of the mean field system. For each problem, P1 and P2, we show that the approximated mass behavior is in fact the best estimate of the actual mass behavior in various senses as the population size, N, goes to infinity. Moreover, we show that for finite N, there exist ε-Nash equilibria for both P1 and P2, where the corresponding individual Nash strategies are decentralized in terms of local state information and the approximated mass behavior. We also show that ε can be taken to be arbitrarily small when N is sufficiently large. We show that the ε-Nash equilibria of P1 and P2 are partially equivalent in the sense that the individual Nash strategies share identical control laws, but the approximated mass behaviors for P1 and P2 are different, since in P2, the mass behavior is also affected by the associated worst-case disturbance. Finally, we prove that the Nash equilibria for P1 and P2 both feature robustness, and as the parameter characterizing this robustness becomes infinite, the two Nash equilibria become identical and equivalent to that of the risk-neutral case, as in the one-agent risk-sensitive and robust control theory.
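
To make the distinction between the two problems concrete, the following sketch contrasts a generic LQ risk-sensitive cost (P1) with a generic worst-case robust cost (P2). It is illustrative only: the symbols (state x_i, control u_i, adversarial disturbance v_i, mean field term m, weights Q and R, risk parameter δ, robustness parameter γ) are assumed standard LQ notation and are not necessarily the paper's own.

  % Illustrative sketch, not the paper's exact formulation.
  % P1 (risk-sensitive): agent i minimizes an exponentiated quadratic cost coupled to the mean field m.
  J_i^{P1}(u_i; m) = \delta \log \mathbb{E}\!\left[\exp\!\Big(\tfrac{1}{\delta}\int_0^T \big(\|x_i(t)-m(t)\|_Q^2 + \|u_i(t)\|_R^2\big)\,dt\Big)\right]
  % P2 (robust): agent i minimizes the worst-case risk-neutral cost over a disturbance v_i entering its dynamics,
  % penalized through the soft-constraint weight \gamma^2.
  J_i^{P2}(u_i; m) = \sup_{v_i}\, \mathbb{E}\!\left[\int_0^T \big(\|x_i(t)-m(t)\|_Q^2 + \|u_i(t)\|_R^2 - \gamma^2\|v_i(t)\|^2\big)\,dt\right]

Consistent with the abstract, letting the risk or robustness parameter grow without bound recovers the risk-neutral LQG criterion in both cases, which is why the two ε-Nash equilibria coincide in that limit.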
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
ISSN
0018-9286
Keyword (Author)
Decentralized control; mean field games; risk-sensitive optimal control; stochastic zero-sum differential games
Keyword
STOCHASTIC MULTIAGENT SYSTEMS; DIFFERENTIAL-GAMES; NASH EQUILIBRIA; LQG CONTROL; COST; EQUIVALENCE PRINCIPLE; HORIZON

