Borkar, Vivek S.; Chandak, Siddharth (2021). Prospect-theoretic Q-learning. Systems & Control Letters, 156, 105009. ISSN 0167-6911
Full text not available from this repository.
Official URL: http://doi.org/10.1016/j.sysconle.2021.105009
Abstract
We consider a prospect-theoretic version of the classical Q-learning algorithm for discounted reward Markov decision processes, wherein the controller perceives a distorted and noisy future reward, modeled by a nonlinearity that accentuates gains and under-represents losses relative to a reference point. We analyze the asymptotic behavior of the scheme through its limiting differential equation, using the theory of monotone dynamical systems. Specifically, we show convergence to equilibria and establish some qualitative facts about the equilibria themselves.
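To make the setting concrete, the following is a minimal sketch of a tabular Q-learning step in which the sampled one-step target is passed through a piecewise utility that accentuates gains and under-represents losses relative to a reference point. The utility shape, its parameters, and the exact placement of the nonlinearity (`prospect_utility`, `pt_q_update`, `k_gain`, `k_loss`) are illustrative assumptions, not the paper's precise specification.

```python
import numpy as np

def prospect_utility(x, ref=0.0, alpha=0.88, beta=0.88, k_gain=1.5, k_loss=0.5):
    """Piecewise utility around a reference point (illustrative assumption):
    gains (x >= ref) are amplified by k_gain, losses shrunk by k_loss,
    mirroring the abstract's accentuated gains / under-represented losses."""
    d = np.asarray(x, dtype=float) - ref
    gain = k_gain * np.abs(d) ** alpha
    loss = -k_loss * np.abs(d) ** beta
    return ref + np.where(d >= 0.0, gain, loss)

def pt_q_update(Q, s, a, r, s_next, gamma=0.9, lr=0.1, u=prospect_utility):
    """One tabular Q-learning step with the distortion u applied to the
    sampled one-step target (an assumed placement of the nonlinearity)."""
    target = u(r + gamma * np.max(Q[s_next]))
    Q[s, a] += lr * (target - Q[s, a])
    return Q

# Tiny usage example on a 3-state, 2-action Q-table.
Q = np.zeros((3, 2))
Q = pt_q_update(Q, s=0, a=1, r=1.0, s_next=2)
```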
| Item Type | Article |
|---|---|
| Source | Copyright of this article belongs to Elsevier Science. |
| ID Code | 135129 |
| Deposited On | 19 Jan 2023 07:31 |
| Last Modified | 19 Jan 2023 07:31 |