Learning algorithms for Markov decision processes with average cost

Abounadi, J.; Bertsekas, D.; Borkar, V. S. (2001). Learning algorithms for Markov decision processes with average cost. SIAM Journal on Control and Optimization, 40 (3), pp. 681-698. ISSN 0363-0129

Official URL: http://www-mit.mit.edu/dimitrib/www/q_learn.pdf


This paper gives the first rigorous convergence analysis of analogues of Watkins's Q-learning algorithm, applied to average cost control of finite-state Markov chains. We discuss two algorithms that may be viewed as stochastic approximation counterparts of two existing algorithms for recursively computing the value function of the average cost problem: the traditional relative value iteration (RVI) algorithm and a recent algorithm of Bertsekas based on the stochastic shortest path (SSP) formulation of the problem. Both synchronous and asynchronous implementations are considered and analyzed using the ODE method, which involves establishing asymptotic stability of the associated ODE limits. The SSP algorithm also uses ideas from two-time-scale stochastic approximation.
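To illustrate the flavor of the RVI-type Q-learning update the abstract refers to, here is a minimal sketch on a toy MDP of my own construction (not from the paper). Each state-action value is updated toward the one-step cost plus the next state's minimal Q-value, offset by f(Q) = Q at a fixed reference pair, which serves as the running estimate of the optimal average cost. The toy MDP, the constant step size, and the reference pair (0, 0) are all illustrative assumptions; the paper itself uses diminishing step sizes and treats asynchronous updates as well.

```python
# Toy MDP (illustrative, not from the paper): two states, two actions.
# Taking action u in any state moves deterministically to state u at cost c[u],
# so the optimal policy always plays action 0 and the optimal average cost is 1.0.
c = [1.0, 2.0]
n_states, n_actions = 2, 2

Q = [[0.0] * n_actions for _ in range(n_states)]
alpha = 0.1      # constant step size for the sketch (the paper uses diminishing steps)
ref = (0, 0)     # reference pair; f(Q) = Q[0][0] estimates the optimal average cost

for _ in range(2000):
    f_Q = Q[ref[0]][ref[1]]
    # Synchronous sweep: every state-action pair is updated at once.
    # RVI Q-learning update: Q(i,u) += alpha * (cost + min_v Q(j,v) - f(Q) - Q(i,u)),
    # where j is the (here deterministic) next state j = u.
    Q = [[Q[i][u] + alpha * (c[u] + min(Q[u]) - f_Q - Q[i][u])
          for u in range(n_actions)] for i in range(n_states)]

avg_cost_estimate = Q[ref[0]][ref[1]]   # approaches the optimal average cost, 1.0 here
greedy = [min(range(n_actions), key=lambda u: Q[i][u]) for i in range(n_states)]
print(avg_cost_estimate, greedy)
```

On this deterministic example the iteration contracts geometrically, so the average cost estimate settles near 1.0 and the greedy policy picks action 0 in both states; with simulated (noisy) transitions one would use diminishing step sizes, as analyzed in the paper.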

Item Type: Article
Source: Copyright of this article belongs to the Society for Industrial and Applied Mathematics.
Keywords: Simulation-Based Algorithms; Q-Learning; Controlled Markov Chains; Average Cost Control; Stochastic Approximation; Dynamic Programming
ID Code: 81451
Deposited On: 06 Feb 2012 05:04
Last Modified: 18 May 2016 23:00
