An actor-critic algorithm for constrained Markov decision processes

Borkar, V. S. (2005). An actor-critic algorithm for constrained Markov decision processes. Systems & Control Letters, 54(3), pp. 207-213. ISSN 0167-6911

Full text not available from this repository.

Official URL: http://linkinghub.elsevier.com/retrieve/pii/S01676...

Related URL: http://dx.doi.org/10.1016/j.sysconle.2004.08.007

Abstract

An actor-critic type reinforcement learning algorithm is proposed and analyzed for constrained controlled Markov decision processes. The analysis uses multiscale stochastic approximation theory and the 'envelope theorem' of mathematical economics.
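To illustrate the kind of scheme the abstract refers to, the sketch below shows a simple multi-timescale actor-critic for a constrained MDP via Lagrangian relaxation: the critic runs on the fastest timescale, the actor on an intermediate one, and a Lagrange multiplier for the cost constraint on the slowest. This is a hypothetical, simplified illustration only, not the algorithm or analysis of the paper (which treats the problem in a more general setting); the tabular environment, discounted TD(0) critic, step-size exponents, and constraint bound alpha are all invented for the demo.

# Illustrative sketch (assumptions labeled): three-timescale tabular actor-critic
# for a constrained MDP using a Lagrangian reward. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 4, 2
# Random transition kernel P[s, a] -> distribution over next states (made-up environment).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))   # reward r(s, a)
C = rng.uniform(0.0, 1.0, size=(n_states, n_actions))   # constraint cost c(s, a)
alpha = 0.3        # assumed constraint bound on the cost
gamma = 0.95       # discount factor (assumption; the paper's setting may differ)

theta = np.zeros((n_states, n_actions))  # actor: policy logits
V = np.zeros(n_states)                   # critic: values of the Lagrangian reward
lam = 0.0                                # Lagrange multiplier for the cost constraint

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

s = 0
for t in range(1, 200_000):
    # Step sizes on three timescales: critic fastest, actor slower, multiplier slowest.
    a_critic = 1.0 / (1 + t) ** 0.6
    a_actor  = 1.0 / (1 + t) ** 0.8
    a_lam    = 1.0 / (1 + t)

    pi = policy(s)
    a = rng.choice(n_actions, p=pi)
    s_next = rng.choice(n_states, p=P[s, a])

    # Lagrangian reward: reward penalized by the current multiplier times excess cost.
    r_lag = R[s, a] - lam * (C[s, a] - alpha)

    # Critic: TD(0) update on the Lagrangian reward.
    delta = r_lag + gamma * V[s_next] - V[s]
    V[s] += a_critic * delta

    # Actor: policy-gradient step using the TD error as the advantage estimate.
    grad_log = -pi
    grad_log[a] += 1.0
    theta[s] += a_actor * delta * grad_log

    # Multiplier: slowest timescale, ascends on constraint violation, projected to >= 0.
    lam = max(0.0, lam + a_lam * (C[s, a] - alpha))

    s = s_next

print("final multiplier:", lam)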

Item Type: Article
Source: Copyright of this article belongs to Elsevier Science.
Keywords: Actor-critic Algorithms; Reinforcement Learning; Constrained Markov Decision Processes; Stochastic Approximation; Envelope Theorem
ID Code: 5285
Deposited On: 18 Oct 2010 08:32
Last Modified: 20 May 2011 08:53
