An Online Actor–Critic Algorithm with Function Approximation for Constrained Markov Decision Processes

Bhatnagar, Shalabh; Lakshmanan, K. (2012). An Online Actor–Critic Algorithm with Function Approximation for Constrained Markov Decision Processes. Journal of Optimization Theory and Applications, 153(3), pp. 688–708. ISSN 0022-3239.

Full text not available from this repository.

Official URL: http://doi.org/10.1007/s10957-012-9989-5

Abstract

We develop an online actor–critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework, in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm performs well in this setting and converges to a feasible point.
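To make the approach described in the abstract concrete: the constrained average-cost problem of minimizing J_c(theta) subject to J_g(theta) <= alpha is relaxed via a Lagrange multiplier into L(theta, lambda) = J_c(theta) + lambda * (J_g(theta) - alpha), an actor–critic is run on the Lagrangian single-stage cost, and lambda is updated by ascent on the estimated constraint violation on a slower timescale. The Python code below is a minimal illustrative sketch of this general recipe, not the authors' algorithm; it uses a tabular critic on a hypothetical two-state MDP rather than function approximation, and the toy model, variable names, and step-size schedules are all assumptions made purely for illustration.

# Minimal Lagrangian actor-critic sketch for a constrained average-cost MDP.
# NOT the authors' algorithm: tabular critic, toy MDP, and step sizes are
# assumptions chosen only to illustrate the general technique.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 2-action MDP: action 1 is cheaper on the objective
# cost but incurs a larger constraint cost.
n_states, n_actions = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],        # transition kernel P[s, a, s']
              [[0.7, 0.3], [0.4, 0.6]]])
cost = np.array([[1.0, 0.3], [0.8, 0.2]])       # objective single-stage cost c(s, a)
g    = np.array([[0.1, 0.9], [0.2, 1.0]])       # constraint single-stage cost g(s, a)
alpha = 0.5                                     # constraint level: long-run average of g <= alpha

theta = np.zeros((n_states, n_actions))         # actor parameters (softmax policy)
V = np.zeros(n_states)                          # critic: differential value estimates
rho_c = rho_g = 0.0                             # running estimates of the average costs
lam = 0.0                                       # Lagrange multiplier

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

s = 0
for t in range(1, 100_000):
    # three timescales: critic fastest, actor intermediate, multiplier slowest
    a_c, a_v, a_l = 0.1 / t**0.6, 0.05 / t**0.8, 0.01 / t
    pi = policy(s)
    a = rng.choice(n_actions, p=pi)
    s_next = rng.choice(n_states, p=P[s, a])

    # Lagrangian single-stage cost and average-cost TD error
    l_cost = cost[s, a] + lam * (g[s, a] - alpha)
    rho = rho_c + lam * (rho_g - alpha)
    delta = l_cost - rho + V[s_next] - V[s]

    # critic and average-cost estimates (fastest timescale)
    V[s] += a_c * delta
    rho_c += a_c * (cost[s, a] - rho_c)
    rho_g += a_c * (g[s, a] - rho_g)

    # actor: policy-gradient descent on cost using the TD error
    grad_log = -pi
    grad_log[a] += 1.0
    theta[s] -= a_v * delta * grad_log

    # multiplier: projected ascent on the estimated constraint violation
    lam = max(0.0, lam + a_l * (rho_g - alpha))

    s = s_next

print("avg objective cost:", rho_c, "avg constraint cost:", rho_g, "lambda:", lam)

Under these toy settings the multiplier rises whenever the running estimate of the constraint cost exceeds alpha, pushing the policy toward feasibility, which mirrors (in a simplified way) the behavior reported for the routing experiments in the abstract.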

Item Type: Article
Source: Copyright of this article belongs to Springer Nature.
Keywords: Actor–Critic Algorithm; Constrained Markov Decision Processes; Long-Run Average Cost Criterion; Function Approximation.
ID Code: 116534
Deposited On: 12 Apr 2021 06:07
Last Modified: 12 Apr 2021 06:07
