Bhatnagar, Shalabh; Panigrahi, J. Ranjan (2006) Actor-critic algorithms for hierarchical Markov decision processes. Automatica, 42 (4), pp. 637-644. ISSN 0005-1098
Full text not available from this repository.
Official URL: http://doi.org/10.1016/j.automatica.2005.12.010
Abstract
We consider the problem of control of hierarchical Markov decision processes and develop a simulation-based two-timescale actor-critic algorithm in a general framework. We also develop certain approximation algorithms that require less computation and satisfy a performance bound. One of the approximation algorithms is a three-timescale actor-critic algorithm, while the other is a two-timescale algorithm that operates in two separate stages. All our algorithms recursively update randomized policies using the simultaneous perturbation stochastic approximation (SPSA) methodology. We briefly present the convergence analysis of our algorithms. We then present numerical experiments on a problem of production planning in semiconductor fabs, on which we compare the performance of all the algorithms together with policy iteration. Algorithms based on certain Hadamard-matrix-based deterministic perturbations are found to show the best results.
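To illustrate the SPSA methodology the abstract refers to, here is a minimal sketch of the standard two-sided SPSA gradient estimator applied to a toy quadratic objective. This is not the paper's actor-critic algorithm: the function names, the quadratic objective, and the step-size constants are illustrative assumptions, and the sketch uses random Rademacher perturbations rather than the Hadamard-matrix-based deterministic perturbations the paper reports as performing best.

```python
import numpy as np

def spsa_gradient(f, theta, c=0.1, rng=None):
    """Two-sided SPSA gradient estimate of f at theta.

    A single Rademacher (+/-1) perturbation vector is drawn, so only
    two evaluations of f are needed regardless of the dimension of theta.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    diff = f(theta + c * delta) - f(theta - c * delta)
    return diff / (2.0 * c * delta)

# Toy illustration (not from the paper): minimise f(theta) = ||theta||^2
# by stochastic gradient descent driven by the SPSA estimate.
f = lambda th: float(np.dot(th, th))
theta = np.array([2.0, -1.5])
rng = np.random.default_rng(0)
for k in range(200):
    a_k = 0.1 / (k + 1) ** 0.602   # commonly used SPSA step-size decay
    theta = theta - a_k * spsa_gradient(f, theta, c=0.1, rng=rng)
```

The appeal of SPSA in high-dimensional settings, and in the actor updates of the paper's algorithms, is that all coordinates of the parameter are perturbed simultaneously, so the cost of a gradient estimate does not grow with the parameter dimension.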
Item Type: | Article
---|---
Source: | Copyright of this article belongs to Elsevier B.V.
ID Code: | 116568
Deposited On: | 12 Apr 2021 06:53
Last Modified: | 12 Apr 2021 06:53