Joseph, Ajin George; Bhatnagar, Shalabh (2017). An Incremental Fast Policy Search Using a Single Sample Path. In: Lecture Notes in Computer Science (LNCS), vol. 10597. Springer Nature, pp. 3-10. ISBN 978-3-319-69899-1
Full text not available from this repository.
Official URL: https://doi.org/10.1007/978-3-319-69900-4_1
Abstract
In this paper, we consider the control problem in a reinforcement learning setting with large state and action spaces. The control problem most commonly addressed in the contemporary literature is to find an optimal policy which optimizes the long-run γ-discounted transition costs, where γ∈[0,1). These approaches also assume access to a generative model/simulator of the underlying MDP, with the hidden premise that realizations of the system dynamics of the MDP for arbitrary policies, in the form of sample paths, can be obtained with ease from the model. In this paper, we consider a cost function which is the expectation of an approximate value function w.r.t. the steady-state distribution of the Markov chain induced by the policy, without having access to the generative model. We assume that a single sample path generated using an a priori chosen behaviour policy is made available. In this information-restricted setting, we solve the generalized control problem using an incremental cross-entropy method. The proposed algorithm is shown to converge to a solution which is globally optimal relative to the behaviour policy.
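For orientation, the sketch below shows a standard batch cross-entropy method for policy search in Python. It is an illustrative assumption-based example, not the authors' incremental single-sample-path algorithm: the Gaussian sampling distribution, elite fraction, smoothing factor, and the stand-in objective `J` are all assumed for the sketch, whereas in the paper the objective would be an estimate, from a single behaviour-policy sample path, of the approximate value function averaged over the steady-state distribution.

```python
# Minimal sketch of a (batch) cross-entropy method for policy search.
# Assumptions for illustration only: Gaussian search distribution over
# policy parameters, fixed elite fraction, and simple smoothed updates.
import numpy as np

def cem_policy_search(J, dim, iters=50, pop=100, elite_frac=0.1, smooth=0.7, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)   # Gaussian over policy parameters
    n_elite = max(1, int(elite_frac * pop))
    for _ in range(iters):
        theta = rng.normal(mu, sigma, size=(pop, dim))    # sample candidate policies
        scores = np.array([J(t) for t in theta])          # estimate objective per candidate
        elite = theta[np.argsort(scores)[-n_elite:]]      # keep the top-scoring fraction
        mu = smooth * elite.mean(axis=0) + (1 - smooth) * mu        # smoothed mean update
        sigma = smooth * elite.std(axis=0) + (1 - smooth) * sigma   # smoothed std update
    return mu

# Stand-in objective: a simple quadratic used purely to make the sketch runnable.
if __name__ == "__main__":
    target = np.array([1.0, -2.0, 0.5])
    best = cem_policy_search(lambda th: -np.sum((th - target) ** 2), dim=3)
    print(best)  # should approach `target`
```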
Item Type: Book
Source: Copyright of this article belongs to Springer Nature.
ID Code: 116474
Deposited On: 12 Apr 2021 05:46
Last Modified: 12 Apr 2021 05:46