Maei, Hamid Reza; Szepesvári, Csaba; Bhatnagar, Shalabh; Sutton, Richard S. (2010) Toward Off-Policy Learning Control with Function Approximation. In: Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel.
Full text not available from this repository.
Abstract
We present the first temporal-difference learning algorithm for off-policy control with unrestricted linear function approximation whose per-time-step complexity is linear in the number of features. Our algorithm, Greedy-GQ, is an extension of recent work on gradient temporal-difference learning, which has hitherto been restricted to a prediction (policy evaluation) setting, to a control setting in which the target policy is greedy with respect to a linear approximation to the optimal action-value function. A limitation of our control setting is that we require the behavior policy to be stationary. We call this setting latent learning because the optimal policy, though learned, is not manifest in behavior. Popular off-policy algorithms such as Q-learning are known to be unstable in this setting when used with linear function approximation.
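For readers unfamiliar with the gradient-TD family, the following is a minimal sketch of a single Greedy-GQ update step, assuming linear action-value features and the standard primary/secondary weight form (θ and w) described in the abstract. The function name `greedy_gq_step` and its argument names are illustrative, not taken from the paper, and this is not the authors' reference implementation.

```python
import numpy as np

def greedy_gq_step(theta, w, phi, r, next_phis, alpha, beta, gamma):
    """One Greedy-GQ update (a sketch, not the authors' reference code).

    theta      : primary weight vector (linear action-value approximation)
    w          : secondary weight vector used by the gradient-TD correction
    phi        : feature vector of the current state-action pair
    r          : observed reward
    next_phis  : array of feature vectors, one row per action, for the next state
    alpha, beta: step sizes for theta and w
    gamma      : discount factor
    """
    # Greedy target policy: evaluate all next-state actions under the current
    # linear approximation and take the feature vector of the best one.
    q_next = next_phis @ theta
    phi_next = next_phis[np.argmax(q_next)]

    # TD error with respect to the greedy target policy.
    delta = r + gamma * q_next.max() - theta @ phi

    # Gradient-TD update: the w-based correction term is what keeps the
    # update stable under off-policy sampling with linear features,
    # at per-step cost linear in the number of features.
    theta = theta + alpha * (delta * phi - gamma * (w @ phi) * phi_next)
    w = w + beta * (delta - w @ phi) * phi

    return theta, w
```

Both updates touch each feature a constant number of times, which is the source of the per-time-step complexity linear in the number of features claimed above.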
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Source: | Copyright 2010 by the author(s)/owner(s). |
| ID Code: | 116686 |
| Deposited On: | 12 Apr 2021 07:24 |
| Last Modified: | 12 Apr 2021 07:24 |