Dharmavaram, Akshay; Riemer, Matthew; Bhatnagar, Shalabh (2020). Hierarchical Average Reward Policy Gradient Algorithms (Student Abstract). In: The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), Feb 7-12, 2020, New York, NY, USA.
Full text not available from this repository.
Official URL: http://doi.org/10.1609/aaai.v34i10.7160
Abstract
Option-critic learning is a general-purpose reinforcement learning (RL) framework that aims to address the issue of long-term credit assignment by leveraging temporal abstractions. However, when dealing with extended timescales, discounting future rewards can lead to incorrect credit assignments. In this work, we address this issue by extending the hierarchical option-critic policy gradient theorem to the average reward criterion. Our proposed framework aims to maximize the long-term reward obtained in the steady state of the Markov chain defined by the agent's policy. Furthermore, we use an ordinary differential equation based approach for our convergence analysis and prove that the parameters of the intra-option policies, termination functions, and value functions converge to their corresponding optimal values with probability one. Finally, we illustrate the competitive advantage of learning options, in the average reward setting, on a grid-world environment with sparse rewards.
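For context, the average reward criterion referenced in the abstract is conventionally defined as the long-run reward rate of a policy under its stationary distribution. The sketch below uses standard average-reward RL notation (π for the policy, ρ(π) for the reward rate, r_t for the reward at step t) and is not taken from the paper's own derivation or notation.

```latex
% Standard average-reward objective (conventional formulation, not the paper's
% exact notation): the long-run reward rate of policy \pi.
\rho(\pi) = \lim_{T \to \infty} \frac{1}{T} \, \mathbb{E}_{\pi}\!\left[ \sum_{t=1}^{T} r_t \right]

% Differential (bias) value function used in place of discounted returns:
V_{\pi}(s) = \mathbb{E}_{\pi}\!\left[ \sum_{t=1}^{\infty} \bigl( r_t - \rho(\pi) \bigr) \,\middle|\, s_0 = s \right]
```

Optimizing ρ(π) directly, rather than a discounted sum of future rewards, avoids the bias that discounting introduces over the extended timescales at which options operate.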
Item Type: Conference or Workshop Item (Paper)
Source: Copyright of this article belongs to the Association for the Advancement of Artificial Intelligence.
ID Code: 116618
Deposited On: 12 Apr 2021 07:13
Last Modified: 12 Apr 2021 07:13