Reinforcement learning algorithm for non-stationary environments

Padakandla, Sindhu; K. J., Prabuchandran; Bhatnagar, Shalabh (2020). Reinforcement learning algorithm for non-stationary environments. Applied Intelligence, 50(11), pp. 3590-3606. ISSN 0924-669X

Full text not available from this repository.

Official URL: http://doi.org/10.1007/s10489-020-01758-5


Abstract

Reinforcement learning (RL) methods learn optimal decisions under the assumption of a stationary environment. However, this stationarity assumption is very restrictive: in many real-world problems, such as traffic signal control and robotic applications, one often encounters non-stationary environments, and in these scenarios RL methods yield sub-optimal decisions. In this paper, we therefore consider the problem of developing RL methods that obtain optimal decisions in a non-stationary environment. The goal is to maximize the long-term discounted reward accrued when the underlying model of the environment changes over time. To achieve this, we first adapt a change-point algorithm to detect changes in the statistics of the environment and then develop an RL algorithm that maximizes the long-run reward accrued. We illustrate that our change-point method detects changes in the environment model effectively and thus facilitates the RL algorithm in maximizing the long-run reward. We further validate the effectiveness of the proposed solution on non-stationary random Markov decision processes, a sensor energy management problem, and a traffic signal control problem.

Item Type: Article
Source: Copyright of this article belongs to Springer Nature.
Keywords: Markov Decision Processes; Reinforcement Learning; Non-stationary Environments; Change Detection.
ID Code: 116434
Deposited On: 12 Apr 2021 05:52
Last Modified: 12 Apr 2021 05:52
