Two timescale convergent Q-learning for sleep-scheduling in wireless sensor networks

Prashanth, L. A.; Chatterjee, Abhranil; Bhatnagar, Shalabh (2014). Two timescale convergent Q-learning for sleep-scheduling in wireless sensor networks. Wireless Networks, 20 (8), pp. 2589–2604. ISSN 1022-0038

Full text not available from this repository.

Official URL: http://doi.org/10.1007/s11276-014-0762-6

Abstract

In this paper, we consider an intrusion detection application for wireless sensor networks. We study the problem of scheduling the sleep times of the individual sensors, where the objective is to maximize the network lifetime while keeping the tracking error to a minimum. We formulate this problem as a partially observable Markov decision process (POMDP) with continuous state-action spaces, in a manner similar to Fuemmeler and Veeravalli (IEEE Trans Signal Process 56(5), 2091–2101, 2008). However, unlike their formulation, we consider infinite-horizon discounted and average cost objectives as performance criteria. For each criterion, we propose a convergent on-policy Q-learning algorithm that operates on two timescales while employing function approximation. Feature-based representations and function approximation are necessary to handle the curse of dimensionality associated with the underlying POMDP. Our proposed algorithm incorporates a policy gradient update using a one-simulation simultaneous perturbation stochastic approximation (SPSA) estimate on the faster timescale, while the Q-value parameter (arising from a linear function approximation architecture for the Q-values) is updated in an on-policy temporal difference fashion on the slower timescale. The feature selection scheme employed in each of our algorithms manages the energy and tracking components in a manner that assists the search for the optimal sleep-scheduling policy. For comparison, in both the discounted and average settings, we also develop a function approximation analogue of the Q-learning algorithm; unlike the two-timescale variant, this algorithm does not possess theoretical convergence guarantees. Finally, we adapt our algorithms to include a stochastic iterative estimation scheme for the intruder's mobility model, which is useful in settings where the latter is not known. Our simulation results on a synthetic two-dimensional network setting suggest that our algorithms achieve better tracking accuracy at the cost of only a few additional sensors, in comparison with a recent prior work.
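
To make the two-timescale structure concrete, the following Python sketch illustrates the kind of coupled update the abstract describes: a one-simulation SPSA policy-gradient step on the faster timescale and a TD-like update of a linear Q-value parameter on the slower timescale. This is a minimal illustration under assumed dynamics, not the authors' algorithm; the feature map `feature`, the simulator `simulate_step`, the policy parameterization `policy`, the step-size schedules, and all dimensions are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D_THETA, D_W = 4, 8              # assumed dimensions of the policy / Q-value parameters
GAMMA = 0.9                      # discount factor for the discounted-cost criterion
DELTA = 0.1                      # SPSA perturbation magnitude
PROJ = rng.standard_normal((2 * D_THETA, D_W))  # fixed random projection for features

def feature(state, action):
    """Placeholder feature map phi(s, a) for the linear Q-value architecture."""
    return np.tanh(np.concatenate([state, action]) @ PROJ)

def policy(theta, state):
    """Placeholder parameterized policy (toy continuous action)."""
    return np.tanh(theta * state)

def simulate_step(state, action):
    """Placeholder one-step simulator returning (single-stage cost, next state)."""
    next_state = 0.9 * state + 0.1 * action + 0.01 * rng.standard_normal(state.size)
    cost = float(np.sum(state ** 2) + 0.1 * np.sum(action ** 2))
    return cost, next_state

theta = np.zeros(D_THETA)        # policy parameter, updated on the faster timescale
w = np.zeros(D_W)                # Q-value parameter, updated on the slower timescale
state = rng.standard_normal(D_THETA)

for n in range(1, 10_000):
    a_n = 1.0 / n ** 0.6         # faster step size (SPSA policy update)
    b_n = 1.0 / n                # slower step size (TD-like Q update); b_n / a_n -> 0

    # One-simulation SPSA: act with the perturbed parameter theta + DELTA * Delta.
    Delta = rng.choice([-1.0, 1.0], size=D_THETA)
    action = policy(theta + DELTA * Delta, state)
    cost, next_state = simulate_step(state, action)
    next_action = policy(theta + DELTA * Delta, next_state)

    # On-policy TD-like update of the linear Q-value parameter (slower timescale).
    phi, phi_next = feature(state, action), feature(next_state, next_action)
    td_error = cost + GAMMA * (w @ phi_next) - w @ phi
    w += b_n * td_error * phi

    # One-measurement SPSA gradient estimate of the Q-value w.r.t. theta;
    # descend, since Q here approximates a cost-to-go to be minimized.
    grad_est = (w @ phi) / (DELTA * Delta)
    theta -= a_n * grad_est

    state = next_state
```

The requirement b_n / a_n -> 0 provides the timescale separation: the slowly updated Q-value parameter effectively sees the policy recursion as equilibrated, which is the usual basis for convergence arguments in two-timescale stochastic approximation schemes of this kind.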

Item Type: Article
Source: Copyright of this article belongs to Springer-Verlag.
Keywords: Sensor Networks; Sleep-Wake Scheduling; Reinforcement Learning; Q-Learning; Function Approximation; Simultaneous Perturbation; SPSA
ID Code: 116501
Deposited On: 12 Apr 2021 06:03
Last Modified: 12 Apr 2021 06:03
