Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV With Limited Environment Knowledge

Singla, Abhik ; Padakandla, Sindhu ; Bhatnagar, Shalabh (2021) Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV With Limited Environment Knowledge IEEE Transactions on Intelligent Transportation Systems, 22 (1). pp. 107-118. ISSN 1524-9050

Full text not available from this repository.

Official URL: http://doi.org/10.1109/TITS.2019.2954952


Abstract

This paper presents our method for enabling a quadrotor UAV, equipped with a monocular camera, to autonomously avoid collisions with obstacles in unstructured and unknown indoor environments. Compared to obstacle avoidance for ground vehicular robots, UAV navigation brings additional challenges because UAV motion is no longer constrained to a well-defined indoor ground or street environment. Unlike ground robots, a UAV has to navigate around more types of obstacles: objects such as decorative items, furnishings, ceiling fans, sign-boards, and tree branches are also potential obstacles for a UAV. Thus, obstacle avoidance methods developed for ground robots are clearly inadequate for UAV navigation. Current control methods that use monocular images for UAV obstacle avoidance depend heavily on environment information, and these controllers do not fully retain and utilize the extensively available information about the ambient environment for decision making. We propose a deep reinforcement learning based method for UAV obstacle avoidance (OA) that is capable of doing exactly that. The crucial idea in our method is the concept of partial observability and how a UAV can retain relevant information about the environment structure to make better future navigation decisions. Our OA technique uses recurrent neural networks with temporal attention and provides better results than prior works in terms of distance covered without collisions. In addition, our technique has a high inference rate and reduces power wastage by minimizing oscillatory motion of the UAV.
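The core mechanism the abstract describes, a recurrent Q-network whose temporal attention weights past hidden states before producing Q-values, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the array shapes, the single attention vector `w_att`, and the toy dimensions are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def temporal_attention_q(hidden_states, w_att, w_q, b_q):
    """Collapse T recurrent hidden states into one context vector via
    softmax attention, then map the context to per-action Q-values.

    hidden_states: (T, H) outputs of a recurrent encoder over past frames
    w_att: (H,) attention scoring vector (assumed learned)
    w_q: (H, A), b_q: (A,) output layer producing one Q-value per action
    """
    scores = hidden_states @ w_att      # (T,) one relevance score per timestep
    weights = softmax(scores)           # attention weights, sum to 1
    context = weights @ hidden_states   # (H,) attention-weighted summary of history
    q_values = context @ w_q + b_q      # (A,) Q-value for each control action
    return q_values, weights

# Toy rollout: 8 timesteps, hidden size 16, 3 discrete actions (all assumed).
rng = np.random.default_rng(0)
T, H, A = 8, 16, 3
h = rng.standard_normal((T, H))
q, w = temporal_attention_q(h,
                            rng.standard_normal(H),
                            rng.standard_normal((H, A)),
                            np.zeros(A))
action = int(np.argmax(q))  # greedy action selection, as in DQN
```

The attention weights give the agent a soft memory over recent observations, which is how a partially observable scene (e.g., an obstacle that has left the camera's field of view) can still influence the current action choice.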

Item Type: Article
Source: Copyright of this article belongs to Institute of Electrical and Electronics Engineers.
Keywords: Unmanned Aerial Vehicle (UAV); Obstacle Avoidance (OA); Deep Reinforcement Learning (DRL); Partial Observability; Deep Q-Networks (DQN).
ID Code: 116420
Deposited On: 12 Apr 2021 05:51
Last Modified: 12 Apr 2021 05:51
