Multi-agent reinforcement learning for traffic signal control

Prabuchandran, K. J.; Hemanth Kumar, A. N.; Bhatnagar, Shalabh (2014) Multi-agent reinforcement learning for traffic signal control. In: 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), 8-11 Oct. 2014, Qingdao, China.

Full text not available from this repository.

Official URL: http://doi.org/10.1109/ITSC.2014.6958095

Abstract

Optimal control of traffic lights at junctions, or traffic signal control (TSC), is essential for reducing the average delay experienced by road users amid the rapid increase in vehicle usage. In this paper, we formulate the TSC problem as a discounted cost Markov decision process (MDP) and apply multi-agent reinforcement learning (MARL) algorithms to obtain dynamic TSC policies. We model each traffic signal junction as an independent agent. An agent decides the signal duration of its phases in a round-robin (RR) manner using multi-agent Q-learning with either ε-greedy or UCB [3] based exploration strategies. It updates its Q-factors based on the cost feedback signal received from its neighbouring agents. This feedback signal can be easily constructed and is shown to be effective in minimizing the average delay of the vehicles in the network. We show through simulations in VISSIM that our algorithms perform significantly better than both the standard fixed signal timing (FST) algorithm and the saturation balancing (SAT) algorithm [15] on two real road networks.
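
The abstract outlines the algorithmic scheme but not its exact state representation, action set, or cost signal. The Python sketch below is therefore only an illustration of the idea as described: one independent Q-learning agent per junction that picks a green duration for the current round-robin phase using either ε-greedy or UCB exploration, and updates its Q-factors with a cost built from its own and its neighbours' congestion. The discretisation GREEN_DURATIONS, the queue-based state encoding, the 0.5 neighbour weight, and all constants are assumptions made here for illustration, not the authors' implementation.

import math
import random
from collections import defaultdict

# Illustrative discretisations and constants; the paper's exact choices are not given in the abstract.
GREEN_DURATIONS = [10, 20, 30, 40]   # candidate green times (seconds) for the current phase
GAMMA = 0.9                          # discount factor of the discounted-cost MDP
ALPHA = 0.1                          # Q-learning step size
EPSILON = 0.1                        # exploration rate for the epsilon-greedy variant
UCB_C = 2.0                          # exploration constant for the UCB variant


class JunctionAgent:
    """One independent Q-learning agent per signalised junction."""

    def __init__(self, junction_id, neighbours, exploration="ucb"):
        self.id = junction_id
        self.neighbours = neighbours      # ids of adjacent junctions providing cost feedback
        self.exploration = exploration    # "epsilon" or "ucb"
        self.q = defaultdict(float)       # Q-factors indexed by (state, action)
        self.counts = defaultdict(int)    # visit counts per (state, action), for UCB
        self.t = 0                        # total decisions taken, for UCB

    def state(self, queues):
        # Coarse congestion level on the agent's own approaches (an assumed state encoding).
        return min(sum(queues[self.id]) // 10, 5)

    def choose_duration(self, s):
        """Pick a green duration for the current round-robin phase."""
        self.t += 1
        if self.exploration == "epsilon":
            if random.random() < EPSILON:
                return random.choice(GREEN_DURATIONS)
            return min(GREEN_DURATIONS, key=lambda a: self.q[(s, a)])

        # UCB-style rule adapted to cost minimisation: subtract the exploration bonus.
        def ucb_score(a):
            n = self.counts[(s, a)]
            if n == 0:
                return -math.inf          # try every duration at least once
            return self.q[(s, a)] - UCB_C * math.sqrt(math.log(self.t) / n)

        return min(GREEN_DURATIONS, key=ucb_score)

    def cost_feedback(self, queues):
        # Cost combines the agent's own congestion with that reported by its neighbours,
        # a simple stand-in for the feedback signal described in the paper.
        own = sum(queues[self.id])
        nbr = sum(sum(queues[j]) for j in self.neighbours)
        return own + 0.5 * nbr

    def update(self, s, a, cost, s_next):
        # Standard Q-learning update for a discounted-cost MDP (minimisation).
        self.counts[(s, a)] += 1
        best_next = min(self.q[(s_next, b)] for b in GREEN_DURATIONS)
        target = cost + GAMMA * best_next
        self.q[(s, a)] += ALPHA * (target - self.q[(s, a)])

Note that UCB is usually stated for reward maximisation; because the Q-factors here estimate discounted cost, the exploration bonus is subtracted and the minimising duration is chosen.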

Item Type: Conference or Workshop Item (Paper)
Source: Copyright of this article belongs to the Institute of Electrical and Electronics Engineers (IEEE).
Keywords: Traffic Signal Control; Multi-Agent Reinforcement Learning; Q-Learning; UCB; VISSIM.
ID Code: 116667
Deposited On: 12 Apr 2021 07:21
Last Modified: 12 Apr 2021 07:21
