Navigating Communication Networks with Deep Reinforcement Learning

Patrick Krämer, Andreas Blenk

Abstract


Traditional routing protocols such as Open Shortest Path First (OSPF) cannot incorporate fast-changing network states due to their inherent slowness and limited expressiveness. To overcome these limitations, we propose COMNAV, a system that uses Reinforcement Learning (RL) to learn a distributed routing protocol tailored to a specific network. COMNAV interprets routing as a navigational problem in which flows have to find a path from their source to their destination; this view establishes a close connection to congestion games. The key concept and main contribution is the design of the learning process as a congestion game, which allows RL to learn a distributed protocol effectively. Game theory thereby provides a solid foundation against which the policies RL learns can be evaluated, interpreted, and questioned. We evaluate the capabilities of the learning system in two scenarios in which the routing protocol must react to changes in the network state and make decisions based on the properties of a flow. Our results show that RL can learn the desired behavior while requiring the exchange of only 16 bits of information.
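
The congestion-game framing can be made concrete with a small example. The sketch below is not the COMNAV protocol itself; it is a toy illustration, using an assumed four-node topology and a made-up linear latency function, of how flows navigating a network with load-dependent edge costs can each learn a next-hop policy with tabular Q-learning.

```python
import random
from collections import defaultdict

# Hypothetical toy topology (illustrative only): node -> list of neighbours.
TOPOLOGY = {
    "s": ["a", "b"],
    "a": ["t"],
    "b": ["t"],
    "t": [],
}

def edge_cost(load):
    """Congestion-game style cost: an edge gets slower the more flows use it."""
    return 1.0 + load  # simple linear latency function, assumed for illustration

def route_flows(n_flows, episodes=500, eps=0.1, alpha=0.1):
    """Each flow independently learns a next-hop choice at every node.

    Q[(node, next_hop)] estimates the congestion cost of taking that edge;
    flows pick the neighbour with the lowest estimated cost (epsilon-greedy).
    """
    Q = [defaultdict(float) for _ in range(n_flows)]  # one table per flow
    for _ in range(episodes):
        # Every flow navigates from source "s" to destination "t".
        paths = []
        for f in range(n_flows):
            node, path = "s", []
            while node != "t":
                nbrs = TOPOLOGY[node]
                if random.random() < eps:
                    nxt = random.choice(nbrs)
                else:
                    nxt = min(nbrs, key=lambda n: Q[f][(node, n)])
                path.append((node, nxt))
                node = nxt
            paths.append(path)
        # Edge loads are determined by the joint choices of all flows.
        load = defaultdict(int)
        for path in paths:
            for e in path:
                load[e] += 1
        # Each flow updates its estimates with the congestion-dependent costs.
        for f, path in enumerate(paths):
            for e in path:
                Q[f][e] += alpha * (edge_cost(load[e]) - Q[f][e])
    return Q

if __name__ == "__main__":
    q_tables = route_flows(n_flows=4)
    print(q_tables[0])
```

The linear latency function is the simplest choice for a congestion game; any non-decreasing function of the edge load would serve the same illustrative purpose, namely that the cost a flow experiences depends on the joint decisions of all flows.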



DOI: http://dx.doi.org/10.14279/tuj.eceasst.80.1177

DOI (PDF): http://dx.doi.org/10.14279/tuj.eceasst.80.1177.1119
