Posted on 2021-02-02, 13:28, authored by Chong Huang, Gaojie Chen, Yu Gong
© IEEE. This paper investigates reinforcement learning for relay selection in delay-constrained buffer-aided networks. Buffer-aided relay selection significantly improves the outage performance, but often at the price of higher latency. On the other hand, modern communication systems such as the Internet of Things often have strict latency requirements. It is thus necessary to find relay selection policies that achieve good throughput performance in the buffer-aided relay network while satisfying the delay constraint. With buffers employed at the relays and delay constraints imposed on the data transmission, obtaining the best relay selection becomes a complicated high-dimensional problem, making it hard for reinforcement learning to converge. In this paper, we propose a novel decision-assisted deep reinforcement learning approach to improve the convergence. This is achieved by exploiting the a-priori information from the buffer-aided relay system. The proposed approaches can achieve high throughput subject to delay constraints. Extensive simulation results are provided to verify the proposed algorithms.
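The paper itself gives the full algorithm; as a rough illustration of the general idea of injecting a-priori buffer-system knowledge into the agent's decisions, the sketch below masks infeasible relay selections (a full buffer cannot receive, an empty buffer cannot transmit, and a packet near the delay limit must be forwarded) before an epsilon-greedy choice over Q-values. All names, constants, and masking rules are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): shows how a-priori
# buffer-state information can prune infeasible relay-selection actions
# before a DQN-style agent picks one.  The toy Q-values stand in for the
# output of a trained Q-network; all constants are assumed.
import numpy as np

rng = np.random.default_rng(0)

NUM_RELAYS = 4
BUFFER_SIZE = 5          # packets each relay buffer can hold (assumed)
DELAY_LIMIT = 10         # max allowed queueing delay in time slots (assumed)

# Action space: for each relay k, action 2k   = source -> relay k (relay receives),
#                                 action 2k+1 = relay k -> destination (relay transmits).
NUM_ACTIONS = 2 * NUM_RELAYS


def feasible_action_mask(buffer_len, oldest_packet_age):
    """Encode a-priori knowledge of the buffer-aided relay system: a full
    buffer cannot receive, an empty buffer cannot transmit, and a relay
    holding a packet close to the delay limit should transmit first."""
    mask = np.zeros(NUM_ACTIONS, dtype=bool)
    for k in range(NUM_RELAYS):
        mask[2 * k] = buffer_len[k] < BUFFER_SIZE       # relay can receive
        mask[2 * k + 1] = buffer_len[k] > 0             # relay can transmit
    # If any packet is about to violate the delay constraint, restrict the
    # choice to transmissions from the relays holding such packets.
    urgent = (buffer_len > 0) & (oldest_packet_age >= DELAY_LIMIT - 1)
    if urgent.any():
        forced = np.zeros(NUM_ACTIONS, dtype=bool)
        forced[2 * np.flatnonzero(urgent) + 1] = True
        mask &= forced
    return mask


def select_action(q_values, mask, epsilon=0.1):
    """Epsilon-greedy selection restricted to feasible actions, so the agent
    never has to learn (and slowly rule out) obviously invalid choices."""
    feasible = np.flatnonzero(mask)
    if rng.random() < epsilon:
        return int(rng.choice(feasible))
    masked_q = np.where(mask, q_values, -np.inf)
    return int(np.argmax(masked_q))


# Toy usage with made-up buffer states and Q-values.
buffer_len = np.array([0, 3, 5, 2])    # packets queued at each relay
oldest_age = np.array([0, 9, 4, 1])    # age of the oldest packet per relay
q_values = rng.normal(size=NUM_ACTIONS)

mask = feasible_action_mask(buffer_len, oldest_age)
action = select_action(q_values, mask)
print("feasible actions:", np.flatnonzero(mask), "chosen:", action)
```

In this toy run, relay 1 holds a packet one slot away from the delay limit, so the mask forces its relay-to-destination transmission regardless of the Q-values; this is the sense in which the decision is "assisted" rather than left entirely to the learned policy.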
Funding
Communications Signal Processing Based Solutions for Massive Machine-to-Machine Networks (M3NETs)
Engineering and Physical Sciences Research Council
Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.