Hybrid machine learning based network performance improvement in SDN and VANET
Machine learning algorithms, alongside other disruptive and emerging technologies, play a critical role in addressing the challenges of networks that are becoming increasingly heterogeneous and complex. The rapid expansion in the use of smart devices such as smartphones, smart vehicles, and smart home systems, coupled with advances in network technologies like cloud computing and network virtualization, has greatly increased the heterogeneity and intricacy of networks. Because of its adaptive capabilities, machine learning is becoming an essential tool for tackling these complexities in dynamic network environments and topologies. Over time, machine learning algorithms have been applied to routing path selection, network congestion control, traffic prediction, and cache interest prediction, achieving significant progress in these areas. However, applying machine learning algorithms to networks remains challenging: they often require extensive datasets and long training periods to converge, which can lead to slow responses and poor adaptation in rapidly changing network environments. While cloud and edge computing nodes can offer some optimization, those benefits are not easily extended to every network node. For efficient inter-vehicle communication and content delivery in dynamic networks, machine learning algorithms must therefore be further optimized for network conditions.
Recent research into software-defined networking (SDN) routing based on reinforcement learning (RL) demonstrates its potential, as it provides superior solutions compared with traditional mathematical model-based routing. However, as network scale and complexity increase, RL methods tend to converge slowly and struggle to adapt to network changes, which limits SDN routing performance. This thesis therefore proposes PRLR, a pre-trained reinforcement learning based SDN routing method that improves the quality of service (QoS) and the convergence speed of RL routing. Experimental results show that PRLR outperforms baseline methods across multiple metrics, including network delay, available bandwidth, throughput, and convergence efficiency, demonstrating its effectiveness in dynamic network topologies.
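Although the abstract does not spell out the PRLR algorithm itself, the pre-train-then-fine-tune pattern it relies on can be sketched in a few lines. The sketch below is an assumption-laden illustration: the tabular Q-learning agent, the state/action encoding, and the replayed historical transitions are hypothetical stand-ins for whatever the thesis actually uses.

```python
# Minimal sketch of "pre-train offline, fine-tune online" for RL routing.
# The agent, state/action encoding, and reward are illustrative assumptions,
# not the PRLR implementation described in the thesis.
import random
from collections import defaultdict

class PretrainedRoutingAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[((node, dst), next_hop)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_next_hop(self, node, dst, neighbors):
        # Epsilon-greedy selection among the current node's neighbors.
        if random.random() < self.epsilon:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q[((node, dst), n)])

    def update(self, node, dst, next_hop, reward, next_neighbors):
        # Standard one-step Q-learning update.
        best_next = max((self.q[((next_hop, dst), n)] for n in next_neighbors),
                        default=0.0)
        key = ((node, dst), next_hop)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])

    def pretrain(self, historical_transitions, epochs=10):
        # Offline phase: replay logged (node, dst, next_hop, reward, next_neighbors)
        # samples so the online phase starts from a warm Q-table instead of zeros.
        for _ in range(epochs):
            for node, dst, nh, r, nxt in historical_transitions:
                self.update(node, dst, nh, r, nxt)
```

In a setup like this the reward would typically be derived from measured link delay or available bandwidth, and the offline replay is what lets the online phase converge quickly rather than learning from scratch.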
This pre-trained machine learning network optimization method has the potential for broad application in highly dynamic networks. Caching in vehicular ad hoc networks (VANETs) has recently garnered significant interest, particularly because of high node mobility. Traditional mobile ad hoc network (MANET) caching approaches face computational limitations that hamper efficient fulfilment of VANET-specific caching requirements. To address this challenge, the Pre-trained Reinforcement Learning-based VANET Caching Strategy (PRVC) is introduced. This approach uses pre-trained reinforcement learning to enhance QoS and to adapt to changing caching interests. Empirical experiments confirm that PRVC exceeds benchmark strategies in cache hit ratio, latency, and link load.
PRVC is then further developed into the Pre-trained Hybrid Machine Learning-based VANET Caching Strategy (PHVC), which provides strategies for both cache copy placement and cache content replacement. PHVC combines supervised and reinforcement learning to improve performance and uses a pre-training algorithm to accelerate convergence; unlike existing strategies, it unites cache copy placement and content replacement in a single hybrid machine learning framework for comprehensive cache optimization. In simulation results, PHVC consistently outperforms traditional VANET caching methods, achieving higher cache hit ratios, lower latency, and reduced link load.
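How a supervised placement model and a reinforcement-learning-style replacement policy might coexist in one cache can be illustrated with a deliberately simplified sketch. The feature vector, the logistic-regression placement predictor, and the hit-driven value update below are assumptions made for illustration only, not the PHVC design.

```python
# Simplified sketch of a hybrid cache: a supervised model decides whether to
# place a copy, and a learned per-item value guides which item to evict.
# Features, model choice, and the value update are illustrative assumptions.
from collections import defaultdict
from sklearn.linear_model import LogisticRegression

class HybridCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                      # content_id -> feature vector
        self.placer = LogisticRegression()   # supervised: cache a copy or not?
        self.value = defaultdict(float)      # learned eviction values per item

    def fit_placement(self, X, y):
        # X: per-content features (e.g. request rate, size); y: 1 if caching paid off.
        self.placer.fit(X, y)

    def on_content(self, content_id, features):
        if self.placer.predict([features])[0] != 1:
            return                           # placement model says: do not cache
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda c: self.value[c])
            del self.store[victim]           # evict the lowest-value item
        self.store[content_id] = features

    def on_request(self, content_id):
        hit = content_id in self.store
        # Reward-style update standing in for a full RL replacement policy:
        # hits raise an item's value, misses slightly lower it.
        self.value[content_id] += 1.0 if hit else -0.1
        return hit
```

The pre-training idea carries over directly: the placement model is fitted on logged request traces before deployment, and the eviction values can likewise be warm-started from historical hit statistics.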
As networks evolve and become increasingly complex, SDN offers centralized control that can guide the development and deployment of caching strategies in large, intricate VANET scenarios. Existing software-defined VANETs often rely on large servers for caching, which complicates the optimization of the underlying vehicle nodes. To resolve this, an SDN-based VANET caching framework is proposed for deployment in roadside units (RSUs) and vehicle nodes. This framework can flexibly apply various caching algorithms for both copy placement and content replacement, offering a comprehensive platform for future research. Based on this framework, a Reinforcement Learning based SD-VANET Caching Strategy (RLSVC) is also proposed.
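While the framework's actual interfaces are not given in the abstract, its pluggable structure could look roughly like the sketch below; the class and method names are hypothetical and only illustrate how copy-placement and content-replacement algorithms might be swapped on RSUs and vehicle nodes.

```python
# Hypothetical sketch of a pluggable caching interface for RSUs and vehicle
# nodes; names are illustrative, not the framework's actual API.
from abc import ABC, abstractmethod

class PlacementPolicy(ABC):
    @abstractmethod
    def should_cache(self, node, content) -> bool:
        """Copy-placement decision for an arriving content item."""

class ReplacementPolicy(ABC):
    @abstractmethod
    def choose_victim(self, node, cached_ids):
        """Content-replacement decision: which cached item to evict."""

class CachingNode:
    """An RSU or vehicle node; the SDN controller can swap policies at runtime."""
    def __init__(self, node_id, capacity, placement, replacement):
        self.node_id, self.capacity = node_id, capacity
        self.placement, self.replacement = placement, replacement
        self.cache = {}

    def receive(self, content_id, content):
        if not self.placement.should_cache(self, content):
            return
        if len(self.cache) >= self.capacity:
            victim = self.replacement.choose_victim(self, list(self.cache))
            self.cache.pop(victim, None)
        self.cache[content_id] = content
```

Under such a structure, an RL-driven strategy like RLSVC would simply be one concrete placement/replacement pair that the controller installs on selected RSUs or vehicles.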
In summary, PRLR, PRVC, PHVC, and RLSVC represent significant contributions to the field, particularly in facilitating critical content transmission for smart cities, intelligent transportation systems, and autonomous driving. These methods improve key performance metrics such as quality of service, cache hit ratios, latency, and link load, enabling more efficient and adaptive network optimization in complex, dynamic environments.
School
- Science
Department
- Computer Science
Publisher
- Loughborough University
Rights holder
- © Ziyang Zhang
Publication date
- 2024
Notes
- A Doctoral Thesis. Submitted in partial fulfilment of the requirements for the award of the degree of Doctor of Philosophy of Loughborough University.
Language
- en
Supervisor(s)
- Lin Guan ; Qinggang Meng
Qualification name
- PhD
Qualification level
- Doctoral