RL-based network deployment and resource management solutions for opportunistic wireless access for aerial networks in disaster areas and smart city applications

Date
2023-08-09
Authors
Ariman, Mehmet
Journal Title
Journal ISSN
Volume Title
Publisher
Graduate School
Abstract
The growth of mobile communication has changed the data traffic profile, and with it the requirements placed on the deployed infrastructure. Higher available bandwidth and the IP transformation of the mobile backend increased peak traffic demands, while user mobility made the required infrastructure vary over time. The commercial availability of unmanned aerial vehicles offers a way to accommodate these changing infrastructure requirements; however, their three-dimensional mobility and the operating-range limits imposed by limited battery capacity introduce new problems. Topology control is a key problem for unmanned aerial vehicle networks: optimizing the network size for coverage is equivalent to the minimum set-cover problem, which is NP-hard even before the service-level agreements enforced in communication networks are considered. Tailor-made solution sets prevent aerial networks from scaling, because each new application target incurs its full development cost again. Reinforcement learning addresses the requirements of multiple applications with a single development effort, with an integration cost that depends on the availability of training data. To this end, reinforcement learning is integrated into a central software-defined networking-based control entity to demonstrate the deployment cycle of an aerial network, and the solution's effectiveness is shown by comparing its quality-of-service, coverage, and power-consumption results with the existing literature. Furthermore, the application area of reinforcement learning is extended to wireless channel selection to address the physical resource assignment problem. Since the development cost of the model is driven by data availability, the integration of this new application is demonstrated in the simulation tool to measure that cost, and a smart-city application for the aerial network in a distributed architecture is simulated with this implementation. Overall, this thesis surveys the existing literature on the challenges of aerial networks, develops the reinforcement learning integration tool in a simulation environment, and implements the disaster-area and smart-city applications to assess the applicability of the hypothesis. The comparison results show that reinforcement learning-based aerial network topology control scales well in power consumption while satisfying the quality-of-service and coverage requirements of the network, and that physical resource allocation for opportunistic access to the wireless medium improves in the wireless channel selection deployment for the smart-city application.
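The abstract describes a reinforcement-learning formulation for opportunistic channel selection but does not include code; the sketch below only illustrates that kind of formulation, assuming a tabular Q-learning agent with an epsilon-greedy policy and a hypothetical per-channel idle-probability model. The names, parameters, and occupancy model are illustrative assumptions, not taken from the thesis.

import random
from collections import defaultdict

# Illustrative tabular Q-learning agent for opportunistic channel selection.
# State: channel used in the previous slot; action: channel to try next.
# Reward: +1 if the chosen channel was idle (successful transmission), -1 otherwise.

NUM_CHANNELS = 8
ALPHA = 0.1      # learning rate (assumed)
GAMMA = 0.9      # discount factor (assumed)
EPSILON = 0.1    # exploration probability (assumed)

# Assumed per-channel idle probabilities; a stand-in for real medium sensing.
IDLE_PROB = [0.2, 0.9, 0.4, 0.7, 0.3, 0.8, 0.5, 0.6]

q_table = defaultdict(lambda: [0.0] * NUM_CHANNELS)

def choose_channel(state):
    """Epsilon-greedy action selection over the Q-values of the current state."""
    if random.random() < EPSILON:
        return random.randrange(NUM_CHANNELS)
    values = q_table[state]
    return max(range(NUM_CHANNELS), key=lambda a: values[a])

def update(state, action, reward, next_state):
    """One-step Q-learning update."""
    best_next = max(q_table[next_state])
    td_target = reward + GAMMA * best_next
    q_table[state][action] += ALPHA * (td_target - q_table[state][action])

def channel_idle(channel):
    """Random occupancy model used here instead of real channel measurements."""
    return random.random() < IDLE_PROB[channel]

# Training loop: the agent learns which channels tend to be idle.
state = 0
for _ in range(10_000):
    action = choose_channel(state)
    reward = 1.0 if channel_idle(action) else -1.0
    next_state = action
    update(state, action, reward, next_state)
    state = next_state

print("preferred channel:", max(range(NUM_CHANNELS), key=lambda a: q_table[state][a]))

In the thesis, such an agent would be driven by the simulation tool's channel observations rather than this random occupancy model, and the same learning loop generalizes to the topology-control application by redefining states, actions, and rewards.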
Description
Thesis (Ph.D.) -- Istanbul Technical University, Graduate School, 2023
Keywords
Computer network protocols, Computer networks, Smart city
Citation