Development and testing novel guidance algorithms for visual drone interception
Files
Date
2024-06-13
Authors
Çetin, Ahmet Talha
Journal Title
Journal ISSN
Volume Title
Publisher
Graduate School
Abstract
This study tackles the challenge of guiding a quadrotor to intercept fast-moving aerial targets using visual and radar feedback, with navigation provided by Visual Inertial Odometry or GPS, respectively. The proposed system, designed as a counter-UAV solution, utilizes onboard camera and radar information about the aerial threat. The target interception process is divided into two parts. The first is pre-terminal phase guidance, in which target information comes from radar feedback: until the target is seen by the camera, the interceptor is guided by radar. Once the target is detected by the camera, the quadrotor switches to terminal phase guidance, which steers the counter drone toward the aerial target using visual feedback. For pre-terminal guidance, two different algorithms were developed. The first is a Model Predictive Control (MPC) based guidance algorithm. In the pre-terminal phase, parallel interceptions (toward the target's head or tail) provide robustness against the inevitable visual processing latency of the terminal phase, compared to lateral engagements. To address this, the proposed methodology augments the MPC formulation with terminal constraints that enforce engagement at the desired angle. The objective function is also modified to reduce the interceptor's maneuvering requirement at the end of the trajectory, and the prediction horizon is computed from the vehicle's limits to keep the problem feasible. The second method uses Bezier splines to guide the quadrotor: since quadrotors have limited onboard computational power, MPC may not be practical in some cases. By ensuring continuity with Bezier splines, the system determines the optimal interception direction (toward the head or tail) and calculates the time-to-go, taking into account the target's position and velocity along with the interceptor's kinematic constraints.
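As a minimal sketch of the Bezier-spline idea (not the thesis's exact formulation), the code below builds a cubic Bezier from the interceptor to the target, aligns the start tangent with the interceptor's current velocity for continuity and the end tangent with the target's velocity (head-on or tail-chase), and estimates time-to-go from sampled arc length and a speed limit. All names, the 1/3-rule control-point placement, and the sampling resolution are illustrative assumptions.

```python
import math

def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    return tuple(s**3 * a + 3*s**2*t * b + 3*s*t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def plan_intercept(p_i, v_i, p_t, v_t, v_max, head_on=True):
    """Cubic Bezier whose end tangent is anti-parallel (head-on) or
    parallel (tail-chase) to the target velocity, with C1 continuity
    at the interceptor's current velocity. Returns the two inner
    control points and an estimated time-to-go."""
    sign = -1.0 if head_on else 1.0
    # 1/3 rule: end tangents of a cubic Bezier equal 3*(p1-p0), 3*(p3-p2)
    p1 = tuple(p + v / 3.0 for p, v in zip(p_i, v_i))
    p2 = tuple(p - sign * v / 3.0 for p, v in zip(p_t, v_t))
    # arc length by sampling; time-to-go from the interceptor speed limit
    pts = [bezier_point(p_i, p1, p2, p_t, k / 50.0) for k in range(51)]
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    return p1, p2, length / v_max
```

In practice both candidate directions (head and tail) would be planned and the one with the feasible, shorter time-to-go selected, subject to the interceptor's kinematic limits.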
This method specifically addresses latency issues in target detection, which are crucial for intercepting high-speed targets effectively. The delays introduced by target detection and localization pose significant challenges, particularly for small quadrotors with limited computational power. The proposed approach aims to achieve parallel engagement with the target's velocity vector, whether from the front or the rear, thereby minimizing the impact of these delays and overcoming visual tracking difficulties before the target is detected by the onboard camera. This strategy reduces lateral acceleration within the image frame during the final stages of interception, resulting in smaller miss distances. This outcome is consistent with established guidance literature, which recognizes the advantages of reduced acceleration at the end of the interception path. When the target is detected by the camera using object detection algorithms, terminal phase guidance is initiated. For detecting aerial threats, the You Only Look Once (YOLO) object detection algorithm is used. Detection and tracking by the camera can be interrupted by limitations such as motion blur, image noise, or the target leaving the camera's field of view; when detection is interrupted, a Kalman filter is used to predict the target's state. For image-based guidance, we utilize proportional guidance with some modifications. We assume that no stabilizing mechanism preserves the orientation of the camera; since the camera is unstabilized, we formulate the proportional guidance rules in the roll- and pitch-stabilized frame so that they are unaffected by camera orientation. For navigating toward the target in the pre-terminal phase, we employ two distinct navigation methods: GPS-based navigation and Visual Inertial Navigation. The well-established open-source ArduPilot platform is utilized for GPS-based navigation, while VINS-Mono is implemented for Visual Inertial Navigation.
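To illustrate how a tracker can coast through detection dropouts, the sketch below implements a constant-velocity Kalman filter for one image axis, predicting position when no detection arrives and fusing pixel measurements when one does. The state model, noise values, and class name are illustrative assumptions, not the thesis's exact filter.

```python
# Minimal constant-velocity Kalman filter for coasting through detection
# dropouts (motion blur, target leaving the field of view). One filter
# per image axis; state is [position, velocity]. Pure Python, no deps.

class TrackKF1D:
    def __init__(self, pos, vel=0.0, q=1.0, r=4.0):
        self.x = [pos, vel]                      # state estimate
        self.P = [[10.0, 0.0], [0.0, 10.0]]      # state covariance
        self.q, self.r = q, r                    # process / measurement noise

    def predict(self, dt):
        """Propagate the state when no detection arrives (coasting)."""
        p, v = self.x
        self.x = [p + v * dt, v]
        P = self.P
        # P <- F P F^T + Q  for F = [[1, dt], [0, 1]]
        self.P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q,
                   P[0][1] + dt * P[1][1]],
                  [P[1][0] + dt * P[1][1],
                   P[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        """Fuse a new pixel measurement z of the position (H = [1, 0])."""
        s = self.P[0][0] + self.r                # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        y = z - self.x[0]                        # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
```

When YOLO reports a detection, `update` is called with the measured pixel position; on frames with no detection, `predict` alone keeps the guidance loop fed with an extrapolated target position.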
As for controllers, because the estimated odometry data from these systems arrive at different frequencies, a different position controller was employed for each navigation solution: the ArduPilot built-in controller for GPS-based navigation, and a custom controller, designed and flight-tested in this work, for handling VIO feedback. These navigation and control methods allowed us to compare and evaluate performance in different scenarios. GPS-based navigation provided a reliable and accurate solution in environments with clear GPS signals, while Visual Inertial Navigation offered a robust alternative where GPS signals were weak or unavailable. The custom VIO controller was tuned to the characteristics of visual-inertial data, ensuring smooth and precise control of the quadrotor. Through this approach, we developed a comprehensive navigation system that adapts to various operational conditions, enhancing the overall reliability and effectiveness of the quadrotor's guidance and control. Finally, real-world flight tests were conducted to assess the overall performance of the system. To evaluate the GPS-based and VIO-based navigation algorithms, interception flight tests were conducted separately for each, and the performance of the guidance algorithm was assessed accordingly. In these tests we used Bezier splines in the pre-terminal phase and image-based visual servoing in the terminal phase, examining both GPS-based and VIO-based navigation. The results demonstrate the performance of the proposed methodology.
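As a hedged illustration of the terminal-phase rule (not the thesis's exact modified law), classical proportional navigation commands lateral acceleration proportional to the line-of-sight rate. With an unstabilized camera, pixel-derived angles are first de-rotated into the roll/pitch-stabilized frame before differencing. The small-angle de-rotation, function names, and gain below are illustrative assumptions.

```python
import math

def los_rate_stabilized(px, py, px_prev, py_prev, roll, pitch, f, dt):
    """Approximate line-of-sight angles from pixel coordinates (focal
    length f in pixels), rotated from the camera frame into the
    roll/pitch-stabilized frame, then differenced to get LOS rates.
    Small-angle pinhole model; illustrative only."""
    def stabilized(px_, py_):
        az, el = math.atan2(px_, f), math.atan2(py_, f)   # camera-frame angles
        az_s = az * math.cos(roll) - el * math.sin(roll)  # de-rotate roll
        el_s = az * math.sin(roll) + el * math.cos(roll) - pitch
        return az_s, el_s
    az1, el1 = stabilized(px, py)
    az0, el0 = stabilized(px_prev, py_prev)
    return (az1 - az0) / dt, (el1 - el0) / dt

def pn_command(los_rate, closing_speed, n=3.0):
    """Classical proportional navigation: a_cmd = N * Vc * LOS_rate."""
    return n * closing_speed * los_rate
```

Driving the LOS rate toward zero in the stabilized frame yields the low end-game lateral acceleration that the flight tests exploit for small miss distances.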
Description
Thesis (M.Sc.) -- İstanbul Technical University, Graduate School, 2024
Keywords
Unmanned Aerial Vehicles