Start Date
9-5-2025 9:00 AM
End Date
9-5-2025 10:30 AM
Document Type
Full Paper
Description
This paper presents preliminary results from a study that evaluated vision-based methods for relative localization in GPS-denied environments using Starling 2 drones as test agents. Robot-to-robot localization is critical for distributed multi-agent coordination. To this end, we evaluate two vision-based detection methods: fiducial-marker-based and neural-network-based. We use a You Only Look Once (YOLOv8) neural network for drone detection in the onboard localization method. The model was trained from scratch and fine-tuned on a custom dataset, achieving a mAP@0.5 of 0.94 and a mAP@0.5:0.95 of 0.63, and demonstrated robust detection under diverse visual conditions. Additionally, we benchmark an open-source vision-based localization method against ground-truth data from an OptiTrack motion capture system. Experiments were conducted at three fixed distances between agents to evaluate spatial performance and drift. Results show that while the OptiTrack system provides high-fidelity reference trajectories, the Starling 2 onboard sensing pipeline, which combines AprilTags and visual odometry, achieves promising localization accuracy, with drift increasing at larger separations. These findings support the viability of lightweight, GPS-free localization for autonomous multi-agent applications, with both the AprilTag- and YOLOv8-based detectors showing promise for scalable multi-agent robot localization.
DOI
https://doi.org/10.5038/DQUF4453
Evaluating Relative Localization in GPS-Denied Environments Using Starling2 UAVs
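The abstract reports a YOLOv8 drone detector evaluated with mAP@0.5 and mAP@0.5:0.95. The paper does not include code; the snippet below is only a minimal sketch of how such a detector could be trained, validated, and run with the Ultralytics YOLOv8 API. The dataset config file, checkpoint name, and image path are hypothetical placeholders, not artifacts from the study.

```python
# Minimal sketch (not from the paper): training and running a YOLOv8 drone
# detector with the Ultralytics API. Dataset config and file names below are
# hypothetical placeholders.
from ultralytics import YOLO

# Start from a small pretrained checkpoint (or "yolov8n.yaml" to train from scratch)
model = YOLO("yolov8n.pt")

# Fine-tune on a custom drone dataset described by a YOLO-format YAML file
model.train(data="drone_dataset.yaml", epochs=100, imgsz=640)

# Validation reports mAP@0.5 and mAP@0.5:0.95, the metrics quoted in the abstract
metrics = model.val()
print(metrics.box.map50, metrics.box.map)  # mAP@0.5, mAP@0.5:0.95

# Run inference on a single frame from the onboard camera
results = model("frame.jpg")
for box in results[0].boxes:
    print(box.xyxy, box.conf)
```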
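The onboard pipeline benchmarked against OptiTrack combines AprilTag detection with visual odometry. As a rough illustration only, and not the authors' implementation, the sketch below estimates a tag's pose relative to the camera using the pupil_apriltags library; the camera intrinsics and tag size are placeholder values that would need calibration on real hardware.

```python
# Minimal sketch (not the paper's pipeline): relative pose of an AprilTag with
# respect to the camera, using pupil_apriltags. Intrinsics and tag size are
# placeholder values.
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")

# Placeholder pinhole intrinsics (fx, fy, cx, cy) in pixels and tag edge length in meters
camera_params = (600.0, 600.0, 320.0, 240.0)
tag_size = 0.10

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
detections = detector.detect(
    gray, estimate_tag_pose=True, camera_params=camera_params, tag_size=tag_size
)

for det in detections:
    # pose_t is the tag position in the camera frame; its norm approximates the
    # robot-to-robot range when the tag is mounted on the other agent
    print(det.tag_id, det.pose_t.ravel(), det.pose_R)
```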