INTERNATIONAL JOURNAL OF AERONAUTICAL AND SPACE SCIENCES, v. 25, pp. 164-175
Abstract
This paper proposes a deep neural network-based object detection algorithm that uses both RGB and infrared (IR) images for human detection with drones. Although several public datasets of aerial-view RGB images exist, there is no publicly available dataset in which RGB and IR images are captured simultaneously from drones. We therefore collect our own RGB and IR images at various altitudes. However, the detection performance of the combined RGB-IR algorithm is limited because far more RGB images are available for training than IR images or simultaneously captured RGB-IR pairs. To address this data imbalance, we use a generative adversarial network model, CycleGAN, to generate IR images from the RGB images in both the public and the self-collected datasets. Furthermore, the object detection network is accelerated through quantization and optimization so that the algorithm runs faster on an embedded computing board. To shorten the search time and improve the robustness of detection, we employ a formation flight of multiple drones. The effectiveness of the integrated system, combining formation flight with the onboard human detection algorithm, is validated by a real flight experiment that locates humans in a wide, unstructured outdoor area.
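The data-augmentation step above relies on CycleGAN's cycle-consistency objective: one generator G maps RGB to IR, a second generator F maps IR back to RGB, and both are trained so that a round trip reproduces the input. The sketch below illustrates only that loss term with toy linear "generators" in NumPy; the weight shapes and function names are illustrative assumptions, not the paper's implementation (real CycleGAN generators are deep convolutional networks trained jointly with adversarial losses).

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute error, the norm CycleGAN uses for cycle consistency."""
    return np.mean(np.abs(a - b))

rng = np.random.default_rng(0)
# Toy per-pixel linear stand-ins for the two generators (hypothetical weights).
W_g = rng.normal(size=(1, 3)) * 0.1   # G: RGB (3 channels) -> IR (1 channel)
W_f = rng.normal(size=(3, 1)) * 0.1   # F: IR (1 channel) -> RGB (3 channels)

def G(rgb):   # (H, W, 3) -> (H, W, 1)
    return rgb @ W_g.T

def F(ir):    # (H, W, 1) -> (H, W, 3)
    return ir @ W_f.T

rgb = rng.random((8, 8, 3))   # fake RGB patch
ir = rng.random((8, 8, 1))    # fake IR patch

# Cycle-consistency loss: translating forth and back should reproduce the input.
cycle_loss = l1_loss(F(G(rgb)), rgb) + l1_loss(G(F(ir)), ir)
print(f"cycle-consistency loss: {cycle_loss:.4f}")
```

Minimizing this term (together with the adversarial terms) is what lets CycleGAN learn RGB-to-IR translation without paired training images.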
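The quantization step typically converts float32 weights to 8-bit integers with a per-tensor scale, shrinking the model and speeding up inference on embedded hardware. A minimal NumPy sketch of symmetric post-training quantization follows; the abstract does not specify the scheme used, so the symmetric per-tensor int8 choice here is an assumption.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.max(np.abs(w)) / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
weights = rng.normal(scale=0.05, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the round-trip error of
# symmetric rounding is bounded by half a quantization step.
max_err = np.max(np.abs(restored - weights))
print(f"scale={scale:.6f}, max round-trip error={max_err:.6f}")
```

On boards with int8-capable accelerators, running the convolutions directly in int8 (rather than dequantizing first, as this sketch does) is where the speedup comes from.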
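The formation-flight search can be viewed as partitioning the search region among the drones so that each covers a disjoint sub-area, cutting search time roughly in proportion to the number of drones. A simple sketch, assuming a rectangular area split into equal-width vertical strips (the partition scheme is our illustrative assumption, not the paper's planner):

```python
def partition_strips(width_m, height_m, n_drones):
    """Split a width_m x height_m rectangle into n_drones vertical strips.

    Returns a list of (x_min, x_max, y_min, y_max) tuples, one strip per drone.
    """
    strip_w = width_m / n_drones
    return [(i * strip_w, (i + 1) * strip_w, 0.0, height_m)
            for i in range(n_drones)]

# Example: a 300 m x 200 m search area shared by 3 drones.
strips = partition_strips(300.0, 200.0, 3)
for i, s in enumerate(strips):
    print(f"drone {i}: x in [{s[0]:.0f}, {s[1]:.0f}] m, y in [{s[2]:.0f}, {s[3]:.0f}] m")
```

Each drone would then fly a coverage pattern (e.g. a lawnmower sweep) inside its own strip while running the onboard detector.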