Related Researcher

Oh, Hyondong (오현동), Autonomous Systems Lab.


Detailed Information


Monocular vision-based time-to-collision estimation for small drones by domain adaptation of simulated images

Author(s)
Kim, Minwoo; Ladosz, Pawel; Oh, Hyondong
Issued Date
2022-08
DOI
10.1016/j.eswa.2022.116973
URI
https://scholarworks.unist.ac.kr/handle/201301/58578
Fulltext
https://linkinghub.elsevier.com/retrieve/pii/S0957417422003992
Citation
EXPERT SYSTEMS WITH APPLICATIONS, v.199, art. no. 116973
Abstract
Recently, there has been an increasing demand for small drones owing to their small size and agility in complex indoor environments. Accordingly, safety issues in navigating small drones have become significantly important. For drones to navigate safely through complex environments, it is useful to estimate an accurate time-to-collision (TTC) to obstacles. To this end, in this paper, we propose a deep learning-based TTC estimation algorithm. To train generalizable neural networks for TTC estimation, large datasets including collision cases are needed. However, in real-world environments, it is impractical and infeasible to collide drones with obstacles to collect a significant amount of data. Simulation environments could facilitate the data acquisition procedure, but the data from simulated environments can differ considerably from those of real environments, a discrepancy commonly termed the reality gap. In this study, to reduce this reality gap, sim-to-real methods based on a variant of the generative adversarial network are used to convert simulated images into real-world-like synthetic images. In addition, to account for the uncertainties that come from using the synthetic dataset, an aleatoric loss function and the Monte Carlo dropout method are employed. Furthermore, we improve the performance of the deep learning-based TTC estimation algorithm by replacing conventional convolutional neural networks (CNNs) with convolutional long short-term memory (ConvLSTM) layers, which are known to handle time-series data better than CNNs. To validate the performance of the proposed framework, real flight experiments were carried out in various indoor environments. Our proposed framework decreases the average TTC estimation error by 0.21 s compared with the baseline approach with CNNs.
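
The abstract names two uncertainty-handling techniques applied to a regression output such as TTC: an aleatoric (heteroscedastic) loss and Monte Carlo dropout for epistemic uncertainty. The following is a minimal, illustrative sketch of how these two mechanisms are commonly combined, assuming PyTorch and hypothetical layer sizes (TTCHead, aleatoric_loss, and mc_dropout_predict are made-up names); it is not the authors' implementation and omits the ConvLSTM feature extractor and the sim-to-real GAN described in the paper.

import torch
import torch.nn as nn

class TTCHead(nn.Module):
    """Toy regression head predicting a TTC mean and a log aleatoric variance."""
    def __init__(self, in_features=128, p_drop=0.2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Dropout(p_drop),  # kept active at test time for MC dropout
        )
        self.mean_head = nn.Linear(64, 1)     # predicted TTC (seconds)
        self.log_var_head = nn.Linear(64, 1)  # predicted log aleatoric variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.log_var_head(h)

def aleatoric_loss(mean, log_var, target):
    # Heteroscedastic Gaussian negative log-likelihood: a large predicted
    # variance down-weights the squared error but is penalised by the
    # log-variance term, so the network cannot inflate it for free.
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    # Epistemic uncertainty: keep dropout enabled and average several
    # stochastic forward passes; the spread of the samples is the estimate.
    model.train()  # enables dropout; gradients are disabled by the decorator
    samples = torch.stack([model(x)[0] for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

if __name__ == "__main__":
    model = TTCHead()
    feats = torch.randn(8, 128)        # stand-in for ConvLSTM image features
    ttc_true = torch.rand(8, 1) * 5.0  # dummy TTC targets in seconds
    mean, log_var = model(feats)
    aleatoric_loss(mean, log_var, ttc_true).backward()
    ttc_hat, epistemic_var = mc_dropout_predict(model, feats)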
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
ISSN
0957-4174
Keyword (Author)
Time-to-collision estimation; Aleatoric uncertainty; Epistemic uncertainty; Monte Carlo dropout; Convolutional LSTM; Navigation decision making; Vision-based approach


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.