Monocular depth estimation from Events and Images using Local distribution learning

Author(s)
Park, Harin
Advisor
Joo, Kyungdon
Issued Date
2024-08
URI
https://scholarworks.unist.ac.kr/handle/201301/84194
http://unist.dcollection.net/common/orgView/200000813195
Abstract
Event cameras and conventional cameras are complementary sensors. This study therefore proposes a monocular depth estimation network that fuses the two modalities. The proposed network consists of three main parts. First, I apply an event refinement module to handle noisy events in low-light conditions; this module removes the noise and enhances the real active events. Second, I use a recurrent asynchronous encoder to account for the asynchronous nature of the event camera. This encoder fuses the two unsynchronized modalities while preserving the asynchronous property of events and the benefit of high temporal resolution. Finally, I adopt local distribution learning in the form of LocalBins. Event data captures fine scene details regardless of lighting changes, so I employ a LocalBins-based decoder to exploit these local details from events more effectively. To validate the proposed network, I compare it with the baseline, RAMNet, on the MVSEC dataset. The proposed network outperforms the baseline on almost all sequences, and the qualitative results confirm that it predicts thin objects and their contours more accurately.
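
The abstract describes a three-stage pipeline: event refinement, recurrent asynchronous fusion of events and images, and a LocalBins-style depth head. The following is a minimal PyTorch sketch of how such a pipeline could be wired together; all class names, layer configurations, and the bin computation are illustrative assumptions based only on the abstract, not the thesis's actual implementation.

# Hypothetical sketch of the pipeline the abstract describes:
# event refinement -> recurrent asynchronous fusion -> LocalBins-style depth head.
import torch
import torch.nn as nn


class EventRefinement(nn.Module):
    """Gates the event voxel grid to suppress noise and keep active events."""
    def __init__(self, ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        return events * self.gate(events)  # element-wise noise suppression


class RecurrentAsyncEncoder(nn.Module):
    """Keeps a recurrent feature state updated by whichever modality arrives."""
    def __init__(self, event_ch: int, image_ch: int, feat_ch: int):
        super().__init__()
        self.event_embed = nn.Conv2d(event_ch, feat_ch, 3, padding=1)
        self.image_embed = nn.Conv2d(image_ch, feat_ch, 3, padding=1)
        # ConvGRU-like update implemented here with a single convolution.
        self.update = nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1)

    def forward(self, x: torch.Tensor, state: torch.Tensor, is_event: bool) -> torch.Tensor:
        feat = self.event_embed(x) if is_event else self.image_embed(x)
        return torch.tanh(self.update(torch.cat([feat, state], dim=1)))


class LocalBinsHead(nn.Module):
    """Predicts per-pixel depth bins and a distribution over them (LocalBins-style)."""
    def __init__(self, feat_ch: int, n_bins: int = 64,
                 min_depth: float = 0.1, max_depth: float = 80.0):
        super().__init__()
        self.min_depth, self.max_depth = min_depth, max_depth
        self.bin_widths = nn.Conv2d(feat_ch, n_bins, 1)  # per-pixel bin widths
        self.logits = nn.Conv2d(feat_ch, n_bins, 1)      # per-pixel bin scores

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        widths = torch.softmax(self.bin_widths(feat), dim=1)        # sums to 1 per pixel
        widths = widths * (self.max_depth - self.min_depth)
        edges = self.min_depth + torch.cumsum(widths, dim=1)
        centers = edges - 0.5 * widths                              # per-pixel bin centers
        probs = torch.softmax(self.logits(feat), dim=1)
        return (probs * centers).sum(dim=1, keepdim=True)           # expected depth


if __name__ == "__main__":
    refine = EventRefinement(ch=5)
    encoder = RecurrentAsyncEncoder(event_ch=5, image_ch=3, feat_ch=32)
    head = LocalBinsHead(feat_ch=32)

    state = torch.zeros(1, 32, 64, 64)
    events = torch.randn(1, 5, 64, 64)  # event voxel grid
    image = torch.randn(1, 3, 64, 64)   # intensity frame arriving later

    state = encoder(refine(events), state, is_event=True)  # asynchronous event update
    state = encoder(image, state, is_event=False)          # image update when available
    depth = head(state)
    print(depth.shape)  # torch.Size([1, 1, 64, 64])

The recurrent state lets each modality update the shared features at its own rate, which is one plausible reading of how the encoder preserves the asynchronous, high-temporal-resolution nature of events while still fusing in image information.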
Publisher
Ulsan National Institute of Science and Technology
Degree
Master
Major
Graduate School of Artificial Intelligence

