Related Researcher

Joo, Kyungdon (주경돈)
Robotics and Visual Intelligence Lab.


Full metadata record

DC Field  Value
dc.citation.endPage  5102
dc.citation.number  2
dc.citation.startPage  5095
dc.citation.title  IEEE ROBOTICS AND AUTOMATION LETTERS
dc.citation.volume  7
dc.contributor.author  Park, Jinsun
dc.contributor.author  Jeong, Yongseop
dc.contributor.author  Joo, Kyungdon
dc.contributor.author  Cho, Donghyeon
dc.contributor.author  Kweon, In So
dc.date.accessioned  2023-12-21T14:17:56Z
dc.date.available  2023-12-21T14:17:56Z
dc.date.created  2022-04-04
dc.date.issued  2022-04
dc.description.abstract  In this letter, we propose an adaptive cost volume fusion algorithm for multi-modal depth estimation in changing environments. Our method takes measurements from multi-modal sensors to exploit their complementary characteristics and generates depth cues from each modality in the form of adaptive cost volumes using deep neural networks. The proposed adaptive cost volume considers sensor configurations and computational costs to resolve the imbalanced and redundant depth bases problem of conventional cost volumes. We further extend its role to a generalized depth representation and propose a geometry-aware cost fusion algorithm. Our unified and geometrically consistent depth representation leads to accurate and efficient multi-modal sensor fusion, which is crucial for robustness to changing environments. To validate the proposed framework, we introduce a new multi-modal depth in changing environments (MMDCE) dataset. The dataset was collected by our own vehicular system with RGB, NIR, and LiDAR sensors in changing environments. Experimental results demonstrate that our method is robust, accurate, and reliable in changing environments. Our code and dataset are available at our project page.
dc.identifier.bibliographicCitation  IEEE ROBOTICS AND AUTOMATION LETTERS, v.7, no.2, pp.5095-5102
dc.identifier.doi  10.1109/LRA.2022.3150868
dc.identifier.issn  2377-3766
dc.identifier.scopusid  2-s2.0-85124822062
dc.identifier.uri  https://scholarworks.unist.ac.kr/handle/201301/57734
dc.identifier.url  https://ieeexplore.ieee.org/document/9712358
dc.identifier.wosid  000770005100002
dc.language  English
dc.publisher  IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.title  Adaptive Cost Volume Fusion Network for Multi-Modal Depth Estimation in Changing Environments
dc.type  Article
dc.description.isOpenAccess  FALSE
dc.relation.journalWebOfScienceCategory  Robotics
dc.relation.journalResearchArea  Robotics
dc.type.docType  Article
dc.description.journalRegisteredClass  scie
dc.description.journalRegisteredClass  scopus
dc.subject.keywordAuthor  AI-Based methods
dc.subject.keywordAuthor  data sets for robotic vision
dc.subject.keywordAuthor  deep learning for visual perception
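The abstract describes fusing per-modality cost volumes over a shared set of depth hypotheses into one consistent depth estimate. As a rough illustration of that idea only, the NumPy sketch below fuses two cost volumes by confidence-weighted averaging and reads out depth with a soft-argmin; the function name, the peak-probability confidence weighting, and the shapes are illustrative assumptions, not the paper's learned, geometry-aware network.

```python
import numpy as np

def fuse_cost_volumes(vol_a, vol_b, depth_bins):
    """Illustrative fusion of two matching-cost volumes (not the paper's method).

    vol_a, vol_b: (D, H, W) cost volumes over the same D depth hypotheses
                  (lower cost = better match).
    depth_bins:   (D,) depth value of each hypothesis.
    Returns a (H, W) depth map.
    """
    def to_prob(vol):
        # Numerically stable softmax of negated costs along the depth axis:
        # each pixel gets a probability distribution over depth hypotheses.
        e = np.exp(-vol - np.max(-vol, axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)

    p_a, p_b = to_prob(vol_a), to_prob(vol_b)

    # Simple per-pixel confidence: the peak probability. A sharply peaked
    # distribution (a confident modality) receives a larger fusion weight.
    w_a = p_a.max(axis=0, keepdims=True)
    w_b = p_b.max(axis=0, keepdims=True)
    fused = (w_a * p_a + w_b * p_b) / (w_a + w_b)

    # Soft-argmin readout: expected depth under the fused distribution.
    return (fused * depth_bins[:, None, None]).sum(axis=0)
```

In this toy version, a modality that is unreliable in the current environment (e.g. RGB at night) produces flat cost distributions and is automatically down-weighted, which is the intuition behind fusing cost volumes rather than fusing final depth maps.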


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.