File Download

There are no files associated with this item.

Related Researcher

Joo, Kyungdon (주경돈)
Robotics and Visual Intelligence Lab.

Detailed Information

Full metadata record

DC Field Value Language
dc.citation.endPage 4679 -
dc.citation.number 3 -
dc.citation.startPage 4672 -
dc.citation.title IEEE ROBOTICS AND AUTOMATION LETTERS -
dc.citation.volume 6 -
dc.contributor.author Choe, Jaesung -
dc.contributor.author Joo, Kyungdon -
dc.contributor.author Imtiaz, Tooba -
dc.contributor.author Kweon, In So -
dc.date.accessioned 2023-12-21T15:40:23Z -
dc.date.available 2023-12-21T15:40:23Z -
dc.date.created 2021-06-01 -
dc.date.issued 2021-07 -
dc.description.abstract Stereo-LiDAR fusion is a promising task because it combines two complementary types of 3D perception: dense 3D information from stereo cameras and highly accurate sparse point clouds from LiDAR. However, because the two sensors differ in modality and structure, aligning their data is the key to successful sensor fusion. To this end, we propose a geometry-aware stereo-LiDAR fusion network for long-range depth estimation, called the volumetric propagation network. The key idea of our network is to exploit sparse and accurate point clouds as a cue for guiding correspondences of stereo images in a unified 3D volume space. Unlike existing fusion strategies, we embed point clouds directly into the volume, which enables us to propagate valid information into nearby voxels and to reduce the uncertainty of correspondences. This allows us to fuse the two input modalities seamlessly and regress a long-range depth map. Our fusion is further enhanced by FusionConv, a newly proposed image-guided feature extraction layer for point clouds. FusionConv extracts point cloud features that consider both semantic (2D image domain) and geometric (3D domain) relations and aid fusion in the volume. Our network achieves state-of-the-art performance among recent stereo-LiDAR fusion methods on the KITTI and Virtual-KITTI datasets. -
dc.identifier.bibliographicCitation IEEE ROBOTICS AND AUTOMATION LETTERS, v.6, no.3, pp.4672 - 4679 -
dc.identifier.doi 10.1109/LRA.2021.3068712 -
dc.identifier.issn 2377-3766 -
dc.identifier.scopusid 2-s2.0-85103265998 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/52939 -
dc.identifier.url https://ieeexplore.ieee.org/document/9385917 -
dc.identifier.wosid 000640765600032 -
dc.language English -
dc.publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC -
dc.title Volumetric Propagation Network: Stereo-LiDAR Fusion for Long-Range Depth Estimation -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Robotics -
dc.relation.journalResearchArea Robotics -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor stereo-LiDAR fusion -
dc.subject.keywordAuthor Three-dimensional displays -
dc.subject.keywordAuthor Feature extraction -
dc.subject.keywordAuthor Cameras -
dc.subject.keywordAuthor Estimation -
dc.subject.keywordAuthor Laser radar -
dc.subject.keywordAuthor Robot sensing systems -
dc.subject.keywordAuthor Two dimensional displays -
dc.subject.keywordAuthor Autonomous driving -
dc.subject.keywordAuthor depth estimation -
dc.subject.keywordAuthor sensor fusion -
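The abstract's central idea, embedding sparse but accurate LiDAR depth measurements directly into a 3D volume over depth hypotheses and propagating that evidence into nearby voxels, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' network: the function name, the uniform depth-plane sampling, and the fixed propagation radius are all hypothetical stand-ins for the paper's learned volumetric propagation.

```python
import numpy as np

def embed_points_in_volume(points, H, W, d_min=2.0, d_max=80.0,
                           n_planes=64, radius=1):
    """Toy sketch of the paper's idea (all details here are assumptions).

    points: iterable of (row, col, depth) sparse LiDAR measurements
            already projected into the image plane.
    Returns a (n_planes, H, W) confidence volume and the depth planes.
    """
    # Uniformly sampled depth hypothesis planes (the paper may sample differently).
    planes = np.linspace(d_min, d_max, n_planes)
    vol = np.zeros((n_planes, H, W), dtype=np.float32)
    for v, u, z in points:
        if not (d_min <= z <= d_max):
            continue  # measurement outside the volume's depth range
        # Snap the measurement to its nearest depth plane.
        k = int(np.argmin(np.abs(planes - z)))
        # Propagate the evidence into a small neighborhood of voxels,
        # a crude stand-in for the learned propagation in the network.
        k0, k1 = max(0, k - radius), min(n_planes, k + radius + 1)
        v0, v1 = max(0, v - radius), min(H, v + radius + 1)
        u0, u1 = max(0, u - radius), min(W, u + radius + 1)
        vol[k0:k1, v0:v1, u0:u1] = np.maximum(vol[k0:k1, v0:v1, u0:u1], 1.0)
    return vol, planes
```

In the actual method this LiDAR evidence would be fused with stereo matching costs in the same volume before depth regression; the sketch only shows why direct embedding lets one sparse point constrain a whole neighborhood of correspondence hypotheses.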


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.