dc.citation.conferencePlace | CC | -
dc.citation.title | IEEE International Conference on Robotics and Automation | -
dc.contributor.author | Choe, Jaesung | -
dc.contributor.author | Joo, Kyungdon | -
dc.contributor.author | Rameau, Francois | -
dc.contributor.author | Kweon, In So | -
dc.date.accessioned | 2024-01-31T21:40:50Z | -
dc.date.available | 2024-01-31T21:40:50Z | -
dc.date.created | 2022-01-07 | -
dc.date.issued | 2021-06-03 | -
dc.description.abstract | This paper presents a stereo object matching method that exploits both 2D contextual information from images as well as 3D object-level information. Unlike existing stereo matching methods that exclusively focus on the pixel-level correspondence between stereo images within a volumetric space (i.e., cost volume), we exploit this volumetric structure in a different manner. The cost volume explicitly encompasses 3D information along its disparity axis, therefore it is a privileged structure that can encapsulate the 3D contextual information from objects. However, it is not straightforward since the disparity values map the 3D metric space in a non-linear fashion. Thus, we present two novel strategies to handle 3D objectness in the cost volume space: selective sampling (RoISelect) and 2D-3D fusion (fusion-by-occupancy), which allow us to seamlessly incorporate 3D object-level information and achieve accurate depth performance near the object boundary regions. Our depth estimation achieves competitive performance on the KITTI dataset and the Virtual-KITTI 2.0 dataset. | -
dc.identifier.bibliographicCitation | IEEE International Conference on Robotics and Automation | -
dc.identifier.uri | https://scholarworks.unist.ac.kr/handle/201301/77315 | -
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | -
dc.title | Stereo Object Matching Network | -
dc.type | Conference Paper | -
dc.date.conferenceDate | 2021-05-31 | -
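
The abstract notes that disparity values map 3D metric space in a non-linear fashion. As a minimal illustration of that standard rectified-stereo relation (depth = focal length × baseline / disparity), and not of the paper's RoISelect or fusion-by-occupancy modules, the sketch below converts a few disparity bins to metric depth; the focal length and baseline are assumed, roughly KITTI-like numbers, not values taken from the paper.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m, eps=1e-6):
    """Convert disparity (pixels) to metric depth (metres) for a rectified stereo pair."""
    return focal_px * baseline_m / np.maximum(disparity_px, eps)

# Assumed, illustrative camera geometry (approximately KITTI-like).
focal_px, baseline_m = 721.5, 0.54

# Uniform disparity steps, as along a cost-volume disparity axis.
disparities = np.arange(1, 6, dtype=np.float32)   # 1..5 px
depths = disparity_to_depth(disparities, focal_px, baseline_m)
print(np.round(depths, 1))
# The depth gap between consecutive disparity bins is large at small disparities
# (far range) and shrinks at large disparities (near range), i.e. uniform
# disparity sampling is non-uniform in metric depth.
```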