File Download

There are no files associated with this item.

  • Find it @ UNIST provides direct access to the published full text of this article (UNISTARs only).
Related Researcher

Joo, Kyungdon (주경돈)
Robotics and Visual Intelligence Lab.

Full metadata record

DC Field Value Language
dc.citation.endPage 2620 -
dc.citation.number 2 -
dc.citation.startPage 2613 -
dc.citation.title IEEE ROBOTICS AND AUTOMATION LETTERS -
dc.citation.volume 7 -
dc.contributor.author Kim, Pyojin -
dc.contributor.author Li, Haoang -
dc.contributor.author Joo, Kyungdon -
dc.date.accessioned 2023-12-21T14:19:02Z -
dc.date.available 2023-12-21T14:19:02Z -
dc.date.created 2022-02-17 -
dc.date.issued 2022-04 -
dc.description.abstract We present a drift-free visual compass for estimating the three degrees of freedom (DoF) rotational motion of a camera by recognizing structural regularities in a Manhattan world (MW), which posits that the major structures conform to three orthogonal principal directions. Existing Manhattan frame estimation approaches are based on either data sampling or a parameter search, and fail to guarantee accuracy and efficiency simultaneously. To overcome these limitations, we propose a novel approach to hybridize these two strategies, achieving quasi-global optimality and high efficiency. We first compute the two DoF of the camera orientation by detecting and tracking a vertical dominant direction from a depth camera or an IMU, and then search for the optimal third DoF with the image lines through the proposed Manhattan Mine-and-Stab (MnS) approach. Once we find the initial rotation estimate of the camera, we refine the absolute camera orientation by minimizing the average orthogonal distance from the endpoints of the lines to the MW axes. We compare the proposed algorithm with other state-of-the-art approaches on a variety of real-world datasets including data from a drone flying in an urban environment, and demonstrate that the proposed method outperforms them in terms of accuracy, efficiency, and stability. The code is available on the project page: https://github.com/PyojinKim/MWMS -
dc.identifier.bibliographicCitation IEEE ROBOTICS AND AUTOMATION LETTERS, v.7, no.2, pp.2613 - 2620 -
dc.identifier.doi 10.1109/LRA.2022.3141751 -
dc.identifier.issn 2377-3766 -
dc.identifier.scopusid 2-s2.0-85123290566 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/57297 -
dc.identifier.url https://ieeexplore.ieee.org/document/9678090 -
dc.identifier.wosid 000750158000004 -
dc.language English -
dc.publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC -
dc.title Quasi-Globally Optimal and Real-Time Visual Compass in Manhattan Structured Environments -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Robotics -
dc.relation.journalResearchArea Robotics -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Vision-based navigation -
dc.subject.keywordAuthor computer vision for transportation -
dc.subject.keywordAuthor sensor fusion -
dc.subject.keywordAuthor RGB-D perception -
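
For illustration only: the abstract above describes recovering the last rotational DoF by a one-dimensional search over the yaw angle once the vertical direction is known from a depth camera or an IMU. The sketch below is a naive exhaustive-search stand-in for that step, written in NumPy; it is not the authors' Mine-and-Stab (MnS) algorithm (their code is linked in the abstract: https://github.com/PyojinKim/MWMS), and the residual definition, truncation threshold tau, and grid resolution are assumptions made here, not taken from the paper.

    import numpy as np

    def orthonormal_basis(v):
        # Two unit vectors spanning the plane orthogonal to unit vector v.
        t = np.array([1.0, 0.0, 0.0]) if abs(v[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        a = np.cross(v, t)
        a /= np.linalg.norm(a)
        return a, np.cross(v, a)

    def search_third_dof(v, line_normals, num_steps=900, tau=0.05):
        # Exhaustive 1-DoF search over the yaw angle about the known
        # vertical axis v (a simple stand-in for the paper's MnS search).
        # line_normals: (N, 3) unit normals of the interpretation planes of
        # detected image lines; a line parallel to a Manhattan axis d gives
        # dot(n, d) ~= 0, so |dot(n, d)| serves as the alignment residual.
        a, b = orthonormal_basis(v)
        best_theta, best_cost = 0.0, np.inf
        # Manhattan symmetry: the horizontal axes repeat every 90 degrees.
        for theta in np.linspace(0.0, np.pi / 2, num_steps, endpoint=False):
            h1 = np.cos(theta) * a + np.sin(theta) * b   # first horizontal axis
            h2 = np.cross(v, h1)                          # second horizontal axis
            dots = np.abs(line_normals @ np.column_stack([v, h1, h2]))
            res = dots.min(axis=1)                        # nearest-axis residual per line
            cost = np.minimum(res, tau).mean()            # truncated loss (tau assumed)
            if cost < best_cost:
                best_cost, best_theta = cost, theta
        h1 = np.cos(best_theta) * a + np.sin(best_theta) * b
        # Columns are the recovered Manhattan axes expressed in camera coordinates.
        return np.column_stack([h1, np.cross(v, h1), v])

A brute-force grid like this trades the efficiency guarantees of the paper's branch-and-bound-style search for simplicity; it only illustrates why fixing two DoF from the vertical direction reduces Manhattan frame estimation to a cheap 1-D problem.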


Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.