
Full metadata record

DC Field Value Language
dc.citation.endPage 5549 -
dc.citation.number 5 -
dc.citation.startPage 5542 -
dc.citation.title IEEE ROBOTICS AND AUTOMATION LETTERS -
dc.citation.volume 11 -
dc.contributor.author Shin, Woojae -
dc.contributor.author Kim, Minwoo -
dc.contributor.author Park, Taewook -
dc.contributor.author Bae, Geunsik -
dc.contributor.author Kim, Seunghwan -
dc.contributor.author Oh, Hyondong -
dc.date.accessioned 2026-04-06T17:22:17Z -
dc.date.available 2026-04-06T17:22:17Z -
dc.date.created 2026-04-06 -
dc.date.issued 2026-05 -
dc.description.abstract This paper addresses vision-based autonomous landing of quadrotor drones on moving platforms with uncertain motion. Vision is attractive due to its low weight, low cost, and ability to provide direct relative observations without global reference frames. However, traditional visual landing relies on accurate estimation and careful tuning, which limits robustness. Deep reinforcement learning (DRL) offers a data-driven alternative but often degrades under motion uncertainty or intermittent visual loss from a limited field of view (FOV). The key challenge is active perception: maintaining visual observability of the landing pad under FOV constraints during aggressive maneuvers. To address this challenge, we propose a vision-based DRL framework that jointly learns perception, estimation, and control, guided by an active-perception reward that couples visibility maintenance with control performance for stable touchdown. Simulation results demonstrate robustness over visual servoing and an existing DRL baseline, including landings on a platform moving at speeds up to 8 m/s under limited visibility. Real-world experiments further confirm the feasibility and stability of the proposed approach. -
dc.identifier.bibliographicCitation IEEE ROBOTICS AND AUTOMATION LETTERS, v.11, no.5, pp.5542 - 5549 -
dc.identifier.doi 10.1109/LRA.2026.3674011 -
dc.identifier.issn 2377-3766 -
dc.identifier.scopusid 2-s2.0-105032806016 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/91190 -
dc.identifier.wosid 001723028900002 -
dc.language English -
dc.publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC -
dc.title Vision-Based Autonomous Drone Landing on Moving Platforms With Uncertain Motion via Deep Reinforcement Learning -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Robotics -
dc.relation.journalResearchArea Robotics -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Drones -
dc.subject.keywordAuthor Visualization -
dc.subject.keywordAuthor State estimation -
dc.subject.keywordAuthor Observability -
dc.subject.keywordAuthor Cameras -
dc.subject.keywordAuthor Estimation -
dc.subject.keywordAuthor Training -
dc.subject.keywordAuthor Perturbation methods -
dc.subject.keywordAuthor Visual servoing -
dc.subject.keywordAuthor Location awareness -
dc.subject.keywordAuthor Aerial systems: perception and autonomy -
dc.subject.keywordAuthor reinforcement learning -
dc.subject.keywordAuthor aerial systems: applications -
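The abstract describes an active-perception reward that couples visibility maintenance of the landing pad with control performance. The paper's exact reward formulation is not given here, so the following is only a minimal illustrative sketch under assumed terms: a visibility term that rewards keeping the pad near the image center (and penalizes losing it from the FOV), plus exponentially shaped penalties on assumed relative position and velocity errors. All function names, terms, and weights are hypothetical.

```python
import math

def active_perception_reward(pad_offset_px, image_size_px,
                             rel_pos_err_m, rel_vel_err_mps,
                             w_vis=1.0, w_pos=0.5, w_vel=0.2):
    """Illustrative active-perception reward (assumed form, not the paper's).

    pad_offset_px   -- (x, y) offset of the pad from the image center, pixels
    image_size_px   -- image width/height, pixels (square image assumed)
    rel_pos_err_m   -- relative position error to the moving platform, meters
    rel_vel_err_mps -- relative velocity error to the platform, m/s
    """
    half = image_size_px / 2.0
    # Visibility term: 1 at the image center, ~0 at the FOV edge,
    # and a fixed penalty when the pad leaves the FOV entirely.
    in_fov = abs(pad_offset_px[0]) <= half and abs(pad_offset_px[1]) <= half
    if in_fov:
        r_vis = 1.0 - math.hypot(*pad_offset_px) / math.hypot(half, half)
    else:
        r_vis = -1.0
    # Control terms: exponential shaping so small tracking errors
    # are rewarded smoothly and large errors decay toward zero reward.
    r_pos = math.exp(-rel_pos_err_m)
    r_vel = math.exp(-rel_vel_err_mps)
    return w_vis * r_vis + w_pos * r_pos + w_vel * r_vel
```

In a sketch like this, the visibility term is what makes the perception "active": the policy is rewarded for maneuvering so the pad stays observable, rather than only for minimizing tracking error.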
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.