IEEE ROBOTICS AND AUTOMATION LETTERS, VOL. 11, NO. 5, PP. 5542–5549
Abstract
This paper addresses vision-based autonomous landing of quadrotor drones on moving platforms with uncertain motion. Vision is attractive due to its low weight, low cost, and ability to provide direct relative observations without a global reference frame. However, traditional visual landing relies on accurate state estimation and careful controller tuning, which limits robustness. Deep reinforcement learning (DRL) offers a data-driven alternative but often degrades under motion uncertainty or intermittent visual loss caused by a limited field of view (FOV). The key challenge is active perception: maintaining visual observability of the landing pad under FOV constraints during aggressive maneuvers. To address this challenge, we propose a vision-based DRL framework that jointly learns perception, estimation, and control, guided by an active-perception reward that couples visibility maintenance with control performance for stable touchdown. Simulation results demonstrate improved robustness over visual-servoing and existing DRL baselines, including landings on a platform moving at speeds up to 8 m/s under limited visibility. Real-world experiments further confirm the feasibility and stability of the proposed approach.
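As a rough illustration of how an active-perception reward might couple visibility maintenance with control performance, the sketch below combines a field-of-view visibility term with tracking-error and touchdown-velocity penalties. It is a minimal sketch under assumed conventions, not the paper's actual reward: the weights `w_vis`, `w_track`, `w_vel`, the helper `pad_bearing_angle`, and the specific term shapes are all hypothetical.

```python
import numpy as np

def pad_bearing_angle(drone_pos, pad_pos, camera_axis):
    """Angle between the camera's optical axis and the line of sight to the pad."""
    los = pad_pos - drone_pos
    los = los / np.linalg.norm(los)
    return np.arccos(np.clip(np.dot(camera_axis, los), -1.0, 1.0))

def active_perception_reward(drone_pos, drone_vel, pad_pos, pad_vel, camera_axis,
                             half_fov=np.deg2rad(45.0),
                             w_vis=1.0, w_track=0.5, w_vel=0.2):
    """Hypothetical shaped reward: keep the pad near the image center (visibility)
    while closing relative position and velocity for a gentle touchdown (control)."""
    angle = pad_bearing_angle(drone_pos, pad_pos, camera_axis)
    # Visibility term: 1 at the image center, decaying to 0 at the FOV boundary;
    # zero (pad lost) once the bearing angle exceeds the half-FOV.
    r_vis = max(0.0, 1.0 - angle / half_fov)
    # Tracking term: penalize horizontal offset from the moving pad.
    r_track = -np.linalg.norm((drone_pos - pad_pos)[:2])
    # Touchdown term: penalize relative velocity so contact is soft.
    r_vel = -np.linalg.norm(drone_vel - pad_vel)
    return w_vis * r_vis + w_track * r_track + w_vel * r_vel
```

Under this kind of shaping, aggressive maneuvers that drive the pad out of the camera's FOV forfeit the visibility term, so the learned policy is pushed to trade off pursuit speed against keeping the pad observable.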