Accurate and consistent calibration among multiple cameras is a fundamental requirement for 3D perception systems, especially in autonomous driving and real-time scene understanding. However, conventional offline calibration methods rely on artificial markers and static environments, making them unsuitable for dynamic, large-scale applications where conditions change frequently. This thesis proposes a novel online multi-camera calibration framework designed for deployment in real-world driving scenarios without the use of fiducial markers or controlled environments.

The proposed system leverages both spatial constraints (from overlapping views and known rig topology) and temporal loop closures (from repeated scene observations) to continuously refine inter-camera extrinsics during motion. It operates entirely on natural driving sequences, allowing calibration to be performed and updated while the system is in use. A modular architecture separates key stages such as feature tracking, triangulation, and non-linear optimization, ensuring adaptability across hardware setups and extensibility to additional sensing modalities.

Extensive experiments, including evaluations on custom driving datasets and ablation studies, demonstrate that the framework achieves high precision and repeatability without requiring manual intervention. Compared to traditional offline calibration pipelines, the proposed method offers improved robustness, scalability, and long-term consistency, making it suitable for continuous operation in autonomous systems.
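The extrinsic-refinement stage described above can be illustrated with a minimal sketch. Note this is an assumption-laden stand-in, not the thesis's actual optimizer: it uses a closed-form rigid alignment (Kabsch/Procrustes) between triangulated 3D points seen by two cameras, whereas the proposed framework performs incremental non-linear optimization over the full rig. The function name and inputs are hypothetical.

```python
import numpy as np

def refine_extrinsics(pts_a: np.ndarray, pts_b: np.ndarray):
    """Estimate the rigid transform (R, t) mapping points from camera A's
    frame to camera B's frame, given N corresponding 3D points (N x 3).

    Illustrative only: a closed-form Kabsch alignment standing in for the
    thesis's non-linear, continuously updated refinement stage.
    """
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (pts_a - ca).T @ (pts_b - cb)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det(R) = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

In an online setting, a loop of this shape would run on each batch of freshly triangulated correspondences, with the result fed into a filter or sliding-window optimizer rather than taken as a one-shot answer.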
Publisher: Ulsan National Institute of Science and Technology