Related Researcher

Oh, Hyondong (오현동)
Autonomous Systems Lab.

인공 신경망 경량화 알고리듬을 활용한 마커 인식 연구 (Marker Detection Using a Neural Network Compression Algorithm)

Alternative Title
Detection of Fiducial Marker With Neural Network Compression
Author(s)
박태욱; 신희중; 오현동
Issued Date
2023-08
DOI
10.5302/J.ICROS.2023.23.0054
URI
https://scholarworks.unist.ac.kr/handle/201301/65317
Citation
제어·로봇·시스템학회 논문지 (Journal of Institute of Control, Robotics and Systems), v.29, no.8, pp. 628-635
Abstract
Fiducial markers are used to localize camera positions and are widely employed in fields that require fast and highly accurate positioning, including AR (Augmented Reality), VR (Virtual Reality), PCB (Printed Circuit Board) manufacturing, and robot localization research. Over the past 20 years, many fiducial marker designs and detection algorithms have been proposed to improve detection rates, enlarge the marker family, or save computational resources. However, most of these algorithms work well only under constrained conditions, such as good lighting, minimal motion blur, or the absence of shadows. These limitations can be addressed with learning-based methods, but such methods often suffer from high computational loads or the need to collect training datasets. To overcome these limitations, we introduce a novel fiducial marker detection algorithm together with neural network compression. By using a feature detection network with a simple circle-based fiducial marker, training datasets can be fully synthesized with real-world noise taken into account, removing the effort of collecting and labeling data. Since many fiducial marker applications run on computationally constrained embedded systems, TD (Tensor Decomposition) and QAT (Quantization Aware Training) are applied to the network to reduce the number of parameters and improve inference speed. We demonstrate that our neural network compression approach preserves overall performance while reducing network parameters by 55.48% and accelerating inference by 569% on an NVIDIA Jetson Xavier NX. Furthermore, we validate our methods on real-world images taken by a flying drone.
Publisher
제어·로봇·시스템학회 (Institute of Control, Robotics and Systems, ICROS)
ISSN
1976-5622
Keyword (Author)
deep learning; computer vision; fiducial marker; pose estimation; neural network quantization; neural network compression
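
Illustration of the compression steps
The abstract pairs two compression steps, tensor decomposition and quantization-aware training. As a purely illustrative sketch (this record contains no code, and the authors' actual network, decomposition ranks, and training setup are not given here), the PyTorch snippet below shows both ideas in miniature: a truncated-SVD low-rank factorization of a single linear layer as a simple stand-in for tensor decomposition of convolutional kernels, followed by PyTorch's standard eager-mode QAT workflow on a toy detector. TinyDetector, low_rank_linear, and all layer sizes are hypothetical and unrelated to the paper's model.

import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat, convert,
)


class TinyDetector(nn.Module):
    # Toy stand-in for a marker feature-detection backbone (hypothetical).
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()        # inputs become int8 after convert()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel marker score map
        self.dequant = DeQuantStub()     # back to float for post-processing

    def forward(self, x):
        x = self.quant(x)
        x = self.features(x)
        x = self.head(x)
        return self.dequant(x)


def low_rank_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    # Replace one Linear layer by two thinner ones via truncated SVD:
    # W (out x in) ~= U_r @ diag(S_r) @ Vh_r, i.e. (in -> rank -> out).
    U, S, Vh = torch.linalg.svd(layer.weight.data, full_matrices=False)
    U_r, S_r, Vh_r = U[:, :rank], S[:rank], Vh[:rank, :]
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = torch.diag(S_r) @ Vh_r
    second.weight.data = U_r
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)


if __name__ == "__main__":
    # 1) Low-rank factorization: parameter count drops, output is approximated.
    fc = nn.Linear(256, 128)
    fc_lr = low_rank_linear(fc, rank=32)
    print(sum(p.numel() for p in fc.parameters()),      # 32,896 parameters
          "->",
          sum(p.numel() for p in fc_lr.parameters()))   # 12,416 parameters

    # 2) Quantization-aware training (eager-mode workflow): fake quantization
    #    runs during training so the converted int8 model loses little accuracy.
    model = TinyDetector().train()
    model.qconfig = get_default_qat_qconfig("fbgemm")    # x86 backend; "qnnpack" on ARM
    qat_model = prepare_qat(model)

    dummy = torch.randn(2, 3, 64, 64)
    qat_model(dummy)       # stands in for a training loop on synthetic marker images

    int8_model = convert(qat_model.eval())
    print(int8_model(dummy).shape)    # torch.Size([2, 1, 64, 64])

Because the fake-quantization observers are active during training, the network learns weights that survive the later int8 conversion, which is what lets QAT reduce inference cost on an embedded board with less accuracy loss than naive post-training quantization.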

