Fiducial markers are used to localize camera positions and are widely employed in fields that require fast and highly accurate positioning, including AR (Augmented Reality), VR (Virtual Reality), PCB (Printed Circuit Board) manufacturing, and robot localization research. Over the past 20 years, many fiducial marker designs and detection algorithms have been proposed to improve detection rates, enlarge the marker family, or save computational resources. However, most of these algorithms work well only in constrained environments, such as good lighting, minimal motion blur, or the absence of shadows. These limitations can be addressed with learning-based methods, but those often suffer from high computational loads or the need to collect training datasets. To overcome these limitations, we introduce a novel fiducial marker detection algorithm together with a neural network compression scheme. By pairing a feature detection network with a simple circle-based fiducial marker, training datasets can be fully synthesized with real-world noise modeled in, removing the effort of collecting and labeling data. Since many fiducial marker applications run on computationally constrained embedded systems, TD (Tensor Decomposition) and QAT (Quantization Aware Training) are applied to the network to reduce its parameter count and improve inference speed. We demonstrate that our compression approach preserves overall performance while reducing network parameters by 55.48% and accelerating inference by 569% on an NVIDIA Jetson Xavier NX. Furthermore, we validate our methods on real-world images taken by a flying drone.
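The idea of fully synthesizing training data for a circular marker can be illustrated with a toy sketch: render a circle at a random position, then corrupt the image with sensor noise and motion blur so the keypoint label comes for free. The function name, marker rendering, and noise model below are illustrative assumptions, not the paper's actual synthesis pipeline.

```python
import numpy as np

def synth_marker_image(size=64, radius=20, noise_std=0.05, blur=3, seed=0):
    """Render a circular fiducial marker on a blank image and corrupt it
    with Gaussian noise plus a crude horizontal motion blur, yielding an
    (image, center) training pair without any manual labeling."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:size, :size]
    cx, cy = rng.uniform(radius, size - radius, 2)  # random marker center
    img = ((xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2).astype(float)
    # horizontal motion blur: average `blur` shifted copies of the image
    img = np.mean([np.roll(img, s, axis=1) for s in range(blur)], axis=0)
    img += rng.normal(0.0, noise_std, img.shape)   # simulated sensor noise
    return np.clip(img, 0.0, 1.0), (cx, cy)

img, center = synth_marker_image()
print(img.shape, center)
```

A real pipeline would also vary scale, perspective, lighting, and background, but the principle is the same: every synthetic image carries an exact ground-truth keypoint.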
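Tensor decomposition compresses a layer by replacing its weight tensor with a product of smaller factors. The matrix (two-way) case makes the parameter arithmetic easy to see; the truncated-SVD sketch below is a simplified stand-in for the decomposition applied in the paper, with the function name and shapes chosen for illustration.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Factor a dense weight matrix W (m x n) into U (m x rank) and
    V (rank x n) via truncated SVD, so W ~= U @ V with fewer parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # absorb singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))           # hypothetical layer weights
U_r, V_r = low_rank_factorize(W, rank=32)
reduction = 100 * (1 - (U_r.size + V_r.size) / W.size)
print(f"parameter reduction: {reduction:.1f}%")  # -> 81.2% at rank 32
```

For convolutional layers the same idea applies along the tensor's modes (e.g. Tucker or CP decomposition), trading a small approximation error for fewer parameters and multiply-accumulates.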
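The core mechanism of QAT is "fake quantization": during training, weights and activations are rounded to the levels an int8 engine would use and then dequantized, so the network learns to tolerate the rounding error (gradients typically flow through via the straight-through estimator). The symmetric per-tensor scheme below is a minimal sketch, not the paper's exact quantizer configuration.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Simulate integer quantization in the forward pass: map values to
    the nearest of 2**num_bits levels, then dequantize back to float."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax               # symmetric per-tensor scale
    q = np.clip(np.round(x / scale), qmin, qmax)  # integer grid values
    return q * scale                              # dequantized floats

w = np.array([-1.0, -0.4, 0.05, 0.7, 1.27])
w_q = fake_quantize(w)
print(np.max(np.abs(w - w_q)))  # rounding error bounded by scale / 2
```

At deployment time the fake-quant nodes are replaced by true int8 arithmetic, which is where the inference speedup on embedded hardware such as the Jetson Xavier NX comes from.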