Computed tomography (CT) scans are essential for the radiation treatment of cancer. Metallic implants inserted into the body for various reasons generate metal artifacts during CT scanning, which make accurate dose calculation for radiation treatment impossible. Recently, deep learning-based metal artifact reduction (MAR) methods have been proposed. However, compared to other parts of the body, conventional methods do not effectively remove metal artifacts in the head and neck area, which has a relatively complex structure and exhibits strong artifacts due to dental fillings and implants. In this study, a weighted masked criterion function is proposed that weights each region differently depending on its HU value, using a mask extracted through contrastive learning, in order to remove heavy metal artifacts in the head and neck area. The proposed method involves a three-step training process: first, training an autoencoder for the mask extraction model; second, refining mask extraction with supervised contrastive learning; and finally, training the metal artifact reduction model on a synthetic dataset with a weighted masked mean squared error (MSE) loss function. Additionally, the synthetic data generation method is modified to prevent information loss during the data synthesis process. On synthetic data, the proposed metal artifact reduction method outperforms previous models in MSE (0.00149125), structural similarity (0.97763014), and peak signal-to-noise ratio (36.34359644). The artifact index was calculated for clinical data (SNUH) and showed the lowest average value (4.95883) among the compared MAR models.
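The weighted masked MSE described above can be sketched as follows. This is a minimal illustration, assuming the extracted mask assigns an integer label to each pixel (e.g. an HU-derived tissue/metal region) and that each label receives a scalar weight; the function name, label scheme, and weight values are hypothetical, not the paper's exact formulation.

```python
import numpy as np

def weighted_masked_mse(pred, target, mask, label_weights):
    """Compute an MSE where each mask label contributes with its own
    weight. `mask` holds integer region labels (illustrative stand-in
    for the HU-based regions extracted via contrastive learning);
    `label_weights` maps label -> weight."""
    w = np.zeros(pred.shape, dtype=np.float64)
    for label, weight in label_weights.items():
        w[mask == label] = weight  # broadcast the region's weight
    return float(np.mean(w * (pred - target) ** 2))

# Toy usage: artifact-heavy region (label 1) is penalized twice as much.
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.zeros((2, 2))
mask = np.array([[0, 0], [1, 1]])
loss = weighted_masked_mse(pred, target, mask, {0: 1.0, 1: 2.0})
```

In practice such a loss would be written with autodiff-capable tensors (e.g. PyTorch) so gradients flow to the MAR network; the NumPy version only shows the weighting logic.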
Publisher
Ulsan National Institute of Science and Technology