File Download

There are no files associated with this item.

Related Researcher

이지민

Lee, Jimin
Radiation & Medical Intelligence Lab.

Detailed Information


Full metadata record

DC Field Value Language
dc.citation.endPage 113647 -
dc.citation.startPage 113634 -
dc.citation.title IEEE ACCESS -
dc.citation.volume 13 -
dc.contributor.author Ki, Juhyeong -
dc.contributor.author Lee, Wonjin -
dc.contributor.author Kim, Bitbyeol -
dc.contributor.author Kim, Dukju -
dc.contributor.author Jung, Seongmoon -
dc.contributor.author Lee, Jimin -
dc.date.accessioned 2025-08-11T12:00:00Z -
dc.date.available 2025-08-11T12:00:00Z -
dc.date.created 2025-08-11 -
dc.date.issued 2025-06 -
dc.description.abstract Deep learning-based approaches to metal artifact reduction have recently been proposed, yet these methods still struggle to effectively remove metal artifacts in head and neck computed tomography (CT) images, which have a complex structure and can contain strong artifacts due to the insertion of dental fillings and implants. These strong metal artifacts cause treatment uncertainty in radiation therapy. In this study, we propose a masked criterion function that weights each region of CT numbers using masks extracted by supervised contrastive learning to better remove metal artifacts in head and neck CT images. Applying this criterion function, a convolutional neural network-based metal artifact reduction model was trained on a synthetic dataset. We adopted a new data synthesis method to prevent the tissue information loss caused by sinogram handling. On the synthetic data, our method outperformed previous models (e.g., linear interpolation, UNet, InDuDoNet, FusionNet, Uformer) in both image quality and quantitative evaluations, achieving the lowest average calculated artifact index (26.57311). In addition, we recalculated the dose on artifact-reduced CT images and found that artifacts clearly degraded the plan quality for patients whose target is close to metal. The results of this study demonstrate that the proposed criterion function helps separate artifacts from tissues using masks extracted through supervised contrastive learning, and that the proposed model can reduce even strong artifacts using this criterion function. Our code can be found here: github.com/wonjin0403/MAR.git -
dc.identifier.bibliographicCitation IEEE ACCESS, v.13, pp.113634 - 113647 -
dc.identifier.doi 10.1109/ACCESS.2025.3583191 -
dc.identifier.issn 2169-3536 -
dc.identifier.scopusid 2-s2.0-105009418056 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/87700 -
dc.identifier.wosid 001522917600007 -
dc.language English -
dc.publisher IEEE -
dc.title Deep Learning-Based Metal Artifact Reduction With Masked Mean Squared Error Loss Function in Simulation CT for Radiation Therapy for Head and Neck Cancer -
dc.type Article -
dc.description.isOpenAccess TRUE -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
dc.subject.keywordAuthor Computed tomography -
dc.subject.keywordAuthor contrastive learning -
dc.subject.keywordAuthor head and neck -
dc.subject.keywordAuthor masked mean squared error -
dc.subject.keywordAuthor metal artifact reduction -
dc.subject.keywordAuthor radiation therapy -
dc.subject.keywordPlus IMAGE QUALITY -
dc.subject.keywordPlus TOMOGRAPHY -
dc.subject.keywordPlus NETWORK -
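The abstract describes a masked criterion function that weights each region's CT numbers using masks extracted by supervised contrastive learning. The paper's exact loss is not reproduced on this page; as a rough illustration only, a region-weighted (masked) mean squared error could be sketched as follows, where the mask labels and per-region weights are hypothetical stand-ins:

```python
import numpy as np

def masked_mse(pred, target, mask, region_weights):
    """Illustrative masked MSE sketch (NOT the paper's exact loss).

    Each pixel's squared error between predicted and reference CT numbers
    is scaled by the weight of the region its mask label belongs to, then
    averaged over the total weight.

    pred, target   : float arrays of CT numbers, same shape
    mask           : int array of region labels, same shape
                     (hypothetically: 0 = background, 1 = tissue, 2 = artifact)
    region_weights : dict mapping region label -> loss weight
    """
    weights = np.zeros(pred.shape, dtype=float)
    for label, w in region_weights.items():
        weights[mask == label] = w
    sq_err = (pred - target) ** 2
    # Weighted mean; guard against an all-zero weight map.
    return float((weights * sq_err).sum() / max(weights.sum(), 1e-8))

# Tiny usage example with made-up numbers: the label-1 region's errors
# count twice as much as the label-0 region's.
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.zeros((2, 2))
mask = np.array([[0, 0], [1, 1]])
loss = masked_mse(pred, target, mask, {0: 1.0, 1: 2.0})
```

With all region weights equal to 1, this reduces to the ordinary pixel-wise MSE; non-uniform weights let the training signal emphasize artifact-affected regions, which is the general idea the abstract attributes to the proposed criterion.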

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.