Related Researcher

Lee, Jimin (이지민)
Radiation & Medical Intelligence Lab.


Detailed Information


Full metadata record

DC Field Value Language
dc.citation.number 1 -
dc.citation.startPage 27907 -
dc.citation.title SCIENTIFIC REPORTS -
dc.citation.volume 14 -
dc.contributor.author Ki, Juhyeong -
dc.contributor.author Lee, Jung Mok -
dc.contributor.author Lee, Wonjin -
dc.contributor.author Kim, Jin Ho -
dc.contributor.author Jin, Hyeongmin -
dc.contributor.author Jung, Seongmoon -
dc.contributor.author Lee, Jimin -
dc.date.accessioned 2024-12-16T11:05:07Z -
dc.date.available 2024-12-16T11:05:07Z -
dc.date.created 2024-12-15 -
dc.date.issued 2024-11 -
dc.description.abstract During a radiotherapy (RT) course, geometric variations of target volumes and organs at risk, weight changes (loss/gain), and tumor regression and/or progression can significantly affect the treatment outcome. Adaptive RT has become an effective method alongside technical advances in imaging modalities, including cone-beam computed tomography (CBCT). The planning CT (pCT) can be modified via deformable image registration (DIR) applied to the pCT-CBCT pair. However, artifacts present in both pCT and CBCT make DIR vulnerable. Dose calculation directly on CBCT has also been suggested, but information missing due to artifacts hinders accurate dose calculation on CBCT. In this study, we aim to develop a deep learning-based metal artifact reduction (MAR) model to reduce metal artifacts in CBCT for head and neck cancer RT. To train the proposed MAR model, we synthesized kV-CBCT images containing metallic implants, with and without metal artifacts (simulated image data pairs), through a sinogram image handling process. We propose a deep learning architecture that addresses both artifact removal and reconstruction of anatomic structure using a dual-encoder design. We designed four single-encoder models and three dual-encoder models based on UNet (for artifact removal) and FusionNet (for tissue restoration). Each single-encoder model contains either UNet or FusionNet, while the dual-encoder models combine both architectures. In the dual-encoder models, we implemented different feature fusion methods: simple addition, spatial attention, and spatial/channel-wise attention. Among the models, the dual-encoder model with spatial/channel-wise attention scored highest in peak signal-to-noise ratio, mean squared error, structural similarity index, and Pearson correlation coefficient. CBCT images from 34 head and neck cancer patients were used to test the developed models. The dual-encoder model with spatial/channel-wise attention also showed the best results in terms of artifact index. By applying the proposed model to CBCT, one can obtain a more accurate synthetic pCT for head and neck patients, as well as better tissue recognition and structure delineation on the CBCT image itself. -
dc.identifier.bibliographicCitation SCIENTIFIC REPORTS, v.14, no.1, pp.27907 - -
dc.identifier.doi 10.1038/s41598-024-79305-2 -
dc.identifier.issn 2045-2322 -
dc.identifier.scopusid 2-s2.0-85209198631 -
dc.identifier.uri https://scholarworks.unist.ac.kr/handle/201301/84865 -
dc.identifier.wosid 001354506200016 -
dc.language English -
dc.publisher NATURE PORTFOLIO -
dc.title Dual-encoder architecture for metal artifact reduction for kV-cone-beam CT images in head and neck cancer radiotherapy -
dc.type Article -
dc.description.isOpenAccess FALSE -
dc.relation.journalWebOfScienceCategory Multidisciplinary Sciences -
dc.relation.journalResearchArea Science & Technology - Other Topics -
dc.type.docType Article -
dc.description.journalRegisteredClass scie -
dc.description.journalRegisteredClass scopus -
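The abstract describes dual-encoder models that fuse UNet and FusionNet features via, among other methods, spatial/channel-wise attention. As a rough illustration only (the paper's actual fusion layers are learned; every function below is a hypothetical hand-rolled stand-in, not the authors' implementation), a numpy sketch of attention-gated fusion of two feature maps:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Squeeze each channel to a scalar by global
    # average pooling, then gate the channel with a sigmoid weight.
    weights = sigmoid(feat.mean(axis=(1, 2)))        # shape (C,)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    # Gate each pixel by the sigmoid of its mean response across channels.
    weights = sigmoid(feat.mean(axis=0))             # shape (H, W)
    return feat * weights[None, :, :]

def fuse(feat_unet, feat_fusionnet):
    # Dual-encoder fusion: apply channel- then spatial-wise attention to
    # each branch's features, then combine by elementwise addition.
    a = spatial_attention(channel_attention(feat_unet))
    b = spatial_attention(channel_attention(feat_fusionnet))
    return a + b

rng = np.random.default_rng(0)
f1 = rng.standard_normal((8, 16, 16))   # mock UNet encoder features
f2 = rng.standard_normal((8, 16, 16))   # mock FusionNet encoder features
fused = fuse(f1, f2)
print(fused.shape)  # (8, 16, 16)
```

In a real network these gates would be small learned layers (e.g. squeeze-and-excitation blocks) rather than fixed sigmoids; the sketch only shows how two encoder branches can be reweighted per channel and per pixel before being added.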
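The abstract reports peak signal-to-noise ratio (PSNR), mean squared error (MSE), and Pearson correlation among the evaluation metrics. A minimal numpy sketch of how such image-comparison metrics are typically computed (SSIM omitted for brevity; the `data_range` normalization is an assumption, not taken from the paper):

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two images of the same shape.
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=1.0):
    # Peak signal-to-noise ratio in dB, assuming intensities span data_range.
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

def pearson(a, b):
    # Pearson correlation coefficient over flattened pixel intensities.
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

ref = np.zeros((4, 4))
est = np.full((4, 4), 0.1)   # uniform 0.1 error per pixel
print(round(mse(ref, est), 6))   # 0.01
print(round(psnr(ref, est), 3))  # 20.0
```

Higher PSNR and Pearson correlation, and lower MSE, indicate that the artifact-reduced image is closer to the artifact-free reference.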


Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.