Related Researcher


Lyu, Ilwoo
3D Shape Analysis Lab
Research Interests
  • 3D Shape Analysis
  • Image Processing
  • Computer Vision
  • Machine Learning
  • Medical Image Analysis

ITEM VIEW & DOWNLOAD

High-resolution 3D abdominal segmentation with random patch network fusion

Cited 0 times in Thomson CI
Title
High-resolution 3D abdominal segmentation with random patch network fusion
Author
Tang, Y.; Gao, R.; Lee, H.H.; Han, S.; Chen, Y.; Gao, D.; Nath, V.; Bermudez, C.; Savona, M.R.; Abramson, R.G.; Bao, S.; Lyu, Ilwoo; Huo, Y.; Landman, B.A.
Issue Date
2021-04
Publisher
Elsevier BV
Citation
MEDICAL IMAGE ANALYSIS, v.69
Abstract
Deep learning for three-dimensional (3D) abdominal organ segmentation on high-resolution computed tomography (CT) is a challenging topic, in part due to the limited memory provided by graphics processing units (GPUs) and the large number of parameters in 3D fully convolutional networks (FCNs). Two prevalent strategies, lower resolution with a wider field of view and higher resolution with a limited field of view, have been explored with varying degrees of success. In this paper, we propose a novel patch-based network with random spatial initialization and statistical fusion on overlapping regions of interest (ROIs). We evaluate the proposed approach using three datasets consisting of 260 subjects with varying numbers of manual labels. Compared with the canonical "coarse-to-fine" baseline methods, the proposed method increases the performance on multi-organ segmentation from 0.799 to 0.856 in terms of mean DSC score (p-value < 0.01 with paired t-test). The effect of different numbers of patches is evaluated by increasing the depth of coverage (the expected number of patches evaluated per voxel). In addition, our method outperforms other state-of-the-art methods in abdominal organ segmentation. In conclusion, the approach provides a memory-conservative framework that enables 3D segmentation on high-resolution CT. The approach is compatible with many base network structures without substantially increasing the complexity during inference.

Graphical abstract: Given a high-resolution CT scan, a low-resolution network (left panel) is trained with multi-channel segmentation; the low-resolution part applies down-sampling and normalization to preserve the complete spatial information. Interpolation and random patch sampling (middle panel) are employed to collect patches. The high-dimensional probability maps are obtained (right panel) by integrating all patches over their fields of view. © 2020
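The random-patch fusion idea in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `predict_fn`, the patch count derived from depth of coverage, and the simple probability averaging are all assumptions for the sake of the example.

```python
import numpy as np

def random_patch_fusion(volume, predict_fn, patch_size, depth_of_coverage,
                        num_classes, rng=None):
    """Sketch: randomly sample overlapping patches, run a segmentation
    model on each, and fuse per-voxel class probabilities by averaging.

    depth_of_coverage is the expected number of patches evaluated per
    voxel, as in the abstract; the patch count is derived from it here
    (an assumption, not necessarily the paper's exact scheme).
    """
    rng = np.random.default_rng() if rng is None else rng
    D, H, W = volume.shape
    pd, ph, pw = patch_size
    # Number of patches so that, on average, each voxel is covered
    # depth_of_coverage times.
    n_patches = int(np.ceil(depth_of_coverage * (D * H * W) / (pd * ph * pw)))

    prob_sum = np.zeros((num_classes, D, H, W))   # accumulated probabilities
    count = np.zeros((D, H, W))                   # how often each voxel was covered

    for _ in range(n_patches):
        # Random spatial initialization of the patch origin.
        z = rng.integers(0, D - pd + 1)
        y = rng.integers(0, H - ph + 1)
        x = rng.integers(0, W - pw + 1)
        patch = volume[z:z+pd, y:y+ph, x:x+pw]
        # predict_fn is a placeholder for any base network; it must
        # return per-class probabilities of shape (num_classes, pd, ph, pw).
        probs = predict_fn(patch)
        prob_sum[:, z:z+pd, y:y+ph, x:x+pw] += probs
        count[z:z+pd, y:y+ph, x:x+pw] += 1

    # Statistical fusion: average probabilities over overlapping patches,
    # then take the per-voxel argmax as the final label map.
    fused = prob_sum / np.maximum(count, 1)[None]
    return fused.argmax(axis=0)
```

Because inference only ever holds one patch in memory at a time, the base network's footprint is independent of the full volume size, which is the memory-conservative property the abstract emphasizes.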
URI
https://scholarworks.unist.ac.kr/handle/201301/50086
DOI
10.1016/j.media.2020.101894
ISSN
1361-8415
Appears in Collections:
CSE_Journal Papers
Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
