Realistic Microscopy Image Translation using Multi-Task Learning and Structure-Aware Constraints for Label-Free High-Content Screening

Alternative Title
레이블이 없는 고함량 스크리닝을 위한 다중 학습과 구조 인식 제약을 이용한 사실적 현미경 이미지 변환 방법
Author(s)
Lee, GyuHyun
Advisor
Chun, Se Young
Issued Date
2021-02
URI
https://scholarworks.unist.ac.kr/handle/201301/82418
http://unist.dcollection.net/common/orgView/200000371891
Abstract
Image processing is an important and unavoidable pipeline for solving existing problems across a wide range of industries, such as autonomous cars, manufacturing, search engines, and healthcare. Recently, image processing in healthcare has focused on developing more precise prediction methods, driven by advances in computing speed and deep learning. Although all of these challenges are non-trivial in the biomedical field, the most important issue among them is determining the proper targeted cancer therapy for each individual patient in order to achieve precision medicine. In particular, rapid acquisition times now make it possible to generate large numbers of microscopy images of biomedical samples that cannot be observed with the naked eye. Based on these microscopy images, the drug responses of patient-derived cell cultures can be analyzed by staining individual cells with various biomarkers, providing a more detailed understanding through high-content screening (HCS).

This dissertation presents several novel image translation contributions toward software-based HCS for precision medicine. First, a novel image translation method, DeepHCS, is introduced for transforming bright-field microscopy images into synthetic fluorescence images of cell nuclei biomarkers. The main motivation of this work is to automatically generate virtual biomarker images from conventional bright-field images, which can greatly reduce time-consuming and laborious tissue preparation efforts and improve the throughput of the screening process. DeepHCS uses bright-field images and their corresponding cell nuclei staining (DAPI) fluorescence images as image pairs to train a series of end-to-end deep convolutional neural networks.

Second, a novel microscopy image translation method, DeepHCS++, is proposed for transforming a bright-field microscopy image into three different fluorescence images that visualize apoptosis (dead cells), cell nuclei, and cytoplasm, respectively. The main contribution of this work is the automatic generation of all three fluorescence images from a conventional bright-field image using multi-task learning with adversarial losses; this again greatly reduces the time-consuming and laborious tissue preparation process and improves the throughput of the screening process. The combination of multi-task learning and adversarial losses allows DeepHCS++ to generate more accurate and realistic microscopy images.

Third, an image translation method with structure-aware features is proposed for acquiring even more realistic fluorescence microscopy images; this method integrates multi-task learning and cyclic consistency. To attain such realistic microscopy images, the proposed method employs an autoencoder that generates cell profile feature maps containing satisfactory cell textures, and revises the feature maps from the translation network through a mixture network that combines these two feature modalities.
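The multi-task training objective described above can be illustrated with a minimal sketch. This is not the dissertation's actual code: the function names, the L1 reconstruction term, the non-saturating adversarial term, and the weight `LAMBDA_ADV` are all illustrative assumptions about how one bright-field input might be scored against three fluorescence targets (apoptosis, nuclei, cytoplasm) with a per-task adversarial loss.

```python
import numpy as np

# Hypothetical weight for the adversarial term; a real setup would tune this.
LAMBDA_ADV = 0.01

def l1_loss(pred, target):
    """Pixel-wise reconstruction loss for one task head."""
    return np.mean(np.abs(pred - target))

def adversarial_loss(disc_score):
    """Non-saturating generator loss, -log D(G(x)), for one task head."""
    return -np.log(disc_score + 1e-8)

def multi_task_loss(preds, targets, disc_scores):
    """Sum of per-task reconstruction + weighted adversarial terms."""
    total = 0.0
    for pred, target, score in zip(preds, targets, disc_scores):
        total += l1_loss(pred, target) + LAMBDA_ADV * adversarial_loss(score)
    return total

# Toy example: three 2x2 "fluorescence channels" predicted from one input.
preds   = [np.full((2, 2), v) for v in (0.5, 0.2, 0.9)]
targets = [np.full((2, 2), v) for v in (0.4, 0.2, 1.0)]
scores  = [0.8, 0.6, 0.7]  # per-task discriminator outputs in (0, 1)

loss = multi_task_loss(preds, targets, scores)
```

Summing the per-task losses is what lets a single shared generator be optimized jointly for all three fluorescence outputs, rather than training three independent networks.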
Publisher
Ulsan National Institute of Science and Technology (UNIST)
Degree
Doctor
Major
Department of Computer Science and Engineering