Image processing is an essential pipeline for a wide range of industries, such as autonomous driving, manufacturing, search engines, and healthcare. Recently, image processing in healthcare has focused on developing more precise prediction methods, driven by advances in computing speed and deep learning. Although many challenges in the biomedical field are non-trivial, the most important one is determining the proper targeted cancer therapy for each individual patient in order to achieve precision medicine. In particular, rapid acquisition times now make it possible to generate large numbers of microscopy images of biomedical samples that cannot be observed with the naked eye. Based on these microscopy images, the responses of patient-derived cell cultures to various drugs can be analyzed by staining individual cells with various biomarkers, providing a more detailed understanding through high-content screening (HCS).
In this dissertation research, several novel image translation contributions toward software-based HCS for precision medicine are presented. First, a novel image translation method, DeepHCS, is introduced for transforming bright-field microscopy images into synthetic fluorescence images of cell nuclei biomarkers. The main motivation of this work is to automatically generate virtual biomarker images from conventional bright-field images, which can greatly reduce time-consuming and laborious tissue preparation efforts and improve the throughput of the screening process. DeepHCS uses bright-field images and their corresponding cell nuclei staining (DAPI) fluorescence images as image pairs to train a series of end-to-end deep convolutional neural networks. Second, a novel microscopy image translation method, DeepHCS++, is proposed for transforming a bright-field microscopy image into three different fluorescence images that visualize apoptosis (dead cells), cell nuclei, and cell cytoplasm, respectively. The main contribution of this work is the automatic generation of the three fluorescence images from a single conventional bright-field image using multi-task learning with adversarial losses, which yields more accurate and realistic microscopy images while again reducing tissue preparation effort and improving screening throughput. Third, an image translation method with structure-aware features is proposed for acquiring even more realistic fluorescence microscopy images. This method integrates multi-task learning and cyclic consistency.
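The multi-task objective described above can be illustrated with a minimal sketch: for each fluorescence channel (a task), a pixel-wise reconstruction loss is combined with an adversarial term from that task's discriminator. The function below is a hypothetical, simplified stand-in for the actual DeepHCS++ loss; the L1 reconstruction term, the non-saturating generator loss, and the weighting constant are assumptions for illustration, not the dissertation's exact formulation.

```python
import numpy as np

def multitask_adversarial_loss(preds, targets, disc_scores, adv_weight=0.01):
    """Sum, over tasks (fluorescence channels), of an L1 reconstruction
    loss plus a weighted non-saturating adversarial term.

    preds, targets: dicts mapping task name -> image array.
    disc_scores: dict mapping task name -> the discriminator's probability
        in (0, 1] that the generated image is real (hypothetical values).
    adv_weight: illustrative weight balancing the two loss terms.
    """
    total = 0.0
    for task in preds:
        # Pixel-wise L1 reconstruction error against the real stain image.
        recon = np.mean(np.abs(preds[task] - targets[task]))
        # Generator-side adversarial term: low when the discriminator
        # believes the generated image is real.
        adv = -np.log(disc_scores[task] + 1e-8)
        total += recon + adv_weight * adv
    return total
```

In practice each term would be backpropagated through a shared encoder with task-specific decoders; the sketch only shows how the per-task losses are aggregated.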
In order to attain such realistic microscopy images, the proposed method employs an autoencoder that generates cell-profile feature maps containing detailed cell textures, and refines the feature maps from the translation network by combining these two feature modalities through a mixture network.
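The feature-mixing step can be sketched as a gated blend of the two feature modalities: the translation network's feature maps and the autoencoder's cell-profile feature maps. The element-wise gate below is a hypothetical, simplified stand-in for the learned mixture network described in the text.

```python
import numpy as np

def mix_features(translation_feat, profile_feat, gate):
    """Blend translation-network features with autoencoder cell-profile
    features using an element-wise gate in [0, 1].

    In the actual method the gate would be produced by a learned mixture
    network; here it is a fixed scalar or array for illustration.
    """
    gate = np.clip(gate, 0.0, 1.0)
    # gate=1 keeps only the texture-rich cell-profile features;
    # gate=0 keeps only the translation-network features.
    return gate * profile_feat + (1.0 - gate) * translation_feat
```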
Publisher
Ulsan National Institute of Science and Technology (UNIST)