
Advanced deep learning for medical image segmentation: Towards global and data-efficient learning

Abstract

Deep learning is a widely used tool in medical imaging and is especially effective in segmentation applications. However, traditional deep learning methods for image segmentation face two problems: they lack the ability to learn the global information needed for accurate segmentation, and full annotations for medical image segmentation are scarce. This thesis provides solutions to these problems based on methods for global and data-efficient learning. Methods that learn and exploit more global information within images and labels can be beneficial when that information, such as global spatial relationships between objects or global structural information in labels, improves segmentation performance. Making better use of available unlabeled data can alleviate the data scarcity problem, which calls for more data-efficient learning strategies.

Chapter 1 introduces the background of deep learning in medical image segmentation and discusses why it is important to develop advanced global and data-efficient deep learning methods.

Chapter 2 presents an end-to-end global deep learning algorithm called Posterior-CRF that uses CNN-learned features in conditional random field (CRF) inference. As a traditional machine learning method, a CRF can provide efficient global learning for a CNN at limited additional computational cost. The method is validated on three medical segmentation tasks: aorta and pulmonary artery segmentation in non-contrast CT, white matter hyperintensities segmentation, and ischemic stroke lesion segmentation in multi-modal MRI. The results show that Posterior-CRF achieves high accuracy and outperforms previous CNN-CRF methods with fixed features on all three tasks, with significant improvements in aorta and white matter hyperintensities segmentation.

Chapter 3 presents a data-efficient learning method that uses an autoencoder to learn from unlabeled data.
Unlike a traditional autoencoder, which reconstructs the whole image, the proposed method reconstructs the foreground and background separately. The features it extracts may therefore be more relevant for the segmentation task than those from traditional autoencoders. The learned features are shared between segmentation and reconstruction, which use the same encoder. The method is validated on brain tumor segmentation and white matter hyperintensities segmentation in multi-modal MRI. The results show that the proposed method outperforms previous methods based on traditional autoencoders, and that the learned features are more discriminative for segmentation than those encoded by traditional autoencoders.

Chapter 4 presents a self-supervision method based on region-of-interest-guided supervoxel inpainting. Instead of inpainting random rectangular tiles, this method inpaints complete supervoxels in the segmentation foreground only, thus focusing self-supervision on learning foreground features and predicting coherent regions. The method is validated on two applications, brain tumor segmentation and white matter hyperintensities segmentation in multi-modal MRI. The results show that, compared to self-supervised learning with traditional inpainting, these two simple changes give a significant boost in segmentation performance.

Chapter 5 presents a new self-supervised learning task called Source Identification, inspired by the classic blind source separation problem. The task is to identify and separate a source image from a set of synthetic images that mix the source image with images from other sources. Both local and higher-level global features are required to separate the source image successfully. The method is validated on brain tumor segmentation and white matter hyperintensities segmentation in multi-modal MRI.
In both applications, the proposed task achieves better downstream accuracy than other self-supervised learning approaches, including inpainting, pixel shuffling, intensity shift, and super-resolution.

Chapter 6 presents a label refinement method that corrects errors in initial segmentation results. Synthetic errors are generated in ground truth segmentations, and an appearance simulation network ensures that the resulting labels resemble real labels. A label refinement network is then trained on both synthetic and real labels to correct the errors. The method is validated on two tree-shaped structure segmentation tasks: lung airway segmentation in CT scans and brain vessel segmentation in CTA images. The results show that the proposed method significantly improves the continuity and completeness of the initial segmentations in both applications and outperforms common segmentation and label refinement approaches.

Chapter 7 summarizes the contributions of this thesis and provides a general discussion of the limitations of the proposed methods and possible directions for future research.
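The core idea of Chapter 2, running CRF inference on features learned by the CNN rather than on fixed image intensities, can be illustrated with a minimal mean-field sketch. This is not the thesis implementation: the shapes, the Gaussian affinity, and the single weighted update are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sketch of one mean-field update over N pixels and C classes,
# where the pairwise affinity is computed from (hypothetical) learned
# per-pixel features -- the idea behind using CNN features inside CRF
# inference, not the actual Posterior-CRF formulation.
def mean_field_step(unary, features, w=1.0):
    """unary: (N, C) class logits; features: (N, D) learned feature vectors."""
    q = softmax(unary)                                   # current beliefs
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    kernel = np.exp(-d2)                                 # feature-space affinity
    np.fill_diagonal(kernel, 0.0)                        # no self-message
    message = kernel @ q                                 # aggregate neighbour beliefs
    return softmax(unary + w * message)                  # refined beliefs

# Pixel 1 is ambiguous but has features close to pixel 0, so the update
# pulls it toward pixel 0's class.
unary = np.array([[2.0, 0.0], [0.1, 0.0], [0.0, 2.0]])
features = np.array([[0.0], [0.1], [5.0]])
q = mean_field_step(unary, features)
```

In full CRF inference this update would be iterated to convergence; the sketch shows only how learned features, rather than raw intensities, can drive the pairwise term.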
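The separate foreground/background reconstruction of Chapter 3 can be sketched by showing how the two reconstruction targets are formed; the function name and the use of a binary mask are illustrative assumptions, and the shared-encoder network itself is omitted.

```python
import numpy as np

# Illustrative sketch (not the thesis code): a shared encoder would feed two
# decoders, one reconstructing the foreground and one the background. Here we
# show only how the two targets can be derived from an image and a mask.
def reconstruction_targets(image, foreground_mask):
    """Split an image into foreground and background reconstruction targets.

    image           : float array (e.g. a 2-D slice or 3-D volume)
    foreground_mask : binary array of the same shape (1 = foreground)
    """
    fg_target = image * foreground_mask           # target for the foreground decoder
    bg_target = image * (1 - foreground_mask)     # target for the background decoder
    return fg_target, bg_target

image = np.array([[0.2, 0.8], [0.5, 0.1]])
mask = np.array([[1, 0], [1, 0]])
fg, bg = reconstruction_targets(image, mask)
# the two targets partition the image: fg + bg recovers it exactly
```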
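The supervoxel inpainting of Chapter 4 masks complete supervoxels inside the region of interest instead of random rectangles. A minimal sketch of how such a corrupted input could be built is below; the function name, the label-map representation, and zero-filling as the masking operation are all assumptions, not the thesis implementation (in practice supervoxels would come from an oversegmentation algorithm such as SLIC).

```python
import numpy as np

# Illustrative sketch: zero out whole supervoxels that intersect the ROI,
# so the self-supervised network must inpaint coherent foreground regions.
def supervoxel_inpainting_input(image, supervoxels, roi, n_masked, rng):
    """Mask `n_masked` complete supervoxels that overlap the ROI."""
    candidates = np.unique(supervoxels[roi > 0])        # supervoxels touching ROI
    chosen = rng.choice(candidates, size=min(n_masked, candidates.size),
                        replace=False)
    corrupted = image.copy()
    corrupted[np.isin(supervoxels, chosen)] = 0.0       # mask complete regions only
    return corrupted                                    # network inpaints the zeros

rng = np.random.default_rng(0)
image = rng.random((4, 4))
supervoxels = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [2, 2, 3, 3],
                        [2, 2, 3, 3]])
roi = np.zeros((4, 4), dtype=int)
roi[:2, :] = 1                  # ROI covers supervoxels 0 and 1 only
corrupted = supervoxel_inpainting_input(image, supervoxels, roi, 1, rng)
```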
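The synthetic inputs of the Source Identification task in Chapter 5 mix a source image with images from other sources. One simple way such mixtures could be generated is a linear blend, sketched below; the mixing scheme and parameter `alpha` are assumptions for illustration, not the formulation used in the thesis.

```python
import numpy as np

# Illustrative sketch (assumed mixing scheme): blend the source with each
# other-source image; the self-supervised task is then to recover the source
# from every mixture, which requires both local and global features.
def make_mixtures(source, others, alpha=0.5):
    """Mix `source` with each image in `others`; the source is the target."""
    return [alpha * source + (1 - alpha) * other for other in others]

rng = np.random.default_rng(1)
source = rng.random((8, 8))
others = [rng.random((8, 8)) for _ in range(3)]
mixtures = make_mixtures(source, others, alpha=0.6)
```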
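The synthetic-error idea of Chapter 6 can be illustrated on a toy tree-shaped label: deleting a short run of voxels produces the kind of discontinuity a label refinement network is trained to repair. The error model below is an assumption for illustration; the thesis additionally passes such labels through an appearance simulation network, which is omitted here.

```python
import numpy as np

# Illustrative sketch (assumed error model): break a thin structure in a
# ground-truth label by deleting consecutive columns, creating a
# discontinuity for the refinement network to fix.
def add_synthetic_break(label, start, length):
    """Zero out `length` consecutive columns of the label from `start`."""
    broken = label.copy()
    broken[:, start:start + length] = 0
    return broken

# a 1-voxel-thick horizontal "vessel" crossing the image
label = np.zeros((5, 10), dtype=int)
label[2, :] = 1
broken = add_synthetic_break(label, start=4, length=2)
```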


This paper was published in EUR Research Repository.
