
Dynamic multi-scale CNN forest learning for automatic cervical cancer segmentation

Abstract

Deep-learning-based labeling methods have gained unprecedented popularity in different computer vision and medical image segmentation tasks. However, to the best of our knowledge, these have not been used for cervical tumor segmentation. More importantly, while the majority of innovative deep-learning works using convolutional neural networks (CNNs) focus on developing more sophisticated and robust architectures (e.g., ResNet, U-Net, GANs), there is very limited work on how to aggregate different CNN architectures to improve their relational learning at multiple levels of CNN-to-CNN interactions. To address this gap, we introduce a Dynamic Multi-Scale CNN Forest (CK+1DMF), which aims to address three major issues in medical image labeling and ensemble CNN learning: (1) the heterogeneous distribution of MRI training patches, (2) a bidirectional flow of information between two consecutive CNNs, as opposed to cascaded CNNs, where information passes in only one direction from the current CNN to the next in the cascade, and (3) multiscale anatomical variability across patients. To solve the first issue, we group training samples into K clusters, then design a forest with (K + 1) trees: a principal tree of CNNs trained using all data samples, and subordinate trees, each trained using one cluster of samples. As for the second and third issues, we design each dynamic multiscale tree (DMT) in the forest such that each node in the tree nests a CNN architecture. Two successive CNN nodes in the tree pass bidirectional contextual maps to progressively improve the learning of their relational non-linear mapping. In addition, as we traverse a path from the root node to a leaf node in the tree, the architecture of each CNN node becomes shallower to take in smaller training patches. Our CK+1DMF significantly (p < 0.05) outperformed several conventional and ensemble CNN architectures, including a conventional CNN (improvement by 10.3%) and a CNN-based DMT (improvement by 5%).
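To make the forest-of-trees idea more concrete, here is a minimal sketch of the structure described in the abstract: a chain of CNN nodes per tree whose depth shrinks toward the leaves, contextual maps exchanged between consecutive nodes in both directions, and a forest of K + 1 such trees. The paper does not publish code, so the layer widths, depths, fusion rule, and the exact form of the contextual maps below are assumptions for illustration only, written in PyTorch.

```python
# Illustrative sketch only: depths, widths, and the averaging fusion rule are
# assumptions, not the authors' published implementation.
import torch
import torch.nn as nn


class CNNNode(nn.Module):
    """One CNN node in a tree; depth decreases toward the leaves,
    mirroring the shallower-CNN / smaller-patch design in the abstract."""

    def __init__(self, in_ch, depth, width=32, n_classes=2):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(depth):
            layers += [nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(inplace=True)]
            ch = width
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv2d(ch, n_classes, 1)  # per-pixel label scores

    def forward(self, x):
        return self.head(self.body(x))  # treated here as the "contextual map"


class DMTree(nn.Module):
    """A dynamic multiscale tree modeled as a root-to-leaf chain of CNN nodes.
    Each node sees the input patch concatenated with the previous node's
    contextual map; a second leaf-to-root sweep approximates the
    bidirectional information flow described in the abstract."""

    def __init__(self, depths=(4, 3, 2), in_ch=1, n_classes=2):
        super().__init__()
        self.nodes = nn.ModuleList(
            CNNNode(in_ch + n_classes, d, n_classes=n_classes) for d in depths)
        self.n_classes = n_classes

    def forward(self, x):
        ctx = torch.zeros(x.size(0), self.n_classes, x.size(2), x.size(3),
                          device=x.device)
        for node in self.nodes:              # root -> leaf sweep
            ctx = node(torch.cat([x, ctx], dim=1))
        for node in reversed(self.nodes):    # leaf -> root sweep
            ctx = node(torch.cat([x, ctx], dim=1))
        return ctx                           # final label map


class CK1DMF(nn.Module):
    """Forest of K + 1 trees: one principal tree (trained on all samples)
    plus K subordinate trees (one per training cluster)."""

    def __init__(self, K=3, **tree_kwargs):
        super().__init__()
        self.trees = nn.ModuleList(DMTree(**tree_kwargs) for _ in range(K + 1))

    def forward(self, x):
        # Simple average over tree outputs; the paper's fusion rule may differ.
        return torch.stack([t(x) for t in self.trees]).mean(dim=0)


if __name__ == "__main__":
    model = CK1DMF(K=3)
    patch = torch.randn(2, 1, 64, 64)   # dummy MRI patches
    print(model(patch).shape)            # torch.Size([2, 2, 64, 64])
```

In this sketch the multiscale aspect is represented only by the decreasing node depth; in the actual method the patch size shrinks along the root-to-leaf path as well, and training is done per cluster for the subordinate trees.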

This paper was published in University of Dundee Online Publications.
