
Vertebral Compression Fracture Detection With Novel 3D Localisation

Abstract

Vertebral compression fractures (VCF) often go undetected in radiology images, potentially leading to secondary fractures and permanent disability or even death. The objective of this thesis is to develop a fully automated method for detecting VCF in incidental CT images acquired for other purposes, thereby facilitating better follow-up and treatment. The proposed approach is based on 3D localisation in CT images, followed by VCF detection in the localised regions. The 3D localisation algorithm combines deep reinforcement learning (DRL) with imitation learning (IL) to extract thoracic/lumbar spine regions from chest/abdomen CT scans. The algorithm generates six bounding boxes as regions of interest (ROIs) using three different convolutional neural network (CNN) models, with an average Jaccard Index (JI)/Dice Coefficient (DC) of 74.21%/84.71%. The extracted ROIs were then divided into slices, and the slices into patches, to train four CNN models for VCF detection at the patch level. The predictions from the patches were aggregated at bounding-box level, and majority voting was performed to decide on the presence or absence of VCF for a patient. The best-performing model was a six-layer CNN, which together with majority voting achieved a threefold cross-validation accuracy/F1 score of 85.95%/85.94% on 308 chest scans. The same model also achieved a fivefold cross-validation accuracy/F1 score of 86.67%/87.04% on 168 abdomen scans. Given the success of the 3D localisation algorithm, it was also trained on other abdominal organs, namely the spleen and the left and right kidneys, with promising results. The 3D localisation algorithm was further enhanced to work with fused bounding boxes and in a semi-supervised mode, to address the time radiologists spend on annotation. Experiments using three different proportions of labelled and unlabelled data achieved reasonable performance, although not as good as the fully supervised equivalents. Finally, VCF detection was performed in a weakly supervised multiple instance learning (MIL) setting, together with majority voting over the six bounding boxes, to further reduce radiologists' annotation time. The best-performing model was again the six-layer CNN, which achieved a threefold cross-validation accuracy/F1 score of 81.05%/80.74% on 308 thoracic scans, and a fivefold cross-validation accuracy/F1 score of 85.45%/86.61% on 168 abdomen scans. Overall, the results are comparable to the state of the art, which used an order of magnitude more scans.
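The two quantitative ingredients of the pipeline described above are the 3D overlap metrics used to score the localisation (Jaccard Index and Dice Coefficient) and the patch-to-patient aggregation with majority voting over the six bounding boxes. The Python sketch below illustrates both under stated assumptions: boxes are axis-aligned and given as corner coordinates, and a box is treated as fracture-positive if any of its patches crosses a 0.5 probability threshold (a hypothetical rule; the abstract does not specify how patch predictions are rolled up within a box). It is not the thesis code, only a minimal illustration.

```python
import numpy as np


def jaccard_and_dice_3d(box_a, box_b):
    """Overlap metrics for axis-aligned 3D boxes given as
    (x1, y1, z1, x2, y2, z2) corner coordinates."""
    # Intersection extents along each axis, clamped at zero when disjoint.
    dx = max(0.0, min(box_a[3], box_b[3]) - max(box_a[0], box_b[0]))
    dy = max(0.0, min(box_a[4], box_b[4]) - max(box_a[1], box_b[1]))
    dz = max(0.0, min(box_a[5], box_b[5]) - max(box_a[2], box_b[2]))
    inter = dx * dy * dz

    def volume(b):
        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])

    union = volume(box_a) + volume(box_b) - inter
    jaccard = inter / union if union > 0 else 0.0
    dice = 2 * inter / (volume(box_a) + volume(box_b)) if union > 0 else 0.0
    return jaccard, dice


def patient_prediction(patch_scores_per_box, patch_threshold=0.5):
    """Aggregate patch-level fracture probabilities to a per-patient label.

    patch_scores_per_box: one sequence of CNN fracture probabilities per
    bounding box (six boxes per patient in the thesis setting). A box is
    called positive if any patch crosses the threshold (assumed rule), and
    the patient label is the majority vote over the boxes.
    """
    box_votes = [
        int(np.any(np.asarray(scores) >= patch_threshold))
        for scores in patch_scores_per_box
    ]
    return int(sum(box_votes) > len(box_votes) / 2)


if __name__ == "__main__":
    # Toy localisation check: ground-truth vs predicted spine ROI.
    gt_box = (10, 20, 5, 60, 80, 40)
    pred_box = (15, 25, 8, 58, 82, 38)
    print("JI/DC: %.2f / %.2f" % jaccard_and_dice_3d(gt_box, pred_box))

    # Six bounding boxes, each with a handful of patch probabilities.
    scores = [[0.1, 0.7], [0.2, 0.3], [0.9, 0.8], [0.6, 0.4], [0.2, 0.1], [0.95, 0.5]]
    print("Patient positive:", patient_prediction(scores))
```

The same majority-vote helper applies unchanged in the weakly supervised MIL setting, where only bag-level (box-level) labels are available and the per-box decision comes from the MIL model rather than from thresholded patch predictions.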


This paper was published in UNSWorks.
