Edge-enhanced dual discriminator generative adversarial network for fast MRI with parallel imaging using multi-view information

Abstract

In clinical medicine, magnetic resonance imaging (MRI) is one of the most important tools for diagnosis, triage, prognosis, and treatment planning. However, MRI suffers from an inherently slow data acquisition process because data are collected sequentially in k-space. In recent years, most MRI reconstruction methods proposed in the literature have focused on holistic image reconstruction rather than on enhancing edge information. This work departs from that general trend by concentrating on the enhancement of edge information. Specifically, we introduce a novel parallel-imaging-coupled dual-discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction that incorporates multi-view information. The dual-discriminator design aims to improve edge information in MRI reconstruction: one discriminator is used for holistic image reconstruction, whereas the other is responsible for enhancing edge information. An improved U-Net with local and global residual learning is proposed for the generator. Frequency channel attention blocks (FCA blocks) are embedded in the generator to incorporate attention mechanisms. A content loss is introduced to train the generator for better reconstruction quality. We performed comprehensive experiments on the Calgary-Campinas public brain MR dataset and compared our method with state-of-the-art MRI reconstruction methods. Ablation studies of residual learning were conducted on the MICCAI13 dataset to validate the proposed modules. Results show that our PIDD-GAN provides high-quality reconstructed MR images with well-preserved edge information. Single-image reconstruction takes less than 5 ms, meeting the demand for faster processing.

This work was supported in part by the Zhejiang Shuren University Basic Scientific Research Special Funds, in part by the European Research Council Innovative Medicines Initiative (DRAGON, H2020-JTI-IMI2 101005122), in part by the AI for Health Imaging Award (CHAIMELEON, H2020-SC1-FA-DTS-2019-1 952172), in part by the UK Research and Innovation Future Leaders Fellowship (MR/V023799/1), in part by the British Heart Foundation (Project Numbers TG/18/5/34111 and PG/16/78/32402), in part by the Foundation of Peking University School and Hospital of Stomatology (KUSSNT-19B11), in part by the Peking University Health Science Center Youth Science and Technology Innovation Cultivation Fund (BMU2021PYB017), in part by the National Natural Science Foundation of China (61976120), in part by the Natural Science Foundation of Jiangsu Province (BK20191445), in part by the Qing Lan Project of Jiangsu Province, in part by the National Natural Science Foundation of China (61902338), in part by the Project of Shenzhen International Cooperation Foundation (GJHZ20180926165402083), in part by the Basque Government through the ELKARTEK funding program (KK-2020/00049), and in part by the consolidated research group MATHMODE (IT1294-19).
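To make the dual-discriminator idea in the abstract concrete, the short PyTorch sketch below shows how a generator loss could combine a content (L1) term with adversarial feedback from an image discriminator and an edge discriminator. It is a minimal, hypothetical illustration: the Sobel edge operator, the PatchDiscriminator architecture, and the weights lambda_img, lambda_edge, and lambda_content are assumptions for demonstration and do not reproduce the authors' exact PIDD-GAN implementation.

# Minimal sketch of a dual-discriminator generator loss (illustrative, not the
# authors' implementation): one discriminator scores the whole reconstruction,
# the other scores an edge map extracted from it.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edges(x):
    """Gradient-magnitude edge map via Sobel filters (assumed edge extractor)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=x.device)
    ky = kx.t().reshape(1, 1, 3, 3)
    kx = kx.reshape(1, 1, 3, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

class PatchDiscriminator(nn.Module):
    """Small PatchGAN-style discriminator used as a stand-in for both discriminators."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch-wise real/fake logits
        )
    def forward(self, x):
        return self.net(x)

def generator_loss(recon, target, d_img, d_edge,
                   lambda_img=0.01, lambda_edge=0.01, lambda_content=1.0):
    """Content loss plus adversarial losses from the image and edge discriminators."""
    adv = nn.BCEWithLogitsLoss()
    # Content term: pixel-wise fidelity of the reconstruction.
    content = F.l1_loss(recon, target)
    # Image discriminator: the reconstruction should look like a fully sampled image.
    logits_img = d_img(recon)
    adv_img = adv(logits_img, torch.ones_like(logits_img))
    # Edge discriminator: the edge map of the reconstruction should look realistic.
    logits_edge = d_edge(sobel_edges(recon))
    adv_edge = adv(logits_edge, torch.ones_like(logits_edge))
    return lambda_content * content + lambda_img * adv_img + lambda_edge * adv_edge

if __name__ == "__main__":
    d_img, d_edge = PatchDiscriminator(), PatchDiscriminator()
    recon = torch.rand(2, 1, 64, 64)   # stand-in for generator output
    target = torch.rand(2, 1, 64, 64)  # stand-in for fully sampled ground truth
    print(generator_loss(recon, target, d_img, d_edge).item())

In this sketch the two discriminators receive different views of the same reconstruction (the image itself versus its edge map), which is the mechanism the abstract credits for preserving edge information; the discriminators' own training steps are omitted for brevity.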

This paper was published in TECNALIA Publications.
