Synthetic Post-Contrast Imaging through AI. Applications in Neuroimaging

Contrast media are widely used in biomedical imaging, due to their relevance in the diagnosis of numerous disorders. However, the risk of adverse reactions, the concern of potential damage to sensitive organs, and the recently described brain deposition of gadolinium salts limit the use of contrast media in clinical practice. In recent years, the application of artificial intelligence (AI) techniques to biomedical imaging has led to the development of ‘virtual’ and ‘augmented’ contrasts. The idea behind these applications is to generate synthetic post-contrast images through AI computational modeling, starting from the information available in other images acquired during the same scan. In these AI models, non-contrast images (virtual contrast) or low-dose post-contrast images (augmented contrast) are used as input data to generate synthetic post-contrast images, which are often indistinguishable from the native ones.

  • artificial intelligence
  • synthetic imaging
  • virtual contrast
  • augmented contrast
  • MRI
  • CT
  • gadolinium-based contrast agents
  • iodinated contrast agents
  • neuroimaging

1. Introduction

Contrast media are essential tools in biomedical imaging, allowing for a more precise diagnosis of many conditions. The main contrast agents employed in radiology are iodinated contrast media for CT imaging and gadolinium-based contrast agents (GBCA) for MRI. As with any molecule used for medical purposes, contrast media are not exempt from contraindications and side effects, which need to be weighed against the expected diagnostic benefits when scans are ordered in clinical practice.
One of the main side effects of iodinated contrast media, besides allergic reactions, is nephrotoxicity [1]. Iodinated contrast media may lead to acute kidney injury (AKI) in certain patients, and they represent a leading cause of hospital-acquired AKI [1]. Although the exact mechanism of renal damage is still debated, there is evidence of direct cytotoxicity of iodinated contrasts on the tubular epithelial and endothelial linings of the kidney. Additionally, these agents appear to alter renal hemodynamics through increased synthesis of oxygen radicals and a hyperviscosity-related reduction in blood flow in both glomerular and tubular capillaries [1]. GBCA toxicity was first reported as a multi-systemic condition known as nephrogenic systemic fibrosis (NSF), described in a few cases and mainly related to renal failure [2]. More recently, brain GBCA deposition was demonstrated through imaging [3,4,5], animal models [6], and autopsy studies [7,8]. Brain accumulation of GBCAs depends on their chemical structure, with higher susceptibility to de-chelation for linear compounds. This evidence led to the withdrawal of linear GBCAs from the market [9] and to changes in clinical guidelines for contrast administration [10]. Whether GBCA brain deposition has a definite clinical correlate is still debated; however, different types of toxicity may occur in the body according to the patient’s age and clinical state [3]. Oxidative stress may play a role in gadolinium ion toxicity, as reflected by changes in intracellular glutathione levels [11,12]. In this context, finding new ways to preserve the diagnostic power of biomedical imaging with low-dose contrast administration, as well as possible contrast-free diagnostic alternatives for common disorders [13], is highly relevant to current clinical practice.
In the last few years, artificial intelligence (AI) has revolutionized the field of medicine, with remarkable applications in biomedical imaging [14]. Owing to the high volume of imaging data stored in PACS archives, AI applications have proven capable of fueling predictive models for differential diagnosis and patient outcome in many different conditions [14,15,16,17,18,19]. Most recently, the field of ‘virtual’ and ‘augmented’ contrast has emerged at the intersection of AI and biomedical imaging. The idea behind these applications is to create virtual enhancement starting from the information available in non-contrast images acquired during the same scan (virtual contrast), or to augment the enhancement obtained from a low-dose administration (augmented contrast), through AI computational modeling.

2. AI Architectures in Synthetic Reconstruction

The term AI was formally proposed for the first time in 1956, at a conference held at Dartmouth College, to describe the science of simulating human intellectual behaviors through computers, including learning, judgment, and decision making [20]. The application of AI in medicine relies on machine learning and deep learning algorithms. These powerful mathematical algorithms can discover complex structure in high-dimensional data by improving their performance through experience. There are three main types of learning: (i) unsupervised learning, the ability to find patterns in data without a priori information; (ii) supervised learning, used for data classification and prediction based on a ground truth; and (iii) reinforcement learning, a technique that enables learning through a trial-and-error approach [21]. Among the many applications of AI in medicine, recent years have witnessed a rising interest in medical image analysis (MIA). In this context, deep learning networks can be applied to solve classification problems, as well as for segmentation, object detection, and synthetic reconstruction of medical images [22].
Virtual and augmented contrast can be considered an application of AI in the field of synthetic imaging. Convolutional neural networks (CNNs) and generative adversarial networks (GANs) are the deep learning tools most widely used for image reconstruction [23,24], due to their ability to capture image features that carry a high level of semantic information. These two families of architectures have achieved considerable success in the MIA field, and they are further explored in the following sections, with particular attention to deep learning architectures that synthesize new images (synthetic post-contrast images) from pre-existing ones (either non-contrast images or low-dose post-contrast images) [25]. Previous studies relevant to this topic are summarized in Table 1.

2.1. Convolutional Neural Networks in Synthetic Reconstruction

A CNN can be considered a simplified version of the neocognitron model introduced by Fukushima [26] in 1980 to simulate the human visual system. CNNs are characterized by an input layer, an output layer, and multiple hidden layers (i.e., convolutional layers, pooling layers, fully connected layers, and various normalization layers). Due to this architecture, the typical use of CNNs is for image classification tasks, where the network’s output is a single class label assigned to an input image [27]. In medical research as well, CNNs are mainly applied to classification problems [28,29]. However, the current literature includes several examples of successful implementation of CNN architectures to address other machine learning and computer vision challenges, such as image reconstruction and generation.
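As a concrete illustration of the layer types listed above, the following minimal PyTorch sketch stacks convolutional, normalization, pooling, and fully connected layers into a toy image classifier; the layer sizes and class count are arbitrary assumptions and are not taken from any of the cited studies.

```python
# Minimal sketch of the CNN building blocks named above (convolutional,
# normalization, pooling, and fully connected layers); sizes are illustrative.
import torch
import torch.nn as nn

class TinyCNNClassifier(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.BatchNorm2d(16),                                    # normalization layer
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                       # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)               # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: one single-channel 128x128 image slice -> class scores.
logits = TinyCNNClassifier()(torch.randn(1, 1, 128, 128))
```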
Gong et al. implemented a workflow using zero-dose pre-contrast MRIs and 10% low-dose post-contrast MRIs as inputs to generate full-dose images through AI (augmented contrast) [30]. In this study, the model consisted of an encoder-decoder CNN with three encoder steps and three decoder steps, based on the U-Net architecture. The basic U-Net architecture consists of a contracting part that captures features and a symmetric expanding part that enables precise localization and reconstruction. A slightly modified version of the same model was tested by Pasumarthi et al., with similar results [31]. Xie et al. also investigated a U-Net-based approach to obtain contrast-enhanced MRI images from non-contrast MRI [32]. This type of network architecture has demonstrated superior performance in medical imaging tasks such as segmentation and MRI reconstruction [33,34,35].
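The following is a minimal PyTorch sketch of such an encoder-decoder (U-Net-style) network, with the pre-contrast and low-dose post-contrast images stacked as input channels and a synthetic full-dose image as output; the depth, channel counts, and input layout are simplifying assumptions and do not reproduce the published architectures [30,31,32].

```python
# Minimal U-Net-style encoder-decoder sketch for 'augmented contrast':
# the pre-contrast and low-dose post-contrast slices enter as two channels,
# and the network predicts the full-dose slice. All hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(2, 32), conv_block(32, 64)  # contracting part
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)           # expanding part
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)                               # synthetic full-dose output

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))           # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# pre-contrast + 10% low-dose slices stacked on the channel axis
pre, low = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
synthetic_full_dose = SmallUNet()(torch.cat([pre, low], dim=1))
```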

2.2. Generative Adversarial Networks in Synthetic Reconstruction

Another approach to producing synthetic post-contrast images relies on generative adversarial networks (GANs), which can generate images from scratch and are finding increasing applications in the current literature. Starting from a set of paired data, these networks are able to generate new data that are indistinguishable from the ground truth. Current state-of-the-art applications of GANs include image denoising and quality improvement algorithms, such as the Wasserstein GAN (WGAN) [36] and the deep convolutional GAN (DCGAN) [37,38,39,40,41,42,43,44]. Regarding image segmentation and classification, another extension of the GAN architecture, the conditional GAN (c-GAN), has demonstrated promising results [45]. Additionally, Pix2Pix GANs [46] provide a general approach for image-to-image translation, which is particularly useful for generating new synthetic MRI images starting from images of a different domain.
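As an illustration of the paired image-to-image setting just described, the sketch below shows a Pix2Pix-style training step in PyTorch, where the generator maps a non-contrast image to a synthetic post-contrast image and the loss combines an adversarial term with an L1 term; the generator G, the discriminator D, and the weight lambda_l1 are placeholders and are not taken from the cited works.

```python
# Sketch of a Pix2Pix-style paired training step: the generator maps a
# non-contrast image to a synthetic post-contrast image, and the loss
# combines an adversarial term with an L1 term against the real
# post-contrast image. G, D, and lambda_l1 are placeholders.
import torch
import torch.nn.functional as F

def generator_step(G, D, non_contrast, real_contrast, lambda_l1=100.0):
    fake_contrast = G(non_contrast)
    # the conditional discriminator sees the input paired with the candidate output
    pred_fake = D(torch.cat([non_contrast, fake_contrast], dim=1))
    adv_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    l1_loss = F.l1_loss(fake_contrast, real_contrast)
    return adv_loss + lambda_l1 * l1_loss

def discriminator_step(G, D, non_contrast, real_contrast):
    with torch.no_grad():
        fake_contrast = G(non_contrast)
    pred_real = D(torch.cat([non_contrast, real_contrast], dim=1))
    pred_fake = D(torch.cat([non_contrast, fake_contrast], dim=1))
    real_loss = F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
    fake_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake))
    return 0.5 * (real_loss + fake_loss)
```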
The GAN architectures most commonly used to obtain synthetic diagnostic images ex novo from MRI and CT scans are CycleGAN networks [37,43,47]. These are a class of deep learning algorithms that rely on the simultaneous training of paired networks: one focused on data generation (the generator) and one focused on data discrimination (the discriminator). The two neural networks compete against each other, learning the statistical distribution of the training data, which allows the generation of new examples from the same distribution [48]. Specifically, in a CycleGAN two generator models and two discriminator models are trained simultaneously. One generator takes images from the first domain and generates images for the second domain; conversely, the other generator takes images from the second domain and outputs images for the first domain. The discriminator models assess the plausibility of the generated images. CycleGAN also enforces cycle consistency: if the image produced by the first generator is used as input to the second generator, the output of the second generator should match the original image. A schematic representation of this architecture is shown in Figure 1. In all the reported studies, these networks were able to generate images of genuine diagnostic quality and informative value, demonstrating the considerable advantages of this type of application.
Figure 1. The CycleGAN model consists of a forward cycle and a backward cycle. (a) In the forward cycle, a synthesis network Synth_c is trained to translate an input non-contrast image into a contrast one. Network Synth_nc is trained to translate the resulting contrast image back into a non-contrast image that approximates the original non-contrast one. Disc_c discriminates between real and synthesized contrast images. (b) In the backward cycle, Synth_nc synthesizes non-contrast images from input contrast images, Synth_c reconstructs the input contrast image from the synthesized non-contrast one, and Disc_nc discriminates between real and synthesized non-contrast images. I_nc = original non-contrast image; I_c = original contrast image.
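The cycle-consistency objective described above and depicted in Figure 1 can be written compactly as in the PyTorch sketch below, reusing the figure's naming (Synth_c, Synth_nc, Disc_c, Disc_nc); the network definitions, the least-squares adversarial term, and the loss weight are illustrative assumptions rather than the exact formulation of the cited studies.

```python
# Sketch of the CycleGAN cycle-consistency idea from Figure 1.
# Synth_c, Synth_nc, Disc_c, Disc_nc are placeholder network modules.
import torch
import torch.nn.functional as F

def cyclegan_losses(Synth_c, Synth_nc, Disc_c, Disc_nc, I_nc, I_c, lambda_cyc=10.0):
    # forward cycle: non-contrast -> synthetic contrast -> reconstructed non-contrast
    fake_c = Synth_c(I_nc)
    rec_nc = Synth_nc(fake_c)
    # backward cycle: contrast -> synthetic non-contrast -> reconstructed contrast
    fake_nc = Synth_nc(I_c)
    rec_c = Synth_c(fake_nc)

    # adversarial terms: each discriminator should score synthetic images as real
    pred_c, pred_nc = Disc_c(fake_c), Disc_nc(fake_nc)
    adv = (F.mse_loss(pred_c, torch.ones_like(pred_c)) +
           F.mse_loss(pred_nc, torch.ones_like(pred_nc)))

    # cycle consistency: reconstructions should match the original inputs
    cyc = F.l1_loss(rec_nc, I_nc) + F.l1_loss(rec_c, I_c)
    return adv + lambda_cyc * cyc
```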

3. Clinical Applications in Neuroradiology

In neuroradiology, contrast enhancement of a lesion within the brain tissue reflects a disruption of the blood–brain barrier. This occurs in many different brain diseases, such as some primary tumors and metastases, neuroinflammatory diseases, infections, and subacute ischemia, for which contrast injection during MRI examination is considered mandatory for accurate assessment, differential diagnosis, and monitoring [55,56,57,58,59,60]. Other uses of contrast include the evaluation of vascular malformations and aneurysms. In this scenario, the implementation of AI algorithms to reduce contrast usage could result in significant benefits for patients, as well as reduced scan times and costs [61]. However, the main drawback of AI analysis, especially through deep learning (DL) methods, resides in the need for large quantities of data. For this reason, the literature on AI-based virtual contrast in neuroradiology focuses on MRI examinations of relatively common diseases, such as tumors and multiple sclerosis (MS) (Table 2).

3.1. AI in Neuro-Oncology Imaging

In neuro-oncology, contrast enhancement is particularly useful not only as a marker for differential diagnosis and progression, but also as the target for neurosurgical removal of a lesion and an indicator of possible recurrence. Although in recent years some authors have suggested expanding the surgical resection of brain tumors beyond the contrast enhancement [62,63], injection of gadolinium remains a standard for both first and follow-up MR scans. To avoid the use of gadolinium, Kleesiek et al. applied a Bayesian DL architecture to predict contrast enhancement from non-contrast MR images in patients with gliomas of different grades and in healthy controls [64]. The authors obtained good results in terms of qualitative and quantitative assessment (approximate sensitivity and specificity of 91%). Similarly, other studies applied DL methods to pre-contrast MR images of a large group of glioblastomas and low-grade gliomas, reporting good structural similarity between simulated and real post-contrast imaging and a preserved ability of the neuroradiologist to determine the tumor grade [32,65]. These methods can also be applied to sequences other than T1. Recently, Wang et al. developed a GAN to synthesize 3D isotropic contrast-enhanced FLAIR images from a 2D non-contrast FLAIR image stack in 185 patients with different brain tumors [66]. Interestingly, the authors went beyond simple contrast synthesis and added super-resolution and anti-aliasing tasks to correct MR artifacts and create isotropic 3D images, which provide a better visualization of the tumor, with a good structural similarity index to the source images [66]. Calabrese et al. obtained good results in synthesizing contrast-enhanced T1 images from non-contrast ones in 400 patients with glioblastoma and lower-grade glioma [65]. In addition, the authors included an external validation analysis, which is always recommended in DL-based studies [65]. However, the simulated images appeared blurrier than the real ones, a problem that could particularly affect the assessment of progression in follow-up exams [65].

This shortcoming appears to be a common issue of all ‘simulated imaging’ studies. As stated above, contrast enhancement reflects disruption of the blood–brain barrier, information that is usually inferred from the pharmacokinetics of gadolinium-based contrasts within the brain vasculature; this explains the difficulty of generating such information from sequences that may not contain it. Moreover, virtual contrast may hinder the interpretation of derived measures, such as perfusion-weighted imaging, which have proven crucial for differential diagnosis and prognosis prediction in neuro-oncology [15,16,67,68]. Future directions could make use of ultrahigh-field scanners, whose resolution may be closer to that of molecular imaging. In the meantime, another approach has been explored to address the issue. Rather than eliminating contrast injection, several studies used AI algorithms to enhance a reduced dose of gadolinium (10% or 25% of the standard dose), a method defined as ‘augmented contrast’ [30,31,69,70]. This method, applied to images of different brain lesions, including meningiomas and arteriovenous malformations, allows the detection of blood–brain barrier disruption with a significantly lower contrast dose. Another advantage is the better quality of the synthesized images, as perceived by the evaluating neuroradiologists [30,31].
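In these studies, agreement between synthetic and real post-contrast images is typically quantified with metrics such as the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR), as reported in Table 2. A minimal sketch using scikit-image is shown below, assuming the two images are available as co-registered NumPy arrays; the file names are placeholders.

```python
# Minimal sketch of how synthetic and real post-contrast images are
# typically compared with SSIM and PSNR (as in Table 2), using scikit-image.
# The .npy file names are placeholders, not files from the cited studies.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

real = np.load("real_postcontrast_slice.npy")            # placeholder path
synthetic = np.load("synthetic_postcontrast_slice.npy")  # placeholder path

data_range = real.max() - real.min()
ssim = structural_similarity(real, synthetic, data_range=data_range)
psnr = peak_signal_noise_ratio(real, synthetic, data_range=data_range)
print(f"SSIM: {ssim:.3f}, PSNR: {psnr:.2f} dB")
```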
Such benefits persist with data obtained across different scanners, including both 1.5 T and 3 T field strengths, a fundamental step for the generalizability of results [30,69,70]. Nevertheless, augmented contrast techniques are not exempt from limitations. Frequently encountered issues are the difficulty of detecting small lesions, the presence of false-positive enhancement, probably due to flow or motion artifacts, and coregistration mismatch [31,70]. Another concern is the lack of time control between the two contrast injections. Most studies perform MRI in a single session by acquiring the low-dose sequence first, followed by full-dose imaging after injecting the remaining dose of gadolinium [30,31,69,70]. The resulting full-dose images are, thus, a combination of the late-delayed pre-dose (10% or 25%) and the standard-delayed contrast, which can result in a slightly different enhancement pattern from a standard full-dose injection. Future studies could acquire low- and standard-dose exams on separate days, with controlled post-contrast timing. Lastly, future directions could include the prediction of other types of contrast imaging; in fact, other contrast agents are being developed to provide additional information on pathologic processes, such as neuroinflammation [71].

Another interesting application of AI in neuro-oncology imaging consists of amplifying the contrast signal in standard-dose T1 images, in order to better delineate tumors and detect smaller lesions. Increasing the contrast signal for better tumor detection has, in fact, always been a goal in the development of new MR sequences, recently leading to a consensus on brain tumor imaging, especially for metastases [72]. A recent study by Ye et al. used a multidimensional integration method to increase the signal-to-noise ratio in T1 gradient-echo images [73], which also resulted in contrast signal enhancement. By comparison, Bône et al. implemented an AI-based method to increase the contrast signal, similar to the ‘augmented contrast’ studies [74]. The authors trained an artificial neural network with pre-contrast T1, FLAIR, ADC, and low-dose T1 (0.025 mmol/kg) images. Once trained, the model was leveraged to amplify the contrast of routine full-dose T1 images by processing them into high-contrast T1 images. The hypothesis behind this process was that the neural network had learned to amplify the difference in contrast between pre-contrast and low-dose images; hence, by replacing the low-dose sequence with a full-dose one, the synthesis yielded an enhancement equivalent to a quadruple contrast dose [74]. The results showed a significant improvement in image quality, contrast level, and lesion detection performance, with a 16% increase in sensitivity and a similar false-detection rate with respect to routine acquisition.
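The contrast-amplification idea described for ref. [74] can be summarized schematically as in the sketch below: a model trained to map low-dose inputs to contrast-enhanced output is fed the full-dose image in place of the low-dose one at inference time. The model object and the way the input sequences are stacked are illustrative assumptions and do not correspond to the published implementation.

```python
# Schematic sketch of the contrast-amplification strategy attributed to
# Bône et al. [74]: the network is trained to map (pre-contrast T1, FLAIR,
# ADC, low-dose T1) to a contrast-enhanced T1, so feeding the full-dose T1
# in place of the low-dose channel amplifies the learned enhancement.
# The trained `model` and the input stacking are hypothetical.
import torch

def amplify_contrast(model, t1_pre, flair, adc, t1_full_dose):
    # at inference, the full-dose T1 replaces the low-dose channel the
    # network was trained on, scaling up the learned pre/post difference
    x = torch.stack([t1_pre, flair, adc, t1_full_dose], dim=1)  # (B, 4, H, W)
    with torch.no_grad():
        return model(x)
```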

3.2. AI in Multiple Sclerosis Imaging

MS is the most common chronic demyelinating disorder and mostly affects young adults [75]. MRI is a fundamental tool in both MS diagnosis and follow-up, and gadolinium injection is usually mandatory. In fact, contrast enhancement is essential to establish dissemination in time according to the McDonald criteria, and to evaluate ‘active’ lesions in follow-up scans [76]. Due to the relatively young age at diagnosis, MS patients undergo numerous MRI scans over the years. For this reason, different research groups are trying to limit contrast administration to selected cases in order to minimize exposure and costs [40,53]. One study developed a DL algorithm to predict enhancing MS lesions from non-enhanced MR sequences in nearly 2000 scans, reporting an accuracy between 70% and 75% and paving the way for further research [77]. The authors used only conventional imaging, such as T1, T2, and FLAIR. Future studies could add DWI to the analysis, as this sequence has proven useful in identifying active lesions [78]. The small size of MS plaques, as already noted for brain tumor studies, could explain the limited accuracy of synthetic contrast imaging; future work could instead try enhancing low-dose contrast injections by means of AI algorithms.
In conclusion, virtual and augmented contrast imaging play a promising role in neuroradiology for assessing the many diseases for which gadolinium injection is, for now, mandatory.
Table 2. Studies in neuroradiology with field of application, database used, tasks, and results. The database used is reported as not specified, private, or public, with the total number of patients in brackets (N); when the database is public, its name is reported. BraTS = Brain Tumor Segmentation database; SSIM = structural similarity index measure; PSNR = peak signal-to-noise ratio; FLAIR = fluid-attenuated inversion recovery; PCC = Pearson correlation coefficient; SNR = signal-to-noise ratio; MS = multiple sclerosis; MR = magnetic resonance.
| Field of Application | Reference | Database (N) | Task | Results |
|---|---|---|---|---|
| Neuro-oncology | [30] | Private (60) | To obtain 100% full-dose 3D T1-weighted images from 10% low-dose 3D T1-weighted images | SSIM and PSNR increased by 11% and 5 dB |
| Neuro-oncology | [31] | Private (640) | To obtain 100% full-dose 3D T1-weighted images from 10% low-dose and pre-contrast 3D T1-weighted images | SSIM 0.92 ± 0.02; PSNR 35.07 ± 3.84 dB |
| Neuro-oncology | [32] | BraTS 2020 (369) | To obtain contrast-enhanced T1-weighted images from non-contrast-enhanced MR images | SSIM and PCC 0.991 ± 0.007 and 0.995 ± 0.006 (whole brain); 0.993 ± 0.008 and 0.999 ± 0.003 |
| Neuro-oncology | [47] | Private 1 (185); Private 2 (36); BraTS 2018 (73 for SR) | To obtain 3D isotropic contrast-enhanced T2 FLAIR images from non-contrast-enhanced 2D FLAIR images and perform image super-resolution | SSIM 0.932 (whole brain), 0.851 (tumor region); PSNR 31.25 dB (whole brain), 24.93 dB (tumor region) |
| Neuro-oncology | [64] | Private (116) | To obtain contrast-enhanced T1-weighted images from non-contrast-enhanced MR images | Sensitivity 91.8%, specificity 91.2% |
| Neuro-oncology | [65] | Private (400); BraTS 2020 (286, external validation) | To obtain contrast-enhanced T1-weighted images from non-contrast-enhanced MR images | SSIM 0.84 ± 0.05 |
| Neuro-oncology | [69] | Private (83) | To obtain 100% full-dose 3D T1-weighted images from 10% low-dose 3D T1-weighted images | Image quality 0.73; image SNR 0.63; lesion conspicuity 0.89; lesion enhancement 0.87 |
| Neuro-oncology | [70] | Private (145) | To obtain 100% full-dose 3D T1-weighted images from 25% low-dose 3D T1-weighted images | SSIM 0.871 ± 0.04; PSNR 31.6 ± 2 dB |
| Neuro-oncology | [74] | Private (250) | To maximize contrast in full-dose 3D T1-weighted images | Sensitivity in lesion detection increased by 16% |
| Multiple sclerosis | [77] | Private (1970) | To identify MS enhancing lesions from non-enhanced MR images | Sensitivity and specificity 78% ± 4.3 and 73% ± 2.7 (slice-wise); 72% ± 9 and 70% ± 6.3 |

This entry is adapted from the peer-reviewed paper 10.3390/pharmaceutics14112378
