AI-Based Imaging Techniques Applications

AI-based imaging techniques can be divided into eight distinct categories: acquisition, preprocessing, feature extraction, registration, classification, object localization, segmentation, and visualization. They can also be organized along the clinical process pipeline, which broadly encompasses prevention, diagnostics, planning, therapy, prognosis, and monitoring, or grouped by the human organ or physiological process under study.

Keywords: artificial intelligence; medical imaging; diagnostics

1. Medical Image Analysis for Disease Detection and Diagnosis

Medical image analysis for disease detection and diagnosis is a rapidly evolving field that holds immense potential for improving healthcare outcomes. By harnessing advanced computational techniques and machine learning algorithms, medical professionals are now able to extract invaluable insights from various medical imaging modalities [1][2].
Artificial intelligence is an area where great progress has been observed, and the number of techniques applicable to medical image processing has been increasing significantly. Given this diversity, review articles that present and compare different techniques are useful. For example, in the area of automated retinal disease assessment (ARDA), AI can help healthcare workers in the early detection, screening, diagnosis, and grading of retinal diseases such as diabetic retinopathy (DR), retinopathy of prematurity (RoP), and age-related macular degeneration (AMD), as shown in the comprehensive survey presented in [2]. The authors highlight the significance of medical image modalities, such as optical coherence tomography (OCT), fundus photography, and fluorescein angiography, in capturing detailed retinal images for diagnostic purposes and explain how AI can cope with these distinct information sources, either in isolation or combined. The limitations and subjectivity of traditional manual examination and interpretation methods are emphasized, motivating the exploration of AI-based solutions. To this end, an overview of deep learning models is presented, and the most promising results in the detection and classification of retinal diseases, including AMD, DR, and glaucoma, are thoroughly covered. The role of AI in facilitating the analysis of large-scale retinal datasets and the development of computer-aided diagnostic systems is also highlighted. However, AI is not always a perfect solution, and the challenges and limitations of AI-based approaches are also covered, addressing issues related to data availability, model interpretability, and regulatory considerations. Given the significant interest in this field and the promising results that AI has yielded, other studies have also emerged to cover various topics related to eye image analysis [3][4].
Another area of great interest is brain imaging, whose techniques play a crucial role in understanding the intricate workings of the human brain and in diagnosing neurological disorders. Methods such as magnetic resonance imaging (MRI), functional MRI (fMRI), positron emission tomography (PET), and electroencephalography (EEG) provide valuable insights into brain structure, function, and connectivity. However, the analysis of these complex data, be they images or signals, requires sophisticated tools and expertise. Again, artificial intelligence (AI) comes into play. The synergy between brain imaging and AI has the potential to revolutionize neuroscience and improve patient care by unlocking deeper insights into the intricacies of the human brain. In [5], a powerful combination of deep learning techniques and the sine–cosine fitness grey wolf optimization (SCFGWO) algorithm is used for the detection and classification of brain tumors. The study addresses the importance of accurate tumor detection and classification as well as the associated challenges. Complexity and variability are tackled by convolutional neural networks (CNNs), which can automatically learn and extract relevant features for tumor analysis. Here, the SCFGWO algorithm is used to fine-tune the parameters of the CNN, leading to optimized performance. Metrics such as accuracy, sensitivity, specificity, and F1-score are compared with those of other existing approaches to showcase the effectiveness and benefits of the proposed method in brain tumor detection and classification. The advantages and limitations of the proposed approach and the potential impact of the research on clinical practice are also discussed.
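To make the optimization idea concrete, the following is a minimal sketch of a standard grey wolf optimization loop applied to a two-dimensional hyperparameter search (learning rate and dropout). The fitness function is a stand-in for training and validating the CNN, and the sine–cosine fitness refinement of [5] is not reproduced.

```python
# Minimal sketch of grey wolf optimization (GWO) for CNN hyperparameter
# tuning, in the spirit of [5]. The fitness function and search bounds are
# placeholders, not the authors' implementation.
import numpy as np

def fitness(params):
    lr, dropout = params
    # Placeholder: in practice, train/validate the CNN with these
    # hyperparameters and return 1 - validation accuracy.
    return (np.log10(lr) + 3) ** 2 + (dropout - 0.3) ** 2

def grey_wolf_optimize(bounds, n_wolves=10, n_iters=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iters):
        scores = np.array([fitness(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(scores)[:3]]  # three best wolves
        a = 2 - 2 * t / n_iters           # exploration factor decays to 0
        for i in range(n_wolves):
            new_pos = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3, lo, hi)  # average of the three leaders
    scores = np.array([fitness(w) for w in wolves])
    return wolves[np.argmin(scores)]

best = grey_wolf_optimize(np.array([[1e-5, 1e-1], [0.0, 0.6]]))
print("best (lr, dropout):", best)
```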
Lung imaging has been a subject of extensive research interest [6][7], primarily due to the aggressive nature of lung cancer and its tendency to be detected at an advanced stage, leading to high mortality rates among cancer patients. In this context, accurate segmentation of lung fields in medical imaging plays a crucial role in the detection and analysis of lung diseases. In a recent study [8], the authors focused on segmenting lung fields in chest X-ray images using a combination of superpixel resizing and encoder–decoder segmentation networks. The study effectively addresses the challenges associated with lung field segmentation, including anatomical variations, image artifacts, and overlapping structures. It emphasizes the potential of deep learning techniques and the utilization of encoder–decoder architectures for semantic segmentation tasks. The proposed method, which combines superpixel resizing with an encoder–decoder segmentation network, demonstrates a high level of effectiveness compared to other approaches, as assessed using evaluation metrics such as the Dice similarity coefficient, Jaccard index, sensitivity, specificity, and accuracy.
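For reference, the two overlap metrics most cited above can be computed in a few lines; the following sketch uses illustrative NumPy masks rather than data from [8].

```python
# Minimal sketch of the Dice similarity coefficient and Jaccard index for
# binary lung-field masks; the masks here are illustrative stand-ins.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

pred = np.zeros((256, 256)); pred[60:200, 40:120] = 1   # predicted mask
gt = np.zeros((256, 256)); gt[70:210, 45:125] = 1       # ground-truth mask
print(f"Dice: {dice_coefficient(pred, gt):.3f}, Jaccard: {jaccard_index(pred, gt):.3f}")
```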
More recently, interest in lung imaging has been reinforced due to its importance in the diagnosis and monitoring of COVID-19. In a notable study [9], the authors delve into the data-driven nature of AI and its need for high-quality data. They specifically focus on the generation of synthetic data, which involves creating artificial instances that closely mimic real data. In fact, with the proposed approach, the synthetic images are nearly indistinguishable from real images when compared using the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and Fréchet inception distance (FID). Lung CT for COVID-19 diagnosis is used as an application example where the proposed approach has been shown to be successful. The problem is tackled by means of a new regularization strategy, regularization being a technique used to prevent overfitting in ML models. This strategy does not require significant changes to the underlying neural network architecture, making it easy to implement. Furthermore, the method's efficacy extends beyond lung CT for COVID-19 diagnosis, and it can be easily adapted to other image types or imaging modalities. Consequently, future research can explore its applicability to diverse diseases and investigate its relevance to emerging AI topics, such as zero-shot or few-shot learning.
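As an illustration of how such similarity is quantified, the following sketch computes SSIM and PSNR with scikit-image on a synthetic image pair; FID additionally requires a pretrained Inception network (e.g., via the pytorch-fid package) and is omitted.

```python
# Minimal sketch of two of the image-similarity metrics mentioned in [9].
# The images here are random stand-ins, not data from the study.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
real = rng.random((128, 128)).astype(np.float32)  # stand-in "real" CT slice
synthetic = real + 0.05 * rng.standard_normal((128, 128)).astype(np.float32)

data_range = synthetic.max() - synthetic.min()
ssim = structural_similarity(real, synthetic, data_range=data_range)
psnr = peak_signal_noise_ratio(real, synthetic, data_range=data_range)
print(f"SSIM: {ssim:.3f}, PSNR: {psnr:.2f} dB")
```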
Breast cancer, the second most reported cancer worldwide, must be diagnosed as early as possible for a good prognosis. Here, medical imaging is paramount for disease prevention and diagnosis. The effectiveness of an AI-based approach is evaluated in [10]. The authors present a novel investigation that constructs and evaluates two computer-aided detection (CAD) systems for digital mammograms. The objective was to differentiate between malignant and benign breast lesions by employing two state-of-the-art approaches, one based on radiomics (with features such as intensity, shape, and texture) and one based on deep transfer learning concepts and technologies (with deep features). The two CAD systems were trained and assessed using a sizable and diverse dataset of 3000 images. The findings of this study indicate that deep transfer learning can effectively extract meaningful features from medical images, even with limited training data, offering more discriminatory information than traditional handcrafted radiomics features. However, explainability, a desired characteristic of artificial intelligence and of medical decision systems in particular, must be further explored to fully unravel the mysteries of these “black-box” models.
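The deep-feature idea can be illustrated with a short, assumption-laden sketch: a ResNet-50 pretrained on ImageNet (an illustrative backbone choice, not necessarily the one used in [10]) is truncated, and its pooled activations serve as deep features for a downstream classifier.

```python
# Minimal sketch of deep transfer learning for feature extraction, the
# general approach behind the second CAD system in [10]. Backbone and
# classifier choices are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()   # drop the ImageNet head, keep 2048-d features
backbone.eval()

with torch.no_grad():
    batch = torch.randn(4, 3, 224, 224)   # stand-in for preprocessed mammogram patches
    deep_features = backbone(batch)       # shape: (4, 2048)
print(deep_features.shape)

# The deep features can then feed a lightweight classifier, e.g.:
# from sklearn.svm import SVC; SVC().fit(deep_features.numpy(), labels)
```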
Still concerning breast imaging, and addressing the typically high data requirements of machine learning systems, a study was conducted to compare and optimize models using small datasets [11]. The article discusses the challenges associated with limited data, such as overfitting and poor generalization, and trains distinct CNN architectures, such as AlexNet, VGGNet, and ResNet, on small datasets. The authors also discuss strategies to mitigate these limitations, such as data augmentation techniques, transfer learning, and model regularization. On these premises, a multiclass classifier based on the BI-RADS lexicon was developed on the INBreast dataset [12]. Compared with the literature, the model was able to improve on the state-of-the-art results. This reinforces that discriminative fine-tuning works well with state-of-the-art CNN models and that it is possible to achieve excellent performance even on small datasets.
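The following minimal sketch illustrates two of these strategies, data augmentation and discriminative fine-tuning (lower learning rates for earlier, more generic layers); the architecture, transforms, and rates are illustrative assumptions, not the settings of [11].

```python
# Minimal sketch of data augmentation plus discriminative fine-tuning.
import torch
from torchvision import models, transforms

# Data augmentation: random geometric/photometric perturbations of the
# training images (applied to PIL images before tensor conversion).
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 6)  # e.g., BI-RADS classes

# Discriminative fine-tuning: early layers learn slowly, the head quickly.
# The stem (conv1/bn1) is left out of the optimizer, i.e., frozen.
optimizer = torch.optim.AdamW([
    {"params": model.layer1.parameters(), "lr": 1e-5},
    {"params": model.layer2.parameters(), "lr": 3e-5},
    {"params": model.layer3.parameters(), "lr": 1e-4},
    {"params": model.layer4.parameters(), "lr": 3e-4},
    {"params": model.fc.parameters(),     "lr": 1e-3},
])
```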
Radiomics and artificial intelligence (AI) play pivotal roles in advancing breast cancer imaging, offering a range of applications across the diagnostic spectrum. These technologies contribute significantly to risk stratification, aiding in the determination of cancer recurrence risks and providing valuable insights to guide treatment decisions [13][14]. Moreover, AI algorithms leverage radiomics features extracted from diverse medical imaging modalities, such as mammography, ultrasound, magnetic resonance imaging (MRI), and positron emission tomography (PET), to enhance the accuracy of detecting and classifying breast lesions [13][14]. For treatment planning, radiomics furnishes critical information regarding treatment effectiveness, facilitating the prediction of treatment responses and the formulation of personalized treatment plans [15]. Additionally, radiomics serves as a powerful tool for prognosis, enabling the prediction of outcomes such as disease-free survival and recurrence risk in breast cancer patients [16]. Furthermore, the robustness of MRI-based radiomics features against interobserver segmentation variability has been highlighted, indicating their potential for future breast MRI-based radiomics research [17].
Liver cancer is the third most common cause of death from cancer worldwide, and its incidence has been growing. Again, the development of the disease is often asymptomatic, making screening and early detection crucial for a good prognosis. In [18], the authors focus on the segmentation of liver lesions in CT images from the LiTS dataset [19]. As a novelty, the research proposes an intelligent decision system for segmenting the liver and hepatic tumors by integrating four efficient neural networks (ResNet152, ResNeXt101, DenseNet201, and InceptionV3). These classifiers operate independently, and a final result is obtained by postprocessing to eliminate artifacts. The obtained results were better than those of the individual networks. In fact, concerning liver and pancreatic imaging, the use of AI algorithms is already a reality for speeding up repetitive tasks, such as segmentation; acquiring new quantitative parameters, such as lesion volume and tumor burden; improving image quality; reducing scanning time; and optimizing imaging acquisition [20].
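The fusion step can be illustrated with a minimal majority-voting sketch over per-pixel probability maps; the stand-in below does not reproduce the four specific backbones or the artifact-removal postprocessing of [18].

```python
# Minimal sketch of combining several independently trained segmentation
# networks by per-pixel majority voting, the general idea behind the
# multi-network decision system in [18].
import numpy as np

def ensemble_vote(prob_maps, threshold=0.5):
    """prob_maps: list of (H, W) per-pixel tumor probabilities, one per model."""
    binary = [p >= threshold for p in prob_maps]
    votes = np.sum(binary, axis=0)
    return votes >= (len(prob_maps) // 2 + 1)  # strict majority

rng = np.random.default_rng(0)
prob_maps = [rng.random((64, 64)) for _ in range(4)]  # stand-ins for 4 networks
fused_mask = ensemble_vote(prob_maps)
print(fused_mask.sum(), "pixels labeled tumor by majority vote")
```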
Diabetic retinopathy (DR) is a significant cause of blindness globally, and early detection and intervention can help change the outcomes of the disease. AI techniques, including deep learning and convolutional neural networks (CNNs), have been applied to the analysis of retinal images for DR screening and diagnosis [21]. Some studies have shown promising results in detecting referable diabetic retinopathy (rDR) using AI algorithms with high sensitivity and specificity compared to human graders [22], while reducing the demand on human resources. For example, a study using a deep learning-based AI system achieved 97.05% sensitivity, 93.4% specificity, and a 99.1% area under the curve (AUC) in classifying rDR as moderate or worse diabetic retinopathy, referable diabetic macular edema, or both [22]. Nevertheless, there are also shortcomings, such as the lack of standards for development and evaluation and the limited scope of application [23].
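For concreteness, the following sketch computes sensitivity, specificity, and AUC with scikit-learn on illustrative labels and scores, not the data of [22].

```python
# Minimal sketch of the screening metrics reported in studies such as [22].
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])   # 1 = referable DR (illustrative)
y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.4, 0.6, 0.85, 0.15])
y_pred = (y_score >= 0.5).astype(int)                # operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, AUC={auc:.2f}")
```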
AI can also help in the detection and prediction of age-related macular degeneration (AMD). AI-based systems can screen for AMD and predict which patients are likely to progress to late-stage AMD within two years [24]. AI algorithms can provide analyses to assist physicians in diagnosing conditions based on specific features extrapolated from retinal images [25].
Still in this area, optical coherence tomography (OCT) is a valuable tool for diagnosing various eye conditions and another setting where artificial intelligence (AI) can be successfully applied. AI-assisted OCT has several advantages and applications in ophthalmology for diagnosis, monitoring, and disease-progression estimation (e.g., for glaucoma, macular edema, or age-related macular degeneration) [26]. AI-assisted OCT can provide more accurate and sensitive results compared to traditional methods [27]. For example, an OCT-AI-based telemedicine platform achieved a sensitivity of 96.6% and specificity of 98.8% for detecting urgent cases, and a sensitivity of 98.5% and specificity of 96.2% for detecting both urgent and routine cases [28].
These tools can lead to more efficient and objective ways of diagnosing and managing eye conditions.

2. Imaging and Modeling Techniques for Surgical Planning and Intervention

Imaging and 3D modeling techniques, coupled with the power of artificial intelligence (AI), have revolutionized the field of surgical planning and intervention, offering numerous advantages to both patients and healthcare professionals. By leveraging the capabilities of AI, medical imaging data, such as CT scans and MRI images, can be transformed into detailed three-dimensional models that provide an enhanced understanding of a patient’s anatomy. This newfound precision and depth of information allow surgeons to plan complex procedures with greater accuracy, improving patient outcomes and minimizing risks. Furthermore, AI-powered algorithms can analyze vast amounts of medical data, assisting surgeons in real time during procedures, guiding them with valuable insights, and enabling personalized surgical interventions. For example, in [29], a new deep learning (DL)-based tool for segmenting anatomical structures of the left heart from echocardiographic images is proposed. It combines the YOLOv7 algorithm with U-net, specifically addressing the segmentation of echocardiographic images into the left ventricular endocardium (LVendo), left ventricular epicardium (LVepi), and left atrium (LA).
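To illustrate the segmentation component, the following is a deliberately tiny U-Net-style encoder-decoder in PyTorch with one downsampling stage and three output classes; it is a sketch under stated assumptions, not the authors' YOLOv7 + U-net pipeline.

```python
# Minimal U-Net-style encoder-decoder, illustrating the kind of
# segmentation network used in [29]. Sizes and depth are toy choices.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=3):   # e.g., LVendo, LVepi, LA
        super().__init__()
        self.enc = conv_block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)          # 32 = 16 skip + 16 upsampled
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([e, u], dim=1)))  # skip connection

logits = TinyUNet()(torch.randn(1, 1, 128, 128))   # stand-in echo frame
print(logits.shape)   # (1, 3, 128, 128)
```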
Additionally, the integration of 3D printing technology with imaging and 3D modeling techniques further amplifies the advantages of surgical planning and intervention. With 3D printing, these intricate anatomical models can be translated into physical objects, allowing surgeons to hold and examine patient-specific replicas before the actual procedure. This tangible representation aids in comprehending complex anatomical structures, identifying potential challenges, and refining surgical strategies. Surgeons can also utilize 3D-printed surgical guides and implants, customized to fit each patient’s unique anatomy, thereby enhancing precision and reducing operative time.
These benefits are described and explored in [30], which covers the operative workflow involved in creating 3D-printed models of the heart from computed tomography (CT) scans. The authors begin by emphasizing the importance of accurate anatomical models in surgical planning, particularly in complex cardiac cases. They also discuss how 3D printing technology has gained prominence in the medical field, allowing for the creation of patient-specific anatomical models. They then thoroughly describe the operative workflow for generating 3D-printed heart models, covering the challenges and limitations encountered along the way from CT to 3D printing. They also discuss factors such as cost, time, the expertise required, and the need for validation studies to ensure the accuracy and reliability of the printed models.
A similar topic is presented in [31]. Here the authors focus specifically on coronary artery bypass graft (CABG) procedures and describe the feasibility of using a 3D modeling and printing process to create surgical guides, contributing to the success of the surgery and enhancing patient outcomes. In this paper, the authors also discuss the choice of materials for the 3D-printed guide, considering biocompatibility and sterility requirements. In addition, a case study that demonstrates the successful application of the workflow in a real clinical scenario is presented.
The combination of AI-driven imaging, 3D modeling, and 3D printing technologies is transforming surgical planning and intervention, empowering healthcare professionals with unparalleled tools to improve patient outcomes, create personalized solutions, and redefine the future of surgical practice. These AI-driven advancements in imaging and 3D modeling are ushering in a new era of surgical precision and innovation in healthcare.

3. Image and Model Enhancement for Improved Analysis

Decision-making and diagnosis are central purposes of clinical applications, but AI can also play an important role in other stages of the clinical process. For example, in [32] the authors focus on the application of colorization techniques to medical images, with the goal of enhancing visual interpretation and analysis by adding chromatic information. The authors highlight the importance of color in medical imaging, as it can provide additional information for diagnosis, treatment planning, and educational purposes. They also address the challenges associated with medical image colorization, including the large variability in image characteristics and the need for robust and accurate colorization methods. The proposed method utilizes a spatial mask-guided colorization with generative adversarial network (SMCGAN) technique to focus on relevant regions of the medical image while preserving important structural information during the process. The evaluation was based on a dataset from the Visible Human Project [33] and on the NCI-ISBI 2013 prostate dataset [34]. Under the presented experimental setup and evaluation metrics, the proposed technique outperformed state-of-the-art GAN-based image colorization approaches, with an average improvement of 8.48% in the peak signal-to-noise ratio (PSNR) metric.
In complex healthcare scenarios, it is crucial for clinicians and practitioners to understand the reasoning behind AI models’ predictions and recommendations. Explainable AI (XAI) plays a pivotal role in the domain of medical imaging techniques for decision support, where transparency and interpretability are paramount. In [35], the authors address the problem of nuclei detection in histopathology images, which is a crucial task in digital pathology for diagnosing and studying diseases. They specifically propose a technique called NDG-CAM (nuclei detection in histopathology images with semantic segmentation networks and Grad-CAM). Grad-CAM (gradient-weighted class activation mapping) [36] is a technique used in computer vision and deep learning to visualize and interpret the regions of an image that are most influential in the prediction made by a convolutional neural network. Hence, in the proposed methodology, the semantic segmentation network aims to accurately segment the nuclei regions in histopathology images, while Grad-CAM helps visualize the important regions that contribute to the model’s predictions, helping to improve the accuracy and interpretability of nuclei detection. The authors compare the performance of their method with other existing nuclei detection methods, demonstrating that NDG-CAM achieves improved accuracy while providing interpretable results.
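A minimal Grad-CAM sketch in PyTorch follows: gradients of a class score with respect to a convolutional layer's activations weight those activations into a coarse localization map. The ResNet-18 backbone and layer choice are illustrative assumptions, not the NDG-CAM implementation.

```python
# Minimal Grad-CAM sketch [36], illustrating the visualization idea behind
# NDG-CAM [35]. Backbone, layer, and input are illustrative stand-ins.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}
layer = model.layer4   # last convolutional stage

layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in histology tile
score = model(x)[0].max()          # score of the top-scoring class
model.zero_grad(); score.backward()

w = gradients["g"].mean(dim=(2, 3), keepdim=True)    # global-average-pooled grads
cam = F.relu((w * activations["a"]).sum(dim=1))      # weighted sum over channels
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap over the input
```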
Still with the purpose of making AI provide human-understandable results, the authors in [37] focus on the development of an open-source COVID-19 CT dataset that includes automatic lung tissue classification for radiomics analysis. The challenges associated with COVID-19 research, including the importance of large-scale datasets and efficient analysis methods, are covered. The potential of radiomics, which involves extracting quantitative features from medical images, in aiding COVID-19 diagnosis, prognosis, and treatment planning is also discussed. The proposed dataset consists of CT scans from COVID-19 patients, annotated with labels indicating different lung tissue regions, such as ground-glass opacities, consolidations, and normal lung tissue.
Novel machine learning techniques are also being used to enhance the resolution and quality of medical images [38]. These techniques aim to recover fine details and structures that are lost or blurred in low-resolution images, which can improve the diagnosis and treatment of various diseases. One family of such techniques is based on GANs. For example, Bing et al. [39] propose an improved squeeze-and-excitation block that selectively amplifies the important features and suppresses the unimportant ones in the feature maps. A simplified EDSR (enhanced deep super-resolution) model to generate high-resolution images from low-resolution inputs is also proposed, along with a new fusion loss function. The proposed method was evaluated on public medical image datasets and compared with state-of-the-art deep learning-based methods, such as SRGAN, EDSR, VDSR, and D-DBPN. The results show that the proposed method achieves better visual quality and preserves more details, especially for high upscaling factors.
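The channel-attention mechanism at the core of this approach can be sketched compactly; the following is the standard squeeze-and-excitation block, not the improved variant of [39].

```python
# Minimal squeeze-and-excitation (SE) block: per-channel attention weights
# computed from globally pooled features reweight the feature maps.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))              # squeeze: global average pooling
        w = self.fc(s).view(b, c, 1, 1)     # excitation: per-channel weights
        return x * w                        # reweight the feature maps

features = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(features).shape)          # (2, 64, 32, 32)
```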
Vision transformers, with their ability to treat images as sequences of tokens and to learn global dependencies among them, can capture long-range and complex patterns in images, which can benefit super-resolution tasks. Zhu et al. [40] propose the use of vision transformers with residual dense connections and local feature fusion. This method proposes an efficient vision transformer architecture that can achieve high-quality single-image super-resolution for various medical modalities, such as MRI, CT, and X-ray. The key idea is to use residual dense blocks to enhance the feature extraction and representation capabilities of the vision transformer and to use local feature fusion to combine the low-level and high-level features for better reconstruction. Moreover, this method also introduces a novel perceptual loss function that incorporates prior knowledge from medical image segmentation to improve the image quality of desired aspects, such as edges, textures, and organs. In another work, Wei et al. [41] propose to adapt the Swin transformer, a hierarchical vision transformer that uses shifted windows to capture local and global information, to the task of automatic medical image segmentation. The high-resolution Swin transformer uses a U-net-like architecture consisting of an encoder and a decoder. The encoder converts the high-resolution input image into low-resolution feature maps using a sequence of Swin transformer blocks, and the decoder gradually generates high-resolution representations from the low-resolution feature maps using upsampling and skip connections. The high-resolution Swin transformer can achieve state-of-the-art results on several medical image segmentation datasets, such as BraTS, LiTS, and KiTS (see Table 1 below).
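The residual dense block can be sketched in a few lines; the following PyTorch version uses illustrative layer counts and channel widths, not those of [40].

```python
# Minimal residual dense block (RDB): each convolution sees all previous
# feature maps (dense connectivity), and a 1x1 "local feature fusion"
# convolution plus a residual connection produce the output.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=32, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(n_layers)
        )
        # Local feature fusion: 1x1 conv back to the input width.
        self.fuse = nn.Conv2d(channels + n_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.layers:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))  # dense connectivity
        return x + self.fuse(torch.cat(feats, dim=1))                # local residual

x = torch.randn(1, 32, 64, 64)
print(ResidualDenseBlock()(x).shape)   # (1, 32, 64, 64)
```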
In addition, perceptual loss functions can be used to further enhance generative techniques. These are designed to measure the similarity between images in terms of their semantic content and visual quality rather than their pixel-wise differences. Perceptual loss functions can be derived from pretrained models, such as image classifiers or segmenters, that capture high-level features of images. By optimizing the perceptual loss functions, the super-resolution models can generate images that preserve the important structures and details of the original images while avoiding artifacts and distortions [39][42].
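A minimal sketch of such a loss follows, using a frozen VGG16 feature extractor as the loss network (an illustrative choice); the segmentation-based variant of [40] would instead use a pretrained segmenter.

```python
# Minimal perceptual loss sketch [39][42]: distance between feature
# activations of a frozen pretrained network rather than between raw pixels.
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index=16):   # VGG16 features up to relu3_3
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.extractor = nn.Sequential(*list(vgg[:layer_index])).eval()
        for p in self.extractor.parameters():
            p.requires_grad_(False)       # the loss network stays frozen

    def forward(self, sr, hr):
        return nn.functional.mse_loss(self.extractor(sr), self.extractor(hr))

loss_fn = PerceptualLoss()
sr = torch.rand(1, 3, 96, 96)   # stand-in super-resolved output
hr = torch.rand(1, 3, 96, 96)   # stand-in high-resolution target
print(loss_fn(sr, hr).item())
```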
Medical images often suffer from noise, artifacts, and limited resolution due to the physical constraints of the imaging devices. Therefore, developing effective and efficient methods for medical image super-resolution is a challenging and promising research topic, seeking to obtain previously unachievable detail and resolution [43][44].

4. Medical Imaging Datasets

Numerous advancements outlined above have arisen through public machine learning challenges. These initiatives provide supporting materials in the form of datasets (which are often expensive and time consuming to collect) and, at times, baseline algorithms, facilitating research aimed at the development and evaluation of novel algorithms. Their competitive framing has been pivotal in fostering the development of a scientific community around a given topic. Table 1 presents some popular datasets.
Table 1. Examples of datasets with medical images.
BRATS: The Multimodal Brain Tumor Segmentation Benchmark (BRATS) is an annual challenge that aims to compare different algorithms for brain tumor segmentation. The dataset, which has received several enhancements over the years, consists of preoperative multimodal MRI scans of glioblastoma and lower-grade glioma with ground-truth labels and survival data for participants to segment and predict the tumor. [45]
KiTS: The Kidney Tumor Segmentation Benchmark (KiTS) is a dataset used to evaluate and compare algorithms for kidney tumor segmentation. It consists of 300 CT scans of the kidneys and kidney tumors. The data and segmentations are provided by various clinical sites around the world. [46]
LiTS: The Liver Tumor Segmentation Benchmark (LiTS) is a dataset used to evaluate and compare liver tumor segmentation algorithms. It consists of CT scans of the liver and liver tumors, with 130 scans in the training set and 70 scans in the test set. The data and segmentations are provided by various clinical sites around the world. [19]
MURA: The Musculoskeletal Radiographs (MURA) dataset is a large dataset of musculoskeletal radiographs containing 40,561 images from 14,863 studies. Each study is manually labeled by radiologists as either normal or abnormal. [47]
MedPix: A free online medical image database with over 59,000 indexed and curated images from over 12,000 patients. [48]
NIH Chest X-rays: A large dataset of chest X-ray images containing over 112,000 images from more than 30,000 unique patients. The images are labeled with 14 common disease labels. [49]

References

  1. Adeshina, S.A.; Adedigba, A.P. Bag of Tricks for Improving Deep Learning Performance on Multimodal Image Classification. Bioengineering 2022, 9, 312.
  2. Saleh, G.A.; Batouty, N.M.; Haggag, S.; Elnakib, A.; Khalifa, F.; Taher, F.; Mohamed, M.A.; Farag, R.; Sandhu, H.; Sewelam, A.; et al. The Role of Medical Image Modalities and AI in the Early Detection, Diagnosis and Grading of Retinal Diseases: A Survey. Bioengineering 2022, 9, 366.
  3. Han, J.-H. Artificial Intelligence in Eye Disease: Recent Developments, Applications, and Surveys. Diagnostics 2022, 12, 1927.
  4. Daich Varela, M.; Sen, S.; De Guimaraes, T.A.C.; Kabiri, N.; Pontikos, N.; Balaskas, K.; Michaelides, M. Artificial Intelligence in Retinal Disease: Clinical Application, Challenges, and Future Directions. Graefes Arch. Clin. Exp. Ophthalmol. 2023, 261, 3283–3297.
  5. Zain Eldin, H.; Gamel, S.A.; El-Kenawy, E.-S.M.; Alharbi, A.H.; Khafaga, D.S.; Ibrahim, A.; Talaat, F.M. Brain Tumor Detection and Classification Using Deep Learning and Sine-Cosine Fitness Grey Wolf Optimization. Bioengineering 2023, 10, 18.
  6. Forte, G.C.; Altmayer, S.; Silva, R.F.; Stefani, M.T.; Libermann, L.L.; Cavion, C.C.; Youssef, A.; Forghani, R.; King, J.; Mohamed, T.-L.; et al. Deep Learning Algorithms for Diagnosis of Lung Cancer: A Systematic Review and Meta-Analysis. Cancers 2022, 14, 3856.
  7. Hunger, T.; Wanka-Pail, E.; Brix, G.; Griebel, J. Lung Cancer Screening with Low-Dose CT in Smokers: A Systematic Review and Meta-Analysis. Diagnostics 2021, 11, 1040.
  8. Lee, C.-C.; So, E.C.; Saidy, L.; Wang, M.-J. Lung Field Segmentation in Chest X-Ray Images Using Superpixel Resizing and Encoder–Decoder Segmentation Networks. Bioengineering 2022, 9, 351.
  9. Lee, K.W.; Chin, R.K.Y. Diverse COVID-19 CT Image-to-Image Translation with Stacked Residual Dropout. Bioengineering 2022, 9, 698.
  10. Danala, G.; Maryada, S.K.; Islam, W.; Faiz, R.; Jones, M.; Qiu, Y.; Zheng, B. A Comparison of Computer-Aided Diagnosis Schemes Optimized Using Radiomics and Deep Transfer Learning Methods. Bioengineering 2022, 9, 256.
  11. Adedigba, A.P.; Adeshina, S.A.; Aibinu, A.M. Performance Evaluation of Deep Learning Models on Mammogram Classification Using Small Dataset. Bioengineering 2022, 9, 161.
  12. Zebari, D.A.; Ibrahim, D.A.; Zeebaree, D.Q.; Mohammed, M.A.; Haron, H.; Zebari, N.A.; Damaševičius, R.; Maskeliūnas, R. Breast Cancer Detection Using Mammogram Images with Improved Multi-Fractal Dimension Approach and Feature Fusion. Appl. Sci. 2021, 11, 12122.
  13. Cè, M.; Caloro, E.; Pellegrino, M.E.; Basile, M.; Sorce, A.; Fazzini, D.; Oliva, G.; Cellina, M. Artificial Intelligence in Breast Cancer Imaging: Risk Stratification, Lesion Detection and Classification, Treatment Planning and Prognosis—A Narrative Review. Explor. Target. Antitumor. Ther. 2022, 3, 795–816.
  14. Zhang, T.; Tan, T.; Samperna, R.; Li, Z.; Gao, Y.; Wang, X.; Han, L.; Yu, Q.; Beets-Tan, R.G.H.; Mann, R.M. Radiomics and Artificial Intelligence in Breast Imaging: A Survey. Artif Intell. Rev. 2023, 56, 857–892.
  15. Pesapane, F.; De Marco, P.; Rapino, A.; Lombardo, E.; Nicosia, L.; Tantrige, P.; Rotili, A.; Bozzini, A.C.; Penco, S.; Dominelli, V.; et al. How Radiomics Can Improve Breast Cancer Diagnosis and Treatment. J. Clin. Med. 2023, 12, 1372.
  16. Luo, C.; Zhao, S.; Peng, C.; Wang, C.; Hu, K.; Zhong, X.; Luo, T.; Huang, J.; Lu, D. Mammography Radiomics Features at Diagnosis and Progression-Free Survival among Patients with Breast Cancer. Br. J. Cancer 2022, 127, 1886–1892.
  17. Granzier, R.W.Y.; Verbakel, N.M.H.; Ibrahim, A.; van Timmeren, J.E.; van Nijnatten, T.J.A.; Leijenaar, R.T.H.; Lobbes, M.B.I.; Smidt, M.L.; Woodruff, H.C. MRI-Based Radiomics in Breast Cancer: Feature Robustness with Respect to Inter-Observer Segmentation Variability. Sci. Rep. 2020, 10, 14163.
  18. Popescu, D.; Stanciulescu, A.; Pomohaci, M.D.; Ichim, L. Decision Support System for Liver Lesion Segmentation Based on Advanced Convolutional Neural Network Architectures. Bioengineering 2022, 9, 467.
  19. Bilic, P.; Christ, P.; Li, H.B.; Vorontsov, E.; Ben-Cohen, A.; Kaissis, G.; Szeskin, A.; Jacobs, C.; Mamani, G.E.H.; Chartrand, G.; et al. The Liver Tumor Segmentation Benchmark (LiTS). Med. Image Anal. 2023, 84, 102680.
  20. Cardobi, N.; Dal Palù, A.; Pedrini, F.; Beleù, A.; Nocini, R.; De Robertis, R.; Ruzzenente, A.; Salvia, R.; Montemezzi, S.; D’Onofrio, M. An Overview of Artificial Intelligence Applications in Liver and Pancreatic Imaging. Cancers 2021, 13, 2162.
  21. Huang, X.; Wang, H.; She, C.; Feng, J.; Liu, X.; Hu, X.; Chen, L.; Tao, Y. Artificial Intelligence Promotes the Diagnosis and Screening of Diabetic Retinopathy. Front. Endocrinol. 2022, 13, 946915.
  22. Sheng, B.; Chen, X.; Li, T.; Ma, T.; Yang, Y.; Bi, L.; Zhang, X. An Overview of Artificial Intelligence in Diabetic Retinopathy and Other Ocular Diseases. Front. Public Health 2022, 10, 971943.
  23. Li, S.; Zhao, R.; Zou, H. Artificial Intelligence for Diabetic Retinopathy. Chin. Med. J. Engl. 2022, 135, 253–260.
  24. Banerjee, I.; de Sisternes, L.; Hallak, J.A.; Leng, T.; Osborne, A.; Rosenfeld, P.J.; Gregori, G.; Durbin, M.; Rubin, D. Prediction of Age-Related Macular Degeneration Disease Using a Sequential Deep Learning Approach on Longitudinal SD-OCT Imaging Biomarkers. Sci. Rep. 2020, 10, 15434.
  25. Yin, C.; Moroi, S.E.; Zhang, P. Predicting Age-Related Macular Degeneration Progression with Contrastive Attention and Time-Aware LSTM. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 14 August 2022; pp. 4402–4412.
  26. Jin, K.; Ye, J. Artificial Intelligence and Deep Learning in Ophthalmology: Current Status and Future Perspectives. Adv. Ophthalmol. Pract. Res. 2022, 2, 100078.
  27. Bai, J.; Wan, Z.; Li, P.; Chen, L.; Wang, J.; Fan, Y.; Chen, X.; Peng, Q.; Gao, P. Accuracy and Feasibility with AI-Assisted OCT in Retinal Disorder Community Screening. Front. Cell Dev. Biol. 2022, 10, 1053483.
  28. Liu, X.; Zhao, C.; Wang, L.; Wang, G.; Lv, B.; Lv, C.; Xie, G.; Wang, F. Evaluation of an OCT-AI–Based Telemedicine Platform for Retinal Disease Screening and Referral in a Primary Care Setting. Transl. Vis. Sci. Technol. 2022, 11, 4.
  29. Mortada, M.J.; Tomassini, S.; Anbar, H.; Morettini, M.; Burattini, L.; Sbrollini, A. Segmentation of Anatomical Structures of the Left Heart from Echocardiographic Images Using Deep Learning. Diagnostics 2023, 13, 1683.
  30. Bertolini, M.; Rossoni, M.; Colombo, G. Operative Workflow from CT to 3D Printing of the Heart: Opportunities and Challenges. Bioengineering 2021, 8, 130.
  31. Cappello, I.A.; Candelari, M.; Pannone, L.; Monaco, C.; Bori, E.; Talevi, G.; Ramak, R.; La Meir, M.; Gharaviri, A.; Chierchia, G.B.; et al. 3D Printed Surgical Guide for Coronary Artery Bypass Graft: Workflow from Computed Tomography to Prototype. Bioengineering 2022, 9, 179.
  32. Zhang, Z.; Li, Y.; Shin, B.-S. Robust Medical Image Colorization with Spatial Mask-Guided Generative Adversarial Network. Bioengineering 2022, 9, 721.
  33. National Library of Medicine Visible Human Project. Available online: https://www.nlm.nih.gov/research/visible/visible_human.html (accessed on 16 June 2023).
  34. Bloch, B.N.; Madabhushi, A.; Huisman, H.; Freymann, J.; Kirby, J.; Grauer, M.; Enquobahrie, A.; Jaffe, C.; Clarke, L.; Farahani, K. NCI-ISBI 2013 Challenge: Automated Segmentation of Prostate Structures (ISBI-MR-Prostate-2013). 2015. Available online: https://www.cancerimagingarchive.net/analysis-result/isbi-mr-prostate-2013/ (accessed on 16 June 2023).
  35. Altini, N.; Brunetti, A.; Puro, E.; Taccogna, M.G.; Saponaro, C.; Zito, F.A.; De Summa, S.; Bevilacqua, V. NDG-CAM: Nuclei Detection in Histopathology Images with Semantic Segmentation Networks and Grad-CAM. Bioengineering 2022, 9, 475.
  36. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359.
  37. Zaffino, P.; Marzullo, A.; Moccia, S.; Calimeri, F.; De Momi, E.; Bertucci, B.; Arcuri, P.P.; Spadea, M.F. An Open-Source COVID-19 CT Dataset with Automatic Lung Tissue Classification for Radiomics. Bioengineering 2021, 8, 26.
  38. Ahmad, W.; Ali, H.; Shah, Z.; Azmat, S. A New Generative Adversarial Network for Medical Images Super Resolution. Sci. Rep. 2022, 12, 9533.
  39. Bing, X.; Zhang, W.; Zheng, L.; Zhang, Y. Medical Image Super Resolution Using Improved Generative Adversarial Networks. IEEE Access 2019, 7, 145030–145038.
  40. Zhu, J.; Yang, G.; Lio, P. A Residual Dense Vision Transformer for Medical Image Super-Resolution with Segmentation-Based Perceptual Loss Fine-Tuning. arXiv 2023, arXiv:2302.11184.
  41. Wei, C.; Ren, S.; Guo, K.; Hu, H.; Liang, J. High-Resolution Swin Transformer for Automatic Medical Image Segmentation. Sensors 2023, 23, 3420.
  42. Zhang, K.; Hu, H.; Philbrick, K.; Conte, G.M.; Sobek, J.D.; Rouzrokh, P.; Erickson, B.J. SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks. Tomography 2022, 8, 905–919.
  43. Yang, H.; Wang, Z.; Liu, X.; Li, C.; Xin, J.; Wang, Z. Deep Learning in Medical Image Super Resolution: A Review. Appl. Intell. 2023, 53, 20891–20916.
  44. Chen, C.; Wang, Y.; Zhang, N.; Zhang, Y.; Zhao, Z. A Review of Hyperspectral Image Super-Resolution Based on Deep Learning. Remote Sens. 2023, 15, 2853.
  45. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024.
  46. The 2023 Kidney Tumor Segmentation Challenge. Available online: https://kits-challenge.org/kits23/ (accessed on 20 October 2023).
  47. Rajpurkar, P.; Irvin, J.; Bagul, A.; Ding, D.; Duan, T.; Mehta, H.; Yang, B.; Zhu, K.; Laird, D.; Ball, R.L.; et al. MURA: Large Dataset for Abnormality Detection in Musculoskeletal Radiographs. arXiv 2018, arXiv:1712.06957.
  48. MedPix. Available online: https://medpix.nlm.nih.gov/home (accessed on 12 December 2023).
  49. NIH Chest X-rays. Available online: https://www.kaggle.com/datasets/nih-chest-xrays/data (accessed on 12 December 2023).