Silva, F.; Pereira, T.; Neves, I.; Morgado, J.; Freitas, C.; Malafaia, M.; Sousa, J.; Costa, J.L.; Hespanhol, V.; et al. Lung Cancer: Nodule-Focused Computer-Aided Decision Systems. Encyclopedia. Available online: https://encyclopedia.pub/entry/22104 (accessed on 16 February 2025).
Lung Cancer: Nodule-Focused Computer-Aided Decision Systems

Computer-aided decision (CAD) systems centered on the nodule represent the first class of approaches dedicated to lung cancer and, following clinical practice, consider only the imaging findings presented by the nodule region. These systems comprise the detection of all possible nodule candidates and the segmentation of their margins, followed by stratification of the malignancy risk to support clinicians.
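As a rough illustration of this detect-segment-stratify flow, the sketch below uses hypothetical stand-in functions; the brightness and diameter cut-offs are invented for the example and do not come from any published system.

```python
# Minimal sketch (hypothetical stand-ins) of the nodule-focused CAD pipeline:
# candidate detection, margin segmentation, then malignancy risk stratification.

def detect_candidates(scan):
    """Stand-in: return regions that might be nodules."""
    return [region for region in scan if region["brightness"] > 0.5]

def segment_margins(candidate):
    """Stand-in: attach a margin mask to a detected candidate."""
    return {**candidate, "mask": "segmented"}

def stratify_risk(nodule):
    """Stand-in: map imaging findings to a malignancy risk level."""
    return {**nodule, "risk": "high" if nodule["diameter_mm"] > 8 else "low"}

def cad_pipeline(scan):
    """Chain the three stages, as in the clinical proceedings."""
    return [stratify_risk(segment_margins(c)) for c in detect_candidates(scan)]

# Toy "scan" with one bright large finding and one dim small one.
scan = [{"brightness": 0.9, "diameter_mm": 12},
        {"brightness": 0.2, "diameter_mm": 3}]
print(cad_pipeline(scan))  # only the bright finding survives; flagged high risk
```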

Keywords: computer-aided decision; lung nodule analysis

1. Nodule Detection and Segmentation

The automated detection and segmentation of lung nodules are of great significance for the treatment of lung cancer and for increasing patient survival [1]. In clinical settings, radiologists must identify suspicious lung nodules across numerous images, a demanding task in which destabilising factors, such as distraction and fatigue, as well as the limits of professional experience, can contribute to misinterpretation of the available data. Several studies have therefore focused on overcoming these difficulties, helping radiologists make more accurate diagnoses by proposing CAD systems that perform automatic detection and segmentation of lung nodules.

1.1 Nodule Detection

The heterogeneity and high variability of nodule imaging characteristics make this task significantly complex, so lung nodule detection is naturally separated into two sub-modules: (1) multiple candidates are first proposed, and (2) the nodule/non-nodule distinction is then refined. Among DL-based approaches, encoder-decoder architectures are widely used as base methods for initial nodule detection [2][3][4][5][6][7][8][9]. The extraction of hand-crafted statistical, shape, and texture features has also brought valuable information for candidate detection, with candidates further classified by SVMs [10][11] or by ensemble strategies that combine the learning abilities of different classifiers [12]. Other traditional vision algorithms have found success in juxtapleural nodule detection [13]. In this problem, missing a true nodule should be penalized more heavily than raising too many false suspicions; nevertheless, there is a clear effort in the literature to reduce false positives, mostly by combining different classification networks [2][14], using multi-scale patches to capture features at different expression levels [4][5][15][16], employing other classification algorithms, such as SVMs [6][10][11][17][18][19], Bayesian networks, and neuro-fuzzy classifiers [19], or proposing a graph-based image representation with deep point cloud models [20].
On the other hand, single-stage methods have also been explored. Harsono et al. [21], inspired by the success of RetinaNet [22], employed transfer learning to make use of ImageNet [23] pre-trained architectures, building a modified feature pyramid network (FPN) that combines the feature maps obtained at specific dimension levels and outperforms the previous state-of-the-art. A different single-stage approach can be found in [24], where the YOLO v3 architecture was adapted for lung nodule detection for the first time, showing that these small imaging elements can be detected in a single pass.
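The two-stage pipeline described above can be sketched in miniature. The following is an illustrative sketch, not taken from any of the reviewed works: the intensity threshold, the toy 2D "slice", and the size rule standing in for the CNN/SVM refinement step are all hypothetical.

```python
# Two-stage nodule detection sketch: (1) propose candidates as connected
# bright regions, (2) reject false positives with a stand-in refinement rule.

def propose_candidates(image, threshold=0.5):
    """Stage 1: collect connected bright regions as nodule candidates."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    candidates = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                region, stack = [], [(y, x)]
                seen[y][x] = True
                while stack:  # simple flood fill over 4-neighbours
                    cy, cx = stack.pop()
                    region.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and image[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                candidates.append(region)
    return candidates

def reduce_false_positives(candidates, min_size=3):
    """Stage 2: stand-in for the CNN/SVM refinement, here a size rule."""
    return [c for c in candidates if len(c) >= min_size]

# Toy 2D "slice": one real blob plus a single-voxel speck (likely noise).
image = [
    [0.0, 0.9, 0.9, 0.0, 0.0],
    [0.0, 0.9, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.8],
]
cands = propose_candidates(image)
nodules = reduce_false_positives(cands)
print(len(cands), len(nodules))  # stage 1 proposes 2 candidates, stage 2 keeps 1
```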
Table 1 summarizes the different nodule detection systems mentioned above in chronological order.
Table 1. Overview of published works regarding nodule detection approaches in lung CT images (2020–2021).
Authors Year Dataset Methods Performance Results (%)
Tan et al. [2] 2020 LIDC-IDRI 3D CNNs, based on FCN, DenseNet, and U-Net TPR = 97.5
Mukherjee et al. [12] 2020 LIDC-IDRI Ensemble stacking ACC = 99.5
TPR = 99.2
TNR = 98.8
FPR = 1.09
FNR = 0.85
Shi et al. [3] 2020 LUNA16 3D Res-I and U-Net network TPR = 96.4
FROC = 83.7
Khehrah et al. [10] 2020 LIDC-IDRI SVM ACC = 92
TPR = 93.7
TNR = 91.2
PPV = 83.3
MCC = 83.8
Kuo et al. [11] 2020 LIDC-IDRI Private (320 patients) SVM TPR = 92.1
Zheng et al. [4] 2020 LIDC-IDRI 3D multiscale dense CNNs TPR = 94.2 (1.0 FP/scan),
96.0 (2.0 FPs/image)
Paing et al. [13] 2020 LIDC-IDRI Optimized random forest ACC = 93.1
TPR = 94.9
TNR = 91.4
Liu et al. [24] 2020 LIDC-IDRI CNN algorithm: You Only Look Once v3 TPR = 87.3
Harsono et al. [21] 2020 LIDC-IDRI Private (546 patients) I3DR-Net mAP = 49.6 (LIDC),
22.9 (private)
AUC = 81.8 (LIDC),
70.4 (private)
Xu et al. [5] 2020 LUNA16 3D CNN networks: V-Net and multi-level contextual 3D CNNs TPR = 93.1 (1.64 FP/scan)
CPM = 75.7
Drokin and Ericheva [20] 2020 LIDC-IDRI Algorithm for sampling points from a point cloud FROC = 85.9
El-Regaily et al. [14] 2020 LIDC-IDRI Multi-view CNN ACC = 91.0
TPR = 96.0
TNR = 87.3
F-score = 78.7
Ye et al. [6] 2020 LUNA16 Three modified V-Nets with multilevel receptive fields ACC = 66.7
TPR = 81.1
PPV = 78.1
F-score = 78.7
Baker and Ghadi [17] 2020 LIDC-IDRI SVM NRR = 94.5
FPR = 7 cluster/image
Halder et al. [18] 2020 LIDC-IDRI SVM ACC = 88.2
TPR = 86.9
TNR = 86.9
Jain et al. [7] 2020 LUNA16 SumNet ACC = 94.1
TNR = 94.0
DSC = 93.0
Mahersia et al. [19] 2020 LIDC-IDRI SVM, Bayesian back-propagation neuronal classifier and neuro-fuzzy classifier NRR = 97.9
(neuronal classifier),
97.3 (SVM),
94.2 (neuro-fuzzy classifier)
Mittapalli and Thanikaiselvan [15] 2021 LUNA16 Multiscale CNN with Compound Fusions CPM = 94.8
Vipparla et al. [16] 2021 LUNA16 3D Attention-based CNN architectures: MP-ACNN1, MP-ACNN2 and MP-ACNN3 CPM = 93.1
Luo et al. [8] 2021 LUNA16 SCPM-Net TPR = 92.2 (1 FP/image),
93.9 (2 FPs/image),
96.4 (8 FPs/image)
Bhaskar and Ganashree [9] 2021 DSB-2017 Gaussian mixture convolutional auto encoder + 3D deep CNN ACC = 74.0
ACC: accuracy; AUC: area under the ROC curve; CPM: competition performance metric; DSC: Sørensen–Dice coefficient; FDR: false discovery rate; FNR: false negative rate; FP: false positive; FPR: false positive rate; FROC: free-response receiver operating characteristic; mAP: mean average precision; MCC: Matthews correlation coefficient; NPV: negative predictive value; NRR: nodule recognition rate; PPV: positive predictive value; TNR: true negative rate; TPR: true positive rate.
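The count-based metrics abbreviated above map directly onto confusion-matrix counts. As a hedged illustration (the counts below are hypothetical), they can be computed as:

```python
# Detection metrics from confusion-matrix counts (TP, FP, TN, FN).

def detection_metrics(tp, fp, tn, fn):
    return {
        "ACC": (tp + tn) / (tp + fp + tn + fn),  # accuracy
        "TPR": tp / (tp + fn),                   # sensitivity / recall
        "TNR": tn / (tn + fp),                   # specificity
        "PPV": tp / (tp + fp),                   # precision
        "FPR": fp / (fp + tn),                   # false positive rate
        "FNR": fn / (fn + tp),                   # false negative rate
    }

# Hypothetical counts, for illustration only.
m = detection_metrics(tp=90, fp=10, tn=85, fn=15)
print({k: round(v * 100, 1) for k, v in m.items()})
```

FROC and CPM, in contrast, summarize sensitivity at fixed false-positive rates per scan, which is why several table entries report TPR at a stated FPs/scan operating point.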

 

1.2 Nodule Segmentation

Although deep learning approaches have come to dominate nodule segmentation, other learning algorithms are still in use. Machine learning-based approaches applied to this task include hybrid models combining ML classifiers [25], standard level set image segmentation [26], and region growing, which merges regions with similar features [27]. DL methods have shown the capability of outperforming the results of these previous works; U-Net, 3D-UNet, and V-Net are the most commonly applied architectures [28][29][30]. A deep deconvolutional residual network was also proposed for nodule segmentation, using a summation-based long skip connection from the convolutional to the deconvolutional part of the network [31].
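The segmentation approaches above are scored mainly by overlap metrics. As a minimal sketch, the Sørensen–Dice coefficient (DSC) and Jaccard index (JI) can be computed on flattened binary masks as follows; the toy masks are illustrative.

```python
# Overlap metrics for binary segmentation masks (flattened to 1D lists).

def dice(pred, truth):
    """Sørensen–Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def jaccard(pred, truth):
    """Jaccard index: |A∩B| / |A∪B|."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union

pred  = [1, 1, 1, 0, 0, 0]  # predicted nodule mask
truth = [1, 1, 0, 1, 0, 0]  # radiologist annotation
print(dice(pred, truth), jaccard(pred, truth))  # 0.666..., 0.5
```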
All of these methodologies are summarized in Table 2 in chronological order.
Table 2. Overview of the published works regarding nodule segmentation approaches in lung CT images (2020–2021).
Authors Year Dataset Methods Performance Results (%)
Sharma et al. [25] 2020 SPIE-AAPM Lung CT Challenge SVM + k-NN ACC = 93.9
TPR = 94.5
GM = 94.2
Xiao et al. [28] 2020 LUNA16 3D-UNet + Res2Net Neural Network TPR = 99.1
DSC = 95.3
Singadkar et al. [31] 2020 LIDC-IDRI Deep deconvolutional residual network DSC = 95.0
JI = 88.7
Kumar and Raman [29] 2020 LUNA16 V-Net (3D CNN) DSC = 96.1
Rocha et al. [30] 2020 LIDC-IDRI Sliding Band Filter + U-Net + SegU-Net DSC = 66.3 (SBF),
83.0 (U-Net),
82.3 (SegU-Net)
Hancock and Magnan [26] 2021 LIDC-IDRI Level set machine learning method DSC = 83.6
JI = 71.8
Savic et al. [27] 2021 LIDC-IDRI Private—phantom (108 patients) Algorithm based on the fast marching method DSC = 93.3
(solid round nodules),
90.1 (solid irregular nodules),
79.9 (non-solid nodules),
61.4 (cavity nodules)
ACC: accuracy; DSC: Sørensen–Dice coefficient; GM: Geometric mean; JI: Jaccard index; TPR: true positive rate.

2. Nodule Classification

Identification of lung nodule malignancy at an early stage has a positive impact on lung cancer prognosis. There is therefore a need for CAD systems that classify lung nodules as benign or malignant with maximum accuracy, to avoid delays in diagnosis. This section provides an overview of current technology for lung nodule classification, a heavily explored research topic given the disease's high mortality rate.
Science Direct, IEEE Xplore, Web of Science, and PubMed were the databases used during the search for articles pertaining to the classification of pulmonary nodules. The keywords used were “lung”, “nodule”, “classification”, “malignant”, “benign”, “pulmonary”, “tumor”, “cancer”, “CAD”, and “CADe”, with various combinations of logical expressions containing “AND” and “OR.” The articles were filtered according to their relevance, performance results, year of publication, and presence/absence in other reviews. Of the 33 articles selected, one was published in 2021 and the rest in 2020.
In recent years, many deep learning techniques have been used in lung nodule classification and have shown promising results compared to other state-of-the-art machine learning methods. Not surprisingly, then, the most recent review article found, published in 2021 by Naik and Edla, covers 108 research papers, published up to 2019, that proposed novel deep learning methodologies for nodule classification in lung CT scans [32].
The development of CAD systems for lung nodule malignancy has focused on a binary analysis, which essentially amounts to finding imaging characteristics of value for distinguishing benign from malignant nodules. Although more complexity could be extracted from this problem (e.g., the assessment of more detailed malignancy levels), the literature has remained focused on this two-class formulation.
Regarding nodule feature extraction, CNNs have become the standard approach in this field, either as single-network approaches [33][34][35][36][37][38][39][40][41][42][43] or in ensemble strategies that combine multiple models [44][45][46][47][48][49]. A combination of local, nodule-specific features with more global information is captured by processing the input image at different scales [50][51][52][53][54][55], bringing together features from different levels of analysis. Regarding training techniques, the use of ImageNet [23] pre-trained architectures, as in [35][37][56][57], was shown to improve predictive ability. Multi-task learning strategies, which take advantage of related tasks to enhance the extraction of relevant information, have also proven valuable: features captured by generative models while discriminating between real and fake lung nodules [58][59], as well as knowledge obtained by learning to reconstruct nodule images [44][48][60].
Although most of the literature on this topic relies on end-to-end neural network-based methodologies, algorithms such as SVM, XGBoost, and KNN have also been employed, serving as classifiers for previously extracted deep features [41][44], combined multimodal features [36], or hand-crafted features such as nodule texture, intensity, and shape [61].
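As a hedged sketch of this classical pipeline, the example below extracts simple hand-crafted intensity and size descriptors and classifies them with a nearest-centroid rule, a deliberately simple stand-in for the SVM/XGBoost/KNN classifiers cited above; the features, centroids, and sample values are all invented for illustration.

```python
# Hand-crafted features + separate classifier, sketched with stand-in values.
import math

def extract_features(intensities):
    """Simple descriptors of a candidate's voxel intensities and size."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((v - mean) ** 2 for v in intensities) / n
    return [mean, math.sqrt(var), max(intensities), float(n)]

def nearest_centroid(feat, centroids):
    """Assign the label of the closest class centroid in feature space."""
    return min(centroids, key=lambda label: math.dist(feat, centroids[label]))

# Hypothetical class centroids, standing in for a trained classifier.
centroids = {
    "benign":    [0.3, 0.05, 0.40, 5.0],
    "malignant": [0.7, 0.20, 0.95, 20.0],
}
sample = extract_features([0.72, 0.8, 0.65] * 6)  # 18 bright voxels
print(nearest_centroid(sample, centroids))
```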
Table 3 summarizes the detailed nodule classification works with the reported best performance in chronological order.
Table 3. Overview of published works regarding nodule classification approaches in lung CT images (2020–2021).
Authors Year Dataset Methods Performance Results (%)
Wang et al. [33] 2020 Private (1478 patients) Adaptive-boost deep learning strategy with multiple 3D CNN-based weak classifiers ACC = 73.4
TPR = 70.5
TNR = 76.2
PPV = 83.8
AUC = 82.0
F-score = 71.6
Xiao et al. [44] 2020 LIDC-IDRI ResNet-18 + Denoising autoencoder classifier + handcrafted features ACC = 93.1
TPR = 81.7
PPV = 83.8
AUC = 82.0
Wang et al. [51] 2020 LUNGx ConvNet ACC = 90.4
TPR = 88.7
TNR = 92.4
AUC = 94.8
Lin et al. [34] 2020 LUNA16 GVGG + ResCon network TPR = 92.5
TNR = 96.8
PPV = 93.6
F-score = 93.0
Onishi et al. [58] 2020 Private (60 patients) M-Scale 3D CNN TPR = 90.9
TNR = 74.1
Zhao et al. [50] 2020 LIDC-IDRI Multi-stream multi-task network ACC = 93.9
TPR = 92.6
TNR = 96.2
AUC = 97.9
Zia et al. [56] 2020 LIDC-IDRI Multi-deep model ACC = 90.7
TPR = 90.7
TNR = 90.8
Jiang et al. [45] 2020 LUNA16 Ensemble of 3D Dual Path Networks ACC = 90.2
TPR = 92.0
FPR = 11.1
F-score = 90.4
Bao et al. [55] 2020 LIDC-IDRI Global-local residual network ACC = 90.4
TPR = 90.1
PPV = 89.9
AUC = 96.1
Shah et al. [35] 2020 LUNA16 NoduleNet (transfer learning from VGG16 and VGG19 models) ACC = 95.0
TPR = 84.0
TNR = 97.0
Tong et al. [36] 2020 LIDC-IDRI 3D-ResNet + SVM with RBF and polynomial kernels ACC = 90.6
TPR = 87.5
TNR = 94.1
Xu et al. [52] 2020 LIDC-IDRI Multi-scale cost-sensitive methods ACC = 92.6
TPR = 85.6
TNR = 95.9
PPV = 90.4
AUC = 94.0
F-score = 87.9
Huang et al. [37] 2020 LIDC-IDRI Deep transfer convolutional neural network + Extreme learning machine ACC = 94.6
TPR = 93.7
TNR = 95.1
AUC = 94.9
Naik et al. [46] 2020 LUNA16 FractalNet + CNN ACC = 94.1
TPR = 97.5
TNR = 86.8
AUC = 98.0
Zhang et al. [42] 2020 LUNA16 3D squeeze-and-excitation network and aggregated residual transformations ACC = 91.7
AUC = 95.6
Liu et al. [47] 2020 LIDC-IDRI Multi-model ensemble learning architecture based on 3D CNNs: VggNet, ResNet, and InceptionNet ACC = 90.6
TPR = 83.7
TNR = 93.9
AUC = 93.0
Afshar et al. [53] 2020 LIDC-IDRI 3D Multi-scale Capsule Network ACC = 93.1
TPR = 94.9
TNR = 90.0
AUC = 96.4
Lyu et al. [38] 2020 LIDC-IDRI Multi-level cross ResNet ACC = 92.2
TPR = 92.1
TNR = 91.5
AUC = 97.1
Wu et al. [39] 2020 LIDC-IDRI Deep residual network (ResNet + residual learning + migration learning) ACC = 98.2
TPR = 97.7
TNR = 98.3
PPV = 98.5
F-score = 98.1
FPR = 1.60
Lin and Li [40] 2020 LIDC-IDRI Taguchi-based AlexNet CNN ACC = 99.6
Kuang et al. [59] 2020 LIDC-IDRI Combination of a multi-discriminator generative adversarial network and an encoder ACC = 95.3
TPR = 94.1
TNR = 90.8
AUC = 94.3
Lima et al. [61] 2020 LIDC-IDRI SVM with Gaussian kernel + Relief + Evolutionary Genetic Algorithm AUC = 85.6
Veasey et al. [57] 2020 NLST Recurrent neural network with 2D CNN PPV = 55.9 (t0), 66.9 (t1)
AUC = 80.6 (t0), 83.5 (t1)
Bansal et al. [41] 2020 LUNA16 Deep3DSCan TPR = 87.1
TNR = 89.7
AUC = 88.3
F-score = 88.5
Zhai et al. [48] 2020 LUNA16 LIDC-IDRI Multi-task learning CNN TPR = 84.0 (LUNA16),
95.6 (LIDC-IDRI)
TNR = 96.8 (LUNA16),
88.9 (LIDC-IDRI)
AUC = 97.3 (LUNA16),
95.6 (LIDC-IDRI)
Paul et al. [49] 2020 NLST Ensemble of CNNs ACC = 90.3
AUC = 96.0
TPR = 73.0
FNR = 27.0
Ali et al. [43] 2020 LIDC-IDRI LUNGx Transferable texture CNN ACC = 96.6 (LIDC-IDRI),
90.9 (LUNGx)
TPR = 96.1 (LIDC-IDRI),
91.4 (LUNGx)
TNR = 97.4 (LIDC-IDRI),
90.5 (LUNGx)
AUC = 99.1 (LIDC-IDRI),
94.1 (LUNGx)
Silva et al. [60] 2020 LIDC-IDRI Transfer learning (convolutional autoencoder) AUC = 93.6
PPV = 79.4
TPR = 84.8
F-score = 81.7
Xia et al. [54] 2021 LIDC-IDRI Gradient boosting machine algorithm ACC = 91.9
TPR = 91.3
F-score = 91.0
FPR = 8.00
ACC: accuracy; AUC: area under the ROC curve; FNR: false negative rate; FPR: false positive rate; PPV: positive predictive value; TNR: true negative rate; TPR: true positive rate.

3. Interpretability Methods for Nodule-Focused CADs

As mentioned in the previous section, nodule classification models based on deep learning (DL) algorithms achieve the highest performances. However, DL models are considered the least interpretable machine learning models due to their inherent mathematical complexity; they provide no reasoning for a prediction, which consequently decreases trust in these models [62]. When such black-box models are used in the medical domain, it is critical that the systems be trustworthy and reliable for clinicians, raising the need to make these approaches more transparent and understandable to humans [63].
Explainable AI (XAI) refers to techniques or methods that aim to find a connection between the input features and the prediction of the black-box, thereby justifying the decision and its reliability. Perceptive interpretability covers XAI methods that focus on generating interpretations easily perceived by humans, even though they do not actually 'unblackbox' the algorithm [64]. Visual explanations are the most commonly used XAI methodologies in deep learning image analysis [65], namely in radiology image-based predictive models, where trust in a CAD system can increase substantially when the areas of a medical image that contributed most to the prediction are presented along with the prediction itself [62].
A large portion of the most widely used XAI methods in the medical domain are post-hoc methods, i.e., methods external to the already trained predictive model that evaluate its predictions without altering the model itself. These off-the-shelf, model-agnostic methods can be found in libraries such as PyTorch Captum [66]. This post-model approach was implemented by Knapič et al. [63], where two popular post-hoc methods, local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP), were compared in terms of human understandability on the same medical image dataset.
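In the same perturbation-based spirit as these post-hoc methods (this is not Captum's actual API, nor the method of [63]), occlusion analysis can be sketched as follows: each input region is replaced by a baseline value and the drop in the black-box score is recorded as its importance. The toy one-dimensional "image" and lambda model are hypothetical stand-ins for a trained nodule classifier.

```python
# Model-agnostic occlusion sketch: importance = score drop when a region
# of the input is replaced by a baseline value.

def occlusion_importance(image, model, baseline=0.0):
    """Return per-position importance of a flat input to a black-box model."""
    full_score = model(image)
    importance = []
    for i in range(len(image)):
        occluded = list(image)      # copy the input
        occluded[i] = baseline      # blank out one position
        importance.append(full_score - model(occluded))
    return importance

# Toy black-box: responds only to the middle "pixels" (e.g., the nodule core).
model = lambda img: img[1] + img[2]

imp = occlusion_importance([0.2, 0.9, 0.8, 0.1], model)
print(imp)  # only positions 1 and 2 matter to this model
```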
Furthermore, in-model XAI methods for lung nodule classification were also implemented by Li et al. [67], where an importance estimation network returns a diagnostic visual interpretation that the classifier uses to destroy irrelevant features in each pooling layer. In the developed model, only the essential features are preserved in the visual interpretation, and optimization is achieved through a trade-off between the model's accuracy and the amount of information used in the classification. In Jiang et al. [68], a convolutional block attention module (CBAM) was used to build a partially explainable classification model for pulmonary nodules, making it possible to relate the features of the input images to symptom descriptions and to infer that the rationale of the network shows some correlation with the physicians' diagnoses.
Concern for interpretability is increasing, especially in the medical field, where the deployed CAD systems carry higher stakes and responsibilities. Research on interpretable models is still in progress, despite the recent rise in development of this approach. The increase in research on interpretable CAD systems is already noticeable, mainly regarding the verification and explanation of the predicted diagnosis rather than the unravelling of the black-box [64]. These methods show future potential, not only for providing trustworthy explanations to physicians, but also for assuring the reliability and consistency of the developed models.

4. Discussion and Future Work: Nodule Detection, Segmentation, and Classification

In the current methods, direct comparison of research results is hampered by heterogeneity in the selection of included scans, different algorithm parameters, and inconsistent use of performance metrics and evaluation protocols. Overall, the selected works have shown good capabilities in the detection, segmentation, and classification of pulmonary nodules in CT images. Classical machine learning techniques achieved satisfactory performance, while deep learning, especially CNNs, outperformed conventional models and emerged as the most promising approach. The main advantages of CNNs lie in their ability to learn directly from a variety of data sources and to automatically generate relevant, possibly unknown, features, allowing prompt and efficient development of CAD systems. The major challenge is achieving robustness to diverse clinical data of varying quality. Although the availability of heterogeneous private datasets has been shown to improve model performance, it limits the comparability and generalization of results. Furthermore, to ensure robustness, the proposed methods need to be validated on sufficiently large datasets that include all nodule types and sizes; methods evaluated on fewer nodules will likely lose accuracy under clinical conditions where nodule types are more varied. Another challenge is the discrepancy, or variability, between manual annotations: for image-based annotations, such as detection, segmentation, and classification, this variability may impose a ceiling on the performance of AI-based methods. In addition, feature extraction is an important step in differentiating nodules from other anatomic structures present in the lung lobes, yet the optimal set of features for nodule detection remains a subject of debate.
Moreover, although deep-learning technologies avoid handcrafting and selecting image features, they instead require the selection of a loss function, a network architecture, and an efficient optimization method, all of which influence the learning process. Additionally, the images used for training and testing nodule analysis algorithms may have excluded pathological conditions other than lung nodules. Incorporating day-to-day chest CT images from multiple centers and dealing with such real-life situations remain challenges, and are reasons why manual correction and interaction are still necessary to help physicians read the images.

4.1 Improvements Needed

To improve CADe and further develop its contribution to lung cancer treatment, some areas need to be explored:
  • Large and diverse public lung nodule databases for algorithm evaluation, allowing replication of reported results and more stringent testing, so that lung nodule analysis tools can be validated under conditions that mimic real clinical scenarios.
  • The ability to deal with pulmonary nodules based on location (isolated, juxtapleural, or juxta-vascular) and internal texture (solid, semi-solid, ground-glass opacity, and non-solid). In particular, the detection of ground-glass opacity and non-solid nodules is difficult and has been explored by very few researchers.
  • The ability to deal with pulmonary nodules with extremely small diameters. Most early-stage malignant tumors are smaller in size, and if these tumors are detected at an early stage, the survival chance of the individual can be increased.
  • The ability to classify nodules not only as benign or malignant, but as benign, early-stage cancerous, primary malignant, or metastatic malignant, reducing the level of abstraction relative to the clinical phenomena that must be considered.
  • Develop a system capable of segmenting out large solid nodules attached to the pleural wall, which is quite challenging.
  • Build a set of useful and efficient features based mainly on shape or geometry, intensity, and texture for better false-positive reduction.
  • Develop a new CAD system based on powerful feature-map visualization techniques to better analyze the CNN's decisions and convey them to radiologists.
  • Fine-tune a pre-trained CNN model instead of training it from scratch to increase its robustness and surpass the limitation of annotated medical data.
  • Develop in-depth research on GAN models, which can mitigate the lack of medical image databases.
  • Design new CAD systems, including two or more CNN architectures to address the problem of overfitting that occurs during the training process due to imbalance in the datasets.
  • Develop new deep learning techniques, or optimize existing ones, to improve the performance of the CADe system: for example, using a contracting path (to capture context) and a symmetric expanding path (to enable precise localization) to strengthen the use of available annotated samples, or training multilayer networks efficiently via residual learning to gain accuracy from considerably increased depth.
  • Promote cooperation and communication between academic institutions and medical organizations to combine real clinical requirements and the latest scientific achievements.

References

  1. Zhang, G.; Jiang, S.; Yang, Z.; Gong, L.; Ma, X.; Zhou, Z.; Bao, C.; Liu, Q. Automatic nodule detection for lung cancer in CT images: A review. Comput. Biol. Med. 2018, 103, 287–300.
  2. Tan, M.; Wu, F.; Yang, B.; Ma, J.; Kong, D.; Chen, Z.; Long, D. Pulmonary nodule detection using hybrid two-stage 3D CNNs. Med. Phys. 2020, 47, 3376–3388.
  3. Shi, L.; Ma, H.; Zhang, J. Automatic detection of pulmonary nodules in CT images based on 3D Res-I network. Vis. Comput. 2020, 10, 1917–1929.
  4. Zheng, S.; Cornelissen, L.J.; Cui, X.; Jing, X.; Veldhuis, R.N.; Oudkerk, M.; van Ooijen, P.M. Deep convolutional neural networks for multi-planar lung nodule detection: Improvement in small nodule identification. Med. Phys. 2020, 48, 733–744.
  5. Xu, Y.M.; Zhang, T.; Xu, H.; Qi, L.; Zhang, W.; Zhang, Y.D.; Gao, D.S.; Yuan, M.; Yu, T.F. Deep learning in CT images: Automated pulmonary nodule detection for subsequent management using convolutional neural network. Cancer Manag. Res. 2020, 12, 2979.
  6. Ye, Y.; Tian, M.; Liu, Q.; Tai, H.M. Pulmonary Nodule Detection Using V-Net and High-Level Descriptor Based SVM Classifier. IEEE Access 2020, 8, 176033–176041.
  7. Jain, P.; Shivwanshi, R.R.; Nirala, N.; Gupta, S. SumNet Convolution Neural network based Automated pulmonary nodule detection system. In Proceedings of the 2020 IEEE International Conference on Advent Trends in Multidisciplinary Research and Innovation (ICATMRI), Buldhana, India, 30 December 2020; pp. 1–6.
  8. Luo, X.; Song, T.; Wang, G.; Chen, J.; Chen, Y.; Li, K.; Metaxas, D.N.; Zhang, S. SCPM-Net: An Anchor-free 3D Lung Nodule Detection Network using Sphere Representation and Center Points Matching. arXiv 2021, arXiv:2104.05215.
  9. Bhaskar, N.; Ganashree, T. Lung Nodule Detection from CT scans using Gaussian Mixture Convolutional AutoEncoder and Convolutional Neural Network. Ann. Rom. Soc. Cell Biol. 2021, 25, 6524–6531.
  10. Khehrah, N.; Farid, M.S.; Bilal, S.; Khan, M.H. Lung Nodule Detection in CT Images Using Statistical and Shape-Based Features. J. Imaging 2020, 6, 6.
  11. Kuo, C.F.J.; Huang, C.C.; Siao, J.J.; Hsieh, C.W.; Huy, V.Q.; Ko, K.H.; Hsu, H.H. Automatic lung nodule detection system using image processing techniques in computed tomography. Biomed. Signal Process. Control 2020, 56, 101659.
  12. Mukherjee, J.; Kar, M.; Chakrabarti, A.; Das, S. A soft-computing based approach towards automatic detection of pulmonary nodule. Biocybern. Biomed. Eng. 2020, 40, 1036–1051.
  13. Paing, M.P.; Hamamoto, K.; Tungjitkusolmun, S.; Visitsattapongse, S.; Pintavirooj, C. Automatic detection of pulmonary nodules using three-dimensional chain coding and optimized random forest. Appl. Sci. 2020, 10, 2346.
  14. El-Regaily, S.A.; Salem, M.A.M.; Aziz, M.H.A.; Roushdy, M.I. Multi-view Convolutional Neural Network for lung nodule false positive reduction. Expert Syst. Appl. 2020, 162, 113017.
  15. Mittapalli, P.S.; Thanikaiselvan, V. Multiscale CNN with compound fusions for false positive reduction in lung nodule detection. Artif. Intell. Med. 2021, 113, 102017.
  16. Vipparla, V.K.; Chilukuri, P.K.; Kande, G.B. Attention Based Multi-Patched 3D-CNNs with Hybrid Fusion Architecture for Reducing False Positives during Lung Nodule Detection. J. Comput. Commun. 2021, 9, 1.
  17. Baker, A.A.A.; Ghadi, Y. A novel CAD system to automatically detect cancerous lung nodules using wavelet transform and SVM. Int. J. Electr. Comput. Eng. 2020, 10, 4745.