Deep Learning-Based Diagnosis of Chest Diseases: History

Chest disease refers to a variety of lung disorders, including lung cancer (LC), COVID-19, pneumonia (PNEU), tuberculosis (TB), and numerous other respiratory conditions. The symptoms of these chest diseases (e.g., fever, cough, and sore throat) overlap considerably, which can mislead radiologists and health experts when classifying them. Chest X-rays (CXR), cough sounds, and computed tomography (CT) scans are used by researchers and clinicians to identify chest diseases such as LC, COVID-19, PNEU, and TB.

  • X-rays
  • deep learning
  • CT scans
  • cough sound
  • COVID-19
  • lung cancer
  • pneumonia

1. Introduction

Communicable (transmissible) diseases are those that can be passed from one person to another, or from animals and insects to humans [1]. They are caused by a wide variety of infectious agents, including viruses, bacteria, and fungi. The symptoms they produce, however, can differ considerably depending on the causative organism [2]. Most infections do not pose a significant risk to life, but some do. COVID-19, a life-threatening condition, is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It was first identified in December 2019 in the city of Wuhan, China [1][2][3]. Because the disease spreads rapidly and easily from one individual to another, it led to a pandemic. A healthy individual can contract COVID-19 by inhaling virus-laden aerosols or droplets, for example those released when an infected person coughs, sneezes, or speaks, or through direct contact with an infected person [2]. If a patient is diagnosed with the illness, they should self-isolate as soon as possible to prevent the virus from spreading further. The most common symptoms of COVID-19 are coughing, fever, loss of smell, loss of taste, and difficulty breathing. Because the virus spreads from an infected person to those in close contact [4][5], early detection of infected individuals is essential so that they can isolate themselves and receive appropriate treatment for a quick recovery.
Two broad types of tests are used to identify a COVID-19-infected person: antigen tests, which can detect a current infection, and antibody tests, which detect antibodies in the blood of a person who was previously infected [6]. Testing for active infection most commonly relies on the reverse transcription polymerase chain reaction (RT-PCR), so these assays are widely referred to as PCR tests [7]. To carry out an RT-PCR test, RNA is extracted from a nasal or pharyngeal swab collected as a clinical specimen [8]. The process may take several hours, by which time the virus may have infected a significant number of previously unaffected people [9][10]. In addition, PCR testing requires expensive laboratory equipment and trained personnel. Moreover, the sensitivity of the RT-PCR test for detecting COVID-19 is limited, so it may produce a large number of false negatives, and a patient wrongly classified as negative can infect many others simply through close contact [11]. A diagnostic system that is more reliable, produces fewer false negatives, and can detect COVID-19 at an early stage of infection is therefore needed to reduce the likelihood of further spread [12]. Chest radiography imaging may be an alternative that addresses this issue and accelerates the identification procedure [3], as respiratory symptoms are among the earliest signs of COVID-19. Both chest computed tomography (CT) scans and chest X-rays (CXRs) provide precise views of the chest's soft tissues, bones, blood vessels, and internal organs, which is an advantage for detecting COVID-19 [6]. Cough sounds have also been used to identify chest diseases [8][9][10][11][12][13][14]. Distinctive features that can be seen on the chest CT scan of a person infected with COVID-19 include a peripheral distribution, fine reticular opacity, ground-glass opacities (GGOs), diffuse distributions, bilateral involvement, and vascular thickening [7]. During the screening phase, both CT and CXR have demonstrated high detection sensitivity for COVID-19 [8][9]. On the other hand, radiologists may experience visual fatigue, which can cause small lesions to be missed [10][11][12]. For these reasons, computer-aided diagnosis based on artificial intelligence (AI) is needed for COVID-19 and other chest diseases.

2. Diagnosis of Chest Diseases using DL Models

A significant number of studies on the diagnosis of chest diseases have been carried out to help medical experts identify disease at an early stage. More recently, studies have concentrated on developing AI techniques that can automate the detection of various kinds of chest diseases. The most recent studies on the diagnosis of chest diseases using DL models are summarized in Table 1.

2.1. Deep Learning Models for Chest Disease Classification Using Chest X-rays and CT Scans

Iqbal et al. [13] introduced TBXNet, a simple and highly effective DL network that classifies a very large number of TB images from CXRs. Pre-trained weights are transferred to the network's fusion layer through a pre-trained layer. The proposed TBXNet achieved an accuracy of 98.98% on Dataset A and 99.17% on Dataset B. Its generalizability was validated on Dataset C, which consisted of normal, TB, PNEU, and COVID-19 CXR images, where it obtained 95.10% accuracy. Using chest X-ray images, Kumar et al. [14] employed an ensemble model able to identify COVID-19 at an early stage of the disease. The ensemble combined three transfer learning models: GoogLeNet, EfficientNet, and XceptionNet. Patients were categorized as having COVID-19, PNEU, or TB or as being healthy. The proposed model improved the generalization capacity of the classifier on both binary and multi-class COVID-19 datasets, and its effectiveness was assessed on two well-known datasets.
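As an illustration of the ensemble idea described above, the following is a minimal sketch, not Kumar et al.'s published implementation, of soft voting over three ImageNet-pretrained backbones in Keras. InceptionV3 stands in for GoogLeNet (both belong to the Inception family), and the class list, input size, optimizer, and layer-freezing strategy are assumptions made for illustration only.

```python
# Hedged sketch: averaging the softmax outputs of three ImageNet-pretrained
# backbones for a four-class chest X-ray problem (COVID-19, PNEU, TB, healthy).
# Input size, class count, and training schedule are illustrative assumptions.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, EfficientNetB0, Xception

NUM_CLASSES = 4
INPUT_SHAPE = (224, 224, 3)

def build_branch(backbone_cls, name):
    """Wrap a frozen pretrained backbone with a small classification head."""
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=INPUT_SHAPE, pooling="avg")
    backbone.trainable = False  # transfer learning: freeze the convolutional base
    inputs = layers.Input(shape=INPUT_SHAPE)
    x = backbone(inputs, training=False)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return Model(inputs, outputs, name=name)

branches = [
    build_branch(InceptionV3, "inception_branch"),   # Inception-family stand-in for GoogLeNet
    build_branch(EfficientNetB0, "efficientnet_branch"),
    build_branch(Xception, "xception_branch"),
]

# Simple soft-voting ensemble: average the per-branch class probabilities.
# NOTE: each backbone normally has its own preprocess_input step, omitted here.
ens_in = layers.Input(shape=INPUT_SHAPE)
ens_out = layers.Average()([b(ens_in) for b in branches])
ensemble = Model(ens_in, ens_out, name="chest_xray_ensemble")
ensemble.compile(optimizer="adam", loss="categorical_crossentropy",
                 metrics=["accuracy"])
```

In practice, each branch would typically be trained (or fine-tuned) on the labeled CXR data before the averaged predictions are evaluated.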
Huy et al. [15] used the CBAMWDnet model to identify TB in chest X-ray images. The model combines the convolutional block attention module (CBAM) with a wide dense net (WDnet) structure, both intended to capture visual and contextual elements within images effectively. The proposed model outperformed the other models, achieving an accuracy of 98.80%. Al-Waisy et al. [16] developed the COVID-CheXNet system, which uses a hybrid DL architecture to detect COVID-19 in chest X-ray images. First, the contrast of the X-ray image was enhanced using the contrast-limited adaptive histogram equalization (CLAHE) method, and the noise level was reduced using a Butterworth bandpass filter. Two discriminative DL models, ResNet-34 and HRNet, were then trained on the pre-processed CXR images to strengthen the model's generalization ability and prevent overfitting. The efficacy of the COVID-CheXNet system was evaluated on a large-scale X-ray image dataset called the COVID-19 vs. normal database.
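The two pre-processing steps mentioned above, CLAHE contrast enhancement followed by Butterworth bandpass filtering for noise suppression, can be sketched as follows. This is a generic illustration under assumed parameter values (clip limit, tile size, cut-off radii, filter order, and file path), not the COVID-CheXNet pipeline itself.

```python
# Hedged sketch of CLAHE enhancement plus a Butterworth band-pass filter applied
# in the Fourier domain. All parameter values are illustrative assumptions.
import cv2
import numpy as np

def clahe_enhance(gray, clip_limit=2.0, tile_grid=(8, 8)):
    """Contrast-limited adaptive histogram equalization on an 8-bit image."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray)

def butterworth_bandpass(gray, d_low=5, d_high=80, order=2):
    """Frequency-domain Butterworth band-pass filter for noise suppression."""
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)           # distance from centre
    low_pass = 1.0 / (1.0 + (D / d_high) ** (2 * order))     # suppress high-frequency noise
    high_pass = 1.0 - 1.0 / (1.0 + (D / d_low) ** (2 * order))  # suppress slow illumination drift
    H = low_pass * high_pass                                  # band-pass response
    F = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return cv2.normalize(filtered, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

xray = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file path
preprocessed = butterworth_bandpass(clahe_enhance(xray))
```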
Malik et al. [17] developed and evaluated a multi-class DL strategy for automatically recognizing LC, PNEUTH, COVID-19, TB, and PNEU from CXR images. Their CNN model, CDC_Net, which uses a residual network and dilated convolutions, was applied to identify COVID-19 and other conditions affecting the respiratory system. When recognizing these chest disorders, CDC_Net achieved an AUC of 0.9953, with an accuracy of 99.39%, a precision of 99.4%, and a recall of 98.13%.
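A minimal sketch of the two ingredients attributed to CDC_Net above, residual connections and dilated convolutions, is shown below in Keras; the filter counts and dilation rate are illustrative assumptions rather than the published architecture.

```python
# Hedged sketch of a residual block built from 3x3 dilated convolutions.
from tensorflow.keras import layers

def dilated_residual_block(x, filters=64, dilation=2):
    """Two dilated convolutions with a skip (residual) connection."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(y)
    y = layers.BatchNormalization()(y)
    if shortcut.shape[-1] != filters:            # match channel count if needed
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])              # residual connection
    return layers.ReLU()(y)

# Example usage on a single-channel CXR input (size assumed).
inputs = layers.Input(shape=(224, 224, 1))
features = dilated_residual_block(layers.Conv2D(64, 3, padding="same")(inputs))
```

Dilated convolutions enlarge the receptive field without extra pooling, while the skip connection eases gradient flow in deeper stacks.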
Shelke et al. [18] proposed a classification approach that analyzes CXRs and supports the precise identification of COVID-19. The CXR images were divided into four groups: normal, TB, PNEU, and COVID-19. VGG-16 was used to categorize PNEU, TB, and normal cases, with a test accuracy of 95.9%; DenseNet-161 was used to differentiate between normal, PNEU, and COVID-19, with a test accuracy of 98.9%; and ResNet-18 performed well in severity categorization, with a test accuracy of up to 75%. Because the method relies on X-rays, which are widely available, as its key testing component for COVID-19, it enables the screening of large populations.
Using CXRs as their primary data source, Ali et al. [19] developed a 19-layer CNN model to detect chest infections. The developed model was then reapplied via transfer learning to identify various kinds of chest infections, including COVID-19, fibrosis, PNEU, and TB. The model was optimized using stochastic gradient descent with momentum. The proposed multi-phase framework achieved a classification accuracy of 98.85% on online CXR datasets for detecting chest infections, and its accuracy was further confirmed on an additional dataset, where it reached 98.5%.
Constantinou et al. [20] identified COVID-19 using DenseNet-121, DenseNet-169, ResNet-50, ResNet-101, and Inception-V3 with transfer learning. The most extensive publicly available archive of COVID-19 CXR images was used to develop and validate all of the models. It contained 11,956 images of patients confirmed to have COVID-19, 11,263 images of patients with viral or bacterial pneumonia, and 10,701 images of healthy individuals. The ResNet-101 model had the best overall performance, achieving 96% accuracy, precision, and recall; the remaining models also performed satisfactorily.
Agrawal et al. [21] focused on identifying COVID-19 from CXR images by exploring both binary classification (COVID-19 vs. non-COVID-19) and multi-class classification (COVID-19, non-COVID-19, and PNEU). The dataset comprised 125 CXR images of COVID-19, 500 CXR images with no findings, and 500 CXR images of pneumonia. They tested a variety of DL models, including VGG19, InceptionV3, ResNet50, MobileNetV2, DenseNet121, and Xception, as well as specialized models such as DarkCOVIDNet and COVID-Net, and found that ResNet50 performed best. To classify COVID-19, non-COVID-19, bacterial PNEU, viral PNEU, and normal CXR images obtained from a variety of publicly accessible sources, Ibrahim et al. [22] proposed a DL technique based on a pretrained AlexNet model. The model's accuracy was 93.42%, its sensitivity was 89.1%, and its specificity was 98.92%.
Ayalew et al. [23] introduced a reliable approach for classifying CXR images as normal or COVID-19. The model was built with a CNN in Keras, incorporating dropout, batch normalization, and appropriate activation functions. Images were then classified into the predefined classes (normal vs. COVID-19) using the features learned by the CNN together with an SVM. The findings show that each of the models produced favorable results, with image segmentation, augmentation, and cropping yielding the best outcome, a test accuracy of 99.8%.
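The CNN-plus-SVM stage described above can be sketched as follows, assuming a generic pretrained backbone as the feature extractor and placeholder arrays in place of the real CXR data; this is not Ayalew et al.'s exact model.

```python
# Hedged sketch: deep features from a pretrained Keras backbone feed an SVM for
# the binary normal-vs-COVID-19 decision. The backbone choice, image size, and
# the arrays X_images / y_labels are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

feature_extractor = ResNet50(include_top=False, weights="imagenet",
                             pooling="avg", input_shape=(224, 224, 3))

# Placeholder data: (N, 224, 224, 3) images and labels (0 = normal, 1 = COVID-19).
X_images = np.random.rand(32, 224, 224, 3) * 255.0
y_labels = np.random.randint(0, 2, size=32)

deep_features = feature_extractor.predict(preprocess_input(X_images), verbose=0)

X_train, X_test, y_train, y_test = train_test_split(
    deep_features, y_labels, test_size=0.2, random_state=42, stratify=y_labels)

svm = SVC(kernel="rbf", C=1.0)          # hyperparameters are assumptions
svm.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```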
Jennifer et al. [24] evaluated several models, including ResNet-50, VGG-16, and XGBoost, for COVID-19 classification using a neutrosophic set approach and achieved a classification accuracy of 97.33%. Jaszcz et al. [25] proposed a heuristic red fox optimization algorithm (RFOA) for medical image segmentation; their model achieved an accuracy of 97.20% and a Jaccard index of 94.35%. Karthik et al. [26] focused primarily on the most recent advances in image-based COVID-19 detection methods involving classification and segmentation. Hu et al. [27] created an edge supervised module (ESM) that uses edge-supervised information in the first downsampling stage to emphasize low-level boundary features. Mask-supervised information is integrated in the following step, where an auxiliary semantic supervised module (ASSM) improves the quality of high-level semantic information. The semantic gaps between high-level and low-level feature maps are then reduced by an attention fusion module (AFM) that fuses feature maps from different scales and levels. Their findings demonstrate that the three proposed modules raised the Dice metric by 1.12%. Li et al. [28] created a prior knowledge-based algorithm for assessing the severity of COVID-19 from CT scan images, achieving an accuracy of 86.70%.
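For reference, the Dice metric cited above compares a predicted segmentation mask with the ground-truth mask; a minimal NumPy sketch is given below.

```python
# Minimal sketch of the Dice similarity coefficient for binary lesion masks.
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of equal shape."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Example: the predicted mask covers 4 pixels, the ground truth 3, overlapping on 3.
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1
true = np.zeros((4, 4)); true[1:3, 1:2] = 1; true[1, 2] = 1
print(round(dice_coefficient(pred, true), 3))   # 2*3 / (4 + 3) ≈ 0.857
```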

2.2. Deep Learning Models for Chest Disease Classification Using Cough Sounds

Pahar et al. [29] introduced an automated DL-based cough classifier able to differentiate between TB, COVID-19, and healthy cough sounds. The cough recordings were captured via smartphones by people located all over the world, in a variety of indoor and outdoor settings, and consequently contained varied degrees of background noise. CNN, LSTM, and ResNet-50 models were trained and evaluated using 1.68 h of TB cough sounds, 1.69 h of healthy cough sounds, and 18.54 min of COVID-19 cough sounds from 47 patients with TB, 1498 healthy participants, and 229 patients with COVID-19, respectively. Kim et al. [30] proposed MFCC, Δ-MFCC, Δ²-MFCC, and wavelength contrast as a feature set for identifying COVID-19 and implemented it in an algorithm combining a DNN and ResNet-50. The cough sound data used in their research came from the Coswara, Cambridge, and COUGHVID crowdsourcing databases. After development of both the ResNet-50 and DNN models, the respective values for accuracy, sensitivity, and specificity were 0.96, 0.95, and 0.96. Using this approach, an Android application for COVID-19 testing was created so that it could be used by a large number of individuals.
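A minimal sketch of MFCC-based acoustic features of the kind discussed above (MFCCs plus their first- and second-order deltas) follows, using librosa; the file path, sampling rate, number of coefficients, and pooling scheme are assumptions, and the feature set is not identical to that of any single study cited here.

```python
# Hedged sketch: MFCC, Δ-MFCC, and Δ²-MFCC features extracted from a cough
# recording and summarized into a fixed-length vector for a classifier.
import librosa
import numpy as np

audio, sr = librosa.load("cough_sample.wav", sr=16000)    # hypothetical file

mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)    # shape (13, frames)
delta = librosa.feature.delta(mfcc)                       # Δ-MFCC
delta2 = librosa.feature.delta(mfcc, order=2)             # Δ²-MFCC

# Stack into a (39, frames) matrix, then pool with mean and standard deviation
# to obtain one fixed-length vector per recording.
features = np.vstack([mfcc, delta, delta2])
fixed_vector = np.concatenate([features.mean(axis=1), features.std(axis=1)])
print(fixed_vector.shape)   # (78,)
```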
Islam et al. [31] developed an algorithm for the noninvasive and automatic identification of COVID-19 using cough audio recordings and a DNN. The sounds generated by coughing can provide important information about the movement of the glottis in several respiratory disorders. The efficacy of the proposed algorithm was assessed using cough recordings from healthy individuals and from those with COVID-19 infections. The proposed technique automatically recognizes COVID-19 cough recordings with overall accuracies of 89.2%, 93.8%, and 97.5% when using time-domain, mixed-domain, and frequency-domain feature vectors, respectively.
Loey et al. [32] extracted and classified cough characteristics using six deep transfer learning models: ResNet-18, ResNet-50, GoogleNet, ResNet-101, NasNetMobile, and MobileNet-V2. The database contained 1457 cough sounds, 755 from COVID-19 patients and 702 from healthy people. With the SGDM optimizer, the proposed model achieved an accuracy of 94.9%. The sound-to-image conversion step used the scalogram method.
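The scalogram (sound-to-image) conversion mentioned above can be sketched with a continuous wavelet transform, for example via PyWavelets; the wavelet, scale range, clip length, and file path below are assumptions rather than the settings used by Loey et al.

```python
# Hedged sketch: render the continuous wavelet transform of a cough waveform as
# a scalogram image that a CNN could consume. All parameters are assumptions.
import numpy as np
import pywt
import librosa
import matplotlib.pyplot as plt

audio, sr = librosa.load("cough_sample.wav", sr=8000)     # hypothetical file
audio = audio[: 2 * sr]                                   # keep a short clip for speed

scales = np.arange(1, 128)                                # coarse-to-fine scales
coeffs, freqs = pywt.cwt(audio, scales, "morl", sampling_period=1.0 / sr)

scalogram = np.abs(coeffs)                                # magnitude scalogram
plt.imshow(scalogram, aspect="auto", origin="lower", cmap="viridis")
plt.axis("off")
plt.savefig("cough_scalogram.png", bbox_inches="tight", pad_inches=0)
```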
Nessiem et al. [33] assessed the use of DL models as a pervasive, affordable, and high-performing pre-testing approach for recognizing COVID-19 from breathing or coughing sounds recorded on mobile devices via the internet. They employed an ensemble of CNNs that determines whether an individual is affected by COVID-19 from raw breathing and coughing audio as well as spectrograms. Their models achieved a maximum unweighted average recall (UAR) of 74.9% and an AUC of 80% on the held-out, subject-independent evaluation split. Tawfik et al. [34] developed a smart DL-based strategy to identify COVID-19 patients from their cough sounds. Their system consisted of three phases: sound pre-processing through noise reduction; feature extraction, segmentation, and classification; and model implementation. A total of 1635 audio recordings were analyzed, and eight features were extracted from them; 573 coughs tested positive for COVID-19, whereas 1062 tested negative. The DL model detected COVID-19 with an overall accuracy of 98.5%.
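The UAR reported above is the unweighted average recall, i.e., recall averaged over classes regardless of class size; a minimal sketch using scikit-learn's macro-averaged recall follows, with toy labels chosen purely for illustration.

```python
# Minimal sketch: UAR is the macro-averaged (class-balanced) recall.
from sklearn.metrics import recall_score

y_true = [0, 0, 0, 0, 1, 1]          # toy labels: 0 = negative, 1 = COVID-19
y_pred = [0, 0, 1, 0, 1, 0]

uar = recall_score(y_true, y_pred, average="macro")
print(f"UAR = {uar:.3f}")            # (3/4 + 1/2) / 2 = 0.625
```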
Zhang et al. [35] proposed CBIR-CSNN, a content-based image retrieval method with a convolutional Siamese neural network, to differentiate between LC and TB in CT images. Lesion regions were first cropped to generate the LC and TB databases, and pairs of patches from two different locations were then used to generate a patch-pair database. CBIR-CSNN was trained and tested on data from 719 patients, and an additional external dataset of 30 patients was used for validation. At the patch level, CBIR-CSNN achieved a remarkable 0.953 mAP, 0.947 accuracy, and 0.970 AUC. The DAvoU-Net model proposed by Alebiosu et al. [36] comprises two components, multi-scale residual network blocks and dense connections, and is used to segment TB-affected regions in CT scans. The feature learning approach initializes a three-dimensional CNN for deep feature extraction by transforming the weights of a well-trained two-dimensional network into three dimensions. Overall, the performance of DAvoU-Net + ResNet-50, a 3D CNN, and a simultaneous LSTM was superior to that of the other six fully trained networks used for comparison.
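In the spirit of the CBIR-CSNN approach above, the following is a minimal sketch of a convolutional Siamese network: a shared encoder maps two lesion patches to embeddings whose distance indicates similarity. The patch size, layer widths, and contrastive-style loss are assumptions, not the published model.

```python
# Hedged sketch of a Siamese network for lesion-patch retrieval.
import tensorflow as tf
from tensorflow.keras import layers, Model

PATCH_SHAPE = (64, 64, 1)   # assumed patch size

def build_encoder():
    inp = layers.Input(shape=PATCH_SHAPE)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.GlobalAveragePooling2D()(x)
    emb = layers.Dense(128)(x)
    return Model(inp, emb, name="shared_encoder")

encoder = build_encoder()                      # weights shared by both branches
patch_a = layers.Input(shape=PATCH_SHAPE)
patch_b = layers.Input(shape=PATCH_SHAPE)
emb_a, emb_b = encoder(patch_a), encoder(patch_b)

# Euclidean distance between embeddings; a small distance suggests the same class.
distance = layers.Lambda(
    lambda t: tf.sqrt(tf.reduce_sum(tf.square(t[0] - t[1]), axis=1, keepdims=True) + 1e-9)
)([emb_a, emb_b])

siamese = Model([patch_a, patch_b], distance, name="cbir_siamese")

def contrastive_loss(y_true, d, margin=1.0):
    """y_true = 1 for matching pairs, 0 for mismatched pairs."""
    y_true = tf.cast(y_true, d.dtype)
    return tf.reduce_mean(y_true * tf.square(d) +
                          (1.0 - y_true) * tf.square(tf.maximum(margin - d, 0.0)))

siamese.compile(optimizer="adam", loss=contrastive_loss)
```

At retrieval time, query-patch embeddings are compared against the database embeddings and the nearest neighbors determine the predicted class.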
Toğaçar et al. [37] introduced a method to detect lung cancer using chest CT scans. The AlexNet, LeNet, and VGG-16 DL models were used for feature extraction and classification. During training, image augmentation techniques such as zooming, rotation, filling, and cropping were applied to the dataset to improve classification success. Because of the model's strong performance, the features obtained from the final fully connected layer (FCL) of the AlexNet framework were also used independently as inputs to LR, LDA, decision tree, SVM, SoftMax, and kNN classifiers. The combination of AlexNet and the kNN classifier provided the highest classification accuracy at 98.74%.
Latif et al. [38] proposed the use of DL techniques, namely GoogleNet and ResNet-50, to extract features. Integrating GoogleNet, ResNet-18, and an SVM within their modified ML pipeline achieved a maximum average accuracy of 99.9% once 2000 features were generated. P-DenseCOVNet, a modified version of the DenseNet structure designed by Sadik et al. [39] for effective feature extraction and the evaluation of COVID-19 and pneumonia, adds direct convolutional paths to the standard DenseNet architecture to improve performance by mitigating the loss of spatial information. To segment the lung regions from CT scans, an upgraded version of U-Net known as SKICU-Net, containing skip connections between the encoder and decoder sections, was applied instead of the conventional U-Net, which resulted in superior segmentation performance. The system performed well, achieving a 0.97 F1-score for segmentation and 87.5% accuracy when identifying normal cases, COVID-19, and common pneumonia. Florescu et al. [40] proposed a federated learning method for the detection of COVID-19 using pre-trained DL models. In their study, a total of 2230 chest CT scans were collected, including 1016 images of COVID-19, 610 images of LC, and 604 normal images. The architecture consisted of a single server and three clients, each client representing a healthcare organization holding a private dataset; these organizations worked together to develop a global model.
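The single-server, three-client setup described for Florescu et al. can be illustrated with a minimal federated averaging (FedAvg) sketch, in which each client trains locally on its private data and only the model weights are aggregated on the server; the toy model, placeholder data, and number of rounds are assumptions, not the published system.

```python
# Hedged sketch of federated averaging with one server and three clients.
import numpy as np
from tensorflow.keras import layers, models

def build_model():
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(3, activation="softmax"),   # e.g., COVID-19 / LC / normal
    ])

def average_weights(weight_sets):
    """Element-wise mean of the clients' weight tensors (equal-sized clients assumed)."""
    return [np.mean(layer_stack, axis=0) for layer_stack in zip(*weight_sets)]

# Placeholder private datasets, one per client (hospital); data never leaves the client.
clients = [(np.random.rand(20, 64, 64, 1), np.random.randint(0, 3, 20))
           for _ in range(3)]

global_model = build_model()
global_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

for round_idx in range(2):                               # communication rounds
    client_weights = []
    for x, y in clients:
        local = build_model()
        local.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        local.set_weights(global_model.get_weights())    # start from the global model
        local.fit(x, y, epochs=1, verbose=0)             # local training only
        client_weights.append(local.get_weights())
    global_model.set_weights(average_weights(client_weights))   # server aggregation
```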
Fu et al. [41] created a diagnostic tool based on AI classification of chest CT scans to diagnose COVID-19 and other prevalent infectious respiratory diseases. Five lung conditions were evaluated: COVID-19, bacterial PNEU, viral PNEU, TB, and normal lung. Images for the training and validation groups were gathered at Wuhan Jin Hospital, and images for the test group were taken at Xiamen University and Zhongshan Hospital. The proposed AI system performed impressively in recognizing COVID-19 and other common viral respiratory diseases, with comparable levels of recall and specificity. Kaewlek et al. [42] tested four DL models, GoogleNet, ResNet, AlexNet, and a deep CNN, for categorizing CT scans of TB, PNEU, and COVID-19. They obtained 2134 images of normal cases, 943 images of TB, 2041 images of PNEU, and 3917 images of COVID-19 from internet sources. In their analysis of model effectiveness, ResNet had the highest accuracy at 0.96, with an F1 score of 0.93 and an AUC of 0.95; the deep CNN was second best, followed by AlexNet and GoogleNet. A deep CNN-based technique developed by Polat et al. [43] was capable of automatically recognizing patterns associated with COVID-19-related lesions in chest CT images. Initially, 102 CT scans were segmented, producing a total of 16,040 CT scan segments. Of these, 10,420 segments corresponding to healthy respiratory areas were labeled COVID-19-negative, whereas 5620 segments in which various lesions had been discovered were labeled COVID-19-positive. The suggested CNN architecture raised the diagnostic accuracy to 93.26%.
Abayomi-Alli et al. [44] proposed a DL model called DeepShufNet for COVID-19 detection. Using the Mel COCOA-2-augmented training datasets, the proposed model achieved an accuracy of 90.1%, a precision of 77.1%, a recall of 62.7%, a specificity of 95.98%, and an F-score of 69.1% for identifying cases of COVID-19.
Mishra et al. [45] developed a transfer learning-based algorithm, built on the ResNet-50 and VGG-16 architectures, for classifying CT images into COVID-19, normal, and PNEU groups. Their research employed data augmentation and fine-tuning methods to enhance and optimize the ResNet-50 and VGG-16 models. With a classification accuracy above 99.9% for both the ResNet-50- and VGG-16-based systems, the proposed model works extremely well for binary tasks such as distinguishing COVID-19 from normal. For multi-class classification (COVID-19 vs. normal vs. pneumonia), the approach achieved median classification accuracies of 86.74% and 88.52% when using the VGG-16 and ResNet-50 architectures as the initial models, respectively. Masud et al. [46] developed a CNN-based diagnostic strategy to identify COVID-19 patients by evaluating the image properties of CT scans. Their research examined a freely accessible CT scan database, containing 5493 non-COVID-19 images and 3914 COVID-19 images, and fed it into the proposed CNN. During the training, validation, and evaluation stages, the model achieved accuracies of 99.76%, 96.10%, and 96%, respectively.
Table 1. A list of previous studies that used ML and DL models for the diagnosis of chest diseases using CXR, CT scans, and cough sounds.
According to many studies [14][15][16][17][18][19][20], the nine classes considered here, i.e., LC, ATE, COL, TB, PNEUTH, EDE, COVID-19, PNEU, and normal, present similar symptoms and imaging findings, which makes it challenging for health experts to distinguish these chest diseases using CXRs and CT scans. Healthcare professionals have also attempted to diagnose these chest diseases using cough sounds [29][31][32][33][34]; however, the cough sounds of these diseases also resemble each other, so diagnosing chest diseases from cough sounds is likewise challenging. Hence, there is an evident need for an automated framework based on DL models that can automatically diagnose the chest diseases mentioned above using X-rays, CT scans, and cough sounds. The main focus of previous studies [30][31][32][33][34][35][38] was to distinguish COVID-19 from non-COVID-19 cases using CXR images and CT scans. A few studies [29][30][31] have used CXR images to distinguish COVID-19 from pneumonia infections, including viral and bacterial infections. However, only limited studies [41][42][43][44][45][46] have identified PNEU and COVID-19 based on cough sounds, and no evidence has been found of diagnosing LC, ATE, COL, TB, PNEUTH, and EDE from cough sounds using DL models.

This entry is adapted from the peer-reviewed paper 10.3390/diagnostics13172772

References

  1. Aslani, S.; Jacob, J. Utilisation of deep learning for COVID-19 diagnosis. Clin. Radiol. 2023, 78, 150–157.
  2. Hertel, R.; Benlamri, R. Deep Learning Techniques for COVID-19 Diagnosis and Prognosis Based on Radiological Imaging. ACM Comput. Surv. 2023, 55, 1–39.
  3. Khan, A.; Khan, S.H.; Saif, M.; Batool, A.; Sohail, A.; Khan, M.W. A Survey of Deep Learning Techniques for the Analysis of COVID-19 and their usability for Detecting Omicron. J. Exp. Theor. Artif. Intell. 2023, 1–43.
  4. Mercaldo, F.; Belfiore, M.P.; Reginelli, A.; Brunese, L.; Santone, A. Coronavirus covid-19 detection by means of explainable deep learning. Sci. Rep. 2023, 13, 462.
  5. Bassiouni, M.M.; Chakrabortty, R.K.; Hussain, O.K.; Rahman, H.F. Advanced deep learning approaches to predict supply chain risks under COVID-19 restrictions. Expert Syst. Appl. 2023, 211, 118604.
  6. Constantinou, M.; Exarchos, T.; Vrahatis, A.G.; Vlamos, P. COVID-19 Classification on Chest X-ray Images Using Deep Learning Methods. Int. J. Environ. Res. Public Health 2023, 20, 2035.
  7. Vinod, D.N.; Prabaharan, S.R.S. COVID-19-The Role of Artificial Intelligence, Machine Learning, and Deep Learning: A Newfangled. Arch. Comput. Methods Eng. 2023, 30, 2667–2682.
  8. Gupta, K.; Bajaj, V. Deep learning models-based CT-scan image classification for automated screening of COVID-19. Biomed. Signal Process. Control. 2023, 80, 104268.
  9. Zhao, Z.; Wu, J.; Cai, F.; Zhang, S.; Wang, Y.-G. A hybrid deep learning framework for air quality prediction with spatial autocorrelation during the COVID-19 pandemic. Sci. Rep. 2023, 13, 1015.
  10. Du, H.; Dong, E.; Badr, H.S.; Petrone, M.E.; Grubaugh, N.D.; Gardner, L.M. Incorporating variant frequencies data into short-term forecasting for COVID-19 cases and deaths in the USA: A deep learning approach. Ebiomedicine 2023, 89, 104482.
  11. Choudhary, T.; Gujar, S.; Goswami, A.; Mishra, V.; Badal, T. Deep learning-based important weights-only transfer learning approach for COVID-19 CT-scan classification. Appl. Intell. 2023, 53, 7201–7215.
  12. Chen, M.-Y.; Lai, Y.-W.; Lian, J.-W. Using Deep Learning Models to Detect Fake News about COVID-19. ACM Trans. Internet Technol. 2023, 23, 1–23.
  13. Iqbal, A.; Usman, M.; Ahmed, Z. An efficient deep learning-based framework for tuberculosis detection using chest X-ray images. Tuberculosis 2022, 136, 102234.
  14. Kumar, N.; Gupta, M.; Gupta, D.; Tiwari, S. Novel deep transfer learning model for COVID-19 patient detection using X-ray chest images. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 469–478.
  15. Huy, V.T.Q.; Lin, C.-M. An Improved Densenet Deep Neural Network Model for Tuberculosis Detection Using Chest X-ray Images. IEEE Access 2023, 11, 42839–42849.
  16. Al-Waisy, A.S.; Al-Fahdawi, S.; Mohammed, M.A.; Abdulkareem, K.H.; Mostafa, S.A.; Maashi, M.S.; Arif, M.; Garcia-Zapirain, B. COVID-CheXNet: Hybrid deep learning framework for identifying COVID-19 virus in chest X-rays images. Soft Comput. 2023, 27, 2657–2672.
  17. Malik, H.; Anees, T.; Din, M.; Naeem, A. CDC_Net: Multi-classification convolutional neural network model for detection of COVID-19, pneumothorax, pneumonia, lung Cancer, and tuberculosis using chest X-rays. Multimed. Tools Appl. 2023, 82, 13855–13880.
  18. Shelke, A.; Inamdar, M.; Shah, V.; Tiwari, A.; Hussain, A.; Chafekar, T.; Mehendale, N. Chest X-ray Classification Using Deep Learning for Automated COVID-19 Screening. SN Comput. Sci. 2021, 2, 300.
  19. Ali, M.U.; Kallu, K.D.; Masood, H.; Tahir, U.; Gopi, C.V.V.M.; Zafar, A.; Lee, S.W. A CNN-Based Chest Infection Diagnostic Model: A Multistage Multiclass Isolated and Developed Transfer Learning Framework. Int. J. Intell. Syst. 2023, 2023, 1–12.
  20. Agrawal, S.; Honnakasturi, V.; Nara, M.; Patil, N. Utilizing Deep Learning Models and Transfer Learning for COVID-19 Detection from X-ray Images. SN Comput. Sci. 2023, 4, 326.
  21. Ibrahim, A.U.; Ozsoz, M.; Serte, S.; Al-Turjman, F.; Yakoi, P.S. Pneumonia Classification Using Deep Learning from Chest X-ray Images During COVID-19. Cogn. Comput. 2021, 1–13.
  22. Ayalew, A.M.; Salau, A.O.; Tamyalew, Y.; Abeje, B.T.; Woreta, N. X-ray image-based COVID-19 detection using deep learning. Multimed. Tools Appl. 2023, 1–19.
  23. Jennifer, J.S.; Sharmila, T.S. A Neutrosophic Set Approach on Chest X-rays for Automatic Lung Infection Detection. Inf. Technol. Control 2023, 52, 37–52.
  24. Jaszcz, A.; Połap, D.; Damaševičius, R. Lung X-ray Image Segmentation Using Heuristic Red Fox Optimization Algorithm. Sci. Program. 2022, 2022, 1–8.
  25. Karthik, R.; Menaka, R.; Hariharan, M.; Kathiresan, G.S. Ai for COVID-19 detection from radiographs: Incisive analysis of state of the art techniques, key challenges and future directions. IRBM 2022, 43, 486–510.
  26. Hu, H.; Shen, L.; Guan, Q.; Li, X.; Zhou, Q.; Ruan, S. Deep co-supervision and attention fusion strategy for automatic COVID-19 lung infection segmentation on CT images. Pattern Recognit. 2022, 124, 108452.
  27. Li, Z.; Zhao, S.; Chen, Y.; Luo, F.; Kang, Z.; Cai, S.; Zhao, W.; Liu, J.; Zhao, D.; Li, Y. A deep-learning-based framework for severity assessment of COVID-19 with CT images. Expert Syst. Appl. 2021, 185, 115616.
  28. Pahar, M.; Klopper, M.; Reeve, B.; Warren, R.; Theron, G.; Diacon, A.; Niesler, T. Automatic Tuberculosis and COVID-19 cough classification using deep learning. In Proceedings of the 2022 International Conference on Electrical, Computer and Energy Technologies (ICECET), Prague, Czech Republic, 20–22 July 2022; pp. 1–9.
  29. Kim, S.; Baek, J.-Y.; Lee, S.-P. COVID-19 Detection Model with Acoustic Features from Cough Sound and Its Application. Appl. Sci. 2023, 13, 2378.
  30. Islam, R.; Abdel-Raheem, E.; Tarique, M. A study of using cough sounds and deep neural networks for the early detection of Covid-19. Biomed. Eng. Adv. 2022, 3, 100025.
  31. Loey, M.; Mirjalili, S. COVID-19 cough sound symptoms classification from scalogram image representation using deep learning models. Comput. Biol. Med. 2021, 139, 105020.
  32. Nessiem, M.A.; Mohamed, M.M.; Coppock, H.; Gaskell, A.; Schuller, B.W. Detecting COVID-19 from breathing and coughing sounds using deep neural networks. In Proceedings of the 2021 IEEE 34th international symposium on computer-based medical systems (CBMS), Aveiro, Portugal, 7–9 June 2021; pp. 183–188.
  33. Tawfik, M.; Nimbhore, S.; Al-Zidi, N.M.; Ahmed, Z.A.; Almadani, A.M. Multi-features extraction for automating COVID-19 detection from cough sound using deep neural networks. In Proceedings of the 2022 4th International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 January 2022; pp. 944–950.
  34. Zhang, K.; Qi, S.; Cai, J.; Zhao, D.; Yu, T.; Yue, Y.; Yao, Y.; Qian, W. Content-based image retrieval with a Convolutional Siamese Neural Network: Distinguishing lung cancer and tuberculosis in CT images. Comput. Biol. Med. 2022, 140, 105096.
  35. Alebiosu, D.O.; Dharmaratne, A.; Lim, C.H. Improving tuberculosis severity assessment in computed tomography images using novel DAvoU-Net segmentation and deep learning framework. Expert Syst. Appl. 2023, 213, 119287.
  36. Toğaçar, M.; Ergen, B.; Cömert, Z. Detection of lung cancer on chest CT images using minimum redundancy maximum relevance feature selection method with convolutional neural networks. Biocybern. Biomed. Eng. 2020, 40, 23–39.
  37. Latif, G.; Morsy, H.; Hassan, A.; Alghazo, J. Novel Coronavirus and Common Pneumonia Detection from CT Scans Using Deep Learning-Based Extracted Features. Viruses 2022, 14, 1667.
  38. Sadik, F.; Dastider, A.G.; Subah, M.R.; Mahmud, T.; Fattah, S.A. A dual-stage deep convolutional neural network for au-tomatic diagnosis of COVID-19 and pneumonia from chest CT images. Comput. Biol. Med. 2022, 149, 105806.
  39. Florescu, L.M.; Streba, C.T.; Şerbănescu, M.-S.; Mămuleanu, M.; Florescu, D.N.; Teică, R.V.; Nica, R.E.; Gheonea, I.A. Federated Learning Approach with Pre-Trained Deep Learning Models for COVID-19 Detection from Unsegmented CT images. Life 2022, 12, 958.
  40. Fu, M.; Yi, S.L.; Zeng, Y.; Ye, F.; Li, Y.; Dong, X.; Ren, Y.-D.; Luo, L.; Pan, J.-S.; Zhang, Q. Deep learning-based recognizing covid-19 and other common infectious diseases of the lung by chest ct scan images. medRxiv 2020.
  41. Kaewlek, T.; Tanyong, K.; Chakkaeo, J.; Kladpree, S.; Chusin, T.; Yabsantia, S.; Udee, N. Classification of Pneumonia, Tuberculosis, and COVID-19 on Computed Tomography Images Using Deep Learning. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4379837 (accessed on 10 July 2023).
  42. Polat, H.; Özerdem, M.S.; Ekici, F.; Akpolat, V. Automatic detection and localization of COVID-19 pneumonia using axial computed tomography images and deep convolutional neural networks. Int. J. Imaging Syst. Technol. 2021, 31, 509–524.
  43. Abayomi-Alli, O.O.; Damaševičius, R.; Abbasi, A.A.; Maskeliūnas, R. Detection of COVID-19 from Deep Breathing Sounds Using Sound Spectrum with Image Augmentation and Deep Learning Techniques. Electronics 2022, 11, 2520.
  44. Mishra, N.K.; Singh, P.; Joshi, S.D. Automated detection of COVID-19 from CT scan using convolutional neural network. Biocybern. Biomed. Eng. 2021, 41, 572–588.
  45. Masud, M.; Alshehri, M.D.; Alroobaea, R.; Shorfuzzaman, M. Leveraging Convolutional Neural Network for COVID-19 Disease Detection Using CT Scan Images. Intell. Autom. Soft Comput. 2021, 29, 1–13.
  46. World Health Organization. Cancer. 2020. Available online: https://www.who.int/news-room/fact-sheets/detail/cancer (accessed on 30 July 2023).