Efficient Thorax Disease Classification by DCNN

Thorax disease is a life-threatening condition caused by bacterial infections that occur in the lungs. It can be deadly if not treated in time, so early diagnosis of thoracic diseases is vital. Computer vision techniques based on deep learning are now widely used for categorizing medical and natural images. As a direct result of this success, many researchers are using deep convolutional neural networks (DCNNs) to diagnose thoracic illnesses from chest radiographs.

Keywords: thorax disease; chest X-ray; Deep Convolutional Neural Network (DCNN); image processing; classification

1. Introduction

Thoracic illnesses are widespread and significant health issues that affect many individuals all over the globe. For instance, pneumonia kills roughly 4 million people annually and infects around 450 million worldwide (seven percent of the total population). One of the most widely used forms of radiological examination for identifying thoracic disorders is chest radiography, often known as chest X-ray (CXR) [1][2]. Globally, countless chest radiographs are produced each year, and virtually all of them are examined visually by humans. This is expensive, time-consuming, and operator-biased, cannot take advantage of the wealth of available data, and requires a high level of skill and focus [3]. A significant public health issue in many nations is the absence of qualified radiologists who can interpret chest radiographs. Therefore, it is essential to create an automated technique for thoracic illness detection on chest radiographs using computer-aided diagnosis (CAD). The ChestX-ray14 dataset was provided by Wang et al. [1][4][5], who recognized its value and used it to assess automated techniques for diagnosing 14 thoracic illnesses using chest radiography.
For radiological diagnosis, the chest X-ray is currently the most commonly used test in hospitals. The intricate pathological anatomy of many diseases and their accompanying lesion regions makes automated chest X-ray classification a tough assignment [6]. In hospitals, chest X-ray analysis is entirely at the discretion of a radiology professional, who diagnoses the disease and the portion of the body afflicted by the lesion. To appropriately categorize a range of illnesses by analyzing chest X-ray images, computer-aided diagnosis is critical [7][8][9]. This may be achieved with the help of a CAD system that arises from the laborious task of converting human knowledge into the language of artificial intelligence [10][11][12].
For decades, radiography has been an essential tool for detecting medical disorders and helping in therapy administration [1][10]. The growth of the medical industry has resulted in the employment of automatic categorization techniques based on machine learning paradigms; nevertheless, the data are meaningless in the absence of a professional diagnostician [13]. Dealing with radiological X-ray data requires extensive experience and knowledge. Because of the gravity of the situation, an expert will probably need a significant amount of time to review the X-ray data. Even so, there is a chance that weary paramedics may make an error that might have been prevented [14]. Similarities between diseases, such as pneumonia, whose presentation overlaps with that of various other ailments, have exacerbated the problem [13]. As a result, there is a demand for radiological X-ray data automation that can categorize diseases in ways that the human eye or expert knowledge cannot. According to the World Health Organization (WHO), more than 66% of the world's population lacks access to modern radiology diagnostics and specialist skills [15]. Atelectasis, effusion, cardiomegaly, masses, infiltration, emphysema, pneumonia, consolidation, pneumothorax, fibrosis, nodules, pleural thickening, edema, and hernia are some of the fundamental thoracic illnesses that may be detected using a chest X-ray. Additional thoracic CXR research can be found in [16][17][18]. Issues surrounding the detection and treatment of illness have grown progressively more critical due to the COVID-19 pandemic. Researchers may now conduct their research using CXR data freely available on numerous digital platforms. These publicly accessible datasets contribute to bioinformatics and computer science by providing readers with an overview of the findings described in the associated reports [19]. Several pre-existing methodologies and procedures have enabled the utilization of these massive CXR collections [3][20][21].

2. Efficient Thorax Disease Classification by DCNN

Currently, computer vision techniques using deep learning are being used specifically for categorizing medical and natural images [3][22]. As a direct result of this success, many researchers are presently using deep convolutional neural networks (DCNNs) to diagnose thoracic illnesses based on chest radiographs. However, because the vast majority of these DCNN models were designed to address a variety of problems, they typically have three shortcomings: (1) they frequently fail to take into account the characteristics of the various types of thoracic diseases; (2) they frequently make the wrong diagnosis because they do not focus solely on aberrant areas; and (3) their diagnoses can be challenging to interpret, which limits their utility in clinical practice.
Traditional clinical experience, according to [23], has demonstrated the value of lesion-site attention for better diagnosis. The authors in [23] developed a lesion location attention-guided network (LLAGnet) that focuses on discriminative lesion-site features in chest X-rays for multi-label thoracic disease categorization. The authors in [24] utilized a transfer learning technique for training the convolutional layers of a deep network and worked on a small segment of data. The researchers in [25] worked on a small dataset of a few hundred images for testing and training. They used a neutrosophic approach, applying various deep convolutional models to classify COVID-19 among different lung infections. They utilized a small amount of data, which caused overfitting. The paper [26] utilized the heuristic red fox optimization algorithm (ROFA) for the classification and segmentation of pneumonia by applying different thresholds. The drawback of their approach is that the small number of pixels taken for analysis does not determine the correct location for segmentation, resulting in low segmentation accuracy. In the research of [27], the authors utilized VGG-SegNet for pulmonary nodule classification and the Lung-PET-CT-Dx dataset for segmentation. However, the model does not account for several parameters and therefore achieves low accuracy. The researchers in [28] proposed the CheX-Net CNN model, one of the most recent and sophisticated approaches. CheX-Net takes in chest X-ray images and generates a heatmap that shows the locations of the regions most likely to be affected by the disease. In recognizing pneumonia in 420 X-ray images, CheX-Net outperformed four experienced radiologists on average. On the other hand, the model proposed in [28] is a DenseNet variant that has not undergone any significant alterations, with the purpose of learning representations with little to no supervision. The network weights were pre-trained on ImageNet images.
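As a rough illustration of the heatmap output described for CheX-Net [28], the sketch below shows how a class activation map can be computed from a DenseNet-121 classifier with a single linear head (as in torchvision's implementation). It is a generic CAM example under our own naming, not the authors' code; the class index and random input are placeholders.

```python
# Minimal CAM-style heatmap sketch for a DenseNet-121 classifier (illustrative only).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121()  # untrained here; load a trained checkpoint in practice
model.eval()

def cam_heatmap(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """image: (1, 3, H, W) tensor; returns an (H, W) heatmap for class_idx."""
    with torch.no_grad():
        fmap = F.relu(model.features(image))              # (1, C, h, w) feature maps
        weights = model.classifier.weight[class_idx]      # (C,) weights of the linear head
        cam = torch.einsum("c,bchw->bhw", weights, fmap)  # weighted sum over channels
        cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                            mode="bilinear", align_corners=False)[0, 0]
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return cam

heatmap = cam_heatmap(torch.randn(1, 3, 224, 224), class_idx=7)  # placeholder class index
```

In practice, the normalized map would be overlaid on the radiograph to highlight the regions that most influenced the predicted label.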
The research in [20] recommended employing Consult-Net to develop relevant feature representations that might be used for the classification of lung illnesses. The Consult-Net project's goal is to overcome the obstacles posed by a broad set of diseases and by the influence of irrelevant areas in chest X-ray classification. The study primarily focuses on classifying thoracic disorders shown in chest X-rays. The authors in [20] presented a two-branch architecture, known as Consult-Net, for learning discriminative features to achieve two goals simultaneously, as Consult-Net is made up of two distinct parts. First, a feature selector bound by an information bottleneck retrieves important disease-specific features based on their relevance. Second, a feature integrator based on spatial and channel encoding improves the latent semantic linkages in the feature space. Consult-Net integrates these complementary characteristics to increase the accuracy of thoracic illness categorization in CXRs.
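The spatial-encoding idea mentioned above can be pictured with a simple spatial-attention module. The sketch below is an illustrative, CBAM-style example under assumed names and sizes, not the Consult-Net feature integrator itself.

```python
# Illustrative spatial-attention module: re-weight feature-map locations (not Consult-Net code).
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # 2 input channels: per-pixel mean and max across the feature channels
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, C, H, W); produce a (batch, 1, H, W) attention map
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(pooled))  # emphasize informative locations

feat = SpatialAttention()(torch.randn(1, 512, 14, 14))  # placeholder feature maps
```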
A DCNN (Thorax-Net) is proposed in [14]. It aims to diagnose 14 thoracic diseases from chest radiography. Thorax-Net features both an attention branch and a classification branch. The classification branch creates feature maps, while the attention branch capitalizes on the relationship between class labels and areas of clinical concern. A diagnosis is obtained by feeding a chest radiograph into the trained Thorax-Net and then averaging and binarizing the outputs of the two branches. When trained with internal data, Thorax-Net outperforms other deep models, with AUC values ranging from 0.7876 to 0.896 across trials.
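A minimal sketch of the branch-fusion step described above is given below, assuming each branch emits one logit per disease; the threshold value and tensor shapes are illustrative assumptions, not values from the paper.

```python
# Illustrative fusion of two branch outputs: average the per-disease probabilities, then binarize.
import torch

NUM_DISEASES = 14
THRESHOLD = 0.5  # illustrative cut-off, not taken from the paper

def fuse_branches(cls_logits: torch.Tensor, att_logits: torch.Tensor) -> torch.Tensor:
    """cls_logits, att_logits: (batch, 14) raw branch outputs -> binary multi-label diagnosis."""
    probs = (torch.sigmoid(cls_logits) + torch.sigmoid(att_logits)) / 2  # average
    return (probs >= THRESHOLD).int()                                    # binarize

# Random branch outputs stand in for a trained network.
labels = fuse_branches(torch.randn(2, NUM_DISEASES), torch.randn(2, NUM_DISEASES))
```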
Due to the absence of substantial annotations and the subtlety of pathological abnormalities, CAD of thoracic disorders remains complex. To address this challenge, the paper [29] proposed a model known as the triple-attention learning (A3 Net) system. By merging three independent attention modules into a single, coherent framework, the proposed model unifies scale-wise, channel-wise, and element-wise attention learning. A pre-trained DenseNet-121 network serves as the feature extraction backbone. The deep model is explicitly encouraged to focus more on the discriminative channels of the feature maps.
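The channel-wise component can be illustrated with a squeeze-and-excitation-style module that re-weights the channels of a DenseNet-121 feature map; this is an assumed, simplified sketch, not the A3 Net implementation.

```python
# Illustrative channel-wise attention (squeeze-and-excitation style).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, C, H, W); weight each channel by a learned score in [0, 1]
        scores = self.fc(x.mean(dim=(2, 3)))           # global average pool -> (batch, C)
        return x * scores.unsqueeze(-1).unsqueeze(-1)  # re-weight the feature maps

attn = ChannelAttention(1024)              # 1024 channels, as in DenseNet-121 features
out = attn(torch.randn(1, 1024, 7, 7))     # output has the same shape as the input
```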
The initial stage in developing automated radiology classification is identifying the relevant diagnostic features in an X-ray. These elements can assist in making a diagnosis. The problem is that these properties are highly non-linear, making them difficult to define and leaving room for subjectivity. To extract these features, the model must be trained to apply a sophisticated non-linear function that maps an image to its features (fimg). Previous authors created DCNNs for extracting these complex non-linear features [30].
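A minimal sketch of such a learned mapping from an image to a feature vector fimg is shown below, assuming a DenseNet-121 backbone with global average pooling; the backbone choice and feature size are illustrative, not taken from the cited work.

```python
# Illustrative DCNN feature extractor: image -> f_img feature vector.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.densenet121()  # untrained here; load a trained checkpoint in practice
backbone.eval()

def extract_f_img(image: torch.Tensor) -> torch.Tensor:
    """image: (batch, 3, H, W) -> f_img: (batch, 1024) feature vector."""
    with torch.no_grad():
        fmap = F.relu(backbone.features(image))           # convolutional feature maps
        return F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # global pooling -> f_img

f_img = extract_f_img(torch.randn(2, 3, 224, 224))  # shape: (2, 1024)
```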
Deep CNNs have a few disadvantages, the most significant being the difficulties connected with vanishing gradients and the massive number of parameters required [30]. Training deep networks for medical applications is difficult due to a lack of medical datasets (this dataset contains only about 3999 patient records). Furthermore, a network is known to be quite sensitive at the beginning when it is trained from scratch. This was established in [31], which demonstrated that vanishing gradients occur when a deep neural network is not properly initialized, causing the training process to be unstable. To identify qualities that are complementary to one another, Chen et al. [4] introduced DualCheXNet, a dual asymmetric feature learning network. However, the algorithms currently in use for categorizing CXR images do not take any knowledge-based information into account [4]. Instead, the emphasis is on developing useful representations using a range of deep models. Furthermore, as the network grows in size, issues with vanishing gradients may appear on individual CXR images.
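Careful weight initialization is a common remedy for the unstable, vanishing-gradient-prone training from scratch noted above. The sketch below applies Kaiming (He) initialization to a toy model; it is a general illustration, not the recipe from the cited paper.

```python
# Illustrative Kaiming (He) initialization for ReLU networks.
import torch.nn as nn

def init_weights(module: nn.Module) -> None:
    # Scale conv/linear weights by fan-in so activations keep a stable variance.
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)

model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 14, 1))
model.apply(init_weights)  # recursively initialize every conv/linear layer
```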
Iterative attention-guided curriculum learning was used in [32] to enhance thoracic disease categorization and localization performance under weak supervision. The results are likely affected by the attention-aware networks' continued inability to identify target regions accurately. This is due to a lack of expert-level supervision or instruction.
Similarly, performance suffers greatly from imbalanced data as the number of parameters increases, and existing models cannot handle such large parameter counts, which causes incorrect classifications. On the other hand, concerns about overfitting and vanishing gradients have grown as model depth has increased [20]. Furthermore, a single network with greater depth is more likely to miss crucial distinguishing features in intermediate layers. These characteristics are frequently subtle, yet they are critical for identifying hard-to-classify anomalies in CXRs. These difficulties have become bottlenecks, impeding the deep extension of the ImageNet model [4]. Although a 2D LSTM uses more parameters than a 2D RNN, it is less efficient at runtime for large images due to the issue of exploding or vanishing gradients. Overfitting is also possible, especially when training data are limited [33].
A vanishing gradient issue arises in the training phase when the network is fine-tuned from pre-trained models. The lower layers of the network are likely not well customized, since the gradient magnitudes (back-propagated from the training error) decline swiftly. The conceptual appearance of the disease is captured in the top layers. To avoid this issue, the authors suggested training the network one layer at a time and building the P-Net from scratch [33].
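The layer-wise idea can be sketched as progressively unfreezing and training one block at a time so that lower layers also receive meaningful updates. The tiny model and schedule below are illustrative assumptions of this general strategy, not the P-Net procedure itself.

```python
# Illustrative layer-wise training: freeze everything, then unfreeze one block per stage.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),   # block 0
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),  # block 1
    nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 14)),  # head
)

for block in model:                       # start with every block frozen
    for p in block.parameters():
        p.requires_grad = False

for stage, block in enumerate(model):     # unfreeze and train one block per stage
    for p in block.parameters():
        p.requires_grad = True
    optimizer = torch.optim.SGD((p for p in model.parameters() if p.requires_grad), lr=1e-3)
    # ... run a short training loop here before moving to the next block ...
```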
The difficulty of appropriately identifying images is exacerbated by the high degree of similarity between distinct classes and the scarcity of data for specific conditions. Because the conditions visually resemble one another, CXR images do not adequately reflect the entire spectrum. This is especially true for patients who have two or more disorders. When CNNs are trained with many parameters, this proximity may result in overfitting, even for categories with small samples. Among the nearly 100,000 tot

References

  1. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA, 21–26 July 2017; pp. 2097–2106.
  2. Alfarghaly, O.; Khaled, R.; Elkorany, A.; Helal, M.; Fahmy, A. Automated radiology report generation using conditioned transformers. Inform. Med. Unlocked 2021, 24, 100557.
  3. Li, Z.; Wang, C.; Han, M.; Xue, Y.; Wei, W.; Li, L.J.; Fei-Fei, L. Thoracic disease identification and localization with limited supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, UT, USA, 18–22 June 2018; pp. 8290–8299.
  4. Chen, B.; Zhang, Z.; Li, Y.; Lu, G.; Zhang, D. Multi-label chest X-ray image classification via semantic similarity graph embedding. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 2455–2468.
  5. Xu, X.; Guo, Q.; Guo, J.; Yi, Z. Deepcxray: Automatically diagnosing diseases on chest X-rays using deep neural networks. IEEE Access 2018, 6, 66972–66983.
  6. Lian, J.; Liu, J.; Zhang, S.; Gao, K.; Liu, X.; Zhang, D.; Yu, Y. A Structure-Aware Relation Network for Thoracic Diseases Detection and Segmentation. IEEE Trans. Med. Imaging 2021, 40, 2042–2052.
  7. Chen, B.; Li, J.; Lu, G.; Yu, H.; Zhang, D. Label co-occurrence learning with graph convolutional networks for multi-label chest X-ray image classification. IEEE J. Biomed. Health Inform. 2020, 24, 2292–2302.
  8. Ouyang, X.; Karanam, S.; Wu, Z.; Chen, T.; Huo, J.; Zhou, X.S.; Wang, Q.; Cheng, J.Z. Learning hierarchical attention for weakly-supervised chest X-ray abnormality localization and diagnosis. IEEE Trans. Med. Imaging 2020, 40, 2698–2710.
  9. Luo, L.; Yu, L.; Chen, H.; Liu, Q.; Wang, X.; Xu, J.; Heng, P.A. Deep mining external imperfect data for chest X-ray disease screening. IEEE Trans. Med. Imaging 2020, 39, 3583–3594.
  10. Yan, C.; Yao, J.; Li, R.; Xu, Z.; Huang, J. Weakly supervised deep learning for thoracic disease classification and localization on chest X-rays. In Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, Washington, DC, USA, 29 August–1 September 2018; pp. 103–110.
  11. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  12. Zhao, G.; Fang, C.; Li, G.; Jiao, L.; Yu, Y. Contralaterally Enhanced Networks for Thoracic Disease Detection. IEEE Trans. Med. Imaging 2021, 40, 2428–2438.
  13. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Summers, R.M. Tienet: Text-image embedding network for common thorax disease classification and reporting in chest X-rays. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9049–9058.
  14. Wang, H.; Jia, H.; Lu, L.; Xia, Y. Thorax-net: An attention regularized deep neural network for classification of thoracic diseases on chest radiography. IEEE J. Biomed. Health Inform. 2019, 24, 475–485.
  15. Wang, H.; Xia, Y. Chestnet: A deep neural network for classification of thoracic diseases on chest radiography. arXiv 2018, arXiv:1807.03058.
  16. Wang, K.; Zhang, X.; Huang, S.; Chen, F.; Zhang, X.; Huangfu, L. Learning to recognize thoracic disease in chest X-rays with knowledge-guided deep zoom neural networks. IEEE Access 2020, 8, 159790–159805.
  17. Ma, Y.; Ma, A.J.; Pan, Y.; Chen, X. Multi-Scale Feature Pyramids for Weakly Supervised Thoracic Disease Localization. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 2481–2485.
  18. Yadav, S.S.; Jadhav, S.M. Deep convolutional neural network based medical image classification for disease diagnosis. J. Big Data 2019, 6, 1–18.
  19. Ratul, R.H.; Husain, F.A.; Purnata, T.H.; Pomil, R.A.; Khandoker, S.; Parvez, M.Z. Multi-Stage Optimization of Deep Learning Model to Detect Thoracic Complications. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia (Virtual), 17–20 October 2021; pp. 3000–3005.
  20. Guan, Q.; Huang, Y.; Luo, Y.; Liu, P.; Xu, M.; Yang, Y. Discriminative Feature Learning for Thorax Disease Classification in Chest X-ray Images. IEEE Trans. Image Process. 2021, 30, 2476–2487.
  21. Seibold, C.; Kleesiek, J.; Schlemmer, H.P.; Stiefelhagen, R. Self-Guided Multiple Instance Learning for Weakly Supervised Thoracic Disease Classification and Localization in Chest Radiographs. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020; pp. 617–634.
  22. Guan, Q.; Huang, Y.; Zhong, Z.; Zheng, Z.; Zheng, L.; Yang, Y. Diagnose like a radiologist: Attention guided convolutional neural network for thorax disease classification. arXiv 2018, arXiv:1801.09927.
  23. Chen, B.; Li, J.; Lu, G.; Zhang, D. Lesion location attention guided network for multi-label thoracic disease classification in chest X-rays. IEEE J. Biomed. Health Inform. 2019, 24, 2016–2027.
  24. Rehman, N.u.; Zia, M.S.; Meraj, T.; Rauf, H.T.; Damaševičius, R.; El-Sherbeeny, A.M.; El-Meligy, M.A. A self-activated CNN approach for multi-class chest-related COVID-19 detection. Appl. Sci. 2021, 11, 9023.
  25. Jennifer, J.S.; Sharmila, T.S. A Neutrosophic Set Approach on Chest X-rays for Automatic Lung Infection Detection. Inf. Technol. Control 2023, 52, 37–52.
  26. Jaszcz, A.; Połap, D.; Damaševičius, R. Lung X-ray image segmentation using heuristic red fox optimization algorithm. Sci. Program. 2022, 2022, 4494139.
  27. Khan, M.A.; Rajinikanth, V.; Satapathy, S.C.; Taniar, D.; Mohanty, J.R.; Tariq, U.; Damaševičius, R. VGG19 network assisted joint segmentation and classification of lung nodules in CT images. Diagnostics 2021, 11, 2208.
  28. Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. Chexnet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv 2017, arXiv:1711.05225.
  29. Wang, H.; Wang, S.; Qin, Z.; Zhang, Y.; Li, R.; Xia, Y. Triple attention learning for classification of 14 thoracic diseases using chest radiography. Med. Image Anal. 2021, 67, 101846.
  30. Srinivasan, P.; Thapar, D.; Bhavsar, A.; Nigam, A. Hierarchical X-ray report generation via pathology tags and multi head attention. In Proceedings of the Asian Conference on Computer Vision (ACCV 2020), Kyoto, Japan, 30 November–4 December 2020; pp. 600–616.
  31. Sanchez, E.; Bulat, A.; Zaganidis, A.; Tzimiropoulos, G. Semi-supervised facial action unit intensity estimation with contrastive learning. In Proceedings of the Asian Conference on Computer Vision (ACCV 2020), Kyoto, Japan, 30 November–4 December 2020; pp. 104–120.
  32. Tang, Y.; Wang, X.; Harrison, A.P.; Lu, L.; Xiao, J.; Summers, R.M. Attention-guided curriculum learning for weakly supervised classification and localization of thoracic diseases on chest radiographs. In Proceedings of the International Workshop on Machine Learning in Medical Imaging (MLMI 2018), Granada, Spain, 16 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 249–258.
  33. Cai, J.; Lu, L.; Xing, F.; Yang, L. Pancreas segmentation in CT and MRI via task-specific network design and recurrent neural contextual learning. In Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 3–21.