Deep Learning Using Explainable Artificial Intelligence and Clustering: History

Explainable artificial intelligence (XAI) is a field of Artificial Intelligence (AI) that seeks to offer insights into black-box models and their predictions. Trust, performance, legal (regulation), and ethical considerations are some reasons researchers advocate for XAI.

  • clustering
  • deep learning
  • explainable artificial intelligence
  • image classification

1. Introduction

Gaining new insights into diseases is crucial for diagnostics. Machine learning (ML) has been employed for this purpose. For instance, in genomics, ML has been used extensively to identify biomarkers for various diseases [1]. However, when patient data are presented as images, deriving insights into characteristics associated with a disease becomes more challenging. A common approach involves extracting a set of features from the images or performing image segmentation [2]. To search for new insights, these features can be analyzed using statistical or machine learning techniques. A recent advancement in this area is the use of graph convolutional networks on regions of interest from brain neuroimaging data [3][4]. Nevertheless, there is a risk that information might be lost during the feature extraction or image segmentation process. Another constraint is that the features and image segments typically rely on pre-existing medical knowledge, thereby limiting the potential for discovering new diagnostic insights.
To address these limitations, a novel framework based on deep learning (DL) is introduced. The rationale behind this proposal is that the intricate patterns learned by DL models for successful predictions/classifications might also contain valuable insights into the relationship between the characteristics of medical images and the associated disease. The DL model operates independently of handcrafted features or human intervention, potentially overcoming the aforementioned constraints.
While there is great potential for DL to uncover new insights, this remains a largely unexplored area of research, likely because DL models are primarily designed for optimal prediction/classification performance, making them inherently difficult to interpret. Consequently, leveraging them to gain new medical insights can be challenging. However, the recent development of explainable artificial intelligence (XAI) provides methods to interpret ML and DL algorithms [5][6]. For instance, XAI techniques can highlight the sections of an input image that predominantly influenced the DL model’s decision, such as rendering a diagnosis. XAI is vital for quality-checking ML methods, ensuring they base their predictions on pertinent parts of the input. These methods also bolster trust in ML system recommendations among users.

2. Explainable AI

XAI is a field of Artificial Intelligence (AI) that seeks to offer insights into black-box models and their predictions. Trust, performance, legal (regulation), and ethical considerations are some reasons researchers advocate for XAI [5]. This is increasingly critical as AI adoption reaches domains like healthcare.
External XAI techniques might explain single predictions through text or visualizations, or characterize models more comprehensively using examples, local explanations, or transformations into simpler models. While text and visualization explanations offer a direct, human-understandable clarification, typically for a specific prediction, explanation by example grants a broader understanding of a model by showcasing examples and predictions similar to the prediction in question; it does not, however, provide an immediate explanation for a particular prediction. Local explanations focus on a subset of the problem, aiming to elucidate the model’s behavior within that restricted context. Finally, to achieve higher interpretability, one can either employ a mimic model, an interpretable surrogate that emulates the black-box model’s behavior, or replace the black-box model altogether.
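As a minimal sketch of the mimic-model idea, the following trains an interpretable decision tree to imitate the predictions of a black-box classifier. The random forest merely stands in for an arbitrary black box; all model and dataset choices here are illustrative assumptions, not a prescribed recipe.

```python
# Sketch: fit an interpretable surrogate (decision tree) to the *predictions*
# of a black-box model, then inspect the surrogate's rules.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# Stand-in black box; could be any opaque classifier.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(load_breast_cancer().feature_names)))
```

A shallow tree trades some fidelity for readability; its printed rules are the "explanation" of the black box's behavior.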
In [7], the authors detail four techniques that elucidate image classifiers by modifying the input. The methods include the method of concomitant variation, the method of agreement, the method of difference, and the method of adjustment. Each method offers a unique approach and insight into how models interpret and classify images.
The technical procedure to retrieve visual explanations from an image classifier comprises two parts: (1) an attribution algorithm furnishing the data for the explanation and (2) a visualization employing that data to generate a human-understandable elucidation. Broadly, attribution algorithms for image classification can be classified as either gradient-based or occlusion-based methods.
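As a minimal sketch of the occlusion-based family (gradient-based methods instead differentiate the class score with respect to the input): a patch is slid across the image, and the drop in the model’s score for the target class at each position is recorded as an attribution map. The model is assumed to be any callable mapping an image batch to class probabilities; patch size, stride, and fill value are illustrative.

```python
import numpy as np

def occlusion_map(model, image, target_class, patch=16, stride=8, fill=0.5):
    """Slide an occluding patch over `image` (H, W, C) and record how much
    the model's score for `target_class` drops at each position.
    `model` is assumed to map a batch of images to class probabilities."""
    h, w, _ = image.shape
    baseline = model(image[None])[0, target_class]  # score on the intact image
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill  # gray out a region
            score = model(occluded[None])[0, target_class]
            heat[i, j] = baseline - score  # large drop => important region
    return heat
```

The resulting heat map, upsampled and overlaid on the input, is the visualization component described above.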
Visualizations represent the interpretations derived from the attribution methods mentioned earlier. However, there is no consensus in the literature regarding what constitutes a “good” explanation. While some believe an explanation should detail parts of an image contributing to its classification, others focus on resolution quality or the trade-offs involved. Indeed, as 2D visualizations cannot fully depict a model’s intricacy, clarity about the limitations and trade-offs is essential when using such explanations.
Other research on improved visualization argues that past studies have overly concentrated on the positive alterations in an input image without contemplating the negative impacts [8][9]. Both perspectives are necessary for a comprehensive explanation, especially for AI adoption in sensitive areas.

3. Unsupervised Learning: Clustering

Clustering is an unsupervised ML technique where the objective is to discern groupings in unlabeled data. It has diverse applications, including anomaly detection, compression, or unveiling intriguing properties in data.

3.1. K-Means and X-Means

K-means is a straightforward clustering algorithm whose running time is linear in the number of data points: O(nkdi) for n points, k clusters, d dimensions, and i iterations. The algorithm commences by initializing a centroid for each of the K clusters. Various strategies exist for this initialization. One method is to select K random points from the dataset as the initial centroids, although this makes K-means sensitive to its initialization. To mitigate this, one can execute K-means multiple times and keep the best result. Another method, K-means++, has been proven to be a more robust initialization scheme, outperforming standard K-means in both accuracy and time [10]. K-means++ selects each initial centroid with a probability proportional to a point’s squared distance from the already-chosen centroids. Once initialized, each point is assigned to its closest centroid. The main loop of the algorithm then moves each centroid to the mean of the points assigned to it and reassigns points to the nearest centroid, repeating until the assignments stabilize.
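As a brief usage sketch, scikit-learn’s KMeans implements the K-means++ seeding described above, and n_init controls how many restarts are run to mitigate sensitivity to initialization; the synthetic data and parameter values are illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# init="k-means++" seeds centroids with probability proportional to squared
# distance from already-chosen centroids; n_init restarts guard against bad seeds.
km = KMeans(n_clusters=4, init="k-means++", n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)
print(km.inertia_)  # sum of squared distances to the nearest centroid
```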
A limitation of K-means is the necessity to predefine K, the number of clusters. If the number of clusters is unknown, one can employ X-means. This method involves running K-means algorithms with various K values. The most fitting number of clusters for a dataset can be determined by evaluating multiple clustering performance metrics.
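A minimal sketch of this model-selection idea follows, using the silhouette coefficient from Section 3.2 as the quality criterion. X-means proper uses a more refined centroid-splitting scheme based on information criteria, so this sweep is only an approximation of the idea.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# Run K-means for a range of K values and score each clustering.
scores = {}
for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(f"best K by silhouette: {best_k}")
```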

3.2. Clustering Performance Evaluation

Clustering performance evaluation metrics can be divided into two main categories: those that require labeled data and those that do not.
The Rand index measures the similarity between the labels a clustering has assigned to data points and the ground-truth labels. This metric differs from standard accuracy measures because, in clustering, the index of the cluster to which a data point is assigned need not match its true label; to measure clustering performance accurately, one must therefore account for permutations of the cluster labels. The Rand index provides a score in the range [0, 1], reflecting the fraction of pairs of points on which the cluster assignment and the ground truth agree. While it is highly interpretable, other methods must be employed when labels are not available.
The silhouette coefficient is a metric suitable for use when no labels are present. It produces a value that increases as clusters become more defined. “More defined” in this context means that the distance between points within a cluster is small, while the distance to other clusters is large. The silhouette coefficient yields a value in the range [−1, 1]: −1 indicates an incorrect clustering, while 1 signifies highly dense clusters that are well separated.
In contrast, the Davies–Bouldin index places less emphasis on the quality of clusters and more on their separation. A low Davies–Bouldin index suggests a significant degree of separation between clusters. Its value starts at 0, which represents the best possible separation, and has no upper bound.
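All three metrics are implemented in scikit-learn. A minimal sketch, assuming synthetic data so that ground-truth labels exist for the Rand index (the silhouette coefficient and Davies–Bouldin index need only the data and the cluster assignments):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (davies_bouldin_score, rand_score,
                             silhouette_score)

X, y_true = make_blobs(n_samples=500, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# The Rand index compares pairs of points, so it is invariant to
# permutations of the cluster labels.
print("Rand index:", rand_score(y_true, labels))
# Label-free metrics: higher silhouette and lower Davies-Bouldin are better.
print("silhouette:", silhouette_score(X, labels))
print("Davies-Bouldin:", davies_bouldin_score(X, labels))
```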

4. Image-to-Image Translation

Image-to-image (I2I) translation refers to the process of learning to map from one image to another [11]. Such a mapping could, for instance, transform a healthy image into one with pathological identifiers. Differences between the input and output images can then be analyzed to extract medical insights. For this purpose, generative adversarial networks (GANs) and variational autoencoders (VAEs) have been employed.
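To make the setup concrete, the following is a heavily condensed sketch of paired I2I training in the style of pix2pix-like conditional GANs, not of any specific method discussed here: a generator maps source to target images and is trained with an adversarial term plus an L1 reconstruction term. The toy networks and the random tensors standing in for an aligned image pair are purely illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy conv generator and discriminator; real I2I work uses U-Nets/PatchGANs.
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))  # patch-level real/fake logits

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

src, tgt = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)  # stand-in pair

for _ in range(100):
    # Discriminator: real (src, tgt) pairs vs. fake (src, G(src)) pairs.
    fake = G(src)
    d_real = D(torch.cat([src, tgt], dim=1))
    d_fake = D(torch.cat([src, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the target (L1).
    d_fake = D(torch.cat([src, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, tgt)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```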
RegGAN has proven to be the most effective I2I solution for medical data [12]. One challenge of I2I in the medical realm is the difficulty in finding aligned image pairs in real-world scenarios. To address this, the authors used magnetic resonance images of the brain and augmented them with varying levels of noise and synthetic misalignment through scaling and rotation. RegGAN surpassed previous state-of-the-art solutions for both aligned and unaligned pairs and across noise levels ranging from none to heavy.
In the realm of I2I translation, there are also initiatives leveraging newer architectures, such as Transformers. Specifically, the Swin transformer-based GAN has demonstrated promising results on medical data, even outperforming RegGAN on identical datasets [13].

5. Data Mining Techniques

Data mining involves extracting valuable knowledge from large datasets by identifying pertinent patterns and relationships [14]. Clustering is one such technique and is also employed here. While clustering has found application in diverse facets of medical knowledge discovery, its direct use in medical imaging remains relatively rare.
In [15], the authors demonstrated that K-means clustering could identify subgroups among newly diagnosed, untreated patients with Parkinson’s disease. This approach unveiled four unique subgroups.
In another study [16], a system was proposed for medical image retrieval. The process began by searching for an image within a known primary class and then refined the search using identified markers that had not previously been labeled. The authors showed that, by using clustering, previously unlabeled subclasses could be detected, facilitating the search for analogous images. This proved beneficial, enhancing a doctor’s diagnostic accuracy from 30% to 63%.
Other research focuses on how data mining techniques can offer insights not directly as new medical knowledge from AI, but rather to equip users with enriched information, allowing them to derive novel medical insights. In [17], a visualization solution was proposed for practitioners, grounded in spectral clustering, to decipher information from 2D and 3D medical data. Although spectral clustering was central to their approach, they recognized that no single clustering method excels universally.

6. Explainable Artificial Intelligence

In the medical domain, XAI research predominantly serves ethical and legal imperatives, fostering trust and privacy, and revealing model biases [18]. The deployment of XAI for medical knowledge discovery is less common, yet some recognize its untapped potential [6].
In [19], the authors illustrated a method to cluster images, assign groups of images importance scores, and subsequently obtain explanations regarding significant components across an entire class. This technique employs super-pixel segmentation to fragment the image. This procedure is replicated for all images in a class. The segments are then clustered, and the significance of each segment group is evaluated. This approach yields explanations highlighting crucial features across the class. Although their evaluation used a general dataset, it appears feasible to adapt this to the medical context. In such cases, this methodology could potentially expose medical insights by categorizing types of markers. This aligns with the objectives here, albeit through a different modality.
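A condensed sketch of that pipeline, with illustrative model and parameter choices: images from one class are fragmented into super-pixels with SLIC, each segment is embedded by a pretrained network, and the embeddings are clustered into candidate "concepts" (the subsequent importance scoring used in [19] is omitted here).

```python
import numpy as np
import torch
from skimage.segmentation import slic
from sklearn.cluster import KMeans
from torchvision import models, transforms

# Pretrained embedder; the final classification layer is dropped.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = transforms.Compose([transforms.ToTensor(),
                           transforms.Resize((224, 224), antialias=True)])

def segment_embeddings(image):
    """image: float array (H, W, 3) in [0, 1]. Returns one embedding per
    super-pixel segment; the rest of the image is filled with the mean color."""
    segments = slic(image, n_segments=15, compactness=10)
    embs = []
    for seg_id in np.unique(segments):
        patch = np.where((segments == seg_id)[..., None], image, image.mean())
        with torch.no_grad():
            embs.append(backbone(prep(patch.astype(np.float32))[None]).squeeze(0))
    return torch.stack(embs)

# Pool segments from many images of one class, then cluster into "concepts".
images = [np.random.rand(128, 128, 3) for _ in range(8)]  # stand-in class images
all_embs = torch.cat([segment_embeddings(im) for im in images]).numpy()
concepts = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(all_embs)
```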
In [20], the authors exemplified how XAI can be harnessed for medical knowledge discovery. Using a ResNet, they autonomously analyzed ECG data. Their model could predict the sex of a subject with 86% accuracy, a feat they claim is nearly unachievable for human cardiologists. To elucidate the model’s learned insights, they turned to XAI: they modified Grad-CAM to present a visual justification of the segments deemed crucial for the prediction. This process revealed what the authors termed “new insights into electrophysiology”.
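For reference, a compact sketch of standard, unmodified Grad-CAM, assuming a PyTorch ResNet: forward activations and backward gradients of the last convolutional block are captured with hooks, the gradients are pooled into channel weights, and a ReLU-ed weighted sum, upsampled to the input size, gives the heatmap.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

acts, grads = {}, {}
# Capture activations and gradients of the last convolutional block.
model.layer4.register_forward_hook(lambda m, inp, out: acts.update(v=out))
model.layer4.register_full_backward_hook(lambda m, gin, gout: grads.update(v=gout[0]))

def grad_cam(x, class_idx=None):
    """x: (1, 3, H, W) normalized image tensor; returns an (H, W) heatmap."""
    logits = model(x)
    idx = int(logits.argmax()) if class_idx is None else class_idx
    model.zero_grad()
    logits[0, idx].backward()
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)          # pooled gradients
    cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True)) # weighted sum + ReLU
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # stand-in input
```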

This entry is adapted from the peer-reviewed paper 10.3390/diagnostics13223413

References

  1. Zhang, X.; Jonassen, I.; Goksøyr, A. Machine learning approaches for biomarker discovery using gene expression data. Bioinformatics 2021.
  2. Zhang, X.; Zhang, Y.; Zhang, G.; Qiu, X.; Tan, W.; Yin, X.; Liao, L. Deep Learning With Radiomics for Disease Diagnosis and Treatment: Challenges and Potential. Front. Oncol. 2022, 12, 773840.
  3. Zhou, H.; He, L.; Zhang, Y.; Shen, L.; Chen, B. Interpretable graph convolutional network of multi-modality brain imaging for alzheimer’s disease diagnosis. In Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India, 28–31 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–5.
  4. Zhou, H.; Zhang, Y.; Chen, B.Y.; Shen, L.; He, L. Sparse Interpretation of Graph Convolutional Networks for Multi-modal Diagnosis of Alzheimer’s Disease. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 469–478.
  5. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115.
  6. Nagahisarchoghaei, M.; Nur, N.; Cummins, L.; Nur, N.; Karimi, M.M.; Nandanwar, S.; Bhattacharyya, S.; Rahimi, S. An Empirical Survey on Explainable AI Technologies: Recent Trends, Use-Cases, and Categories from Technical and Application Perspectives. Electronics 2023, 12, 1092.
  7. Hoffman, R.; Miller, T.; Mueller, S.T.; Klein, G.; Clancey, W.J. Explaining Explanation, Part 4: A Deep Dive on Deep Nets. IEEE Intell. Syst. 2018, 33, 87–95.
  8. Zintgraf, L.M.; Cohen, T.S.; Adel, T.; Welling, M. Visualizing Deep Neural Network Decisions: Prediction Difference Analysis. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
  9. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215.
  10. Arthur, D.; Vassilvitskii, S. K-means++ the advantages of careful seeding. In Proceedings of the Eighteenth annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–9 January 2007; pp. 1027–1035.
  11. Alotaibi, A. Deep Generative Adversarial Networks for Image-to-Image Translation: A Review. Symmetry 2020, 12, 1705.
  12. Kong, L.; Lian, C.; Huang, D.; Li, Z.; Hu, Y.; Zhou, Q. Breaking the Dilemma of Medical Image-to-image Translation. In Advances in Neural Information Processing Systems; Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2021; Volume 34, pp. 1964–1978.
  13. Yan, S.; Wang, C.; Chen, W.; Lyu, J. Swin transformer-based GAN for multi-modal medical image translation. Front. Oncol. 2022, 12, 942511.
  14. Neha, D.; Vidyavathi, B. A survey on applications of data mining using clustering techniques. Int. J. Comput. Appl. 2015, 126, 7–12.
  15. Erro, R.; Vitale, C.; Amboni, M.; Picillo, M.; Moccia, M.; Longo, K.; Santangelo, G.; De Rosa, A.; Allocca, R.; Giordano, F.; et al. The Heterogeneity of Early Parkinson’s Disease: A Cluster Analysis on Newly Diagnosed Untreated Patients. PLoS ONE 2013, 8, e70244.
  16. Dy, J.; Brodley, C.; Kak, A.; Broderick, L.; Aisen, A. Unsupervised feature selection applied to content-based retrieval of lung images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 373–378.
  17. Schultz, T.; Kindlmann, G.L. Open-Box Spectral Clustering: Applications to Medical Image Analysis. IEEE Trans. Vis. Comput. Graph. 2013, 19, 2100–2108.
  18. Sheu, R.K.; Pardeshi, M.S. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System. Sensors 2022, 22, 8068.
  19. Ghorbani, A.; Wexler, J.; Zou, J.Y.; Kim, B. Towards Automatic Concept-based Explanations. In Advances in Neural Information Processing Systems; Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2019; Volume 32.
  20. Hicks, S.A.; Isaksen, J.L.; Thambawita, V.; Ghouse, J.; Ahlberg, G.; Linneberg, A.; Grarup, N.; Strümke, I.; Ellervik, C.; Olesen, M.S.; et al. Explaining deep neural networks for knowledge discovery in electrocardiogram analysis. Sci. Rep. 2021, 11, 10949.