Local Explanation of Practical Industrial AI Applications

Numerous explainable artificial intelligence (XAI) use cases have been developed to solve real problems in industrial applications while preserving the explainability of the underlying artificial intelligence (AI) models, so that their quality can be judged and the models can be held accountable if they become corrupted. Understanding the state-of-the-art methods, pointing out their issues, and deriving future directions are therefore important to drive XAI research efficiently.

Keywords: machine learning; explainable artificial intelligence; local explanation techniques; industrial application

1. Introduction

Machine learning (ML) and deep learning (DL) models have achieved remarkable success in a variety of domains, including healthcare [1][2][3], financial systems [4][5], criminal justice [6][7], and cybersecurity [8][9]. While accuracy is critical, the emphasis on it has frequently led developers to sacrifice interpretability, making models more complex and difficult to comprehend [10]. When a learning model has the authority to make critical decisions that influence people's well-being, this lack of interpretability becomes a major concern.
To overcome this issue, explainable artificial intelligence (XAI) approaches must provide end-users with coherent explanations of these models' decision-making processes. Because these models are black boxes, it is difficult to understand how they arrive at their conclusions. XAI technologies provide visualization techniques for comprehending the decision-making processes of these models, making prediction explanations easier to understand and communicate. Although the concept of XAI dates back to early studies on expert systems [11][12], it has only recently received significant attention in academia, with an increasing number of research papers being published on the topic [13].
The increasing use of ML and DL models in various applications has highlighted the need to explain the decision-making processes to gain end-users’ trust in industrial applications. However, it is essential to investigate the effectiveness and limitations of local explanation techniques in industrial settings.

2. Local Explanation Techniques in Industrial Applications

2.1. Autonomous Systems and Robotics

Several publications have focused on XAI local explanations for autonomous systems and robotics.
Zhang et al., 2020 [14] employed the SHAP method and the probability of SHAP values to explain deep reinforcement learning (DRL)-based emergency control in power systems, in the context of time-series data. Their study applied a post hoc, model-agnostic XAI technique. As an alternative, Renda et al. [15] proposed the FED-XAI idea, which suggests federated learning of XAI models for AI-pervasive 6G networks while proposing an ante hoc XAI strategy. The concept is anticipated to enhance the performance, intelligence, and trustworthiness of automated vehicle networking, improve user experience, and foster end-users' protection in network AI processes.
In the context of image data, He et al., 2021 [16] proposed a new DRL scheme for model explainability using an agnostic and post hoc XAI approach. They developed a saliency map generation method that merges class activation mapping (CAM) and SHAP values to create visual and textual action explanations for non-expert users, and the model can be refined and enhanced based on the explanations generated. On the other hand, Zhang et al., 2022 [17] employed a distinctive and ante hoc XAI strategy to improve the performance and transparency of autonomous driving systems' decision making by offering multi-modal explanations, particularly when interacting with pedestrians.
Cui et al., 2022 [18] used SHAP and random forest (RF) approaches on tabular data to encourage transparent DRL-based decision making, adopting an agnostic and post hoc XAI approach. Regarding text data, and employing an agnostic and post hoc approach, Nahata et al., 2021 [19] developed interpretable machine learning models to evaluate and forecast collision risk based on various sensor data features.
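For illustration, the following minimal sketch shows how a post hoc, model-agnostic local explanation of a tree-ensemble decision model can be obtained with SHAP; it is not code from [18], and the features, labels, and the "brake vs. keep lane" framing are assumptions made for the example.

```python
# A minimal sketch (not code from [18]): SHAP values for one decision of a
# random-forest model. Features, labels, and the driving framing are assumed.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # assumed features: speed, gap, accel, heading
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "brake (1) / keep lane (0)" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)           # exact SHAP values for tree ensembles
local_expl = explainer.shap_values(X[:1])       # per-feature contributions for one decision
print("SHAP contributions for this instance:", local_expl)
```

The per-feature contributions sum (together with the explainer's base value) to the model output for that single instance, which is what makes the explanation local.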

2.2. Energy and Building Management

XAI local explanation research has concentrated on time-series and tabular data with energy and building management applications.
Much research has used an agnostic and post hoc design for time-series data, with two main approaches: SHAP and LIME. For example, Arjunan et al. [20] proposed an approach that improves the current Energy Star calculation method and employs extra model output processing to illuminate how a building can reach a particular score. Li et al. [21] used SHAP to analyze and interpret an XGBoost-based grid-search power load forecasting model. Zdravković et al. [22] presented AI-assisted control of district heating systems (DHS) for effective heat distribution. Kim et al. [23] explained an energy demand forecasting model employing feature importance and attention methods. To support and elucidate the forecasts, Grzeszczyk et al. [24] suggested a strategy based on the LIME method. Both Wenninger et al. [25] and Moon et al. [26] used distinct and ante hoc XAI techniques with feature-importance analysis. While Wenninger et al. proposed the QLattice for estimating the final energy performance of residential buildings, Moon et al. evaluated the most crucial aspects of electrical load forecasting. However, several of these studies' weaknesses must be addressed in future research. For example, the generalizability of the procedures and outcomes relates only to the input data tested, and the explanations may be difficult for average consumers to comprehend [20]. Future studies could concentrate on developing technical research and applications that provide more intuitive explanations to general consumers [21]. Furthermore, robust model interpretability can sometimes be difficult to achieve, and future studies should investigate alternative models and ways of explaining black-box models [25].
The application of various XAI techniques to tabular data, with an agnostic and post hoc approach, has also been explored; in particular, LIME and SHAP have been investigated for generating local explanations. Examples include Srinivasan's SHAP-based XAI-FDD [27], Sim's SHAP-based analysis of input variables for energy consumption forecasting [28], and Wastensteiner's LIME- and SHAP-based visualizations for personalized feedback on electricity consumption time-series data [29]. Srinivasan et al. [27] proposed XAI-FDD (explainable artificial intelligence fault detection and diagnosis), which uses explanations generated for each data instance to detect incipient faults. By combining human expertise with the explanations provided by the XAI system, this approach could potentially improve classification accuracy. XAI-FDD can be applied to air-handling systems, renewable energy sources, and other building energy components. Sim et al. [28] used SHAP-based XAI to examine how input variables affect energy use forecasting. Their research divided the input variables into three categories (strong, ambiguous, and weak), providing insights into which variables had the most significant impact. However, they only analyzed certain targets, input factors, and predictive models.
Future studies may need to consider a more comprehensive range of socioeconomic variables for different types of buildings. Wastensteiner et al. [29] developed five visualizations using ML and XAI methods to provide personalized feedback on electricity consumption time-series data, incorporating domain-specific knowledge. However, their approach only considered SHAP-based XAI visualizations, and future research could explore other XAI methods, such as LIME, for additional visualizations.
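As a concrete illustration of the LIME-style local explanations discussed above, the sketch below explains a single load forecast produced by a gradient-boosting regressor. It is not code from [24] or [29]; the feature names and synthetic data are assumptions.

```python
# A minimal sketch, assuming LIME's tabular explainer and a gradient-boosting
# load-forecasting regressor; feature names and data are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(1)
feature_names = ["outdoor_temp", "hour_of_day", "occupancy", "previous_load"]  # assumed
X = rng.normal(size=(1000, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=1000)           # synthetic load

model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X[0], model.predict, num_features=4)
print(explanation.as_list())   # per-feature weights behind this single forecast
```

LIME fits a simple surrogate model around the perturbed neighborhood of the chosen instance, so the printed weights describe only that one forecast rather than the model globally.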

2.3. Environment

Researchers in the field of environmental science have focused on XAI techniques applied to time-series and image datasets.
SHAP and LIME approaches with agnostic and post hoc designs have been applied to time-series data. Graham et al. [30] employed deep neural networks (DNN) and XAI algorithms on the "Dynomics" platform to explore patterns in genome-scale transcriptional data and identify the genes contributing to these patterns. Gao et al. [31] used LIME to interpret dynamic fuzzy neural network (DFNN) models that assess the danger levels of a building in Cameron County, Louisiana, in response to a fictitious impending hurricane with updated weather predictions. Despite these studies' achievements, there are still limitations, and future research can expand the analytical targets and models employed in these investigations.
The use of SHAP and LIME techniques with agnostic and post hoc designs on image data has been explored by various research groups [32][33][34]. Ryo et al. [32] discuss the importance of XAI in ecological modeling and list tools that can be employed to understand complex model behavior at different scales; integrating XAI with ecological and biogeographical knowledge can enhance the accuracy of machine learning models. Kim et al. [34] used the XGBoost model and SHAP to analyze urban expansion, where land-cover characteristics were identified as the primary factor, followed by topographic attributes. However, the accuracy of the XGBoost-SHAP model should still be assessed against other approaches, such as AutoML algorithms, and compared with other XAI methods. Dikshit et al. [33] built an explainable deep learning model and compared it to physical-based models. Their study examined how predictors interacted locally for distinct drought conditions and timeframes, providing insight into how the model produced specific findings at different spatiotemporal intervals. Future research should look at SHAP plots for long-term forecasting and other additive SHAP properties.

2.4. Finance

In the finance sector, the majority of researchers have focused on using LIME and SHAP methods with an agnostic and post hoc approach for tabular and time-series datasets. Some researchers, including Gramegna et al. [35], Benhamou et al. [36], Babaei et al. [37], and de Lange et al. [38], have applied the SHAP method on tabular datasets, while Kumar et al. [39] and Bussmann et al. [40] have developed their SHAP methods on time-series datasets. Additionally, Gite et al. [41] have utilized the LIME method for a tabular dataset.
Several researchers have focused on applying explainable AI models to tabular datasets in finance. For example, Gramegna et al. [35] proposed a model that uses Shapley values and similarity clustering to explain why customers either buy or abandon non-life insurance coverage. In contrast, Benhamou et al. [36] utilized SHAP to identify important variables in stock market crises and provide local explanations of the probability of a crisis at each date. Babaei et al. [37] developed a Shapley-based model to predict the expected return of small and medium-sized enterprises based on their credit risk and expected profitability. Similarly, de Lange et al. [38] combined SHAP with a LightGBM (light gradient-boosting machine) model to interpret the explanatory variables affecting credit scoring, which outperformed the bank's logistic regression model. Additionally, Gite et al. [41] proposed a model that uses long short-term memory (LSTM) and efficient machine learning techniques to accurately predict stock prices based on user sentiments derived from news headlines; this model can serve as a decision maker in algorithmic trading. They suggest further research directions that include automated prediction of financial news headlines and adding emotion-based GIFs to enhance the model's appeal. Future research may also explore applying these methods to other situations in the finance sector, such as underwriting and claims management.
Kumar et al. [39] suggested using SHAP with a popular deep reinforcement learning architecture, the deep Q-network (DQN), to explain an agent's actions in financial stock trading on time-series data. Additionally, they advised expanding the method to continuous action spaces and using DRL models such as deep deterministic policy gradient (DDPG) and advantage actor–critic (A2C), improving the explanation by adding more technical indicators as features. In another study, Bussmann et al. [40] created an explainable AI model for fintech risk management, particularly for assessing the risks associated with credit borrowing on peer-to-peer lending platforms. They used Shapley values to interpret AI predictions based on underlying explanatory variables. Future research could enhance prediction understanding by clustering the Shapley values.
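The clustering of Shapley values mentioned above can be sketched as follows. This is a hedged illustration, not the pipeline of [35] or [40]: local SHAP explanations are computed for every customer and then grouped with k-means so that customers with similar explanation patterns fall into the same cluster. The credit-risk features, labels, and cluster count are assumptions.

```python
# A minimal sketch of explanation-space clustering: SHAP values are computed per
# instance and then clustered, grouping customers by *why* the model scores them,
# not by their raw features. Data, features, and cluster count are assumptions.
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))                            # e.g. income, debt ratio, age (assumed)
y = (X[:, 0] - X[:, 1] + 0.3 * X[:, 2] > 0).astype(int)   # synthetic default label

model = GradientBoostingClassifier().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)    # one local explanation per customer

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(shap_values)
print("Customers per explanation cluster:", np.bincount(clusters))
```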

2.5. Healthcare

In the healthcare sector, most research studies have focused on applying Grad-CAM, SHAP, and LIME methods.
Numerous studies have utilized the SHAP method to interpret tabular data. For example, Kim et al. [42] developed an interpretable machine learning model for predicting early neurological deterioration related to stroke, while Beebe-Wang et al. [43] developed an approach that captures non-linear feature effects in personalized risk predictions. Rashed-Al-Mahfuz et al. [44] developed models for diagnosing chronic kidney disease (CKD) that identified important features consistent with clinical knowledge. Zhang et al. [45] also used SHAP in machine learning models predicting acute kidney injury, providing physicians with richer information on feature importance to improve decision making. These studies highlight the usefulness of SHAP in developing interpretable models for medical applications and its potential to improve clinical decision making.
Three XAI approaches were used for time-series data: attention, layer-wise relevance propagation (LRP), and Grad-CAM. Mousavi et al. [46] introduced HAN-ECG, a bidirectional recurrent neural network-based technique that employs three levels of attention mechanisms to detect atrial fibrillation (AF) patterns in an electrocardiogram (ECG). However, the success of the strategy depends on the preprocessing stage, and the method should be applied to different ECG leads and arrhythmias in the future to extract novel patterns that may be beneficial in detecting arrhythmias. Filtjens et al. [47] proposed an interpretability method to explain DNN decisions when recognizing the movement preceding the freezing of gait (FOG) in Parkinson's disease (PD). The recommended pipeline can help physicians explain DNN conclusions, lets ML experts check the generalizability of their models, and could be used to initiate FOG treatment: the interpretations can prompt the provision of external stimuli and help evaluate the effectiveness of the intervention by visualizing the diminished relevance attributed to FOG. Dutt et al. [48] proposed SleepXAI, an explainable unified method for multi-class sleep stage classification utilizing modified Grad-CAM. This technique explains the multi-class classification of univariate EEG (electroencephalogram) signals while significantly increasing overall sleep stage classification accuracy. For the predicted sleep stage, SleepXAI generates a heatmap depicting the relevant features learned from the univariate EEG data, enabling sleep specialists to link observed characteristics to traditional manual sleep grading techniques and boosting confidence in opaque systems through its justifications.
Various studies have applied the Grad-CAM method for interpreting image data, focusing on medical imaging. For example, Figueroa et al. [49] proposed a two-stage training process to enhance classification performance and guide the CNN's attention to lesion areas. To detect COVID-19 on chest X-ray images, Chetoui et al. [50] developed DeepCCXR (deep COVID-19 CXR detection), while Barata et al. proposed a hierarchical neural network with spatial attention modules for skin cancer diagnosis. To help triage COVID-19 patients more quickly, Singh et al. [51] proposed an AI-based solution, Malhotra et al. [52] presented COMiT-Net, and Oztekin et al. [53] proposed an explainable deep learning model. These investigations show that the models' decision-making processes become more accurate and comprehensible and can be directed to focus on particular areas of interest. However, limitations include the need for more accurately labeled datasets, limited data availability, and the challenge of extending the models to other types of diseases with radiography images.
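To make the Grad-CAM procedure mentioned above concrete, here is a minimal sketch under stated assumptions: a torchvision ResNet-18 with random weights and a random tensor stand in for a radiograph classifier and its input, so this is not the pipeline of any study cited here. Gradients of the predicted class score are pooled over the last convolutional feature maps to produce a coarse localization heatmap.

```python
# A minimal Grad-CAM sketch (assumptions: ResNet-18 backbone, random input in
# place of a chest X-ray). Gradients of the class score weight the feature maps.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}

def fwd_hook(module, inputs, output): feats["a"] = output
def bwd_hook(module, grad_in, grad_out): grads["a"] = grad_out[0]

layer = model.layer4[-1]                        # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                 # placeholder for a radiograph
score = model(x)[0].max()                       # logit of the predicted class
model.zero_grad(); score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)       # global-average-pooled gradients
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized heatmap over the image
```

Because the weighting comes from gradients rather than from a specific head architecture, Grad-CAM applies to most CNNs, which is one reason it recurs across the medical-imaging studies above.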

2.6. Industrial Engineering

In industrial engineering, time-series and image datasets are commonly used for local explanation methods such as Grad-CAM and SHAP.
Brusa et al. [54] and Hong et al. [55] both employed local explanation methods in their ML models to interpret predictions on time-series data: Hong et al. used XAI algorithms to find the most important sensors when predicting the remaining useful life of a turbofan engine, and Brusa et al. used SHAP values to diagnose flaws in industrial bearings. Other interpretability approaches, such as LRP and LIME, were used by Grezmak et al. [56] and by Serradilla et al. [57]. Grezmak et al. used LRP to evaluate the effectiveness of a CNN trained on images of the time-frequency spectra of vibration signals measured on an induction motor. Serradilla et al., in contrast, employed XAI methods to direct the development, improvement, and interpretation of a model that estimates remaining life during fatigue testing based on condition and setting variables.
For image data, Grad-CAM and CAM (class activation mapping) are common agnostic and post hoc explanation methods used in the industrial engineering sector. Chen et al. [58] used Grad-CAM to interpret the predictions of their CNN (convolutional neural network) model for bearing fault classification using time-frequency spectra. Similarly, Sun et al. [59] employed CAM to diagnose faults and recognize images in the cantilever beam case. Both studies demonstrated the feasibility of using explainable deep learning to diagnose faulty components from images. Future research directions may include testing different equipment settings to determine the minimum requirements for successfully implementing these techniques.
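As an illustration of the CAM idea used in [59], the following hedged sketch weights the last convolutional feature maps by the classifier weights of the predicted class. The assumptions: a torchvision ResNet-18, whose global-average-pooling plus fully-connected head matches what the original CAM formulation requires, and a random tensor in place of a vibration spectrogram image; it is not the authors' code.

```python
# A minimal CAM sketch: the fully-connected weights of the predicted class are
# applied to the last conv feature maps. Model choice and input are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feature_maps = {}

def save_features(module, inputs, output):
    feature_maps["last_conv"] = output              # activations of the last conv stage

model.layer4.register_forward_hook(save_features)

x = torch.randn(1, 3, 224, 224)                     # placeholder "spectrogram image"
with torch.no_grad():
    logits = model(x)
cls = logits.argmax(dim=1).item()                   # predicted fault class

w = model.fc.weight[cls]                            # classifier weights for that class
cam = torch.einsum("c,chw->hw", w, feature_maps["last_conv"][0])  # weighted sum of maps
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)          # normalized heatmap
cam = F.interpolate(cam[None, None], size=x.shape[-2:], mode="bilinear",
                    align_corners=False)[0, 0]      # upsample to input resolution
```

Unlike Grad-CAM, this formulation needs no backward pass, but it only applies to networks with a global-average-pooling classification head.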

2.7. Cybersecurity

Local explanations based on XAI techniques in cybersecurity are mainly applied to tabular data. 
An ML-based intrusion detection system (IDS) employing an ensemble trees technique was proposed by Le et al. [60]. The method employs decision tree and random forest classifiers and does not require much computational power to train the models. The SHAP approach was utilized to explain and interpret the models' classification conclusions, allowing cybersecurity specialists to quickly optimize and evaluate the validity of their judgments. In a different study, Karn et al. [61] developed an ML-based detection strategy for anomalous pods in a Kubernetes cluster. To identify and explain crypto-mining applications, the system uses auto-encoding-based techniques for LSTM models together with SHAP and LIME. The system's explainability is critical for system administrators to grasp the system-level rationale behind disruptive administrative decisions. Wang et al. [62] also used SHAP to improve the interpretation of IDSs by integrating local and global explanations. Their proposed architecture can improve IDS transparency, allowing cybersecurity experts to make better decisions and optimize the IDS structure. Finally, Alenezi et al. [63] employed RFC (random forest classifier), XGBoost, and a sequential algorithm to study two large cybersecurity datasets and used three SHAP approaches to explain the feature contributions. The study emphasizes the need to understand the value of data in order to increase the explanatory capability of cybersecurity threat data using SHAP methodologies, which can guide future data collection operations in cybersecurity and other fields.
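A hedged sketch of how a single tree-based IDS explainer can provide both the local (per-alert) and global (dataset-wide) views discussed above follows; it is not code from [60] or [62], and the flow features and attack labels are synthetic assumptions.

```python
# A minimal sketch: one TreeExplainer yields a local explanation for a single
# flagged flow and a global feature ranking over the whole dataset. Synthetic data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 6))                   # assumed flow features: duration, bytes, ...
y = (X[:, 0] - X[:, 2] > 0.5).astype(int)        # synthetic "attack" label

ids = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
sv = shap.TreeExplainer(ids).shap_values(X)

attack_sv = sv[1] if isinstance(sv, list) else sv[..., 1]   # SHAP values for the attack class
print("Local explanation of one alert:", attack_sv[0])      # why this flow was flagged
print("Global feature importance:", np.abs(attack_sv).mean(axis=0))
```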
The benefits of employing SHAP for XAI in cybersecurity include improved model transparency and interpretability, the ability for cybersecurity specialists to make better decisions, and the optimization of model structures. However, drawbacks include the method’s intricacy and the possibility of misleading explanations if the model is not thoroughly understood.

2.8. Smart Agriculture

In the smart agriculture sector, local explanations have been applied to various datasets, including tabular, time-series, and text data.
Using publicly available tabular data, Ryo et al. [64] used XAI and interpretable ML to study the influence of no-tillage on agricultural yield compared with conventional tillage. The authors assessed the importance of factors for prediction, variable interactions, and the relationship between relevant variables and the response variable. Adak et al. [65] used sentiment analysis to assess customer evaluations in the food delivery services (FDSs) domain and justified the predictions using SHAP and LIME. Viana et al. [66] proposed a machine learning model to discover the factors influencing agricultural land use at the regional level for wheat, maize, and olive grove plantings; using a model-agnostic methodology, they presented global and local interpretations of the significant elements. Cartolano et al. [67] applied XAI technologies to the "Crop Recommendation" dataset to make ML models clear and trustworthy. Their research focused on sensitivity analysis, comparing what the models discovered to what farmers and agronomists already knew. Future research could look into other XAI methodologies and visualization techniques in sectors as diverse as computational creativity and emotion recognition.
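As a hedged illustration of the LIME-style justification of review sentiment described for [65] (not the authors' pipeline; the tiny corpus, labels, and model are invented for the example), the sketch below trains a simple TF-IDF classifier and asks LIME which words drive one prediction.

```python
# A minimal sketch: word-level LIME explanation of a review-sentiment prediction.
# The corpus, labels, and classifier are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

reviews = ["food arrived cold and late", "quick delivery and tasty meal",
           "rude driver, never again", "great service, hot food"]
labels = [0, 1, 0, 1]                                   # 0 = negative, 1 = positive

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(reviews, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("delivery was late but the food was tasty",
                                 clf.predict_proba, num_features=5)
print(exp.as_list())    # word-level contributions to the predicted sentiment
```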
Several studies have looked into the use of deep neural networks for time-series data. For example, Wolanin et al. [68] used a deep neural network to predict wheat yield in the Indian wheat belt and employed regression activation maps (RAMs) to improve interpretability and show the model's learned features and yield drivers. Similarly, Kawakura et al. [69] created an XAI-based technique for agri-directors to train agri-workers by analyzing varied data and merging agri-informatics, statistics, and human dynamics. Li et al. [70] created the ExamPle model, which employs a Siamese network and multi-view representation to forecast plants' small secreted peptides (SSPs) and reveal the SSPs' sequential patterns. Additionally, Kundu et al. [71] presented the AIDCC (automatic and intelligent data collector and classifier) framework, which uses deep learning and IoT to automate the collection of imaging and parametric datasets from pearl millet farms, disease prediction, and feature visualization.
Finally, Apostolopoulos et al. [72] showed that the Xception network outperforms other CNNs in recognizing suspicious situations in various photos, and post hoc explainability was improved by using the Grad-CAM++ and LIME algorithms. They recommended that future studies look at various methodologies, such as fuzzy logic and fuzzy cognitive maps (FCM), to examine timely fire and smoke incident detection.
Ngo et al. [73] have presented OAK4XAI (model towards out-of-box explainable artificial intelligence), an XAI system for text data analysis that combines domain knowledge semantics via an ontology and knowledge map model. To describe the knowledge mined in agriculture, they developed the agriculture computer ontology (AgriComO), a well-structured framework suitable for agriculture and computer domains. In future research, the authors intend to create an explanation interface as a service for user engagement and to expand the model by integrating multiple ML algorithms for prediction utilizing explainable methodologies.

References

  1. Alex, D.T.; Hao, Y.; Armin, H.A.; Arun, D.; Lide, D.; Paul, R. Patient Facial Emotion Recognition and Sentiment Analysis Using Secure Cloud With Hardware Acceleration. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; University of Texas at San Antonio: San Antonio, TX, USA, 2018; pp. 61–89.
  2. Lee, S.M.; Seo, J.B.; Yun, J.; Cho, Y.-H.; Vogel-Claussen, J.; Schiebler, M.L.; Gefter, W.B.; Van Beek, E.J.; Goo, J.M.; Lee, K.S.; et al. Deep Learning Applications in Chest Radiography and Computed Tomography. J. Thorac. Imaging 2019, 34, 75–85.
  3. Chen, R.; Yang, L.; Goodison, S.; Sun, Y. Deep-learning Approach to Identifying Cancer Subtypes Using High-dimensional Genomic Data. Bioinformatics 2020, 36, 1476–1483.
  4. Byanjankar, A.; Heikkila, M.; Mezei, J. Predicting Credit Risk in Peer-to-Peer Lending: A Neural Network Approach. In Proceedings of the 2015 IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, 7–10 December 2015; pp. 719–725.
  5. Chen, Y.-Q.; Zhang, J.; Ng, W.W.Y. Loan Default Prediction Using Diversified Sensitivity Undersampling. In Proceedings of the 2018 International Conference on Machine Learning and Cybernetics (ICMLC), Chengdu, China, 15–18 July 2018; pp. 240–245.
  6. Zhang, Z.; Neill, D.B. Identifying Significant Predictive Bias in Classifiers. arXiv 2016, arXiv:1611.08292. Available online: http://arxiv.org/abs/1611.08292 (accessed on 20 February 2023).
  7. Hester, N.; Gray, K. For Black men, Being Tall Increases Threat Stereotyping and Police Stops. Proc. Natl. Acad. Sci. USA 2018, 115, 2711–2715.
  8. Parra, G.D.L.T.; Rad, P.; Choo, K.-K.R.; Beebe, N. Detecting Internet of Things Attacks Using Distributed Deep Learning. J. Netw. Comput. Appl. 2020, 163, 102662.
  9. Chacon, H.; Silva, S.; Rad, P. Deep Learning Poison Data Attack Detection. In Proceedings of the 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), Portland, OR, USA, 4–6 November 2019; pp. 971–978.
  10. Dam, H.K.; Tran, T.; Ghose, A. Explainable Software Analytics. In Proceedings of the ICSE-NIER ’18: Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results, Gothenburg, Sweden, 27 May–3 June 2018; pp. 53–56.
  11. Scott, A.C.; Clancey, W.J.; Davis, R.; Shortliffe, E.H. Explanation Capabilities of Production-Based Consultation Systems; Technical Report; Stanford University: Stanford, CA, USA, 1977.
  12. Swartout, W.R. Explaining and Justifying Expert Consulting Programs. In Computer-Assisted Medical Decision Making. Computers and Medicine; Reggia, J.A., Tuhrim, S., Eds.; Springer: New York, NY, USA, 1985.
  13. Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harv. J. Law Technol. 2018, 31, 842–861. Available online: https://ssrn.com/abstract=3063289 (accessed on 20 February 2023).
  14. Zhang, K.; Xu, P.; Zhang, J. Explainable AI in Deep Reinforcement Learning Models: A SHAP Method Applied in Power System Emergency Control. In Proceedings of the 2020 IEEE 4th Conference on Energy Internet and Energy System Integration (EI2), Wuhan, China, 30 October–1 November 2020; pp. 711–716.
  15. Renda, A.; Ducange, P.; Marcelloni, F.; Sabella, D.; Filippou, M.C.; Nardini, G.; Stea, G.; Virdis, A.; Micheli, D.; Rapone, D.; et al. Federated Learning of Explainable AI Models in 6G Systems: Towards Secure and Automated Vehicle Networking. Information 2022, 13, 395.
  16. He, L.; Aouf, N.; Song, B. Explainable Deep Reinforcement Learning for UAV Autonomous Path Planning. Aerosp. Sci. Technol. 2021, 118, 107052.
  17. Zhang, Z.; Tian, R.; Sherony, R.; Domeyer, J.; Ding, Z. Attention-Based Interrelation Modeling for Explainable Automated Driving. In IEEE Transactions on Intelligent Vehicles; IEEE: Piscataway, NJ, USA, 2022.
  18. Cui, Z.; Li, M.; Huang, Y.; Wang, Y.; Chen, H. An Interpretation Framework for Autonomous Vehicles Decision-making via SHAP and RF. In Proceedings of the 2022 6th CAA International Conference on Vehicular Control and Intelligence (CVCI), Nanjing, China, 28–30 October 2022; pp. 1–7.
  19. Nahata, R.; Omeiza, D.; Howard, R.; Kunze, L. Assessing and Explaining Collision Risk in Dynamic Environments for Autonomous Driving Safety. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 223–230.
  20. Arjunan, P.; Poolla, K.; Miller, C. Energystar++: Towards More Accurate and Explanatory Building Energy Benchmarking. Appl. Energy 2020, 276, 115413.
  21. Li, M.; Wang, Y. Power Load Forecasting and Interpretable Models based on GS_XGBoost and SHAP. J. Phys. Conf. Ser. 2022, 2195, 012028.
  22. Zdravković, M.; Ćirić, I.; Ignjatović, M. Towards Explainable AI-assisted Operations in District Heating Systems. IFAC-PapersOnLine 2021, 54, 390–395.
  23. Kim, M.; Jun, J.-A.; Song, Y.; Pyo, C.S. Explanation for Building Energy Prediction. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence, Jeju, Republic of Korea, 21–23 October 2020; pp. 1168–1170.
  24. Grzeszczyk, T.A.; Grzeszczyk, M.K. Justifying Short-term Load Forecasts Obtained with the Use of Neural Models. Energies 2022, 15, 1852.
  25. Wenninger, S.; Kaymakci, C.; Wiethe, C. Explainable Long-term Building Energy Consumption Prediction using Qlattice. Appl. Energy 2022, 308, 118300.
  26. Moon, J.; Park, S.; Rho, S.; Hwang, E. Interpretable Short-term Electrical Load Forecasting Scheme Using Cubist. Comput. Intell Neurosci. 2022, 2022, 1–20.
  27. Srinivasan, S.; Arjunan, P.; Jin, B.; Sangiovanni-Vincentelli, A.L.; Sultan, Z.; Poolla, K. Explainable AI for Chiller Fault-detection Systems: Gaining Human Trust. Computer 2021, 54, 60–68.
  28. Sim, T.; Choi, S.; Kim, Y.; Youn, S.H.; Jang, D.-J.; Lee, S.; Chun, C.-J. eXplainable AI (XAI)-Based Input Variable Selection Methodology for Forecasting Energy Consumption. Electronics 2022, 11, 2947.
  29. Wastensteiner, J.; Weiss, T.M.; Haag, F.; Hopf, K. Explainable AI for Tailored Electricity Consumption Feedback–an Experimental Evaluation of Visualizations. arXiv 2022, arXiv:2208.11408.
  30. Graham, G.; Csicsery, N.; Stasiowski, E.; Thouvenin, G.; Mather, W.H.; Ferry, M.; Cookson, S.; Hasty, J. Genome-scale Transcriptional Dynamics and Environmental Biosensing. Proc. Natl. Acad. Sci. USA 2020, 117, 3301–3306.
  31. Gao, S.; Wang, Y. Explainable Deep Learning Powered Building Risk Assessment Model for Proactive Hurricane Response. Risk Anal. 2022, 1–13.
  32. Ryo, M.; Angelov, B.; Mammola, S.; Kass, J.M.; Benito, B.M.; Hartig, F. Explainable Artificial Intelligence Enhances the Ecological Interpretability of Black-box Species Distribution Models. Ecography 2020, 44, 199–205.
  33. Dikshit, A.; Pradhan, B. Interpretable and Explainable AI (XAI) Model for Spatial Drought Prediction. Sci. Total Environ. 2021, 801, 149797.
  34. Kim, M.; Kim, D.; Jin, D.; Kim, G. Application of Explainable Artificial Intelligence (XAI) in Urban Growth Modeling: A Case Study of Seoul Metropolitan Area, Korea. Land 2023, 12, 420.
  35. Gramegna, A.; Giudici, P. Why to Buy Insurance? An Explainable Artificial Intelligence Approach. Risks 2020, 8, 137.
  36. Benhamou, E.; Ohana, J.-J.; Saltiel, D.; Guez, B.; Ohana, S. Explainable AI (XAI) Models Applied to Planning in Financial Markets. Université Paris-Dauphine Research Paper No. 3862437. 2021. Available online: https://ssrn.com/abstract=3862437 (accessed on 2 February 2023).
  37. Babaei, G.; Giudici, P. Which SME is Worth an Investment? An Explainable Machine Learning Approach. 2021. Available online: http://dx.doi.org/10.2139/ssrn.3810618 (accessed on 2 February 2023).
  38. de Lange, P.E.; Melsom, B.; Vennerod, C.B.; Westgaard, S. Explainable AI for Credit Assessment in Banks. J. Risk Financ. Manag. 2022, 15, 556.
  39. Kumar, S.; Vishal, M.; Ravi, V. Explainable Reinforcement Learning on Financial Stock Trading using SHAP. arXiv 2022, arXiv:2208.08790.
  40. Bussmann, N.; Giudici, P.; Marinelli, D.; Papenbrock, J. Explainable AI in Fintech Risk Management. Front. Artif. Intell. 2020, 3, 26.
  41. Gite, S.; Khatavkar, H.; Kotecha, K.; Srivastava, S.; Maheshwari, P.; Pandey, N. Explainable Stock Prices Prediction from Financial News Articles using Sentiment Analysis. PeerJ. Comput. Sci. 2021, 7, e340.
  42. Kim, S.-H.; Jeon, E.-T.; Yu, S.; Oh, K.; Kim, C.K.; Song, T.-J.; Kim, Y.-J.; Heo, S.H.; Park, K.-Y.; Kim, J.-M.; et al. Interpretable Machine Learning for Early Neurological Deterioration Prediction in Atrial Fibrillation-related Stroke. Sci. Rep. 2021, 11, 20610.
  43. Beebe-Wang, N.; Okeson, A.; Althoff, T.; Lee, S.-I. Efficient and Explainable Risk Assessments for Imminent dementia in an Aging Cohort Study. IEEE J. Biomed. Health Inform. 2021, 25, 2409–2420.
  44. Rashed-Al-Mahfuz, M.; Haque, A.; Azad, A.; Alyami, S.A.; Quinn, J.M.; Moni, M.A. Clinically Applicable Machine Learning Approaches to Identify Attributes of Chronic Kidney Disease (CKD) for Use in Low-Cost Diagnostic Screening. IEEE J. Transl. Eng. Health Med. 2021, 9, 4900511.
  45. Zhang, Y.; Yang, D.; Liu, Z.; Chen, C.; Ge, M.; Li, X.; Luo, T.; Wu, Z.; Shi, C.; Wang, B.; et al. An Explainable Supervised Machine Learning Predictor of Acute Kidney Injury After Adult Deceased Donor Liver Transplantation. J. Transl. Med. 2021, 19, 1–15.
  46. Mousavi, S.; Afghah, F.; Acharya, U.R. HAN-ECG: An Interpretable Atrial Fibrillation Detection Model Using Hierarchical Attention Networks. Comput. Biol. Med. 2020, 127, 104057.
  47. Filtjens, B.; Ginis, P.; Nieuwboer, A.; Afzal, M.R.; Spildooren, J.; Vanrumste, B.; Slaets, P. Modelling and Identification of Characteristic Kinematic Features Preceding Freezing of Gait with Convolutional Neural Networks and Layer-wise Relevance Propagation. BMC Med. Inform. Decis. Mak. 2021, 21, 341.
  48. Dutt, M.; Redhu, S.; Goodwin, M.; Omlin, C.W. SleepXAI: An Explainable Deep Learning Approach for Multi-class Sleep Stage Identification. Appl. Intell. 2022, 1–14.
  49. Figueroa, K.C.; Song, B.; Sunny, S.; Li, S.; Gurushanth, K.; Mendonca, P.; Mukhia, N.; Patrick, S.; Gurudath, S.; Raghavan, S.; et al. Interpretable Deep Learning Approach for Oral Cancer Classification using Guided Attention Inference Network. J. Biomed. Opt. 2022, 27, 015001.
  50. Chetoui, M.; Akhloufi, M.A.; Yousefi, B.; Bouattane, E.M. Explainable COVID-19 Detection on Chest X-rays Using an End-to-end Deep Convolutional Neural Network Architecture. Big Data Cogn. Comput. 2021, 5, 73.
  51. Singh, R.K.; Pandey, R.; Babu, R.N. COVIDScreen: Explainable Deep Learning Framework for Differential Diagnosis of COVID-19 using Chest Xrays. Neural. Comput. Appl. 2021, 33, 8871–8892.
  52. Malhotra, A.; Mittal, S.; Majumdar, P.; Chhabra, S.; Thakral, K.; Vatsa, M.; Singh, R.; Chaudhury, S.; Pudrod, A.; Agrawal, A.; et al. Multi-task Driven Explainable Diagnosis of COVID-19 using Chest X-ray Images. Pattern Recognit. 2022, 122, 108243.
  53. Oztekin, F.; Katar, O.; Sadak, F.; Yildirim, M.; Cakar, H.; Aydogan, M.; Ozpolat, Z.; Talo Yildirim, T.; Yildirim, O.; Faust, O.; et al. An Explainable Deep Learning Model to Prediction Dental Caries Using Panoramic Radiograph Images. Diagnostics 2023, 13, 226.
  54. Brusa, E.; Cibrario, L.; Delprete, C.; Di Maggio, L.G. Explainable AI for Machine Fault Diagnosis: Understanding Features’ Contribution in Machine Learning Models for Industrial Condition Monitoring. Appl. Sci. 2023, 13, 2038.
  55. Hong, C.W.; Lee, C.; Lee, K.; Ko, M.-S.; Kim, D.E.; Hur, K. Remaining Useful Life Prognosis for Turbofan Engine Using Explainable Deep Neural Networks with Dimensionality Reduction. Sensors 2020, 20, 6626.
  56. Grezmak, J.; Zhang, J.; Wang, P.; Loparo, K.A.; Gao, R.X. Interpretable Convolutional Neural Network Through Layer-wise Relevance Propagation for Machine Fault Diagnosis. IEEE Sens. J. 2020, 20, 3172–3181.
  57. Serradilla, O.; Zugasti, E.; Cernuda, C.; Aranburu, A.; de Okariz, J.R.; Zurutuza, U. Interpreting Remaining Useful Life Estimations Combining Explainable Artificial Intelligence and Domain Knowledge in Industrial Machinery. In Proceedings of the 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  58. Chen, H.-Y.; Lee, C.-H. Vibration Signals Analysis by Explainable Artificial Intelligence (XAI) Approach: Application on Bearing Faults Diagnosis. IEEE Access 2020, 8, 134246–134256.
  59. Sun, K.H.; Huh, H.; Tama, B.A.; Lee, S.Y.; Jung, J.H.; Lee, S. Vision-Based Fault Diagnostics Using Explainable Deep Learning With Class Activation Maps. IEEE Access 2020, 8, 129169–129179.
  60. Le, T.-T.-H.; Kim, H.; Kang, H.; Kim, H. Classification and Explanation for Intrusion Detection System Based on Ensemble Trees and SHAP Method. Sensors 2022, 22, 1154.
  61. Karn, R.R.; Kudva, P.; Huang, H.; Suneja, S.; Elfadel, I.M. Cryptomining Detection in Container Clouds Using System Calls and Explainable Machine Learning. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 674–691.
  62. Wang, M.; Zheng, K.; Yang, Y.; Wang, X. An Explainable Machine Learning Framework for Intrusion Detection Systems. IEEE Access 2020, 8, 73127–73141.
  63. Alenezi, R.; Ludwig, S.A. Explainability of Cybersecurity Threats Data Using SHAP. In Proceedings of the 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, FL, USA, 5–7 December 2021; pp. 1–10.
  64. Ryo, M. Explainable Artificial Intelligence and Interpretable Machine Learning for Agricultural Data Analysis. Artif. Intell. Agric. 2022, 6, 257–265.
  65. Adak, A.; Pradhan, B.; Shukla, N.; Alamri, A. Unboxing Deep Learning Model of Food Delivery Service Reviews Using Explainable Artificial Intelligence (XAI) Technique. Foods 2022, 11, 2019.
  66. Viana, C.M.; Santos, M.; Freire, D.; Abrantes, P.; Rocha, J. Evaluation of the factors Explaining the Use of Agricultural Land: A Machine Learning and Model-Agnostic Approach. Ecol. Indic. 2021, 131, 108200.
  67. Cartolano, A.; Cuzzocrea, A.; Pilato, G.; Grasso, G.M. Explainable AI at Work! What Can It Do for Smart Agriculture? In Proceedings of the 2022 IEEE Eighth International Conference on Multimedia Big Data (BigMM), Naples, Italy, 5–7 December 2022; pp. 87–93.
  68. Wolanin, A.; Mateo-García, G.; Camps-Valls, G.; Gómez-Chova, L.; Meroni, M.; Duveiller, G.; Guanter, L. Estimating and understanding crop yields with explainable deep learning in the Indian Wheat Belt. Environ. Res. Lett. 2020, 15, 024019.
  69. Kawakura, S.; Hirafuji, M.; Ninomiya, S.; Shibasaki, R. Analyses of Diverse Agricultural Worker Data with Explainable Artificial Intelligence: XAI based on SHAP, LIME, and LightGBM. Eur. J. Agric. Food Sci. 2022, 4, 11–19.
  70. Li, Z.; Jin, J.; Wang, Y.; Long, W.; Ding, Y.; Hu, H.; Wei, L. ExamPle: Explainable Deep Learning Framework for the Prediction of Plant Small Secreted Peptides. Bioinformatics 2023, 39, btad108.
  71. Kundu, N.; Rani, G.; Dhaka, V.S.; Gupta, K.; Nayak, S.C.; Verma, S.; Ijaz, M.F.; Woźniak, M. IoT and Interpretable Machine Learning Based Framework for Disease Prediction in Pearl Millet. Sensors 2021, 21, 5386.
  72. Apostolopoulos, I.D.; Athanasoula, I.; Tzani, M.; Groumpos, P.P. An Explainable Deep Learning Framework for Detecting and Localising Smoke and Fire Incidents: Evaluation of Grad-CAM++ and LIME. Mach. Learn. Knowl. Extr. 2022, 4, 1124–1135.
  73. Ngo, Q.H.; Kechadi, T.; Le-Khac, N.A. OAK4XAI: Model Towards Out-of-Box eXplainable Artificial Intelligence for Digital Agriculture. In Artificial Intelligence XXXIX: 42nd SGAI International Conference on Artificial Intelligence, AI 2022, Cambridge, UK, 13–15 December 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 238–251.