# Publications

Each entry lists authors, title, venue, year, DOI (where available), and keywords.

[38] El-Qoraychy F.Z., Mualla Y., Zhao H., Dridi M., Créput J.C., Longo L. "Explainable AI for sign language recognition models: Integrating Grad-CAM, LIME and Integrated Gradients." PLOS ONE, 2025. DOI: 10.1371/journal.pone.0336481.
Keywords: Sign language; Machine learning; Explainable Artificial Intelligence; Grad-CAM; LIME; Integrated Gradients.

[37] Mekonnen E.T., Longo L., Dondio P. "LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations." IEEE Access, 2025. DOI: 10.1109/ACCESS.2025.3625442.
Keywords: Time series analysis; Adaptation models; Explainable AI; Predictive models; Data models; Closed box; Perturbation methods; Computational modeling; Deep learning; Kernel; Model-agnostic; Time series; Post hoc; XAI.

[36] Kopanja M., Savic M., Longo L. "CORTEX: Cost-Sensitive Rule and Tree Extraction Method." Knowledge-Based Systems, 2025. DOI: 10.1016/j.knosys.2025.114592.
Keywords: Explainable artificial intelligence; Rule-based methods; Tree-based methods; Cost-sensitive decision tree; Rule extraction; Surrogate models.

[35] Vilone G., Longo L. "Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and Their Comparison with Decision Trees." eXplainable Artificial Intelligence, The World Conference (xAI-2025), 2025. DOI: 10.1007/978-3-032-08333-3_5.
Keywords: Logical analysis; Graph theory; Graph theory in probability; Machine learning; Reasoning; Symbolic AI; Explainable AI; Surrogate models; Computational argumentation; Rule-based systems; Decision trees; Dense neural networks; Deep learning.

[34] Ahmed T., Biecek P., Longo L. "Latent Space Interpretation and Mechanistic Clipping of Subject-Specific Variational Autoencoders of EEG Topographic Maps for Artefacts Reduction." eXplainable Artificial Intelligence, The World Conference (xAI-2025), 2025. DOI: 10.1007/978-3-032-08327-2_16.
Keywords: Electroencephalography; Spectral topographic maps; Subject-specific; Variational autoencoder; Latent space interpretability; Artefacts removal; Deep learning; Full automation; Explainable AI.

[33] Ceschin M., Arrighi L., Longo L., Barbon Junior S. "Extending Decision Predicate Graphs for Comprehensive Explanation of Isolation Forest." eXplainable Artificial Intelligence, The World Conference (xAI-2025), 2025. DOI: 10.1007/978-3-032-08324-1_12.
Keywords: Ensemble learning; Outliers; Explainable artificial intelligence; Interpretability; Anomalies; Tree-based ensemble model.

[32] Davydko O., Pavlov V., Longo L. "A Combination of Integrated Gradients and SRFAMap for Explaining Neural Networks Trained with High-Order Statistical Radiomic Features." eXplainable Artificial Intelligence, The World Conference (xAI-2025), 2025. DOI: 10.1007/978-3-032-08317-3_17.
Keywords: Explainable artificial intelligence; Radiomics; Texture analysis; Medical image processing; Saliency map; Integrated Gradients; Neural networks; Interpretable machine learning.

[31] Longo L., Berretta S., Verda D., Rizzo L. "Computational argumentation and automatic rule-generation for explainable data-driven modeling." IEEE Access, 2025. DOI: 10.1109/ACCESS.2025.3618992.
Keywords: Rule-based systems; Explainable artificial intelligence; Logic Learning Machine; Non-monotonic reasoning; Defeasible reasoning; Computational argumentation; Argumentation semantics; Explainability.

[30] Kopanja M., Savic M., Longo L. "Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation." Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), 2025.
Keywords: Explainable artificial intelligence; Cost-sensitive decision tree; Surrogate modeling; Rule extraction; Tree-based methods; Model-agnostic explanations; Rule-based systems; Interpretability; Machine learning.

[29] Marochko V., Rogala J., Longo L. "Integrated Gradients for Enhanced Interpretation of P3b-ERP Classifiers Trained with EEG-superlets in Traditional and Virtual Environments." Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), 2025.
Keywords: Event-related potentials; Deep learning; Convolutional neural networks; Explainable Artificial Intelligence; Integrated Gradients; P3b; Oddball paradigm; Time-frequency super-resolution; Superlets.

[28] Criscuolo S., Giugliano S., Apicella A., Donnarumma F., Amato F., Tedesco A., Longo L. "Exploring the Latent Space of Person-Specific Convolutional Autoencoders for Eye-Blink Artefact Mitigation in EEG Signals." 2024 IEEE 8th Forum on Research and Technologies for Society and Industry Innovation (RTSI), 2024. DOI: 10.1109/RTSI61910.2024.10761377.
Keywords: Electroencephalography; Autoencoders; Eye-blink artefacts detection; Latent space interpretation; Explainable artificial intelligence; Artificial intelligence; Machine learning; Deep learning.

[27] Marochko V., Longo L. "Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets." Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), 2024.
Keywords: Event-related potentials; Deep learning; Convolutional neural networks; Explainable Artificial Intelligence; Integrated gradients; P3b; Oddball paradigm; Time-frequency super-resolution; Superlets.

[26] Mekonnen E.T., Longo L., Dondio P. "Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives." Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), 2024.
Keywords: Explainable Artificial Intelligence; Model-agnostic; Time series; Post-hoc; Deep learning; Machine learning; Event primitives.

[25] Chikkankod A.V., Longo L. "A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders." Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), 2024.
Keywords: EEG microstates; Shallow clustering; Deep clustering; Convolutional autoencoders; Resting state; Machine learning; Deep learning; Microstate theory.

[24] Mekonnen E.T., Longo L., Dondio P. "A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers." Frontiers in Artificial Intelligence, 2024. DOI: 10.3389/frai.2024.1381921.
Keywords: Deep learning; Explainable Artificial Intelligence; Time series classification; Decision tree; Model-agnostic; Post-hoc; Machine learning.

[23] Rizzo L., Verda D., Berretta S., Longo L. "A Novel Integration of Data-Driven Rule Generation and Computational Argumentation for Enhanced Explainable AI." Machine Learning and Knowledge Extraction, 2024. DOI: 10.3390/make6030101.
Keywords: Rule-based AI; Explainable artificial intelligence; Computational argumentation; Defeasible reasoning; Artificial intelligence.

[22] Raufi B., Finnegan C., Longo L. "A Comparative Analysis of SHAP, LIME, ANCHORS, and DICE for Interpreting a Dense Neural Network in Credit Card Fraud Detection." eXplainable Artificial Intelligence, The World Conference (xAI-2024), 2024. DOI: 10.1007/978-3-031-63803-9_20.
Keywords: Explainable Artificial Intelligence; Credit card fraud detection; Interpretability methods comparison; SHapley Additive exPlanations; Local Interpretable Model-agnostic Explanations; ANCHORS; Diverse Counterfactual Explanations.

[21] Davydko O., Pavlov V., Biecek P., Longo L. "SRFAMap: A Method for Mapping Integrated Gradients of a CNN Trained with Statistical Radiomic Features to Medical Image Saliency Maps." eXplainable Artificial Intelligence, The World Conference (xAI-2024), 2024. DOI: 10.1007/978-3-031-63803-9_1.
Keywords: Explainable artificial intelligence; Radiomics; Texture analysis; Medical image processing; Saliency map; Deep learning; Machine learning.

[20] Hryniewska-Guzik W., Longo L., Biecek P. "CNN-Based Explanation Ensembling for Dataset, Representation and Explanations Evaluation." eXplainable Artificial Intelligence, The World Conference (xAI-2024), 2024. DOI: 10.1007/978-3-031-63797-1_18.
Keywords: Explainable Artificial Intelligence; XAI; Convolutional neural network; Model evaluation; Data evaluation; Representation learning; Ensemble; Deep learning; Machine learning.

[19] Longo L., Brcic M., Cabitza F., Choi J., Confalonieri R., Del Ser J., Guidotti R., Hayashi Y., Herrera F., Holzinger A., Jiang R., Khosravi H., Lecue F., Malgieri G., Páez A., Samek W., Schneider J., Speith T., Stumpf S. "Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions." Information Fusion, 2024. DOI: 10.1016/j.inffus.2024.102301.
Keywords: Explainable artificial intelligence; XAI; Interpretability; Manifesto; Open challenges; Interdisciplinarity; Ethical AI; Large language models; Trustworthy AI; Responsible AI; Generative AI; Multi-faceted explanations; Concept-based explanations; Causality; Actionable XAI; Falsifiability.

[18] Sullivan R.S., Longo L. "Optimizing Deep Q-Learning Experience Replay with SHAP Explanations: Exploring Minimum Experience Replay Buffer Sizes in Reinforcement Learning." Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023.
Keywords: Deep reinforcement learning; Experience replay; SHapley Additive exPlanations; eXplainable Artificial Intelligence; Machine learning.

[17] Mekonnen E.T., Dondio P., Longo L. "Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method." Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023.
Keywords: Explainable Artificial Intelligence; Deep learning; Time series classification; Decision trees; Machine learning; Post-hoc.

[16] Ahmed T., Longo L. "Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps." Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023.
Keywords: Electroencephalography; Convolutional variational autoencoders; Latent space interpretation; Deep learning; Spectral topographic maps; Machine learning.

[15] Vilone G., Longo L. "An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks." Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023.
Keywords: Explainable artificial intelligence; Argumentation; Non-monotonic reasoning; Automatic attack extraction; Weighted argumentation frameworks; Inconsistency budget; Machine learning; Neural networks.

[14] Davydko O., Pavlov V., Longo L. "Selecting textural characteristics of chest X-Rays for pneumonia lesions classification with the integrated gradients XAI attribution method." eXplainable Artificial Intelligence, The World Conference (xAI-2023), 2023. DOI: 10.1007/978-3-031-44064-9_36.
Keywords: Explainable artificial intelligence; Neural networks; Texture analysis; Medical image processing; Classification; Machine learning.

[13] Natsiou A., O'Leary S., Longo L. "An Exploration of the Latent Space of a Convolutional Variational Autoencoder for the Generation of Musical Instrument Tones." eXplainable Artificial Intelligence, The World Conference (xAI-2023), 2023. DOI: 10.1007/978-3-031-44070-0_24.
Keywords: Explainable Artificial Intelligence; Variational autoencoders; Audio representations; Audio synthesis; Latent feature importance; Deep learning; Machine learning.

[12] Gómez Tapia C., Bozic B., Longo L. "Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification." eXplainable Artificial Intelligence, The World Conference (xAI-2023), 2023. DOI: 10.1007/978-3-031-44070-0_7.
Keywords: Electroencephalography; eXplainable Artificial Intelligence; Deep learning; Signal processing; Attribution xAI methods; Graph neural network; Biometrics; Signal-to-noise ratio.

[11] Vilone G., Longo L. "Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods." eXplainable Artificial Intelligence, The World Conference (xAI-2023), 2023. DOI: 10.1007/978-3-031-44070-0_11.
Keywords: Explainable Artificial Intelligence; Human-centred evaluation; Psychometrics; Machine learning; Deep learning; Explainability.

[10] O'Sullivan R., Longo L. "Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations." Machine Learning and Knowledge Extraction, 2023. DOI: 10.3390/make5040072.
Keywords: Deep reinforcement learning; Experience replay; SHapley Additive exPlanations; eXplainable Artificial Intelligence; Artificial intelligence.

[9] Vilone G., Longo L. "A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation." 1st International Workshop on Argumentation for eXplainable AI (at the 9th International Conference on Computational Models of Argument, COMMA 2022), 2022.
Keywords: Explainable artificial intelligence; Argumentation; Non-monotonic reasoning; Method evaluation; Metrics of explainability.

[8] Vilone G., Longo L. "A Novel Human-Centred Evaluation Approach and an Argument-Based Method for Explainable Artificial Intelligence." Artificial Intelligence Applications and Innovations, 18th IFIP WG 12.5 International Conference, 2022. DOI: 10.1007/978-3-031-08333-4_36.
Keywords: Explainable Artificial Intelligence; Argumentation; Human-centred evaluation; Non-monotonic reasoning; Explainability.

[7] Vilone G., Longo L. "A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods." Frontiers in Artificial Intelligence, 2021. DOI: 10.3389/frai.2021.717899.
Keywords: Explainable artificial intelligence; Rule extraction; Method comparison and evaluation; Metrics of explainability; Method automatic ranking; Artificial intelligence; Explainability.

[6] Vilone G., Longo L. "Classification of Explainable Artificial Intelligence Methods through Their Output Formats." Machine Learning and Knowledge Extraction, 2021. DOI: 10.3390/make3030032.
Keywords: Explainable artificial intelligence; Method classification; Systematic literature review.

[5] Vilone G., Longo L. "Notions of explainability and evaluation approaches for explainable artificial intelligence." Information Fusion, 2021. DOI: 10.1016/j.inffus.2021.05.009.
Keywords: Explainable artificial intelligence; Notions of explainability; Evaluation methods.

[4] Longo L., Goebel R., Lecue F., Kieseberg P., Holzinger A. "Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions." Machine Learning and Knowledge Extraction, International Cross-Domain Conference for Machine Learning and Knowledge Extraction, 2020. DOI: 10.1007/978-3-030-57321-8_1.
Keywords: Explainable artificial intelligence; Machine learning; Explainability.

[3] Vilone G., Rizzo L., Longo L. "A comparative analysis of rule-based, model-agnostic methods for explainable artificial intelligence." Proceedings of the 28th AIAI Irish Conference on Artificial Intelligence and Cognitive Science, Dublin, Ireland, December 7-8, 2020.
Keywords: Explainable artificial intelligence; Rule extraction; Method comparison; Evaluation.

[2] Rizzo L., Longo L. "An empirical evaluation of the inferential capacity of defeasible argumentation, non-monotonic fuzzy reasoning and expert systems." Expert Systems with Applications, 2020. DOI: 10.1016/j.eswa.2020.113220.
Keywords: Defeasible argumentation; Argumentation theory; Explainable Artificial Intelligence; Non-monotonic reasoning; Fuzzy logic; Expert systems; Mental workload.

[1] Rizzo L., Longo L. "A Qualitative Investigation of the Explainability of Defeasible Argumentation and Non-Monotonic Fuzzy Reasoning." 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science, 2018.
Keywords: Defeasible argumentation; Non-monotonic reasoning; Fuzzy reasoning; Argumentation theory; Explainable Artificial Intelligence; Artificial intelligence; Modeling.