# Publications
[17] Mekonnen E.T., Longo L., Dondio P. LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations. IEEE Access, 2025. DOI: 10.1109/ACCESS.2025.3625442. Keywords: Time series analysis; Adaptation models; Explainable AI; Predictive models; Data models; Closed box; Perturbation methods; Computational modeling; Deep learning; Kernel; Explainable Artificial Intelligence; Model-agnostic; Time series; Post hoc; XAI.
[16] Kopanja M., Savic M., Longo L. Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation. Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), 2025. Keywords: Explainable artificial intelligence; Cost-sensitive decision tree; Surrogate modeling; Rule extraction; Tree-based methods; Model-agnostic explanations; Rule-based systems; Interpretability; Machine Learning.
[15] Marochko V., Rogala J., Longo L. Integrated Gradients for Enhanced Interpretation of P3b-ERP Classifiers Trained with EEG-superlets in Traditional and Virtual Environments. Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), 2025. Keywords: Event-related potentials; Deep learning; Convolutional neural networks; Explainable Artificial Intelligence; Integrated Gradients; P3b; Oddball paradigm; Time-frequency super-resolution; Superlets.
[14] Gupta G., Qureshi M.A., Longo L. A Global Post Hoc XAI Method for Interpreting LSTM Using Deterministic Finite State Automata. The Irish Conference on Artificial Intelligence and Cognitive Science, 2025. Keywords: RNN interpretability; Explainable AI; LSTM; Deterministic Finite State Automata; k-means clustering; Recurrent Neural Networks.
[13] Marochko V., Longo L. Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets. Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), 2024. Keywords: Event-related potentials; Deep learning; Convolutional neural networks; Explainable Artificial Intelligence; Integrated gradients; P3b; Oddball paradigm; Time-frequency super-resolution; Superlets.
[12] Mekonnen E.T., Longo L., Dondio P. Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives. Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), 2024. Keywords: Explainable Artificial Intelligence; Model-agnostic; Time series; Post-hoc; Deep Learning; Machine Learning; Event primitives.
[11] Chikkankod A.V., Longo L. A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders. Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), 2024. Keywords: EEG microstates; Shallow clustering; Deep clustering; Convolutional autoencoders; Resting state; Machine Learning; Deep Learning; Microstate theory.
[10] Mekonnen E.T., Longo L., Dondio P. A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers. Frontiers in Artificial Intelligence, 2024. DOI: 10.3389/frai.2024.1381921. Keywords: Deep learning; Explainable Artificial Intelligence; Time series classification; Decision tree; Model-agnostic; Post-hoc; Machine Learning.
[9] Longo L., Brcic M., Cabitza F., Choi J., Confalonieri R., Del Ser J., Guidotti R., Hayashi Y., Herrera F., Holzinger A., Jiang R., Khosravi H., Lecue F., Malgieri G., Páez A., Samek W., Schneider J., Speith T., Stumpf S. Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. Information Fusion, 2024. DOI: 10.1016/j.inffus.2024.102301. Keywords: Explainable artificial intelligence; XAI; Interpretability; Manifesto; Open challenges; Interdisciplinarity; Ethical AI; Large language models; Trustworthy AI; Responsible AI; Generative AI; Multi-faceted explanations; Concept-based explanations; Causality; Actionable XAI; Falsifiability.
[8] Sullivan R.S., Longo L. Optimizing Deep Q-Learning Experience Replay with SHAP Explanations: Exploring Minimum Experience Replay Buffer Sizes in Reinforcement Learning. Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023. Keywords: Deep Reinforcement Learning; Experience Replay; SHapley Additive exPlanations; eXplainable Artificial Intelligence; Machine Learning.
[7] Mekonnen E.T., Dondio P., Longo L. Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method. Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023. Keywords: Explainable Artificial Intelligence; Deep Learning; Time Series Classification; Decision Trees; Machine Learning; Post-hoc.
[6] Ahmed T., Longo L. Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps. Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023. Keywords: Electroencephalography; Convolutional variational autoencoders; Latent space interpretation; Deep learning; Spectral topographic maps; Machine Learning.
[5] Vilone G., Longo L. An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks. Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023. Keywords: Explainable artificial intelligence; Argumentation; Non-monotonic reasoning; Automatic attack extraction; Weighted argumentation frameworks; Inconsistency budget; Machine Learning; Neural Networks.
[4] Davydko O., Pavlov V., Longo L. Selecting textural characteristics of chest X-Rays for pneumonia lesions classification with the integrated gradients XAI attribution method. The 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023. DOI: 10.1007/978-3-031-44064-9_36. Keywords: Explainable artificial intelligence; Neural networks; Texture analysis; Medical image processing; Classification; Machine Learning.
[3] Gómez Tapia C., Bozic B., Longo L. Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification. The 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023. DOI: 10.1007/978-3-031-44070-0_7. Keywords: Electroencephalography; eXplainable Artificial Intelligence; Deep Learning; Signal processing; Attribution xAI methods; Graph Neural Network; Biometrics; Signal-to-noise ratio.
[2] Vilone G., Longo L. Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods. The 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), 2023. DOI: 10.1007/978-3-031-44070-0_11. Keywords: Explainable Artificial Intelligence; Human-centred evaluation; Psychometrics; Machine Learning; Deep Learning; Explainability.
[1] Vilone G., Longo L. A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation. 1st International Workshop on Argumentation for eXplainable AI, co-located with the 9th International Conference on Computational Models of Argument (COMMA 2022), 2022. Keywords: Explainable artificial intelligence; Argumentation; Non-monotonic reasoning; Method evaluation; Metrics of explainability.