| # | Authors | Title | Details | Date | Pdf/Links/Bibtex | Keywords |
|---|---|---|---|---|---|---|
| 17 | Mekonnen E.T., Longo L., Dondio P. | LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations | IEEE Access | 2025 | @ARTICLE{MekonnenLongo2025, author={Mekonnen, Ephrem Tibebe and Longo, Luca and Dondio, Pierpaolo}, journal={IEEE Access}, title={LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations}, year={2025}, volume={}, number={}, pages={1-1}, keywords={Time series analysis;Adaptation models;Explainable AI;Predictive models;Data models;Closed box;Perturbation methods;Computational modeling;Deep learning;Kernel;Explainable Artificial Intelligence;Model-agnostic;Time series;Post hoc;Deep learning;XAI}, doi={10.1109/ACCESS.2025.3625442}} | Time series analysis • Adaptation models • Explainable AI • Predictive models • Data models • Closed box • Perturbation methods • Computational modeling • Deep learning • Kernel • Explainable Artificial Intelligence • Model-agnostic • Time series • Post hoc • XAI |
| 16 | Kopanja M., Savić M., Longo L. | Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation | Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025) | 2025 | @inproceedings{KopanjaLongo2025, title={Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation}, author={Kopanja, Marija and Savić, Miloš and Longo, Luca}, year={2025}, booktitle={Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, 9-11 July, 2025}, publisher={CEUR-WS.org}, volume={4017}, series={{CEUR} Workshop Proceedings}, editor={Przemysław Biecek and Slawomir Nowaczyk and Gitta Kutyniok and Luca Longo}, pages={129-136}, url={https://ceur-ws.org/Vol-4017/paper_17.pdf}} | Explainable artificial intelligence • Cost-sensitive decision tree • Surrogate modeling • Rule extraction • Tree-based methods • Model-agnostic explanations • Rule-based systems • Interpretability • Machine Learning |
| 15 | Marochko V., Rogala J., Longo L. | Integrated Gradients for Enhanced Interpretation of P3b-ERP Classifiers Trained with EEG-superlets in Traditional and Virtual Environments | Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025) | 2025 | @inproceedings{MarochkoLongo2025, title={Integrated Gradients for Enhanced Interpretation of P3b-ERP Classifiers Trained with EEG-superlets in Traditional and Virtual Environments}, author={Marochko, Vladimir and Rogala, Jacek and Longo, Luca}, year={2025}, booktitle={Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, 9-11 July, 2025}, publisher={CEUR-WS.org}, volume={4017}, series={{CEUR} Workshop Proceedings}, editor={Przemysław Biecek and Slawomir Nowaczyk and Gitta Kutyniok and Luca Longo}, pages={49-56}, url={https://ceur-ws.org/Vol-4017/paper_07.pdf}} | Event-related potentials • Deep learning • Convolutional neural networks • Explainable Artificial Intelligence • Integrated Gradients • P3b • Oddball paradigm • time-frequency super-resolution • Superlets |
| 14 | Gupta G., Qureshi M.A., Longo L. | A Global Post Hoc XAI Method For Interpreting LSTM Using Deterministic Finite State Automata | The Irish Conference on Artificial Intelligence and Cognitive Science | 2025 | @inproceedings{GuptaLongo2024, title={A Global Post Hoc XAI Method For Interpreting LSTM Using Deterministic Finite State Automata}, author={Gupta, G. and Qureshi, M. A. and Longo, L.}, year={2024}, booktitle={Proceedings of The 32nd Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2024)}, publisher={CEUR-WS.org}, volume={3910}, series={{CEUR} Workshop Proceedings}, pages={26-38}} | RNN • interpretability • Explainable AI • LSTM • Deterministic Finite State Automata • k-means clustering • Recurrent Neural Networks |
| 13 | Marochko V., Longo L. | Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024) | 2024 | @inproceedings{Marochko2024, title={Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets}, author={Marochko, Vladimir and Longo, Luca}, year={2024}, booktitle={Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), Valletta, Malta, 17-19 July, 2024}, publisher={CEUR-WS.org}, url={https://ceur-ws.org/Vol-3793/paper_19.pdf}, volume={3793}, series={{CEUR} Workshop Proceedings}, editor={Luca Longo and Weiru Liu and Grégoire Montavon}, pages={145-152}} | Event-related potentials • Deep learning • Convolutional neural networks • Explainable Artificial Intelligence • Integrated gradients • P3b • Oddball paradigm • time-frequency super-resolution • Superlets |
| 12 | Mekonnen E.T., Longo L., Dondio P. | Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024) | 2024 | @inproceedings{Mekonnen2024, title={Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives}, author={Mekonnen, Ephrem T. and Longo, Luca and Dondio, Pierpaolo}, year={2024}, booktitle={Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), Valletta, Malta, 17-19 July, 2024}, publisher={CEUR-WS.org}, url={https://ceur-ws.org/Vol-3793/paper_9.pdf}, volume={3793}, series={{CEUR} Workshop Proceedings}, editor={Luca Longo and Weiru Liu and Grégoire Montavon}, pages={65-72}} | Explainable Artificial Intelligence • Model-Agnostic • Time Series • Post-hoc • Deep Learning • Machine Learning • Event primitives • Time-series |
| 11 | Chikkankod A.V., Longo L. | A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024) | 2024 | @inproceedings{chikkankod2024proposal, title={A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders}, author={Chikkankod, Arjun Vinayak and Longo, Luca}, year={2024}, booktitle={Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), Valletta, Malta, 17-19 July, 2024}, publisher={CEUR-WS.org}, url={https://ceur-ws.org/Vol-3793/paper_4.pdf}, volume={3793}, series={{CEUR} Workshop Proceedings}, editor={Luca Longo and Weiru Liu and Grégoire Montavon}, pages={25-32}} | EEG Microstates • Shallow clustering • Deep clustering • Convolutional autoencoders • Resting state • Machine Learning • Deep Learning • Microstate theory |
| 10 | Mekonnen E.T., Longo L., Dondio P. | A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers | Frontiers in Artificial Intelligence | 2024 | @ARTICLE{10.3389/frai.2024.1381921, AUTHOR={Mekonnen, Ephrem T. and Dondio, Pierpaolo and Longo, Luca}, TITLE={A Global Model-Agnostic Rule-Based XAI Method based on Parameterised Event Primitives for Time Series Classifiers}, JOURNAL={Frontiers in Artificial Intelligence}, VOLUME={7}, YEAR={2024}, URL={https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1381921}, DOI={10.3389/frai.2024.1381921}, ISSN={2624-8212}} | Deep learning • Explainable Artificial Intelligence • time series classification • decision tree • model agnostic • post-hoc • Machine Learning |
| 9 | Longo L., Brcic M., Cabitza F., Choi J., Confalonieri R., Del Ser J., Guidotti R., Hayashi Y., Herrera F., Holzinger A., Jiang R., Khosravi H., Lecue F., Malgieri G., Páez A., Samek W., Schneider J., Speith T., Stumpf S. | Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions | Information Fusion | 2024 | @article{LONGO2024102301, title={Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions}, journal={Information Fusion}, volume={106}, pages={102301}, year={2024}, issn={1566-2535}, doi={10.1016/j.inffus.2024.102301}, url={https://www.sciencedirect.com/science/article/pii/S1566253524000794}, author={Luca Longo and Mario Brcic and Federico Cabitza and Jaesik Choi and Roberto Confalonieri and Javier Del Ser and Riccardo Guidotti and Yoichi Hayashi and Francisco Herrera and Andreas Holzinger and Richard Jiang and Hassan Khosravi and Freddy Lecue and Gianclaudio Malgieri and Andrés Páez and Wojciech Samek and Johannes Schneider and Timo Speith and Simone Stumpf}, keywords={Explainable artificial intelligence, XAI, Interpretability, Manifesto, Open challenges, Interdisciplinarity, Ethical AI, Large language models, Trustworthy AI, Responsible AI, Generative AI, Multi-faceted explanations, Concept-based explanations, Causality, Actionable XAI, Falsifiability}} | Explainable artificial intelligence • XAI • Interpretability • Manifesto • Open challenges • Interdisciplinarity • Ethical AI • Large language models • Trustworthy AI • Responsible AI • Generative AI • Multi-faceted explanations • Concept-based explanations • Causality • Actionable XAI • Falsifiability |
| 8 | Sullivan R.S., Longo L. | Optimizing Deep Q-Learning Experience Replay with SHAP Explanations: Exploring Minimum Experience Replay Buffer Sizes in Reinforcement Learning | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | | Deep Reinforcement Learning • Experience Replay • SHapley Additive exPlanations • eXplainable Artificial Intelligence • Machine Learning |
| 7 | Mekonnen E.T., Dondio P., Longo L. | Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | @INPROCEEDINGS{Mekonnen2023, author={Mekonnen, E. T. and Dondio, P. and Longo, L.}, booktitle={Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)}, title={Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method}, year={2023}, volume={3554}, pages={71-76}, publisher={CEUR-WS.org}} | Explainable Artificial Intelligence • Deep Learning • Time Series • Classification • Decision-Trees • Machine Learning • Post-hoc |
| 6 | Ahmed T., Longo L. | Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | @inproceedings{AhmedLongo2023, author={Ahmed, Taufique and Longo, Luca}, title={Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps}, booktitle={Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)}, year={2023}, pages={65--70}, publisher={CEUR Workshop Proceedings}} | Electroencephalography • Convolutional variational autoencoders • latent space interpretation • deep learning • spectral topographic maps • Machine Learning |
| 5 | Vilone G., Longo L. | An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | @inproceedings{DBLP:conf/xai/ViloneL23a, author={Giulia Vilone and Luca Longo}, editor={Luca Longo}, title={An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks}, booktitle={Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lisbon, Portugal, July 26-28, 2023}, series={{CEUR} Workshop Proceedings}, volume={3554}, pages={53--58}, publisher={CEUR-WS.org}, year={2023}, url={https://ceur-ws.org/Vol-3554/paper10.pdf}} | Explainable artificial intelligence • Argumentation • Non-monotonic reasoning • Automatic attack extraction • Weighted argumentation frameworks • Inconsistency budget • Machine Learning • Neural Networks |
| 4 | Davydko O., Pavlov V., Longo L. | Selecting textural characteristics of chest X-Rays for pneumonia lesions classification with the integrated gradients XAI attribution method | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{DavydkoLongo2023, author="Davydko, Oleksandr and Pavlov, Vladimir and Longo, Luca", editor="Longo, Luca", title="Selecting Textural Characteristics of Chest X-Rays for Pneumonia Lesions Classification with the Integrated Gradients XAI Attribution Method", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="671--687", isbn="978-3-031-44064-9"} | Explainable artificial intelligence • Neural networks • Texture analysis • Medical image processing • Classification • Machine Learning |
| 3 | Gómez Tapia C., Bozic B., Longo L. | Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{GomezLongo2023, author="Tapia, Carlos G{\'o}mez and Bozic, Bojan and Longo, Luca", editor="Longo, Luca", title="Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="131--152", isbn="978-3-031-44070-0"} | Electroencephalography • eXplainable Artificial Intelligence • Deep Learning • Signal processing • attribution xAI methods • Graph-Neural Network • Biometrics • signal-to-noise ratio |
| 2 | Vilone G., Longo L. | Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{ViloneLongo2023, author="Vilone, Giulia and Longo, Luca", editor="Longo, Luca", title="Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="205--232", isbn="978-3-031-44070-0"} | Explainable Artificial Intelligence • Human-centred evaluation • Psychometrics • Machine Learning • Deep Learning • Explainability |
| 1 | Vilone G., Longo L. | A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation | 1st Int. Workshop on Argumentation for eXplainable AI (with 9th Int. Conference on Computational Models of Argument, COMMA 2022) | 2022 | @inproceedings{ViloneLongo2022XAIArg, author={Giulia Vilone and Luca Longo}, title={A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation}, booktitle={1st International Workshop on Argumentation for eXplainable AI co-located with 9th International Conference on Computational Models of Argument (COMMA 2022)}, series={{CEUR} Workshop Proceedings}, volume={3209}, publisher={CEUR-WS.org}, year={2022}, url={http://ceur-ws.org/Vol-3209/2119.pdf}} | Explainable artificial intelligence • Argumentation • Non-monotonic reasoning • Method evaluation • Metrics of explainability |