| # | Authors | Title | Details | Date | Pdf/Links/Bibtex | Keywords |
|---|---|---|---|---|---|---|
| 38 | El-Qoraychy F.Z., Mualla Y., Zhao H., Dridi M., Créput J.C., Longo L. | Explainable AI for sign language recognition models: Integrating Grad-Cam LIME and Integrated Gradients | PLOS ONE | 2025 | @article{El-QoraychyLongo2025, doi = {10.1371/journal.pone.0336481}, author = {El-Qoraychy, Fatima-Zahrae and Mualla, Yazan and Zhao, Hui and Dridi, Mahjoub and Créput, Jean-Charles and Longo, Luca}, journal = {PLOS ONE}, publisher = {Public Library of Science}, title = {Explainable AI for sign language recognition models: Integrating Grad-Cam LIME and Integrated Gradients}, year = {2025}, month = {12}, volume = {20}, url = {https://doi.org/10.1371/journal.pone.0336481}, pages = {1-24}, number = {12} } |
| Sign language • Machine Learning • Explainable Artificial Intelligence • Grad-Cam • Lime • Integrated Gradients |
| 37 | Mekonnen E.T., Longo L., Dondio P. | LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations | IEEE Access | 2025 | @ARTICLE{MekonnenLongo2025, author={Mekonnen, Ephrem Tibebe and Longo, Luca and Dondio, Pierpaolo}, journal={IEEE Access}, title={LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations}, year={2025}, volume={}, number={}, pages={1-1}, keywords={Time series analysis;Adaptation models;Explainable AI;Predictive models;Data models;Closed box;Perturbation methods;Computational modeling;Deep learning;Kernel;Explainable Artificial Intelligence;Model-agnostic;Time series;Post hoc;XAI}, doi={10.1109/ACCESS.2025.3625442}} |
| Time series analysis • Adaptation models • Explainable Artificial Intelligence • Predictive models • Data models • Closed box • Perturbation methods • Computational modeling • Deep learning • Kernel • Model-agnostic • Time series • Post hoc • XAI |
| 36 | Kopanja M., Savic M., Longo L. | CORTEX: Cost-Sensitive Rule and Tree Extraction Method | Knowledge-Based Systems | 2025 | @article{KOPANJALongo2025, author = {Kopanja, Marija and Savić, Miloš and Longo, Luca}, title = {CORTEX: Cost-Sensitive Rule and Tree Extraction Method}, journal = {Knowledge-Based Systems}, pages = {114592}, year = {2025}, issn = {0950-7051}, doi = {10.1016/j.knosys.2025.114592}, url = {https://www.sciencedirect.com/science/article/pii/S0950705125016314} } |
| Explainable artificial intelligence • Rule-based methods • Tree-based methods • Cost-sensitive decision tree • Rule extraction • Surrogate models |
| 35 | Vilone G., Longo L. | Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and Their Comparison with Decision Trees | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{ViloneLongo2025, author="Vilone, Giulia and Longo, Luca", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and Their Comparison with Decision Trees", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="89--112", isbn="978-3-032-08333-3" } |
| Logical Analysis • Graph Theory • Graph Theory in Probability • Machine Learning • Reasoning • Symbolic AI • Explainable AI • Surrogate models • Computational Argumentation • Rule-based systems • Decision-trees • Dense Neural Networks • Deep learning |
| 34 | Ahmed T., Biecek P., Longo L. | Latent Space Interpretation and Mechanistic Clipping of Subject-Specific Variational Autoencoders of EEG Topographic Maps for Artefacts Reduction | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{AhmedLongo2025, author="Ahmed, Taufique and Biecek, Przemyslaw and Longo, Luca", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="Latent Space Interpretation and Mechanistic Clipping of Subject-Specific Variational Autoencoders of EEG Topographic Maps for Artefacts Reduction", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="327--350", isbn="978-3-032-08327-2" } |
| Electroencephalography • Spectral topographic maps • Subject-specific • Variational autoencoder • Latent space • Interpretability • Artefacts removal • Deep learning • Full automation • Explainable AI |
| 33 | Ceschin M., Arrighi L., Longo L., Barbon Junior S. | Extending Decision Predicate Graphs for Comprehensive Explanation of Isolation Forest | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{CeschinLongo2025, author="Ceschin, Matteo and Arrighi, Leonardo and Longo, Luca and Barbon Junior, Sylvio", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="Extending Decision Predicate Graphs for Comprehensive Explanation of Isolation Forest", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="271--293", isbn="978-3-032-08324-1" } |
| Ensemble Learning • Outliers • Explainable Artificial Intelligence • Interpretability • Anomalies • Tree-based Ensemble Model |
| 32 | Davydko O., Pavlov V., Longo L. | A Combination of Integrated Gradients and SRFAMap for Explaining Neural Networks Trained with High-Order Statistical Radiomic Features | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{OleksandrLongo2025, author="Davydko, Oleksandr and Pavlov, Vladimir and Longo, Luca", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="A Combination of Integrated Gradients and SRFAMap for Explaining Neural Networks Trained with High-Order Statistical Radiomic Features", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="359--379", isbn="978-3-032-08317-3" } |
| Explainable artificial intelligence • Radiomics • Texture analysis • Medical image processing • Saliency map • Integrated Gradients • Neural Networks • Interpretable Machine Learning |
| 31 | Longo L., Berretta S., Verda D., Rizzo L. | Computational argumentation and automatic rule-generation for explainable data-driven modeling | IEEE Access | 2025 | @ARTICLE{Longo2025IEEEAccess, author={Longo, Luca and Berretta, Serena and Verda, Damiano and Rizzo, Lucas}, journal={IEEE Access}, title={Computational argumentation and automatic rule-generation for explainable data-driven modeling}, year={2025}, volume={}, number={}, pages={1-1}, doi={10.1109/ACCESS.2025.3618992}} |
| Rule-based systems • Explainable Artificial Intelligence • Logic Learning Machine • Non-monotonic reasoning • Defeasible Reasoning • Explainability • Computational argumentation • Argumentation semantics |
| 30 | Kopanja M., Savic M., Longo L. | Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation | Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025) | 2025 | @inproceedings{KopanjaLongo2025LBW, title={Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation}, author={Kopanja, Marija and Savić, Miloš and Longo, Luca}, year={2025}, booktitle = {Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, 9-11 July, 2025}, publisher = {CEUR-WS.org}, volume = {4017}, series = {{CEUR} Workshop Proceedings}, editor = {Przemysław Biecek and Slawomir Nowaczyk and Gitta Kutyniok and Luca Longo}, pages={129-136}, url={https://ceur-ws.org/Vol-4017/paper_17.pdf} } |
| Explainable artificial intelligence • Cost-sensitive decision tree • Surrogate modeling • Rule extraction • Tree-based methods • Model-agnostic explanations • Rule-based systems • Interpretability • Machine Learning |
| 29 | Marochko V., Rogala J., Longo L. | Integrated Gradients for Enhanced Interpretation of P3b-ERP Classifiers Trained with EEG-superlets in Traditional and Virtual Environments | Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025) | 2025 | @inproceedings{MarochkoLongo2025, title={Integrated Gradients for Enhanced Interpretation of P3b-ERP Classifiers Trained with EEG-superlets in Traditional and Virtual Environments}, author={Marochko, Vladimir and Rogala, Jacek and Longo, Luca}, year={2025}, booktitle = {Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, 9-11 July, 2025}, publisher = {CEUR-WS.org}, volume = {4017}, series = {{CEUR} Workshop Proceedings}, editor = {Przemysław Biecek and Slawomir Nowaczyk and Gitta Kutyniok and Luca Longo}, pages={49-56}, url={https://ceur-ws.org/Vol-4017/paper_07.pdf} } |
| Event-related potentials • Deep learning • Convolutional neural networks • Explainable Artificial Intelligence • Integrated Gradients • P3b • Oddball paradigm • Time-frequency super-resolution • Superlets |
| 28 | Criscuolo S., Giugliano S., Apicella A., Donnarumma F., Amato F., Tedesco A., Longo L. | Exploring the Latent Space of Person-Specific Convolutional Autoencoders for Eye-Blink Artefact Mitigation in EEG Signals | 2024 IEEE 8th Forum on Research and Technologies for Society and Industry Innovation (RTSI) | 2024 | @INPROCEEDINGS{CriscuoloLongo2024, author={Criscuolo, Sabatina and Giugliano, Salvatore and Apicella, Andrea and Donnarumma, Francesco and Amato, Francesco and Tedesco, Annarita and Longo, Luca}, booktitle={2024 IEEE 8th Forum on Research and Technologies for Society and Industry Innovation (RTSI)}, title={Exploring the Latent Space of Person-Specific Convolutional Autoencoders for Eye-Blink Artefact Mitigation in EEG Signals}, year={2024}, volume={}, number={}, pages={414-419}, keywords={Training;Correlation;Convolution;Noise reduction;Pipelines;Inspection;Brain modeling;Electroencephalography;Space exploration;Recording;Autoencoders;Eye-blink Artefacts Detection;Latent Space interpretation;Explainable Artificial Intelligence}, doi={10.1109/RTSI61910.2024.10761377}} |
| Electroencephalography • Autoencoders • Eye-blink Artefacts Detection • Latent Space interpretation • Explainable Artificial Intelligence • Artificial Intelligence • Machine Learning • Deep learning |
| 27 | Marochko V., Longo L. | Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024) | 2024 | @inproceedings{Marochko2024, title={Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets}, author={Marochko, Vladimir and Longo, Luca}, year={2024}, booktitle = {Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), Valletta, Malta, 17-19 July, 2024}, publisher = {CEUR-WS.org}, url = {https://ceur-ws.org/Vol-3793/paper_19.pdf}, volume = {3793}, series = {{CEUR} Workshop Proceedings}, editor = {Luca Longo and Weiru Liu and Grégoire Montavon}, pages={145-152} } |
| Event-related potentials • Deep learning • Convolutional neural networks • Explainable Artificial Intelligence • Integrated gradients • P3b • Oddball paradigm • Time-frequency super-resolution • Superlets |
| 26 | Mekonnen E.T., Longo L., Dondio P. | Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024) | 2024 | @inproceedings{Mekonnen2024, title={Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives}, author={Mekonnen, Ephrem T. and Longo, Luca and Dondio, Pierpaolo}, year={2024}, booktitle = {Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), Valletta, Malta, 17-19 July, 2024}, publisher = {CEUR-WS.org}, url = {https://ceur-ws.org/Vol-3793/paper_9.pdf}, volume = {3793}, series = {{CEUR} Workshop Proceedings}, editor = {Luca Longo and Weiru Liu and Grégoire Montavon}, pages={65-72} } |
| Explainable Artificial Intelligence • Model-Agnostic • Time Series • Post-hoc • Deep Learning • Machine Learning • Event primitives |
| 25 | Chikkankod A.V., Longo L. | A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024) | 2024 | @inproceedings{chikkankod2024proposal, title={A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders}, author={Chikkankod, Arjun Vinayak and Longo, Luca}, year={2024}, booktitle = {Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), Valletta, Malta, 17-19 July, 2024}, publisher = {CEUR-WS.org}, url = {https://ceur-ws.org/Vol-3793/paper_4.pdf}, volume = {3793}, series = {{CEUR} Workshop Proceedings}, editor = {Luca Longo and Weiru Liu and Grégoire Montavon}, pages={25-32} } |
| EEG Microstates • Shallow clustering • Deep clustering • Convolutional autoencoders • Resting state • Machine Learning • Deep Learning • Microstate theory |
| 24 | Mekonnen E.T., Longo L., Dondio P. | A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers | Frontiers in Artificial Intelligence | 2024 | @ARTICLE{10.3389/frai.2024.1381921, AUTHOR={Mekonnen, Ephrem T. and Dondio, Pierpaolo and Longo, Luca}, TITLE={A Global Model-Agnostic Rule-Based XAI Method based on Parameterised Event Primitives for Time Series Classifiers}, JOURNAL={Frontiers in Artificial Intelligence}, VOLUME={7}, YEAR={2024}, URL={https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1381921}, DOI={10.3389/frai.2024.1381921}, ISSN={2624-8212} } |
| Deep learning • Explainable Artificial Intelligence • time series classification • decision tree • model agnostic • post-hoc • Machine Learning |
| 23 | Rizzo L., Verda D., Berretta S., Longo L. | A Novel Integration of Data-Driven Rule Generation and Computational Argumentation for Enhanced Explainable AI | Machine Learning and Knowledge Extraction | 2024 | @Article{LongoRizzo2024, AUTHOR = {Rizzo, Lucas and Verda, Damiano and Berretta, Serena and Longo, Luca}, TITLE = {A Novel Integration of Data-Driven Rule Generation and Computational Argumentation for Enhanced Explainable AI}, JOURNAL = {Machine Learning and Knowledge Extraction}, VOLUME = {6}, YEAR = {2024}, NUMBER = {3}, PAGES = {2049--2073}, URL = {https://www.mdpi.com/2504-4990/6/3/101}, ISSN = {2504-4990}, DOI = {10.3390/make6030101} } |
| rule-based AI • explainable artificial intelligence • computational argumentation • defeasible reasoning • Artificial Intelligence |
| 22 | Raufi B., Finnegan C., Longo L. | A Comparative Analysis of SHAP, LIME, ANCHORS, and DICE for Interpreting a Dense Neural Network in Credit Card Fraud Detection | eXplainable Artificial Intelligence, The World Conference (xAI-2024) | 2024 | @InProceedings{10.1007/978-3-031-63803-9_20, author="Raufi, Bujar and Finnegan, Ciaran and Longo, Luca", editor="Longo, Luca and Lapuschkin, Sebastian and Seifert, Christin", title="A Comparative Analysis of SHAP, LIME, ANCHORS, and DICE for Interpreting a Dense Neural Network in Credit Card Fraud Detection", booktitle="Explainable Artificial Intelligence", year="2024", publisher="Springer Nature Switzerland", address="Cham", pages="365--383", isbn="978-3-031-63803-9" } |
| Explainable Artificial Intelligence • Credit Card Fraud Detection • Interpretability • Methods comparison • SHapley Additive exPlanations • Local Interpretable Model-agnostic Explanations • ANCHORS • Diverse Counterfactual Explanations |
| 21 | Davydko O., Pavlov V., Biecek P., Longo L. | SRFAMap: A Method for Mapping Integrated Gradients of a CNN Trained with Statistical Radiomic Features to Medical Image Saliency Maps | eXplainable Artificial Intelligence, The World Conference (xAI-2024) | 2024 | @InProceedings{10.1007/978-3-031-63803-9_1, author="Davydko, Oleksandr and Pavlov, Vladimir and Biecek, Przemys{\l}aw and Longo, Luca", editor="Longo, Luca and Lapuschkin, Sebastian and Seifert, Christin", title="SRFAMap: A Method for Mapping Integrated Gradients of a CNN Trained with Statistical Radiomic Features to Medical Image Saliency Maps", booktitle="Explainable Artificial Intelligence", year="2024", publisher="Springer Nature Switzerland", address="Cham", pages="3--23", isbn="978-3-031-63803-9" } |
| Explainable artificial intelligence • Radiomics • Texture analysis • Medical image processing • Saliency map • Deep learning • Machine learning |
| 20 | Hryniewska-Guzik W., Longo L., Biecek P. | CNN-Based Explanation Ensembling for Dataset, Representation and Explanations Evaluation | eXplainable Artificial Intelligence, The World Conference (xAI-2024) | 2024 | @InProceedings{10.1007/978-3-031-63797-1_18, author="Hryniewska-Guzik, Weronika and Longo, Luca and Biecek, Przemys{\l}aw", editor="Longo, Luca and Lapuschkin, Sebastian and Seifert, Christin", title="CNN-Based Explanation Ensembling for Dataset, Representation and Explanations Evaluation", booktitle="Explainable Artificial Intelligence", year="2024", publisher="Springer Nature Switzerland", address="Cham", pages="346--368", isbn="978-3-031-63797-1" } |
| Explainable Artificial Intelligence • XAI • Convolutional Neural Network • model evaluation • data evaluation • representation learning • ensemble • deep learning • machine learning |
| 19 | Longo L., Brcic M., Cabitza F., Choi J., Confalonieri R., Del Ser J., Guidotti R., Hayashi Y., Herrera F., Holzinger A., Jiang R., Khosravi H., Lecue F., Malgieri G., Páez A., Samek W., Schneider J., Speith T., Stumpf S. | Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions | Information Fusion | 2024 | @article{LONGO2024102301, title = {Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions}, journal = {Information Fusion}, volume = {106}, pages = {102301}, year = {2024}, issn = {1566-2535}, doi = {10.1016/j.inffus.2024.102301}, url = {https://www.sciencedirect.com/science/article/pii/S1566253524000794}, author = {Luca Longo and Mario Brcic and Federico Cabitza and Jaesik Choi and Roberto Confalonieri and Javier Del Ser and Riccardo Guidotti and Yoichi Hayashi and Francisco Herrera and Andreas Holzinger and Richard Jiang and Hassan Khosravi and Freddy Lecue and Gianclaudio Malgieri and Andrés Páez and Wojciech Samek and Johannes Schneider and Timo Speith and Simone Stumpf}, keywords = {Explainable artificial intelligence, XAI, Interpretability, Manifesto, Open challenges, Interdisciplinarity, Ethical AI, Large language models, Trustworthy AI, Responsible AI, Generative AI, Multi-faceted explanations, Concept-based explanations, Causality, Actionable XAI, Falsifiability} } |
| Explainable artificial intelligence • XAI • Interpretability • Manifesto • Open challenges • Interdisciplinarity • Ethical AI • Large language models • Trustworthy AI • Responsible AI • Generative AI • Multi-faceted explanations • Concept-based explanations • Causality • Actionable XAI • Falsifiability |
| 18 | Sullivan R.S., Longo L. | Optimizing Deep Q-Learning Experience Replay with SHAP Explanations: Exploring Minimum Experience Replay Buffer Sizes in Reinforcement Learning | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 |
| Deep Reinforcement Learning • Experience Replay • SHapley Additive exPlanations • eXplainable Artificial Intelligence • Machine Learning |
| 17 | Mekonnen E.T., Dondio P., Longo L. | Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | @INPROCEEDINGS{Mekonnen2023, author={Mekonnen, Ephrem T. and Dondio, Pierpaolo and Longo, Luca}, booktitle={Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)}, title={Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method}, year={2023}, volume={3554}, number={}, pages={71-76}, publisher={CEUR} } |
| Explainable Artificial Intelligence • Deep Learning • Time Series • Classification • Decision-Trees • Machine Learning • Post-hoc |
| 16 | Ahmed T., Longo L. | Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | @inproceedings{AhmedLongo2023, author = {Ahmed, Taufique and Longo, Luca}, title = {Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps}, booktitle = {Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)}, year = {2023}, pages={65--70}, publisher={CEUR Workshop Proceedings} } |
| Electroencephalography • Convolutional variational autoencoders • latent space interpretation • deep learning • spectral topographic maps • Machine Learning |
| 15 | Vilone G., Longo L. | An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | @inproceedings{DBLP:conf/xai/ViloneL23a, author = {Giulia Vilone and Luca Longo}, editor = {Luca Longo}, title = {An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks}, booktitle = {Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lisbon, Portugal, July 26-28, 2023}, series = {{CEUR} Workshop Proceedings}, volume = {3554}, pages = {53--58}, publisher = {CEUR-WS.org}, year = {2023}, url = {https://ceur-ws.org/Vol-3554/paper10.pdf} } |
| Explainable artificial intelligence • Argumentation • Non-monotonic reasoning • Automatic attack extraction • Weighted argumentation frameworks • Inconsistency budget • Machine Learning • Neural Networks |
| 14 | Davydko O., Pavlov V., Longo L. | Selecting textural characteristics of chest X-Rays for pneumonia lesions classification with the integrated gradients XAI attribution method | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{DavydkoLongo2023, author="Davydko, Oleksandr and Pavlov, Vladimir and Longo, Luca", editor="Longo, Luca", title="Selecting Textural Characteristics of Chest X-Rays for Pneumonia Lesions Classification with the Integrated Gradients XAI Attribution Method", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="671--687", isbn="978-3-031-44064-9" } |
| Explainable artificial intelligence • Neural networks • Texture analysis • Medical image processing • Classification • Machine Learning |
| 13 | Natsiou A., O'Leary S., Longo L. | An Exploration of the Latent Space of a Convolutional Variational Autoencoder for the Generation of Musical Instrument Tones | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{10.1007/978-3-031-44070-0_24, author="Natsiou, Anastasia and O'Leary, Se{\'a}n and Longo, Luca", editor="Longo, Luca", title="An Exploration of the Latent Space of a Convolutional Variational Autoencoder for the Generation of Musical Instrument Tones", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="470--486", isbn="978-3-031-44070-0" } |
| Explainable Artificial Intelligence • Variational Autoencoders • Audio Representations • Audio Synthesis • Latent Feature Importance • Deep Learning • Machine Learning |
| 12 | Gómez Tapia C., Bozic B., Longo L. | Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{GomezLongo2023, author="Tapia, Carlos G{\'o}mez and Bozic, Bojan and Longo, Luca", editor="Longo, Luca", title="Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="131--152", isbn="978-3-031-44070-0" } |
| Electroencephalography • eXplainable Artificial Intelligence • Deep Learning • Signal processing • Attribution xAI methods • Graph Neural Network • Biometrics • Signal-to-noise ratio |
| 11 | Vilone G., Longo L. | Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{ViloneLongo2023, author="Vilone, Giulia and Longo, Luca", editor="Longo, Luca", title="Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="205--232", isbn="978-3-031-44070-0" } |
| Explainable Artificial Intelligence • Human-centred evaluation • Psychometrics • Machine Learning • Deep Learning • Explainability |
| 10 | Sullivan R.S., Longo L. | Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations | Machine Learning and Knowledge Extraction | 2023 | @Article{SullivanLongo2023, AUTHOR = {Sullivan, Robert S. and Longo, Luca}, TITLE = {Explaining Deep Q-Learning Experience Replay with SHapley Additive exPlanations}, JOURNAL = {Machine Learning and Knowledge Extraction}, VOLUME = {5}, YEAR = {2023}, NUMBER = {4}, PAGES = {1433--1455}, URL = {https://www.mdpi.com/2504-4990/5/4/72}, ISSN = {2504-4990}, DOI = {10.3390/make5040072} } |
| Deep Reinforcement Learning • Experience Replay • SHapley Additive exPlanations • eXplainable Artificial Intelligence • Artificial Intelligence |
| 9 | Vilone G., Longo L. | A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation | 1st Int. Workshop on Argumentation for eXplainable AI (with 9th Int. Conference on Computational Models of Argument, COMMA 2022) | 2022 | @inproceedings{ViloneLongo2022XAIArg, author = {Giulia Vilone and Luca Longo}, title = {A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation}, booktitle = {1st International Workshop on Argumentation for eXplainable AI co-located with 9th International Conference on Computational Models of Argument (COMMA 2022)}, series = {{CEUR} Workshop Proceedings}, volume = {3209}, publisher = {CEUR-WS.org}, year = {2022}, url = {http://ceur-ws.org/Vol-3209/2119.pdf} } |
| Explainable artificial intelligence • Argumentation • Non-monotonic reasoning • Method evaluation • Metrics of explainability |
| 8 | Vilone G., Longo L. | A Novel Human-Centred Evaluation Approach and an Argument-Based Method for Explainable Artificial Intelligence | Artificial Intelligence Applications and Innovations - 18th IFIP WG 12.5 International Conference | 2022 | @inproceedings{ViloneLongo2022, author = {Giulia Vilone and Luca Longo}, editor = {Ilias Maglogiannis and Lazaros Iliadis and John Macintyre and Paulo Cortez}, title = {A Novel Human-Centred Evaluation Approach and an Argument-Based Method for Explainable Artificial Intelligence}, booktitle = {Artificial Intelligence Applications and Innovations - 18th {IFIP} {WG} 12.5 International Conference, {AIAI} 2022, Hersonissos, Crete, Greece, June 17-20, 2022, Proceedings, Part {I}}, series = {{IFIP} Advances in Information and Communication Technology}, volume = {646}, pages = {447--460}, publisher = {Springer}, year = {2022}, url = {https://doi.org/10.1007/978-3-031-08333-4\_36}, doi = {10.1007/978-3-031-08333-4\_36} } |
| Explainable Artificial Intelligence • Argumentation • Human-centred evaluation • Non-monotonic reasoning • Explainability |
| 7 | Vilone G., Longo L. | A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods | Frontiers in Artificial Intelligence | 2021 | @ARTICLE{ViloneLongo2021, AUTHOR={Vilone, Giulia and Longo, Luca}, TITLE={A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods}, JOURNAL={Frontiers in Artificial Intelligence}, VOLUME={4}, PAGES={160}, YEAR={2021}, URL={https://www.frontiersin.org/article/10.3389/frai.2021.717899}, DOI={10.3389/frai.2021.717899}, ISSN={2624-8212} } |
| explainable artificial intelligence • rule extraction • method comparison and evaluation • metrics of explainability • method automatic ranking • artificial intelligence • explainability |
| 6 | Vilone G., Longo L. | Classification of Explainable Artificial Intelligence Methods through Their Output Formats | Machine Learning and Knowledge Extraction | 2021 | @Article{Vilone2021Output, AUTHOR = {Vilone, Giulia and Longo, Luca}, TITLE = {Classification of Explainable Artificial Intelligence Methods through Their Output Formats}, JOURNAL = {Machine Learning and Knowledge Extraction}, VOLUME = {3}, YEAR = {2021}, NUMBER = {3}, PAGES = {615--661}, URL = {https://www.mdpi.com/2504-4990/3/3/32}, ISSN = {2504-4990}, DOI = {10.3390/make3030032} } |
| explainable artificial intelligence • method classification • systematic literature review |
| 5 | Vilone G., Longo L. | Notions of explainability and evaluation approaches for explainable artificial intelligence | Information Fusion | 2021 |
@article{VILONE202189, title = {Notions of explainability and evaluation approaches for explainable artificial intelligence}, journal = {Information Fusion}, volume = {76}, pages = {89-106}, year = {2021}, issn = {1566-2535}, doi = {10.1016/j.inffus.2021.05.009}, url = {https://www.sciencedirect.com/science/article/pii/S1566253521001093}, author = {Giulia Vilone and Luca Longo}, keywords = {Explainable artificial intelligence, Notions of explainability, Evaluation methods}, } [Close]
| Explainable artificial intelligence • Notions of explainability • Evaluation methods |
| 4 | Longo L., Goebel R., Lecue F., Kieseberg P., Holzinger A. | Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions | Machine Learning and Knowledge Extraction. Int. Cross-Domain Conference for Machine Learning and Knowledge Extraction | 2020 |
@inproceedings{LongoGLKH20, author = {Luca Longo and Randy Goebel and Freddy L{\'{e}}cu{\'{e}} and Peter Kieseberg and Andreas Holzinger}, editor = {Andreas Holzinger and Peter Kieseberg and A Min Tjoa and Edgar R. Weippl}, title = {Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions}, booktitle = {Machine Learning and Knowledge Extraction - 4th {IFIP} {TC} 5, {TC} 12, {WG} 8.4, {WG} 8.9, {WG} 12.9 International Cross-Domain Conference, {CD-MAKE} 2020, Dublin, Ireland, August 25-28, 2020, Proceedings}, series = {Lecture Notes in Computer Science}, volume = {12279}, pages = {1--16}, publisher = {Springer}, year = {2020}, url = {https://doi.org/10.1007/978-3-030-57321-8_1}, doi = {10.1007/978-3-030-57321-8_1} } [Close]
| Explainable artificial intelligence • Machine learning • Explainability |
| 3 | Vilone G., Rizzo L., Longo L. | A comparative analysis of rule-based, model-agnostic methods for explainable artificial intelligence | Proceedings for the 28th AIAI Irish Conference on Artificial Intelligence and Cognitive Science, Dublin, Ireland, December 7-8, 2020 | 2020 |
@inproceedings{DBLP:conf/aics/ViloneRL20, author = {Giulia Vilone and Lucas Rizzo and Luca Longo}, editor = {Luca Longo and Lucas Rizzo and Elizabeth Hunter and Arjun Pakrashi}, title = {A comparative analysis of rule-based, model-agnostic methods for explainable artificial intelligence}, booktitle = {Proceedings of The 28th Irish Conference on Artificial Intelligence and Cognitive Science, Dublin, Republic of Ireland, December 7-8, 2020}, series = {{CEUR} Workshop Proceedings}, volume = {2771}, pages = {85--96}, publisher = {CEUR-WS.org}, year = {2020}, url = {http://ceur-ws.org/Vol-2771/AICS2020_paper_33.pdf} } [Close]
| Explainable artificial intelligence • Rule extraction • Method comparison and evaluation |
| 2 | Rizzo, L., Longo L. | An empirical evaluation of the inferential capacity of defeasible argumentation, non-monotonic fuzzy reasoning and expert systems | Expert Systems with Applications | 2020 |
@article{RizzoLongo2020aee, title = "An empirical evaluation of the inferential capacity of defeasible argumentation, non-monotonic fuzzy reasoning and expert systems", journal = "Expert Systems with Applications", pages = "113220", year = "2020", issn = "0957-4174", doi = "10.1016/j.eswa.2020.113220", url = "http://www.sciencedirect.com/science/article/pii/S0957417420300464", author = "Lucas Rizzo and Luca Longo", keywords = "Defeasible Argumentation, Argumentation Theory, Explainable Artificial Intelligence, Non-monotonic Reasoning, Fuzzy Logic, Expert Systems, Mental Workload", } [Close]
| Defeasible Argumentation • Argumentation Theory • Explainable Artificial Intelligence • Non-monotonic Reasoning • Fuzzy Logic • Expert Systems • Mental Workload |
| 1 | Rizzo L., Longo L. | A Qualitative Investigation of the Explainability of Defeasible Argumentation and Non-Monotonic Fuzzy Reasoning | 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science | 2018 |
@inproceedings{RizzoL18Explainability, author = {Lucas Rizzo and Luca Longo}, title = {A Qualitative Investigation of the Explainability of Defeasible Argumentation and Non-Monotonic Fuzzy Reasoning}, booktitle = {Proceedings for the 26th {AIAI} Irish Conference on Artificial Intelligence and Cognitive Science Trinity College Dublin, Dublin, Ireland, December 6-7th, 2018.}, pages = {138--149}, year = {2018} } [Close]
| Defeasible Argumentation • Non-monotonic Reasoning • Fuzzy Reasoning • Argumentation Theory • Explainable Artificial Intelligence • Artificial Intelligence • Modeling |