| # | Authors | Title | Details | Date | Pdf/Links/Bibtex | Keywords |
|---|---|---|---|---|---|---|
| 9 | El-Qoraychy F.Z., Mualla Y., Zhao H., Dridi M., Créput J.C., Longo L. | Explainable AI for sign language recognition models: Integrating Grad-Cam LIME and Integrated Gradients | PLOS ONE | 2025 | @article{El-QoraychyLongo2025, doi = {10.1371/journal.pone.0336481}, author = {El-Qoraychy, Fatima-Zahrae and Mualla, Yazan and Zhao, Hui and Dridi, Mahjoub and Créput, Jean-Charles and Longo, Luca}, journal = {PLOS ONE}, publisher = {Public Library of Science}, title = {Explainable AI for sign language recognition models: Integrating Grad-Cam LIME and Integrated Gradients}, year = {2025}, month = {12}, volume = {20}, number = {12}, pages = {1-24}, url = {https://doi.org/10.1371/journal.pone.0336481} } | Sign language • Machine Learning • Explainable Artificial Intelligence • Grad-Cam • Lime • Integrated Gradients |
| 8 | Nakanishi T., Longo L. | Approximate-Inverse Explainability of β–VAE Latents for Multichannel EEG Participant-generalised Topographical Representation Learning | IEEE Access | 2025 | @article{NakanishiLongo2025, author={Nakanishi, Takafumi and Longo, Luca}, journal={IEEE Access}, title={Approximate-Inverse Explainability of β–VAE Latents for Multichannel EEG Participant-Generalised Topographical Representation Learning}, year={2025}, volume={13}, pages={204773-204795}, keywords={Electroencephalography;Brain modeling;Spatial coherence;Scalp;Perturbation methods;Computational modeling;Explainable AI;Deep learning;Visualization;Videos;Electroencephalography (EEG);β–VAE;topographic mapping;explainable AI (XAI);approximate inverse model explanations (AIME);generative deep learning;representation learning}, doi={10.1109/ACCESS.2025.3635543}} | Electroencephalography • Brain modeling • Spatial coherence • Scalp • Perturbation methods • Computational modeling • Explainable AI • Deep learning • β–VAE • Topographic mapping • Approximate inverse model explanations • Generative deep learning • Representation learning |
| 7 | Mekonnen E.T., Longo L., Dondio P. | LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations | IEEE Access | 2025 | @article{MekonnenLongo2025, author={Mekonnen, Ephrem Tibebe and Longo, Luca and Dondio, Pierpaolo}, journal={IEEE Access}, title={LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations}, year={2025}, pages={1-1}, keywords={Time series analysis;Adaptation models;Explainable AI;Predictive models;Data models;Closed box;Perturbation methods;Computational modeling;Deep learning;Kernel;Explainable Artificial Intelligence;Model-agnostic;Time series;Post hoc;Deep learning;XAI}, doi={10.1109/ACCESS.2025.3625442}} | Time series analysis • Adaptation models • Explainable AI • Predictive models • Data models • Closed box • Perturbation methods • Computational modeling • Deep learning • Kernel • Model-agnostic • Post hoc |
| 6 | Vilone G., Longo L. | Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and Their Comparison with Decision Trees | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{ViloneLongo2025, author="Vilone, Giulia and Longo, Luca", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and Their Comparison with Decision Trees", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="89--112", isbn="978-3-032-08333-3" } | Logical Analysis • Graph Theory • Graph Theory in Probability • Machine Learning • Reasoning • Symbolic AI • Explainable AI • Surrogate models • Computational Argumentation • Rule-based systems • Decision-trees • Dense Neural Networks • Deep learning |
| 5 | Ahmed T., Biecek P., Longo L. | Latent Space Interpretation and Mechanistic Clipping of Subject-Specific Variational Autoencoders of EEG Topographic Maps for Artefacts Reduction | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{AhmedLongo2025, author="Ahmed, Taufique and Biecek, Przemyslaw and Longo, Luca", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="Latent Space Interpretation and Mechanistic Clipping of Subject-Specific Variational Autoencoders of EEG Topographic Maps for Artefacts Reduction", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="327--350", isbn="978-3-032-08327-2" } | Electroencephalography • Spectral topographic maps • Subject-specific • Variational autoencoder • Latent space • Interpretability • Artefacts removal • Deep learning • Full automation • Explainable AI |
| 4 | Gupta G., Qureshi M.A., Longo L. | A Global Post Hoc XAI Method For Interpreting LSTM Using Deterministic Finite State Automata | The Irish Conference on Artificial Intelligence and Cognitive Science | 2025 | @inproceedings{GuptaLongo2024, title={A Global Post Hoc XAI Method For Interpreting LSTM Using Deterministic Finite State Automata}, author={Gupta, G. and Qureshi, M. A. and Longo, L.}, year={2024}, booktitle={Proceedings of The 32nd Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2024)}, publisher={CEUR-WS.org}, volume={3910}, series={{CEUR} Workshop Proceedings}, pages={26-38} } | RNN • Interpretability • Explainable AI • LSTM • Deterministic Finite State Automata • k-means clustering • Recurrent Neural Networks |
| 3 | Rizzo L., Verda D., Berretta S., Longo L. | A Novel Integration of Data-Driven Rule Generation and Computational Argumentation for Enhanced Explainable AI | Machine Learning and Knowledge Extraction | 2024 | @article{LongoRizzo2024, author = {Rizzo, Lucas and Verda, Damiano and Berretta, Serena and Longo, Luca}, title = {A Novel Integration of Data-Driven Rule Generation and Computational Argumentation for Enhanced Explainable AI}, journal = {Machine Learning and Knowledge Extraction}, volume = {6}, year = {2024}, number = {3}, pages = {2049--2073}, url = {https://www.mdpi.com/2504-4990/6/3/101}, issn = {2504-4990}, doi = {10.3390/make6030101} } | Rule-based AI • Explainable artificial intelligence • Computational argumentation • Defeasible reasoning • Artificial Intelligence |
| 2 | Lal U., Chikkankod A.V., Longo L. | Fractal dimensions and machine learning for detection of Parkinson’s disease in resting-state electroencephalography | Neural Computing and Applications | 2024 | @article{lal2024fractal, title={Fractal dimensions and machine learning for detection of Parkinson’s disease in resting-state electroencephalography}, author={Lal, Utkarsh and Chikkankod, Arjun Vinayak and Longo, Luca}, journal={Neural Computing and Applications}, volume={36}, number={15}, pages={8257--8280}, year={2024}, publisher={Springer} } | Electroencephalography • Explainable AI • Fractal dimension • Entropy • Sliding windowing • Feature extraction • Supervised learning • Machine Learning • Deep learning |
| 1 | Vilone G., Longo L. | A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation | 1st Int. Workshop on Argumentation for eXplainable AI (with 9th Int. Conference on Computational Models of Argument, COMMA 2022) | 2022 | @inproceedings{ViloneLongo2022XAIArg, author = {Giulia Vilone and Luca Longo}, title = {A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation}, booktitle = {1st International Workshop on Argumentation for eXplainable AI co-located with 9th International Conference on Computational Models of Argument (COMMA 2022)}, series = {{CEUR} Workshop Proceedings}, volume = {3209}, publisher = {CEUR-WS.org}, year = {2022}, url = {http://ceur-ws.org/Vol-3209/2119.pdf} } | Explainable artificial intelligence • Argumentation • Non-monotonic reasoning • Method evaluation • Metrics of explainability |