| # | Authors | Title | Details | Date | Pdf/Links/Bibtex | Keywords |
|---|---|---|---|---|---|---|
| 25 | Mekonnen E.T., Longo L., Dondio P. | LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations | IEEE Access | 2025 | @ARTICLE{MekonnenLongo2025, author={Mekonnen, Ephrem Tibebe and Longo, Luca and Dondio, Pierpaolo}, journal={IEEE Access}, title={LOMATCE: LOcal Model-Agnostic Time-series Classification Explanations}, year={2025}, volume={}, number={}, pages={1-1}, keywords={Time series analysis;Adaptation models;Explainable AI;Predictive models;Data models;Closed box;Perturbation methods;Computational modeling;Deep learning;Kernel;Explainable Artificial Intelligence;Model-agnostic;Time series;Post hoc;Deep learning;XAI}, doi={10.1109/ACCESS.2025.3625442}} [Close]
| Time series analysis • Adaptation models • Explainable AI • Predictive models • Data models • Closed box • Perturbation methods • Computational modeling • Deep learning • Kernel • Explainable Artificial Intelligence • Model-agnostic • Time series • Post hoc • XAI |
| 24 | Vilone G., Longo L. | Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and Their Comparison with Decision Trees | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{ViloneLongo2025, author="Vilone, Giulia and Longo, Luca", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and Their Comparison with Decision Trees", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="89--112", isbn="978-3-032-08333-3" } [Close]
| Logical Analysis • Graph Theory • Graph Theory in Probability • Machine Learning • Reasoning • Symbolic AI • Explainable AI • Surrogate models • Computational Argumentation • Rule-based systems • Decision-trees • Dense Neural Networks • Deep learning |
| 23 | Ahmed T., Biecek P. Longo L. | Latent Space Interpretation and Mechanistic Clipping of Subject-Specific Variational Autoencoders of EEG Topographic Maps for Artefacts Reduction | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{AhmedLongo2025, author="Ahmed, Taufique and Biecek, Przemyslaw and Longo, Luca", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="Latent Space Interpretation and Mechanistic Clipping of Subject-Specific Variational Autoencoders of EEG Topographic Maps for Artefacts Reduction", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="327--350", isbn="978-3-032-08327-2" } [Close]
| Electroencephalography • Spectral topographic maps • Subject-specific • Variational autoencoder • Latent space • Interpretability • Artefacts removal • Deep learning • Full automation • Explainable AI |
| 22 | Ceschin M., Arrighi L., Longo L., Barbon Junior S. | Extending Decision Predicate Graphs for Comprehensive Explanation of Isolation Forest | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{CeschinLongo2025, author="Ceschin, Matteo and Arrighi, Leonardo and Longo, Luca and Barbon Junior, Sylvio", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="Extending Decision Predicate Graphs for Comprehensive Explanation of Isolation Forest", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="271--293", isbn="978-3-032-08324-1" } [Close]
| Ensemble Learning • Outliers • Explainable Artificial Intelligence • Interpretability • Anomalies • Tree-based Ensemble Model |
| 21 | Davydko O., Pavlov V., Longo L. | A Combination of Integrated Gradients and SRFAMap for Explaining Neural Networks Trained with High-Order Statistical Radiomic Features | eXplainable Artificial Intelligence, The World Conference (xAI-2025) | 2025 | @InProceedings{OleksandrLongo2025, author="Davydko, Oleksandr and Pavlov, Vladimir and Longo, Luca", editor="Guidotti, Riccardo and Schmid, Ute and Longo, Luca", title="A Combination of Integrated Gradients and SRFAMap for Explaining Neural Networks Trained with High-Order Statistical Radiomic Features", booktitle="Explainable Artificial Intelligence", year="2026", publisher="Springer Nature Switzerland", address="Cham", pages="359--379", isbn="978-3-032-08317-3" } [Close]
| Explainable artificial intelligence • Radiomics • Texture analysis • Medical image processing • Saliency map • Integrated Gradients • Neural Networks • Interpretable Machine Learning |
| 20 | Kopanja M., Savić M., Longo L. | Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation | Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025) | 2025 | @inproceedings{KopanjaLongo2025, title={Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation}, author={Kopanja, Marija and Savić, Miloš and Longo, Luca}, year={2025}, booktitle = {Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, 9-11 July, 2025}, publisher = {CEUR-WS.org}, volume = {4017}, series = {{CEUR} Workshop Proceedings}, editor = {Przemysław Biecek and Slawomir Nowaczyk and Gitta Kutyniok and Luca Longo}, pages={129-136}, url={https://ceur-ws.org/Vol-4017/paper_17.pdf} } [Close]
| Explainable artificial intelligence • Cost-sensitive decision tree • Surrogate modeling • Rule extraction • Tree-based methods • Model-agnostic explanations • Rule-based systems • Interpretability • Machine Learning |
| 19 | Marochko V., Rogala J., Longo L. | Integrated Gradients for Enhanced Interpretation of P3b-ERP Classifiers Trained with EEG-superlets in Traditional and Virtual Environments | Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025) | 2025 | @inproceedings{MarochkoLongo2025, title={Integrated Gradients for Enhanced Interpretation of P3b-ERP Classifiers Trained with EEG-superlets in Traditional and Virtual Environments}, author={Marochko, Vladimir and Rogala, Jacek and Longo, Luca}, year={2025}, booktitle = {Joint Proceedings of the xAI 2025 Late-breaking Work, Demos and Doctoral Consortium co-located with the 3rd World Conference on eXplainable Artificial Intelligence (xAI 2025), Istanbul, Turkey, 9-11 July, 2025}, publisher = {CEUR-WS.org}, volume = {4017}, series = {{CEUR} Workshop Proceedings}, editor = {Przemysław Biecek and Slawomir Nowaczyk and Gitta Kutyniok and Luca Longo}, pages={49-56}, url={https://ceur-ws.org/Vol-4017/paper_07.pdf} } [Close]
| Event-related potentials • Deep learning • Convolutional neural networks • Explainable Artificial Intelligence • Integrated Gradients • P3b • Oddball paradigm • Time-frequency super-resolution • Superlets |
| 18 | Gupta G., Qureshi M.A., Longo L. | A Global Post Hoc XAI Method For Interpreting LSTM Using Deterministic Finite State Automata | The Irish Conference on Artificial Intelligence and Cognitive Science | 2025 | @inproceedings{GuptaLongo2024, title={A Global Post Hoc XAI Method For Interpreting LSTM Using Deterministic Finite State Automata}, author={Gupta, G. and Qureshi, M. A. and Longo, L.}, year={2024}, booktitle = {Proceedings of The 32nd Irish Conference on Artificial Intelligence and Cognitive Science (AICS 2024)}, publisher = {CEUR-WS.org}, volume = {3910}, series = {{CEUR} Workshop Proceedings}, pages={26-38} } [Close]
| RNN • interpretability • Explainable AI • LSTM • Deterministic Finite State Automata • k-means clustering • Recurrent Neural Networks |
| 17 | Marochko V., Longo L. | Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024) | 2024 | @inproceedings{Marochko2024, title={Enhancing the analysis of the P300 event-related potential with integrated gradients on a convolutional neural network trained with superlets}, author={Marochko, Vladimir and Longo, Luca}, year={2024}, booktitle = {Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), Valletta, Malta, 17-19 July, 2024}, publisher = {CEUR-WS.org}, url = {https://ceur-ws.org/Vol-3793/paper_19.pdf}, volume = {3793}, series = {{CEUR} Workshop Proceedings}, editor = {Luca Longo and Weiru Liu and Grégoire Montavon}, pages={145-152} } [Close]
| Event-related potentials • Deep learning • Convolutional neural networks • Explainable Artificial Intelligence • Integrated gradients • P3b • Oddball paradigm • Time-frequency super-resolution • Superlets |
| 16 | Mekonnen E.T., Longo L., Dondio P. | Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024) | 2024 | @inproceedings{Mekonnen2024, title={Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives}, author={Mekonnen, Ephrem T. and Longo, Luca and Dondio, Pierpaolo}, year={2024}, booktitle = {Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), Valletta, Malta, 17-19 July, 2024}, publisher = {CEUR-WS.org}, url = {https://ceur-ws.org/Vol-3793/paper_9.pdf}, volume = {3793}, series = {{CEUR} Workshop Proceedings}, editor = {Luca Longo and Weiru Liu and Grégoire Montavon}, pages={65-72} } [Close]
| Explainable Artificial Intelligence • Model-Agnostic • Time Series • Post-hoc • Deep Learning • Machine Learning • Event primitives • Time-series |
| 15 | Chikkankod A.V., Longo L. | A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders | Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024) | 2024 | @inproceedings{chikkankod2024proposal, title={A proposal for improving EEG microstate generation via interpretable deep clustering with convolutional autoencoders}, author={Chikkankod, Arjun Vinayak and Longo, Luca}, year={2024}, booktitle = {Joint Proceedings of the xAI 2024 Late-breaking Work, Demos and Doctoral Consortium co-located with the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), Valletta, Malta, 17-19 July, 2024}, publisher = {CEUR-WS.org}, url = {https://ceur-ws.org/Vol-3793/paper_4.pdf}, volume = {3793}, series = {{CEUR} Workshop Proceedings}, editor = {Luca Longo and Weiru Liu and Grégoire Montavon}, pages={25-32} } [Close]
| EEG Microstates • Shallow clustering • Deep clustering • Convolutional autoencoders • Resting state • Machine Learning • Deep Learning • Microstate theory |
| 14 | Mekonnen E.T., Longo L., Dondio P. | A global model-agnostic rule-based XAI method based on Parameterised Event Primitives for time series classifiers | Frontiers in Artificial Intelligence | 2024 | @ARTICLE{10.3389/frai.2024.1381921, AUTHOR={Mekonnen, Ephrem T. and Dondio, Pierpaolo and Longo, Luca}, TITLE={A Global Model-Agnostic Rule-Based XAI Method based on Parameterised Event Primitives for Time Series Classifiers}, JOURNAL={Frontiers in Artificial Intelligence}, VOLUME={7}, YEAR={2024}, URL={https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1381921}, DOI={10.3389/frai.2024.1381921}, ISSN={2624-8212} } [Close]
| Deep learning • Explainable Artificial Intelligence • Time series classification • Decision tree • Model agnostic • Post-hoc • Machine Learning |
| 13 | Raufi B., Finnegan C., Longo L. | A Comparative Analysis of SHAP, LIME, ANCHORS, and DICE for Interpreting a Dense Neural Network in Credit Card Fraud Detection | eXplainable Artificial Intelligence, The World Conference (xAI-2024) | 2024 | @InProceedings{10.1007/978-3-031-63803-9_20, author="Raufi, Bujar and Finnegan, Ciaran and Longo, Luca", editor="Longo, Luca and Lapuschkin, Sebastian and Seifert, Christin", title="A Comparative Analysis of SHAP, LIME, ANCHORS, and DICE for Interpreting a Dense Neural Network in Credit Card Fraud Detection", booktitle="Explainable Artificial Intelligence", year="2024", publisher="Springer Nature Switzerland", address="Cham", pages="365--383", isbn="978-3-031-63803-9" } [Close]
| Explainable Artificial Intelligence • Credit Card Fraud Detection • Interpretability • Methods comparison • SHapley Additive exPlanations • Local Interpretable Model-agnostic Explanations • ANCHORS • Diverse Counterfactual Explanations |
| 12 | Davydko O., Pavlov V., Biecek P., Longo L. | SRFAMap: A Method for Mapping Integrated Gradients of a CNN Trained with Statistical Radiomic Features to Medical Image Saliency Maps | eXplainable Artificial Intelligence, The World Conference (xAI-2024) | 2024 | @InProceedings{10.1007/978-3-031-63803-9_1, author="Davydko, Oleksandr and Pavlov, Vladimir and Biecek, Przemys{\l}aw and Longo, Luca", editor="Longo, Luca and Lapuschkin, Sebastian and Seifert, Christin", title="SRFAMap: A Method for Mapping Integrated Gradients of a CNN Trained with Statistical Radiomic Features to Medical Image Saliency Maps", booktitle="Explainable Artificial Intelligence", year="2024", publisher="Springer Nature Switzerland", address="Cham", pages="3--23", isbn="978-3-031-63803-9" } [Close]
| Explainable artificial intelligence • Radiomics • Texture analysis • Medical image processing • Saliency map • Deep learning • Machine learning |
| 11 | Hryniewska-Guzik W., Longo L., Biecek P. | CNN-Based Explanation Ensembling for Dataset, Representation and Explanations Evaluation | eXplainable Artificial Intelligence, The World Conference (xAI-2024) | 2024 | @InProceedings{10.1007/978-3-031-63797-1_18, author="Hryniewska-Guzik, Weronika and Longo, Luca and Biecek, Przemys{\l}aw", editor="Longo, Luca and Lapuschkin, Sebastian and Seifert, Christin", title="CNN-Based Explanation Ensembling for Dataset, Representation and Explanations Evaluation", booktitle="Explainable Artificial Intelligence", year="2024", publisher="Springer Nature Switzerland", address="Cham", pages="346--368", isbn="978-3-031-63797-1" } [Close]
| Explainable Artificial Intelligence • XAI • Convolutional Neural Network • Model evaluation • Data evaluation • Representation learning • Ensemble • Deep learning • Machine learning |
| 10 | Longo L., Brcic M., Cabitza F., Choi J., Confalonieri R., Del Ser J., Guidotti R., Hayashi Y., Herrera F., Holzinger A., Jiang R., Khosravi H., Lecue F., Malgieri G., Páez A., Samek W., Schneider J., Speith T., Stumpf S. | Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions | Information Fusion | 2024 | @article{LONGO2024102301, title = {Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions}, journal = {Information Fusion}, volume = {106}, pages = {102301}, year = {2024}, issn = {1566-2535}, doi = {https://doi.org/10.1016/j.inffus.2024.102301}, url = {https://www.sciencedirect.com/science/article/pii/S1566253524000794}, author = {Luca Longo and Mario Brcic and Federico Cabitza and Jaesik Choi and Roberto Confalonieri and Javier Del Ser and Riccardo Guidotti and Yoichi Hayashi and Francisco Herrera and Andreas Holzinger and Richard Jiang and Hassan Khosravi and Freddy Lecue and Gianclaudio Malgieri and Andrés Páez and Wojciech Samek and Johannes Schneider and Timo Speith and Simone Stumpf}, keywords = {Explainable artificial intelligence, XAI, Interpretability, Manifesto, Open challenges, Interdisciplinarity, Ethical AI, Large language models, Trustworthy AI, Responsible AI, Generative AI, Multi-faceted explanations, Concept-based explanations, Causality, Actionable XAI, Falsifiability} } [Close]
| Explainable artificial intelligence • XAI • Interpretability • Manifesto • Open challenges • Interdisciplinarity • Ethical AI • Large language models • Trustworthy AI • Responsible AI • Generative AI • Multi-faceted explanations • Concept-based explanations • Causality • Actionable XAI • Falsifiability |
| 9 | Sullivan R.S., Longo L. | Optimizing Deep Q-Learning Experience Replay with SHAP Explanations: Exploring Minimum Experience Replay Buffer Sizes in Reinforcement Learning | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 |
| Deep Reinforcement Learning • Experience Replay • SHapley Additive exPlanations • eXplainable Artificial Intelligence • Machine Learning |
| 8 | Mekonnen E.T., Dondio P., Longo L. | Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | @INPROCEEDINGS{Mekonnen2023, author={Mekonnen, E. T. and Dondio, Pierpaolo and Longo, Luca}, booktitle={Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)}, title={Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method}, year={2023}, volume={3554}, number={}, pages={71-76}, publisher={CEUR} } [Close]
| Explainable Artificial Intelligence • Deep Learning • Time Series • Classification • Decision-Trees • Machine Learning • Post-hoc |
| 7 | Ahmed T., Longo L. | Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | @inproceedings{AhmedLongo2023, author = {Ahmed, Taufique and Longo, Luca}, title = {Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps}, booktitle = {Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium, co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)}, year = {2023}, pages={65--70}, publisher={CEUR Workshop Proceedings} } [Close]
| Electroencephalography • Convolutional variational autoencoders • latent space interpretation • deep learning • spectral topographic maps • Machine Learning |
| 6 | Vilone G., Longo L. | An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks | Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023) | 2023 | @inproceedings{DBLP:conf/xai/ViloneL23a, author = {Giulia Vilone and Luca Longo}, editor = {Luca Longo}, title = {An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks}, booktitle = {Joint Proceedings of the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lisbon, Portugal, July 26-28, 2023}, series = {{CEUR} Workshop Proceedings}, volume = {3554}, pages = {53--58}, publisher = {CEUR-WS.org}, year = {2023}, url = {https://ceur-ws.org/Vol-3554/paper10.pdf} } [Close]
| Explainable artificial intelligence • Argumentation • Non-monotonic reasoning • Automatic attack extraction • Weighted argumentation frameworks • Inconsistency budget • Machine Learning • Neural Networks |
| 5 | Davydko O., Pavlov V., Longo L. | Selecting Textural Characteristics of Chest X-Rays for Pneumonia Lesions Classification with the Integrated Gradients XAI Attribution Method | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{DavydkoLongo2023, author="Davydko, Oleksandr and Pavlov, Vladimir and Longo, Luca", editor="Longo, Luca", title="Selecting Textural Characteristics of Chest X-Rays for Pneumonia Lesions Classification with the Integrated Gradients XAI Attribution Method", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="671--687", isbn="978-3-031-44064-9" } [Close]
| Explainable artificial intelligence • Neural networks • Texture analysis • Medical image processing • Classification • Machine Learning |
| 4 | Natsiou A., O'Leary S., Longo L. | An Exploration of the Latent Space of a Convolutional Variational Autoencoder for the Generation of Musical Instrument Tones | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{10.1007/978-3-031-44070-0_24, author="Natsiou, Anastasia and O'Leary, Se{\'a}n and Longo, Luca", editor="Longo, Luca", title="An Exploration of the Latent Space of a Convolutional Variational Autoencoder for the Generation of Musical Instrument Tones", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="470--486", isbn="978-3-031-44070-0" } [Close]
| Explainable Artificial Intelligence • Variational Autoencoders • Audio Representations • Audio Synthesis • Latent Feature Importance • Deep Learning • Machine Learning |
| 3 | Gómez Tapia C., Bozic B., Longo L. | Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{GomezLongo2023, author="Tapia, Carlos G{\'o}mez and Bozic, Bojan and Longo, Luca", editor="Longo, Luca", title="Investigating the Effect of Pre-processing Methods on Model Decision-Making in EEG-Based Person Identification", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="131--152", isbn="978-3-031-44070-0" } [Close]
| Electroencephalography • eXplainable Artificial Intelligence • Deep Learning • Signal processing • Attribution xAI methods • Graph Neural Network • Biometrics • Signal-to-noise ratio |
| 2 | Vilone G., Longo L. | Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods | eXplainable Artificial Intelligence, The World Conference (xAI-2023) | 2023 | @InProceedings{ViloneLongo2023, author="Vilone, Giulia and Longo, Luca", editor="Longo, Luca", title="Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods", booktitle="Explainable Artificial Intelligence", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="205--232", isbn="978-3-031-44070-0" } [Close]
| Explainable Artificial Intelligence • Human-centred evaluation • Psychometrics • Machine Learning • Deep Learning • Explainability |
| 1 | Vilone G., Longo L. | A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation | 1st Int. Workshop on Argumentation for eXplainable AI (with 9th Int. Conference on Computational Models of Argument, COMMA 2022) | 2022 | @inproceedings{ViloneLongo2022XAIArg, author = {Giulia Vilone and Luca Longo}, title = {A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation}, booktitle = {1st International Workshop on Argumentation for eXplainable AI co-located with 9th International Conference on Computational Models of Argument (COMMA 2022)}, series = {{CEUR} Workshop Proceedings}, volume = {3209}, publisher = {CEUR-WS.org}, year = {2022}, url = {http://ceur-ws.org/Vol-3209/2119.pdf} } [Close]
| Explainable artificial intelligence • Argumentation • Non-monotonic reasoning • Method evaluation • Metrics of explainability |