Every month, TAILOR principal investigator Fredrik Heintz selects a handful of valuable papers from among those recently published by scientists in our network.
This month's list gathers contributions from several TAILOR partners, each offering insights on a different topic related to Trustworthy AI.
Stay tuned for more groundbreaking research from our diverse community!
Explaining Siamese networks in few-shot learning
A. Fedele, R. Guidotti, and D. Pedreschi
Machine Learning, vol. 113, no. 10, pp. 7723–7760, 2024, doi: 10.1007/s10994-024-06529-8.
Abstract: Machine learning models often struggle to generalize accurately when tested on new class distributions that were not present in their training data. This is a significant challenge for real-world applications that require quick adaptation without retraining. To address this issue, few-shot learning frameworks, which include models such as Siamese Networks, have been proposed. Siamese Networks learn similarity between pairs of records through a metric that can be easily extended to new, unseen classes. However, these systems lack interpretability, which can hinder their use in certain applications. We therefore propose a data-agnostic method to explain the outcomes of Siamese Networks in the context of few-shot learning. Our method relies on a perturbation-based procedure that evaluates the contribution of individual input features to the final outcome, and as such falls under the category of post-hoc explanation methods. We present two variants, one that considers each input feature independently and another that evaluates the interplay between features, along with two perturbation procedures for evaluating feature contributions. Qualitative and quantitative results demonstrate that our method identifies highly discriminant intra-class and inter-class characteristics, as well as predictive behaviors that lead to misclassification by relying on incorrect features.
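For a concrete sense of the independent-feature variant, here is a minimal sketch of a perturbation-based attribution for a Siamese similarity function: occlude one feature of a record at a time and measure how the similarity with its paired record changes. The toy encoder, the zero-baseline occlusion, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of perturbation-based feature attribution for a Siamese
# model. The encoder and the zero-baseline occlusion are illustrative
# assumptions, not the method from the paper.
import numpy as np

def embed(x: np.ndarray) -> np.ndarray:
    # Stand-in for a trained Siamese encoder (a fixed random projection).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(x.shape[-1], 8))
    return np.tanh(x @ W)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Siamese networks compare the two embeddings with a metric;
    # here, negative Euclidean distance (higher = more similar).
    return float(-np.linalg.norm(embed(a) - embed(b)))

def feature_contributions(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Score each feature of `a` by how much occluding it (setting it to
    a zero baseline) changes the similarity with `b`; this is the variant
    that treats each input feature independently."""
    ref = similarity(a, b)
    contrib = np.empty(a.shape[0])
    for i in range(a.shape[0]):
        perturbed = a.copy()
        perturbed[i] = 0.0                  # occlude one feature
        contrib[i] = ref - similarity(perturbed, b)
    return contrib

a = np.array([1.0, -0.5, 2.0, 0.0])
b = np.array([0.9, -0.4, 1.8, 0.1])
print(feature_contributions(a, b))          # per-feature contributions
```

The variant that captures the interplay between features would perturb subsets of features jointly rather than one at a time.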
Generalized Reasoning with Graph Neural Networks by Relational Bayesian Network Encodings
R. Pojer, A. Passerini, and M. Jaeger
Proceedings of Machine Learning Research, 2023, pp. 16:1–16:12.
Abstract: Graph neural networks (GNNs) and statistical relational learning are two different approaches to learning with graph data. The former can provide highly accurate models for specific tasks when sufficient training data is available, whereas the latter supports a wider range of reasoning types and can incorporate manual specifications of interpretable domain knowledge. In this paper we present a method to embed GNNs in a statistical relational learning framework, such that the predictive model represented by the GNN becomes part of a full generative model. This model then supports a wide range of queries, including general conditional probability queries and the computation of most probable configurations of unobserved node attributes or edges. In particular, we demonstrate how the latter type of query can be used to obtain model-level explanations of a GNN in a flexible and interactive manner.
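To make the flavor of such queries concrete, here is a toy sketch of a most-probable-configuration query: complete the unobserved binary attributes of a small graph by enumerating all assignments and scoring each with a pseudo-likelihood. The scoring function stands in for the GNN wrapped in a generative model; the graph, the distribution, and all names are assumptions for illustration.

```python
# Toy sketch of a "most probable configuration" query over unobserved
# node attributes. The pseudo-likelihood below stands in for a GNN
# embedded in a generative model; everything here is illustrative.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3)]     # a small chain graph
observed = {0: 1, 3: 0}              # known binary node attributes
unobserved = [1, 2]                  # attributes to infer

def p_label(node, label, assignment):
    # Stand-in for a predictive distribution P(label | neighborhood):
    # a node tends to agree with its neighbors' assigned labels.
    neigh = ([v for u, v in edges if u == node]
             + [u for u, v in edges if v == node])
    agree = sum(1 for n in neigh if assignment[n] == label)
    return (1 + agree) / (2 + len(neigh))

def score(assignment):
    # Pseudo-likelihood: product of per-node conditional probabilities.
    p = 1.0
    for node, label in assignment.items():
        p *= p_label(node, label, assignment)
    return p

candidates = (
    {**observed, **dict(zip(unobserved, labels))}
    for labels in product([0, 1], repeat=len(unobserved))
)
best = max(candidates, key=score)
print({n: best[n] for n in unobserved})   # most probable completion
```

In this toy setting, a conditional probability query works the same way: sum the (normalized) scores of the configurations consistent with the evidence instead of maximizing over them.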
Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence
C. Metta, A. Beretta, R. Pellungrini, S. Rinzivillo, and F. Giannotti
Bioengineering, vol. 11, no. 4, 2024, doi: 10.3390/bioengineering11040369.
Abstract: This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between interpretability and model performance, our work underscores the significance of local XAI methods in enhancing decision-making processes in healthcare. By providing granular, case-specific insights, local XAI methods like LORE enhance physicians’ and patients’ understanding of machine learning models and their outcomes. Our paper reviews significant contributions to local XAI in healthcare, highlighting its potential to improve clinical decision-making, ensure fairness, and comply with regulatory standards.
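As a rough illustration of how a local rule-based explanation can be read off a black-box prediction, the sketch below perturbs one record into a synthetic neighborhood, labels that neighborhood with the black box, fits a shallow decision tree, and prints its rules. LORE itself generates the neighborhood with a genetic algorithm and extracts factual and counterfactual rules; the Gaussian perturbation and the toy models here are simplifying assumptions, not the authors' implementation.

```python
# Sketch of a LORE-style local explanation via a surrogate decision tree.
# Gaussian neighborhood sampling is a simplification: LORE generates the
# local neighborhood with a genetic algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)     # synthetic "diagnosis" labels
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                                    # the record to explain
neighborhood = x + rng.normal(scale=0.5, size=(1000, 4))
labels = black_box.predict(neighborhood)    # only black-box labels are used

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighborhood, labels)         # local, interpretable surrogate
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(4)]))
```

The path from the tree's root to the leaf covering the record gives the factual rule for the prediction; leaves with the opposite label suggest counterfactual conditions.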