TAILOR selected papers: January

Every month we acknowledge some of the most valuable TAILOR papers, selected by TAILOR principal investigator Fredrik Heintz from among the papers published by scientists in our network.

The list gathers contributions from different TAILOR partners, each offering insights on a different topic related to Trustworthy AI.

Stay tuned for more insights and groundbreaking research from our diverse community!

Generating fine-grained surrogate temporal networks

A. Longa, G. Cencetti, S. Lehmann, A. Passerini

https://www.nature.com/articles/s42005-023-01517-1

Communications Physics, volume 7, article number 22 (2024)

Abstract: Temporal networks are essential for modeling and understanding time-dependent systems, from social interactions to biological systems. However, real-world data to construct meaningful temporal networks are expensive to collect or unshareable due to privacy concerns. Generating arbitrarily large and anonymized synthetic graphs with the properties of real-world networks, namely surrogate networks, is a potential way to bypass the problem. However, it is not easy to build surrogate temporal networks which do not lack information on the temporal and/or topological properties of the input network and their correlations. Here, we propose a simple and efficient method that decomposes the input network into star-like structures evolving in time, used in turn to generate a surrogate temporal network. The model is compared with state-of-the-art models in terms of similarity of the generated networks with the original ones, showing its effectiveness and its efficiency in terms of execution time. The simplicity of the algorithm makes it interpretable, extendable and scalable.
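
For readers who want a feel for the star-decomposition idea, here is a minimal Python sketch: it groups a temporal edge list into (time, ego) stars and re-samples their sizes to emit a surrogate edge list. The function names (star_decomposition, generate_surrogate) and the uniform re-attachment step are illustrative assumptions, not the authors' algorithm; in particular, the paper's method also preserves how each ego's neighbourhood evolves across consecutive timestamps, which this toy deliberately omits.

```python
import random
from collections import defaultdict

def star_decomposition(contacts):
    """Group a temporal edge list [(t, u, v), ...] into star-like
    structures: for each (time, ego) pair, the set of neighbours
    the ego touches at that time."""
    stars = defaultdict(set)
    for t, u, v in contacts:
        stars[(t, u)].add(v)
        stars[(t, v)].add(u)
    return stars

def generate_surrogate(contacts, horizon, nodes, seed=0):
    """Emit a surrogate edge list by re-sampling observed star sizes
    and attaching them to random egos at each new timestamp.
    (Illustrative only: ignores temporal correlations.)"""
    rng = random.Random(seed)
    sizes = [len(nbrs) for nbrs in star_decomposition(contacts).values()]
    surrogate = []
    for t in range(horizon):
        ego = rng.choice(nodes)
        k = min(rng.choice(sizes), len(nodes) - 1)
        for v in rng.sample([n for n in nodes if n != ego], k):
            surrogate.append((t, ego, v))
    return surrogate

# Toy input: contacts among five nodes over three timestamps.
observed = [(0, "a", "b"), (0, "a", "c"), (1, "b", "c"), (2, "a", "d")]
print(generate_surrogate(observed, horizon=3, nodes=list("abcde")))
```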


Structural causal models reveal confounder bias in linear program modelling

M. Zečević, D.S. Dhami, K. Kersting

https://arxiv.org/abs/2105.12697
https://link.springer.com/article/10.1007/s10994-023-06431-9

Machine Learning, 2024, 1-21

Abstract: Recent years have been marked by extended research on adversarial attacks, especially on deep neural networks. With this work we pose and investigate the question of whether the phenomenon might be more general in nature, that is, whether adversarial-style attacks exist outside classical classification tasks. Specifically, we investigate optimization problems, as they constitute a fundamental part of modern AI research. To this end, we consider the base class of optimizers, namely Linear Programs (LPs). On our initial attempt at a naïve mapping between the formalism of adversarial examples and LPs, we quickly identify the key ingredients missing for making sense of a reasonable notion of adversarial examples for LPs. Intriguingly, the formalism of Pearl’s notion of causality allows for the right description of adversarial-like examples for LPs. Characteristically, we show the direct influence of the Structural Causal Model (SCM) on the subsequent LP optimization, which ultimately exposes a notion of confounding in LPs (inherited from said SCM) that allows for adversarial-style attacks. We provide both a general formal proof and existential proofs of such intriguing LP parameterizations based on SCMs for three combinatorial problems, namely Linear Assignment, Shortest Path and a real-world problem of energy systems.
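
To make the confounding mechanism concrete, here is a minimal Python sketch (not the authors' formal construction): a toy SCM in which a latent confounder z drives both cost coefficients of a two-route LP, so that intervening on z flips the LP's optimum. The functions scm_costs and solve_route_lp, and the specific cost equations, are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def scm_costs(z, rng):
    """Toy structural causal model: a latent confounder z drives both
    LP cost coefficients, plus small independent noise terms."""
    c1 = 1.0 + z + rng.normal(scale=0.1)
    c2 = 2.0 - z + rng.normal(scale=0.1)
    return np.array([c1, c2])

def solve_route_lp(c):
    """Choose between two routes: minimise c @ x s.t. x1 + x2 = 1, x >= 0."""
    res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[1.0],
                  bounds=[(0, None), (0, None)], method="highs")
    return res.x

rng = np.random.default_rng(0)
for z in (0.0, 2.0):  # intervening on the confounder flips the optimum
    x = solve_route_lp(scm_costs(z, rng))
    print(f"z = {z}: chosen route = {int(np.argmax(x))}")
```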


Population synthesis as scenario generation for simulation-based planning under uncertainty

Dyer, J., Quera-Bofarull, A., Bishop, N., Farmer, J. D., Calinescu, A., & Wooldridge, M.

https://ora.ox.ac.uk/objects/uuid:87663b7f-60ca-44f3-8fa5-b9fd501e6270/download_file?file_format=application%2Fpdf&safe_filename=Dyer_et_al_2023_Population_synthesis_as.pdf&type_of_work=Conference+item

Abstract: Agent-based models have the potential to become instrumental tools in real-world decision-making, equipping policy-makers with the ability to experiment with high-fidelity representations of complex systems. Such models often rely crucially on the generation of synthetic populations with which the model is simulated, and their behaviour can depend strongly on the population’s composition. Existing approaches to synthesising populations attempt to model distributions over agent-level attributes on the basis of data collected from a real-world population. Unfortunately, these approaches are of limited utility when data is incomplete or altogether absent – such as during novel, unprecedented circumstances – so that considerable uncertainty regarding the characteristics of the population being modelled remains, even after accounting for any such data. What is therefore needed in these cases are tools to simulate and plan for the possible future behaviours of the complex system that can be generated by populations that are consistent with this remaining uncertainty. To this end, we frame the problem of synthesising populations in agent-based models as a problem of scenario generation. The framework that we present is designed to generate synthetic populations that are on the one hand consistent with any persisting uncertainty, while on the other hand matching closely a target, user-specified scenario that the decision-maker would like to explore and plan for. We propose and compare two generic approaches to generating synthetic populations that produce target scenarios, and demonstrate through simulation studies that these approaches are able to automatically generate synthetic populations whose behaviours match the target scenario, thereby facilitating simulation-based planning under uncertainty.

Forthcoming at the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024)
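
As a rough illustration of the scenario-generation framing, the Python sketch below uses plain rejection sampling: candidate populations are drawn from a broad prior, pushed through a stand-in simulator, and kept only if the aggregate outcome matches a user-specified target scenario. The names, distributions and the one-number simulator are all assumptions made for this sketch; the paper proposes and compares two more sophisticated generic approaches.

```python
import numpy as np

def sample_population(n_agents, rng):
    """Draw agent-level attributes from a broad prior that encodes
    the remaining uncertainty about the population."""
    return rng.beta(2.0, 5.0, size=n_agents)  # e.g. contact propensities

def simulate(population, rng):
    """Tiny stand-in for an agent-based model: the aggregate outcome
    is the fraction of agents that 'activate' given their propensity."""
    return float(np.mean(rng.random(population.size) < population))

def generate_scenario_populations(target, tol, n_agents=500,
                                  n_draws=2000, seed=0):
    """Rejection-sampling sketch: keep synthetic populations whose
    simulated outcome lands within tol of the target scenario."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_draws):
        pop = sample_population(n_agents, rng)
        if abs(simulate(pop, rng) - target) < tol:
            kept.append(pop)
    return kept

# Plan for a scenario in which roughly 30% of agents activate.
populations = generate_scenario_populations(target=0.30, tol=0.02)
print(f"{len(populations)} populations consistent with the scenario")
```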

On the evaluation of the symbolic knowledge extracted from black boxes

F. Sabbatini, R. Calegari

https://link.springer.com/article/10.1007/s43681-023-00406-1

Abstract: As opaque decision systems are being increasingly adopted in almost any application field, issues about their lack of transparency and human readability are a concrete concern for end-users. Amongst existing proposals to associate human-interpretable knowledge with accurate predictions provided by opaque models, there are rule extraction techniques, capable of extracting symbolic knowledge out of opaque models. The quantitative assessment of the extracted knowledge’s quality is still an open issue. For this reason, we provide here a first approach to measure the knowledge quality, encompassing several indicators and providing a compact score reflecting readability, completeness and predictive performance associated with a symbolic knowledge representation. We also discuss the main criticalities behind our proposal, related to the readability assessment and evaluation, to push future research efforts towards a more robust score formulation.

AI and Ethics, 2024
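
To illustrate what a compact score might look like, the sketch below aggregates a readability proxy, completeness (coverage) and predictive performance (fidelity) into a single weighted geometric mean. The indicators, weights and aggregation rule here are hypothetical stand-ins, not the formulation proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class ExtractedKnowledge:
    n_rules: int          # number of extracted rules
    mean_premises: float  # average premises per rule
    coverage: float       # fraction of inputs the rules cover (completeness)
    fidelity: float       # agreement with the opaque model's predictions

def quality_score(k: ExtractedKnowledge,
                  w_read=1.0, w_comp=1.0, w_pred=1.0):
    """Compact score in [0, 1]: weighted geometric mean of a readability
    proxy, completeness and predictive performance. A zero in any
    component zeroes the score, so all three properties must hold."""
    readability = 1.0 / (1.0 + 0.1 * k.n_rules * k.mean_premises)
    total = w_read + w_comp + w_pred
    return (readability ** w_read *
            k.coverage ** w_comp *
            k.fidelity ** w_pred) ** (1.0 / total)

compact = ExtractedKnowledge(n_rules=5, mean_premises=2, coverage=0.9, fidelity=0.95)
sprawling = ExtractedKnowledge(n_rules=200, mean_premises=6, coverage=0.99, fidelity=0.99)
print(quality_score(compact), quality_score(sprawling))  # compact wins
```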