Causal Analysis for Fairness of AI Models

Martina Cinquini

PhD student at the University of Pisa

Abstract: Artificial Intelligence (AI) has become ubiquitous in many sensitive domains where individuals and society can be harmed by its outputs. In an attempt to reduce the ethical and legal implications of AI-based decisions, the scientific community's interest in fairness-aware Machine Learning has grown considerably in recent years. However, most existing approaches rely on correlation-based definitions of discrimination, which are too simplistic to assess fairness properly. The logic of AI decisions must be lifted from purely statistical modeling of the relations among variables to an awareness of cause-and-effect impacts. Causal models enable the formalization of complex realms and allow for consequential reasoning over cause-effect relations. However, current state-of-the-art causality-aware fairness methods do not tackle the causal graph generation task. In this scenario, the main goal of my proposal is to develop a causal discovery algorithm tuned for fairness. Such an approach aims to infer plausible causal models from observational data and to adopt strategies based on the discovered models to detect, prevent, and quantify harmful disparities.

Keywords: Causality, Causal Discovery, Fairness, Machine Learning
Scientific area: Artificial Intelligence

Martina Cinquini is a second-year PhD student in Computer Science at the University of Pisa. Her research focuses on designing and developing effective procedures for uncovering cause-effect relationships from cross-sectional data, with the goal of mitigating discrimination and enhancing the interpretability of AI decision-making systems.