Using robustness distributions to better understand fairness in Neural Networks

Annelot Bosman

PhD candidate at Universiteit Leiden

This project aims to investigate fairness from a new perspective, namely by using robustness distributions, introduced in previous work. Investigating robustness in neural networks is computationally very expensive, and as such the community has focused on increasing verification speed. Robustness distributions, although expensive to obtain, have shown great potential for better understanding the robustness of neural networks. Fairness is a vitally important topic, also within the TAILOR network, as we aim to ensure that AI systems employed in real-world applications do not lead to discrimination. Our work investigates image classification neural networks. We aim to use robustness distributions to investigate class-fairness and to provide a new perspective: class-fairness is currently usually studied for binary classification (Tian, Zhu, Liu, & Zhou, 2022) using relatively simple fairness concepts, whereas robustness distributions could provide a holistic view of the class-fairness of an entire network.
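As a rough illustration of the idea behind a robustness distribution (not the verification-based method used in this project), the following hypothetical sketch estimates, for a toy linear classifier, the smallest perturbation along a random direction that flips each prediction, and groups these critical magnitudes by predicted class. Comparing the resulting per-class distributions is one simple way to look for class-level robustness disparities; all names and the model here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a linear classifier over 2D inputs with two classes.
# (Stand-in for a trained image classifier.)
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def predict(x):
    return int(np.argmax(W @ x))

def critical_epsilon(x, direction, eps_hi=5.0, tol=1e-3):
    """Smallest perturbation magnitude along `direction` that flips the
    prediction on x, found by bisection; np.inf if no flip up to eps_hi."""
    base = predict(x)
    if predict(x + eps_hi * direction) == base:
        return np.inf
    lo, hi = 0.0, eps_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if predict(x + mid * direction) == base:
            lo = mid  # still the original class: flip lies further out
        else:
            hi = mid  # already flipped: flip lies closer in
    return hi

# Empirical per-class robustness distribution over random inputs.
dists = {0: [], 1: []}
for _ in range(200):
    x = rng.normal(size=2)
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)          # unit-length perturbation direction
    eps = critical_epsilon(x, d)
    if np.isfinite(eps):
        dists[predict(x)].append(eps)

# Summarise each class's distribution by its median critical magnitude.
for c, eps_vals in sorted(dists.items()):
    print(f"class {c}: n={len(eps_vals)}, median eps={np.median(eps_vals):.3f}")
```

A large gap between the per-class medians (or, more informatively, between the full distributions) would suggest that one class is systematically easier to perturb, which is the kind of class-fairness signal the project aims to study with proper verification tools instead of this random-direction proxy.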

Keywords: Neural Network Verification, robustness, fairness

Scientific area: Artificial Intelligence

Bio: I am a third-year PhD student at Leiden University, the Netherlands. In my work, I research the robustness of Neural Networks against input perturbations. Besides this, I am involved in the organisation of WP7 on AutoAI, and I enjoy supervising master's thesis students.

Visiting period: 01/03/2024 until 15/05/2024 at RWTH Aachen