Collaboration Exchange Fund (CEF)

Improving inverse abstraction based neural network verification using automated machine learning techniques

Matthias König, PhD at Leiden University. Abstract: This project seeks to advance the state of the art in formal neural network verification. Formal neural network verification methods check whether a trained neural network, for example an image classifier, satisfies certain properties or guarantees regarding its behaviour, such as correctness, robustness, or safety, under various inputs …


Towards Stable and Robust Learning with Limited Labelled Data: Investigating the Impact of Data Choice

Branislav Pecher, PhD at Kempelen Institute of Intelligent Technologies, member of Slovak.AI. Abstract: Learning with limited labelled data, such as meta-learning, transfer learning or in-context learning, aims to effectively train a model using only a small number of labelled samples. However, there is still limited understanding of the settings or characteristics required for these approaches …


Graph Representation Learning for Solving Combinatorial Optimization Problems

Ya Song, PhD student at Eindhoven University of Technology. Abstract: In the field of combinatorial optimization, many studies have considered combining machine learning with optimization algorithms, proposing so-called learning-based optimization algorithms. Compared to traditional handcrafted algorithms, these methods can automatically extract relevant knowledge from training data and require less domain expertise. In …


Causal Analysis for Fairness of AI Models

Martina Cinquini, PhD student at the University of Pisa. Abstract: Artificial Intelligence (AI) has become ubiquitous in many sensitive domains where individuals and society can potentially be harmed by its outputs. In an attempt to reduce the ethical or legal implications of AI-based decisions, the scientific community’s interest in fairness-aware Machine Learning has been increasingly …


How we trust robots: Attribution of Intentionality, Anthropomorphism and Uncanny Valley Effect

Martina Bacaro, PhD student at the University of Bologna – Alma Mater Studiorum. Abstract: Interactions between humans and robots are increasing in both specialized and everyday scenarios. Trustworthiness is acknowledged as a key factor for successful engagement between humans and robots. For humans to understand and rely on robots’ actions and intentions, they need to …
