Connectivity Fund

Trustworthy Probabilistic Machine Learning Models

Stefano Teso, Senior Assistant Professor at CIMeC and DISI, University of Trento

There is an increasing need for Artificial Intelligence (AI) and Machine Learning (ML) models that can reliably output predictions matching our expectations. Models learned from data should comply with specifications of desirable behavior supplied by or elicited from humans, and avoid overconfidence, i.e., being […]

Leveraging Uncertainty for Improved Model Performance

Luuk de Jong, Master's student at Universiteit Leiden

This project investigates the integration of a reject option in machine learning models to enhance reliability and explainability. By rejecting uncertain predictions, we can mitigate the risks associated with low-confidence decisions, making the model more reliable. The core contribution of this work is the development and …

TEC4CPC – Towards a Trustworthy and Efficient Companion for Car Part Catalogs

Patrick Lang, B.Sc., at N4

N4, a leading provider of procurement platforms in the automotive sector, is facing the challenge of making its catalogs for car parts (N4Parts) more user-friendly. These catalogs are used by customers both to purchase parts and to obtain information such as installation instructions and maintenance intervals. The use of such …

Reconciling AI explanations with human expectations towards trustworthy AI

Jeff Clark, Research Fellow at the University of Bristol

With the widespread deployment of AI systems, it becomes increasingly important that users are equipped to scrutinise these models and their outputs. This is particularly true for applications in high-stakes domains such as healthcare. We propose to conduct research in the context of explainable AI, …

Alzheimer’s Diagnosis: Multimodal Explainable AI for Early Detection and Personalized Care

Nadeem Qazi, Senior Lecturer in AI and Machine Learning at the University of East London, UK

Alzheimer's disease (AD) is becoming more common, emphasizing the need for early detection and prediction to improve patient outcomes. Current diagnostic methods often come too late, missing the opportunity for early intervention. This research seeks to develop advanced explainable AI models that …

Exploring Prosocial Dynamics in Child-Robot Interactions: Adaptation, Measurement, and Trust

Ana Isabel Caniço Neto, Assistant Researcher at the University of Lisbon

Social robots are increasingly finding application in diverse settings, including our homes and schools, exposing children to interactions with multiple robots, individually or in groups. Understanding how to design robots that can effectively interact and cooperate with children in these hybrid groups, in …

Types of Contamination in AI Evaluation: Reasoning and Triangulation

Behzad Mehrbakhsh, PhD student at Universitat Politècnica de València

A comprehensive and accurate evaluation of AI systems is indispensable for advancing the field and fostering a trustworthy AI ecosystem. AI evaluation results have a significant impact on both academic research and industrial applications, ultimately determining which products or services are deemed effective, safe, and reliable …

CLAIRE | Rising Research Network: AI Research and Mental Well-Being Workshop 2nd edition

Marie Anastacio, PhD candidate at Leiden University and RWTH Aachen

After the successful execution of our 2023 workshop in collaboration with the TAILOR-ESSAI Summer School, we propose to organise a second edition at ESSAI2024. The event will focus on fostering a community of young AI researchers in Europe, supporting AI researchers and promoting mental well-being for …

Machine Learning Modalities for Materials Science

Milica Todorovic, Associate Professor at the University of Turku

In the past decade, artificial intelligence algorithms have demonstrated tremendous potential and impact in speeding up the processing, optimisation, and discovery of new materials. The objective of the workshop and school “Machine Learning Modalities for Materials Science” (MLM4MS 2024) was to bring together the community of …
