August 2024

New projects funded by Connectivity Fund

The TAILOR Connectivity Fund has given many young researchers the opportunity to conduct research abroad within the TAILOR network of laboratories, encouraging international exchange and strengthening collaboration within the project. We are very pleased to publish the latest projects that have benefited from the Connectivity Fund. Find all the projects funded by the […]

Trustworthy Probabilistic Machine Learning Models

Stefano Teso, Senior Assistant Professor at CIMeC and DISI, University of Trento

There is an increasing need for Artificial Intelligence (AI) and Machine Learning (ML) models that can reliably output predictions matching our expectations. Models learned from data should comply with specifications of desirable behavior supplied or elicited from humans, and should avoid overconfidence, i.e., being […]
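
The overconfidence mentioned here is the standard notion from probabilistic ML: the model's stated confidence exceeds its empirical accuracy. As a minimal, hypothetical sketch (the function and toy data below are our own illustration, not the project's code), one common way to quantify it is the expected calibration error (ECE), which compares average confidence and accuracy within confidence bins:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: per-bin |accuracy - mean confidence|, weighted by bin size.
    A large value signals mis- (often over-) confidence."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy example: a model claiming ~90% confidence but only 60% accurate
conf = np.array([0.90, 0.92, 0.88, 0.91, 0.90])
hits = np.array([1, 0, 1, 0, 1])
print(expected_calibration_error(conf, hits))  # large gap => overconfident
```

A well-calibrated model yields an ECE near zero; the toy data above, claiming roughly 90% confidence at 60% accuracy, produces a large gap.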

Leveraging Uncertainty for Improved Model Performance

Luuk de Jong, Master's student at Universiteit Leiden

This project investigates the integration of a reject option into machine learning models to enhance reliability and explainability. By rejecting uncertain predictions, the model avoids acting on low-confidence decisions and thus becomes more reliable. The core contribution of this work is the development and […]
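
To make the reject-option idea concrete, here is a minimal, hypothetical sketch (the helper name, sentinel value, and threshold are our own, and any probabilistic classifier could stand in for the scikit-learn model used here): the classifier abstains whenever its top predicted class probability falls below a confidence threshold.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

REJECT = -1  # sentinel label meaning "abstain, defer to a human"

def predict_with_reject(model, X, threshold=0.8):
    """Return the model's prediction, or REJECT when the top class
    probability falls below the confidence threshold."""
    proba = model.predict_proba(X)
    top = proba.max(axis=1)
    preds = model.classes_[proba.argmax(axis=1)]
    return np.where(top >= threshold, preds, REJECT)

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
clf = LogisticRegression().fit(X, y)
labels = predict_with_reject(clf, X, threshold=0.9)
print("fraction rejected:", (labels == REJECT).mean())
```

The threshold trades coverage for reliability: raising it rejects more inputs but leaves the accepted predictions more trustworthy, which is the trade-off at the heart of reject-option approaches.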

TEC4CPC – Towards a Trustworthy and Efficient Companion for Car Part Catalogs

Patrick Lang, B.Sc. at N4

N4, a leading provider of procurement platforms in the automotive sector, is facing the challenge of making its catalogs for car parts (N4Parts) more user-friendly. Customers use these catalogs both to purchase parts and to obtain information such as installation instructions and maintenance intervals. The use of such […]

Reconciling AI explanations with human expectations towards trustworthy AI

Jeff Clark, Research Fellow at the University of Bristol

With the widespread deployment of AI systems, it becomes increasingly important that users are equipped to scrutinise these models and their outputs. This is particularly true for applications in high-stakes domains such as healthcare. We propose to conduct research in the context of explainable AI, […]

ESSAI & ACAI 2023 summer school

Vida Groznik, PhD student at the University of Ljubljana, Faculty of Computer and Information Science

The ESSAI & ACAI 2023 summer school took place in Ljubljana, Slovenia, from July 24 to 28, 2023. The school comprised two main parts: the first European Summer School on Artificial Intelligence (ESSAI), which ran in six parallel sessions, […]

Alzheimer’s Diagnosis: Multimodal Explainable AI for Early Detection and Personalized Care

Nadeem Qazi, Senior Lecturer in AI and Machine Learning at the University of East London, UK

Alzheimer’s disease (AD) is becoming more common, emphasizing the need for early detection and prediction to improve patient outcomes. Current diagnostic methods often come too late, missing the opportunity for early intervention. This research seeks to develop advanced explainable AI models that […]

Exploring Prosocial Dynamics in Child-Robot Interactions: Adaptation, Measurement, and Trust

Ana Isabel Caniço Neto, Assistant Researcher at the University of Lisbon

Social robots are increasingly finding application in diverse settings, including our homes and schools, exposing children to interactions with multiple robots, individually or in groups. Understanding how to design robots that can effectively interact and cooperate with children in these hybrid groups, in […]

Types of Contamination in AI Evaluation: Reasoning and Triangulation

Behzad Mehrbakhsh, PhD student at Universitat Politècnica de València

A comprehensive and accurate evaluation of AI systems is indispensable for advancing the field and fostering a trustworthy AI ecosystem. AI evaluation results have a significant impact on both academic research and industrial applications, ultimately determining which products or services are deemed effective, safe, and reliable […]
