April 2024

Continual Self-Supervised Learning

Giacomo Cignoni, Research Fellow at the University of Pisa

Learning continually from non-stationary data streams is a challenging research topic that has grown in popularity in recent years. The ability to learn, adapt, and generalize continually, and to do so efficiently, appears fundamental for the more sustainable development of Artificial Intelligence systems. However, research […]


Tractable and Explainable Probabilistic AI

Lennert De Smet, PhD candidate at KU Leuven

Transparency and technical robustness are two fundamental requirements for AI systems under the European Union AI Act, especially in higher-risk domains. Transparency is intricately related to the notion of explainability, allowing an AI system to accurately describe the reasoning behind its predictions. Through such explanations does the system […]


Trustworthy, Ethical and Beneficial-to-All Multiagent Systems Solutions for Social Ridesharing and the Hospitality Industry

Georgios Chalkiadakis, Professor at the Technical University of Crete

Current mobility-as-a-service platforms have departed from the original objectives of the sharing-economy-inspired social ridesharing paradigm: regrettably, they view drivers as taxi workers; focus on profit maximization rather than the fair allocation of travel costs; and disregard essential private preferences of users (relating, for instance, to their feeling of […]


Meta-learning for scalable multi-objective Bayesian optimization

Jiarong Pan, PhD candidate at the Bosch Center for Artificial Intelligence

Abstract: Many real-world applications involve multiple, potentially competing objectives. For instance, for a model deciding whether to grant or deny loans, ensuring decisions that are both accurate and fair is critical. Multi-objective Bayesian optimization (MOBO) is a sample-efficient technique for optimizing an expensive black-box function across multiple objectives.
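To give a flavor of the MOBO setting described above, here is a minimal sketch of multi-objective Bayesian optimization via random scalarization on a toy one-dimensional problem. The two objectives, the hand-rolled Gaussian-process surrogate, and all hyperparameters are illustrative assumptions, not details of the project itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two cheap stand-ins for expensive black-box objectives (to be minimized).
# A real MOBO problem would evaluate a costly simulator or training run here.
def f1(x): return (x - 0.2) ** 2
def f2(x): return (x - 0.8) ** 2

def gp_posterior(X, y, Xq, ls=0.2, noise=1e-6):
    """Minimal GP posterior mean/std with an RBF kernel (no hyperparameter fitting)."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

Xq = np.linspace(0.0, 1.0, 201)        # discrete candidate set
X = rng.uniform(0.0, 1.0, 3)           # initial design
Y = np.stack([f1(X), f2(X)], axis=1)   # observed objective values, one row per point

for _ in range(15):
    w = rng.dirichlet([1.0, 1.0])      # random scalarization weights
    y_scal = Y @ w                     # scalarize the two objectives
    mu, sd = gp_posterior(X, y_scal, Xq)
    lcb = mu - 2.0 * sd                # lower confidence bound (we minimize)
    x_new = Xq[np.argmin(lcb)]         # acquire the most promising candidate
    X = np.append(X, x_new)
    Y = np.vstack([Y, [f1(x_new), f2(x_new)]])

# Approximate the Pareto front: keep the non-dominated observations.
pareto = [i for i in range(len(Y))
          if not any((Y[j] <= Y[i]).all() and (Y[j] < Y[i]).any()
                     for j in range(len(Y)) if j != i)]
print(f"{len(pareto)} non-dominated points among {len(Y)} evaluations")
```

Re-drawing the scalarization weights each iteration steers successive evaluations toward different trade-offs between the objectives, so the collected non-dominated points approximate the Pareto front with few expensive evaluations.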


Using robustness distributions to better understand fairness in Neural Networks

Annelot Bosman, PhD candidate at Universiteit Leiden

This project aims to investigate fairness from a new perspective, namely by using robustness distributions, introduced in previous work. Investigating robustness in neural networks is very computationally expensive, and the community has therefore focused on increasing verification speed. Robustness distributions, although expensive to obtain, have […]


TAILOR Selected Papers: April

Every month, we acknowledge some valuable TAILOR papers, selected from those published by scientists belonging to our network by TAILOR principal investigator Fredrik Heintz. The list of the most valuable papers gathers contributions from different TAILOR partners, each providing valuable insights on different topics related to Trustworthy AI. Stay tuned for other valuable insights and […]
