Using robustness distributions to better understand fairness in Neural Networks

Annelot Bosman, PhD at Universiteit Leiden. This project aims to investigate fairness from a new perspective, namely by using robustness distributions, introduced in previous work. Investigating robustness in neural networks is very computationally expensive, and as such the community has directed its focus on increasing verification speed. Robustness distributions, although expensive to obtain, have […]

TAILOR Selected Papers: April

Every month, we want to acknowledge some valuable TAILOR papers, selected by TAILOR principal investigator Fredrik Heintz from among the papers published by scientists belonging to our network. The list of the most valuable papers gathers contributions from different TAILOR partners, each providing valuable insights on different topics related to Trustworthy AI. Stay tuned for other valuable insights and […]

Translating between AI Evaluation and Job Tasks in the human workplace for trustworthy and reliable AI deployment

Marko Tesic, Post-doc at LCFI, University of Cambridge, UK. Recent advancements in AI, particularly in language modeling, have rekindled concerns about the potential automation of certain roles within the human workforce. To better understand which roles are susceptible to automation and to ensure the trustworthy and reliable deployment of AI, I aim to establish a […]

Evaluation of cognitive capabilities for LLMs

Lorenzo Pacchiardi, Post-doc at the University of Cambridge. Artificial Intelligence (AI) systems (such as reinforcement-learning agents and Large Language Models, or LLMs) are typically evaluated by testing them on a benchmark and reporting an aggregated score. As benchmarks are made up of instances demanding various capability levels to complete, the aggregated score is uninformative of the […]

Large Language Models for Media and Democracy: Wrecking or Saving Society?

Davide Ceolin, Piek Vossen, Ilia Markov, Catholijn Jonker, Pradeep Murukannaiah; Senior Researcher (Ceolin), Full Professor (Vossen, Jonker), Assistant Professor (Markov, Murukannaiah). Over the past years, foundational models, including large language models and multi-modal systems, have significantly advanced the possibilities regarding the understanding, analysis, and generation of human language. However, from the extensive and widespread use of […]

Ilia Markov

Ilia Markov, Assistant Professor at the Computational Linguistics & Text Mining Lab, Vrije Universiteit Amsterdam. Ilia Markov is an Assistant Professor at the Vrije Universiteit Amsterdam. His research interests include hate speech detection, generation of counter-narratives against hate speech, and authorship analysis-related tasks such as authorship attribution and author profiling. Scientific area: Computational Linguistics, Natural Language […]

Grounded World Models for Higher Layers of Meaning

Stefano de Giorgis, Post-doc researcher at the Institute for Cognitive Sciences and Technologies – National Research Council (ISTC-CNR), Italy. The project involves knowledge representation techniques, neuro-symbolic AI, and cognitive semantics. According to embodied cognition principles, individuals construct a cognitive representation and conceptualization of the external world based on their perceptual capabilities, according to their affordances, […]

Multi-agent scheduling in a human-robot collaborative warehouse

Bram Renting, PhD at Leiden University and Delft University of Technology. In cooperative multi-agent environments, agents can be interdependent in completing tasks. We consider environments where agents schedule future interactions with others they depend on to perform the task. More specifically, our project focuses on human-robot warehouses where humans pick products from shelves and robots transport […]

Evolution of Theory of Mind

Harmen de Weerd, Assistant Professor at the University of Groningen. In social interactions, humans often make use of their “theory of mind”, which refers to their ability to reason about the unobservable mental content of others. Humans can even use their theory of mind to reason about the way others use theory of mind. However, such higher-order […]

AI Safety Working group – European kickoff workshop

Xavier Fresquet, Deputy Director, PhD at Sorbonne Université. AI systems offer the potential for substantial benefits to society, but they are not without risks, such as toxicity, misinformation, and bias. As with other complex technologies, society needs industry-standard safety testing to realize the benefits while minimizing the risks. To address this problem, an AI Safety […]
