Connectivity Fund

Translating between AI Evaluation and Job Tasks in the human workplace for trustworthy and reliable AI deployment

Marko Tesic, Post-doc at LCFI, University of Cambridge, UK

Recent advancements in AI, particularly in language modeling, have rekindled concerns about the potential automation of certain roles within the human workforce. To better understand which roles are susceptible to automation, and to ensure the trustworthy and reliable deployment of AI, I aim to establish a […]


Evaluation of cognitive capabilities for LLMs

Lorenzo Pacchiardi, Post-doc at University of Cambridge

Artificial Intelligence (AI) systems, such as reinforcement-learning agents and Large Language Models (LLMs), are typically evaluated by testing them on a benchmark and reporting an aggregated score. Since benchmarks consist of instances that demand various capability levels to be completed, the aggregated score is uninformative of the
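The point about aggregation can be illustrated with a minimal sketch (hypothetical data, not from the project): two systems with identical aggregated benchmark scores can differ sharply once instances are grouped by the capability level they demand.

```python
# Hypothetical per-instance results (1 = success, 0 = failure),
# grouped by the capability level each instance demands.
results_a = {"low": [1, 1, 1, 1], "high": [0, 0, 0, 0]}
results_b = {"low": [1, 0, 1, 0], "high": [1, 0, 1, 0]}

def aggregate(results):
    """Single benchmark-style score: mean over all instances."""
    flat = [r for scores in results.values() for r in scores]
    return sum(flat) / len(flat)

def by_level(results):
    """Success rate per demanded capability level."""
    return {level: sum(s) / len(s) for level, s in results.items()}

print(aggregate(results_a), aggregate(results_b))  # 0.5 0.5 -- indistinguishable
print(by_level(results_a))  # {'low': 1.0, 'high': 0.0}
print(by_level(results_b))  # {'low': 0.5, 'high': 0.5}
```

Both systems score 0.5 in aggregate, yet system A always succeeds on low-demand instances and always fails on high-demand ones, while system B succeeds half the time regardless of demand.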


Large Language Models for Media and Democracy: Wrecking or Saving Society?

Davide Ceolin, Piek Vossen, Ilia Markov, Catholijn Jonker, Pradeep Murukannaiah; Senior Researcher (Ceolin), Full Professor (Vossen, Jonker), Assistant Professor (Markov, Murukannaiah)

Over the past years, foundation models, including large language models and multi-modal systems, have significantly advanced the possibilities for understanding, analyzing, and generating human language. However, from the extensive and widespread use of


Ilia Markov

Ilia Markov, Assistant Professor at the Computational Linguistics & Text Mining Lab, Vrije Universiteit Amsterdam

Ilia Markov is an Assistant Professor at the Vrije Universiteit Amsterdam. His research interests include hate speech detection, generation of counter-narratives against hate speech, and authorship-analysis tasks such as authorship attribution and author profiling. Scientific area: Computational Linguistics, Natural Language


Grounded World Models for Higher Layers of Meaning

Stefano de Giorgis, Post-doc researcher at the Institute for Cognitive Sciences and Technologies – National Research Council (ISTC-CNR), Italy

The project involves knowledge representation techniques, neuro-symbolic AI, and cognitive semantics. According to embodied cognition principles, individuals construct a cognitive representation and conceptualization of the external world based on their perceptual capabilities and their affordances,


Multi-agent scheduling in a human-robot collaborative warehouse

Bram Renting, PhD candidate at Leiden University and Delft University of Technology

In cooperative multi-agent environments, agents can be interdependent in completing tasks. We consider environments where agents schedule future interactions with others they depend on to perform the task. More specifically, our project focuses on human-robot warehouses where humans pick products from shelves and robots transport


Evolution of Theory of Mind

Harmen de Weerd, Assistant Professor at University of Groningen

In social interactions, humans often make use of their "theory of mind", which refers to their ability to reason about the unobservable mental content of others. Humans can even use their theory of mind to reason about the way others use theory of mind. However, such higher-order


AI Safety Working group – European kickoff workshop

Xavier Fresquet, Deputy Director, PhD, at Sorbonne Université

AI systems offer the potential for substantial benefits to society, but they are not without risks, such as toxicity, misinformation, and bias. As with other complex technologies, society needs industry-standard safety testing to realize the benefits while minimizing the risks. To address this problem, an AI Safety


Building trust in administrative automation through the use of LLMs in the public sector of Sweden

Niclas Willem Fock, CEO of Santa Anna IT Research Institute, Sweden (Linköping University, Department of Electrical Engineering)

Introduction: Santa Anna IT Research Institute ("Santa Anna"), through its membership in the consortium "AI Sweden", has collaborated on the development of the GPT-SW3 LLMs. Since November 2023, the GPT-SW3 models have been open for further development, research, and applications.


Eindhoven – Leuven – Aachen AI Workshop Series on Secure, Reliable and Trustworthy AI

Alexa Kodde, Project Manager at CLAIRE

CLAIRE aims to provide organisational and logistical coordination and support, as well as promotion and visibility, for the planned AI Workshop Series on Secure, Reliable and Trustworthy AI, hosted by Eindhoven University of Technology, KU Leuven, and RWTH Aachen University. The overarching objective is not only to strengthen regional,
