How we trust robots: Attribution of Intentionality, Anthropomorphism and Uncanny Valley Effect

Martina Bacaro, PhD student at the University of Bologna – Alma Mater Studiorum. Abstract: Interactions between humans and robots are increasing in both specialist and everyday scenarios. Trustworthiness is acknowledged as a key factor for successful engagement between humans and robots. For humans to understand and rely on robots’ actions and intentions, they need to […]

New Projects Funded By Connectivity Fund

For this call too, the Connectivity Fund received many applications. We are glad to announce the projects funded in this session. To have a look at the other projects granted by the Connectivity Fund, check this webpage: https://tailor-network.eu/connectivity-fund/funded-projects/ The Connectivity Fund call opens every 4 months. The next deadline is on the 15th of March 2023, for […]

Towards Prototype-Based Explainable Machine Learning for Flood Detection

Ivica Obadic, Chair of Data Science in Earth Observation at the Technical University of Munich. Increasingly available high-resolution satellite data has proven to be a valuable resource for tackling pressing issues related to climate change and urbanization, such as flood detection. In recent years, deep learning models based on satellite data have been shown to be […]

Samples Selection with Group Metric for Experience Replay in Continual Learning

Andrii Krutsylo, PhD student at the Institute of Computer Science of the Polish Academy of Sciences. The study aims to reduce the decline in performance of a model trained incrementally on non-i.i.d. data, using replay-based strategies to retain knowledge of previous tasks. To address the limitations of existing variants, which select samples only on the basis of individual properties, […]
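As context for the replay-based strategies mentioned above, here is a minimal, generic Python sketch of an experience replay buffer. It only illustrates the basic mechanism (a bounded memory of past samples, filled by reservoir sampling and mixed into each new batch); it does not implement the group metric studied in this project, and the class name, sizes, and toy data are all hypothetical.

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory filled with reservoir sampling, so every sample
    seen so far has an equal chance of being retained."""

    def __init__(self, capacity=500):
        self.capacity = capacity
        self.buffer = []   # (x, y) pairs kept from earlier tasks
        self.seen = 0      # total number of samples offered so far

    def add(self, x, y):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = (x, y)

    def sample(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

# Toy usage: two "tasks" arriving sequentially; each training batch is the
# current data plus an equally sized draw of replayed samples.
buffer = ReplayBuffer(capacity=100)
tasks = [[(x, 0) for x in range(200)], [(x, 1) for x in range(200, 400)]]
for task in tasks:
    for start in range(0, len(task), 32):
        batch = task[start:start + 32]
        replayed = buffer.sample(len(batch))
        mixed_batch = batch + replayed   # fed to the (omitted) training step
        for x, y in batch:
            buffer.add(x, y)
print(f"buffer holds {len(buffer.buffer)} samples from both tasks")
```

A selection strategy like the group metric investigated here would replace the uniform reservoir rule with a criterion computed over sets of stored samples rather than over individual ones.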

Making big benchmarks more trustworthy: Identifying the capabilities and limitations of language models by improving the BIG-Bench benchmark

Ryan Burnell, Postdoctoral Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK. AI systems are becoming an integral part of every aspect of modern life. To ensure public trust in these systems, we need tools that can be used to evaluate their capabilities and weaknesses. But these tools are struggling […]

Learning Neural Algebras

Pedro Zuidberg Dos Martires, Postdoctoral Researcher at Örebro University (Sweden). Abstract algebra provides a formalism for studying sets and how their elements relate to each other, by defining relations between those elements. Abstract algebraic structures are abundantly present in artificial intelligence; for instance, Boolean algebra constitutes the bedrock of symbolic AI. Interestingly, […]
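For readers unfamiliar with the algebraic framing, the following tiny Python sketch spells out the two-element Boolean algebra mentioned above and checks one of its laws. It is purely didactic and is not the neural-algebra construction developed in the project.

```python
# The two-element Boolean algebra ({0, 1}, AND, OR, NOT) as a concrete
# example of an abstract algebraic structure.
from itertools import product

carrier = (0, 1)

def meet(a, b): return a & b   # conjunction (AND)
def join(a, b): return a | b   # disjunction (OR)
def comp(a): return 1 - a      # complement (NOT)

# Check one of the defining laws, De Morgan: NOT(a AND b) == NOT(a) OR NOT(b).
assert all(comp(meet(a, b)) == join(comp(a), comp(b))
           for a, b in product(carrier, repeat=2))
print("De Morgan's law holds on the two-element Boolean algebra.")
```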

Learning trustworthy models from positive and unlabelled data

Pawel Teisseyre, Assistant Professor at the Polish Academy of Sciences. The goal of the research stay is to explore learning classification models from positive-unlabelled (PU) data. In PU learning, it is assumed that only some observations in the training data are assigned a label, which is always positive, whereas the remaining observations are unlabelled and can be either […]
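To make the PU setting concrete, here is a minimal Python sketch that simulates positive-unlabelled data from a fully labelled toy dataset and applies the classic Elkan–Noto correction as a baseline. This is not the method developed during the research stay; the dataset, the label frequency, and all parameter choices are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fully labelled toy data (y = 1 positive, y = 0 negative), used only to simulate PU data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# PU assumption: a random fraction of the positives receives the label s = 1;
# all remaining observations (positives and negatives alike) stay unlabelled (s = 0).
label_frequency = 0.3  # hypothetical P(s=1 | y=1)
s = np.where((y == 1) & (rng.random(len(y)) < label_frequency), 1, 0)

X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(X, s, y, random_state=0)

# "Non-traditional" classifier: predict labelled vs. unlabelled.
clf = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)

# Elkan-Noto baseline: estimate c = P(s=1 | y=1) on the labelled examples,
# then rescale P(s=1 | x) to approximate P(y=1 | x).
c_hat = clf.predict_proba(X_tr[s_tr == 1])[:, 1].mean()
p_y = clf.predict_proba(X_te)[:, 1] / c_hat

print("estimated label frequency c:", round(c_hat, 3))
print("accuracy on the hidden true labels:", ((p_y > 0.5) == y_te).mean())
```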
