February 2023

Making big benchmarks more trustworthy: Identifying the capabilities and limitations of language models by improving the BIG-Bench benchmark

Ryan Burnell, Postdoctoral Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK

AI systems are becoming an integral part of every aspect of modern life. To ensure public trust in these systems, we need tools that can be used to evaluate their capabilities and weaknesses. But these tools are struggling […]


Learning Neural Algebras

Pedro Zuidberg Dos Martires, Postdoctoral Researcher at Örebro University (Sweden)

Abstract algebra provides a formalism to study sets and how the elements of these sets relate to each other by defining relations between set elements. Abstract algebraic structures are abundantly present in artificial intelligence. For instance, Boolean algebra constitutes the bedrock of symbolic AI. Interestingly, […]
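To make the notion of an algebraic structure concrete, here is a minimal illustrative sketch (not code from the post): a carrier set together with operations defined on it, instantiated as the two-element Boolean algebra, with a couple of its axioms verified by brute force. The Python encoding and variable names are assumptions made for this example.

```python
# Illustrative sketch: an algebraic structure as a carrier set plus operations.
# Here: the two-element Boolean algebra, with two axioms checked exhaustively.
from itertools import product

carrier = {0, 1}                  # carrier set: {false, true}
meet = lambda a, b: a & b         # conjunction (AND)
join = lambda a, b: a | b         # disjunction (OR)
comp = lambda a: 1 - a            # complement (NOT)

# Distributivity: a AND (b OR c) == (a AND b) OR (a AND c)
assert all(meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
           for a, b, c in product(carrier, repeat=3))

# De Morgan: NOT (a AND b) == (NOT a) OR (NOT b)
assert all(comp(meet(a, b)) == join(comp(a), comp(b))
           for a, b in product(carrier, repeat=2))

print("Both axioms hold on the two-element Boolean algebra.")
```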


Learning trustworthy models from positive and unlabelled data

Pawel Teisseyre, Assistant Professor at the Polish Academy of Sciences

The goal of the research stay is to explore learning classification models using positive-unlabelled (PU) data. In PU learning, it is assumed that only some observations in the training data are assigned a label, which is always positive, whereas the remaining observations are unlabelled and can be either positive or negative […]
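As a concrete illustration of the PU setting described above, the sketch below (not the author's method; it assumes NumPy and scikit-learn are available, and the synthetic dataset and label fraction are made up) constructs PU data in which only a fraction of the true positives are labelled, then fits the naive baseline that treats every unlabelled example as negative, which is exactly the biased approach that PU learning aims to improve on.

```python
# Illustrative sketch of positive-unlabelled (PU) data (not the method from
# the post; dataset, label fraction and model choice are assumptions).
# s = 1 marks the observations that received a (positive) label;
# s = 0 observations are unlabelled and may be positive or negative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)  # y = true (hidden) labels

label_frac = 0.3                                   # share of positives that get labelled
pos_idx = np.flatnonzero(y == 1)
s = np.zeros_like(y)
s[rng.choice(pos_idx, size=int(label_frac * len(pos_idx)), replace=False)] = 1

# Naive baseline: pretend unlabelled == negative and fit an ordinary classifier.
# Its predictions are biased towards the negative class, which is the problem
# PU learning methods are designed to correct.
naive = LogisticRegression(max_iter=1000).fit(X, s)
print("Naive baseline accuracy against the true labels:",
      round(float((naive.predict(X) == y).mean()), 3))
```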
