Evaluating the cognitive capabilities of LLMs

Lorenzo Pacchiardi

Postdoctoral researcher at the University of Cambridge

Artificial Intelligence (AI) systems (such as reinforcement-learning agents and Large Language Models, or LLMs) are typically evaluated by testing them on a benchmark and reporting an aggregated score. As benchmarks consist of instances that demand different capability levels to be completed, the aggregated score reveals little about the AI's cognitive capabilities. In previous work, we introduced a framework for inferring the levels of multiple capabilities of an AI system from its granular performance on a set of instances, and applied it to embodied agents. The framework relies on specifying a likelihood function that connects capabilities and instance demands to the observed per-instance performance; given this likelihood, the capability levels can be robustly estimated with Bayesian inference. The Bayesian nature of the framework also makes it possible to attach confidence estimates to the inferred capability levels, thus contributing to the trustworthiness of the evaluated AI systems. This project aims to extend the framework to LLMs. Doing so requires overcoming several technical challenges, such as determining which cognitive capabilities best characterise LLMs and building a robust pipeline for annotating instances with their cognitive demands. The envisioned output of the project is a map of the cognitive capabilities of state-of-the-art LLMs and of their robustness to prompt perturbations, which would allow practitioners to determine in which scenarios LLMs can be safely and reliably deployed.
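To make the inference step concrete, below is a minimal sketch of the kind of model the framework relies on. The specific choices are illustrative assumptions on my part, not the project's actual model: a single latent capability per dimension, a logistic likelihood linking the capability level and an annotated instance demand to the probability of success, a uniform prior, and a grid approximation of the posterior. The demand annotations and pass/fail outcomes are toy data.

```python
# Minimal sketch: Bayesian inference of a capability level from
# per-instance outcomes and annotated instance demands.
# Assumptions (illustrative, not from the project): logistic link,
# uniform prior, grid-approximated posterior, toy data.
import numpy as np

def success_prob(capability, demand, slope=1.0):
    """Logistic link: instances with higher demand are less likely to
    be solved by a system with a given capability level."""
    return 1.0 / (1.0 + np.exp(-slope * (capability - demand)))

def capability_posterior(demands, outcomes, grid):
    """Posterior over capability levels on a grid, given binary
    per-instance outcomes and annotated instance demands."""
    log_post = np.zeros_like(grid)  # uniform prior (constant log-density)
    for d, y in zip(demands, outcomes):
        p = success_prob(grid, d)
        log_post += y * np.log(p) + (1 - y) * np.log(1 - p)
    post = np.exp(log_post - log_post.max())  # stabilise before normalising
    return post / post.sum()

# Toy data: 8 instances annotated with demand levels, and the
# system's observed pass/fail pattern on them.
demands  = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
outcomes = np.array([1,   1,   1,   1,   0,   1,   0,   0  ])

grid = np.linspace(-2, 6, 801)  # candidate capability levels
post = capability_posterior(demands, outcomes, grid)

mean = np.sum(grid * post)
cdf = np.cumsum(post)  # 95% credible interval from the posterior CDF
lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
print(f"capability ~ {mean:.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")
```

Running the sketch prints a posterior mean capability together with a 95% credible interval, i.e. the kind of confidence estimate the abstract refers to.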

Keywords: Capability-oriented evaluation, large language models, cognitive science, Bayesian inference, AI safety.

Scientific area: AI Evaluation, AI Safety and Robustness.

Bio: I am a Research Associate at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where I develop a framework for evaluating the cognitive capabilities of Large Language Models together with Prof José Hernández-Orallo and Dr Lucy Cheke. I previously worked on detecting lying in large language models with Dr Owain Evans, and on technical standards for AI under the EU AI Act at the Future of Life Institute. I am deeply interested in AI policy (particularly at the EU level). I obtained a PhD in Statistics and Machine Learning at the University of Oxford, during which I worked on Bayesian simulation-based inference, generative models and probabilistic forecasting (with applications to meteorology), supervised by Prof Ritabrata Dutta (University of Warwick) and Prof Geoff Nicholls (University of Oxford). Before my PhD studies, I obtained a Bachelor's degree in Physical Engineering from Politecnico di Torino (Italy) and an MSc in Physics of Complex Systems from Politecnico di Torino and Université Paris-Sud (France). I carried out my MSc thesis at LightOn, a machine learning startup in Paris.

Visiting period: 22nd April – 6th May 2024