How we trust robots: Attribution of Intentionality, Anthropomorphism and the Uncanny Valley Effect

Martina Bacaro

PhD student at the University of Bologna – Alma Mater Studiorum

Abstract: Interactions between humans and robots are increasing in both specialized and everyday scenarios. Trustworthiness is acknowledged as a key factor for successful engagement between humans and robots. For humans to understand and rely on robots’ actions and intentions, they need to engage with robots in interaction and to understand them as autonomous agents. In this respect, how humans attribute intentionality – i.e., the characteristic of certain mental phenomena, such as beliefs and desires, of being “about” something as their object – to robots is indicative of the human predisposition to interact with robots as individuals rather than as mere moving objects. However, the state of the art in this research field of the cognitive sciences and Human-Robot Interaction (HRI) studies is far from unitary. In many cases, intentionality is conceived as an explicit mental stance towards robots, while in other studies it is addressed and tested as a primarily bodily phenomenon. Moreover, questions about the human likeness of robots and anthropomorphism must be considered to complete the research picture on intentionality attribution to robots. In other words, how to conceive of, and therefore experimentally investigate, intentionality towards robots remains an open challenge. This project aims to contribute to addressing this challenge through a research visit by Martina Bacaro, a PhD candidate at the University of Bologna (UNIBO, TAILOR lab), to the host lab headed by Prof. Tom Ziemke (Department of Computer and Information Science, Linköping University (LiU), TAILOR lab). The main objectives of this research visit are i) to enrich the state of the art on the attribution of intentionality towards robots on the basis of contemporary embodied cognitive sciences, and to integrate it with studies on trustworthiness towards robots; ii) to look at edge phenomena in HRI, i.e. the Uncanny Valley Effect (UVE), for which intentionality attribution and anthropomorphism play a crucial role; iii) to design an experiment in which both explicit and implicit measures of intentionality attribution are tested.

Keywords: Human Behaviour Prediction, Cognitive Modelling, Robot Behaviour, Intentionality, Cognitive Sciences, Philosophy of Cognitive Sciences, Uncanny Valley Effect
Scientific area: Human-Robot Interaction

Martina Bacaro is a second-year PhD student at the University of Bologna in Philosophy, Science, Cognition and Semiotics. Her research reconceives the relationship between humans and robots within the framework of enactive cognitive science, with the aim of informing robotic engineering with the latest theories of how mind and cognition work. In particular, she investigates the Uncanny Valley Effect to understand what happens in humans when they face humanoid robots, and to help model robotic cognitive architectures capable of dealing with such edge phenomena.