Reconciling AI explanations with human expectations towards trustworthy AI

Jeff Clark

Research Fellow at the University of Bristol

With the widespread deployment of AI systems, it becomes increasingly important that users are equipped to scrutinise these models and their outputs. This is particularly true for applications in high-stakes domains such as healthcare. We propose to conduct research in the context of explainable AI, aiming to build trustworthiness by reconciling explanations with human expectations and human decision-making processes. The host lab will be the Machine Learning Group at the University of Tromsø, Norway, and the visit is supported by Prof Michael Kampffmeyer, a leader in explainable AI and in AI for healthcare applications such as medical imaging. The visit will harness work previously developed independently by the visitor and the host group.

Keywords: Explainable AI, trustworthiness, healthcare, robustness, decision-making, safety

Scientific area: Artificial intelligence; AI for social good

Bio: Jeff Clark is a postdoctoral research fellow at the University of Bristol. His motivation is to apply technology for social good. At Bristol he works on a variety of healthcare and environmental projects, with a focus on explainability and trustworthiness. Prior to joining Bristol, Jeff developed technology at the University of Bath to detect diseases from CT scans, working alongside radiologists and physicians at Royal United Hospitals Bath NHS Trust. To enable deployment of the technology, a spin-out company has since been formed (https://ingeniumai.com/). Jeff obtained his PhD in medical engineering from Imperial College London in 2020.

Visiting period: 9th June – 21st July 2024 at Machine Learning Group, UiT The Arctic University of Norway