Fostering Appropriate Trust in Predictive Policing AI Systems

Siddharth Mehrotra

PhD student at TU Delft

The use of AI in law enforcement, particularly in predictive policing, raises concerns about bias, discrimination, and infringement of civil liberties. Building appropriate trust in these systems is crucial to address these concerns and ensure ethical use. In this research proposal, we aim to investigate how explanations generated by predictive policing systems, combined with trust calibration cues, can be used to establish appropriate trust in these systems. We will conduct a user study to measure the impact of explanations and trust calibration cues on building appropriate trust. The results of this study can contribute to the development of more reliable and trustworthy AI-based predictive policing systems. The proposed project addresses two of TAILOR's research objectives (trustworthy AI with learning and reasoning for multi-agent interactions, and human–AI collaboration) through Trustworthy AI – Work Package 3, which focuses on explainable AI systems with safety and robustness. Funding is requested for a nine-week visit by Siddharth Mehrotra, a final-year PhD candidate at TAILOR partner TU Delft, to the non-TAILOR University of Hamburg, invited by Prof. Dr. Eva Bittner.

More information about Siddharth: