Tractable and Explainable Probabilistic AI

Lennert De Smet

PhD student at KU Leuven

Transparency and technical robustness are two fundamental requirements for AI systems under the European Union AI Act, especially in higher-risk domains. Transparency is intricately related to the notion of explainability, the ability of an AI system to accurately describe the reasoning behind its predictions. Through such explanations, the system itself becomes transparent and amenable to human oversight. Robustness, in turn, is linked to uncertainty, since being robust can be rephrased as dealing appropriately with situations the AI system knows little about. Both desiderata are addressed by probabilistic reasoning, a family of AI methods that combine probabilistic models with logic-based reasoning. Unfortunately, probabilistic reasoning quickly loses its potential because of its inherent intractability. This intractability is the central topic of this proposal, guided by the exploration of tractable probabilistic models such as probabilistic circuits. Such models trade off a degree of expressivity for efficient and exact computation and learning, further augmenting their explainability and robustness. Concretely, we study two fundamental topics: (1) the integration of tractable probabilistic models with state-of-the-art reasoning systems, and (2) the use of tractable probabilistic models for tractable probabilistic reasoning. By focusing on tractability, we expect to obtain a paradigm for probabilistic reasoning that is applicable to real-world problems while still satisfying transparency and robustness requirements.
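To make the tractability claim concrete, below is a minimal sketch of a smooth and decomposable probabilistic circuit in Python. It is not the proposal's actual implementation nor any particular library's API; the class names (Leaf, Product, Sum) and the toy distribution over A and B are purely illustrative.

```python
# Minimal probabilistic-circuit sketch (illustrative names, toy distribution).
from dataclasses import dataclass


@dataclass
class Leaf:
    var: str     # variable name
    value: bool  # indicator leaf: True stands for var = 1, False for var = 0

    def eval(self, evidence):
        if self.var not in evidence:  # unobserved: summing the indicator out gives 1
            return 1.0
        return 1.0 if evidence[self.var] == self.value else 0.0


@dataclass
class Product:
    children: list  # decomposability: children have disjoint variable scopes

    def eval(self, evidence):
        result = 1.0
        for child in self.children:
            result *= child.eval(evidence)
        return result


@dataclass
class Sum:
    weights: list   # non-negative mixture weights summing to one
    children: list  # smoothness: children range over the same variables

    def eval(self, evidence):
        return sum(w * c.eval(evidence)
                   for w, c in zip(self.weights, self.children))


# P(A, B) as a mixture of two fully factorised components.
circuit = Sum(
    weights=[0.4, 0.6],
    children=[
        Product([Leaf("A", True), Leaf("B", True)]),
        Product([Leaf("A", False), Leaf("B", True)]),
    ],
)

print(circuit.eval({"A": True, "B": True}))  # joint query P(A=1, B=1) = 0.4
print(circuit.eval({"B": True}))             # marginal P(B=1) = 1.0
print(circuit.eval({}))                      # normalisation check: 1.0
```

Because marginalisation reduces to setting unobserved indicator leaves to 1, joint and marginal queries alike are answered in a single bottom-up pass that is linear in circuit size; this guaranteed polynomial-time inference is the tractability the abstract refers to.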

Keywords: probability, statistics, tractable inference, reasoning, explainability, robustness

Scientific area: Neurosymbolic Artificial Intelligence

Bio: I am Lennert De Smet, a PhD student in the DTAI research group at KU Leuven under the supervision of Luc De Raedt. Having studied mathematics, I am passionate about the formal and trustworthy modelling of intelligence at the intersection of statistics, logic and optimisation. My research interests lie mainly in probabilistic neurosymbolic AI (NeSy), with a particular focus on scaling deep probabilistic reasoning as a foundational framework for trustworthy artificial intelligence.

Visiting period: from the 1st of June until the 5th of August at APRIL, School of Informatics, University of Edinburgh