Unifying Paradigms

The question to be answered in this research theme is: how can learning, reasoning and optimisation be integrated? The AI community has studied the abilities to learn, to reason and to optimise largely independently of one another, and has been divided into different schools of thought or paradigms. This divide is often described in confrontational terms: System 1 versus System 2, subsymbolic versus symbolic, learning versus reasoning, knowledge-based versus data-driven, model-based versus model-free, logical versus neural, low-level versus high-level, and so on. Whichever terminology is used, the terms refer to very similar distinctions, and the state of the art is that each paradigm can be used to solve certain tasks but not others. For instance, symbolic AI and the logic paradigm have concentrated on developing sophisticated and accountable reasoning methods, the subsymbolic or neural approaches to AI have concentrated on developing powerful architectures for learning and perception, and constraint and mathematical programming have been used for combinatorial optimisation. While deep learning provides solutions to many low-level perception tasks, it cannot really be used for complex reasoning; for logical and symbolic methods, it is just the other way around. Symbolic AI may be more explainable, interpretable and verifiable, but it is less flexible and adaptable.

Trustworthy AI cannot rely on a single paradigm: it must have all of the above-mentioned abilities, that is, it must be able to learn, to reason and to optimise. The quest for integrated learning, reasoning and optimisation abilities therefore boils down to computationally and mathematically integrating the different AI paradigms.

Research Challenges

  • Integrating or unifying different representations, i.e. (subsets of) logic, probability, constraints and neural models, for learning and reasoning, and scaling up inference and learning algorithms for such representations. Example: develop a neuro-symbolic (NeSy) system that performs both logical inference and deep learning as well as pure logic and pure deep-learning systems do.
  • Explainability and trustworthiness of integrated representations. Provide explanation methods that use domain knowledge to expose the rationale, context and interpretation of a model, rather than relying on the transparency of its internal computational mechanism.
  • Learning for combinatorial optimisation and decision making. Example: learn and continuously adapt a model that schedules tasks in a data centre so as to minimise electricity consumption.
  • Showcase applications of domain knowledge in learning, combining learning with complex reasoning (over knowledge graphs and ontologies). Example: improve on sentence-entailment prediction, where purely neural approaches have recently been shown to fall short.
  • Showcase applications in perception, spatial reasoning, robotics and vision, combining high-level reasoning with low-level perception. Example: use common-sense knowledge to decide which item in an image is a real object and which is itself a picture.
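The first challenge above, a NeSy system that couples deep learning with logical inference, can be illustrated with a minimal sketch in the style of the well-known MNIST-addition task: a (hypothetical) neural network outputs a distribution over digit labels for each image, and the probability of a logical constraint over those labels is computed by summing the weights of the models that satisfy it (weighted model counting). All probability values below are invented for illustration.

```python
from itertools import product

# Hypothetical neural outputs: P(digit = d) for two input images.
p_img1 = {0: 0.1, 3: 0.7, 5: 0.2}
p_img2 = {2: 0.1, 5: 0.8, 8: 0.1}

def prob_sum_equals(p_a, p_b, target):
    """Weighted model counting over the constraint a + b == target:
    sum the joint probability of every digit pair that satisfies it."""
    return sum(pa * pb
               for (a, pa), (b, pb) in product(p_a.items(), p_b.items())
               if a + b == target)

# Models satisfying a + b == 8: (0, 8) and (3, 5).
print(prob_sum_equals(p_img1, p_img2, 8))  # 0.1*0.1 + 0.7*0.8 = 0.57
```

In a full NeSy system this probability would serve as a differentiable training signal for the perception network; the enumeration here stands in for the scalable inference algorithms the challenge calls for.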
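The data-centre example under "learning for combinatorial optimisation" is often cast as predict-then-optimise: a learned model forecasts uncertain parameters (here, hourly electricity prices), and an optimiser schedules against the forecast. A minimal greedy sketch, with all prices, capacities and task counts invented for illustration:

```python
# Hypothetical predicted electricity prices (EUR/MWh) for five hourly slots,
# e.g. produced by a continuously re-trained forecasting model.
predicted_price = [30.0, 22.0, 18.0, 25.0, 40.0]
slot_capacity = 2   # assumed: tasks each hour can host
num_tasks = 4       # assumed: unit-length tasks to place

def schedule(prices, capacity, tasks):
    """Greedy assignment: fill the cheapest predicted slots first."""
    cheapest_first = sorted(range(len(prices)), key=lambda h: prices[h])
    assignment, remaining = [], tasks
    for hour in cheapest_first:
        take = min(capacity, remaining)
        assignment += [hour] * take
        remaining -= take
        if remaining == 0:
            break
    return assignment

plan = schedule(predicted_price, slot_capacity, num_tasks)
cost = sum(predicted_price[h] for h in plan)
print(plan, cost)  # hours 2 and 1 are cheapest: cost 2*18 + 2*22 = 80.0
```

The research challenge is precisely what this sketch glosses over: learning forecasts whose errors matter least for the downstream decision, and adapting them continuously as conditions change.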

Contact: Luc De Raedt (luc.deraedt@cs.kuleuven.be)