TAILOR Selected papers: October 2024

Every month, we acknowledge a selection of valuable TAILOR papers, chosen by TAILOR principal investigator Fredrik Heintz from among the papers published by scientists belonging to our network.
The list gathers contributions from different TAILOR partners, each offering insights on a different topic related to Trustworthy AI.
Stay tuned for more insights and groundbreaking research from our diverse community!

Change detection and adaptation in multi-target regression on data streams

B. Stevanoski, A. Kostovska, P. Panov, and S. Džeroski

Machine Learning, 2024, doi: 10.1007/s10994-024-06621-z.

Abstract: An essential characteristic of data streams is the possibility of concept drift, i.e., a change in the distribution of the data in the stream over time. Data stream mining methods thus need the capability to detect and adapt to such changes. While methods for multi-target prediction on data streams have recently appeared, they have largely remained without this capability. In this paper, we propose novel methods for change detection and adaptation in the context of incremental online learning of decision trees for multi-target regression. One of the approaches we propose is ensemble based, while the other uses the Page–Hinkley test. We perform an extensive evaluation of the proposed methods on real-world and artificial data streams and show their effectiveness. We also demonstrate their utility in a case study from spacecraft operations, where cosmic events can cause change and demand appropriate and timely repositioning of the spacecraft.
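The Page–Hinkley test mentioned in the abstract is a classic sequential change detector: it accumulates how far each observation drifts above the stream's running mean and raises an alarm when that cumulative drift exceeds a threshold. As a rough illustration, here is a minimal generic sketch in Python (not the paper's multi-target method; the delta and threshold values are invented for the demo):

```python
import random


class PageHinkley:
    """Minimal Page-Hinkley detector for upward drift in a stream of
    values, e.g. a model's per-example prediction error."""

    def __init__(self, delta=0.01, threshold=5.0):
        self.delta = delta          # tolerance for small fluctuations
        self.threshold = threshold  # alarm level (often called lambda)
        self.n = 0                  # observations seen so far
        self.mean = 0.0             # running mean of the stream
        self.cum = 0.0              # cumulative deviation m_t
        self.min_cum = 0.0          # running minimum M_t

    def update(self, x):
        """Feed one observation; return True if a change is detected."""
        self.n += 1
        self.mean += (x - self.mean) / self.n       # incremental mean
        self.cum += x - self.mean - self.delta      # update m_t
        self.min_cum = min(self.min_cum, self.cum)  # update M_t
        if self.cum - self.min_cum > self.threshold:
            # Reset so detection can restart after the model adapts.
            self.n, self.mean, self.cum, self.min_cum = 0, 0.0, 0.0, 0.0
            return True
        return False


# Toy stream whose error level jumps from ~0.1 to ~1.0 at index 200.
ph = PageHinkley()
stream = [random.gauss(0.1, 0.02) for _ in range(200)] + \
         [random.gauss(1.0, 0.02) for _ in range(200)]
for i, err in enumerate(stream):
    if ph.update(err):
        print(f"change detected at observation {i}")  # shortly after 200
```

In a streaming regressor, such a detector would typically monitor the model's error and trigger adaptation, e.g. replacing or restructuring the affected part of the model, once a change is signalled.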

Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction

Q. Delfosse, H. Shindo, D. S. Dhami, and K. Kersting

Advances in Neural Information Processing Systems, 2023, pp. 50838–50858.

Abstract: The limited priors required by neural networks make them the dominant choice for encoding and learning policies using reinforcement learning (RL). However, they are also black boxes, making it hard to understand the agent’s behaviour, especially when working at the image level. Therefore, neuro-symbolic RL aims at creating policies that are interpretable in the first place. Unfortunately, interpretability is not explainability. To achieve both, we introduce Neurally gUided Differentiable loGic policiEs (NUDGE). NUDGE exploits trained neural network-based agents to guide the search for candidate weighted logic rules, then uses differentiable logic to train the logic agents. Our experimental evaluation demonstrates that NUDGE agents can induce interpretable and explainable policies while outperforming purely neural ones and showing good flexibility to environments of different initial states and problem sizes.
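To give a flavour of what weighted logic rules trained with differentiable logic look like, here is a hypothetical toy sketch (not the NUDGE implementation; the predicates, rules, and weights below are invented for illustration). Soft predicate truth values, e.g. produced by a neural perception module, are combined with a product t-norm as a fuzzy conjunction, and learnable rule weights determine a softmax policy over actions; since every step is differentiable, gradients from an RL objective can flow back into the rule weights:

```python
import numpy as np

# Candidate rules: (action concluded by the rule, predicates in its body).
RULES = [
    ("flee",    ["close_to_enemy"]),
    ("attack",  ["close_to_enemy", "has_weapon"]),
    ("explore", ["no_enemy_visible"]),
]

def rule_value(body, facts):
    """Fuzzy conjunction (product t-norm) of the body's soft truth values."""
    v = 1.0
    for pred in body:
        v *= facts[pred]
    return v

def policy(facts, weights, temperature=1.0):
    """Softmax over actions; each action's score is the weighted sum of
    the values of the rules that conclude it."""
    actions = sorted({a for a, _ in RULES})
    scores = np.zeros(len(actions))
    for w, (action, body) in zip(weights, RULES):
        scores[actions.index(action)] += w * rule_value(body, facts)
    exp = np.exp(scores / temperature)
    return dict(zip(actions, exp / exp.sum()))

# Soft truth values for the current state, e.g. from a perception module.
facts = {"close_to_enemy": 0.9, "has_weapon": 0.1, "no_enemy_visible": 0.05}
weights = np.array([1.5, 0.8, 1.0])  # learnable rule weights

print(policy(facts, weights))  # action distribution, here favouring "flee"
```

The neural guidance in NUDGE concerns a step this sketch takes for granted: using a trained neural agent to decide which candidate rules are worth considering in the first place.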