Martina Bacaro

IJCAI2023 Distinguished Paper Award to TAILOR Scientists from KU Leuven

Congratulations to Wen-Chi Yang, Gavin Rens, Giuseppe Marra and Luc De Raedt on winning the IJCAI 2023 Distinguished Paper Award! The IJCAI Distinguished Paper Awards recognise some of the best papers presented at the conference each year. The winners were selected from among more than 4500 papers by the associate programme committee chairs, the programme and general chairs, and the president of EurAI. […]


The Joint Strategic Research Agenda (SRA): a collaborative effort for the Future of AI

The EU’s six Networks of AI Excellence Centres (NoEs) are delivering a Joint Strategic Research Agenda (SRA). The European Union’s aspirations for AI, Data and Robotics (ADR) that are “made in Europe” demand an ambitious approach to advancing European AI research and development. The EU’s six AI Networks of Excellence (NoEs) – AI4Media, ELISE, ELSA, euROBIN, HumanE-AI-Net, and


Improving inverse abstraction based neural network verification using automated machine learning techniques

Matthias König, PhD at Leiden University. Abstract: This project seeks to advance the state of the art in formal neural network verification. Formal neural network verification methods check whether a trained neural network (for example, an image classifier) satisfies certain properties or guarantees regarding its behaviour, such as correctness, robustness, or safety, under various inputs


Towards Stable and Robust Learning with Limited Labelled Data: Investigating the Impact of Data Choice

Branislav Pecher, PhD at Kempelen Institute of Intelligent Technologies, a member of Slovak.AI. Abstract: Learning with limited labelled data, such as meta-learning, transfer learning or in-context learning, aims to train a model effectively using only a small number of labelled samples. However, there is still limited understanding of the required settings or characteristics for these approaches


BEYOND CHATGPT: Europe needs to act now to ensure technological sovereignty in Next-Generation AI – Call for Action following the EU Parliament Meeting

On May 25th, 2023, TAILOR contributed to the event Beyond ChatGPT: How can Europe get in front of the pack on Generative AI Models?, organised by a broad consortium of European projects and institutions: the HumanE-AI-Net European Network of Centres of Excellence in Artificial Intelligence, the International Research Centre on Artificial Intelligence (IRCAI) under the


WP4 workshop at NeSy2023 conference in Siena, 3-5 July 2023

The NeSy2023 conference (https://sites.google.com/view/nesy2023/home?authuser=0) will host a TAILOR WP4 workshop on July 3rd, 11:00-13:00, on “Benchmarks for Neural-Symbolic AI”. The workshop will be held in hybrid format, but in-person participation is strongly encouraged, especially for NeSy researchers and PhD students. For more information, we recommend visiting the webpage https://sailab.diism.unisi.it/tailor-wp4-workshop-at-nesy/. Abstract The study of Neural-Symbolic (NeSy) approaches has


TAILOR Initiates Roadmapping Activities to Ensure Trustworthy AI Systems

The TAILOR project has continued its roadmapping activities by conducting a thematic workshop centred on crucial questions regarding the trustworthiness of AI. Key areas of focus during the workshop included developing methods for measuring and quantifying Trustworthy AI (TAI), generating trust through certification, and identifying the mentoring and training required to enhance trustworthiness. A key point


Fostering Appropriate Trust in Predictive Policing AI Systems

Siddharth Mehrotra, PhD student at TU Delft. The use of AI in law enforcement, particularly in predictive policing, raises concerns about bias, discrimination, and the infringement of civil liberties. Building appropriate trust in these systems is crucial to addressing these concerns and ensuring ethical use. In this research proposal, we aim to investigate how explanations generated
