Connectivity Fund

Explainable Semi-Supervised Fuzzy C-Means

Kamil Kmita, Research Assistant at the Systems Research Institute, Polish Academy of Sciences. The Semi-Supervised Fuzzy C-Means (SSFCMeans) model adapts an unsupervised fuzzy clustering algorithm to handle partial supervision in the form of categorical labels. One of the key challenges is to appropriately handle the impact of partial supervision (IPS) on the model's outcomes. SSFCMeans […]
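As general background for this excerpt, the unsupervised fuzzy c-means algorithm that SSFCMeans builds on can be sketched as follows. This is a minimal sketch of plain FCM, not Kmita's partially supervised variant; the function name and default parameters are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain (unsupervised) fuzzy c-means.

    SSFCMeans extends this objective with an extra term that pulls labeled
    points toward their known class; that term is omitted here.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix U (n x c); each row sums to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Cluster prototypes: membership-weighted means of the data.
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Squared distances from every point to every prototype.
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-1/(m-1)), normalized per point.
        inv = d2 ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V
```

The fuzzifier `m > 1` controls how soft the memberships are; partial supervision, as in SSFCMeans, would additionally constrain the rows of `U` for the labeled points.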


The First Workshop on Hybrid Human-Machine Learning and Decision Making

Andrea Passerini, Associate Professor at the University of Trento. In the past, machine learning and decision making were treated as independent research areas. However, with the increasing emphasis on human-centered AI, there has been growing interest in combining the two. Researchers have explored approaches that aim to complement human decision-making rather than replace it, […]


AI for People: International Conference “AI for People: Democratizing AI”

AI for People, international nonprofit organization. The International Conference “AI for People” was born out of the idea of shaping Artificial Intelligence technology around human and societal needs. While Artificial Intelligence (AI) can be a beneficial tool, its development and deployment impact society and the environment in ways that need to be thoroughly addressed […]


Robust and safe reinforcement learning against uncertainties in human feedback

Taku Yamagata, Senior Research Associate at the University of Bristol. Abstract: One of the promising approaches to improving the robustness and safety of reinforcement learning (RL) is collecting human feedback and thereby incorporating prior knowledge of the target environment. However, human feedback can be inconsistent and infrequent. In this proposed research visit, we explore […]


Holistic Evaluation of AI-assisted Biomedicine: A Case study on Interactive Cell Segmentation

Wout Schellaert, PhD student at the Universitat Politècnica de València. Abstract: Rapid advances in artificial intelligence have resulted in a correspondingly growing prominence of AI-based tools in day-to-day biomedicine workflows. As a high-risk domain with impact on human health, it is vitally important that any AI systems in use are reliable, safe, and […]


1st ContinualAI Unconference

Vincenzo Lo Monaco, Assistant Professor and ContinualAI President. Abstract: Organized by the non-profit research organization ContinualAI, the conference aims to speed up the long-desired inclusive and sustainable progress of our community with an open-access, multi-timezone, 24-hour event that brings together ideas at the intersection of machine learning, computational neuroscience, robotics, and more! The […]


CLAIRE | Rising Research Network: AI Research and Mental Well-Being Workshop

Nicolò Brandizzi, PhD student at Sapienza University of Rome. Abstract: The rapid advancements in artificial intelligence (AI) research and applications have emphasized the need for collaboration between academia and industry, especially among new AI researchers. Such collaborations drive innovation, translate research into practical solutions, and address sector-specific challenges. The CLAIRE Rising Researcher Network (R2Net) aims […]


Fostering Appropriate Trust in Predictive Policing AI Systems

Siddharth Mehrotra, PhD student at TU Delft. The use of AI in law enforcement, particularly in predictive policing, raises concerns about bias, discrimination, and infringement of civil liberties. Building appropriate trust in these systems is crucial for addressing these concerns and ensuring ethical use. In this research proposal, we aim to investigate how explanations generated […]


Meta-learning for Continual Learning

Anna Vettoruzzo, PhD student at Halmstad University. Continual learning (CL) refers to the ability to learn continually over time, accommodating new knowledge while retaining previously learned experiences. While this ability is inherent in human learning, current machine learning methods struggle with it, as they are highly prone to forgetting past experiences […]
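The forgetting problem this excerpt describes is commonly mitigated by rehearsal, i.e. retraining on a small stored sample of past data. A minimal, generic sketch of such a buffer follows; the class name and the reservoir-sampling choice are illustrative and not taken from Vettoruzzo's work.

```python
import random

class RehearsalBuffer:
    """Toy rehearsal buffer for continual learning.

    Reservoir sampling keeps an approximately uniform sample over the
    whole stream, even when far more examples are seen than stored.
    """
    def __init__(self, capacity=1000):
        self.buffer = []
        self.capacity = capacity
        self.seen = 0  # total examples observed so far

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a stored example with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Draw a mini-batch of past examples to mix into the current task's training.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

During training on a new task, batches drawn from `sample` would be interleaved with the new data so the model keeps seeing old experiences.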


Deep reinforcement learning for predictive monitoring under LTLf constraints

Efrén Rama Maneiro, PhD student at the University of Santiago de Compostela. Predictive monitoring is a subfield of process mining that focuses on predicting how a running process will unfold. Deep learning techniques have become popular in this field due to their improved performance relative to classic machine learning models. However, most of these approaches […]
