Research Assistant at Systems Research Institute, Polish Academy of Sciences
The Semi-Supervised Fuzzy C-Means (SSFCMeans) model adapts an unsupervised fuzzy clustering algorithm to handle partial supervision in the form of categorical labels. A key challenge is to appropriately handle the impact of partial supervision (IPS) on the outcomes of the model. SSFCMeans controls this impact on its two outcomes, membership degrees and cluster prototypes, with a single hyperparameter α called the scaling factor. The existing descriptions of the role of α, and the guidelines on what value to set, were found to be vague: they interpreted the impact of partial supervision in a human-understandable way but lacked rigorous accuracy, and the resulting artificial intelligence systems were therefore not trustworthy. In my previous works I proposed to use the explainability framework (a popular approach to designing AI systems) and to deliver accurate explanations of the IPS on membership degrees. In this project we will provide full explainability of the SSFCMeans model by supplying the missing explanation of the impact of partial supervision on cluster prototypes. Since we are working on the very core problems of the semi-supervised fuzzy clustering field, our conclusions should have far-reaching significance: all models and procedures that reuse the SSFCMeans model or its mechanism of handling the IPS will benefit from an improved understanding of the impact of partial supervision. Finally, although explanations are deemed better than interpretations, the quality of explanations may vary. We will identify criteria for assessing this quality and, if we find it necessary, propose modifications of the core SSFCMeans model to obtain better explanations of the IPS.
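To make the role of the scaling factor α concrete, the following is a minimal NumPy sketch of a Pedrycz-style semi-supervised FCM iteration (fuzzifier m = 2), the classic formulation on which SSFCMeans is based. The function name, signatures, and initialization choices are illustrative assumptions, not the proposal's actual implementation; the point is that α appears in both outcome updates, weighting the supervision term in the prototype means and in the membership update, and that α = 0 recovers plain FCM.

```python
import numpy as np

def ssfcm(X, F, b, c, alpha=1.0, n_iter=100, tol=1e-6, seed=0):
    """Illustrative sketch of semi-supervised FCM with scaling factor alpha.

    X: (N, d) data matrix.
    F: (c, N) supervised membership matrix (columns of labeled points sum to 1).
    b: (N,) indicator vector, 1 for labeled points, 0 otherwise.
    alpha: scaling factor controlling the impact of partial supervision;
           alpha = 0 reduces the update to plain (unsupervised) FCM.
    """
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((c, N))
    U /= U.sum(axis=0)                          # random fuzzy partition
    for _ in range(n_iter):
        # Prototype update: supervision term, scaled by alpha, pulls
        # prototypes toward the labeled memberships.
        W = U**2 + alpha * (U - F * b)**2       # (c, N) weights
        V = (W @ X) / W.sum(axis=1, keepdims=True)
        # Squared distances of every point to every prototype.
        D2 = ((X[None, :, :] - V[:, None, :])**2).sum(axis=2) + 1e-12
        # Classic FCM memberships (m = 2): proportional to 1 / d_ik^2.
        fcm = (1.0 / D2) / (1.0 / D2).sum(axis=0)
        # Membership update: FCM term rescaled by alpha plus the
        # supervision term; columns still sum to 1.
        U_new = (fcm * (1.0 + alpha * (1.0 - b * F.sum(axis=0)))
                 + alpha * F * b) / (1.0 + alpha)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, V
```

In this formulation the single hyperparameter α simultaneously scales the IPS on both outcomes, which is precisely why a rigorous, outcome-specific explanation of its effect, rather than an informal interpretation, is the subject of this proposal.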
Keywords: Semi-Supervised Fuzzy C-Means, Explainability, Semi-Supervised Learning, Partial Supervision
Scientific area: Machine learning/Computational intelligence