Standardisation to Allow AI Access to Critical Services

Recent developments in AI show great potential for the future of our society. To fulfil that promise, however, we must be able to apply AI to our most constrained and critical services. And to do so, we must make sure that these systems are truly aligned with our goals and values, and that we can standardise technical requirements for safety, robustness and resilience against attacks.

When applying new technology to critical applications such as healthcare, the power grid or climate prediction models, we need to understand how the AI system generates its output. We also need to know which data it was trained on, so that we are not placing our trust in a black-box model. Championing these efforts within the TAILOR Network are André Meyer-Vitali and Chokri Mraidha, who are creating a standardised framework for evaluating the models and outputs of AI systems.

The standardisation efforts are orchestrated by CEN and CENELEC, two European standardisation organisations mandated by the European Commission to develop standards for AI trustworthiness in support of the AI Act.

The EU AI Act was introduced to ensure, through legislation, that models deployed in the EU are aligned with our goals and values. However, how to apply the legislation, which requirements need to be met, and how they apply to different types of models is still up for debate. There are a few different approaches to consider:

– Using neurosymbolic models would provide easier validation, better transparency and accountability, whereas causal models allow us to determine why a model reached a certain output. Just relying on bigger datasets and compute scaling seems too uncertain and unreliable, says André Meyer-Vitali, detailing some of the differences and trade-offs between different types of models.

The recent discussions and progress have created a lot of hype, but also awareness of the issues with trust in AI systems. They have, however, also split the debate into two camps, marked by either exaggerated enthusiasm or exaggerated fear. Neither is good.

– We need objective measures, and standardisation is vital to build a collective understanding of AI systems for certification. We then need to sort out what this means in a legal sense, as terms such as transparency, privacy, fairness, robustness and accountability need to be specified and measured, explains Chokri Mraidha, outlining the multi-step process of getting to trustworthy AI.
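By way of illustration only, and not part of the researchers' framework or any existing standard, the sketch below shows what "specified and measured" could look like for one of these terms. Here robustness is defined, for the sake of the example, as the fraction of predictions that remain unchanged under small random input perturbations; the toy model, perturbation bound and evaluation data are all hypothetical choices.

```python
# Minimal sketch: turning "robustness" into an objective, repeatable measure.
# Robustness is taken here as the share of inputs whose prediction survives
# small random perturbations. All names, sizes and thresholds are illustrative
# assumptions, not drawn from any standardisation document.

import numpy as np


def predict(model_weights: np.ndarray, x: np.ndarray) -> int:
    """Toy linear classifier: returns the index of the highest score."""
    return int(np.argmax(model_weights @ x))


def robustness_score(model_weights: np.ndarray,
                     inputs: np.ndarray,
                     epsilon: float = 0.05,
                     trials: int = 20,
                     seed: int = 0) -> float:
    """Fraction of inputs whose prediction is stable under `trials`
    random perturbations bounded by `epsilon` in each coordinate."""
    rng = np.random.default_rng(seed)
    stable = 0
    for x in inputs:
        baseline = predict(model_weights, x)
        unchanged = all(
            predict(model_weights,
                    x + rng.uniform(-epsilon, epsilon, size=x.shape)) == baseline
            for _ in range(trials)
        )
        stable += unchanged
    return stable / len(inputs)


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    weights = rng.normal(size=(3, 8))   # hypothetical 3-class linear model
    data = rng.normal(size=(100, 8))    # hypothetical evaluation set
    score = robustness_score(weights, data)
    # A certification scheme could then state a concrete requirement,
    # e.g. "robustness score >= 0.95 on the agreed evaluation set".
    print(f"robustness score: {score:.2f}")
```

The point of such a definition is not the particular metric, but that it is explicit and reproducible: once a term is pinned down this way, different parties can measure it the same way and compare a system against an agreed threshold.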

The potential gains are enormous. Greater efficiency in every form of critical service would mean huge savings: financial, environmental, and in terms of human resources. Europe would also be able to stay competitive, and it would alleviate many of the current concerns about providing public services.

– We hope that the evaluation framework we are developing will lead to wide adoption of trustworthy AI systems in industry and the public sector, and that the gains from doing so will be spread evenly across the population, providing better services and a better society for all, conclude André Meyer-Vitali and Chokri Mraidha, reiterating what is at stake.

About the researchers:
Dr. André Meyer-Vitali
Senior Researcher at DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz) and Lead Scientist/Principal Investigator at the Centre for European Research in Trusted AI.
Dr. Chokri Mraidha
Director of Research at the CEA LIST institute and Head of its Embedded and Autonomous Systems Design Laboratory.