The TAILOR Handbook of Trustworthy AI

The possibilities and promises of AI technology might seem endless, and it’s easy to be blinded by the pace of development and visions of a utopian future. We will need massive technical breakthroughs and careful ethical consideration, as well as rules, guidelines, definitions and legal requirements, before we allow algorithms to reach into every corner of our lives.

The main concepts discussed in the Handbook are also available as short videos in the “Trustworthy AI Explained” series on YouTube: https://www.youtube.com/watch?v=lzg-B7kdfb0

The main question, as identified by the EU, is that of trustworthiness. Pursuing trustworthiness for AI systems gives us an opportunity to create universal definitions of privacy, safety, fairness, transparency, sustainability, accountability, reproducibility, justice, and equity within the context of AI.

Once we agree on what these terms mean, it also becomes possible to create a standard to which AI systems can be held.

This would allow us to know that the output can be trusted, and that the model will tell us when it doesn’t know the answer instead of simply making up a reasonable-sounding one.

At the forefront of these considerations, we find Francesca Pratesi and Umberto Straccia. Together they are leading the team of editors behind the TAILOR Handbook of Trustworthy AI.

The handbook provides an overview of the ethical dimensions involved in the definition and deployment of AI systems. While it is specifically geared towards the scientific community, it is equally important for the general public.

It gives those who are interested a common ground and a common understanding of what is being discussed. A defined terminology also makes it possible to draw relevant comparisons. Adding requirements and criteria for trustworthiness then provides an opportunity to hold developers accountable and to rank different models.

Research in this domain is currently taking huge steps forward, and new legal requirements stemming from the EU AI Act are being finalised. These will need to be accommodated and incorporated in the Handbook as well. Given the pace of AI innovation, the Handbook is a living document that will extend and redefine itself to accommodate new models and new technical approaches.

– Aside from the technical and legal requirements, we also think that there are more ethical considerations we want to add, and we would like to extend the work that has been done on the environmental impact of AI, says Francesca Pratesi, emphasising the need to take a truly holistic approach to AI.

Successful implementation of the handbook would mean its adoption as a de facto standard within the AI space. It is a work that was inspired by, and implemented because of, the EU AI Act.

The handbook has the potential to hold great significance in the implementation of the AI Act at the European and national levels. For academia, it provides value as a guideline for evaluating models. For policymakers and the general public, it can serve as a tool for understanding how models work and what they need to be tested on.

– The handbook has the potential to create better understanding, promote discussions and stimulate critical thinking about AI in all parts of society, says Pratesi.

The content of the handbook also matters for those who are currently developing new AI models: if they know what the rules and requirements are, they can adjust accordingly. It is also important that there is competition within this space, so that not everything ends up being run by a single model developed by a single company.

– If we can build this framework, it would create a blueprint for building models that can be trusted to benefit all of society, says Umberto Straccia, underlining exactly what is at stake here.

About the researchers:
Francesca Pratesi and Umberto Straccia are researchers working at the Institute of Information Science and Technologies of Pisa. The institute is part of CNR (the Italian National Research Council), which is the largest public research organisation in Italy.