Trustworthy AI

Explainability, Safety, Fairness, Accountability, Privacy, and Sustainability are the dimensions of Trustworthy AI. They are necessarily intertwined with the project's foundational themes through a continuous, mutual exchange of requirements and challenges, with the aim of developing legally protective and value-sensitive approaches. The following questions will drive the research:

  • How can we guarantee user trust in AI systems through explanation? How can explanations be formulated as a machine-human conversation that adapts to context and user expertise?
  • How can we bridge the gap between safety engineering, formal methods, verification, and validation on the one hand and the way AI systems are built, used, and reinforced on the other?
  • How can we build algorithms that respect fairness constraints by design, using an understanding of the causal influences among variables to address bias-related issues? (A minimal fairness-metric sketch follows this list.)
  • How can we uncover accountability gaps with respect to the attribution of AI-related harm to humans?
  • Can we guarantee privacy while preserving the desired utility? (A minimal privacy-utility sketch also follows this list.)
  • Can we reduce the energy consumption of AI to make it more sustainable, and how can AI contribute to solving some of the major sustainability challenges facing humanity today (e.g., climate change)?
  • How should we handle the properties of, and tensions within, the interactions among multiple dimensions? For instance: accuracy vs. fairness, privacy vs. transparency, convenience vs. dignity, personalization vs. solidarity, and efficiency vs. safety and sustainability.
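
To make the fairness question concrete, here is a minimal sketch, in Python with synthetic data, of measuring a demographic parity gap and applying a naive per-group-threshold mitigation. The data, thresholds, and mitigation strategy are illustrative assumptions, not the project's method; a fairness-by-design approach based on causal influences among variables would go further than this post-processing step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a protected attribute and a score biased in favour of group 1.
n = 10_000
group = rng.integers(0, 2, size=n)                       # protected attribute (0 or 1)
score = rng.normal(loc=0.3 * group, scale=1.0, size=n)   # group 1 scores slightly higher
decision = score > 0.0                                   # a simple thresholding classifier

def demographic_parity_gap(decision, group):
    """Absolute difference in positive-decision rates between the two groups."""
    return abs(decision[group == 0].mean() - decision[group == 1].mean())

print(f"gap before mitigation: {demographic_parity_gap(decision, group):.3f}")

# Naive post-processing mitigation: per-group thresholds at the same quantile,
# so both groups receive positive decisions at (roughly) the same rate.
thr0 = np.quantile(score[group == 0], 0.5)
thr1 = np.quantile(score[group == 1], 0.5)
adjusted = np.where(group == 0, score > thr0, score > thr1)
print(f"gap after mitigation:  {demographic_parity_gap(adjusted, group):.3f}")
```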
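
Similarly, the privacy-utility question can be grounded in the classical Laplace mechanism of differential privacy: stronger privacy (smaller epsilon) means noisier, less useful answers. The dataset and counting query below are illustrative assumptions, not the project's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dataset: ages of 1,000 individuals.
ages = rng.integers(18, 90, size=1_000)

def private_count(data, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = int(predicate(data).sum())
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_value = int((ages > 65).sum())
print(f"true count of people over 65: {true_value}")
for epsilon in (0.01, 0.1, 1.0):
    noisy = private_count(ages, lambda d: d > 65, epsilon)
    print(f"epsilon={epsilon:<5} noisy={noisy:9.1f} abs error={abs(noisy - true_value):8.1f}")
```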

Web page: Trustworthy AI

Contact:

Fosca Giannotti (fosca.giannotti@isti.cnr.it)

Umberto Straccia (umberto.straccia@isti.cnr.it)