Automated AI (WP7)

Partners

ULEI, INRIA, KU Leuven, UNIBO, TU/e, JSI, ALU-FR, UArtois, NKUA, UMA, PUT, UPV

See the partner page for details on the participating organisations.

People

Holger Hoos (ULEI), WP leader.

About WP7

This research theme focuses on a central question: how can we use AI “at the meta-level” to ensure that AI tools and systems are performant, robust, and trustworthy, especially when they are built, deployed, maintained, and monitored by people with limited AI expertise?

Highly skilled AI experts are scarce and highly sought-after, and will remain so for the foreseeable future. The demand for AI far outpaces the availability of AI experts in many domains (medical, security, mobility, space/EO), which means that AI systems will increasingly be built and deployed by people with more limited AI expertise; this raises many trustworthiness concerns. An effective way to address this issue is AutoAI: diligently automating the labor-intensive and error-prone aspects of building AI systems in order to make them more trustworthy and robust.

Existing work on automated algorithm configuration and algorithm selection has already led to order-of-magnitude improvements over the state of the art in various reasoning and optimisation tasks, such as Boolean satisfiability, mixed integer programming, timetabling, and AI planning. Likewise, automated machine learning (AutoML) has matured considerably: AutoML tools have outperformed over 100 teams of human experts in the AutoML competition series, and neural architecture search (NAS) has repeatedly improved the state of the art in object recognition.
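
To make the flavour of this automation concrete, here is a minimal, illustrative sketch of AutoML-style combined algorithm selection and hyperparameter search, written with scikit-learn's RandomizedSearchCV. The candidate algorithms and search spaces are illustrative assumptions only; production AutoML systems search far richer pipeline spaces and add techniques such as meta-learning and ensembling.

    # Illustrative sketch: AutoML-style combined algorithm selection and
    # hyperparameter search. Real AutoML tools explore far richer spaces.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RandomizedSearchCV, train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Candidate (algorithm, hyperparameter space) pairs, chosen purely
    # for illustration.
    candidates = [
        (RandomForestClassifier(random_state=0),
         {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}),
        (LogisticRegression(max_iter=5000),
         {"C": [0.01, 0.1, 1.0, 10.0, 100.0]}),
    ]

    best_score, best_model = -1.0, None
    for estimator, space in candidates:
        search = RandomizedSearchCV(estimator, space, n_iter=5, cv=3,
                                    random_state=0)
        search.fit(X_train, y_train)
        if search.best_score_ > best_score:
            best_score, best_model = search.best_score_, search.best_estimator_

    print("selected:", best_model)
    print("held-out accuracy:", round(best_model.score(X_test, y_test), 3))
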
However, current techniques do not fully address all Trustworthy AI concerns. First, very little AutoAI research addresses explainability, safety, fairness, accountability, privacy, or sustainability. In fact, most AutoAI systems optimize a single performance objective and fail to take these trustworthiness dimensions into account. We need novel constraint-based and/or multi-objective approaches that optimize all of these dimensions simultaneously; a minimal sketch of such a multi-objective search follows below. Moreover, there is no generally agreed-upon formal definition of any of these six dimensions: either proxies must be defined, or humans must be kept in the loop. Second, very little work integrates learning, optimization, and reasoning. Some approaches use grammars or fixed rules to compose machine learning pipelines, but these have limited exploratory power. Human experts, by contrast, are able to reason about what to do based on the semantics and structure of the data or the behavior of the current model.
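
As a minimal sketch of what such a multi-objective approach can look like in practice, the example below uses Optuna's multi-objective mode to trade off predictive accuracy against a crude interpretability proxy (the depth budget of a decision tree). The proxy and the search space are illustrative assumptions, not TAILOR results; defining faithful proxies for trustworthiness dimensions is precisely one of the open problems noted above.

    # Hedged sketch: multi-objective model search with Optuna, trading off
    # accuracy (maximise) against a decision tree's depth budget, used here
    # as a crude, assumed proxy for interpretability (minimise).
    import optuna
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    def objective(trial):
        depth = trial.suggest_int("max_depth", 1, 12)
        leaf = trial.suggest_int("min_samples_leaf", 1, 20)
        model = DecisionTreeClassifier(max_depth=depth,
                                       min_samples_leaf=leaf, random_state=0)
        accuracy = cross_val_score(model, X, y, cv=3).mean()
        return accuracy, depth  # two objectives, optimised jointly

    study = optuna.create_study(directions=["maximize", "minimize"])
    study.optimize(objective, n_trials=30)

    # study.best_trials holds the Pareto front: trials no other trial dominates.
    for t in study.best_trials:
        print(f"accuracy={t.values[0]:.3f} depth={t.values[1]} params={t.params}")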

To overcome these hurdles and to improve the usability of AI tools and systems, we propose this theme on AutoAI, with a strong but not exclusive emphasis on machine learning. We will stimulate AutoAI research that produces safer and more sustainable AI, ensures fairness and interpretability, and is driven by open-source implementations and transparent, reproducible experimentation, making its results easily available to the whole European community.

Research challenges:

  • AutoML in the Wild. While automated machine learning (AutoML) has achieved great successes in “lab” environments, it is not yet mature and flexible enough for many real-world problems.
  • Beyond standard supervised learning. How can we extend the successes obtained in AutoML for supervised learning to other ML settings?
  • Self-monitoring AI Systems. How can we determine when an automatically built or configured AI system “gets out of its depth”, i.e., when it is no longer safe to use the system or the results it produces? (A minimal confidence-based sketch follows this list.)
  • Multi-objective AutoAI. Trustworthy AI typically requires trade-offs between performance and other dimensions of trustworthiness, which may be in tension with one another (e.g., privacy and explainability). We aim to build on work in multi-objective optimization and multi-criteria decision making, as well as on multi-objective work in AutoAI and NAS.
  • Ever-learning AutoAI. Today, many AI learning systems are built from scratch for every new task we encounter. That takes a lot of expertise, a lot of data, and a lot of time. Humans operate very differently: from newborn to adult, we experiment and learn how to do simple tasks first, and then leverage what we have learned to pick up new, more complex tasks very efficiently. Likewise, we want AI systems that never stop learning: systems that experiment across many different tasks, learn what works, and leverage that knowledge to learn new tasks more effectively.
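
As referenced in the self-monitoring challenge above, one simple baseline is to have a classifier abstain whenever its maximum predicted class probability falls below a threshold, flagging those inputs for human review. The sketch below illustrates this; the 0.75 threshold is an illustrative assumption that would in practice be calibrated on held-out data, and considerably stronger out-of-distribution detectors exist.

    # Minimal self-monitoring baseline: flag inputs on which the model's
    # maximum predicted class probability is low, i.e. where the system
    # may be "out of its depth". The 0.75 threshold is an assumption.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    confidence = model.predict_proba(X_test).max(axis=1)

    THRESHOLD = 0.75
    flagged = confidence < THRESHOLD  # defer these inputs to a human
    accepted = ~flagged
    accuracy = (model.predict(X_test)[accepted] == y_test[accepted]).mean()
    print(f"flagged {flagged.sum()} of {len(y_test)} inputs for review")
    print(f"accuracy on the confident subset: {accuracy:.3f}")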

Contact: Holger Hoos (hh@liacs.nl)

News related to WP7

  • 6 TAILOR scientists in new EurAI board

    TAILOR scientists Giuseppe de Giacomo, Fredrik Heintz, Holger Hoos, Ana Paiva, Carles Sierra and Manolis Koubarakis were recently elected members of the EurAI Board.

  • TAILOR Handbook of Trustworthy AI

    The TAILOR Handbook of Trustworthy AI is an encyclopedia of the major scientific and technical terms related to Trustworthy Artificial Intelligence. Its main goal is to give non-experts, especially researchers and students, an overview of the problems related to the development of ethical and trustworthy AI systems. The…

  • Impressions from TAILOR Conference in Prague

    On 13 and 14 September, TAILOR held its second annual conference, this time hosted in Prague at the beautiful setting of Charles University. The major objectives of the conference were to discuss what TAILOR has done so far, to make a plan for the next year, and finally to meet in…

  • Impressions from The Joint TAILOR-EurAI Summer school in Barcelona

    During the week of 13–17 June, the 19th EurAI Advanced Course on AI (ACAI) and the 2nd TAILOR summer school were organised in Barcelona. This joint initiative was devoted to the themes of explainable and trustworthy AI and organised by Carles Sierra and Karina Gibert from the Intelligent Data Science and Artificial Intelligence Research Center at Universitat…
