AutoAI

This research theme focuses on a central question: how can we use AI “at the meta-level” to ensure that AI tools and systems are performant, robust and trustworthy, especially when they are built, deployed, maintained and monitored by people with limited AI expertise?

Highly skilled AI experts are scarce and highly sought-after, and will remain so for the foreseeable future. The demand for AI far outpaces the availability of AI experts in many domains (medical, security, mobility, space/EO), meaning that AI systems will increasingly be built and deployed by people with more limited AI expertise, and this raises serious concerns about trustworthiness. An effective way to address this issue is AutoAI: diligently automating the labor-intensive and error-prone aspects of building AI systems, in order to make those systems more trustworthy and robust.

Existing work on automated algorithm configuration and algorithm selection has already led to orders-of-magnitude improvements in the state of the art for various reasoning and optimisation tasks, such as the Boolean satisfiability problem, mixed integer programming, timetabling, and AI planning. Likewise, automated machine learning (AutoML) has reached maturity: AutoML tools have outperformed over 100 teams of human experts in the AutoML competition series, and neural architecture search (NAS) has repeatedly improved the state of the art in object recognition.
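
To make the kind of automation involved concrete, the sketch below tunes a hypothetical two-parameter configuration by plain random search. The objective function, parameter names and ranges are invented for illustration; real configurators such as SMAC use far more sophisticated model-based search, but the shape of the loop is the same:

```python
import random

def evaluate(config):
    # Hypothetical objective standing in for "run the solver/model with
    # this configuration and measure its performance" (lower is better).
    return (config["alpha"] - 0.3) ** 2 + (config["beta"] - 7) ** 2 / 100

def random_search(n_trials, seed=0):
    # Minimal automated-configuration loop: sample configurations at
    # random, evaluate each, and keep the best one found so far.
    rng = random.Random(seed)
    best_config, best_score = None, float("inf")
    for _ in range(n_trials):
        config = {"alpha": rng.uniform(0.0, 1.0), "beta": rng.randint(1, 20)}
        score = evaluate(config)
        if score < best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = random_search(200)
```

The point is not the search strategy, which is deliberately naive here, but that the loop rather than a human performs the tedious trial-and-error of finding a good configuration.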
However, current techniques do not fully address all Trustworthy AI concerns. First, very little AutoAI research addresses explainability, safety, fairness, accountability, privacy, or sustainability. In fact, most AutoAI systems optimize a single performance objective and fail to take trustworthiness dimensions into account. We need novel constraint-based and/or multi-objective approaches to optimize all these dimensions simultaneously. Moreover, there is no generally agreed-upon formal definition of any of these six dimensions: either proxies must be defined, or humans must be put in the loop. Second, very little work integrates learning, optimization and reasoning. Some approaches use grammars or fixed rules to compose machine learning pipelines, but this has limited exploratory power. Human experts, by contrast, are able to reason about what to do based on the semantics and structure of the data, or on the behavior of the current model.
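
One basic ingredient of such multi-objective approaches can be sketched simply: given candidate models scored on several trustworthiness-related objectives, retain only the Pareto-optimal ones and leave the final trade-off to a human decision-maker. The model names and objective values below are hypothetical, and both objectives are minimized:

```python
def dominates(a, b):
    # a dominates b if it is at least as good on every objective
    # and strictly better on at least one (all objectives minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    # Keep every candidate that no other candidate dominates.
    return [c for c in candidates
            if not any(dominates(o["objectives"], c["objectives"])
                       for o in candidates)]

# Hypothetical candidate models scored on (error rate, privacy risk):
models = [
    {"name": "A", "objectives": (0.10, 0.9)},
    {"name": "B", "objectives": (0.15, 0.4)},
    {"name": "C", "objectives": (0.30, 0.5)},  # dominated by B
    {"name": "D", "objectives": (0.25, 0.2)},
]
front = pareto_front(models)  # A, B and D survive; C is dominated by B
```

Presenting the front instead of a single "best" model makes the performance–trustworthiness trade-off explicit rather than baked into an arbitrary scalarization.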

To overcome these hurdles and to improve the usability of AI tools and systems, we propose this theme on AutoAI, with a strong but not exclusive emphasis on machine learning. We will stimulate AutoAI research that produces safer and more sustainable AI, able to ensure fairness and interpretability, and driven by open-source implementations and transparent, reproducible experimentation, making it easily available to the whole European community.

Research challenges:

  • AutoML in the Wild. While automated machine learning (AutoML) has achieved great successes in “lab” environments, it is not yet mature and flexible enough for many real-world problems.
  • Beyond standard supervised learning. How can we extend the successes obtained in AutoML for supervised learning to other ML settings?
  • Self-monitoring AI Systems. How can we determine when an automatically built or configured AI system “gets out of its depth”, i.e., when using it, or the results obtained from it, is no longer safe?
  • Multi-objective AutoAI. Trustworthy AI typically requires a trade-off between performance and other dimensions of trustworthiness, which may themselves be in tension (e.g. privacy and explainability). We aim to build on work in multi-objective optimization and multi-criteria decision making, as well as on multi-objective work in AutoAI and NAS.
  • Ever-learning AutoAI. Today, many AI learning systems are built from scratch with every new task we encounter. That takes a lot of expertise, a lot of data, and a lot of time. Humans operate very differently. From newborn to adult, we experiment and learn how to do simple tasks first, and then leverage what we have learned to learn new, more complex tasks very efficiently. Likewise, we want AI systems to never stop learning, experiment across lots of different tasks, learn what works, and leverage that to learn new tasks more effectively.
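
The self-monitoring challenge above can be illustrated with a deliberately simple sketch: abstain and defer to a human whenever the model's top-class confidence drops below a threshold. The threshold value and example outputs are hypothetical, and a real self-monitoring system would need calibrated uncertainty estimates rather than raw softmax scores:

```python
def monitor(probabilities, threshold=0.7):
    # For each prediction, flag it as unsafe ("defer") when the
    # top-class probability falls below the confidence threshold,
    # instead of silently returning an unreliable answer.
    decisions = []
    for probs in probabilities:
        confidence = max(probs)
        label = probs.index(confidence)
        decisions.append((label, confidence) if confidence >= threshold
                         else ("defer", confidence))
    return decisions

# Three hypothetical softmax outputs from a 3-class model:
outputs = [[0.05, 0.90, 0.05],   # confident: predict class 1
           [0.40, 0.35, 0.25],   # uncertain: defer to a human
           [0.80, 0.10, 0.10]]   # confident: predict class 0
decisions = monitor(outputs)
```

Even this crude rule captures the core idea of the research challenge: an AI system should detect when it is out of its depth and hand control back, rather than fail silently.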

Contact: Holger Hoos (hh@liacs.nl)