Machine Learning meets optimization (mini-workshop)

Machine learning and optimization are closely related. On the one hand, many machine learning tasks can be solved using optimizers; on the other hand, combinatorial optimization problems (static, dynamic, stochastic) are increasingly solved effectively with the help of machine learning.
The aim of this half-day workshop is to bring together researchers from Machine Learning and Operations Research for talks and open discussions, in order to identify how techniques from the two fields can cross-fertilize.

The event is organized by:

* Patrick De Causmaecker (KU Leuven, BE)
* Neil Yorke-Smith (TU Delft, NL)
* Yingqian Zhang (TU Eindhoven, NL)

More info at

https://www.tudelft.nl/evenementen/2021/aidu/mini-workshop-machine-learning-meets-optimization

Hourly Schedule

Mini-workshop

8:30 - 9:00
A New Class of Hard Problem Instances for the 0-1 Knapsack Problem and their position in Instance Space
Speakers:
Patrick De Causmaecker
9:00 - 9:30
EXPObench: Benchmarking Surrogate-based Optimisation Algorithms on Expensive Black-box Functions
Speakers:
Laurens Bliek
9:30 - 10:00
Learning New Heuristics for Combinatorial Problems
Speakers:
Herke van Hoof
10:00 - 10:30
Coffee Break
10:30 - 11:00
Machine Learning meets Combinatorial Optimization: Real World Applications in Routing and Scheduling
Speakers:
Hoong Chuin Lau
11:00 - 11:30
Learning from Failure: Deep Reinforcement Learning for Optimization
Speakers:
Wendelin Böhmer
11:30 - 12:00
Lightning pitches
Patrick De Causmaecker
KU Leuven
Abstract The 0-1 knapsack problem is an important optimization problem, because it arises as a special case of a wide variety of optimization problems and has been generalized in several ways. Very powerful algorithms solve large knapsack problem instances with thousands of decision variables very efficiently, and current problem instances in the literature no longer challenge these algorithms. We propose a new class of hard problem instances for the 0-1 knapsack problem and provide theoretical support that helps explain why these problem instances are hard to solve to optimality. A large dataset of 3240 hard problem instances was generated and subsequently solved on a supercomputer, using approximately 810 CPU-hours. The analysis of the obtained results shows to what extent different parameters influence the hardness of the problem instances. This analysis demonstrates that the proposed problem instances are much harder - despite being orders of magnitude smaller - than the previously known hardest instances. Instance space analysis is applied and these new instances are positioned in the problem's instance space, which leads to some open questions for future work. Joint work with Jorik Jooken and Pieter Leyman.
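As background for this talk, the problem itself can be stated in a few lines via the textbook dynamic program for the 0-1 knapsack. This is a minimal illustrative sketch, not the paper's instance generator or the solvers benchmarked on the supercomputer:

```python
def knapsack_01(values, weights, capacity):
    """Textbook dynamic program for the 0-1 knapsack problem.
    best[c] holds the best total value achievable with capacity c."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is picked at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

This runs in O(n * capacity) time, which is pseudo-polynomial; the hard instances the talk describes are precisely those small instances that nonetheless defeat far more sophisticated exact solvers.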
Laurens Bliek
TU Eindhoven
Abstract Surrogate algorithms such as Bayesian optimisation are specifically designed for black-box optimisation problems with expensive objectives, such as hyperparameter tuning or simulation-based optimisation. In the literature, these algorithms are usually evaluated with synthetic benchmarks which are well established but have no expensive objective, and only on one or two real-life applications which vary wildly between papers. There is a clear lack of standardisation when it comes to benchmarking surrogate algorithms on real-life, expensive, black-box objective functions. A new benchmark library, EXPObench, provides first steps towards such a standardisation. The library is used to provide an extensive comparison of six different surrogate algorithms on four expensive optimisation problems from different real-life applications. This has led to new insights regarding the relative importance of exploration, the evaluation time of the objective, and the model used.
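The surrogate loop the abstract refers to can be illustrated with the simplest possible surrogate: a parabola fitted through the three best evaluations so far (successive parabolic interpolation). This is a toy one-dimensional sketch under a tight evaluation budget, not the EXPObench library or any of the six algorithms it compares:

```python
def fit_parabola_min(top3):
    """Minimiser of the parabola through three (x, f(x)) points:
    the surrogate's prediction of where the objective is lowest."""
    (x0, f0), (x1, f1), (x2, f2) = top3
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den

def surrogate_minimise(f, x_init, budget=6):
    """Spend a small budget of expensive evaluations of f; each step
    fits a quadratic surrogate to the three best points seen so far
    and evaluates the objective at the surrogate's minimiser."""
    evals = sorted(((x, f(x)) for x in x_init), key=lambda e: e[1])
    while len(evals) < budget:
        x_new = fit_parabola_min(evals[:3])
        if any(abs(x_new - x) < 1e-12 for x, _ in evals):
            break  # surrogate proposes an already-known point: converged
        evals.append((x_new, f(x_new)))
        evals.sort(key=lambda e: e[1])
    return evals[0]  # best (x, f(x)) found within the budget
```

Real surrogate algorithms replace the parabola with richer models (Gaussian processes, random forests) and add an explicit exploration term; the key structure - model, propose, evaluate, refit - is the same.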
Herke van Hoof
University of Amsterdam
Abstract Large-scale combinatorial problems usually cannot be solved exactly. Engineered heuristics are a popular alternative. I will present joint work with Wouter Kool and Max Welling on learning such heuristics from data rather than hand-engineering them. In our work, we focus on finding methods and models that suit the characteristic properties of such problems, such as symmetries and determinism. The resulting approach shows strong performance on routing problems such as the TSP. I will also present preliminary ideas and results on challenges such as scaling up to larger instances and solving dynamic and stochastic routing problems.
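For context, the kind of hand-engineered heuristic that this line of work aims to learn from data instead can be sketched for the TSP with the classic nearest-neighbour rule. This is purely illustrative background, not the learned model from the talk:

```python
import math

def nearest_neighbour_tour(points):
    """Hand-engineered greedy TSP heuristic: start at city 0 and
    repeatedly travel to the nearest unvisited city."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

Learned approaches keep this sequential construct-a-tour structure but replace the fixed "nearest" rule with a neural policy trained on data, which is where properties like symmetry and determinism become relevant to the model design.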
Hoong Chuin Lau
Singapore Management University
Abstract Traditionally, Machine Learning belongs to the field of AI, while Combinatorial Optimization is a key area in Operations Research. The line is blurring today as the two communities come together to tackle new problems in both disciplines. In this broad-level talk, I will discuss my recent work on tackling real-world problems arising in urban settings -- last-mile logistics and emergency response. The underlying challenge is generating robust routes and schedules in dynamic environments by combining learning and planning in interesting ways. Time permitting, I will take a quantum leap to talk about an emerging research topic: solving constrained optimization problems with quantum/quantum-inspired annealing, where machine learning is used to tune both problem and algorithm parameters.
Wendelin Böhmer
TU Delft
Abstract The talk will contrast reinforcement learning (RL) and combinatorial optimization using the knapsack and travelling salesman problems as examples. We implemented a deep RL method that scales to an arbitrary number of objects/locations and compared it with heuristics and OR methods. The talk will summarize the problems RL methods face in optimization domains and the lessons we learned from our experiments.

Date

16 Dec 2021

Time

08:30 - 12:00

More Info

Register here
