Machine Learning meets Optimization (mini-workshop)
Machine learning and optimization are closely related. On the one hand, many machine learning tasks can be solved using optimizers; on the other hand, combinatorial optimization problems (static, dynamic, stochastic) are increasingly solved effectively with the help of machine learning.
The aim of this half-day workshop is to bring together researchers from machine learning and operations research for talks and open discussions, in order to identify how techniques from the two fields can cross-fertilize.
The event is organized by:
* Patrick De Causmaecker (KU Leuven, BE)
* Neil Yorke-Smith (TU Delft, NL)
* Yingqian Zhang (TU Eindhoven, NL)
More info at
https://www.tudelft.nl/evenementen/2021/aidu/mini-workshop-machine-learning-meets-optimization
Hourly Schedule
Mini-workshop
- 8:30 - 9:00: A New Class of Hard Problem Instances for the 0-1 Knapsack Problem and their position in Instance Space (Patrick De Causmaecker)
- 9:00 - 9:30: EXPObench: Benchmarking Surrogate-based Optimisation Algorithms on Expensive Black-box Functions (Laurens Bliek)
- 9:30 - 10:00: Learning New Heuristics for Combinatorial Problems (Herke van Hoof)
- 10:00 - 10:30: Coffee Break
- 10:30 - 11:00: Machine Learning meets Combinatorial Optimization: Real World Applications in Routing and Scheduling (Hoong Chuin Lau)
- 11:00 - 11:30: Learning from Failure: Deep Reinforcement Learning for Optimization (Wendelin Böhmer)
- 11:30: Lightning pitches
Speakers
Herke van Hoof (University of Amsterdam)
Abstract
Large-scale combinatorial problems usually cannot be solved exactly, and engineered heuristics are a popular alternative. I will present joint work with Wouter Kool and Max Welling on learning such heuristics from data rather than hand-engineering them. In our work, we focus on finding methods and models that suit the characteristic properties of such problems, such as symmetries and determinism. The resulting approach shows strong performance on routing problems such as the TSP. I will also present preliminary ideas and results on challenges such as scaling up to larger instances and solving dynamic and stochastic routing problems.
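As a loose, self-contained illustration of the general idea of learning construction heuristics from data (this is not the attention-based model from the joint work with Kool and Welling; the features, function names, and hyperparameters below are assumptions made purely for the sketch), here is a toy policy that scores unvisited cities with a linear function of two hand-picked features and is trained with REINFORCE on random Euclidean TSP instances:

```python
# Toy sketch of a learned construction heuristic for Euclidean TSP.
# NOT the attention model from the talk: the policy is a linear scoring rule
# over two hand-picked features, trained with REINFORCE. All names and
# hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def features(coords, current):
    """Per-city features: distance from the current city and from the start city."""
    d_cur = np.linalg.norm(coords - coords[current], axis=1)
    d_start = np.linalg.norm(coords - coords[0], axis=1)
    return np.stack([d_cur, d_start], axis=1)           # shape (n_cities, 2)

def rollout(coords, theta):
    """Sample one tour with the softmax policy; return its length and the
    accumulated gradient of the tour's log-probability w.r.t. theta."""
    n = len(coords)
    visited, tour = {0}, [0]
    grad = np.zeros_like(theta)
    for _ in range(n - 1):
        f = features(coords, tour[-1])
        scores = -f @ theta                              # small feature value -> high score
        scores[list(visited)] = -1e9                     # mask already visited cities
        p = np.exp(scores - scores.max())
        p /= p.sum()
        nxt = int(rng.choice(n, p=p))
        grad += p @ f - f[nxt]                           # d log pi(nxt | state) / d theta
        visited.add(nxt)
        tour.append(nxt)
    length = sum(np.linalg.norm(coords[tour[i]] - coords[tour[(i + 1) % n]])
                 for i in range(n))
    return length, grad

def train(n_cities=20, iters=200, batch=32, lr=0.02):
    """REINFORCE with a mean baseline, minimising expected tour length."""
    theta = np.zeros(2)
    for it in range(iters):
        instances = rng.random((batch, n_cities, 2))     # random Euclidean instances
        lengths, grads = zip(*(rollout(c, theta) for c in instances))
        baseline = float(np.mean(lengths))
        g = sum((L - baseline) * G for L, G in zip(lengths, grads)) / batch
        theta -= lr * g
        if it % 50 == 0:
            print(f"iter {it:3d}  mean tour length {baseline:.3f}  theta {theta}")
    return theta

if __name__ == "__main__":
    train()
```

A policy of this kind can only interpolate between simple rules such as nearest-neighbour construction; the models discussed in the talk replace the hand-picked features with learned representations of the instance.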
Laurens Bliek (TU Eindhoven)
Abstract
Surrogate algorithms such as Bayesian optimisation are specifically designed for black-box optimisation problems with expensive objectives, such as hyperparameter tuning or simulation-based optimisation. In the literature, these algorithms are usually evaluated on well-established synthetic benchmarks that have no expensive objective, and on only one or two real-life applications that vary wildly between papers. There is a clear lack of standardisation when it comes to benchmarking surrogate algorithms on real-life, expensive, black-box objective functions. A new benchmark library, EXPObench, provides first steps towards such a standardisation. The library is used to provide an extensive comparison of six different surrogate algorithms on four expensive optimisation problems from different real-life applications. This has led to new insights regarding the relative importance of exploration, the evaluation time of the objective, and the model used.
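For readers unfamiliar with surrogate-based optimisation, the sketch below shows a minimal, generic Bayesian-optimisation loop (Gaussian-process surrogate plus expected improvement), assuming scikit-learn and SciPy are available. It does not use EXPObench or its API, and the cheap one-dimensional objective is only a stand-in for an expensive black-box function so that the example runs quickly:

```python
# Minimal generic surrogate-based optimisation loop (Bayesian optimisation with
# a Gaussian-process surrogate and expected improvement). This is a sketch, not
# EXPObench: the objective is a cheap stand-in for an expensive black box.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def objective(x):
    """Cheap stand-in for an expensive black-box objective (to be minimised)."""
    return np.sin(3 * x) + 0.1 * (x - 1.0) ** 2

def expected_improvement(mu, sigma, best):
    """Expected improvement for minimisation; larger is better."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

bounds = (-2.0, 4.0)
X = rng.uniform(*bounds, size=(5, 1))                    # small initial design
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(25):                                      # evaluation budget
    gp.fit(X, y)                                         # refit the surrogate
    cand = np.linspace(*bounds, 500).reshape(-1, 1)      # candidate grid
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])                           # evaluate the chosen point
    y = np.append(y, objective(x_next[0]))

print(f"best x = {X[np.argmin(y)][0]:.3f}, best objective = {y.min():.3f}")
```

In a genuinely expensive setting, each call to the objective would dominate the runtime, which is why the evaluation time of the objective relative to the cost of fitting the surrogate is one of the aspects the abstract highlights.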