Hackathons, challenges and benchmarks
Competitions driven by the TAILOR community to foster collaboration and drive innovation in the field of Trustworthy AI.
Upcoming/past TAILOR challenges
- Machine Learning for Physical Simulations
- Sleep states Challenge
- Machine Learning for Physical Simulation Challenge
- Mind the Avatar’s Mind
- Automated Crossword Solving
- Brain Age Prediction Challenge
- Smarter mobility data challenge
- Meta Learning from Learning Curves 2
- Cross-Domain MetaDL: Any-Way Any-Shot Learning Competition with Novel Datasets from Practical Domains
- Learning to Run a Power Network Challenge (L2RPN)
Your idea = our challenge?
Do you have an idea for an academic challenge, or a challenge related to an industrial use case? Please contact Marc.Schoenauer@inria.fr and Sebastien.Treguer@inria.fr from WP2 for feedback, coaching, and help running your idea as a challenge on the Codalab open-source platform.
Background
Hackathons
Hackathons are 2-3 day events built around a clear problem, with corresponding data from industrial partners. Their main goals are to develop a proof of concept, define an MVP, challenge the industrial strategy, and open channels for further recruitment with multidisciplinary teams. Points of attention during a hackathon's execution are:
- What data is to be used (liaise with Eurostat);
- What the outcome will be;
- Search for cross-sector use cases, avoiding silos.
Please refer to the TAILOR Hackathon User Guide to learn more:
Challenges
Challenges are longer events, lasting 3 to 6 months, and focus on a specific research problem. Their aim is to push forward the boundaries of the state of the art. Their main goals are:
- Transfer results from research to address some of TAILOR's applied problems.
- Combine research and industrial partners, inside and outside the project, to work on an ambitious use case, shared datasets, and the usability of results.
Find the Challenge Guidelines here:
Benchmarks
Benchmarks are datasets with associated tests and metrics for measuring the performance of AI systems on specific tasks. They are used to compare a model or process to existing methods.
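To illustrate the idea, here is a minimal sketch of how such a comparison typically works: two models are evaluated on the same fixed dataset split with the same metric, so their scores can be ranked directly. The dataset, models, and metric below are illustrative assumptions (using scikit-learn for convenience), not part of any TAILOR benchmark.

```python
# Minimal benchmarking sketch: a shared dataset split and a shared
# metric make scores directly comparable across models.
# All concrete choices here (dataset, models, metric) are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Fixed split: every benchmarked model sees the same train/test data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

models = {
    "baseline (majority class)": DummyClassifier(strategy="most_frequent"),
    "logistic regression": LogisticRegression(max_iter=5000),
}

# One shared metric, so each new model can be compared to existing ones.
for name, model in models.items():
    model.fit(X_train, y_train)
    score = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {score:.3f}")
```

The key design point is that the split, metric, and evaluation protocol are held fixed; only the model under test changes, which is what makes benchmark scores meaningful across methods.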