posted by organizer: TomeEftimov

BENCHMARK 2020 : GECCO 2020 Workshop - Good Benchmarking Practices for Evolutionary Computation


When Jul 8, 2020 - Jul 12, 2020
Where Cancun, Mexico
Submission Deadline Apr 17, 2020
Notification Due May 1, 2020
Final Version Due May 8, 2020
Categories: evolutionary algorithms, natural computing, benchmarking, artificial intelligence

Call For Papers

Scope and Objective
Benchmarking aims to illuminate the strengths and weaknesses of algorithms regarding different
problem characteristics. To this end, several benchmarking suites have been designed which target
different types of characteristics.
Gaining insight into the behavior of algorithms on a wide array of problems has benefits for different
stakeholders. It helps engineers new to the field of optimization find an algorithm suitable for their
problem. It also allows experts in optimization to develop new algorithms and improve existing ones.
Even though benchmarking is a highly-researched topic within the evolutionary computation
community, there are still a number of open questions and challenges that should be explored:

(i) most commonly-used benchmarks are small and do not cover the space of meaningful problem characteristics,
(ii) benchmarking suites lack the complexity of real-world problems,
(iii) proper statistical analysis techniques that can easily be applied depending on the nature of the
data are lacking or seldom used, and
(iv) user-friendly, openly accessible benchmarking techniques and software need to be developed
and spread.

We wish to enable a culture of sharing to ensure direct access to resources as well as reproducibility.
This helps to avoid common pitfalls in benchmarking such as overfitting to specific test cases. We
aim to establish new standards for benchmarking in evolutionary computation research so we can
objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.

Workshop Description
As the goal of the workshop is to discuss, develop and improve benchmarking practices in
evolutionary computation, we particularly welcome informal position statements addressing or
identifying open challenges in benchmarking, as well as any other suggestions and contributions to the
discussion. Possible contributions include, but are not limited to:
(i) lists of open questions/issues in benchmarking,
(ii) examples of good benchmarking, and
(iii) descriptions of common pitfalls in benchmarking and how to avoid them.

For all other information about the workshop, please contact Thomas Weise.

We also welcome the submission of workshop papers to be published in the GECCO companion
proceedings. The Workshop Call for Papers (CfP) can be downloaded in PDF format or as a plain text
file here:
Our goal for the workshop is to collaboratively produce output that improves the state of the art of
benchmarking in evolutionary computation, not to organize yet another mini-conference!

The topics of interest for this workshop include, but are not limited to:

(i) the selection of meaningful (real-world) benchmark problems,
(ii) performance measures for comparing algorithm behavior,
(iii) novel statistical approaches for analyzing empirical data,
(iv) landscape analysis,
(v) data mining approaches for understanding algorithm behavior,
(vi) transfer learning from benchmark experiences to real-world problems, and
(vii) benchmarking tools for executing experiments and analysis of experimental results.

Important Dates

Submission opening: February 27, 2020
Submission deadline: April 3, 2020
Notification of acceptance: April 17, 2020
Camera-Ready Material: April 24, 2020
Author registration deadline: April 27, 2020

Submission Instructions
All relevant instructions regarding paper submission are available at

Related Event
A similar benchmarking best-practices workshop will be held at PPSN 2020, which takes place from
September 5-9, 2020, in Leiden, The Netherlands. Contributions to that workshop are welcome in any
format until June 8, 2020.

List of Organizers (alphabetical order)
Thomas Bäck (Leiden University, The Netherlands)
Carola Doerr (Sorbonne University, Paris, France)
Tome Eftimov (Jožef Stefan Institute, Ljubljana, Slovenia)
Pascal Kerschke (University of Münster, Germany)
William La Cava (University of Pennsylvania, USA)
Manuel López-Ibáñez (University of Manchester, UK)
Boris Naujoks (TH Cologne, Germany)
Pietro S. Oliveto (University of Sheffield, UK)
Patryk Orzechowski (University of Pennsylvania, USA)
Mike Preuss (Leiden University, The Netherlands)
Jérémy Rapin (Facebook AI Research, Paris, France)
Ofer M. Shir (Tel-Hai College and Migal Institute, Israel)
Olivier Teytaud (Facebook AI Research, Paris, France)
Heike Trautmann (University of Münster, Germany)
Ryan J. Urbanowicz (University of Pennsylvania, USA)
Vanessa Volz (Copenhagen, Denmark)
Markus Wagner (The University of Adelaide, Australia)
Hao Wang (Sorbonne University, Paris, France)
Thomas Weise (Institute of Applied Optimization, Hefei University, Hefei, China)
Borys Wróbel (Adam Mickiewicz University, Poland)
Aleš Zamuda (University of Maribor, Slovenia)
