Results Reporting

Supported Benchmark Datasets

Currently, the following benchmark datasets are supported by the Python data loader scripts for results reporting:

Data Loading and Results Computing Scripts

A set of Python data loader functions and figure of merit calculation functions is provided through this link. These scripts simplify downloading, loading, and splitting the various datasets available on the website, and they also include basic instructions on submitting benchmark results. Examples of how to use these functions for each of the supported benchmark datasets are provided on the scripts' GitHub page.
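
For illustration only, the Python sketch below shows the kind of workflow these scripts support: load a dataset, keep the official training+validation / test split, and compute a figure of merit on the test data. The names load_benchmark_split and rmse are hypothetical placeholders with synthetic stand-in data; the actual function names, signatures, and datasets are defined by the provided scripts and documented on their GitHub page.

import numpy as np

def load_benchmark_split(name: str):
    """Placeholder for a provided loader: assumed to download the dataset
    'name' and return ((u_train_val, y_train_val), (u_test, y_test)).
    Here it just generates synthetic stand-in data."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=1000)
    y = np.tanh(u) + 0.01 * rng.normal(size=1000)
    return (u[:800], y[:800]), (u[800:], y[800:])

def rmse(y_true, y_pred):
    """Root-mean-square error, a typical figure of merit."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Identify the model on the training+validation split, evaluate once on the
# test split, and report the resulting figure of merit.
(u_tv, y_tv), (u_test, y_test) = load_benchmark_split("example_system")
y_pred_test = np.tanh(u_test)  # stand-in for a fitted model's simulated output
print("test RMSE:", rmse(y_test, y_pred_test))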

Results Submission

The results can be submitted through the online results submission form. This form collects the mandatory information about the obtained results: a link to a (preprint) publication or technical report, a link to a publicly available implementation of the approach, and the figures of merit as obtained using the provided Python scripts. The results should be obtained using the training+validation / test data split as provided by the previously mentioned Python scripts. The test data cannot be used in any way to tune the model parameters or structure. Note that the reported results are curated; only complete submissions with meaningful contributions will be included.
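
As a sketch of this split discipline only (not the actual submission pipeline), the following Python example uses synthetic data and a toy polynomial model class: hyperparameters are selected on a validation subset carved out of the training+validation data, and the test split is touched only once to compute the reported figure of merit. The helpers fit_polynomial and rmse are hypothetical and not part of the provided scripts.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: u is the input, y the measured output.
u = rng.normal(size=2000)
y = 0.8 * u + 0.1 * u**2 + 0.05 * rng.normal(size=2000)

# Official split: the test set must not influence any tuning decision.
u_train_val, y_train_val = u[:1500], y[:1500]
u_test, y_test = u[1500:], y[1500:]

# Carve a validation subset out of the training+validation data for model selection.
u_train, y_train = u_train_val[:1200], y_train_val[:1200]
u_val, y_val = u_train_val[1200:], y_train_val[1200:]

def fit_polynomial(u, y, degree):
    """Least-squares polynomial model (toy model class for illustration)."""
    return np.polyfit(u, y, degree)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Select the model order using the validation split only.
best_degree = min(
    range(1, 5),
    key=lambda d: rmse(y_val, np.polyval(fit_polynomial(u_train, y_train, d), u_val)),
)

# Refit on the full training+validation data; evaluate once on the test data.
coeffs = fit_polynomial(u_train_val, y_train_val, best_degree)
print("reported test RMSE:", rmse(y_test, np.polyval(coeffs, u_test)))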

Authors who wish to submit a batch of results at once can contact us by e-mail (m.schoukens@tue.nl).

Results Overview

The curated results are displayed on the page of each of the supported benchmark systems. You can also find the complete list below. This list also includes some results obtained before the introduction of the supporting data loading and splitting scripts; these are indicated as legacy results, as they may have used different data splits during the identification process.

Benchmark Results