In preparation for the AutoDL challenges, we formatted around 100 datasets and used 66 of them in the challenges. Some meta-features of these datasets are shown in the table below. Note that in the AutoDL challenges all tasks are multi-label classification tasks. You can also format your own data with our code.
The public datasets provided in the final AutoDL challenge can be found here.
1st prize: DeepWisdom [GitHub repo]
2nd prize: DeepBlueAI [GitHub repo]
3rd prizes: Inspur_AutoDL [GitHub repo], PASA_NJU [GitHub repo]
The implementation of the strongest baseline (Baseline 3) we provided in AutoDL challenges can be found here.
As a first step towards a rich AutoDL benchmark, we ran Baseline 3 on all 66 AutoDL datasets. Their Area under Learning Curve (ALC) scores and final NAUC scores (with time budget T=1200s and t0=60s) are shown in the following figures. The rectangular area in the first figure is shown zoomed in the second figure.
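For reference, the sketch below illustrates how an ALC score of the kind reported in these figures can be computed from a learning curve. It is not the official scoring code; it assumes the logarithmic time transformation t_tilde(t) = log(1 + t/t0) / log(1 + T/t0) with T=1200s and t0=60s, and a step-wise learning curve where the NAUC score at time t is that of the most recent prediction. The function names and the example values are purely illustrative.

```python
import numpy as np

def transform_time(t, T=1200.0, t0=60.0):
    """Map wall-clock time t in [0, T] to transformed time in [0, 1]."""
    return np.log(1 + t / t0) / np.log(1 + T / t0)

def area_under_learning_curve(timestamps, nauc_scores, T=1200.0, t0=60.0):
    """Area under the step-wise learning curve on the transformed time axis.

    timestamps:  times (in seconds) at which predictions were made, sorted.
    nauc_scores: NAUC (= 2 * AUC - 1) of each prediction.
    """
    timestamps = np.asarray(timestamps, dtype=float)
    nauc_scores = np.asarray(nauc_scores, dtype=float)
    # Transformed times of the predictions, plus the end of the time budget.
    ts = transform_time(np.append(timestamps, T), T=T, t0=t0)
    # Each prediction's score holds until the next prediction (step function),
    # so the area is the sum of (score * width) over all steps.
    widths = np.diff(ts)
    return float(np.sum(nauc_scores * widths))

# Example (hypothetical): two predictions at 30s and 300s with NAUC 0.4 and 0.7.
print(area_under_learning_curve([30, 300], [0.4, 0.7]))
```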
We also ran the solution of DeepWisdom, the first-place winner of the AutoDL challenge, on these 66 datasets; the results are shown below.
Numerical values are shown in the following table.
A complete table (CSV file) including all results of the AutoDL challenge's feedback phase, final phase and post-challenge analysis can be found here (updated on 7 May 2020).
If you wish to use the above results and data, please consider citing the following reference: