This repo is derived from the HPO-B repository. It evaluates a newly proposed generative black-box HPO algorithm, a VAE-Transformer model, on the HPO-B benchmark meta-dataset.
HPO-B Dataset: The HPO-B paper covers several search spaces evaluated across many datasets. You can download the meta-dataset from here.
Surrogates for Continuous Search Spaces: The model is evaluated on continuous search spaces. Download the necessary surrogate models from here.
Prepare the Folders:
- Extract the downloaded files and place the extracted folders in the root directory of the project.
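After extraction, the project root should contain the data and surrogate folders. The check below is a sketch: the folder names `hpob-data` and `saved-surrogates` follow the upstream HPO-B repo's conventions and may differ for your download.

```shell
# Sketch: confirm the extracted folders sit in the project root.
# The names hpob-data (meta-dataset) and saved-surrogates (surrogate
# models) follow the upstream HPO-B repo; adjust if your archives
# unpack under different names.
for d in hpob-data saved-surrogates; do
  if [ -d "$d" ]; then
    echo "ok: $d"
  else
    echo "missing: $d -- extract the download here"
  fi
done
```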
Run VAET Example:
- Execute the VAET example script with the following command: `python example_vaet.py`
- This will create a `results/VAET.json` file containing accuracy results for different seeds on all datasets of search-space id 5971.
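Once the results file exists, it can be inspected with a few lines of Python. The nested layout assumed below (search-space id → dataset id → seed → per-trial accuracy list) is a guess based on the description above, so adjust the keys to match the actual file; a small mock file stands in for the real one here.

```python
import json
import os
import statistics
import tempfile

# Mock stand-in for results/VAET.json; the nested layout
# (search-space id -> dataset id -> seed -> per-trial accuracies)
# is an assumption, not the confirmed schema.
mock = {
    "5971": {
        "dataset_a": {"0": [0.60, 0.72, 0.80], "1": [0.55, 0.70, 0.78]},
        "dataset_b": {"0": [0.40, 0.52, 0.61], "1": [0.45, 0.50, 0.66]},
    }
}
path = os.path.join(tempfile.mkdtemp(), "VAET.json")
with open(path, "w") as f:
    json.dump(mock, f)

with open(path) as f:
    results = json.load(f)

# Average final-trial accuracy per dataset, across seeds.
for dataset, seeds in results["5971"].items():
    final_accs = [trace[-1] for trace in seeds.values()]
    print(dataset, round(statistics.mean(final_accs), 3))
```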
Run VAET Benchmark:
- To generate benchmark comparisons, use this command: `python examplevaet_benchmark.py`
- This script generates rank-regret plots comparing the proposed generative black-box algorithm against baselines such as Random Search, Gaussian Process, and Deep Gaussian Process.
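The comparison the script plots can be sketched as follows: at every trial, the methods are ranked by their regret (lowest regret gets rank 1), and those ranks are averaged. The regret numbers below are illustrative only, not taken from the benchmark.

```python
import numpy as np

# Illustrative regret trajectories for three methods over three trials.
# These numbers are made up; the real script reads each method's saved
# results (VAET, Random Search, Gaussian Process, Deep Gaussian Process).
regret = {
    "VAET":          np.array([0.50, 0.20, 0.05]),
    "Random Search": np.array([0.50, 0.40, 0.30]),
    "GP":            np.array([0.50, 0.30, 0.10]),
}

methods = list(regret)
traj = np.stack([regret[m] for m in methods])  # shape: (n_methods, n_trials)

# Rank the methods at every trial: lowest regret gets rank 1
# (ties are broken by method order).
ranks = traj.argsort(axis=0).argsort(axis=0) + 1

for m, r in zip(methods, ranks):
    print(f"{m:14s} ranks per trial: {r.tolist()}  mean rank: {r.mean():.2f}")
```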
References:
- HPO-B Benchmark GitHub
- Pineda-Arango, S., Jomaa, H. S., Wistuba, M., & Grabocka, J. (2021). HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO Based on OpenML. In Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks.