Conversation

@jeroenrook (Contributor):

Multi-objective SMAC as described in https://doi.org/10.1162/evco_a_00371

Carolin Benjamins and others added 30 commits January 9, 2023 12:57
This is a safety measure. Normally, every time we
update the runhistory, the objective bounds are
updated so that the value to be normalized should
lie inside the bounds.
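As a sketch of what such a safety measure might look like (the function and names here are hypothetical, not SMAC's actual API), normalization with clipping could be:

```python
def normalize_cost(value: float, bounds: tuple[float, float]) -> float:
    """Scale an objective value into [0, 1] using the tracked bounds.

    Clipping is the safety net: if the bounds were not refreshed after the
    latest runhistory update, the value might fall slightly outside them.
    """
    low, high = bounds
    if high <= low:  # degenerate bounds, e.g. only one observation so far
        return 1.0
    return min(1.0, max(0.0, (value - low) / (high - low)))
```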
Created a helper method to create a set with
preserved order from a list
Previously: random scalarization of MO costs, because
ParEGO was the only multi-objective algorithm.
Now: a separate function which can be overridden.
Also, reset obtain to the kwargs (no magic numbers)
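A rough sketch of that refactoring idea (class and method names are made up for illustration, not SMAC's real interface): the ParEGO-style random weighting becomes one dedicated method that subclasses can override, instead of being hard-wired into the scalarization:

```python
import numpy as np

class MultiObjectiveAlgorithm:
    """Hypothetical base class: the weighting is a separate, overridable method."""

    def _get_weights(self, n_objectives: int, rng: np.random.RandomState) -> np.ndarray:
        # Default behaviour (ParEGO-style): random weights on the simplex.
        w = rng.rand(n_objectives)
        return w / w.sum()

    def scalarize(self, costs: np.ndarray, rng: np.random.RandomState) -> np.ndarray:
        # Reduce a (n_points, n_objectives) cost matrix to one value per point.
        return np.asarray(costs, dtype=float) @ self._get_weights(np.shape(costs)[-1], rng)

class FixedWeights(MultiObjectiveAlgorithm):
    # Subclasses override only the weighting; scalarize() stays untouched.
    def _get_weights(self, n_objectives: int, rng: np.random.RandomState) -> np.ndarray:
        return np.full(n_objectives, 1.0 / n_objectives)
```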

Better debug message
Updates incumbents of runhistory automatically if updated
# Conflicts:
#	smac/acquisition/maximizer/local_search.py
@jeroenrook (Contributor, Author) left a comment:
Extra note: after the merge, add additional functionality such as separate surrogate models and runhistories (log) for the different objectives.

objectives="accuracy",
# min_budget=1,  # Train the MLP using a hyperparameter configuration for at least 1 epoch
# max_budget=25,  # Train the MLP using a hyperparameter configuration for at most 25 epochs
n_workers=4,
@jeroenrook (Contributor, Author):

Works

configs_acq.sort(reverse=True, key=lambda x: x[0])
for a, inc in configs_acq:
    inc.origin = "Acquisition Function Maximizer: Local Search"
    inc.origin = "Local Search"
@jeroenrook (Contributor, Author):

Fixed this already to make the pytests pass again.

def _create_sort_keys(self, costs: np.ndarray) -> list[list[float]]:
    """Non-dominated sorting of costs.

    In case the predictive model returns predictions for more than one objective per configuration
@jeroenrook (Contributor, Author):

Update text to comply with workings of function
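For readers unfamiliar with the technique the docstring refers to: non-dominated sorting groups points into Pareto fronts, and the front index can then serve as a sort key. A minimal standalone sketch (not the PR's actual implementation):

```python
import numpy as np

def nondominated_sort_ranks(costs: np.ndarray) -> list[int]:
    """Assign each point its Pareto front index (0 = non-dominated)."""
    n = len(costs)
    ranks = [-1] * n
    remaining = set(range(n))
    front = 0
    while remaining:
        # A point is on the current front if no remaining point dominates it,
        # i.e. no point is <= in every objective and < in at least one.
        current = [
            i for i in remaining
            if not any(
                np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i])
                for j in remaining if j != i
            )
        ]
        for i in current:
            ranks[i] = front
            remaining.remove(i)
        front += 1
    return ranks
```

Sorting by `(rank, …)` then puts non-dominated configurations first, which is what a multi-objective local search needs.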


return init_points

def _create_sort_keys(self, costs: np.ndarray) -> list[list[float]]:
@jeroenrook (Contributor, Author):

This is to get points, based on earlier runs, to start the local search from, so it can make a difference. This code was not implemented by me, by the way; it was probably only moved.

How many incumbents to keep track of in the multi-objective case.
"""
return Intensifier(
class NewIntensifier(intermediate_decision.NewCostDominatesOldCost,
@jeroenrook (Contributor, Author):

Combine this with the abstraction of the intensifier.

config_hash = get_config_hash(config)

# Do not compare very early in the process
if len(config_isb_keys) < 4:
@jeroenrook (Contributor, Author):

Chosen empirically. But the mixin is likely to override this function anyway.

isb_keys = self.get_incumbent_instance_seed_budget_keys(compare=True)

n_samples = 1000
if len(isb_keys) < 7:  # When only a limited number of trials is available, we run all combinations
@jeroenrook (Contributor, Author):

I believe 7 is the minimum number you need to have at least 1000 distinct samples.
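Assuming the sampler draws distinct orderings (permutations) of the keys, that arithmetic checks out: 6! = 720 < 1000 while 7! = 5040 ≥ 1000, so 7 is the smallest such count. A quick verification:

```python
from math import factorial

# Smallest n whose number of distinct permutations reaches 1000 samples
smallest = next(n for n in range(1, 20) if factorial(n) >= 1000)

assert factorial(6) == 720    # one key too few: fewer than 1000 orderings
assert factorial(7) == 5040   # enough distinct orderings
assert smallest == 7
```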
