Hi! I would like to use the following performance metrics to validate hyperparameter optimization algorithms with kurobako:

- The number of budgets needed to reach the X% point of the best performance. I want to specify X as a float or a list of floats, and I want to know the mean and variance.
- The performance of the final parameters that were found to perform best. I want to know the mean and variance.
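To make the request concrete, here is a minimal sketch of how both metrics could be computed, assuming study results are available as plain lists of per-step objective values. This is not kurobako's actual report format; the function and variable names are hypothetical, and a maximization problem (e.g., validation accuracy) is assumed:

```python
import statistics

def budgets_to_reach(history, best_value, x):
    """Return the first budget step (1-based) at which a run reaches
    x (a fraction in (0, 1]) of best_value; None if it never does.
    Assumes a maximization problem such as validation accuracy."""
    target = x * best_value
    for step, value in enumerate(history, start=1):
        if value >= target:
            return step
    return None

def summarize(histories, best_value, xs):
    """Mean and sample variance of both requested metrics over repeated
    studies. `histories` holds one list of per-step values per study."""
    report = {}
    for x in xs:
        steps = [s for h in histories
                 if (s := budgets_to_reach(h, best_value, x)) is not None]
        report[f"budgets@{x:.0%}"] = (statistics.mean(steps),
                                      statistics.variance(steps))
    finals = [max(h) for h in histories]  # best value each study found
    report["final_performance"] = (statistics.mean(finals),
                                   statistics.variance(finals))
    return report

# Example: two studies of three budget steps each, best possible value 1.0.
print(summarize([[0.2, 0.5, 0.91], [0.3, 0.93, 0.95]], 1.0, xs=[0.9]))
```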
Thank you for the feature request!
As both seem useful, I'd like to support them; I'll work out the details when I have time.
For now, let me leave some initial thoughts on each.
> The number of budgets needed to reach the X% point of the best performance. I want to specify X as a float or a list of floats, and I want to know the mean and variance.
I think the most difficult point is how to specify an appropriate X% for each problem. For example, when optimizing the validation accuracy of ML tasks whose best performance is 1.0, some tasks may easily achieve 0.9 accuracy while others may never reach 0.8 accuracy even if they are allowed to run an infinite number of trials. Specifying X% is not easy because it requires some insight into the target problem.
The following approach, which uses random search, might be a possible solution:

1. Run a benchmark that uses random search as the solver with a huge budget and save the result.
2. Let users specify a percentile point of the random-search result instead of the X% point of the problem's best performance.

I don't think this is the best approach, but at least it ensures that users always specify a feasible target.
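A minimal sketch of the percentile idea, under the assumption that the saved random-search result can be flattened into a list of observed objective values (`percentile_target` is a hypothetical helper, not part of kurobako):

```python
import numpy as np

def percentile_target(random_search_values, p):
    """Convert a user-specified percentile p (0-100) of a large
    random-search run into a concrete target value. The result lies
    within the range random search actually attained, so the target
    is feasible by construction."""
    return float(np.percentile(random_search_values, p))

# E.g. the 95th percentile of 10,000 random-search evaluations becomes
# the threshold against which each solver's history is compared.
target = percentile_target(np.random.rand(10_000), p=95)
```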
> The performance of the final parameters that were found to perform best. I want to know the mean and variance.
Implementing this feature doesn't seem difficult, but kurobako currently doesn't provide built-in problems that have noise in the evaluation. So it might be worth adding such a problem as well when we implement this feature.
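As an illustration of what such a noisy problem's objective could look like (a Gaussian-noise sphere function, assumed purely for this sketch; it is not an existing kurobako built-in, and a real problem would also need to implement kurobako's problem interface):

```python
import random

def noisy_sphere(params, noise_std=0.1):
    """Sphere function with additive Gaussian observation noise:
    evaluating the same params twice returns different values, which is
    what makes the mean/variance of the final performance meaningful."""
    true_value = sum(p * p for p in params)
    return true_value + random.gauss(0.0, noise_std)
```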
If you have any comments, please let me know.