Hi @jduerholt. As to whether one is better than the other, I haven't benchmarked this. My guess is, in the typical small budget regime, you would get a more diverse Pareto frontier if you use different scalarizations for each point, but that's just a guess.
Yes, good idea. Some of the tutorials are quite old and could use an audit.
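To make the "different scalarization per point" idea concrete, here is a rough, unbenchmarked sketch: one acqf instance per candidate, handed to `optimize_acqf_list` so candidates are picked sequentially. I'm assuming the class is the `qLogNParEGO` defined in the linked `parego.py` and that it draws its own random scalarization weights when `scalarization_weights` is not passed; `model`, `train_X`, and `bounds` are placeholders, so please check the exact signature against the source.

```python
from botorch.acquisition.multi_objective.parego import qLogNParEGO
from botorch.optim.optimize import optimize_acqf_list


def generate_parego_batch(model, train_X, bounds, q=4):
    """Select q candidates sequentially, each under its own random scalarization."""
    # One acqf per candidate; when `scalarization_weights` is not given, each
    # instance samples its own weights, so every candidate is selected under a
    # different random Chebyshev scalarization of the objectives (assumption,
    # see parego.py for the actual default behavior).
    acqf_list = [qLogNParEGO(model=model, X_baseline=train_X) for _ in range(q)]
    # optimize_acqf_list optimizes the acqfs one at a time, conditioning each
    # on the candidates already selected as pending points.
    candidates, _ = optimize_acqf_list(
        acq_function_list=acqf_list,
        bounds=bounds,
        num_restarts=10,
        raw_samples=256,
    )
    return candidates
```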
---
Hi all,

I came across the implementation of a `qLogNParEgo` acquisition function (https://github.com/pytorch/botorch/blob/main/botorch/acquisition/multi_objective/parego.py). From my understanding, it is like a `qLogNEI` acqf, but instead of letting the user handle the scalarization upfront (as in this tutorial: https://github.com/pytorch/botorch/blob/main/tutorials/constrained_multi_objective_bo/constrained_multi_objective_bo.ipynb), the scalarization is done within the new acqf.

The only difference I see with respect to the tutorial is that in the tutorial the `q` candidates are generated sequentially, using a different scalarization per acqf (via `optimize_acqf_list`), whereas with the "new" acquisition function the batch of candidates is generated jointly, always using the same scalarization for every candidate in the batch.

What is your experience there? What performs better? Or is the idea to use the new acquisition function within `optimize_acqf_list` in a sequential way, with a different scalarization per candidate in the batch? In that case, the new acqf class would relieve the user of the burden of coding up the scalarization on their own. If so, we should maybe also think about updating the tutorial notebook ;)

Best,
Johannes
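For reference, a minimal sketch of the tutorial-style approach described above, where the user codes up a random Chebyshev scalarization per candidate and wraps it into a `qLogNEI`. This ignores the outcome constraints the linked tutorial also handles; `model`, `train_X`, `train_Y`, and `bounds` are placeholders, and the exact keyword arguments may differ between BoTorch versions.

```python
from botorch.acquisition.logei import qLogNoisyExpectedImprovement
from botorch.acquisition.objective import GenericMCObjective
from botorch.optim.optimize import optimize_acqf_list
from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization
from botorch.utils.sampling import sample_simplex


def generate_tutorial_style_batch(model, train_X, train_Y, bounds, q=4):
    """Tutorial-style ParEGO: the user builds one random scalarization per candidate."""
    acqf_list = []
    for _ in range(q):
        # Random weights on the simplex define a Chebyshev scalarization of the
        # observed objective values; the acqf then maximizes this scalar objective.
        weights = sample_simplex(d=train_Y.shape[-1], dtype=train_Y.dtype).squeeze(0)
        objective = GenericMCObjective(
            get_chebyshev_scalarization(weights=weights, Y=train_Y)
        )
        acqf_list.append(
            qLogNoisyExpectedImprovement(
                model=model,
                X_baseline=train_X,
                objective=objective,
            )
        )
    # Sequential greedy selection: one candidate per acqf, with earlier
    # candidates treated as pending points for the later ones.
    candidates, _ = optimize_acqf_list(
        acq_function_list=acqf_list,
        bounds=bounds,
        num_restarts=10,
        raw_samples=256,
    )
    return candidates
```

The joint-batch alternative described above would presumably be a single `qLogNParEgo` instance (one fixed scalarization) optimized with `optimize_acqf` and `q > 1`.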