Benchmarking with asv #1761
base: main
Conversation
This reverts commit 0f5fa2a.
@erikvansebille On the topic of performance, are you also experiencing it occasionally taking something like 10s to run the import?
Yes, I also experience this slow import sometimes. Not sure why...
Setting to draft until we have some actual benchmarks that we can include in this.
From meeting:
for more information, see https://pre-commit.ci
The Argo tutorial at https://docs.oceanparcels.org/en/latest/examples/tutorial_Argofloats.html is also quite a nice simulation for benchmarking, as it has a 'complex' kernel. It took approximately 20s to run on v3.1 in JIT mode, and now takes 50s on my local computer to run in SciPy mode. @danliba or @willirath, could you add the Argo tutorial to the benchmark stack?
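A rough sketch of how the Argo tutorial could be wrapped as an asv timing benchmark. Note this is a hypothetical illustration: the class name `TutorialArgoSuite` is invented here, and the parcels-specific calls are replaced by a toy stand-in kernel so the sketch is self-contained; in a real benchmark the body of `time_argo_simulation` would build a FieldSet in `setup()` and call `pset.execute(...)` in the timed method.

```python
# Hypothetical asv benchmark sketch for an Argo-style simulation.
# asv times every method whose name starts with "time_"; setup() is
# re-run before each measurement and excluded from the timing.
import math


class TutorialArgoSuite:
    def setup(self):
        # Stand-in for expensive input preparation (in the real
        # benchmark: loading data and creating the parcels FieldSet).
        self.n_steps = 1000
        self.depth = 0.0

    def time_argo_simulation(self):
        # Toy stand-in for the Argo float kernel: cycle through
        # sinking, drifting, and surfacing phases.
        depth = self.depth
        for step in range(self.n_steps):
            if step % 3 == 0:
                depth += 1.0  # sink phase
            elif step % 3 == 1:
                depth += math.sin(step) * 0.1  # drift phase
            else:
                depth = max(depth - 2.0, 0.0)  # surface phase (clamped)
        self.depth = depth
```

The point of the sketch is the shape, not the physics: a multi-phase kernel exercises branching per particle per timestep, which is exactly what made the Argo tutorial a useful benchmark case.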
@danliba I've left a few comments.
Todo
- … (See `asv_bench/benchmarks/benchmarks_integration.py` for an example.)
- … (See `asv_bench/benchmarks/benchmarks_particle_execution.py` for an example.)
- When applicable, split into different phases. We could, e.g., go for a `.setup()` method for fieldset creation, and a `.time_execute()` method for the particle execution.

This PR introduces benchmarking infrastructure to the project via asv. Benchmarks can be run on a pull request by adding the `run-benchmarks` label to it. Two environments will be created in the CI runner, with the prior and the proposed changes, and both suites of benchmarks will be run and compared against each other. Note that this PR only has example benchmarks for the time being, until we can discuss benchmarks of interest.
Running the benchmarks in CI is only one aspect of the benchmarking (i.e., only for core parcels functionality). Using asv, we can create different suites of benchmarks (e.g., one for CI, and one for heavier simulations). The benefit of using asv is everything else that comes out of the box with it, some being:
Changes:
I have done some testing of the PR label workflow in VeckoTheGecko#10. We can only test this for PRs in OceanParcels/parcels once it's in master.
Related to #1712