Inconsistent heatmaps and the current timing script #51
Can you look into Python's std library?
I was looking into how to time processes that create other parallel processes. So, I started looking into how ASV benchmarks time their benchmarks and what they use. The timeit docs also suggest running the program multiple times.

PS: Before implementing all this and re-generating all the heatmaps, I just want to dig a bit deeper and figure out how exactly the …
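For reference, a minimal sketch of the repeated-runs pattern the `timeit` docs describe (the workload function below is a stand-in, not the repo's actual benchmark):

```python
import timeit

def run_workload():
    # Stand-in for the real benchmarked function.
    sum(i * i for i in range(100_000))

# Run the workload `number` times per measurement, repeat the measurement
# `repeat` times, and keep the minimum: the lowest value is the one least
# disturbed by other processes, which is what the timeit docs recommend.
timings = timeit.repeat(run_workload, repeat=5, number=10)
best_per_call = min(timings) / 10
print(f"best per-call time: {best_per_call:.6f} s")
```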
It seems like you want some approximation of "wall clock time" -- that is, how long between starting and finishing the call. I don't think we want to somehow compute the total CPU time of the call. Running it multiple times and taking the best time also seems reasonable. Using your own machine and comparing to a VM is expected to give different values, but hopefully the speedup factor is similar. Timing is difficult to get exactly right, but a consistent set of hardware without distractions from other processes will be a good start.
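To make the wall-clock vs. CPU-time distinction concrete, here is a small sketch (the `busy` function and pool size are arbitrary assumptions): `time.perf_counter()` tracks how long the caller actually waited, while `time.process_time()` only counts the CPU time of the parent process, so work done in worker processes is invisible to it.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def busy(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    wall_start = time.perf_counter()
    cpu_start = time.process_time()

    # The actual work happens in 4 child processes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        list(pool.map(busy, [2_000_000] * 4))

    # Wall clock reflects the full wait; parent CPU time stays near zero
    # because the computation ran in the children.
    print("wall clock:", time.perf_counter() - wall_start)
    print("parent CPU:", time.process_time() - cpu_start)
```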
FYI, would there be any interest in making common utilities that make it easy to benchmark different backends? Maybe we could use what we have for …
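As a rough illustration of what such a shared utility could look like (the name `benchmark` and its signature are made up here, not an existing API), a wall-clock best-of-N helper that any backend callable could be passed through:

```python
import statistics
import time

def benchmark(func, *args, repeat=5, **kwargs):
    """Time func(*args, **kwargs) `repeat` times and summarize the runs."""
    times = []
    for _ in range(repeat):
        start = time.perf_counter()
        func(*args, **kwargs)
        times.append(time.perf_counter() - start)
    return {"best": min(times), "median": statistics.median(times)}

# Usage idea: run the same workload against each backend implementation.
# results = {name: benchmark(fn, workload) for name, fn in backends.items()}
```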
Current heatmap in the repo (it says it was run on 2 cores):
I tried re-running these and got the following results (using 8 cores):
Potential cause: other background processes, or the machine going into 'sleep mode' in between; this can increase or decrease the time a function takes.
Potential solution: running heatmaps on a VM and using some other timing function that reports the time the CPU core(s) actually spent running the process, instead of just `time.time()`.
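One way to approximate "time the CPU core(s) actually spent on the process" while still accounting for worker processes is `os.times()`, which on Unix also reports the CPU time of finished, waited-for children (the `children_*` fields are zero on Windows). A minimal sketch, assuming a multiprocessing pool as the parallel workload:

```python
import os
import time
from multiprocessing import Pool

def busy(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    wall_start = time.perf_counter()
    before = os.times()

    pool = Pool(processes=4)
    pool.map(busy, [2_000_000] * 4)
    pool.close()
    pool.join()  # children's CPU time is only credited once they are waited on

    after = os.times()
    parent_cpu = (after.user - before.user) + (after.system - before.system)
    child_cpu = (after.children_user - before.children_user) + (
        after.children_system - before.children_system
    )
    print("wall clock  :", time.perf_counter() - wall_start)
    print("parent CPU  :", parent_cpu)
    print("children CPU:", child_cpu)
```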