Add warnings about too few or too many samples #210
@@ -424,6 +424,39 @@ def median_abs_dev(self):
            raise ValueError("MAD must be >= 0")
        return value

    def required_nsamples(self):
        """
        Determines the number of samples that would be required to have 95%
        certainty that the samples have a variance of less than 1%.

        This is described in this Wikipedia article about estimating the
        sampling of a mean:

        https://en.wikipedia.org/wiki/Sample_size_determination#Estimation_of_a_mean
        """
        # Get the means of the values per run
    [Review thread on the line above]
    Reviewer: Why not compute the mean only once, for all values of all runs?
    Author: Because for some benchmarks, cache effects are visible within the
    same process. For example, pylint takes about 30% longer during the first
    iteration than during the subsequent 2 iterations. One could argue that's
    a bad benchmark, but it's common enough that we should control for it.
    There's some more discussion here: faster-cpython/bench_runner#318 (comment)
    That said, it's definitely worth putting a comment about that here.
        values = []
        for run in self._runs:
            if len(run.values):
                values.append(statistics.mean(run.values))

        if len(values) < 2:
            return None

        total = math.fsum(values)
        mean = total / len(values)
        stddev = statistics.stdev(values)
        # Normalize the stddev so we can target "percentage changed" rather
        # than absolute time
        sigma = stddev / mean

        # 95% certainty
        Z = 1.96
        # 1% variation
        W = 0.01

        return int(math.ceil((4 * Z ** 2 * sigma ** 2) / (W ** 2)))
    def percentile(self, p):
        if not (0 <= p <= 100):
            raise ValueError("p must be in the range [0; 100]")
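For context on the math in required_nsamples() above: it applies the standard sample-size estimate n = 4 * Z**2 * sigma**2 / W**2 from the linked Wikipedia article, with sigma expressed as a fraction of the mean so that W can be a relative ("percent change") target. A rough worked example with made-up numbers, not anything taken from this PR:

    import math

    sigma = 0.005   # per-run means spread ~0.5% around the overall mean (made up)
    Z = 1.96        # z-score for 95% confidence
    W = 0.01        # accept a confidence interval 1% wide

    n = int(math.ceil((4 * Z ** 2 * sigma ** 2) / (W ** 2)))
    print(n)  # -> 4: about four runs would suffice under these assumptions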
@@ -413,6 +413,8 @@ def format_checks(bench, lines=None):
    warnings = []
    warn = warnings.append

    required_nsamples = bench.required_nsamples()

    # Display a warning if the standard deviation is greater than 10%
    # of the mean
    if len(values) >= 2:
@@ -421,6 +423,13 @@ def format_checks(bench, lines=None):
        if percent >= 10.0:
            warn("the standard deviation (%s) is %.0f%% of the mean (%s)"
                 % (bench.format_value(stdev), percent, bench.format_value(mean)))
        else:
            # display a warning if the number of samples isn't enough to get
            # a stable result
            if (
                required_nsamples is not None and
                required_nsamples > len(bench._runs)
            ):
                warn("Not enough samples to get a stable result "
                     "(95% certainty of less than 1% variation)")

    # Minimum and maximum, detect obvious outliers
    for minimum, value in (
@@ -457,6 +466,16 @@ def format_checks(bench, lines=None):
        lines.append("Use pyperf stats, pyperf dump and pyperf hist to analyze results.")
        lines.append("Use --quiet option to hide these warnings.")

    if (
        required_nsamples is not None and
        required_nsamples < len(bench._runs) * 0.75
    ):
        lines.append("Benchmark was run more times than necessary to get a stable result.")
        lines.append(
            "Consider passing processes=%d to the Runner constructor to save time." %
            required_nsamples
        )
    [Review thread on the warning added above]
    Reviewer: This warning may be a little bit annoying. Maybe only show it in
    the "pyperf check" command?
    https://pyperf.readthedocs.io/en/latest/cli.html#check-cmd
    Author: Yeah, that's a good idea. We can run …
    # Warn if nohz_full+intel_pstate combo if found in cpu_config metadata
    for run in bench._runs:
        cpu_config = run._metadata.get('cpu_config')
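The new hint refers to the processes argument of pyperf.Runner, which controls how many worker processes (and therefore runs) a benchmark script spawns. A minimal sketch of a script acting on that hint; the benchmark name and statement are placeholders, and 7 simply mirrors the number suggested for the TELCO data in the tests below:

    import pyperf

    # Spawn only 7 worker processes instead of the default, as the hint suggests.
    runner = pyperf.Runner(processes=7)
    runner.timeit("placeholder benchmark",
                  stmt="sorted(range(1000), key=lambda x: -x)")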
@@ -478,11 +478,16 @@ def test_hist(self):
        22.8 ms: 3 ##############
        22.9 ms: 4 ###################
        22.9 ms: 4 ###################
        Benchmark was run more times than necessary to get a stable result.
        Consider passing processes=7 to the Runner constructor to save time.
        """)
        self.check_command(expected, 'hist', TELCO, env=env)

    def test_show(self):
        expected = ("""
        Benchmark was run more times than necessary to get a stable result.
        Consider passing processes=7 to the Runner constructor to save time.

        Mean +- std dev: 22.5 ms +- 0.2 ms
        """)
        self.check_command(expected, 'show', TELCO)
@@ -518,6 +523,8 @@ def test_stats(self):
        100th percentile: 22.9 ms (+2% of the mean) -- maximum

        Number of outlier (out of 22.0 ms..23.0 ms): 0
        Benchmark was run more times than necessary to get a stable result.
        Consider passing processes=7 to the Runner constructor to save time.
        """)
        self.check_command(expected, 'stats', TELCO)
@@ -628,8 +635,10 @@ def test_slowest(self):

    def test_check_stable(self):
        stdout = self.run_command('check', TELCO)
        self.assertEqual(stdout.rstrip(),
                         'The benchmark seems to be stable')
        self.assertTrue(
            'The benchmark seems to be stable' in
            stdout.rstrip()
        )

    [Review comment on the new assertTrue() call, which replaces the assertEqual() above]
    Reviewer: I suggest using assertIn() instead.
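For reference, the assertIn() form the reviewer is suggesting would look roughly like this (a sketch, not the committed code); assertIn also reports both operands on failure, unlike a bare assertTrue on an "in" expression:

    def test_check_stable(self):
        stdout = self.run_command('check', TELCO)
        # assertIn gives a clearer failure message than assertTrue(... in ...).
        self.assertIn('The benchmark seems to be stable', stdout)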
    def test_command(self):
        command = [sys.executable, '-c', 'pass']
@@ -689,7 +698,7 @@ def _check_track_memory(self, track_option):
                         '[1,2]*1000',
                         '-o', tmp_name)
        bench = pyperf.Benchmark.load(tmp_name)

        self._check_track_memory_bench(bench, loops=5)

    def test_track_memory(self):
[Review thread on the pull request]
Reviewer: If you want to add a public function, please document it at:
https://pyperf.readthedocs.io/en/latest/api.html#benchmark-class
Author: Good point. I think we do want it to be public (for the same reason
the other statistics methods are public).
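Assuming required_nsamples() does become part of the public Benchmark API, calling it from a script could look roughly like this. The result file name is a placeholder; Benchmark.load() and get_runs() are existing public APIs (load() is used elsewhere in this PR):

    import pyperf

    bench = pyperf.Benchmark.load("results.json")  # placeholder file name

    needed = bench.required_nsamples()
    actual = len(bench.get_runs())
    if needed is not None and needed > actual:
        print("have %d runs, but about %d are needed for a stable result"
              % (actual, needed))
    elif needed is not None and needed < actual * 0.75:
        print("ran more times than necessary; %d runs would have been enough"
              % needed)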