
feat(benchmark): add summary output for benchmark comparisons #74

Closed · wants to merge 5 commits

Conversation

@hendriknielaender (Owner) commented on Apr 16, 2024:

Implement a new feature in the benchmark module to display a summary at the end of the benchmarking output. The summary names the fastest benchmark and shows, for each of the others, how many times slower it is relative to the fastest. This enhancement aims to give users a quick overview of their code's performance characteristics.

closes #64
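For illustration, the summary might look something like this (hypothetical output; the exact formatting is defined by the PR's printSummary, and the benchmark names here are made up):

```
Summary:
  fastest: bench_a
  bench_b was 2.31x slower than bench_a
  bench_c was 5.02x slower than bench_a
```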

@bens (Collaborator) left a comment:

Looks good; I think it can be simplified a bit, but see what you think.

```diff
@@ -323,9 +378,10 @@ pub fn getSystemInfo() !platform.OsInfo {
 pub const Result = struct {
     name: []const u8,
     readings: Readings,
+    total_duration_ns: u64,
```
@bens (Collaborator):

As far as I can see, the total duration is only needed for the summary, so it doesn't seem necessary to add a new field to Result just to carry it. I think it can be summed and passed to printSummary, which makes it a more local change.
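For concreteness, a minimal sketch of the summing bens describes, assuming the readings expose a slice of per-iteration timings (`timings_ns` is an illustrative name, not necessarily zBench's actual field):

```zig
// Sum a benchmark's raw timings locally instead of storing a
// total_duration_ns field on Result.
fn totalDurationNs(timings_ns: []const u64) u64 {
    var total: u64 = 0;
    for (timings_ns) |t| total += t;
    return total;
}
```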

@hendriknielaender (Owner, author):

Yeah, you are right. I have not yet found a clean way to handle this; when I tried to move it, I ended up with a segfault. I need to have another look here.

@bens (Collaborator):

Where you're defining an array of Results in run(), you could have an array total_durations: []u64 instead and append the total_duration you've calculated to it, rather than making a new Result from the data in the Result you get in the switch's .result clause. Then, rather than passing a slice of Result values to printSummary, you can just pass total_durations; to get the name for each one, you use the index to look it up in the benchmarks given, just as you're doing for fastest_index. Does that make sense?
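A sketch of what that restructuring might look like; Benchmark, printSummary, and the driver code below are illustrative stand-ins for zBench's actual types and run() loop, not its real API:

```zig
const std = @import("std");

// Illustrative stand-in for zBench's benchmark type.
const Benchmark = struct { name: []const u8 };

// Receives only the per-benchmark totals; names are looked up by index
// in the benchmarks slice, as suggested for fastest_index.
fn printSummary(benchmarks: []const Benchmark, total_durations: []const u64) void {
    var fastest_index: usize = 0;
    for (total_durations, 0..) |d, i| {
        if (d < total_durations[fastest_index]) fastest_index = i;
    }
    std.debug.print("fastest: {s}\n", .{benchmarks[fastest_index].name});
    for (total_durations, 0..) |d, i| {
        if (i == fastest_index) continue;
        const ratio = @as(f64, @floatFromInt(d)) /
            @as(f64, @floatFromInt(total_durations[fastest_index]));
        std.debug.print("{s}: {d:.2}x slower\n", .{ benchmarks[i].name, ratio });
    }
}

pub fn main() void {
    const benchmarks = [_]Benchmark{ .{ .name = "a" }, .{ .name = "b" } };
    // In run(), each total would be summed from the Result produced in
    // the switch's .result clause instead of being stored on Result.
    const total_durations = [_]u64{ 1_200, 3_600 };
    printSummary(&benchmarks, &total_durations);
}
```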

Resolved review threads: zbench.zig (×3, two outdated), examples/summary.zig.
@hendriknielaender (Owner, author):

@bens thank you for your review 👍 I will try to address these points tomorrow.

@bens (Collaborator) commented on May 2, 2024:

The thing that makes me uncertain about this overall change is that if two benchmarks have different numbers of iterations then they're not comparable. I've been reading up on some stats and might have some ideas later, but generally speaking I think for something like this you'd want to establish that the distribution you're modeling the results with is a good fit, then work out the degree of confidence that one benchmark is faster than another. Just comparing runtimes of the two benchmarks is simplistic.

@hendriknielaender (Owner, author):

> The thing that makes me uncertain about this overall change is that if two benchmarks have different numbers of iterations then they're not comparable. I've been reading up on some stats and might have some ideas later, but generally speaking I think for something like this you'd want to establish that the distribution you're modeling the results with is a good fit, then work out the degree of confidence that one benchmark is faster than another. Just comparing runtimes of the two benchmarks is simplistic.

Good point, that's actually too simplistic. When adding the feature, I wasn't really considering that.
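For reference, the kind of confidence-based comparison bens describes could start from something like Welch's t statistic over two samples of per-iteration timings. This sketch is illustrative only and was not part of the PR; it yields only the test statistic, which a full implementation would compare against the t distribution to obtain a confidence level:

```zig
const std = @import("std");

fn mean(xs: []const f64) f64 {
    var sum: f64 = 0;
    for (xs) |x| sum += x;
    return sum / @as(f64, @floatFromInt(xs.len));
}

// Sample variance (Bessel-corrected).
fn variance(xs: []const f64, m: f64) f64 {
    var sum: f64 = 0;
    for (xs) |x| sum += (x - m) * (x - m);
    return sum / @as(f64, @floatFromInt(xs.len - 1));
}

// Welch's t statistic: a large |t| suggests the mean runtimes of the
// two benchmarks genuinely differ, even with unequal sample sizes.
fn welchT(a: []const f64, b: []const f64) f64 {
    const ma = mean(a);
    const mb = mean(b);
    const se = @sqrt(variance(a, ma) / @as(f64, @floatFromInt(a.len)) +
        variance(b, mb) / @as(f64, @floatFromInt(b.len)));
    return (ma - mb) / se;
}

test "welchT is negative when sample a is faster" {
    const a = [_]f64{ 10, 11, 9, 10, 10 };
    const b = [_]f64{ 20, 21, 19, 20, 20 };
    try std.testing.expect(welchT(&a, &b) < 0);
}
```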

@hendriknielaender (Owner, author):

Closing this PR, since the approach would be too simplistic.

Labels: enhancement (New feature or request)
Projects: none yet
Linked issues: Benchmark Summary (#64)
Participants: 2