Commit 5141252

Merge pull request #138 from nathanrboyer/nb/chairmarks
Replace BenchmarkTools.jl with Chairmarks.jl in Optimizing
2 parents 86474bc + 531d264 commit 5141252

File tree

1 file changed: +29 −23 lines

optimizing/index.md

Lines changed: 29 additions & 23 deletions
@@ -36,7 +36,7 @@ With this in mind, after you're done with the current page, you should read the
 
 ## Measurements
 
-\tldr{Use BenchmarkTools.jl's `@benchmark` with a setup phase to get the most accurate idea of your code's performance. Use Chairmarks.jl as a faster alternative.}
+\tldr{Use Chairmarks.jl's `@be` with a setup phase to get the most accurate idea of your code's performance.}
 
 The simplest way to measure how fast a piece of code runs is to use the `@time` macro, which returns the result of the code and prints the measured runtime and allocations.
 Because code needs to be compiled before it can be run, you should first run a function without timing it so it can be compiled, and then time it:
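The compile-then-time workflow described in that context can be sketched as follows (a minimal sketch: `sum_abs` is the helper assumed by the surrounding tutorial, defined inline here so the snippet is self-contained):

```julia
# Hypothetical helper from the surrounding tutorial, defined for self-containment.
sum_abs(v) = sum(abs, v)

v = rand(100)
sum_abs(v)        # first call triggers compilation; don't trust its timing
@time sum_abs(v)  # subsequent calls measure the actual runtime
```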
@@ -53,39 +53,42 @@ using BenchmarkTools
 Using `@time` is quick, but it has flaws: your function is only measured once.
 That measurement might have been influenced by other things going on in your computer at the same time.
 In general, running the same block of code multiple times is a safer measurement method, because it diminishes the probability of only observing an outlier.
+The Chairmarks.jl package provides convenient syntax to do just that.
 
-### BenchmarkTools
+### Chairmarks
 
-[BenchmarkTools.jl](https://github.com/JuliaCI/BenchmarkTools.jl) is the most popular package for repeated measurements on function executions.
-Similarly to `@time`, BenchmarkTools offers `@btime` which can be used in exactly the same way but will run the code multiple times and provide an average.
-Additionally, by using `$` to [interpolate external values](https://juliaci.github.io/BenchmarkTools.jl/stable/manual/#Interpolating-values-into-benchmark-expressions), you remove the overhead caused by global variables.
+[Chairmarks.jl](https://github.com/LilithHafner/Chairmarks.jl) is the latest benchmarking toolkit, designed for fast and accurate timing measurements.
+Chairmarks offers `@b` (for "benchmark"), which can be used in the same way as `@time` but will run the code multiple times and report the minimum execution time.
+Alternatively, Chairmarks also provides `@be` to run the same benchmark and output all of its statistics.
 
-```>$-example
-using BenchmarkTools
-@btime sum_abs(v);
-@btime sum_abs($v);
+```>chairmarks-example
+using Chairmarks
+@b sum_abs(v)
+@be sum_abs(v)
 ```
+
+Chairmarks supports a pipeline syntax with optional `init`, `setup`, `teardown`, and `keywords` arguments for more extensive control over the benchmarking process.
+The `sum_abs` function could also be benchmarked using pipeline syntax as below.
+
+```>pipeline-example-simple
+@be v sum_abs
+```
 
-In more complex settings, you might need to construct variables in a [setup phase](https://juliaci.github.io/BenchmarkTools.jl/stable/manual/#Setup-and-teardown-phases) that is run before each sample.
-This can be useful to generate a new random input every time, instead of always using the same input.
+For a more complicated example, you could write the following to benchmark a matrix multiplication function for one second, excluding the time spent to *set up* the arrays.
 
-```>setup-example
+```>pipeline-example-complex
 my_matmul(A, b) = A * b;
-@btime my_matmul(A, b) setup=(
-    A = rand(1000, 1000); # use semi-colons between setup lines
-    b = rand(1000)
-);
+@be (A=rand(1000,1000), b=rand(1000)) my_matmul(_.A, _.b) seconds=1
 ```
 
-For better visualization, the `@benchmark` macro shows performance histograms:
+See the [Chairmarks documentation](https://chairmarks.lilithhafner.com/) for more details on benchmarking options.
+For better visualization, [PrettyChairmarks.jl](https://github.com/astrozot/PrettyChairmarks.jl) shows performance histograms alongside the numerical results.
 
 \advanced{
-Certain computations may be [optimized away by the compiler](https://juliaci.github.io/BenchmarkTools.jl/stable/manual/#Understanding-compiler-optimizations) before the benchmark takes place.
+No matter the benchmarking tool used, certain computations may be [optimized away by the compiler](https://juliaci.github.io/BenchmarkTools.jl/stable/manual/#Understanding-compiler-optimizations) before the benchmark takes place.
 If you observe suspiciously fast performance, especially below the nanosecond scale, this is very likely to have happened.
 }
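The pitfall flagged in that `\advanced` block can be illustrated with a small sketch (hedged: the toy expressions here are illustrative, and exact timings vary by machine):

```julia
using Chairmarks

# A literal expression like this may be constant-folded by the compiler,
# yielding a suspiciously small (sub-nanosecond) measurement.
@b 2^10

# Passing the input through the benchmark pipeline keeps it opaque to the
# compiler, so the squaring operation is actually executed each sample.
@b 10 _^2
```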
 
-[Chairmarks.jl](https://github.com/LilithHafner/Chairmarks.jl) offers an alternative to BenchmarkTools.jl, promising faster benchmarking while attempting to maintain high accuracy and using an alternative syntax based on pipelines.
-
 ### Benchmark suites
 
 While we previously discussed the importance of documenting breaking changes in packages using [semantic versioning](/sharing/index.md#versions-and-registration), regressions in performance can also be vital to track.
@@ -97,10 +100,13 @@ Several packages exist for this purpose:
 
 ### Other tools
 
-BenchmarkTools.jl works fine for relatively short and simple blocks of code (microbenchmarking).
+Chairmarks.jl works fine for relatively short and simple blocks of code (microbenchmarking).
 To find bottlenecks in a larger program, you should rather use a [profiler](#profiling) or the package [TimerOutputs.jl](https://github.com/KristofferC/TimerOutputs.jl).
 It allows you to label different sections of your code, then time them and display a table grouped by label.
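The TimerOutputs.jl labeling workflow mentioned there can be sketched as follows (a minimal sketch; the section names are illustrative, not from the tutorial):

```julia
using TimerOutputs

const to = TimerOutput()

# Label individual sections; each @timeit block is timed separately.
@timeit to "generate data" data = rand(1_000)
@timeit to "sum data" total = sum(data)

# Display a table of timings and allocations grouped by label.
show(to)
```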

+[BenchmarkTools.jl](https://github.com/JuliaCI/BenchmarkTools.jl) is the older standard for benchmarking in Julia. It is still widely used today.
+However, its default parameters run benchmarks for longer than Chairmarks, and it requires interpolating variables into the benchmarked expressions with `$`.
+
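The `$` interpolation requirement mentioned for BenchmarkTools.jl can be sketched like this (a minimal sketch, assuming a global vector `v`):

```julia
using BenchmarkTools

v = rand(100)

# Without interpolation, `v` is treated as an untyped global variable,
# which adds overhead to the measurement.
@btime sum(v);

# Interpolating with `$` passes the value directly into the benchmark,
# removing the global-variable overhead.
@btime sum($v);
```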
104110
 Finally, if you know a loop is slow and you'll need to wait for it to be done, you can use [ProgressMeter.jl](https://github.com/timholy/ProgressMeter.jl) or [ProgressLogging.jl](https://github.com/JuliaLogging/ProgressLogging.jl) to track its progress.
 
 ## Profiling
@@ -141,7 +147,7 @@ To integrate profile visualisations into environments like Jupyter and Pluto, us
 No matter which tool you use, if your code is too fast to collect samples, you may need to run it multiple times in a loop.
 
 \advanced{
-To visualize memory allocation profiles, use PProf.jl or VSCode's `@profview_allocs`.
+To visualize memory allocation profiles, use PProf.jl or VSCode's `@profview_allocs`.
 A known issue with the allocation profiler is that it cannot determine the type of every object allocated; `Profile.Allocs.UnknownType` is shown instead.
 Inspecting the call graph can help identify which types are responsible for the allocations.
 }
@@ -386,7 +392,7 @@ However, in order for all workers to know about a function or module, we have to
 using Distributed
 
 # Add additional workers then load code on the workers
-addprocs(3)
+addprocs(3)
 @everywhere using SharedArrays
 @everywhere f(x) = 3x^2
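The worker setup shown in that hunk can be extended into a full sketch (hedged: the array size and loop body are illustrative): a `SharedArray` lets every worker on the same machine write into the same memory, and `@distributed` splits the loop iterations across workers.

```julia
using Distributed

addprocs(3)                       # spawn 3 worker processes
@everywhere using SharedArrays
@everywhere f(x) = 3x^2

# A SharedArray is visible to every worker on the same machine.
s = SharedArray{Float64}(10)

# Distribute loop iterations across workers; @sync waits for completion.
@sync @distributed for i in 1:10
    s[i] = f(i)
end
```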
