Mimi uses the PkgBenchmark package for basic benchmarking. The goal is to provide a straightforward way to run small performance checks that support design decisions and ensure that changes made to Mimi do not significantly degrade performance.

Note that it would be prudent to also run these tests between two identical Mimi versions (i.e. run master twice, saving each run to a different file, and then compare them) to confirm that the package is benchmarking properly. This comparison should show no significant differences.
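A sanity check of this kind might look like the following (the file names here are placeholders):

using PkgBenchmark

#on master: run the benchmark suite twice, saving the results to separate files
benchmarkpkg("Mimi", resultfile = "master_run1.json")
benchmarkpkg("Mimi", resultfile = "master_run2.json")

#compare the two runs; the exported report should show no significant regressions or improvements
export_markdown("sanity_check.md", judge(readresults("master_run1.json"), readresults("master_run2.json")))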

Required Infrastructure

To use these tools, use Julia 1.0 and a version of Mimi that has been ported to Julia 1.0. You will also need to work on the master branch of PkgBenchmark (hence the call to add PkgBenchmark#master), rather than the latest tagged version. A setup for working with these tools may look like the following:

julia #start a Julia 1.0 REPL
] activate --shared v1.0 #activate desired environment
] add BenchmarkTools 
] add PkgBenchmark#master #add PkgBenchmark and checkout master branch with the #master postfix
] dev Mimi #add local Mimi dev 

#look at the status
] st
    Status `~/.julia/dev/Mimi/default/Project.toml`
  [6e4b80f9] BenchmarkTools v0.4.1
  [e4e893b0] Mimi v0.5.0+ [`~/.julia/dev/Mimi`]
  [32113eaa] PkgBenchmark v0.1.1+ #master (https://github.com/JuliaCI/PkgBenchmark.jl.git)
  

Benchmarking

The benchmarking tool depends on the Mimi/benchmark/benchmarks.jl file, which tells it what code to test. This file can be changed, but since it must be identical across the branches being compared in order to get sensible results, do not change it without communicating with the rest of the development team. Currently the script calls RegionTutorialBenchmarks.jl, which runs both the one-region and two-region tutorials. In summary, the benchmarking workflow consists of running the test code on both branches under consideration, saving a results .json file for each, and then comparing the two files and writing the comparison to a pre-formatted Markdown .md file.
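For reference, PkgBenchmark expects benchmark/benchmarks.jl to define a BenchmarkTools BenchmarkGroup named SUITE. The sketch below shows only that general structure; the run_one_region and run_two_region helpers are hypothetical stand-ins for whatever RegionTutorialBenchmarks.jl actually defines, so it should not be copied over the real file.

#benchmark/benchmarks.jl -- structural sketch only, not the actual Mimi file
using BenchmarkTools
using Mimi

include("RegionTutorialBenchmarks.jl") #assumed to define the tutorial runs

SUITE = BenchmarkGroup()
SUITE["tutorials"] = BenchmarkGroup()
SUITE["tutorials"]["one_region"] = @benchmarkable run_one_region() #hypothetical helper
SUITE["tutorials"]["two_region"] = @benchmarkable run_two_region() #hypothetical helper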

Before benchmarking, be sure to run the scripts once to account for method compilation time.
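One simple way to do this, assuming the goal is just to make sure Mimi and the benchmark dependencies are compiled before the timed runs, is to call benchmarkpkg once and discard the result:

using PkgBenchmark

#warm-up run; results are not saved to a file
benchmarkpkg("Mimi")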

The first step is to produce the .json results files for both test runs.

using PkgBenchmark 

#checkout your master or baseline branch now

benchmarkpkg("Mimi", resultfile = "masterrun.json")

#checkout your new branch now

benchmarkpkg("Mimi", resultfile = "enhancementrun.json")

The next step is to compare the two results files and write a formatted Markdown .md file, which presents the results and highlights any significant regressions or improvements. Note that judge treats its first argument as the "target" run and its second as the "baseline" run.

#the first argument to judge is the target run, the second is the baseline run
target = readresults("enhancementrun.json")
baseline = readresults("masterrun.json")
export_markdown("comparison.md", judge(target, baseline))