From 464a6927870659948b65a0dada97743b3b17795d Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Mon, 5 Jun 2023 14:20:06 -0700 Subject: [PATCH 01/30] docs: create new docs directory --- CONTRIBUTING.md | 0 README.md | 315 +++++++++++++++++----------------- docs/benchmark-runtime.md | 25 +++ docs/benchmarking-overview.md | 136 +++++++++++++++ docs/compatibility.md | 21 +++ docs/configuration-options.md | 115 +++++++++++++ docs/images/placeholder.png | 1 + docs/interpreting-results.md | 51 ++++++ docs/separate-source-sets.md | 114 ++++++++++++ docs/tasks-overview.md | 8 + 10 files changed, 631 insertions(+), 155 deletions(-) create mode 100644 CONTRIBUTING.md create mode 100644 docs/benchmark-runtime.md create mode 100644 docs/benchmarking-overview.md create mode 100644 docs/compatibility.md create mode 100644 docs/configuration-options.md create mode 100644 docs/images/placeholder.png create mode 100644 docs/interpreting-results.md create mode 100644 docs/separate-source-sets.md create mode 100644 docs/tasks-overview.md diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 00000000..e69de29b diff --git a/README.md b/README.md index 03eb087b..f7e66a97 100644 --- a/README.md +++ b/README.md @@ -1,210 +1,213 @@ +# kotlinx-benchmark + [![Kotlin Alpha](https://kotl.in/badges/alpha.svg)](https://kotlinlang.org/docs/components-stability.html) [![JetBrains incubator project](https://jb.gg/badges/incubator.svg)](https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub) [![GitHub license](https://img.shields.io/badge/license-Apache%20License%202.0-blue.svg?style=flat)](https://www.apache.org/licenses/LICENSE-2.0) -[![Build status](https://teamcity.jetbrains.com/guestAuth/app/rest/builds/buildType:(id:KotlinTools_KotlinxCollectionsImmutable_Build_All)/statusIcon.svg)](https://teamcity.jetbrains.com/viewType.html?buildTypeId=KotlinTools_KotlinxBenchmark_Build_All) +[![Build 
status](https://teamcity.jetbrains.com/guestAuth/app/rest/builds/buildType:(id:KotlinTools_KotlinxBenchmark_Build_All)/statusIcon.svg)](https://teamcity.jetbrains.com/viewType.html?buildTypeId=KotlinTools_KotlinxBenchmark_Build_All)
[![Maven Central](https://img.shields.io/maven-central/v/org.jetbrains.kotlinx/kotlinx-benchmark-runtime.svg?label=Maven%20Central)](https://search.maven.org/search?q=g:%22org.jetbrains.kotlinx%22%20AND%20a:%22kotlinx-benchmark-runtime%22)
[![Gradle Plugin Portal](https://img.shields.io/maven-metadata/v?label=Gradle%20Plugin&metadataUrl=https://plugins.gradle.org/m2/org/jetbrains/kotlinx/kotlinx-benchmark-plugin/maven-metadata.xml)](https://plugins.gradle.org/plugin/org.jetbrains.kotlinx.benchmark)
[![IR](https://img.shields.io/badge/Kotlin%2FJS-IR%20supported-yellow)](https://kotl.in/jsirsupported)

+kotlinx.benchmark is a toolkit for running benchmarks for multiplatform code written in Kotlin and running on the following supported targets: JVM, JavaScript and Native.

-> **_NOTE:_** Starting from version 0.3.0 of the library:
-> * The library runtime is published to Maven Central and no longer published to Bintray.
-> * The Gradle plugin is published to Gradle Plugin Portal
-> * The Gradle plugin id has changed to `org.jetbrains.kotlinx.benchmark`
-> * The library runtime artifact id has changed to `kotlinx-benchmark-runtime`

+## Features

+- Low noise and reliable results
+- Statistical analysis
+- Detailed performance reports

-**kotlinx.benchmark** is a toolkit for running benchmarks for multiplatform code written in Kotlin
-and running on the following supported targets: JVM, JavaScript and Native.

+## Table of contents

-Both Legacy and IR backends are supported for JS, however `kotlin.js.compiler=both` or `js(BOTH)` target declaration won't work.
-You should declare each targeted backend separately. See build script of the [kotlin-multiplatform example project](https://github.com/Kotlin/kotlinx-benchmark/tree/master/examples/kotlin-multiplatform).
+

-On JVM [JMH](https://openjdk.java.net/projects/code-tools/jmh/) is used under the hoods to run benchmarks.
-This library has a very similar way of defining benchmark methods. Thus, using this library you can run your JMH-based
-Kotlin/JVM benchmarks on other platforms with minimum modifications, if any at all.

+- [Using in Your Projects](#using-in-your-projects)
+  - [Gradle Setup](#gradle-setup)
+    - [Kotlin DSL](#kotlin-dsl)
+    - [Groovy DSL](#groovy-dsl)
+  - [Target-specific configurations](#target-specific-configurations)
+    - [Kotlin/JS](#kotlinjs)
+    - [Multiplatform](#multiplatform)
+  - [Benchmark Configuration](#benchmark-configuration)
+- [Examples](#examples)
+- [Contributing](#contributing)

-# Requirements

-Gradle 7.0 or newer

+- **Additional links**
+  - [Harnessing Code Performance: The Art and Science of Benchmarking](docs/benchmarking-overview.md)
+  - [Understanding Benchmark Runtime](docs/benchmark-runtime.md)
+  - [Configuring kotlinx-benchmark](docs/configuration-options.md)
+  - [Interpreting and Analyzing Results](docs/interpreting-results.md)
+  - [Creating Separate Source Sets](docs/separate-source-sets.md)
+  - [Tasks Overview](docs/tasks-overview.md)
+  - [Compatibility Guide](docs/compatibility.md)
+  - [Submitting issues and PRs](CONTRIBUTING.md)

-Kotlin 1.7.20 or newer

+## Using in Your Projects

-# Gradle plugin

+The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, and Kotlin/Native targets. To get started, ensure you're using Kotlin 1.7.20 or newer and Gradle 7.0 or newer.

-Use plugin in `build.gradle`:

+### Gradle Setup

-```groovy
-plugins {
-    id 'org.jetbrains.kotlinx.benchmark' version '0.4.4'
-}
-```
#### Kotlin DSL

1. **Adding Dependency**: Add the `kotlinx-benchmark-runtime` dependency in your `build.gradle.kts` file.

   ```kotlin
   dependencies {
       implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
   }
   ```

-For Kotlin/JS specify building `nodejs` flavour:

-```groovy
-kotlin {
-    js {
-        nodejs()
-        …
-    }
-}
-```

2. **Applying Benchmark Plugin**: Next, apply the benchmark plugin.

   ```kotlin
   plugins {
       kotlin("plugin.allopen") version "1.8.21"
       id("org.jetbrains.kotlinx.benchmark") version "0.4.8"
   }
   ```

-For Kotlin/JVM code, add `allopen` plugin to make JMH happy. Alternatively, make all benchmark classes and methods `open`.

-For example, if you annotated each of your benchmark classes with `@State(Scope.Benchmark)`:
-```kotlin
-@State(Scope.Benchmark)
-class Benchmark {
-    …
-}
-```
-and added the following code to your `build.gradle`:
-```groovy
-plugins {
-    id 'org.jetbrains.kotlin.plugin.allopen'
-}

3. **Enabling AllOpen Plugin**: If your benchmark classes are annotated with `@State(Scope.Benchmark)`, apply the `allopen` plugin and specify the `State` annotation.

   ```kotlin
   plugins {
       kotlin("plugin.allopen") version "1.8.21"
   }
   ```

-allOpen {
-    annotation("org.openjdk.jmh.annotations.State")
-}
-```
-then you don't have to make benchmark classes and methods `open`.

4. **Using AllOpen Plugin**: The `allopen` plugin is used to satisfy JMH's requirements; alternatively, declare all benchmark classes and methods `open` yourself.

   ```kotlin
   allOpen {
       annotation("org.openjdk.jmh.annotations.State")
   }
   ```

-# Runtime Library

-You need a runtime library with annotations and code that will run benchmarks.

-Enable Maven Central for dependencies lookup:
-```groovy
-repositories {
-    mavenCentral()

5. **Specifying Repository**: Ensure you have `mavenCentral()` for dependency lookup in the list of repositories:

   ```kotlin
   repositories {
       mavenCentral()
   }
   ```
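With this setup in place, a minimal benchmark class looks roughly like the following sketch. The class, property, and method names are illustrative; the annotations are assumed to come from the `kotlinx.benchmark` runtime:

```kotlin
import kotlinx.benchmark.*

// Thanks to the allopen configuration above, this class does not need
// to be declared `open` even though JMH requires open classes on the JVM.
@State(Scope.Benchmark)
class ListSumBenchmark {
    @Param("1000", "10000")
    var size: Int = 0

    private var numbers: List<Int> = emptyList()

    @Setup
    fun prepare() {
        numbers = List(size) { it }
    }

    @Benchmark
    fun sum(): Int = numbers.sum() // return the result so it is not optimized away
}
```

Place such a file in the source set you register as a benchmark target and run the `benchmark` Gradle task to execute it.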
#### Groovy DSL

1. **Adding Dependency**: In your `build.gradle` file, include the `kotlinx-benchmark-runtime` dependency.

   ```groovy
   dependencies {
       implementation 'org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8'
   }
   ```

2. **Applying Benchmark Plugin**: Next, apply the benchmark plugin.

   ```groovy
   plugins {
       id 'org.jetbrains.kotlin.plugin.allopen' version "1.8.21"
       id 'org.jetbrains.kotlinx.benchmark' version '0.4.8'
   }
   ```

3. **Enabling AllOpen Plugin**: If your benchmark classes are annotated with `@State(Scope.Benchmark)`, apply the `allopen` plugin and specify the `State` annotation.

   ```groovy
   plugins {
       id 'org.jetbrains.kotlin.plugin.allopen'
   }
   ```

4. **Using AllOpen Plugin**: The `allopen` plugin is used to satisfy JMH's requirements; alternatively, declare all benchmark classes and methods `open` yourself.

   ```groovy
   allOpen {
       annotation("org.openjdk.jmh.annotations.State")
   }
   ```

5. **Specifying Repository**: Ensure you have `mavenCentral()` in the list of repositories:

   ```groovy
   repositories {
       mavenCentral()
   }
   ```
### Target-specific configurations

#### Kotlin/JS

For Kotlin/JS, include the `nodejs()` method call in the `kotlin` block:

```kotlin
kotlin {
    js {
        nodejs()
    }
}
```

-Add the runtime to dependencies of the platform source set, e.g.:
-```
+For Kotlin/JS, both Legacy and IR backends are supported. However, simultaneous target declarations such as `kotlin.js.compiler=both` or `js(BOTH)` won't work; declare each backend separately. For a detailed configuration example, please refer to the [build script of the kotlin-multiplatform example project](https://github.com/Kotlin/kotlinx-benchmark/blob/master/examples/kotlin-multiplatform/build.gradle).

#### Multiplatform

For multiplatform projects, add the `kotlinx-benchmark-runtime` dependency to the `commonMain` source set:

```kotlin
kotlin {
    sourceSets {
        commonMain {
-            dependencies {
-                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.4")
-            }
+            dependencies {
+                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
+            }
        }
    }
}
```

-# Configuration

+The platform-specific artifacts will be resolved automatically.

+### Benchmark Configuration

In your `build.gradle` file, create a `benchmark` section, and inside it add a `targets` section.
-In this section register all compilations you want to run benchmarks for.
-`register` should either be called on the name of a target (e.g. `"jvm"`) which will register its `main` compilation
-(meaning that `register("jvm")` and `register("jvmMain")` register the same compilation)
-Or on the name of a source set (e.g. `"jvmTest"`, `"jsBenchmark"`) which will register the apt compilation
-(e.g. `register("jsFoo")` uses the `foo` compilation defined for the `js` target)
+In this section, register all targets you want to run benchmarks for. 
Example for multiplatform project: -```groovy +```kotlin benchmark { targets { - register("jvm") + register("jvm") register("js") register("native") - register("wasm") // Experimental + // Add this line if you are working with WebAssembly (experimental) + // register("wasm") } } ``` -This package can also be used for Java and Kotlin/JVM projects. Register a Java sourceSet as a target: +To further customize your benchmarks, add a `configurations` section within the `benchmark` block. By default, a `main` configuration is generated, but additional configurations can be added as needed: -```groovy -benchmark { - targets { - register("main") - } -} -``` - -To configure benchmarks and create multiple profiles, create a `configurations` section in the `benchmark` block, -and place options inside. Toolkit creates `main` configuration by default, and you can create as many additional -configurations, as you need. - - -```groovy -benchmark { - configurations { - main { - // configure default configuration - } - smoke { - // create and configure "smoke" configuration, e.g. with several fast benchmarks to quickly check - // if code changes result in something very wrong, or very right. 
- } - } -} -``` - -Available configuration options: - -* `iterations` – number of measuring iterations -* `warmups` – number of warm up iterations -* `iterationTime` – time to run each iteration (measuring and warmup) -* `iterationTimeUnit` – time unit for `iterationTime` (default is seconds) -* `outputTimeUnit` – time unit for results output -* `mode` - - "thrpt" (default) – measures number of benchmark function invocations per time - - "avgt" – measures time per benchmark function invocation -* `include("…")` – regular expression to include benchmarks with fully qualified names matching it, as a substring -* `exclude("…")` – regular expression to exclude benchmarks with fully qualified names matching it, as a substring -* `param("name", "value1", "value2")` – specify a parameter for a public mutable property `name` annotated with `@Param` -* `reportFormat` – format of report, can be `json`(default), `csv`, `scsv` or `text` -* There are also some advanced platform-specific settings that can be configured using `advanced("…", …)` function, - where the first argument is the name of the configuration parameter, and the second is its value. Valid options: - * (Kotlin/Native) `nativeFork` - - "perBenchmark" (default) – executes all iterations of a benchmark in the same process (one binary execution) - - "perIteration" – executes each iteration of a benchmark in a separate process, measures in cold Kotlin/Native runtime environment - * (Kotlin/Native) `nativeGCAfterIteration` – when set to `true`, additionally collects garbage after each measuring iteration (default is `false`). 
- * (Kotlin/JVM) `jvmForks` – number of times harness should fork (default is `1`) - - a non-negative integer value – the amount to use for all benchmarks included in this configuration, zero means "no fork" - - "definedByJmh" – let the underlying JMH determine, which uses the amount specified in [`@Fork` annotation](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/annotations/Fork.html) defined for the benchmark function or its enclosing class, - or [Defaults.MEASUREMENT_FORKS (`5`)](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/runner/Defaults.html#MEASUREMENT_FORKS) if it is not specified by `@Fork`. - * (Kotlin/Js and Wasm) `jsUseBridge` – when `false` disables to generate special benchmark bridges to prevent inlining optimisations (only for `BuiltIn` benchmark executors). - -Time units can be NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, or their short variants such as "ms" or "ns". - -Example: - -```groovy +```kotlin benchmark { - // Create configurations configurations { - main { // main configuration is created automatically, but you can change its defaults - warmups = 20 // number of warmup iterations - iterations = 10 // number of iterations - iterationTime = 3 // time in seconds per iteration + main { + warmups = 20 + iterations = 10 + iterationTime = 3 } smoke { - warmups = 5 // number of warmup iterations - iterations = 3 // number of iterations - iterationTime = 500 // time in seconds per iteration - iterationTimeUnit = "ms" // time unit for iterationTime, default is seconds - } - } - - // Setup targets - targets { - // This one matches compilation base name, e.g. 'jvm', 'jvmTest', etc - register("jvm") { - jmhVersion = "1.21" // available only for JVM compilations & Java source sets - } - register("js") { - // Note, that benchmarks.js uses a different approach of minTime & maxTime and run benchmarks - // until results are stable. 
We estimate minTime as iterationTime and maxTime as iterationTime*iterations
-            //
-            // You can configure benchmark executor - benchmarkJs or buildIn (works only for JsIr backend) with the next line:
-            // jsBenchmarksExecutor = JsBenchmarksExecutor.BuiltIn
+            warmups = 5
+            iterations = 3
+            iterationTime = 500
+            iterationTimeUnit = "ms"
         }
     }
     register("native")
     register("wasm") // Experimental
@@ -215,7 +218,6 @@ benchmark {
 # Separate source sets for benchmarks
 
 Often you want to have benchmarks in the same project, but separated from main code, much like tests. Here is how:
-For a Kotlin/JVM project:
 
 Define source set:
 ```groovy
@@ -270,4 +272,7 @@ benchmark {
 # Examples
 
 The project contains [examples](https://github.com/Kotlin/kotlinx-benchmark/tree/master/examples) subproject that demonstrates using the library.
-
+
+## Contributing
+
+We welcome contributions to kotlinx-benchmark! If you want to contribute, please refer to our [Contribution Guidelines](CONTRIBUTING.md).
diff --git a/docs/benchmark-runtime.md b/docs/benchmark-runtime.md
new file mode 100644
index 00000000..75943cb2
--- /dev/null
+++ b/docs/benchmark-runtime.md
@@ -0,0 +1,25 @@
# kotlinx.benchmark: A Comprehensive Guide to Benchmark Runtime for Each Target

This document provides an in-depth overview of the kotlinx.benchmark library, focusing on how the benchmark runtime works for each supported target: JVM, JavaScript, and Native. This guide is designed for beginner and intermediate users, providing a clear understanding of the underlying libraries used and the benchmark execution process.

## Table of Contents

- [JVM Target](#jvm-target)
- [JavaScript Target](#javascript-target)
- [Native Target](#native-target)

## JVM Target

The JVM target in kotlinx.benchmark leverages the Java Microbenchmark Harness (JMH) to run benchmarks. JMH is a widely used tool for building, running, and analyzing benchmarks written in Java and other JVM languages.
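As an illustration, a benchmark written against the common `kotlinx.benchmark` annotations compiles on the JVM into an ordinary JMH benchmark, because each annotation maps to its JMH counterpart. This is a sketch; the class and workload are hypothetical, and the exact annotation set is assumed from the common runtime API:

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)          // maps to org.openjdk.jmh.annotations.State on the JVM
@BenchmarkMode(Mode.AverageTime) // maps to JMH's average-time mode
@OutputTimeUnit(BenchmarkTimeUnit.NANOSECONDS)
class StringConcatBenchmark {
    @Benchmark
    fun concat(): String = buildString {
        // a small, deterministic workload; the result is returned
        // so JMH treats it as live and does not eliminate it
        repeat(100) { append(it) }
    }
}
```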
+ +### Benchmark Execution + +JMH handles the execution of benchmarks, managing the setup, running, and teardown of tests. It also handles the calculation of results, providing a robust and reliable framework for benchmarking on the JVM. + +### Benchmark Configuration + +The benchmark configuration is handled through annotations that map directly to JMH annotations. These include `@State`, `@Benchmark`, `@BenchmarkMode`, `@OutputTimeUnit`, `@Warmup`, `@Measurement`, and `@Param`. + +### File Operations + +File reading and writing operations are performed using standard Java I/O classes, providing a consistent and reliable method for file operations across all JVM platforms. diff --git a/docs/benchmarking-overview.md b/docs/benchmarking-overview.md new file mode 100644 index 00000000..b6941e60 --- /dev/null +++ b/docs/benchmarking-overview.md @@ -0,0 +1,136 @@ +# Harnessing Code Performance: The Art and Science of Benchmarking with kotlinx-benchmark + +This guide serves as your compass for mastering the art of benchmarking with kotlinx-benchmark. By harnessing the power of benchmarking, you can unlock performance insights in your code, uncover bottlenecks, compare different implementations, detect regressions, and make informed decisions for optimization. + +## Table of Contents + +1. [Understanding Benchmarking](#understanding-benchmarking) + - [Benchmarking Unveiled: A Beginner's Introduction](#benchmarking-unveiled-a-beginners-introduction) + - [Why Benchmarking Deserves Your Attention](#why-benchmarking-deserves-your-attention) + - [Benchmarking: A Developer's Torchlight](#benchmarking-a-developers-torchlight) +2. [Benchmarking Use Cases](#benchmarking-use-cases) +3. [Target Code for Benchmarking](#target-code-for-benchmarking) + - [What to Benchmark](#what-to-benchmark) + - [What Not to Benchmark](#what-not-to-benchmark) +4. [Maximizing Benchmarking](#maximizing-benchmarking) + - [Top Tips for Maximizing Benchmarking](#top-tips-for-maximizing-benchmarking) +5. 
[Community and Support](#community-and-support) +6. [Inquiring Minds: Your Benchmarking Questions Answered](#inquiring-minds-your-benchmarking-questions-answered) +7. [Further Reading and Resources](#further-reading-and-resources) + +## Understanding Benchmarking + +### Benchmarking Unveiled: A Beginner's Introduction + +Benchmarking is the magnifying glass for your code's performance. It helps you uncover performance bottlenecks, carry out comparative analyses, detect performance regressions, and evaluate different environments. By providing a standard and reliable method of performance measurement, benchmarking ensures code optimization and quality, and improves decision-making within the team and the wider development community. + +_kotlinx-benchmark_ is designed for microbenchmarking, providing a lightweight and accurate solution for measuring the performance of Kotlin code. + +### Why Benchmarking Deserves Your Attention + +The significance of benchmarking in software development is undeniable: + +- **Performance Analysis**: Benchmarks provide insights into performance characteristics, allowing you to identify bottlenecks and areas for improvement. +- **Algorithm Optimization**: By comparing different implementations, you can choose the most efficient solution. +- **Code Quality**: Benchmarking ensures that your code meets performance requirements and maintains high quality. +- **Scalability**: Understanding how your code performs at different scales helps you make optimization decisions and trade-offs. + +### Benchmarking: A Developer's Torchlight + +Benchmarking provides several benefits for software development projects: + +1. **Performance Optimization:** By benchmarking different parts of a system, developers can identify performance bottlenecks, areas for improvement, and potential optimizations. This helps in enhancing the overall efficiency and speed of the software. + +2. 
**Comparative Analysis:** Benchmarking allows developers to compare various implementations, libraries, or configurations to make informed decisions. It helps choose the best-performing option or measure the impact of changes made during development.

3. **Regression Detection:** Regular benchmarking enables the detection of performance regressions, i.e., when a change causes a degradation in performance. This helps catch potential issues early in the development process and prevents performance degradation in production.

4. **Hardware and Environment Variations:** Benchmarking helps evaluate the impact of different hardware configurations, system setups, or environments on performance. It enables developers to optimize their software for specific target platforms.

5. **Standardized Measurement:** Benchmarking provides a standardized way of measuring performance, enabling comparison across systems. This eases sharing and discussing performance results within a team or the larger community.

## Benchmarking Use Cases

Benchmarking serves as a critical tool across various scenarios in software development. Here are a few notable use cases:

- **Performance Tuning:** Developers often employ benchmarking while optimizing algorithms, especially when subtle tweaks could lead to drastic performance changes.

- **Library Selection:** When deciding between third-party libraries offering similar functionalities, benchmarking can help identify the most efficient option.

- **Hardware Evaluation:** Benchmarking can help understand how a piece of software performs across different hardware configurations, aiding in better infrastructure decisions.

- **Continuous Integration (CI) Systems:** Automated benchmarks as part of a CI pipeline help spot performance regressions in the early stages of development.

## Target Code for Benchmarking

### What to Benchmark

Consider benchmarking these:

- **Measurable Microcosms: Isolated Code Segments:** Benchmarking thrives on precision, making small, isolated code segments an excellent area of focus.
These miniature microcosms of your codebase are more manageable and provide clearer, more focused insights into your application's performance characteristics. + +- **The Powerhouses: Performance-Critical Functions, Methods or Algorithms:** Your application's overall performance often hinges on a select few performance-critical sections of code. These powerhouses - whether they're specific functions, methods, or complex algorithms - have a significant influence on your application's overall performance and thus make for ideal benchmarking candidates. + +- **The Chameleons: Code Ripe for Optimization or Refactoring:** Change is the only constant in the world of software development. Parts of your code that are regularly refactored, updated, or optimized hold immense value from a benchmarking perspective. By tracking performance changes as this code evolves, you gain insights into the impact of your optimizations, ensuring that every tweak is a step forward in performance. + +### What Not to Benchmark + +It's best to avoid benchmarking: + +- **The Giants: Complex, Monolithic Code Segments:** Although it might be tempting to analyze large, intricate segments of your codebase, these can often lead to a benchmarking quagmire. Interdependencies within these sections can complicate your results, making it challenging to derive precise, actionable insights. Instead, concentrate your efforts on smaller, isolated parts of your code that can be analyzed in detail. + +- **The Bedrocks: Stagnant, Inflexible Code:** Code segments that are infrequently altered or have reached their final form may not provide much value from benchmarking. While it's important to understand their performance characteristics, it's the code that you actively optimize or refactor that can truly benefit from the continuous feedback loop that benchmarking provides. 
- **The Simples: Trivial or Overly Simplistic Code Segments:** While every line of code contributes to the overall performance, directing your benchmarking efforts towards overly simple or negligible-impact parts of your code may not yield much fruit. Concentrate on areas that have a more pronounced impact on your application's performance to ensure your efforts are well spent.

- **The Wild Cards: Non-Reproducible or Unpredictable Behavior Code:** Consistency is key in benchmarking, so code that's influenced by external, unpredictable factors, such as I/O operations, network conditions, or random data generation, should generally be avoided. The resulting inconsistent benchmark results may obstruct your path to precise insights, hindering your optimization efforts.

## Maximizing Benchmarking

### Top Tips for Maximizing Benchmarking

To obtain accurate and insightful benchmark results, keep in mind these essential tips:

1. **Focus on Vital Code Segments**: Benchmark small, isolated code segments that are critical to performance or likely to be optimized.

2. **Employ Robust Tools**: Employ powerful benchmarking tools like kotlinx-benchmark that handle potential pitfalls and provide reliable measurement solutions.

3. **Context is Crucial**: Supplement your benchmarking with performance evaluations on real applications to gain a holistic understanding of performance traits.

4. **Control Your Environment**: Minimize external factors by running benchmarks in a controlled environment, reducing variations in results.

5. **Warm-Up the Code**: Before benchmarking, execute your code multiple times. This allows the JVM to perform optimizations, leading to more accurate results.

6. **Interpreting Results**: Understand the metric you are measuring: in average-time mode, lower values are better, while in throughput mode, higher values are better. Also, consider the statistical variance and look for meaningful differences, not just any difference.
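Several of these tips translate directly into kotlinx-benchmark's build-script settings. The sketch below uses the Gradle Kotlin DSL; the warm-up and iteration values are illustrative, not recommendations:

```kotlin
benchmark {
    configurations {
        named("main") {
            warmups = 10            // discard early runs while the JIT warms up
            iterations = 5          // repeated measured runs reduce noise
            iterationTime = 1
            iterationTimeUnit = "s" // each iteration runs for about one second
        }
    }
}
```

Keeping such values in version control helps make results comparable across machines and over time.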
## Community and Support

For further assistance and learning, consider engaging with these communities:

- **Stack Overflow:** Use the `kotlinx-benchmark` tag to find or ask questions related to this tool.

- **Kotlinlang Slack:** The `#benchmarks` channel is the perfect place to discuss topics related to benchmarking.

- **GitHub Discussions:** The kotlinx-benchmark GitHub repository is another place to discuss and ask questions about this library.

## Inquiring Minds: Your Benchmarking Questions Answered

Benchmarking may raise a myriad of questions, especially when you're first getting started. To help you navigate through these complexities, we've compiled answers to some commonly asked questions.

**1. The Warm-Up Riddle: Why is it Needed Before Benchmarking?**

The Java Virtual Machine (JVM) features sophisticated optimization techniques, such as Just-In-Time (JIT) compilation, which become more effective as your code runs. Warming up allows these optimizations to take place, providing a more accurate representation of how your code performs under standard operating conditions.

**2. Decoding Benchmark Results: How Should I Interpret Them?**

In average-time mode, lower values represent better performance; in throughput mode, higher values do. But don't get too fixated on minuscule differences. Remember to take into account statistical variances and concentrate on significant performance disparities. It's the impactful insights, not every minor fluctuation, that matter most.

**3. Multi-threaded Conundrum: Can I Benchmark Multi-threaded Code with kotlinx-benchmark?**

While kotlinx-benchmark is geared towards microbenchmarking — typically examining single-threaded performance — it's possible to benchmark multi-threaded code. However, keep in mind that such benchmarking can introduce additional complexities due to thread synchronization, contention, and other concurrency challenges. Always ensure you understand these intricacies before proceeding.
## Further Reading and Resources

If you'd like to dig deeper into the world of benchmarking, here are some resources to help you on your journey:

- [Mastering High Performance with Kotlin](https://www.amazon.com/Mastering-High-Performance-Kotlin-difficulties/dp/178899664X)
diff --git a/docs/compatibility.md b/docs/compatibility.md
new file mode 100644
index 00000000..81782764
--- /dev/null
+++ b/docs/compatibility.md
@@ -0,0 +1,21 @@
# Compatibility Guide

This guide provides you with information on the compatibility of different versions of `kotlinx-benchmark` with both Kotlin and Gradle. To use `kotlinx-benchmark` effectively, ensure that you have the minimum required versions of Kotlin and Gradle installed.

| `kotlinx-benchmark` Version | Minimum Required Kotlin Version | Minimum Required Gradle Version |
| :-------------------------: | :-----------------------------: | :-----------------------------: |
| 0.4.8 | 1.8.20 | 8.0 or newer |
| 0.4.7 | 1.8.0 | 8.0 or newer |
| 0.4.6 | 1.7.20 | 8.0 or newer |
| 0.4.5 | 1.7.0 | 7.0 or newer |
| 0.4.4 | 1.7.0 | 7.0 or newer |
| 0.4.3 | 1.6.20 | 7.0 or newer |
| 0.4.2 | 1.6.0 | 7.0 or newer |
| 0.4.1 | 1.6.0 | 6.8 or newer |
| 0.4.0 | 1.5.30 | 6.8 or newer |
| 0.3.1 | 1.4.30 | 6.8 or newer |
| 0.3.0 | 1.4.30 | 6.8 or newer |

*Note: "Minimum Required" means that any version higher than the one listed is also compatible.*

For more details about the changes, improvements, and updates in each `kotlinx-benchmark` version, please refer to the [RELEASE NOTES](https://github.com/Kotlin/kotlinx-benchmark/releases) and [CHANGELOG](#).
diff --git a/docs/configuration-options.md b/docs/configuration-options.md
new file mode 100644
index 00000000..6c9b917b
--- /dev/null
+++ b/docs/configuration-options.md
@@ -0,0 +1,115 @@
# Configuring kotlinx-benchmark

kotlinx-benchmark offers a plethora of configuration options that enable you to customize your benchmarking setup according to your precise needs. This advanced guide provides an in-depth explanation of how to set up your benchmark configurations, alongside detailed insights into the toolkit's functionality.

## Table of Contents

- [Step 1: Laying the Foundation – Establish Benchmark Targets](#step-1)
- [Step 2: Tailoring the Setup – Create Benchmark Configurations](#step-2)
- [Step 3: Fine-tuning Your Setup – Understanding and Setting Configuration Options](#step-3)
  - [Basic Configuration Options: The Essential Settings](#step-3a)
  - [Advanced Configuration Options: The Power Settings](#step-3b)

## Step 1: Laying the Foundation – Establish Benchmark Targets

Your journey starts by defining the `benchmark` section within your `build.gradle` file. This section is your playground where you register the compilations you wish to run benchmarks on, within a `targets` subsection.

Targets can be registered in two ways: either by the name of a target, such as `"jvm"`, which registers its `main` compilation (meaning `register("jvm")` and `register("jvmMain")` register the same compilation), or by the name of a source set, for instance `"jvmTest"` or `"jsBenchmark"`, which registers the corresponding compilation. Here's an illustration using a multiplatform project:

```groovy
benchmark {
    targets {
        register("jvm")
        register("js")
        register("native")
        register("wasm") // Experimental
    }
}
```

## Step 2: Tailoring the Setup – Create Benchmark Configurations

Having laid the groundwork with your targets, the next phase involves creating configurations for your benchmarks.
You accomplish this by adding a `configurations` subsection within your `benchmark` block. + +The kotlinx benchmark toolkit automatically creates a `main` configuration as a default. However, you can mold this tool to suit your needs by creating additional configurations. These configurations are your control knobs, letting you adjust the parameters of your benchmark profiles. Here's how: + +```groovy +benchmark { + configurations { + main { + // Configuration parameters for the default profile go here + } + smoke { + // Create and configure a "smoke" configuration. + } + } +} +``` + +## Step 3: Fine-tuning Your Setup – Understanding and Setting Configuration Options + +Each configuration brings a bundle of options to the table, providing you with the flexibility to meet your specific benchmarking needs. We delve into these options to give you a better understanding and help you make the most of the basic and advanced settings: + +### Basic Configuration Options: The Essential Settings + +| Option | Description | Default Value | Possible Values | +| --- | --- | --- | --- | +| `iterations` | Specifies the number of iterations for measurements. | - | Integer | +| `warmups` | Specifies the number of iterations for system warming, ensuring accurate measurements. | - | Integer | +| `iterationTime` | Specifies the duration for each iteration, both measurement and warm-up. | - | Integer | +| `iterationTimeUnit` | Specifies the unit for `iterationTime`. | Seconds | "ns", "μs", "ms", "s", "m", "h", "d" | +| `outputTimeUnit` | Specifies the unit for the results display. | - | "ns", "μs", "ms", "s", "m", "h", "d" | +| `mode` | Selects between "thrpt" for measuring the number of function calls per unit time or "avgt" for measuring the time per function call. | "thrpt" | "thrpt", "avgt" | +| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. 
| - | Regex pattern | +| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. | - | Regex pattern | +| `param("name", "value1", "value2")` | Assigns values to a public mutable property, annotated with `@Param`. | - | Any string values | +| `reportFormat` | Defines the benchmark report's format options. | "json" | "json", "csv", "scsv", "text" | + +### Advanced Configuration Options: The Power Settings + +Beyond the basics, kotlinx allows you to take a deep dive into platform-specific settings, offering more control over your benchmarks: + +| Option | Platform | Description | Default Value | Possible Values | +| --- | --- | --- | --- | --- | +| `advanced("nativeFork", "value")` | Kotlin/Native | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | "perBenchmark" | "perBenchmark", "perIteration" | +| `advanced("nativeGCAfterIteration", "value")` | Kotlin/Native | Triggers garbage collection after each iteration when set to `true`. | `false` | `true`, `false` | +| `advanced("jvmForks", "value")` | Kotlin/JVM | Determines how many times the harness should fork. | "1" | "0" (no fork), "1", "definedByJmh" (JMH decides) | +| `advanced("jsUseBridge", "value")` | Kotlin/JS, Kotlin/Wasm | Disables the generation of benchmark bridges to stop inlining optimizations when set to `false`. 
| - | `true`, `false` | + +Here's an example of how you can customize a benchmark configuration using these options: + +```groovy +benchmark { + configurations { + main { + warmups = 20 // Number of warmup iterations + iterations = 10 // Number of measurement iterations + iterationTime = 3 // Duration per iteration in seconds + iterationTimeUnit = "s" // Unit for iterationTime + mode = "avgt" // Measure the average time per function call + outputTimeUnit = "ms" // Display results in milliseconds + include(".*MyBenchmark.*") // Only include benchmarks matching this pattern + param("size", "100", "200") // Parameter for benchmark + reportFormat = "json" // Format of the benchmark report + } + smoke { + warmups = 5 + iterations = 3 + iterationTime = 500 + iterationTimeUnit = "ms" + advanced("nativeFork", "perIteration") + advanced("nativeGCAfterIteration", "true") + } + } + targets { + register("jvm") { + jmhVersion = "1.21" + } + register("js") + register("native") + register("wasm") + } +} +``` + +With this guide, you should now be well-equipped to fine-tune your benchmarking process, ensuring you generate precise, reliable performance measurements every time. diff --git a/docs/images/placeholder.png b/docs/images/placeholder.png new file mode 100644 index 00000000..8b137891 --- /dev/null +++ b/docs/images/placeholder.png @@ -0,0 +1 @@ + diff --git a/docs/interpreting-results.md b/docs/interpreting-results.md new file mode 100644 index 00000000..c932b894 --- /dev/null +++ b/docs/interpreting-results.md @@ -0,0 +1,51 @@ +# Interpreting and Analyzing Kotlinx-Benchmark Results + +When you use the kotlinx-benchmark library to profile your Kotlin code, it provides a detailed output that can help you identify bottlenecks, inefficiencies, and performance variations in your application. Here is a comprehensive guide on how to interpret and analyze these results. 
+
+## Understanding the Output
+
+A typical kotlinx-benchmark result may look something like this (here, the same benchmark has been run with three different values of a `size` parameter, producing one row per value):
+
+```
+Benchmark            Mode  Cnt      Score      Error  Units
+ListBenchmark.first  thrpt  20  74512.866 ± 3415.994  ops/s
+ListBenchmark.first  thrpt  20   7685.378 ±  359.982  ops/s
+ListBenchmark.first  thrpt  20    619.714 ±   31.470  ops/s
+```
+
+Let's break down what each column represents:
+
+1. **Benchmark:** This is the name of the benchmark test.
+2. **Mode:** This is the benchmark mode. It may be "avgt" (average time), "ss" (single shot time), "thrpt" (throughput), or "sample" (sampling time).
+3. **Cnt:** This is the number of measurements taken for the benchmark. More measurements lead to more reliable results.
+4. **Score:** This is the primary result of the benchmark. For "avgt", "ss" and "sample" modes, lower scores are better, as they represent time taken per operation. For "thrpt", higher scores are better, as they represent operations per unit of time.
+5. **Error:** This is the measurement error for the Score (the half-width of its confidence interval). It helps you understand the statistical dispersion in the data. A small Error means the Score is more reliable.
+6. **Units:** These indicate the units for Score and Error, like operations per second (ops/s) or time per operation (us/op, ms/op, etc.)
+
+## Analyzing the Results
+
+Here are some general steps to analyze your benchmark results:
+
+1. **Compare Scores:** The primary factor to consider is the Score. Remember to interpret it in the context of the benchmark mode - for throughput, higher is better, and for time-based modes, lower is better.
+
+2. **Consider Error:** The Error value gives you an idea of the reliability of your Score. If the Error is high, the benchmark might need to be run more times to get a reliable Score.
+
+3. **Review Parameters:** Consider the impact of different parameters (like `size` in the example above, set via `@Param`) on your benchmark. They can give you insights into how your code performs under different conditions.
+
+4.
**Factor in Units:** Be aware of the units in which your results are measured. Time can be measured in nanoseconds, microseconds, milliseconds, or seconds, and throughput in operations per second. + +5. **Compare Benchmarks:** If you have run multiple benchmarks, compare the results. This can help identify which parts of your code are slower or less efficient than others. + +## Common Pitfalls + +While analyzing benchmark results, watch out for these common pitfalls: + +1. **Variance:** If you're seeing a high amount of variance (a high Error rate), consider running the benchmark more times. + +2. **JVM Warmup:** Java's HotSpot VM optimizes the code as it runs, which can cause the first few runs to be significantly slower. Make sure you allow for adequate JVM warmup time to get accurate benchmark results. + +3. **Micro-benchmarks:** Be cautious when drawing conclusions from micro-benchmarks (benchmarks of very small pieces of code). They can be useful for testing small, isolated pieces of code, but real-world performance often depends on a wide array of factors that aren't captured in micro-benchmarks. + +4. **Dead Code Elimination:** The JVM is very good at optimizing your code, and sometimes it can optimize your benchmark right out of existence! Make sure your benchmarks do real work and that their results are used somehow (often by returning them from the benchmark method), or else the JVM might optimize them away. + +5. **Measurement error:** Ensure that you are not running any heavy processes in the background that could distort your benchmark results. diff --git a/docs/separate-source-sets.md b/docs/separate-source-sets.md new file mode 100644 index 00000000..a8afbdf5 --- /dev/null +++ b/docs/separate-source-sets.md @@ -0,0 +1,114 @@ +# Benchmarking with Gradle: Creating Separate Source Sets + +Elevate your project's performance potential with organized, efficient, and isolated benchmarks. 
This guide will walk you through the process of creating separate source sets for benchmarks in your Kotlin project with Gradle. + +## Table of Contents + +1. [What is a Source Set?](#what-is-a-source-set) +2. [Why Have Separate Source Sets for Benchmarks?](#why-have-separate-source-sets-for-benchmarks) +3. [Step-by-step Setup Guide](#setup-guide) + - [Kotlin JVM Project](#jvm-project) + - [Kotlin Multiplatform Project](#multiplatform-project) +4. [Frequently Asked Questions](#frequently-asked-questions) +5. [Troubleshooting](#troubleshooting) + +## What is a Source Set? + +Before we delve into the details, let's clarify what a source set is. In Gradle, a source set represents a group of source files that are compiled and executed together. By default, every Gradle project includes two source sets: `main` for your application code and `test` for your test code. + +A source set defines the location of your source code, the names of compiled classes, and their placement. It also handles additional assets such as resources and configuration files. + +## Why Have Separate Source Sets for Benchmarks? + +Having separate source sets for benchmarks offers several advantages: + +1. **Organization**: It helps maintain a clean and organized project structure. Segregating benchmarks from the main code makes it easier to navigate and locate specific code segments. + +2. **Isolation**: Separating benchmarks ensures that the benchmarking code does not interfere with your main code or test code. This isolation guarantees accurate measurements without unintentional side effects. + +3. **Flexibility**: Creating a separate source set allows you to manage your benchmarking code independently. You can compile, test, and run benchmarks without impacting your main source code. 
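To make the source-set conventions above concrete, here is the default directory layout Gradle would expect once a custom `benchmarks` source set is defined (the directory names follow Gradle's conventions and are not specific to kotlinx-benchmark):

```
src/
├── main/kotlin/        <- production code ("main" source set)
├── test/kotlin/        <- test code ("test" source set)
└── benchmarks/kotlin/  <- benchmark code (custom "benchmarks" source set)
```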
+
+## Step-by-step Setup Guide
+
+Below are the step-by-step instructions to set up separate source sets for benchmarks in both Kotlin JVM and Multiplatform projects:
+
+### Kotlin JVM Project
+
+Transform your Kotlin JVM project with separate benchmark source sets by following these simple steps:
+
+1. **Define Source Set**:
+
+   Begin by defining a new source set in your `build.gradle` file. We'll use `benchmarks` as the name for the source set.
+
+   ```groovy
+   sourceSets {
+       benchmarks
+   }
+   ```
+
+2. **Propagate Dependencies**:
+
+   Next, propagate dependencies and output from the `main` source set to your `benchmarks` source set. This ensures the `benchmarks` source set has access to classes and resources from the `main` source set.
+
+   ```groovy
+   dependencies {
+       benchmarksCompile sourceSets.main.output + sourceSets.main.runtimeClasspath
+   }
+   ```
+
+   You can also add output and `compileClasspath` from `sourceSets.test` in the same way if you wish to reuse some of the test infrastructure.
+
+3. **Register Benchmark Source Set**:
+
+   Finally, register your benchmark source set. This informs the kotlinx-benchmark tool that benchmarks reside within this source set and need to be executed accordingly.
+
+   ```groovy
+   benchmark {
+       targets {
+           register("benchmarks")
+       }
+   }
+   ```
+
+### Kotlin Multiplatform Project
+
+Set up your Kotlin Multiplatform project to accommodate separate benchmark source sets by following these steps:
+
+1. **Define New Compilation**:
+
+   Start by defining a new compilation in your target of choice (e.g. jvm, js, etc.) in your `build.gradle.kts` file. In this example, we're associating the new compilation `benchmark` with the `main` compilation of the `jvm` target.
+
+   ```kotlin
+   kotlin {
+       jvm {
+           compilations.create("benchmark") { associateWith(compilations.getByName("main")) }
+       }
+   }
+   ```
+
+2. **Register Benchmark Compilation**:
+
+   Conclude by registering your benchmark compilation.
This notifies the kotlinx-benchmark tool that benchmarks are located within this compilation and should be executed accordingly.
+
+   ```kotlin
+   benchmark {
+       targets {
+           register("jvmBenchmark")
+       }
+   }
+   ```
+
+   For more information on creating a custom compilation, you can refer to the [Kotlin documentation on creating a custom compilation](https://kotlinlang.org/docs/multiplatform-configure-compilations.html#create-a-custom-compilation).
+
+## Frequently Asked Questions
+
+Here are some common questions about creating separate source sets for benchmarks:
+
+**Q: Can I use the same benchmark source set for multiple targets?**
+A: While it's possible, it's generally recommended to have separate source sets for different targets to avoid configuration conflicts and ensure more accurate benchmarks.
+
+**Q: I'm encountering issues when running benchmarks from the IDE. What should I do?**
+A: Ensure that the `src/benchmark/kotlin` directory is marked as "Sources Root" in your IDE. If you're still experiencing difficulties, refer to the discussions in [pull request #112](https://github.com/Kotlin/kotlinx-benchmark/pull/112) for potential solutions.
+
+**Q: Where can I ask additional questions?**
+A: Feel free to post any questions or issues on the [kotlinx-benchmark GitHub page](https://github.com/Kotlin/kotlinx-benchmark). The community is always ready to assist you!
\ No newline at end of file
diff --git a/docs/tasks-overview.md b/docs/tasks-overview.md
new file mode 100644
index 00000000..adba5efc
--- /dev/null
+++ b/docs/tasks-overview.md
@@ -0,0 +1,8 @@
+| Task | Description |
+|---|---|
+| **assembleBenchmarks** | The task responsible for generating and building all benchmarks in the project. Serves as a dependency for other benchmark tasks. |
+| **benchmark** | The primary task for executing all benchmarks in the project. Depends on `assembleBenchmarks` to ensure benchmarks are ready and built.
| +| **{configName}Benchmark** | Executes all benchmarks under the specific configuration. Useful when different benchmarking requirements exist for different parts of the application. | +| **{configName}BenchmarkGenerate** | Generates JMH source files for the specified configuration. JMH is a benchmarking toolkit for Java and JVM-targeting languages. | +| **{configName}BenchmarkCompile** | Compiles the JMH source files generated for a specific configuration, transforming them into machine code for JVM execution. | +| **{configName}BenchmarkJar** | Packages the compiled JMH files into a JAR (Java Archive) file for distribution and execution. | \ No newline at end of file From a3a303b6a5ed133556d6348916db57b068f2841c Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Thu, 22 Jun 2023 10:38:56 -0700 Subject: [PATCH 02/30] docs(README.md): update Kotlin and Gradle prereq versions --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index f7e66a97..08969e19 100644 --- a/README.md +++ b/README.md @@ -45,7 +45,7 @@ kotlinx.benchmark is a toolkit for running benchmarks for multiplatform code wri ## Using in Your Projects -The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, and Kotlin/Native targets. To get started, ensure you're using Kotlin 1.7.20 or newer and Gradle 7.0 or newer. +The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, and Kotlin/Native targets. To get started, ensure you're using Kotlin 1.8.20 or newer and Gradle 8.0 or newer. 
### Gradle Setup From 6652cd8a46ffeb40174da4505cf365f1df8ad76c Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Thu, 22 Jun 2023 10:56:59 -0700 Subject: [PATCH 03/30] docs(README.md): update AllOpen plugin usage, backend support, and code snippet --- README.md | 56 +++++++++++++++++++------------------------------------ 1 file changed, 19 insertions(+), 37 deletions(-) diff --git a/README.md b/README.md index 08969e19..eb35b018 100644 --- a/README.md +++ b/README.md @@ -64,28 +64,11 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, ```kotlin plugins { - kotlin("plugin.allopen") version "1.8.21" id("org.jetbrains.kotlinx.benchmark") version "0.4.8" } ``` -3. **Enabling AllOpen Plugin**: If your benchmark classes are annotated with `@State(Scope.Benchmark)`, apply the `allopen` plugin and specify the `State` annotation. - - ```kotlin - plugins { - kotlin("plugin.allopen") version "1.8.21" - } - ``` - -4. **Using AllOpen Plugin**: The `allopen` plugin is used to satisfy JMH requirements, or all benchmark classes and methods should be `open`. - - ``` - allOpen { - annotation("org.openjdk.jmh.annotations.State") - } - ``` - -5. **Specifying Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: +3. **Specifying Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: ```kotlin repositories { @@ -114,24 +97,8 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' } ``` - -3. **Enabling AllOpen Plugin**: If your benchmark classes are annotated with `@State(Scope.Benchmark)`, apply the `allopen` plugin and specify the `State` annotation. - - ```groovy - plugins { - id 'org.jetbrains.kotlin.plugin.allopen' - } - ``` - -4. 
**Using AllOpen Plugin**: The `allopen` plugin is used to satisfy JMH requirements, or all benchmark classes and methods should be `open`. - - ``` - allOpen { - annotation("org.openjdk.jmh.annotations.State") - } - ``` - -5. **Specifying Repository**: Ensure you have `mavenCentral()` in the list of repositories: + +3. **Specifying Repository**: Ensure you have `mavenCentral()` in the list of repositories: ```groovy repositories { @@ -143,6 +110,21 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, ### Target-specific configurations +#### Kotlin/JVM + +For Kotlin/JVM, applying the [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html) is pivotal to meet JMH's criteria for `open` benchmark classes/methods. Alternatively, make all benchmark classes and methods `open`. Implement it as follows: + +```kotlin +plugins { + kotlin("jvm") version "1.8.21" + kotlin("plugin.allopen") version "1.8.21" +} + +allOpen { + annotation("org.openjdk.jmh.annotations.State") +} +``` + #### Kotlin/JS For Kotlin/JS, include the `nodejs()` method call in the `kotlin` block: @@ -155,7 +137,7 @@ kotlin { } ``` -For Kotlin/JS, both Legacy and IR backends are supported. However, simultaneous target declarations such as `kotlin.js.compiler=both` or `js(BOTH)` are not feasible. Ensure each backend is separately declared. For a detailed configuration example, please refer to the [build script of the kotlin-multiplatform example project](https://github.com/Kotlin/kotlinx-benchmark/blob/master/examples/kotlin-multiplatform/build.gradle). +For Kotlin/JS, IR backends are supported. However, simultaneous target declarations such as `kotlin.js.compiler=both` or `js(BOTH)` are not feasible. Ensure each backend is separately declared. For a detailed configuration example, please refer to the [build script of the kotlin-multiplatform example project](https://github.com/Kotlin/kotlinx-benchmark/blob/master/examples/kotlin-multiplatform/build.gradle). 
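Since each backend must be declared separately, a sketch of an explicit IR-only target declaration might look like this (`js(IR)` is the standard Kotlin DSL spelling; on recent Kotlin versions, IR is already the default):

```kotlin
kotlin {
    js(IR) {       // declare the IR backend explicitly; js(BOTH) is not supported
        nodejs()
    }
}
```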
#### Multiplatform From 02a201532f47da8e224585bdfc0471a9ab40a44e Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Thu, 22 Jun 2023 11:10:34 -0700 Subject: [PATCH 04/30] docs(configuration-options): add note about build script overriding annotation values --- docs/configuration-options.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/configuration-options.md b/docs/configuration-options.md index 6c9b917b..316bd4c3 100644 --- a/docs/configuration-options.md +++ b/docs/configuration-options.md @@ -50,6 +50,8 @@ benchmark { Each configuration brings a bundle of options to the table, providing you with the flexibility to meet your specific benchmarking needs. We delve into these options to give you a better understanding and help you make the most of the basic and advanced settings: +**Note:** Many of these configuration options correspond to annotations in the benchmark code. Please be aware that any values provided in the build script will override those defined by annotations in the code. 
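For example, suppose a hypothetical benchmark class declares `@Measurement(iterations = 20)` and `@Warmup(iterations = 10)`; a configuration like the one below would take precedence over both annotations:

```groovy
benchmark {
    configurations {
        main {
            iterations = 5   // overrides @Measurement(iterations = 20) from the code
            warmups = 3      // overrides @Warmup(iterations = 10) from the code
        }
    }
}
```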
+ ### Basic Configuration Options: The Essential Settings | Option | Description | Default Value | Possible Values | From 979e0ba11f36d702b3f27de9d0e55373b15e8ddd Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sat, 24 Jun 2023 00:11:02 -0700 Subject: [PATCH 05/30] docs(README.md): update README with enhanced explanation and format --- README.md | 69 +++++++++++++++++++++++++++++++++++++++++++------------ 1 file changed, 54 insertions(+), 15 deletions(-) diff --git a/README.md b/README.md index eb35b018..ed366dc1 100644 --- a/README.md +++ b/README.md @@ -25,6 +25,7 @@ kotlinx.benchmark is a toolkit for running benchmarks for multiplatform code wri - [Kotlin DSL](#kotlin-dsl) - [Groovy DSL](#groovy-dsl) - [Target-specific configurations](#target-specific-configurations) + - [Kotlin/JVM](#kotlinjvm) - [Kotlin/JS](#kotlinjs) - [Multiplatform](#multiplatform) - [Benchmark Configuration](#benchmark-configuration) @@ -34,7 +35,7 @@ kotlinx.benchmark is a toolkit for running benchmarks for multiplatform code wri - **Additional links** - - [Harnessing Code Performance: The Art and Science of Benchmarking](docs/benchmarking-overview.md) + - [Code Benchmarking: A Brief Overview](docs/benchmarking-overview.md) - [Understanding Benchmark Runtime](docs/benchmark-runtime.md) - [Configuring kotlinx-benchmark](docs/configuration-options.md) - [Interpreting and Analyzing Results](docs/interpreting-results.md) @@ -70,11 +71,11 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, 3. **Specifying Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: - ```kotlin - repositories { - mavenCentral() - } - ``` + ```kotlin + repositories { + mavenCentral() + } + ``` @@ -97,22 +98,24 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' } ``` - + 3. 
**Specifying Repository**: Ensure you have `mavenCentral()` in the list of repositories: - ```groovy - repositories { - mavenCentral() - } - ``` + ```groovy + repositories { + mavenCentral() + } + ``` ### Target-specific configurations +For different platforms, there may be distinct requirements and settings that need to be configured. + #### Kotlin/JVM -For Kotlin/JVM, applying the [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html) is pivotal to meet JMH's criteria for `open` benchmark classes/methods. Alternatively, make all benchmark classes and methods `open`. Implement it as follows: +When benchmarking Kotlin/JVM code with Java Microbenchmark Harness (JMH), you should use the [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html). This plugin ensures your benchmark classes and methods are `open`, meeting JMH's requirements. ```kotlin plugins { @@ -125,9 +128,45 @@ allOpen { } ``` +
+<details>
+  <summary>Illustrative Example</summary>
+
+Consider you annotated each of your benchmark classes with `@State(Scope.Benchmark)`:
+
+```kotlin
+@State(Scope.Benchmark)
+class MyBenchmark {
+    // Benchmarking-related methods and variables
+    fun benchmarkMethod() {
+        // benchmarking logic
+    }
+}
+```
+
+In Kotlin, classes and methods are `final` by default, which means they can't be overridden. This is incompatible with the operation of the Java Microbenchmark Harness (JMH), which needs to generate subclasses for benchmarking.
+
+This is where the `allopen` plugin comes into play. With the plugin applied, any class annotated with `@State` is treated as `open`, which allows JMH to work as intended. Here's the Kotlin DSL configuration for the `allopen` plugin:
+
+```kotlin
+plugins {
+    kotlin("plugin.allopen") version "1.8.21"
+}
+
+allOpen {
+    annotation("org.openjdk.jmh.annotations.State")
+}
+```
+
+This configuration ensures that your `MyBenchmark` class and its `benchmarkMethod` function are treated as `open`, allowing JMH to generate subclasses and conduct the benchmark.
+
+</details>
+
+You can alternatively mark your benchmark classes and methods `open` manually, but using the `allopen` plugin enhances code maintainability.
+
 #### Kotlin/JS
 
-For Kotlin/JS, include the `nodejs()` method call in the `kotlin` block:
+For benchmarking Kotlin/JS code, the Node.js [execution environment](https://kotlinlang.org/docs/js-project-setup.html#execution-environments) should be targeted, because kotlinx-benchmark-runtime uses the Node.js environment to run benchmarks. Include the `nodejs()` method call in the `kotlin` block:
+
 ```kotlin
 kotlin {
     js {
         nodejs()
     }
 }
 ```
 
-For Kotlin/JS, IR backends are supported. However, simultaneous target declarations such as `kotlin.js.compiler=both` or `js(BOTH)` are not feasible. Ensure each backend is separately declared. For a detailed configuration example, please refer to the [build script of the kotlin-multiplatform example project](https://github.com/Kotlin/kotlinx-benchmark/blob/master/examples/kotlin-multiplatform/build.gradle).
+For Kotlin/JS, only the IR backend is supported.
For more information on the IR compiler, please refer to the [Kotlin/JS IR compiler documentation](https://kotlinlang.org/docs/js-ir-compiler.html) #### Multiplatform From e74decce9c9601375b58872d78cddd3a00846cfb Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sat, 24 Jun 2023 00:17:27 -0700 Subject: [PATCH 06/30] docs(configuration-options): Enhance configuration options table with Corresponding Annotation column, refine example and improve formatting --- docs/configuration-options.md | 87 ++++++++++++++++++----------------- 1 file changed, 45 insertions(+), 42 deletions(-) diff --git a/docs/configuration-options.md b/docs/configuration-options.md index 316bd4c3..2165b36d 100644 --- a/docs/configuration-options.md +++ b/docs/configuration-options.md @@ -9,17 +9,18 @@ kotlinx-benchmark offers a plethora of configuration options that enable you to - [Step 3: Fine-tuning Your Setup – Understanding and Setting Configuration Options](#step-3) - [Basic Configuration Options: The Essential Settings](#step-3a) - [Advanced Configuration Options: The Power Settings](#step-3b) +- [The Benchmark Configuration in Action: An In-Depth Example](#example) ## Step 1: Laying the Foundation – Establish Benchmark Targets -Your journey starts by defining the `benchmark` section within your `build.gradle` file. This section is your playground where you register the compilations you wish to run benchmarks on, within a `targets` subsection. +To start off, define the `benchmark` section within your `build.gradle` file. This section is your playground where you register the compilations you wish to run benchmarks on, within a `targets` subsection. Targets can be registered in two ways. Either by their name, such as `"jvm"`, which registers its `main` compilation, meaning `register("jvm")` and `register("jvmMain")` will register the same compilation. 
Alternatively, you can register a source set, for instance, `"jvmTest"` or `"jsBenchmark"`, which will register the corresponding compilation. Here's an illustration using a multiplatform project: ```groovy benchmark { targets { - register("jvm") + register("jvm") register("js") register("native") register("wasm") // Experimental @@ -27,6 +28,8 @@ benchmark { } ``` +For detailed guidance on creating separate source sets for benchmarks in your Kotlin project, please refer to [Benchmarking with Gradle: Creating Separate Source Sets](docs/separate-source-sets.md). + ## Step 2: Tailoring the Setup – Create Benchmark Configurations Having laid the groundwork with your targets, the next phase involves creating configurations for your benchmarks. You accomplish this by adding a `configurations` subsection within your `benchmark` block. @@ -36,12 +39,12 @@ The kotlinx benchmark toolkit automatically creates a `main` configuration as a ```groovy benchmark { configurations { - main { + main { // Configuration parameters for the default profile go here } - smoke { + smoke { // Create and configure a "smoke" configuration. - } + } } } ``` @@ -50,58 +53,58 @@ benchmark { Each configuration brings a bundle of options to the table, providing you with the flexibility to meet your specific benchmarking needs. We delve into these options to give you a better understanding and help you make the most of the basic and advanced settings: -**Note:** Many of these configuration options correspond to annotations in the benchmark code. Please be aware that any values provided in the build script will override those defined by annotations in the code. +**Note:** Many of these configuration options correspond to annotations in the benchmark code. Please be aware that any values provided in the build script will override those defined by annotations in the code. 
 ### Basic Configuration Options: The Essential Settings
 
-| Option | Description | Default Value | Possible Values |
-| --- | --- | --- | --- |
-| `iterations` | Specifies the number of iterations for measurements. | - | Integer |
-| `warmups` | Specifies the number of iterations for system warming, ensuring accurate measurements. | - | Integer |
-| `iterationTime` | Specifies the duration for each iteration, both measurement and warm-up. | - | Integer |
-| `iterationTimeUnit` | Specifies the unit for `iterationTime`. | Seconds | "ns", "μs", "ms", "s", "m", "h", "d" |
-| `outputTimeUnit` | Specifies the unit for the results display. | - | "ns", "μs", "ms", "s", "m", "h", "d" |
-| `mode` | Selects between "thrpt" for measuring the number of function calls per unit time or "avgt" for measuring the time per function call. | "thrpt" | "thrpt", "avgt" |
-| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. | - | Regex pattern |
-| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. | - | Regex pattern |
-| `param("name", "value1", "value2")` | Assigns values to a public mutable property, annotated with `@Param`. | - | Any string values |
-| `reportFormat` | Defines the benchmark report's format options. | "json" | "json", "csv", "scsv", "text" |
+| Option                              | Description                                                                                                                          | Default Value | Possible Values                      | Corresponding Annotation |
+| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------------- | ------------------------------------ | ------------------------ |
+| `iterations`                        | Specifies the number of iterations for measurements.                                                                                 | -             | Integer                              | @Measurement             |
+| `warmups`                           | Specifies the number of iterations for system warming, ensuring accurate measurements.
| - | Integer | @Warmup | +| `iterationTime` | Specifies the duration for each iteration, both measurement and warm-up. | - | Integer | @Measurement | +| `iterationTimeUnit` | Specifies the unit for `iterationTime`. | - | "ns", "μs", "ms", "s", "m", "h", "d" | @Measurement | +| `outputTimeUnit` | Specifies the unit for the results display. | - | "ns", "μs", "ms", "s", "m", "h", "d" | @OutputTimeUnit | +| `mode` | Selects between "thrpt" for measuring the number of function calls per unit time or "avgt" for measuring the time per function call. | - | "thrpt", "avgt" | @BenchmarkMode | +| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. | - | Regex pattern | - | +| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. | - | Regex pattern | - | +| `param("name", "value1", "value2")` | Assigns values to a public mutable property, annotated with `@Param`. | - | Any string values | @Param | +| `reportFormat` | Defines the benchmark report's format options. | "json" | "json", "csv", "scsv", "text" | - | ### Advanced Configuration Options: The Power Settings Beyond the basics, kotlinx allows you to take a deep dive into platform-specific settings, offering more control over your benchmarks: -| Option | Platform | Description | Default Value | Possible Values | -| --- | --- | --- | --- | --- | -| `advanced("nativeFork", "value")` | Kotlin/Native | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | "perBenchmark" | "perBenchmark", "perIteration" | -| `advanced("nativeGCAfterIteration", "value")` | Kotlin/Native | Triggers garbage collection after each iteration when set to `true`. | `false` | `true`, `false` | -| `advanced("jvmForks", "value")` | Kotlin/JVM | Determines how many times the harness should fork. 
| "1" | "0" (no fork), "1", "definedByJmh" (JMH decides) | -| `advanced("jsUseBridge", "value")` | Kotlin/JS, Kotlin/Wasm | Disables the generation of benchmark bridges to stop inlining optimizations when set to `false`. | - | `true`, `false` | +| Option | Platform | Description | Default Value | Possible Values | Corresponding Annotation | +| --------------------------------------------- | ---------------------- | ---------------------------------------------------------------------------------------------------------------------- | -------------- | ------------------------------------------------ | ------------------------ | +| `advanced("nativeFork", "value")` | Kotlin/Native | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | "perBenchmark" | "perBenchmark", "perIteration" | - | +| `advanced("nativeGCAfterIteration", "value")` | Kotlin/Native | Triggers garbage collection after each iteration when set to `true`. | `false` | `true`, `false` | - | +| `advanced("jvmForks", "value")` | Kotlin/JVM | Determines how many times the harness should fork. | "1" | "0" (no fork), "1", "definedByJmh" (JMH decides) | @Fork | +| `advanced("jsUseBridge", "value")` | Kotlin/JS, Kotlin/Wasm | Disables the generation of benchmark bridges to stop inlining optimizations when set to `false`. 
| - | `true`, `false` | - | -Here's an example of how you can customize a benchmark configuration using these options: +## The Benchmark Configuration in Action: An In-Depth Example ```groovy benchmark { configurations { - main { - warmups = 20 // Number of warmup iterations - iterations = 10 // Number of measurement iterations - iterationTime = 3 // Duration per iteration in seconds - iterationTimeUnit = "s" // Unit for iterationTime - mode = "avgt" // Measure the average time per function call - outputTimeUnit = "ms" // Display results in milliseconds - include(".*MyBenchmark.*") // Only include benchmarks matching this pattern - param("size", "100", "200") // Parameter for benchmark - reportFormat = "json" // Format of the benchmark report + main { + warmups = 20 // Execute 20 iterations for system warming to stabilize the JVM and ensure accurate measurements + iterations = 10 // Perform 10 iterations for the actual measurement + iterationTime = 3 // Each iteration lasts for 3 seconds + iterationTimeUnit = "s" // Time unit for iterationTime is seconds + mode = "avgt" // Benchmarking mode is set to average time per function call + outputTimeUnit = "ms" // The results will be displayed in milliseconds + include(".*MyBenchmark.*") // Only include benchmarks that match this regular expression pattern + param("size", "100", "200") // Assign two potential values ("100" and "200") to a property annotated with @Param + reportFormat = "json" // The benchmark report will be generated in JSON format for easy parsing and visualization } smoke { - warmups = 5 - iterations = 3 - iterationTime = 500 - iterationTimeUnit = "ms" - advanced("nativeFork", "perIteration") - advanced("nativeGCAfterIteration", "true") - } + warmups = 5 // Perform 5 warmup iterations + iterations = 3 // Perform 3 measurement iterations + iterationTime = 500 // Each iteration lasts for 500 milliseconds + iterationTimeUnit = "ms" // Time unit for iterationTime is milliseconds + advanced("nativeFork", 
"perIteration") // Execute each iteration in a separate Kotlin/Native process + advanced("nativeGCAfterIteration", "true") // Trigger garbage collection after each iteration in Kotlin/Native + } } targets { register("jvm") { From d95512b28378cb06ee88128e2a0b6aec52257d57 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sat, 24 Jun 2023 00:34:00 -0700 Subject: [PATCH 07/30] docs(seperate-source-sets): Refine wording, enhance FAQ section, and address snippet errors --- docs/separate-source-sets.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/docs/separate-source-sets.md b/docs/separate-source-sets.md index a8afbdf5..654c7d25 100644 --- a/docs/separate-source-sets.md +++ b/docs/separate-source-sets.md @@ -20,7 +20,7 @@ A source set defines the location of your source code, the names of compiled cla ## Why Have Separate Source Sets for Benchmarks? -Having separate source sets for benchmarks offers several advantages: +Creating separate source sets for benchmarks is especially beneficial when you are integrating benchmarks into an existing project. Here are several advantages of doing so: 1. **Organization**: It helps maintain a clean and organized project structure. Segregating benchmarks from the main code makes it easier to navigate and locate specific code segments. 
@@ -65,7 +65,7 @@ Transform your Kotlin JVM project with separate benchmark source sets by followi ```groovy benchmark { targets { - register("benchmarks") + register("benchmarks") } } ``` @@ -93,7 +93,7 @@ Set up your Kotlin Multiplatform project to accommodate separate benchmark sourc ```kotlin benchmark { targets { - register("jvmBenchmark") + register("benchmark") } } ``` @@ -104,11 +104,12 @@ Set up your Kotlin Multiplatform project to accommodate separate benchmark sourc Here are some common questions about creating separate source sets for benchmarks: -**Q: Can I use the same benchmark source set for multiple targets?** -A: While it's possible, it's generally recommended to have separate source sets for different targets to avoid configuration conflicts and ensure more accurate benchmarks. +**Q: Is it recommended to reuse the same benchmark source set for benchmarking multiple target platforms in a Kotlin Multiplatform Project?** +A: It's generally recommended to have separate source sets for different targets to avoid configuration conflicts and ensure more accurate benchmarks. This practice mitigates the risk of configuration conflicts inherent in different platforms that may have unique dependencies and setup requirements. -**Q: I'm encountering issues when running benchmarks from the IDE. What should I do?** -A: Ensure that the `src/benchmark/kotlin` directory is marked as "Sources Root" in your IDE. If you're still experiencing difficulties, refer to the discussions in [issue #112](https://github.com/Kotlin/kotlinx-benchmark/pull/112) for potential solutions. +Moreover, the performance characteristics can vary significantly across platforms. Having separate source sets for each benchmarking target ensures that your benchmarking process accurately reflects the performance of your code in its specific operational context. + +For instance, consider a multiplatform project with JVM and JavaScript targets. 
Rather than using a single shared benchmark source set, you should ideally create two separate benchmark source sets, say `jvmBenchmark` and `jsBenchmark`. By doing so, you are able to customize each benchmark source set according to the peculiarities and performance nuances of its corresponding platform, thereby yielding more accurate and meaningful benchmarking results. **Q: Where can I ask additional questions?** -A: Feel free to post any questions or issues on the [kotlinx-benchmark GitHub page](https://github.com/Kotlin/kotlinx-benchmark). The community is always ready to assist you! \ No newline at end of file +A: We invite you to bring your questions or issues to several platforms. For more immediate interactive feedback, consider joining our [Slack channel](https://kotlinlang.slack.com) where developers and Kotlin enthusiasts discuss a range of topics. For more in-depth, threaded discussions, post your queries on the [GitHub Discussions page](https://github.com/Kotlin/kotlinx-benchmark/discussions) for kotlinx-benchmark. You're also welcome to raise specific issues on the [kotlinx-benchmark GitHub page](https://github.com/Kotlin/kotlinx-benchmark). Each of these platforms is actively monitored, and the community is always prepared to assist you! 
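To ground the `jvmBenchmark`/`jsBenchmark` arrangement described above, here is a minimal sketch of per-target benchmark compilations in a multiplatform build script (compilation and target names are assumptions for illustration, not taken from this repository):

```kotlin
// build.gradle.kts — sketch only
kotlin {
    jvm {
        // Dedicated benchmark compilation; its default source set is "jvmBenchmark".
        compilations.create("benchmark") { associateWith(compilations.getByName("main")) }
    }
    js(IR) {
        nodejs()
        // Analogous compilation for JS; its default source set is "jsBenchmark".
        compilations.create("benchmark") { associateWith(compilations.getByName("main")) }
    }
}

benchmark {
    targets {
        register("jvmBenchmark")
        register("jsBenchmark")
    }
}
```

Benchmark sources then live in `src/jvmBenchmark/kotlin` and `src/jsBenchmark/kotlin`, each of which can be tuned to its platform.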
From 78de0371072853612bab9a1acd37cf83218d8038 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sat, 24 Jun 2023 00:42:15 -0700 Subject: [PATCH 08/30] docs: Add step-by-step setup guide for single-platform project --- docs/singleplatform-setup.md | 185 +++++++++++++++++++++++++++++++++++ 1 file changed, 185 insertions(+) create mode 100644 docs/singleplatform-setup.md diff --git a/docs/singleplatform-setup.md b/docs/singleplatform-setup.md new file mode 100644 index 00000000..7f173702 --- /dev/null +++ b/docs/singleplatform-setup.md @@ -0,0 +1,185 @@ +## Step-by-Step Setup Guide for Single-Platform Benchmarking Project Using kotlinx-benchmark + +### Prerequisites + +Before starting, ensure your development environment meets the following [requirements](docs/compatibility.md): + +- **Kotlin**: Version 1.8.20 or newer. Install Kotlin from the [official website](https://kotlinlang.org/) or a package manager like SDKMAN! or Homebrew. +- **Gradle**: Version 8.0 or newer. Download Gradle from the [official website](https://gradle.org/) or use a package manager. + +### Step 1: Create a New Kotlin Project + +If you're starting from scratch, you can begin by creating a new Kotlin project with Gradle. This can be done either manually, through the command line, or by using an IDE like IntelliJ IDEA, which offers built-in support for project generation. + +### Step 2: Configure Build + +In this step, you'll modify your project's build file to add necessary dependencies and plugins. + +
+Kotlin DSL + +#### 2.1 Apply the Necessary Plugins + +In your `build.gradle.kts` file, add the required plugins. These plugins are necessary for enabling Kotlin/JVM, making all classes and functions open, and using the kotlinx.benchmark plugin. + +```kotlin +plugins { + kotlin("jvm") version "1.8.21" + kotlin("plugin.allopen") version "1.8.21" + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" +} +``` + +#### 2.2 Add the Dependencies + +Next, add the `kotlinx-benchmark-runtime` dependency to your project. This dependency contains the necessary runtime components for benchmarking. + +```kotlin +dependencies { + implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8") +} +``` + +#### 2.3 Apply the AllOpen Annotation + +Now, you need to instruct the [allopen](https://kotlinlang.org/docs/all-open-plugin.html) plugin to consider all benchmark classes and their methods as open. For that, apply the `allOpen` block and specify the JMH annotation `State`. + +```kotlin +allOpen { + annotation("org.openjdk.jmh.annotations.State") +} +``` + +#### 2.4 Define the Repositories + +Gradle needs to know where to find the libraries your project depends on. In this case, we're using the libraries hosted on Maven Central, so we need to specify that. + +In your `build.gradle.kts` file, add the following code block: + +```kotlin +repositories { + mavenCentral() +} +``` + +#### 2.5 Register the Benchmark Targets + +Next, we need to inform the kotlinx.benchmark plugin about our benchmarking target. In this case, we are targeting JVM. + +In your `build.gradle.kts` file, add the following code block within the `benchmark` block: + +```kotlin +benchmark { + targets { + register("jvm") + } +} +``` + +
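Putting steps 2.1–2.5 together, a complete minimal `build.gradle.kts` might look like the following (versions are the ones used above and may need updating):

```kotlin
plugins {
    kotlin("jvm") version "1.8.21"
    kotlin("plugin.allopen") version "1.8.21"
    id("org.jetbrains.kotlinx.benchmark") version "0.4.8"
}

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
}

allOpen {
    annotation("org.openjdk.jmh.annotations.State")
}

benchmark {
    targets {
        register("jvm")
    }
}
```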
+ +
+Groovy DSL + +#### 2.1 Apply the Necessary Plugins + +In your `build.gradle` file, apply the required plugins. These plugins are necessary for enabling Kotlin/JVM, making all classes and functions open, and using the kotlinx.benchmark plugin. + +```groovy +plugins { + id 'org.jetbrains.kotlin.jvm' version '1.8.21' + id 'org.jetbrains.kotlin.plugin.allopen' version '1.8.21' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} +``` + +#### 2.2 Add the Dependencies + +Next, add the `kotlinx-benchmark-runtime` dependency to your project. This dependency contains the necessary runtime components for benchmarking. + +```groovy +dependencies { + implementation 'org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8' +} +``` + +#### 2.3 Apply the AllOpen Annotation + +Now, you need to instruct the [allopen](https://kotlinlang.org/docs/all-open-plugin.html) plugin to consider all benchmark classes and their methods as open. For that, apply the `allOpen` block and specify the JMH annotation `State`. + +```groovy +allOpen { + annotation("org.openjdk.jmh.annotations.State") +} +``` + +#### 2.4 Define the Repositories + +Gradle needs to know where to find the libraries your project depends on. In this case, we're using the libraries hosted on Maven Central, so we need to specify that. + +In your `build.gradle` file, add the following code block: + +```groovy +repositories { + mavenCentral() +} +``` + +#### 2.5 Register the Benchmark Targets + +Next, we need to inform the kotlinx.benchmark plugin about our benchmarking target. In this case, we are targeting JVM. + +In your `build.gradle` file, add the following code block within the `benchmark` block: + +```groovy +benchmark { + targets { + register("jvm") + } +} +``` + +
+
+### Step 3: Writing Benchmarks
+
+Create a new Kotlin source file in your `src/main/kotlin` directory to write your benchmarks. Each benchmark is a class or object with methods annotated with `@Benchmark`. Here's a simple example:
+
+```kotlin
+import org.openjdk.jmh.annotations.Benchmark
+import org.openjdk.jmh.annotations.Scope
+import org.openjdk.jmh.annotations.State
+
+@State(Scope.Benchmark)
+open class ListBenchmark {
+    @Benchmark
+    fun listOfBenchmark(): List<Int> {
+        // Returning the result keeps the JIT from eliminating the work as dead code.
+        return listOf(1, 2, 3, 4, 5)
+    }
+}
+```
+
+Benchmark classes and their methods must be `open`, because JMH creates subclasses during the benchmarking process. Annotating the class with `@State(Scope.Benchmark)` lets the `allopen` plugin we configured earlier take care of this for you; marking members `open` manually, as shown, also works.
+
+### Step 4: Running Your Benchmarks
+
+Executing your benchmarks is an important part of the process. This will allow you to gather performance data about your code. There are two primary ways to run your benchmarks: through the command line or using your IDE.
+
+#### 4.1 Running Benchmarks From the Command Line
+
+The simplest way to run your benchmarks is by using the Gradle task `benchmark`. You can do this by opening a terminal, navigating to the root of your project, and entering the following command:
+
+```bash
+./gradlew benchmark
+```
+
+This command instructs Gradle to execute the `benchmark` task, which in turn runs your benchmarks.
+
+#### 4.2 Understanding Benchmark Execution
+
+The execution of your benchmarks might take some time. This is normal and necessary: benchmarks must be run for a sufficient length of time to produce reliable, statistically significant results.
+
+For more details regarding the available Gradle tasks, refer to this [document](docs/tasks-overview.md).
+
+### Step 5: Analyze the Results
+
+To fully understand and make the best use of these results, it's important to know how to interpret and analyze them properly. For a comprehensive guide on interpreting and analyzing benchmarking results, please refer to this dedicated document: [Interpreting and Analyzing Results](docs/interpreting-results.md).
+
+Congratulations! 
You have successfully set up a Kotlin/JVM benchmark project using kotlinx-benchmark. From b139400d1ab67bc9a2e0e085371ef7a1822f6427 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sat, 24 Jun 2023 00:42:28 -0700 Subject: [PATCH 09/30] docs: Add step-by-step setup guide for multi-platform project --- docs/multiplatform-setup.md | 241 ++++++++++++++++++++++++++++++++++++ 1 file changed, 241 insertions(+) create mode 100644 docs/multiplatform-setup.md diff --git a/docs/multiplatform-setup.md b/docs/multiplatform-setup.md new file mode 100644 index 00000000..42c19ab7 --- /dev/null +++ b/docs/multiplatform-setup.md @@ -0,0 +1,241 @@ +## Step-by-Step Guide for Multiplatform Benchmarking Setup Using kotlinx.benchmark + +### Prerequisites + +Before starting, ensure your development environment meets the following requirements: + +- **Kotlin**: Version 1.8.20 or newer. Install Kotlin from the [official website](https://kotlinlang.org/) or a package manager like SDKMAN! or Homebrew. +- **Gradle**: Version 8.0 or newer. Download Gradle from the [official website](https://gradle.org/) or use a package manager. + +### Step 1: Create a New Kotlin Multiplatform Project + +Begin by creating a new Kotlin Multiplatform project. You can do this either manually or by using an IDE such as IntelliJ IDEA, which can generate the project structure for you. + +### Step 2: Configure Build + +In this step, you'll modify your project's build file to add necessary dependencies and plugins. + +
+Kotlin DSL + +#### 2.1 Apply the Necessary Plugins + +In your `build.gradle.kts` file, add the required plugins. These plugins are necessary for enabling Kotlin Multiplatform, making all classes and functions open, and using the kotlinx.benchmark plugin. + +```kotlin +plugins { + kotlin("multiplatform") + kotlin("plugin.allopen") version "1.8.21" + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" +} +``` + +#### 2.2 Add the Dependencies + +Next, add the `kotlinx-benchmark-runtime` dependency to your project. This dependency contains the necessary runtime components for benchmarking. + +```kotlin +kotlin { + sourceSets { + commonMain { + dependencies { + implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8") + } + } + } +} +``` + +#### 2.3 Apply the AllOpen Annotation + +Now, you need to instruct the [allopen](https://kotlinlang.org/docs/all-open-plugin.html) plugin to consider all benchmark classes and their methods as open. For that, apply the `allOpen` block and specify the JMH annotation `State`. + +```kotlin +allOpen { + annotation("org.openjdk.jmh.annotations.State") +} +``` + +#### 2.4 Define the Repositories + +Gradle needs to know where to find the libraries your project depends on. In this case, we're using the libraries hosted on Maven Central, so we need to specify that. + +In your `build.gradle.kts` file, add the following code block: + +```kotlin +repositories { + mavenCentral() +} +``` + +#### 2.5 Register the Benchmark Targets + +Next, we need to inform the kotlinx.benchmark plugin about our benchmarking targets. For multiplatform projects, we need to register each platform separately. 
+
+In your `build.gradle.kts` file, add the following code block within the `benchmark` block:
+
+```kotlin
+benchmark {
+    targets {
+        register("jvm")
+        register("js")
+        register("native")
+        // Add more platforms as needed
+    }
+}
+```
+
+#### 2.6 Define the Kotlin Targets and SourceSets
+
+In the `kotlin` block, you define the different platforms that your project targets and the related source sets. Within each target, you can specify the related compilations. For the JVM, you create a dedicated `benchmark` compilation associated with the main compilation:
+
+```kotlin
+kotlin {
+    jvm {
+        compilations.create("benchmark") { associateWith(compilations.getByName("main")) }
+    }
+}
+```
+
+The `kotlinx-benchmark-runtime` dependency is applied to the `commonMain` source set (see step 2.2), indicating that it will be used across all platforms:
+
+```kotlin
+kotlin {
+    sourceSets {
+        commonMain {
+            dependencies {
+                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
+            }
+        }
+    }
+}
+```
+
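Putting steps 2.1–2.6 together, a minimal multiplatform `build.gradle.kts` could look like this sketch (the declared targets and versions are assumptions; adapt them to your project):

```kotlin
plugins {
    kotlin("multiplatform") version "1.8.21"
    kotlin("plugin.allopen") version "1.8.21"
    id("org.jetbrains.kotlinx.benchmark") version "0.4.8"
}

repositories {
    mavenCentral()
}

allOpen {
    annotation("org.openjdk.jmh.annotations.State")
}

kotlin {
    jvm {
        // Benchmark compilation that can see the main compilation's declarations.
        compilations.create("benchmark") { associateWith(compilations.getByName("main")) }
    }
    js(IR) { nodejs() }
    // Add native targets (e.g. linuxX64(), macosX64()) as needed.

    sourceSets {
        val commonMain by getting {
            dependencies {
                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
            }
        }
    }
}

benchmark {
    targets {
        register("jvm")
        register("js")
    }
}
```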
+ +
+Groovy DSL + +#### 2.1 Apply the Necessary Plugins + +In your `build.gradle` file, apply the required plugins. These plugins are necessary for enabling Kotlin Multiplatform, making all classes and functions open, and using the kotlinx.benchmark plugin. + +```groovy +plugins { + id 'org.jetbrains.kotlin.multiplatform' + id 'org.jetbrains.kotlin.plugin.allopen' version '1.8.21' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} +``` + +#### 2.2 Add the Dependencies + +Next, add the `kotlinx-benchmark-runtime` dependency to your project. This dependency contains the necessary runtime components for benchmarking. + +```groovy +dependencies { + implementation 'org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8' +} +``` + +#### 2.3 Apply the AllOpen Annotation + +Now, you need to instruct the [allopen](https://kotlinlang.org/docs/all-open-plugin.html) plugin to consider all benchmark classes and their methods as open. For that, apply the `allOpen` block and specify the JMH annotation `State`. + +```groovy +allOpen { + annotation("org.openjdk.jmh.annotations.State") +} +``` + +#### 2.4 Define the Repositories + +Gradle needs to know where to find the libraries your project depends on. In this case, we're using the libraries hosted on Maven Central, so we need to specify that. + +In your `build.gradle` file, add the following code block: + +```groovy +repositories { + mavenCentral() +} +``` + +#### 2.5 Register the Benchmark Targets + +Next, we need to inform the kotlinx.benchmark plugin about our benchmarking targets. For multiplatform projects, we need to register each platform separately. 
+
+In your `build.gradle` file, add the following code block within the `benchmark` block:
+
+```groovy
+benchmark {
+    targets {
+        register("jvm")
+        register("js")
+        register("native")
+        // Add more platforms as needed
+    }
+}
+```
+
+#### 2.6 Define the Kotlin Targets and SourceSets
+
+In the `kotlin` block, you define the different platforms that your project targets and the related source sets. Within each target, you can specify the related compilations. For the JVM, you create a dedicated `benchmark` compilation associated with the main compilation:
+
+```groovy
+kotlin {
+    jvm {
+        compilations.create('benchmark') { associateWith(compilations.main) }
+    }
+}
+```
+
+The `kotlinx-benchmark-runtime` dependency is applied to the `commonMain` source set (see step 2.2), indicating that it will be used across all platforms:
+
+```groovy
+kotlin {
+    sourceSets {
+        commonMain {
+            dependencies {
+                implementation 'org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8'
+            }
+        }
+    }
+}
+```
+
+
+### Step 3: Writing Benchmarks
+
+Create a new Kotlin source file in your `src/commonMain/kotlin` directory to write your benchmarks. Each benchmark is a class or object with methods annotated with `@Benchmark`. Here's a simple example:
+
+```kotlin
+import kotlinx.benchmark.Benchmark
+import kotlinx.benchmark.Scope
+import kotlinx.benchmark.State
+
+@State(Scope.Benchmark)
+open class ListBenchmark {
+    @Benchmark
+    fun listOfBenchmark(): List<Int> {
+        // Returning the result keeps the compiler from eliminating the work as dead code.
+        return listOf(1, 2, 3, 4, 5)
+    }
+}
+```
+
+Common code uses the annotations from the `kotlinx.benchmark` package; on the JVM they map to their JMH counterparts. Benchmark classes and their methods must be `open`, because JMH creates subclasses when the JVM target runs, and non-JVM targets require `@State(Scope.Benchmark)` as well. The `allopen` plugin we added earlier opens classes annotated with `@State` for you.
+
+### Step 4: Running Your Benchmarks
+
+Executing your benchmarks is an important part of the process. This will allow you to gather performance data about your code. There are two primary ways to run your benchmarks: through the command line or using your IDE.
+
+#### 4.1 Running Benchmarks From the Command Line
+
+The simplest way to run your benchmarks is by using the Gradle task `benchmark`. You can do this by opening a terminal, navigating to the root of your project, and entering the following command:
+
+```bash
+./gradlew benchmark
+```
+
+This command instructs Gradle to execute the `benchmark` task, which in turn runs your benchmarks.
+
+#### 4.2 Understanding Benchmark Execution
+
+The execution of your benchmarks might take some time. This is normal and necessary: benchmarks must be run for a sufficient length of time to produce reliable, statistically significant results.
+
+For more details regarding the available Gradle tasks, refer to this [document](docs/tasks-overview.md).
+
+### Step 5: Analyze the Results
+
+To fully understand and make the best use of these results, it's important to know how to interpret and analyze them properly. For a comprehensive guide on interpreting and analyzing benchmarking results, please refer to this dedicated document: [Interpreting and Analyzing Results](docs/interpreting-results.md).
+
+Congratulations! 
You have successfully set up a Kotlin Multiplatform benchmark project using kotlinx-benchmark. From 8da1fdfe289e77756b9fd28567dc358a6ef244ab Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sat, 24 Jun 2023 00:44:01 -0700 Subject: [PATCH 10/30] docs: Refine benchmarking-overview and benchmark-runtime docs for clarity and relevance --- docs/benchmark-runtime.md | 32 ++++++++++++++++++++++++++++++++ docs/benchmarking-overview.md | 4 +--- 2 files changed, 33 insertions(+), 3 deletions(-) diff --git a/docs/benchmark-runtime.md b/docs/benchmark-runtime.md index 75943cb2..33eceba7 100644 --- a/docs/benchmark-runtime.md +++ b/docs/benchmark-runtime.md @@ -23,3 +23,35 @@ The benchmark configuration is handled through annotations that map directly to ### File Operations File reading and writing operations are performed using standard Java I/O classes, providing a consistent and reliable method for file operations across all JVM platforms. + +## JavaScript Target + +The JavaScript target in kotlinx.benchmark leverages the Benchmark.js library to run benchmarks. Benchmark.js is a robust tool for executing JavaScript benchmarks in different environments, including browsers and Node.js. + +### Benchmark Execution + +Benchmark.js handles the execution of benchmarks, managing the setup, running, and teardown of tests. Just like JMH for JVM, it also handles the calculation of results, providing a reliable framework for benchmarking on JavaScript. + +### Benchmark Configuration + +The benchmark configuration in JavaScript is handled through a suite and benchmark API provided by benchmark.js. The API allows the users to specify the details of the benchmark such as the function to benchmark, setup function, and teardown function. + +### File Operations + +File reading and writing operations in JavaScript are performed using the standard JavaScript file I/O APIs. This includes the fs module in Node.js or the File API in browsers. 
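As a sketch of how the JavaScript pieces above are wired up from Gradle (a Node.js-backed target with the IR compiler is assumed here; adapt to your build):

```kotlin
kotlin {
    js(IR) {
        nodejs() // benchmarks execute under Node.js via Benchmark.js
    }
}

benchmark {
    targets {
        register("js")
    }
}
```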
+ +## Native Target + +The Native target in kotlinx.benchmark leverages the built-in benchmarking capabilities of the Kotlin/Native runtime to execute benchmarks. + +### Benchmark Execution + +Kotlin/Native manages the execution of benchmarks, handling the setup, running, and teardown of tests. Just like JMH for JVM and Benchmark.js for JavaScript, Kotlin/Native also takes care of the calculation of results, providing a reliable framework for benchmarking in a native environment. + +### Benchmark Configuration + +The benchmark configuration in Kotlin/Native is handled through annotations that are similar to those used in the JVM target. These include `@State`, `@Benchmark`, `@BenchmarkMode`, `@OutputTimeUnit`, `@Warmup`, `@Measurement`, and `@Param`. + +### File Operations + +File operations in the Native target are handled through Kotlin's standard file I/O APIs. These APIs are compatible with all platforms supported by Kotlin/Native, providing a consistent method for file operations. diff --git a/docs/benchmarking-overview.md b/docs/benchmarking-overview.md index b6941e60..570023bc 100644 --- a/docs/benchmarking-overview.md +++ b/docs/benchmarking-overview.md @@ -1,4 +1,4 @@ -# Harnessing Code Performance: The Art and Science of Benchmarking with kotlinx-benchmark +# Code Benchmarking: A Brief Overview This guide serves as your compass for mastering the art of benchmarking with kotlinx-benchmark. By harnessing the power of benchmarking, you can unlock performance insights in your code, uncover bottlenecks, compare different implementations, detect regressions, and make informed decisions for optimization. @@ -47,8 +47,6 @@ Benchmarking provides several benefits for software development projects: 4. **Hardware and Environment Variations:** Benchmarking helps evaluate the impact of different hardware configurations, system setups, or environments on performance. It enables developers to optimize their software for specific target platforms. 
- comparison across systems. This eases sharing and discussing performance results within a team or the larger community. - ## Benchmarking Use Cases Benchmarking serves as a critical tool across various scenarios in software development. Here are a few notable use cases: From a1412af3a256d194490daea36790a241c5e05228 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sat, 24 Jun 2023 00:44:55 -0700 Subject: [PATCH 11/30] fix: remove placeholder png --- docs/images/placeholder.png | 1 - 1 file changed, 1 deletion(-) delete mode 100644 docs/images/placeholder.png diff --git a/docs/images/placeholder.png b/docs/images/placeholder.png deleted file mode 100644 index 8b137891..00000000 --- a/docs/images/placeholder.png +++ /dev/null @@ -1 +0,0 @@ - From 4c59e60b433774aff7cf5cd11252675c9ae2a01c Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sat, 24 Jun 2023 00:50:26 -0700 Subject: [PATCH 12/30] docs: update broken hyperlinks --- docs/compatibility.md | 2 +- docs/configuration-options.md | 2 +- docs/multiplatform-setup.md | 4 ++-- docs/singleplatform-setup.md | 6 +++--- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/compatibility.md b/docs/compatibility.md index 81782764..7b26d1f9 100644 --- a/docs/compatibility.md +++ b/docs/compatibility.md @@ -18,4 +18,4 @@ This guide provides you with information on the compatibility of different versi *Note: "Minimum Required" implies that any higher version than the one mentioned will also be compatible.* -For more details about the changes, improvements, and updates in each `kotlinx-benchmark` version, please refer to the [RELEASE NOTES](https://github.com/Kotlin/kotlinx-benchmark/releases) and [CHANGELOG](#). +For more details about the changes, improvements, and updates in each `kotlinx-benchmark` version, please refer to the [RELEASE NOTES](https://github.com/Kotlin/kotlinx-benchmark/releases) and [CHANGELOG](../CHANGELOG.md). 
diff --git a/docs/configuration-options.md b/docs/configuration-options.md index 2165b36d..521ee317 100644 --- a/docs/configuration-options.md +++ b/docs/configuration-options.md @@ -28,7 +28,7 @@ benchmark { } ``` -For detailed guidance on creating separate source sets for benchmarks in your Kotlin project, please refer to [Benchmarking with Gradle: Creating Separate Source Sets](docs/separate-source-sets.md). +For detailed guidance on creating separate source sets for benchmarks in your Kotlin project, please refer to [Benchmarking with Gradle: Creating Separate Source Sets](separate-source-sets.md). ## Step 2: Tailoring the Setup – Create Benchmark Configurations diff --git a/docs/multiplatform-setup.md b/docs/multiplatform-setup.md index 42c19ab7..67477c8a 100644 --- a/docs/multiplatform-setup.md +++ b/docs/multiplatform-setup.md @@ -232,10 +232,10 @@ This command instructs Gradle to execute the `benchmark` task, which in turn run The execution of your benchmarks might take some time. This is normal and necessary: benchmarks must be run for a sufficient length of time to produce reliable, statistically significant results. -For more details regarding the available Gradle tasks, refer to this [document](docs/tasks-overview.md). +For more details regarding the available Gradle tasks, refer to this [document](tasks-overview.md). ### Step 5: Analyze the Results -To fully understand and make the best use of these results, it's important to know how to interpret and analyze them properly. For a comprehensive guide on interpreting and analyzing benchmarking results, please refer to this dedicated document: [Interpreting and Analyzing Results](docs/interpreting-results.md). +To fully understand and make the best use of these results, it's important to know how to interpret and analyze them properly. 
For a comprehensive guide on interpreting and analyzing benchmarking results, please refer to this dedicated document: [Interpreting and Analyzing Results](interpreting-results.md). Congratulations! You have successfully set up a Kotlin Multiplatform benchmark project using kotlinx-benchmark. diff --git a/docs/singleplatform-setup.md b/docs/singleplatform-setup.md index 7f173702..692a415e 100644 --- a/docs/singleplatform-setup.md +++ b/docs/singleplatform-setup.md @@ -2,7 +2,7 @@ ### Prerequisites -Before starting, ensure your development environment meets the following [requirements](docs/compatibility.md): +Before starting, ensure your development environment meets the following [requirements](compatibility.md): - **Kotlin**: Version 1.8.20 or newer. Install Kotlin from the [official website](https://kotlinlang.org/) or a package manager like SDKMAN! or Homebrew. - **Gradle**: Version 8.0 or newer. Download Gradle from the [official website](https://gradle.org/) or use a package manager. @@ -176,10 +176,10 @@ This command instructs Gradle to execute the `benchmark` task, which in turn run The execution of your benchmarks might take some time. This is normal and necessary: benchmarks must be run for a sufficient length of time to produce reliable, statistically significant results. -For more details regarding the available Gradle tasks, refer to this [document](docs/tasks-overview.md). +For more details regarding the available Gradle tasks, refer to this [document](tasks-overview.md). ### Step 5: Analyze the Results -To fully understand and make the best use of these results, it's important to know how to interpret and analyze them properly. For a comprehensive guide on interpreting and analyzing benchmarking results, please refer to this dedicated document: [Interpreting and Analyzing Results](docs/interpreting-results.md). +To fully understand and make the best use of these results, it's important to know how to interpret and analyze them properly. 
For a comprehensive guide on interpreting and analyzing benchmarking results, please refer to this dedicated document: [Interpreting and Analyzing Results](interpreting-results.md). Congratulations! You have successfully set up a Kotlin/JVM benchmark project using kotlinx-benchmark. From 05d80897d3f69bf4e45d9b63b8156578a7fec1e0 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Tue, 27 Jun 2023 00:12:18 -0700 Subject: [PATCH 13/30] docs(README.md): add hyperlinks and enhance wording --- README.md | 28 +++++++++++++++++++++++----- 1 file changed, 23 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index ed366dc1..6b7ce5bb 100644 --- a/README.md +++ b/README.md @@ -115,7 +115,7 @@ For different platforms, there may be distinct requirements and settings that ne #### Kotlin/JVM -When benchmarking Kotlin/JVM code with Java Microbenchmark Harness (JMH), you should use the [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html). This plugin ensures your benchmark classes and methods are `open`, meeting JMH's requirements. +When benchmarking Kotlin/JVM code with Java Microbenchmark Harness (JMH), you should use the [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html). This plugin ensures your benchmark classes and methods are `open`, meeting JMH's requirements. Make sure the Kotlin JVM plugin is applied as well. ```kotlin plugins { @@ -161,7 +161,19 @@ This configuration ensures that your `MyBenchmark` class and its `benchmarkMetho -You can alternatively mark your benchmark classes and methods `open` manually, but using the `allopen` plugin enhances code maintainability. +You can alternatively mark your benchmark classes and methods `open` manually, but using the `allopen` plugin enhances code maintainability. For a practical example, please refer to [examples](examples/kotlin-kts). + +#### Java + + In order to conduct benchmarking in Java, you need to apply the Java plugin.
+ +```kotlin +plugins { + id("java") +} +``` + +For a practical example, please refer to [examples](examples/java). #### Kotlin/JS @@ -180,9 +192,13 @@ For Kotlin/JS, only IR backend is supported. For more information on the IR comp #### Multiplatform -For multiplatform projects, add the `kotlinx-benchmark-runtime` dependency to the `commonMain` source set: +For multiplatform projects, add the `kotlinx-benchmark-runtime` dependency to the `commonMain` source set, and be sure to apply the multiplatform plugin, as shown below: ```kotlin +plugins { + id("multiplatform") +} + kotlin { sourceSets { commonMain { @@ -194,7 +210,7 @@ kotlin { } ``` -The platform-specific artifacts will be resolved automatically. +This setup enables running benchmarks in the main compilation of any registered targets. Another option is to register the compilation you want to run benchmarks from. The platform-specific artifacts will be resolved automatically. For a practical example, please refer to [examples](examples/multiplatform). ### Benchmark Configuration @@ -290,9 +306,11 @@ benchmark { } ``` +For comprehensive guidance on configuring your benchmark setup, please refer to our detailed documentation on [Configuring kotlinx-benchmark](docs/configuration-options.md). + # Examples -The project contains [examples](https://github.com/Kotlin/kotlinx-benchmark/tree/master/examples) subproject that demonstrates using the library. +To help you better understand how to use the kotlinx-benchmark library, we've provided an [examples](examples) subproject. These examples showcase various use cases and offer practical insights into the library's functionality. 
## Contributing From c26f523b82b4c4a8361195f1cc0c25c7cda3bb10 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Fri, 30 Jun 2023 22:56:46 -0700 Subject: [PATCH 14/30] docs!: overhaul for accuracy and quality --- README.md | 58 ++---- docs/benchmark-runtime.md | 67 ++---- docs/benchmarking-overview.md | 2 +- docs/compatibility.md | 2 +- docs/configuration-options.md | 137 +++--------- docs/interpreting-results.md | 2 +- docs/multiplatform-setup.md | 379 ++++++++++++++++++++++++---------- docs/separate-source-sets.md | 28 +-- docs/singleplatform-setup.md | 241 ++++++++++++--------- docs/tasks-overview.md | 53 ++++- 10 files changed, 520 insertions(+), 449 deletions(-) diff --git a/README.md b/README.md index 6b7ce5bb..1cbeafee 100644 --- a/README.md +++ b/README.md @@ -53,23 +53,15 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS,
Kotlin DSL -1. **Adding Dependency**: Add the `kotlinx-benchmark-runtime` dependency in your `build.gradle.kts` file. - - ```kotlin - dependencies { - implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8") - } - ``` - -2. **Applying Benchmark Plugin**: Next, apply the benchmark plugin. +1. **Applying Benchmark Plugin**: Apply the benchmark plugin. ```kotlin plugins { - id("org.jetbrains.kotlinx.benchmark") version "0.4.8" + id("org.jetbrains.kotlinx.benchmark") version "0.4.9" } ``` -3. **Specifying Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: +2. **Specifying Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: ```kotlin repositories { @@ -82,24 +74,16 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS,
Groovy DSL -1. **Adding Dependency**: In your `build.gradle` file, include the `kotlinx-benchmark-runtime` dependency. - - ```groovy - dependencies { - implementation 'org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8' - } - ``` - -2. **Applying Benchmark Plugin**: Next, apply the benchmark plugin. +1. **Applying Benchmark Plugin**: Apply the benchmark plugin. ```groovy plugins { id 'org.jetbrains.kotlin.plugin.allopen' version "1.8.21" - id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.9' } ``` -3. **Specifying Repository**: Ensure you have `mavenCentral()` in the list of repositories: +2. **Specifying Repository**: Ensure you have `mavenCentral()` in the list of repositories: ```groovy repositories { @@ -163,42 +147,26 @@ This configuration ensures that your `MyBenchmark` class and its `benchmarkMetho You can alternatively mark your benchmark classes and methods `open` manually, but using the `allopen` plugin enhances code maintainability. For a practical example, please refer to [examples](examples/kotlin-kts). -#### Java - - In order to conduct benchmarking in Java, you need to apply the Java plugin. - -```kotlin -plugins { - id("java") -} -``` - -For a practical example, please refer to [examples](examples/java). - #### Kotlin/JS -For benchmarking Kotlin/JS code Node.js execution enviroment should be targeted. See https://kotlinlang.org/docs/js-project-setup.html#execution-environments. This is because kotlinx-benchmark-runtime uses Node.js environment to run benchmarks. Include the `nodejs()` method call in the `kotlin` block: - +Specify a compiler like the [IR compiler](https://kotlinlang.org/docs/js-ir-compiler.html) and set benchmarking targets in one step. Here, `jsIr` and `jsIrBuiltIn` are both using the IR compiler. The former uses benchmark.js, while the latter uses Kotlin's built-in plugin. 
```kotlin kotlin { - js { - nodejs() + js("jsIr", IR) { + nodejs() + } + js("jsIrBuiltIn", IR) { + nodejs() } } ``` #### Multiplatform -For multiplatform projects, add the `kotlinx-benchmark-runtime` dependency to the `commonMain` source set, and be sure to apply the multiplatform plugin, as shown below: +For multiplatform projects, add the `kotlinx-benchmark-runtime` dependency to the `commonMain` source set: ```kotlin -plugins { - id("multiplatform") -} - kotlin { sourceSets { commonMain { diff --git a/docs/benchmark-runtime.md b/docs/benchmark-runtime.md index 33eceba7..0edc0edf 100644 --- a/docs/benchmark-runtime.md +++ b/docs/benchmark-runtime.md @@ -1,57 +1,28 @@ -# kotlinx.benchmark: A Comprehensive Guide to Benchmark Runtime for Each Target +# Table of Contents +1. [Introduction](#understanding-benchmark-runtime-across-targets) +2. [JVM: Harnessing JMH](#jvm-harnessing-jmh) +3. [JavaScript: Benchmark.js Integration and In-built Support](#javascript-benchmarkjs-integration-and-in-built-support) +4. [Native: Harnessing Native Capabilities](#native-harnessing-native-capabilities) +5. [WebAssembly (Wasm): Custom-Built Benchmarking](#webassembly-wasm-custom-built-benchmarking) -This document provides an in-depth overview of the kotlinx.benchmark library, focusing on how the benchmark runtime works for each supported target: JVM, JavaScript, and Native. This guide is designed for beginners and intermediates, providing a clear understanding of the underlying libraries used and the benchmark execution process. +# Understanding Benchmark Runtime Across Targets -## Table of Contents +This guide explains the underlying libraries that kotlinx-benchmark uses to measure performance on each supported platform, and describes the benchmark runtime process.
-- [JVM Target](#jvm-target) -- [JavaScript Target](#javascript-target) -- [Native Target](#native-target) +## JVM: Harnessing JMH +In the JVM ecosystem, Kotlinx Benchmark capitalizes on the Java microbenchmarking harness [JMH](https://openjdk.org/projects/code-tools/jmh/). Designed by OpenJDK, JMH is a well-respected tool for creating, executing, and scrutinizing nano/micro/milli/macro benchmarks composed in Java and other JVM-compatible languages. -## JVM Target +Kotlinx Benchmark complements JMH with an array of advanced features that fine-tune JVM-specific settings. An exemplary feature is the handling of 'forks', a mechanism that facilitates running multiple tests in distinct JVM processes. By doing so, it assures a pristine environment for each test, enhancing the reliability of the benchmark results. Moreover, its sophisticated error and exception handling system ensures that any issues arising during testing are logged and addressed. -The JVM target in kotlinx.benchmark leverages the Java Microbenchmark Harness (JMH) to run benchmarks. JMH is a widely-used tool for building, running, and analyzing benchmarks written in Java and other JVM languages. +## JavaScript: Benchmark.js Integration and In-built Support +Targeting JavaScript, Kotlinx Benchmark utilizes the `benchmark.js` library to measure performance. Catering to both synchronous and asynchronous benchmarks, this library enables evaluation of a vast array of JavaScript operations. `benchmark.js` operates by setting up a suite of benchmarks, where each benchmark corresponds to a distinct JavaScript operation to be evaluated. -### Benchmark Execution +It's noteworthy that, alongside `benchmark.js`, Kotlinx Benchmark also incorporates its own built-in yet somewhat limited benchmarking system for Kotlin/JavaScript runtime. -JMH handles the execution of benchmarks, managing the setup, running, and teardown of tests. 
It also handles the calculation of results, providing a robust and reliable framework for benchmarking on the JVM. +## Native: Harnessing Native Capabilities +For Native platforms, Kotlinx Benchmark resorts to its built-in benchmarking system, which is firmly rooted in platform-specific technologies. Benchmarks are defined in the form of suites, each representing a specific Kotlin/Native operation to be evaluated. -### Benchmark Configuration +## WebAssembly (Wasm): Custom-Built Benchmarking +For Kotlin code running on WebAssembly (Wasm), Kotlinx Benchmark deploys built-in mechanisms to establish a testing milieu and measure code performance. -The benchmark configuration is handled through annotations that map directly to JMH annotations. These include `@State`, `@Benchmark`, `@BenchmarkMode`, `@OutputTimeUnit`, `@Warmup`, `@Measurement`, and `@Param`. - -### File Operations - -File reading and writing operations are performed using standard Java I/O classes, providing a consistent and reliable method for file operations across all JVM platforms. - -## JavaScript Target - -The JavaScript target in kotlinx.benchmark leverages the Benchmark.js library to run benchmarks. Benchmark.js is a robust tool for executing JavaScript benchmarks in different environments, including browsers and Node.js. - -### Benchmark Execution - -Benchmark.js handles the execution of benchmarks, managing the setup, running, and teardown of tests. Just like JMH for JVM, it also handles the calculation of results, providing a reliable framework for benchmarking on JavaScript. - -### Benchmark Configuration - -The benchmark configuration in JavaScript is handled through a suite and benchmark API provided by benchmark.js. The API allows the users to specify the details of the benchmark such as the function to benchmark, setup function, and teardown function. - -### File Operations - -File reading and writing operations in JavaScript are performed using the standard JavaScript file I/O APIs. 
This includes the fs module in Node.js or the File API in browsers. - -## Native Target - -The Native target in kotlinx.benchmark leverages the built-in benchmarking capabilities of the Kotlin/Native runtime to execute benchmarks. - -### Benchmark Execution - -Kotlin/Native manages the execution of benchmarks, handling the setup, running, and teardown of tests. Just like JMH for JVM and Benchmark.js for JavaScript, Kotlin/Native also takes care of the calculation of results, providing a reliable framework for benchmarking in a native environment. - -### Benchmark Configuration - -The benchmark configuration in Kotlin/Native is handled through annotations that are similar to those used in the JVM target. These include `@State`, `@Benchmark`, `@BenchmarkMode`, `@OutputTimeUnit`, `@Warmup`, `@Measurement`, and `@Param`. - -### File Operations - -File operations in the Native target are handled through Kotlin's standard file I/O APIs. These APIs are compatible with all platforms supported by Kotlin/Native, providing a consistent method for file operations. +In this setup, similarly a suite of benchmarks is created, each pinpointing a different code segment. The execution time of each benchmark is gauged using high-resolution JavaScript functions, thereby providing accurate and precise performance measurements. 
\ No newline at end of file diff --git a/docs/benchmarking-overview.md b/docs/benchmarking-overview.md index 570023bc..33ebc717 100644 --- a/docs/benchmarking-overview.md +++ b/docs/benchmarking-overview.md @@ -131,4 +131,4 @@ While kotlinx-benchmark is geared towards microbenchmarking — typically examin If you'd like to dig deeper into the world of benchmarking, here are some resources to help you on your journey: -- [Mastering High Performance with Kotlin](https://www.amazon.com/Mastering-High-Performance-Kotlin-difficulties/dp/178899664X) +- [Mastering High Performance with Kotlin](https://www.amazon.com/Mastering-High-Performance-Kotlin-difficulties/dp/178899664X) \ No newline at end of file diff --git a/docs/compatibility.md b/docs/compatibility.md index 7b26d1f9..babcc226 100644 --- a/docs/compatibility.md +++ b/docs/compatibility.md @@ -18,4 +18,4 @@ This guide provides you with information on the compatibility of different versi *Note: "Minimum Required" implies that any higher version than the one mentioned will also be compatible.* -For more details about the changes, improvements, and updates in each `kotlinx-benchmark` version, please refer to the [RELEASE NOTES](https://github.com/Kotlin/kotlinx-benchmark/releases) and [CHANGELOG](../CHANGELOG.md). +For more details about the changes, improvements, and updates in each `kotlinx-benchmark` version, please refer to the [RELEASE NOTES](https://github.com/Kotlin/kotlinx-benchmark/releases) and [CHANGELOG](../CHANGELOG.md). \ No newline at end of file diff --git a/docs/configuration-options.md b/docs/configuration-options.md index 521ee317..e1882c11 100644 --- a/docs/configuration-options.md +++ b/docs/configuration-options.md @@ -1,120 +1,33 @@ -# Configuring kotlinx-benchmark +# Mastering kotlinx-benchmark Configuration -kotlinx-benchmark offers a plethora of configuration options that enable you to customize your benchmarking setup according to your precise needs. 
This advanced guide provides an in-depth explanation of how to setup your benchmark configurations, alongside detailed insights into kotlinx's functionalities. +This guide walks through the full set of `kotlinx-benchmark` configuration options, from the core settings used by every benchmark profile to advanced, platform-specific controls, so you can fine-tune your setup and obtain accurate, reliable measurements. -## Table of Contents +## Core Configuration Options: The Essential Settings -- [Step 1: Laying the Foundation – Establish Benchmark Targets](#step-1) -- [Step 2: Tailoring the Setup – Create Benchmark Configurations](#step-2) -- [Step 3: Fine-tuning Your Setup – Understanding and Setting Configuration Options](#step-3) - - [Basic Configuration Options: The Essential Settings](#step-3a) - - [Advanced Configuration Options: The Power Settings](#step-3b) -- [The Benchmark Configuration in Action: An In-Depth Example](#example) +The `configurations` section of the `benchmark` block is where you control the parameters of your benchmark profiles. Each configuration offers a rich array of settings. Be aware that values defined in the build script will override those specified by annotations in the code. -## Step 1: Laying the Foundation – Establish Benchmark Targets +| Option | Description | Possible Values | Corresponding Annotation | +| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------ | ------------------------ | +| `iterations` | Sets the number of measurement iterations. | Integer | @Measurement | +| `warmups` | Sets the number of iterations for system warming, ensuring accurate measurements.
| Integer | @Warmup | +| `iterationTime` | Sets the duration for each iteration, both measurement and warm-up. | Integer | @Measurement | +| `iterationTimeUnit` | Defines the unit for `iterationTime`. | "ns", "μs", "ms", "s", "m", "h", "d" | @Measurement | +| `outputTimeUnit` | Sets the unit for the results display. | "ns", "μs", "ms", "s", "m", "h", "d" | @OutputTimeUnit | +| `mode` | Selects "thrpt" for measuring the number of function calls per unit time or "avgt" for measuring the time per function call. | "thrpt", "avgt" | @BenchmarkMode | +| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. | Regex pattern | - | +| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. | Regex pattern | - | +| `param("name", "value1", "value2")` | Assigns values to a public mutable property, annotated with `@Param`. | Any string values | @Param | +| `reportFormat` | Defines the benchmark report's format options. | "json", "csv", "scsv", "text" | - | -To start off, define the `benchmark` section within your `build.gradle` file. This section is your playground where you register the compilations you wish to run benchmarks on, within a `targets` subsection. +## Expert Configuration Options: The Power Settings -Targets can be registered in two ways. Either by their name, such as `"jvm"`, which registers its `main` compilation, meaning `register("jvm")` and `register("jvmMain")` will register the same compilation. Alternatively, you can register a source set, for instance, `"jvmTest"` or `"jsBenchmark"`, which will register the corresponding compilation. Here's an illustration using a multiplatform project: +The power of kotlinx-benchmark extends beyond basic settings. 
Delve into platform-specific options for tighter control over your benchmarks: -```groovy -benchmark { - targets { - register("jvm") - register("js") - register("native") - register("wasm") // Experimental - } -} -``` +| Option | Platform | Description | Possible Values | Corresponding Annotation | +| --------------------------------------------- | ---------------------- | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ | ------------------------ | +| `advanced("nativeFork", "value")` | Kotlin/Native | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | "perBenchmark", "perIteration" | - | +| `advanced("nativeGCAfterIteration", "value")` | Kotlin/Native | Triggers garbage collection after each iteration when set to `true`. | `true`, `false` | - | +| `advanced("jvmForks", "value")` | Kotlin/JVM | Determines how many times the harness should fork. | "0" (no fork), "1", "definedByJmh" (JMH decides) | @Fork | +| `advanced("jsUseBridge", "value")` | Kotlin/JS, Kotlin/Wasm | Disables the generation of benchmark bridges to stop inlining optimizations when set to `false`. | `true`, `false` | - | -For detailed guidance on creating separate source sets for benchmarks in your Kotlin project, please refer to [Benchmarking with Gradle: Creating Separate Source Sets](separate-source-sets.md). - -## Step 2: Tailoring the Setup – Create Benchmark Configurations - -Having laid the groundwork with your targets, the next phase involves creating configurations for your benchmarks. You accomplish this by adding a `configurations` subsection within your `benchmark` block. - -The kotlinx benchmark toolkit automatically creates a `main` configuration as a default. However, you can mold this tool to suit your needs by creating additional configurations. 
These configurations are your control knobs, letting you adjust the parameters of your benchmark profiles. Here's how: - -```groovy -benchmark { - configurations { - main { - // Configuration parameters for the default profile go here - } - smoke { - // Create and configure a "smoke" configuration. - } - } -} -``` - -## Step 3: Fine-tuning Your Setup – Understanding and Setting Configuration Options - -Each configuration brings a bundle of options to the table, providing you with the flexibility to meet your specific benchmarking needs. We delve into these options to give you a better understanding and help you make the most of the basic and advanced settings: - -**Note:** Many of these configuration options correspond to annotations in the benchmark code. Please be aware that any values provided in the build script will override those defined by annotations in the code. - -### Basic Configuration Options: The Essential Settings - -| Option | Description | Default Value | Possible Values | Corresponding Annotation | -| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------------- | ------------------------------------ | ------------------------ | -| `iterations` | Specifies the number of iterations for measurements. | - | Integer | @BenchmarkMode | -| `warmups` | Specifies the number of iterations for system warming, ensuring accurate measurements. | - | Integer | @Warmup | -| `iterationTime` | Specifies the duration for each iteration, both measurement and warm-up. | - | Integer | @Measurement | -| `iterationTimeUnit` | Specifies the unit for `iterationTime`. | - | "ns", "μs", "ms", "s", "m", "h", "d" | @Measurement | -| `outputTimeUnit` | Specifies the unit for the results display. 
| - | "ns", "μs", "ms", "s", "m", "h", "d" | @OutputTimeUnit | -| `mode` | Selects between "thrpt" for measuring the number of function calls per unit time or "avgt" for measuring the time per function call. | - | "thrpt", "avgt" | @BenchmarkMode | -| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. | - | Regex pattern | - | -| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. | - | Regex pattern | - | -| `param("name", "value1", "value2")` | Assigns values to a public mutable property, annotated with `@Param`. | - | Any string values | @Param | -| `reportFormat` | Defines the benchmark report's format options. | "json" | "json", "csv", "scsv", "text" | - | - -### Advanced Configuration Options: The Power Settings - -Beyond the basics, kotlinx allows you to take a deep dive into platform-specific settings, offering more control over your benchmarks: - -| Option | Platform | Description | Default Value | Possible Values | Corresponding Annotation | -| --------------------------------------------- | ---------------------- | ---------------------------------------------------------------------------------------------------------------------- | -------------- | ------------------------------------------------ | ------------------------ | -| `advanced("nativeFork", "value")` | Kotlin/Native | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | "perBenchmark" | "perBenchmark", "perIteration" | - | -| `advanced("nativeGCAfterIteration", "value")` | Kotlin/Native | Triggers garbage collection after each iteration when set to `true`. | `false` | `true`, `false` | - | -| `advanced("jvmForks", "value")` | Kotlin/JVM | Determines how many times the harness should fork. 
| "1" | "0" (no fork), "1", "definedByJmh" (JMH decides) | @Fork | -| `advanced("jsUseBridge", "value")` | Kotlin/JS, Kotlin/Wasm | Disables the generation of benchmark bridges to stop inlining optimizations when set to `false`. | - | `true`, `false` | - | - -## The Benchmark Configuration in Action: An In-Depth Example - -```groovy -benchmark { - configurations { - main { - warmups = 20 // Execute 20 iterations for system warming to stabilize the JVM and ensure accurate measurements - iterations = 10 // Perform 10 iterations for the actual measurement - iterationTime = 3 // Each iteration lasts for 3 seconds - iterationTimeUnit = "s" // Time unit for iterationTime is seconds - mode = "avgt" // Benchmarking mode is set to average time per function call - outputTimeUnit = "ms" // The results will be displayed in milliseconds - include(".*MyBenchmark.*") // Only include benchmarks that match this regular expression pattern - param("size", "100", "200") // Assign two potential values ("100" and "200") to a property annotated with @Param - reportFormat = "json" // The benchmark report will be generated in JSON format for easy parsing and visualization - } - smoke { - warmups = 5 // Perform 5 warmup iterations - iterations = 3 // Perform 3 measurement iterations - iterationTime = 500 // Each iteration lasts for 500 milliseconds - iterationTimeUnit = "ms" // Time unit for iterationTime is milliseconds - advanced("nativeFork", "perIteration") // Execute each iteration in a separate Kotlin/Native process - advanced("nativeGCAfterIteration", "true") // Trigger garbage collection after each iteration in Kotlin/Native - } - } - targets { - register("jvm") { - jmhVersion = "1.21" - } - register("js") - register("native") - register("wasm") - } -} -``` - -With this guide, you should now be well-equipped to fine-tune your benchmarking process, ensuring you generate precise, reliable performance measurements every time. 
+With this guide at your side, you're ready to optimize your benchmarking process with `kotlinx-benchmark`. Happy benchmarking! \ No newline at end of file diff --git a/docs/interpreting-results.md b/docs/interpreting-results.md index c932b894..951c42ee 100644 --- a/docs/interpreting-results.md +++ b/docs/interpreting-results.md @@ -48,4 +48,4 @@ While analyzing benchmark results, watch out for these common pitfalls: 4. **Dead Code Elimination:** The JVM is very good at optimizing your code, and sometimes it can optimize your benchmark right out of existence! Make sure your benchmarks do real work and that their results are used somehow (often by returning them from the benchmark method), or else the JVM might optimize them away. -5. **Measurement error:** Ensure that you are not running any heavy processes in the background that could distort your benchmark results. +5. **Measurement error:** Ensure that you are not running any heavy processes in the background that could distort your benchmark results. \ No newline at end of file diff --git a/docs/multiplatform-setup.md b/docs/multiplatform-setup.md index 67477c8a..4fabc4da 100644 --- a/docs/multiplatform-setup.md +++ b/docs/multiplatform-setup.md @@ -1,241 +1,396 @@ -## Step-by-Step Guide for Multiplatform Benchmarking Setup Using kotlinx.benchmark +# Step-by-Step Setup Guide for a Multiplatform Benchmarking Project Using kotlinx-benchmark -### Prerequisites +This guide will walk you through the process of setting up a multiplatform benchmarking project in Kotlin using kotlinx-benchmark. -Before starting, ensure your development environment meets the following requirements: +# Table of Contents -- **Kotlin**: Version 1.8.20 or newer. Install Kotlin from the [official website](https://kotlinlang.org/) or a package manager like SDKMAN! or Homebrew. -- **Gradle**: Version 8.0 or newer. Download Gradle from the [official website](https://gradle.org/) or use a package manager. +1. [Prerequisites](#prerequisites) +2. 
[Kotlin/JS Project Setup](#kotlinjs-project-setup) +3. [Kotlin/Native Project Setup](#kotlinnative-project-setup) +4. [Kotlin/WASM Project Setup](#kotlinwasm-project-setup) +5. [Multiplatform Project Setup](#multiplatform-project-setup) +6. [Conclusion](#conclusion) -### Step 1: Create a New Kotlin Multiplatform Project +## Prerequisites -Begin by creating a new Kotlin Multiplatform project. You can do this either manually or by using an IDE such as IntelliJ IDEA, which can generate the project structure for you. +Ensure your development environment meets the following [requirements](compatibility.md): -### Step 2: Configure Build +- **Kotlin**: Version 1.8.20 or newer. +- **Gradle**: Version 8.0 or newer. -In this step, you'll modify your project's build file to add necessary dependencies and plugins. +## Kotlin/JS Project Setup + +### Step 1: Add the Benchmark Plugin
Kotlin DSL -#### 2.1 Apply the Necessary Plugins - -In your `build.gradle.kts` file, add the required plugins. These plugins are necessary for enabling Kotlin Multiplatform, making all classes and functions open, and using the kotlinx.benchmark plugin. +In your `build.gradle.kts` file, add the benchmarking plugin: ```kotlin plugins { kotlin("multiplatform") - kotlin("plugin.allopen") version "1.8.21" id("org.jetbrains.kotlinx.benchmark") version "0.4.8" } ``` +
+ +
+Groovy DSL + +In your `build.gradle` file, add the benchmarking plugin: + +```groovy +plugins { + id 'org.jetbrains.kotlin.multiplatform' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} +``` +
+
+### Step 2: Configure the Benchmark Plugin
+
+The next step is to configure the benchmark plugin so that it knows which targets to run the benchmarks against. In this case, we're specifying `js` as the target platform:
+
+```groovy
+benchmark {
+    targets {
+        register("js")
+    }
+}
+```
+
+### Step 3: Specify the Node.js Target and Optional Compiler
+
+In Kotlin/JS, set the Node.js runtime as your target:
+
+```kotlin
+kotlin {
+    js {
+        nodejs()
+    }
+}
+```
+
+Optionally, you can specify a compiler such as the [IR compiler](https://kotlinlang.org/docs/js-ir-compiler.html) and configure the benchmarking targets:
+
+```kotlin
+kotlin {
+    js("jsIr", IR) {
+        nodejs()
+    }
+    js("jsIrBuiltIn", IR) {
+        nodejs()
+    }
+}
+```
+
+In this configuration, `jsIr` and `jsIrBuiltIn` are both set up for Node.js and use the IR compiler. The `jsIr` target relies on an external benchmarking library (benchmark.js), whereas `jsIrBuiltIn` leverages the built-in Kotlin benchmarking support. Choosing one depends on your specific benchmarking requirements.
-#### 2.2 Add the Dependencies
+### Step 4: Add the Runtime Library
-Next, add the `kotlinx-benchmark-runtime` dependency to your project. This dependency contains the necessary runtime components for benchmarking.
+To run benchmarks, add the runtime library, `kotlinx-benchmark-runtime`, to the dependencies of your source set and enable Maven Central for dependency lookup:
 ```kotlin
 kotlin {
     sourceSets {
         commonMain {
             dependencies {
                 implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
             }
         }
     }
 }
+
+repositories {
+    mavenCentral()
+}
 ```
-#### 2.3 Apply the AllOpen Annotation
+### Step 5: Write Benchmarks
-Now, you need to instruct the [allopen](https://kotlinlang.org/docs/all-open-plugin.html) plugin to consider all benchmark classes and their methods as open. For that, apply the `allOpen` block and specify the JMH annotation `State`.
+Create a new source file in your `src/jsMain/kotlin` directory and write your benchmarks. Here's an example:
 ```kotlin
-allOpen {
-    annotation("org.openjdk.jmh.annotations.State")
+package benchmark
+
+import kotlinx.benchmark.*
+
+@State(Scope.Benchmark)
+open class JSBenchmark {
+    private var data = 0.0
+
+    @Setup
+    fun setUp() {
+        data = 3.0
+    }
+
+    @Benchmark
+    fun sqrtBenchmark(): Double {
+        return kotlin.math.sqrt(data)
+    }
 }
 ```
-#### 2.4 Define the Repositories
+### Step 6: Run Benchmarks
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`.
-Gradle needs to know where to find the libraries your project depends on. In this case, we're using the libraries hosted on Maven Central, so we need to specify that.
+## Kotlin/Native Project Setup
-In your `build.gradle.kts` file, add the following code block:
+### Step 1: Add the Benchmark Plugin
+
+Kotlin DSL + +In your `build.gradle.kts` file, add the benchmarking plugin: ```kotlin -repositories { - mavenCentral() +plugins { + kotlin("multiplatform") + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" } ``` +
-#### 2.5 Register the Benchmark Targets +
+Groovy DSL -Next, we need to inform the kotlinx.benchmark plugin about our benchmarking targets. For multiplatform projects, we need to register each platform separately. +In your `build.gradle` file, add the benchmarking plugin: -In your `build.gradle.kts` file, add the following code block within the `benchmark` block: +```groovy +plugins { + id 'org.jetbrains.kotlin.multiplatform' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} +``` +
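+Note that the plugin alone does not create a Kotlin/Native target: the `native` name registered in the next step must correspond to a target declared in the `kotlin` block. A minimal sketch; the `linuxX64` preset is an assumption, so pick the preset matching your host platform:
+
+```kotlin
+kotlin {
+    // Name the target "native" so it matches the name registered
+    // in the benchmark { targets { ... } } block below.
+    linuxX64("native")   // use macosX64("native") or mingwX64("native") on other hosts
+}
+```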
-```kotlin +### Step 2: Configure the Benchmark Plugin + +The next step is to configure the benchmark plugin to know which targets to run the benchmarks against. In this case, we're specifying `native` as the target platform: + +```groovy benchmark { targets { - register("jvm") - register("js") register("native") - // Add more platforms as needed } } ``` -#### 2.6 Define the Kotlin Targets and SourceSets +### Step 3: Add the Runtime Library -In the `kotlin` block, you define the different platforms that your project targets and the related source sets. Within each target, you can specify the related compilations. For the JVM, you create a specific 'benchmark' compilation associated with the main compilation. +To run benchmarks, add the runtime library, `kotlinx-benchmark-runtime`, to the dependencies of your source set and enable Maven Central for dependencies lookup: ```kotlin -jvm { - compilations.create('benchmark') { associateWith(compilations.main) } +kotlin { + sourceSets { + commonMain { + dependencies { + implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.4") + } + } + } +} + +repositories { + mavenCentral() } ``` -The dependency `kotlinx-benchmark-runtime` is applied to the `commonMain` source set, indicating that it will be used across all platforms: +### Step 4: Write Benchmarks + +Create a new source file in your `src/nativeMain/kotlin` directory and write your benchmarks. Here's an example: ```kotlin -sourceSets { - commonMain { - dependencies { - implementation project(":kotlinx-benchmark-runtime") - } +package benchmark + +import org.openjdk.jmh.annotations.* + +@State(Scope.Benchmark) +open class NativeBenchmark { + private var data = 0.0 + + @Setup + fun setUp() { + data = 3.0 + } + + @Benchmark + fun sqrtBenchmark(): Double { + return kotlin.math.sqrt(data) } } ``` -
+### Step 5: Run Benchmarks -
-Groovy DSL +In the terminal, navigate to your project's root directory and run `./gradlew benchmark`. -#### 2.1 Apply the Necessary Plugins +## Kotlin/WASM Project Setup -In your `build.gradle` file, apply the required plugins. These plugins are necessary for enabling Kotlin Multiplatform, making all classes and functions open, and using the kotlinx.benchmark plugin. +### Step 1: Add the Benchmark Plugin -```groovy +
+Kotlin DSL + +In your `build.gradle.kts` file, add the following: + +```kotlin plugins { - id 'org.jetbrains.kotlin.multiplatform' - id 'org.jetbrains.kotlin.plugin.allopen' version '1.8.21' - id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' + kotlin("multiplatform") + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" } ``` +
-#### 2.2 Add the Dependencies +
+Groovy DSL -Next, add the `kotlinx-benchmark-runtime` dependency to your project. This dependency contains the necessary runtime components for benchmarking. +In your `build.gradle` file, add the following: ```groovy -dependencies { - implementation 'org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8' +plugins { + id 'org.jetbrains.kotlin.multiplatform' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' } ``` +
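+As with the other platforms, the `wasm` name registered in the next step must correspond to a target declared in the `kotlin` block. Wasm support is experimental and the exact DSL depends on your Kotlin version; a sketch assuming the experimental `wasm` target available in Kotlin 1.8.20:
+
+```kotlin
+kotlin {
+    wasm("wasm") {
+        nodejs()
+    }
+}
+```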
-#### 2.3 Apply the AllOpen Annotation +### Step 2: Configure the Benchmark Plugin -Now, you need to instruct the [allopen](https://kotlinlang.org/docs/all-open-plugin.html) plugin to consider all benchmark classes and their methods as open. For that, apply the `allOpen` block and specify the JMH annotation `State`. +The next step is to configure the benchmark plugin to know which targets to run the benchmarks against. In this case, we're specifying `wasm` as the target platform: ```groovy -allOpen { - annotation("org.openjdk.jmh.annotations.State") +benchmark { + targets { + register("wasm") + } } ``` -#### 2.4 Define the Repositories +### Step 3: Add Runtime Library -Gradle needs to know where to find the libraries your project depends on. In this case, we're using the libraries hosted on Maven Central, so we need to specify that. +To run benchmarks, add the runtime library, `kotlinx-benchmark-runtime`, to the dependencies of your source set and enable Maven Central for dependencies lookup: -In your `build.gradle` file, add the following code block: +```kotlin +kotlin { + sourceSets { + commonMain { + dependencies { + implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.4") + } + } + } +} -```groovy repositories { mavenCentral() } ``` -#### 2.5 Register the Benchmark Targets +### Step 4: Write Benchmarks -Next, we need to inform the kotlinx.benchmark plugin about our benchmarking targets. For multiplatform projects, we need to register each platform separately. +Create a new source file in your `src/wasmMain/kotlin` directory and write your benchmarks. 
Here's an example:
-In your `build.gradle` file, add the following code block within the `benchmark` block:
+
+```kotlin
+package benchmark
+
+import kotlinx.benchmark.*
-```groovy
-benchmark {
-    targets {
-        register("jvm")
-        register("js")
-        register("native")
-        // Add more platforms as needed
+
+@State(Scope.Benchmark)
+open class WASMBenchmark {
+    private var data = 0.0
+
+    @Setup
+    fun setUp() {
+        data = 3.0
+    }
+
+    @Benchmark
+    fun sqrtBenchmark(): Double {
+        return kotlin.math.sqrt(data)
+    }
 }
 ```
-#### 2.6 Define the Kotlin Targets and SourceSets
+### Step 5: Run Benchmarks
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`. For a practical example, please refer to the [examples](../examples/multiplatform).
+
+## Kotlin Multiplatform Project Setup
+
+### Step 1: Add the Benchmark Plugin
+
+Kotlin DSL -In the `kotlin` block, you define the different platforms that your project targets and the related source sets. Within each target, you can specify the related compilations. For the JVM, you create a specific 'benchmark' compilation associated with the main compilation. +In your `build.gradle.kts` file, add the following: ```kotlin -jvm { - compilations.create('benchmark') { associateWith(compilations.main) } +plugins { + kotlin("multiplatform") + id("org.jetbrains.kotlinx.benchmark") version "0.4.8" } ``` +
+ +
+Groovy DSL -The dependency `kotlinx-benchmark-runtime` is applied to the `commonMain` source set, indicating that it will be used across all platforms: +In your `build.gradle` file, add the following: -```kotlin -sourceSets { - commonMain { - dependencies { - implementation project(":kotlinx-benchmark-runtime") - } - } +```groovy +plugins { + id 'org.jetbrains.kotlin.multiplatform' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' } ``` -
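+The benchmark plugin can only register target names that actually exist, so declare each platform in the `kotlin` block as well. A sketch; the particular presets used here (`linuxX64`, the experimental `wasm` DSL) are assumptions to adapt to your project:
+
+```kotlin
+kotlin {
+    jvm()
+    js { nodejs() }
+    linuxX64("native")        // pick the preset matching your host
+    wasm("wasm") { nodejs() } // experimental; DSL depends on the Kotlin version
+}
+```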
-### Step 3: Writing Benchmarks - -Create a new Kotlin source file in your `src/main/kotlin` directory to write your benchmarks. Each benchmark is a class or object with methods annotated with `@Benchmark`. Here's a simple example: +### Step 2: Configure the Benchmark Plugin -```kotlin -import org.openjdk.jmh.annotations.Benchmark +In your `build.gradle` or `build.gradle.kts` file, add the following: -open class ListBenchmark { - @Benchmark - fun listOfBenchmark() { - listOf(1, 2, 3, 4, 5) +```groovy +benchmark { + targets { + register("jvm") + register("js") + register("native") + register("wasm") } } ``` -Ensure that your benchmark class and methods are `open`, as JMH creates subclasses during the benchmarking process. The `allopen` plugin we added earlier enforces this. - -### Step 4: Running Your Benchmarks - -Executing your benchmarks is an important part of the process. This will allow you to gather performance data about your code. There are two primary ways to run your benchmarks: through the command line or using your IDE. +### Step 3: Add the Runtime Library -#### 4.1 Running Benchmarks From the Command Line +To run benchmarks, add the runtime library, `kotlinx-benchmark-runtime`, to the dependencies of your source set and enable Maven Central for dependencies lookup: -The simplest way to run your benchmarks is by using the Gradle task `benchmark`. You can do this by opening a terminal, navigating to the root of your project, and entering the following command: +```kotlin +kotlin { + sourceSets { + commonMain { + dependencies { + implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8") + } + } + } +} -```bash -./gradlew benchmark +repositories { + mavenCentral() +} ``` -This command instructs Gradle to execute the `benchmark` task, which in turn runs your benchmarks. - -#### 4.2 Understanding Benchmark Execution +### Step 4: Write Benchmarks -The execution of your benchmarks might take some time. 
This is normal and necessary: benchmarks must be run for a sufficient length of time to produce reliable, statistically significant results.
+Create new source files in your respective `src/<targetName>Main/kotlin` directories (for example, `src/jvmMain/kotlin`) and write your benchmarks.
-For more details regarding the available Gradle tasks, refer to this [document](tasks-overview.md).
+### Step 5: Run Benchmarks
-### Step 5: Analyze the Results
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`.
-To fully understand and make the best use of these results, it's important to know how to interpret and analyze them properly. For a comprehensive guide on interpreting and analyzing benchmarking results, please refer to this dedicated document: [Interpreting and Analyzing Results](interpreting-results.md).
+## Conclusion
-Congratulations! You have successfully set up a Kotlin Multiplatform benchmark project using kotlinx-benchmark.
+This guide has walked you through setting up a multiplatform benchmarking project using the kotlinx-benchmark library in Kotlin. It has covered creating new projects, adding and configuring the benchmark plugin, writing benchmark tests, and running those benchmarks. Remember, performance benchmarking is an essential part of optimizing your code and ensuring it runs as efficiently as possible. Happy benchmarking!
\ No newline at end of file
diff --git a/docs/separate-source-sets.md b/docs/separate-source-sets.md
index 654c7d25..5f75a1e8 100644
--- a/docs/separate-source-sets.md
+++ b/docs/separate-source-sets.md
@@ -7,10 +7,9 @@ Elevate your project's performance potential with organized, efficient, and isol
 1. [What is a Source Set?](#what-is-a-source-set)
 2. [Why Have Separate Source Sets for Benchmarks?](#why-have-separate-source-sets-for-benchmarks)
 3. 
[Step-by-step Setup Guide](#setup-guide) - - [Kotlin JVM Project](#jvm-project) + - [Kotlin Java & JVM Project](#kotlin-java-jvm-projects) - [Kotlin Multiplatform Project](#multiplatform-project) -4. [Frequently Asked Questions](#frequently-asked-questions) -5. [Troubleshooting](#troubleshooting) +4. [Additional Resources](#additional-resources) ## What is a Source Set? @@ -32,7 +31,7 @@ Creating separate source sets for benchmarks is especially beneficial when you a Below are the step-by-step instructions to set up separate source sets for benchmarks in both Kotlin JVM and Multiplatform projects: -### Kotlin JVM Project +### Kotlin Java & JVM Projects Transform your Kotlin JVM project with separate benchmark source sets by following these simple steps: @@ -52,7 +51,7 @@ Transform your Kotlin JVM project with separate benchmark source sets by followi ```groovy dependencies { - benchmarksCompile sourceSets.main.output + sourceSets.main.runtimeClasspath + add("benchmarksImplementation", sourceSets.main.output + sourceSets.main.runtimeClasspath) } ``` @@ -76,7 +75,7 @@ Set up your Kotlin Multiplatform project to accommodate separate benchmark sourc 1. **Define New Compilation**: - Start by defining a new compilation in your target of choice (e.g. jvm, js, etc.) in your `build.gradle.kts` file. In this example, we're associating the new compilation 'benchmark' with the `main` compilation of the `jvm` target. + Start by defining a new compilation in your target of choice (e.g. jvm, js, native, wasm etc.) in your `build.gradle.kts` file. In this example, we're associating the new compilation 'benchmark' with the `main` compilation of the `jvm` target. ```kotlin kotlin { @@ -88,28 +87,19 @@ Set up your Kotlin Multiplatform project to accommodate separate benchmark sourc 2. **Register Benchmark Compilation**: - Conclude by registering your benchmark compilation. 
This notifies the kotlinx-benchmark tool that benchmarks are located within this compilation and should be executed accordingly. + Conclude by registering your new benchmark compilation using its source set name. In this instance, `jvmBenchmark` is the name for the benchmark compilation for the `jvm` target. ```kotlin benchmark { targets { - register("benchmark") + register("jvmBenchmark") } } ``` For more information on creating a custom compilation, you can refer to the [Kotlin documentation on creating a custom compilation](https://kotlinlang.org/docs/multiplatform-configure-compilations.html#create-a-custom-compilation). -## Frequently Asked Questions - -Here are some common questions about creating separate source sets for benchmarks: - -**Q: Is it recommended to reuse the same benchmark source set for benchmarking multiple target platforms in a Kotlin Multiplatform Project?** -A: It's generally recommended to have separate source sets for different targets to avoid configuration conflicts and ensure more accurate benchmarks. This practice mitigates the risk of configuration conflicts inherent in different platforms that may have unique dependencies and setup requirements. - -Moreover, the performance characteristics can vary significantly across platforms. Having separate source sets for each benchmarking target ensures that your benchmarking process accurately reflects the performance of your code in its specific operational context. - -For instance, consider a multiplatform project with JVM and JavaScript targets. Rather than using a single shared benchmark source set, you should ideally create two separate benchmark source sets, say `jvmBenchmark` and `jsBenchmark`. By doing so, you are able to customize each benchmark source set according to the peculiarities and performance nuances of its corresponding platform, thereby yielding more accurate and meaningful benchmarking results. 
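If the benchmark compilation needs the benchmark runtime (or any other libraries), declare them on its default source set. A sketch, assuming the source set is named `jvmBenchmark` as in the example above:

```kotlin
kotlin {
    sourceSets {
        // "jvmBenchmark" is the default source set name for the
        // "benchmark" compilation of the "jvm" target.
        val jvmBenchmark by getting {
            dependencies {
                implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
            }
        }
    }
}
```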
+## Additional Resources
 
 **Q: Where can I ask additional questions?**
-A: We invite you to bring your questions or issues to several platforms. For more immediate interactive feedback, consider joining our [Slack channel](https://kotlinlang.slack.com) where developers and Kotlin enthusiasts discuss a range of topics. For more in-depth, threaded discussions, post your queries on the [GitHub Discussions page](https://github.com/Kotlin/kotlinx-benchmark/discussions) for kotlinx-benchmark. You're also welcome to raise specific issues on the [kotlinx-benchmark GitHub page](https://github.com/Kotlin/kotlinx-benchmark). Each of these platforms is actively monitored, and the community is always prepared to assist you!
+A: For any additional queries or issues, you can reach out via our [Slack channel](https://kotlinlang.slack.com) for real-time interactions, start a threaded conversation on our [GitHub Discussions page](https://github.com/Kotlin/kotlinx-benchmark/discussions), or report specific problems on the [kotlinx-benchmark GitHub page](https://github.com/Kotlin/kotlinx-benchmark). Each platform is actively monitored by a supportive community ready to assist you.
\ No newline at end of file
diff --git a/docs/singleplatform-setup.md b/docs/singleplatform-setup.md
index 692a415e..1f651e9c 100644
--- a/docs/singleplatform-setup.md
+++ b/docs/singleplatform-setup.md
@@ -1,26 +1,51 @@
-## Step-by-Step Setup Guide for Single-Platform Benchmarking Project Using kotlinx-benchmark
+# Step-by-Step Setup Guide for a Single-Platform Benchmarking Project Using kotlinx-benchmark
-### Prerequisites
+This guide will walk you through the process of setting up a single-platform benchmarking project in both Kotlin and Java using the kotlinx-benchmark library.
-Before starting, ensure your development environment meets the following [requirements](compatibility.md):
+# Table of Contents
-- **Kotlin**: Version 1.8.20 or newer. Install Kotlin from the [official website](https://kotlinlang.org/) or a package manager like SDKMAN! or Homebrew.
-- **Gradle**: Version 8.0 or newer. 
Download Gradle from the [official website](https://gradle.org/) or use a package manager.
+1. [Prerequisites](#prerequisites)
+2. [Kotlin Project Setup](#kotlin-project-setup)
+   - [Step 1: Create a New Kotlin Project](#step-1-create-a-new-kotlin-project)
+   - [Step 2: Add the Benchmark and AllOpen Plugin](#step-2-add-the-benchmark-and-allopen-plugin)
+   - [Step 3: Configure the Benchmark Plugin](#step-3-configure-the-benchmark-plugin)
+   - [Step 4: Write Benchmarks](#step-4-write-benchmarks)
+   - [Step 5: Run Benchmarks](#step-5-run-benchmarks)
+3. [Java Project Setup](#java-project-setup)
+   - [Step 1: Create a New Java Project](#step-1-create-a-new-java-project)
+   - [Step 2: Add the Benchmark Plugin](#step-2-add-the-benchmark-plugin)
+   - [Step 3: Configure the Benchmark Plugin](#step-3-configure-the-benchmark-plugin-1)
+   - [Step 4: Write Benchmarks](#step-4-write-benchmarks-1)
+   - [Step 5: Run Benchmarks](#step-5-run-benchmarks-1)
+4. [Conclusion](#conclusion)
-### Step 1: Create a New Kotlin Project
+## Prerequisites
+
+Ensure your development environment meets the following [requirements](compatibility.md):
+
+- **Kotlin**: Version 1.8.20 or newer.
+- **Gradle**: Version 8.0 or newer.
+
+## Kotlin Project Setup
+
+### Step 1: Create a New Kotlin Project
+
+#### IntelliJ IDEA
-If you're starting from scratch, you can begin by creating a new Kotlin project with Gradle. This can be done either manually, through the command line, or by using an IDE like IntelliJ IDEA, which offers built-in support for project generation.
+Click `File` > `New` > `Project`, select `Kotlin`, specify your `Project Name` and `Project Location`, ensure the `Project SDK` is 8 or higher, and click `Finish`.
-### Step 2: Configure Build
+#### Gradle Command Line
-In this step, you'll modify your project's build file to add necessary dependencies and plugins.
+Open your terminal, navigate to the directory where you want to create your new project, and run `gradle init --type kotlin-application`.
+
+### Step 2: Add the Benchmark and AllOpen Plugin
+
+When benchmarking Kotlin/JVM code with the Java Microbenchmark Harness (JMH), it is necessary to use the [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html). This plugin ensures that your benchmark classes and methods are `open`, which is a requirement for JMH.
Kotlin DSL -#### 2.1 Apply the Necessary Plugins - -In your `build.gradle.kts` file, add the required plugins. These plugins are necessary for enabling Kotlin/JVM, making all classes and functions open, and using the kotlinx.benchmark plugin. +In your `build.gradle.kts` file, add the following: ```kotlin plugins { @@ -28,47 +53,40 @@ plugins { kotlin("plugin.allopen") version "1.8.21" id("org.jetbrains.kotlinx.benchmark") version "0.4.8" } -``` - -#### 2.2 Add the Dependencies - -Next, add the `kotlinx-benchmark-runtime` dependency to your project. This dependency contains the necessary runtime components for benchmarking. -```kotlin -dependencies { - implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8") -} -``` - -#### 2.3 Apply the AllOpen Annotation - -Now, you need to instruct the [allopen](https://kotlinlang.org/docs/all-open-plugin.html) plugin to consider all benchmark classes and their methods as open. For that, apply the `allOpen` block and specify the JMH annotation `State`. - -```kotlin allOpen { annotation("org.openjdk.jmh.annotations.State") } ``` +
-#### 2.4 Define the Repositories +
+Groovy DSL -Gradle needs to know where to find the libraries your project depends on. In this case, we're using the libraries hosted on Maven Central, so we need to specify that. +In your `build.gradle` file, add the following: -In your `build.gradle.kts` file, add the following code block: +```groovy +plugins { + id 'org.jetbrains.kotlin.jvm' version '1.8.21' + id 'org.jetbrains.kotlin.plugin.allopen' version '1.8.21' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' +} -```kotlin -repositories { - mavenCentral() +allOpen { + annotation 'org.openjdk.jmh.annotations.State' } ``` +
-#### 2.5 Register the Benchmark Targets +In Kotlin, classes and methods are `final` by default, which means they can't be overridden. However, JMH requires the ability to generate subclasses for benchmarking, which is why we need to use the allopen plugin. This configuration ensures that any class annotated with `@State` is treated as `open`, allowing JMH to work as expected. -Next, we need to inform the kotlinx.benchmark plugin about our benchmarking target. In this case, we are targeting JVM. +You can alternatively mark your benchmark classes and methods `open` manually, but using the `allopen` plugin improves code maintainability. -In your `build.gradle.kts` file, add the following code block within the `benchmark` block: +### Step 3: Configure the Benchmark Plugin -```kotlin +In your `build.gradle` or `build.gradle.kts` file, add the following: + +```groovy benchmark { targets { register("jvm") @@ -76,110 +94,129 @@ benchmark { } ``` -
+### Step 4: Write Benchmarks -
-Groovy DSL
-
-#### 2.1 Apply the Necessary Plugins
-
-In your `build.gradle` file, apply the required plugins. These plugins are necessary for enabling Kotlin/JVM, making all classes and functions open, and using the kotlinx.benchmark plugin.
-
-```groovy
-plugins {
-    id 'org.jetbrains.kotlin.jvm' version '1.8.21'
-    id 'org.jetbrains.kotlin.plugin.allopen' version '1.8.21'
-    id 'org.jetbrains.kotlinx.benchmark' version '0.4.8'
-}
-```
-
-#### 2.2 Add the Dependencies
-
-Next, add the `kotlinx-benchmark-runtime` dependency to your project. This dependency contains the necessary runtime components for benchmarking.
-
-```groovy
-dependencies {
-    implementation 'org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8'
-}
-```
-
-#### 2.3 Apply the AllOpen Annotation
-
-Now, you need to instruct the [allopen](https://kotlinlang.org/docs/all-open-plugin.html) plugin to consider all benchmark classes and their methods as open. For that, apply the `allOpen` block and specify the JMH annotation `State`.
-
-```groovy
-allOpen {
-    annotation("org.openjdk.jmh.annotations.State")
-}
-```
+Create a new source file in your `src/main/kotlin` directory and write your benchmarks. Here's an example:
+
+```kotlin
+package test
+
+import org.openjdk.jmh.annotations.*
+
+@State(Scope.Benchmark)
+@Fork(1)
+open class SampleKotlinBenchmark {
+    @Param("A", "B")
+    var stringValue = ""
+
+    @Param("1", "2")
+    var intValue = 0
+
+    @Benchmark
+    fun stringBuilder(): String {
+        val stringBuilder = StringBuilder()
+        stringBuilder.append(10)
+        stringBuilder.append(stringValue)
+        stringBuilder.append(intValue)
+        return stringBuilder.toString()
+    }
+}
+```
+
+### Step 5: Run Benchmarks
+
+In the terminal, navigate to your project's root directory and run `./gradlew benchmark`.
+
+## Java Project Setup
+
+### Step 1: Create a New Java Project
+
+#### IntelliJ IDEA
+
+Click `File` > `New` > `Project`, select `Java`, specify your `Project Name` and `Project Location`, ensure the `Project SDK` is 8 or higher, and click `Finish`.
-#### 2.4 Define the Repositories
+#### Gradle Command Line
-Gradle needs to know where to find the libraries your project depends on. In this case, we're using the libraries hosted on Maven Central, so we need to specify that.
+Open your terminal, navigate to the directory where you want to create your new project, and run `gradle init --type java-application`.
-In your `build.gradle` file, add the following code block:
+
+### Step 2: Add the Benchmark Plugin
-```groovy
-repositories {
-    mavenCentral()
+
+Kotlin DSL
+
+In your `build.gradle.kts` file, add the following:
+
+```kotlin
+plugins {
+    java
+    id("org.jetbrains.kotlinx.benchmark") version "0.4.8"
+}
+```
-#### 2.5 Register the Benchmark Targets - -Next, we need to inform the kotlinx.benchmark plugin about our benchmarking target. In this case, we are targeting JVM. +
+Groovy DSL -In your `build.gradle` file, add the following code block within the `benchmark` block: +In your `build.gradle` file, add the following: ```groovy -benchmark { - targets { - register("jvm") - } +plugins { + id 'java' + id 'org.jetbrains.kotlinx.benchmark' version '0.4.8' } ``` -
-### Step 3: Writing Benchmarks
+### Step 3: Configure the Benchmark Plugin
-Create a new Kotlin source file in your `src/main/kotlin` directory to write your benchmarks. Each benchmark is a class or object with methods annotated with `@Benchmark`. Here's a simple example:
+In your `build.gradle` or `build.gradle.kts` file, add the following:
-```kotlin
-import org.openjdk.jmh.annotations.Benchmark
-
-open class ListBenchmark {
-    @Benchmark
-    fun listOfBenchmark() {
-        listOf(1, 2, 3, 4, 5)
+```groovy
+benchmark {
+    targets {
+        register("main")
     }
 }
 ```
-Ensure that your benchmark class and methods are `open`, as JMH creates subclasses during the benchmarking process. The `allopen` plugin we added earlier enforces this.
-
-### Step 4: Running Your Benchmarks
+### Step 4: Write Benchmarks
-Executing your benchmarks is an important part of the process. This will allow you to gather performance data about your code. There are two primary ways to run your benchmarks: through the command line or using your IDE.
+Create a new source file in your `src/main/kotlin` directory and write your benchmarks. Here's an example:
-#### 4.1 Running Benchmarks From the Command Line
+```kotlin
+package test
-The simplest way to run your benchmarks is by using the Gradle task `benchmark`. You can do this by opening a terminal, navigating to the root of your project, and entering the following command:
+import org.openjdk.jmh.annotations.*
+import java.util.concurrent.*
-```bash
-./gradlew benchmark
-```
+@State(Scope.Benchmark)
+@Fork(1)
+@Warmup(iterations = 0)
+@Measurement(iterations = 1, time = 1, timeUnit = TimeUnit.SECONDS)
+open class KtsTestBenchmark {
+    private var data = 0.0
-This command instructs Gradle to execute the `benchmark` task, which in turn runs your benchmarks.
+    @Setup
+    fun setUp() {
+        data = 3.0
+    }
-#### 4.2 Understanding Benchmark Execution
+    @Benchmark
+    fun sqrtBenchmark(): Double {
+        return Math.sqrt(data)
+    }
-The execution of your benchmarks might take some time.
This is normal and necessary: benchmarks must be run for a sufficient length of time to produce reliable, statistically significant results. + @Benchmark + fun cosBenchmark(): Double { + return Math.cos(data) + } +} +``` -For more details regarding the available Gradle tasks, refer to this [document](tasks-overview.md). +### Step 5: Run Benchmarks -### Step 5: Analyze the Results +In the terminal, navigate to your project's root directory and run `./gradlew benchmark`. -To fully understand and make the best use of these results, it's important to know how to interpret and analyze them properly. For a comprehensive guide on interpreting and analyzing benchmarking results, please refer to this dedicated document: [Interpreting and Analyzing Results](interpreting-results.md). +## Conclusion -Congratulations! You have successfully set up a Kotlin/JVM benchmark project using kotlinx-benchmark. +Congratulations! You've set up a single-platform benchmarking project using `kotlinx-benchmark`. Now you can write your own benchmarks to test the performance of your Java or Kotlin code. Happy benchmarking! \ No newline at end of file diff --git a/docs/tasks-overview.md b/docs/tasks-overview.md index adba5efc..ce9adf79 100644 --- a/docs/tasks-overview.md +++ b/docs/tasks-overview.md @@ -1,8 +1,45 @@ -| Task | Description | -|---|---| -| **assembleBenchmarks** | The task responsible for generating and building all benchmarks in the project. Serves as a dependency for other benchmark tasks. | -| **benchmark** | The primary task for executing all benchmarks in the project. Depends on `assembleBenchmarks` to ensure benchmarks are ready and built. | -| **{configName}Benchmark** | Executes all benchmarks under the specific configuration. Useful when different benchmarking requirements exist for different parts of the application. | -| **{configName}BenchmarkGenerate** | Generates JMH source files for the specified configuration. 
JMH is a benchmarking toolkit for Java and JVM-targeting languages. | -| **{configName}BenchmarkCompile** | Compiles the JMH source files generated for a specific configuration, transforming them into machine code for JVM execution. | -| **{configName}BenchmarkJar** | Packages the compiled JMH files into a JAR (Java Archive) file for distribution and execution. | \ No newline at end of file +# Overview of Tasks for kotlinx-benchmark Plugin Across Different Platforms + +This document describes the tasks generated by the kotlinx-benchmark plugin when used with different Kotlin and JVM platforms. Understanding these tasks can help you utilize them more effectively in your benchmarking projects. The tasks are divided into two categories: those that apply to all targets, and those that are target-specific. + +## General Tasks + +These tasks are not platform-dependent and thus, are used across all targets. + +| Task | Description | +| ---------------------- | -------------------------------------------------------------------------------------------------------------------- | +| **assembleBenchmarks** | Generates and builds all benchmarks in the project, serving as a dependency for other benchmark tasks. | +| **benchmark** | Executes all benchmarks in the project. It depends on `assembleBenchmarks` to ensure benchmarks are ready and built. | + +## Java & Kotlin/JVM Specific Tasks + +| Task | Description | +| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **{configName}Benchmark** | Executes all benchmarks under the specific configuration. Useful when different benchmarking requirements exist for different parts of the application. | +| **{configName}BenchmarkGenerate** | Generates JMH source files for the specified configuration. 
|
+| **{configName}BenchmarkCompile**   | Compiles the JMH source files generated for a specific configuration, transforming them into bytecode for execution on the JVM.                          |
+| **{configName}BenchmarkJar**       | Packages the compiled JMH files into a JAR (Java Archive) file for distribution and execution.                                                            |
+
+## Kotlin/JS Specific Tasks
+
+| Task                                               | Description                                                                                                                                        |
+| -------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **compile{configName}BenchmarkKotlin{configName}** | Compiles JS benchmark source files for the specified JS target. This includes setting up dependencies and configuring Kotlin compilation options.  |
+| **{configName}Benchmark**                          | Executes all benchmarks for the specified JS target.                                                                                               |
+| **{configName}BenchmarkGenerate**                  | Generates JS source files for the specified JS target. These source files will be used in the benchmarking process.                               |
+
+## Kotlin/WASM Specific Tasks
+
+| Task                                               | Description                                                                                                                                      |
+| -------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
+| **compile{configName}BenchmarkKotlin{configName}** | Compiles Wasm benchmark source files for the specified Wasm target. This includes setting up dependencies and compiling the benchmarking code.  |
+| **{configName}Benchmark**                          | Executes all benchmarks for the specified Wasm target.                                                                                           |
+| **{configName}BenchmarkGenerate**                  | Generates Wasm source files for the specified Wasm target. These source files will be used in the benchmarking process. 
| + +## Kotlin/Native Specific Tasks + +| Task | Description | +| ---------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **link{configName}BenchmarkReleaseExecutable{configName}** | Compiles the generated files and creates an executable. The entry point for the executable is the generated main function. | +| **{configName}Benchmark** | Executes the benchmarks for each benchmark configuration defined in the plugin extension corresponding to the specific config. For the "main" configuration, `configName` is dropped. | +| **{configName}BenchmarkGenerate** | Takes compiled user code, retrieves metadata and generates the code needed for measurement. This is a native-target-specific task (e.g., for `macosX64()`, `native` -> `macosX64`). | From 143f0dbdafe6ddee8019555da4d2ae1279a04154 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sat, 1 Jul 2023 01:42:50 -0700 Subject: [PATCH 15/30] docs: update structure and wording --- README.md | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index 1cbeafee..92da7c45 100644 --- a/README.md +++ b/README.md @@ -8,7 +8,7 @@ [![Gradle Plugin Portal](https://img.shields.io/maven-metadata/v?label=Gradle%20Plugin&metadataUrl=https://plugins.gradle.org/m2/org/jetbrains/kotlinx/kotlinx-benchmark-plugin/maven-metadata.xml)](https://plugins.gradle.org/plugin/org.jetbrains.kotlinx.benchmark) [![IR](https://img.shields.io/badge/Kotlin%2FJS-IR%20supported-yellow)](https://kotl.in/jsirsupported) -kotlinx.benchmark is a toolkit for running benchmarks for multiplatform code written in Kotlin and running on the following supported targets: JVM, JavaScript and Native. +kotlinx-benchmark is a toolkit for running benchmarks for multiplatform code written in Kotlin. 
## Features @@ -46,7 +46,7 @@ kotlinx.benchmark is a toolkit for running benchmarks for multiplatform code wri ## Using in Your Projects -The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, and Kotlin/Native targets. To get started, ensure you're using Kotlin 1.8.20 or newer and Gradle 8.0 or newer. +The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, Kotlin/Native, and Kotlin/WASM (experimental) targets. To get started, ensure you're using Kotlin 1.8.20 or newer and Gradle 8.0 or newer. ### Gradle Setup @@ -149,7 +149,7 @@ You can alternatively mark your benchmark classes and methods `open` manually, b #### Kotlin/JS -Specify a compiler like the [IR compiler](https://kotlinlang.org/docs/js-ir-compiler.html) and set benchmarking targets in one step. Here, `jsIr` and `jsIrBuiltIn` are both using the IR compiler. The former uses benchmark.js, while the latter uses Kotlin's built-in plugin. +Create a JS target with Node.js execution environment and register it as a benchmark target: ```kotlin kotlin { @@ -159,9 +159,17 @@ kotlin { js('jsIrBuiltIn', IR) { nodejs() } + benchmark { + targets { + register("jsIr") + register("jsIrBuiltIn") + } + } } ``` +This setup is using the [IR compiler](https://kotlinlang.org/docs/js-ir-compiler.html). `jsIr` and `jsIrBuiltIn` are both using the IR compiler. The former uses benchmark.js, while the latter uses Kotlin's built-in plugin. + #### Multiplatform For multiplatform projects, add the `kotlinx-benchmark-runtime` dependency to the `commonMain` source set: @@ -180,11 +188,9 @@ kotlin { This setup enables running benchmarks in the main compilation of any registered targets. Another option is to register the compilation you want to run benchmarks from. The platform-specific artifacts will be resolved automatically. For a practical example, please refer to [examples](examples/multiplatform). 
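
With the runtime available to `commonMain`, benchmark classes written there can use the `kotlinx.benchmark` annotations directly and run on every registered target. A minimal sketch — the class name, state, and measured function below are illustrative, not taken from the examples:

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
class CommonMathBenchmark {
    private var value = 0.0

    @Setup
    fun prepare() {
        value = 3.0
    }

    // Returning the result keeps the compiler from eliminating the measured work.
    @Benchmark
    fun mathBenchmark(): Double = kotlin.math.ln(value) * kotlin.math.sqrt(value)
}
```

On the JVM these annotations map onto JMH; the other targets are measured by the library's own runner.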
-### Benchmark Configuration +Note: Benchmark classes located in the common source set will be run in all platforms, while those located in a platform-specific source set will be run in the corresponding platform. -In a `build.gradle` file create `benchmark` section, and inside it add a `targets` section. -In this section register all targets you want to run benchmarks from. -Example for multiplatform project: +Define your benchmark targets within the `benchmark` section in your `build.gradle` file: ```kotlin benchmark { @@ -207,6 +213,7 @@ benchmark { warmups = 20 iterations = 10 iterationTime = 3 + iterationTimeUnit = "s" } smoke { warmups = 5 From e00221de87fdcf5ab12b912b9937f36290fed5ba Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sun, 2 Jul 2023 18:41:20 -0700 Subject: [PATCH 16/30] update changelog --- CHANGELOG.md | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index a6a4dae0..c5dc9961 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,27 @@ # CHANGELOG +## 0.4.8 + +- Drop legacy JS support +- Support building large JARs [#95](https://github.com/Kotlin/kotlinx-benchmark/issues/95) +- Support Kotlin 1.8.20 +- Fix JVM and Native configuration cache warnings + +## 0.4.7 + +- Support Kotlin 1.8.0 + +## 0.4.6 + +- Support Gradle 8.0 +- Sign kotlinx-benchmark-plugin artifacts with the Signing Plugin +- Upgrade Kotlin version to 1.7.20 +- Upgrade Gradle version to 7.4.2 + +## 0.4.5 + +- Remove redundant jmh-core dependency from plugin + ## 0.4.4 - Require the minimum Kotlin version of 1.7.0 From e5cd5122f6213448a40e827f42031d7e0f512d36 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Tue, 27 Jun 2023 16:21:26 -0700 Subject: [PATCH 17/30] docs: add READMEs to examples --- examples/README.md | 59 ++++++++++++++ examples/java/README.md | 36 +++++++++ examples/kotlin-kts/README.md | 32 ++++++++ examples/kotlin-multiplatform/README.md | 
103 ++++++++++++++++++++++++ examples/kotlin/README.md | 35 ++++++++ 5 files changed, 265 insertions(+) create mode 100644 examples/README.md create mode 100644 examples/java/README.md create mode 100644 examples/kotlin-kts/README.md create mode 100644 examples/kotlin-multiplatform/README.md create mode 100644 examples/kotlin/README.md diff --git a/examples/README.md b/examples/README.md new file mode 100644 index 00000000..365639e6 --- /dev/null +++ b/examples/README.md @@ -0,0 +1,59 @@ +# kotlinx-benchmark Examples Guide + +This guide is designed to help you navigate, set up, and run the benchmark examples provided here. Whether you're a seasoned developer or new to Kotlin and benchmarking, we've got you covered. Let's dive in and explore these practical examples together. + +## Prerequisites + +Before you begin, ensure you have the following installed on your local machine: + +- Git: Used to clone the repository. You can download it from [here](https://git-scm.com/downloads). +- Gradle: Used to build the projects. You can download it from [here](https://gradle.org/install/). Note that the projects come with a Gradle wrapper, so this is optional. + +## Getting Started + +1. **Clone the Repository**: Clone the `kotlinx-benchmark` repository to your local machine by running the following command in your terminal: + + ``` + git clone https://github.com/Kotlin/kotlinx-benchmark.git + ``` + +2. **Navigate to the Examples Directory**: Once the repository is cloned, navigate to the `examples` directory by running: + + ``` + cd kotlinx-benchmark/examples + ``` + +## Running the Examples + +Each example is a separate project that can be built and run independently. Here's how you can do it: + +1. **Navigate to the Example Directory**: Navigate to the directory of the example you want to run. For instance, if you want to run the `kotlin-kts` example, you would run: + + ``` + cd kotlin-kts + ``` + +2. **Build the Project**: Each project uses Gradle as a build tool. 
If you have Gradle installed on your machine, you can build the project by running: + + ``` + gradle build + ``` + +3. **Run the Benchmark**: After the project is built, you can run the benchmark by executing: + + ``` + gradle benchmark + ``` + +Repeat these steps for each example you want to run. + +## Troubleshooting + +If you encounter any issues while setting up or running the examples, please check the following: + +- Ensure you have all the prerequisites installed and they are added to your system's PATH. +- Make sure you are running the commands in the correct directory. + +If you're still having issues, feel free to open an issue on the [kotlinx-benchmark repository](https://github.com/Kotlin/kotlinx-benchmark/issues). + +Happy benchmarking! diff --git a/examples/java/README.md b/examples/java/README.md new file mode 100644 index 00000000..2b59fdeb --- /dev/null +++ b/examples/java/README.md @@ -0,0 +1,36 @@ +# Java Example + +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Kotlin/kotlinx-benchmark) + +## Project Structure + +Inside of this example, you'll see the following folders and files: + +``` +/ +├── build.gradle +└── src/ + └── main/ + └── java/ + └── test/ + └── SampleJavaBenchmark.java +``` + +## Tasks + +All tasks can be run from the root of the project, from a terminal: + +| Task Name | Action | +| --- | --- | +| `gradle assembleBenchmarks` | Generate and build all benchmarks in the project | +| `gradle benchmark` | Execute all benchmarks in the project | +| `gradle mainBenchmark` | Execute benchmark for 'main' | +| `gradle mainBenchmarkCompile` | Compile JMH source files for 'main' | +| `gradle mainBenchmarkGenerate` | Generate JMH source files for 'main' | +| `gradle mainBenchmarkJar` | Build JAR for JMH compiled files for 'main' | +| `gradle mainSingleParamBenchmark` | Execute benchmark for 'main' | +| `gradle singleParamBenchmark` | Execute all benchmarks in the project | + +## Want to learn 
more? + +Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. diff --git a/examples/kotlin-kts/README.md b/examples/kotlin-kts/README.md new file mode 100644 index 00000000..40ed9db8 --- /dev/null +++ b/examples/kotlin-kts/README.md @@ -0,0 +1,32 @@ +# Kotlin-KTS Example + +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Kotlin/kotlinx-benchmark) + +## Project Structure + +Inside of this example, you'll see the following folders and files: + +``` +/ +├── build.gradle.kts +└── main/ + └── src/ + └── KtsTestBenchmark.kt +``` + +## Tasks + +All tasks can be run from the root of the project, from a terminal: + +| Task Name | Action | +| --- | --- | +| `gradle assembleBenchmarks` | Generate and build all benchmarks in the project | +| `gradle benchmark` | Execute all benchmarks in the project | +| `gradle mainBenchmark` | Execute benchmark for 'benchmarks' | +| `gradle mainBenchmarkCompile` | Compile JMH source files for 'benchmarks' | +| `gradle mainBenchmarkGenerate` | Generate JMH source files for 'benchmarks' | +| `gradle mainBenchmarkJar` | Build JAR for JMH compiled files for 'benchmarks' | + +## Want to learn more? + +Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. 
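
Since the examples ship with the repository's Gradle wrapper, their tasks can also be invoked from the repository root by prefixing the project path (task names as in the table above):

```shell
# run every benchmark registration in this example
./gradlew :examples:kotlin-kts:benchmark

# run only the 'main' registration
./gradlew :examples:kotlin-kts:mainBenchmark
```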
diff --git a/examples/kotlin-multiplatform/README.md b/examples/kotlin-multiplatform/README.md new file mode 100644 index 00000000..98c5c3cc --- /dev/null +++ b/examples/kotlin-multiplatform/README.md @@ -0,0 +1,103 @@ +# Kotlin-Multiplatform Example + +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Kotlin/kotlinx-benchmark) + +## Project Structure + +Inside of this example, you'll see the following folders and files: + +``` +│ build.gradle ==> Build configuration file for Gradle +│ +└───src ==> Source code root + ├───commonMain ==> Shared code + │ └───kotlin + │ │ CommonBenchmark.kt ==> Common benchmarks + │ │ InheritedBenchmark.kt ==> Inherited benchmarks + │ │ ParamBenchmark.kt ==> Parameterized benchmarks + │ │ + │ └───nested ==> Nested benchmarks + │ CommonBenchmark.kt + │ + ├───jsMain ==> JavaScript-specific code + │ └───kotlin + │ JsAsyncBenchmarks.kt ==> JS async benchmarks + │ JsTestBenchmark.kt ==> JS benchmarks + │ + ├───jvmBenchmark ==> JVM-specific benchmarks + │ └───kotlin + │ JvmBenchmark.kt + │ + ├───jvmMain ==> JVM-specific code + │ └───kotlin + │ JvmTestBenchmark.kt ==> JVM benchmarks + │ + ├───nativeMain ==> Native-specific code + │ └───kotlin + │ NativeTestBenchmark.kt ==> Native benchmarks + │ + └───wasmMain ==> WebAssembly-specific code + └───kotlin + WasmTestBenchmark.kt ==> WebAssembly benchmarks +``` + +## Tasks + +All tasks can be run from the root of the project, from a terminal: + +| Task Name | Action | +| --- | --- | +| `gradle assembleBenchmarks` | Generate and build all benchmarks in the project | +| `gradle benchmark` | Execute all benchmarks in the project | +| `gradle compileJsIrBenchmarkKotlinJsIr` | Compile JS benchmark source files for 'jsIr' | +| `gradle compileJsIrBuiltInBenchmarkKotlinJsIrBuiltIn` | Compile JS benchmark source files for 'jsIrBuiltIn' | +| `gradle compileWasmBenchmarkKotlinWasm` | Compile Wasm benchmark source files for 'wasm' | +| `gradle csvBenchmark` | 
Execute all benchmarks in a project | +| `gradle fastBenchmark` | Execute all benchmarks in a project | +| `gradle forkBenchmark` | Execute all benchmarks in a project | +| `gradle jsIrBenchmark` | Executes benchmark for 'jsIr' with NodeJS | +| `gradle jsIrBenchmarkGenerate` | Generate JS source files for 'jsIr' | +| `gradle jsIrBuiltInBenchmark` | Executes benchmark for 'jsIrBuiltIn' with NodeJS | +| `gradle jsIrBuiltInBenchmarkGenerate` | Generate JS source files for 'jsIrBuiltIn' | +| `gradle jsIrBuiltInCsvBenchmark` | Executes benchmark for 'jsIrBuiltIn' with NodeJS | +| `gradle jsIrBuiltInFastBenchmark` | Executes benchmark for 'jsIrBuiltIn' with NodeJS | +| `gradle jsIrBuiltInForkBenchmark` | Executes benchmark for 'jsIrBuiltIn' with NodeJS | +| `gradle jsIrBuiltInParamsBenchmark` | Executes benchmark for 'jsIrBuiltIn' with NodeJS | +| `gradle jsIrCsvBenchmark` | Executes benchmark for 'jsIr' with NodeJS | +| `gradle jsIrFastBenchmark` | Executes benchmark for 'jsIr' with NodeJS | +| `gradle jsIrForkBenchmark` | Executes benchmark for 'jsIr' with NodeJS | +| `gradle jsIrParamsBenchmark` | Executes benchmark for 'jsIr' with NodeJS | +| `gradle jvmBenchmark` | Execute benchmark for 'jvm' | +| `gradle jvmBenchmarkBenchmark` | Execute benchmark for 'jvmBenchmark' | +| `gradle jvmBenchmarkBenchmarkCompile` | Compile JMH source files for 'jvmBenchmark' | +| `gradle jvmBenchmarkBenchmarkGenerate` | Generate JMH source files for 'jvmBenchmark' | +| `gradle jvmBenchmarkBenchmarkJar` | Build JAR for JMH compiled files for 'jvmBenchmark' | +| `gradle jvmBenchmarkCompile` | Compile JMH source files for 'jvm' | +| `gradle jvmBenchmarkCsvBenchmark` | Execute benchmark for 'jvmBenchmark' | +| `gradle jvmBenchmarkFastBenchmark` | Execute benchmark for 'jvmBenchmark' | +| `gradle jvmBenchmarkForkBenchmark` | Execute benchmark for 'jvmBenchmark' | +| `gradle jvmBenchmarkGenerate` | Generate JMH source files for 'jvm' | +| `gradle jvmBenchmarkJar` | Build JAR for JMH compiled 
files for 'jvm' | +| `gradle jvmBenchmarkParamsBenchmark` | Execute benchmark for 'jvmBenchmark' | +| `gradle jvmCsvBenchmark` | Execute benchmark for 'jvm' | +| `gradle jvmFastBenchmark` | Execute benchmark for 'jvm' | +| `gradle jvmForkBenchmark` | Execute benchmark for 'jvm' | +| `gradle jvmParamsBenchmark` | Execute benchmark for 'jvm' | +| `gradle linkNativeBenchmarkReleaseExecutableNative` | Compile Native benchmark source files for 'native' | +| `gradle nativeBenchmark` | Executes benchmark for 'native' | +| `gradle nativeBenchmarkGenerate` | Generate Native source files for 'native' | +| `gradle nativeCsvBenchmark` | Executes benchmark for 'native' | +| `gradle nativeFastBenchmark` | Executes benchmark for 'native' | +| `gradle nativeForkBenchmark` | Executes benchmark for 'native' | +| `gradle nativeParamsBenchmark` | Executes benchmark for 'native' | +| `gradle paramsBenchmark` | Execute all benchmarks in a project | +| `gradle wasmBenchmark` | Executes benchmark for 'wasm' with D8 | +| `gradle wasmBenchmarkGenerate` | Generate Wasm source files for 'wasm' | +| `gradle wasmCsvBenchmark` | Executes benchmark for 'wasm' with D8 | +| `gradle wasmFastBenchmark` | Executes benchmark for 'wasm' with D8 | +| `gradle wasmForkBenchmark` | Executes benchmark for 'wasm' with D8 | +| `gradle wasmParamsBenchmark` | Executes benchmark for 'wasm' with D8 | + +## Want to learn more? + +Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. 
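
The `csv`, `fast`, `fork`, and `params` task variants listed above come from extra named configurations in this example's `benchmark` block. A hedged sketch of how such configurations might be declared — all names and values here are illustrative, not copied from the example:

```kotlin
benchmark {
    configurations {
        register("csv") {
            reportFormat = "csv"        // emit CSV reports instead of the default JSON
        }
        register("fast") {
            warmups = 5
            iterations = 3
            iterationTime = 500
            iterationTimeUnit = "ms"
        }
        register("fork") {
            advanced("jvmForks", 2)     // JVM-only: number of forked JVM processes
        }
        register("params") {
            param("size", 100, 1000)    // values for a hypothetical @Param("size") field
        }
    }
}
```

Each configuration name then combines with the registered targets to produce the per-target task variants shown in the table.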
diff --git a/examples/kotlin/README.md b/examples/kotlin/README.md new file mode 100644 index 00000000..e918cd75 --- /dev/null +++ b/examples/kotlin/README.md @@ -0,0 +1,35 @@ +# Kotlin Example + +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Kotlin/kotlinx-benchmark) + +## Project Structure + +Inside of this example, you'll see the following folders and files: + +``` +/ +├── build.gradle +├── benchmarks/ +│ └── src/ +│ └── TestBenchmark.kt +└── main/ + └── src/ + └── TestData.kt +``` + +## Tasks + +All tasks can be run from the root of the project, from a terminal: + +| Task Name | Action | +| --- | --- | +| `gradle assembleBenchmarks` | Generate and build all benchmarks in the project | +| `gradle benchmark` | Execute all benchmarks in the project | +| `gradle benchmarksBenchmark` | Execute benchmark for 'benchmarks' | +| `gradle benchmarksBenchmarkCompile` | Compile JMH source files for 'benchmarks' | +| `gradle benchmarksBenchmarkGenerate` | Generate JMH source files for 'benchmarks' | +| `gradle benchmarksBenchmarkJar` | Build JAR for JMH compiled files for 'benchmarks' | + +## Want to learn more? + +Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. 
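
Unlike the other JVM examples, the benchmarks here live in a dedicated `benchmarks` source set next to `main`, which is why the task names carry the `benchmarks` prefix. The example itself uses a Groovy `build.gradle`; the following Kotlin DSL sketch shows roughly the wiring such a layout needs (details may differ from the actual example):

```kotlin
// Sketch: a separate 'benchmarks' source set that can see the 'main' classes
sourceSets.create("benchmarks")

dependencies {
    "benchmarksImplementation"(sourceSets.main.get().output)
    "benchmarksImplementation"(sourceSets.main.get().runtimeClasspath)
    "benchmarksImplementation"("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8")
}

benchmark {
    targets {
        register("benchmarks")  // yields tasks such as 'benchmarksBenchmark'
    }
}
```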
From e1d7432119b1c19ede3c60bc8b14bf9633cee72f Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Wed, 5 Jul 2023 23:46:43 -0700 Subject: [PATCH 18/30] docs: improve descriptions and setup steps --- examples/README.md | 56 ++++--------- examples/java/README.md | 20 ++--- examples/kotlin-kts/README.md | 12 +-- examples/kotlin-multiplatform/README.md | 104 ++++++++++++------------ examples/kotlin/README.md | 12 +-- 5 files changed, 90 insertions(+), 114 deletions(-) diff --git a/examples/README.md b/examples/README.md index 365639e6..56e5e6a1 100644 --- a/examples/README.md +++ b/examples/README.md @@ -1,59 +1,35 @@ # kotlinx-benchmark Examples Guide -This guide is designed to help you navigate, set up, and run the benchmark examples provided here. Whether you're a seasoned developer or new to Kotlin and benchmarking, we've got you covered. Let's dive in and explore these practical examples together. - -## Prerequisites - -Before you begin, ensure you have the following installed on your local machine: - -- Git: Used to clone the repository. You can download it from [here](https://git-scm.com/downloads). -- Gradle: Used to build the projects. You can download it from [here](https://gradle.org/install/). Note that the projects come with a Gradle wrapper, so this is optional. +This guide is specifically designed for experienced Kotlin developers. It aims to help you smoothly navigate and run the benchmark examples included in this repository. ## Getting Started -1. **Clone the Repository**: Clone the `kotlinx-benchmark` repository to your local machine by running the following command in your terminal: - - ``` - git clone https://github.com/Kotlin/kotlinx-benchmark.git - ``` - -2. 
**Navigate to the Examples Directory**: Once the repository is cloned, navigate to the `examples` directory by running: +To begin, you'll need to clone the `kotlinx-benchmark` repository to your local machine: - ``` - cd kotlinx-benchmark/examples - ``` +``` +git clone https://github.com/Kotlin/kotlinx-benchmark.git +``` ## Running the Examples -Each example is a separate project that can be built and run independently. Here's how you can do it: +Each example in this repository is an autonomous project, encapsulated in its own environment. Reference the [tasks-overview](../docs/tasks-overview.md) for a detailed list and explanation of available tasks. -1. **Navigate to the Example Directory**: Navigate to the directory of the example you want to run. For instance, if you want to run the `kotlin-kts` example, you would run: +To execute all benchmarks for a specific example, you'll use the following command structure: - ``` - cd kotlin-kts - ``` +``` +./gradlew :examples:[example-name]:benchmark +``` -2. **Build the Project**: Each project uses Gradle as a build tool. If you have Gradle installed on your machine, you can build the project by running: +Here, `[example-name]` is the name of the example you wish to benchmark. For instance, to run benchmarks for the `kotlin-kts` example, the command would be: - ``` - gradle build - ``` +``` +./gradlew :examples:kotlin-kts:benchmark +``` -3. **Run the Benchmark**: After the project is built, you can run the benchmark by executing: - - ``` - gradle benchmark - ``` - -Repeat these steps for each example you want to run. +This pattern applies to all examples in the repository. ## Troubleshooting -If you encounter any issues while setting up or running the examples, please check the following: - -- Ensure you have all the prerequisites installed and they are added to your system's PATH. -- Make sure you are running the commands in the correct directory. 
- -If you're still having issues, feel free to open an issue on the [kotlinx-benchmark repository](https://github.com/Kotlin/kotlinx-benchmark/issues). +In case of any issues encountered while setting up or running the benchmarks, verify that you're executing commands from the correct directory. For persisting issues, don't hesitate to open an [issue](https://github.com/Kotlin/kotlinx-benchmark/issues). Happy benchmarking! diff --git a/examples/java/README.md b/examples/java/README.md index 2b59fdeb..84fc8c53 100644 --- a/examples/java/README.md +++ b/examples/java/README.md @@ -18,19 +18,19 @@ Inside of this example, you'll see the following folders and files: ## Tasks -All tasks can be run from the root of the project, from a terminal: +All tasks can be run from the root of the library: | Task Name | Action | | --- | --- | -| `gradle assembleBenchmarks` | Generate and build all benchmarks in the project | -| `gradle benchmark` | Execute all benchmarks in the project | -| `gradle mainBenchmark` | Execute benchmark for 'main' | -| `gradle mainBenchmarkCompile` | Compile JMH source files for 'main' | -| `gradle mainBenchmarkGenerate` | Generate JMH source files for 'main' | -| `gradle mainBenchmarkJar` | Build JAR for JMH compiled files for 'main' | -| `gradle mainSingleParamBenchmark` | Execute benchmark for 'main' | -| `gradle singleParamBenchmark` | Execute all benchmarks in the project | +| `assembleBenchmarks` | Generate and build all benchmarks in the project | +| `benchmark` | Execute all benchmarks in the project | +| `mainBenchmark` | Execute benchmark for the 'main' source set | +| `mainBenchmarkCompile` | Compile JMH source files for the 'main' source set | +| `mainBenchmarkGenerate` | Generate JMH source files for the 'main' source set | +| `mainBenchmarkJar` | Build JAR for JMH compiled files for the 'main' source set | +| `mainSingleParamBenchmark` | Execute benchmark for the 'main' source set with the 'singleParam' configuration | +| 
`singleParamBenchmark` | Execute all benchmarks in the project with the 'singleParam' configuration | ## Want to learn more? -Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. +Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. \ No newline at end of file diff --git a/examples/kotlin-kts/README.md b/examples/kotlin-kts/README.md index 40ed9db8..d3121ef8 100644 --- a/examples/kotlin-kts/README.md +++ b/examples/kotlin-kts/README.md @@ -20,12 +20,12 @@ All tasks can be run from the root of the project, from a terminal: | Task Name | Action | | --- | --- | -| `gradle assembleBenchmarks` | Generate and build all benchmarks in the project | -| `gradle benchmark` | Execute all benchmarks in the project | -| `gradle mainBenchmark` | Execute benchmark for 'benchmarks' | -| `gradle mainBenchmarkCompile` | Compile JMH source files for 'benchmarks' | -| `gradle mainBenchmarkGenerate` | Generate JMH source files for 'benchmarks' | -| `gradle mainBenchmarkJar` | Build JAR for JMH compiled files for 'benchmarks' | +| `assembleBenchmarks` | Generate and build all benchmarks in the project | +| `benchmark` | Execute all benchmarks in the project | +| `mainBenchmark` | Execute benchmark for 'benchmarks' | +| `mainBenchmarkCompile` | Compile JMH source 
files for 'benchmarks' | +| `mainBenchmarkGenerate` | Generate JMH source files for 'benchmarks' | +| `mainBenchmarkJar` | Build JAR for JMH compiled files for 'benchmarks' | ## Want to learn more? diff --git a/examples/kotlin-multiplatform/README.md b/examples/kotlin-multiplatform/README.md index 98c5c3cc..8da2d9ae 100644 --- a/examples/kotlin-multiplatform/README.md +++ b/examples/kotlin-multiplatform/README.md @@ -43,61 +43,61 @@ Inside of this example, you'll see the following folders and files: ## Tasks -All tasks can be run from the root of the project, from a terminal: +All tasks can be run from the root of the library, from a terminal: | Task Name | Action | | --- | --- | -| `gradle assembleBenchmarks` | Generate and build all benchmarks in the project | -| `gradle benchmark` | Execute all benchmarks in the project | -| `gradle compileJsIrBenchmarkKotlinJsIr` | Compile JS benchmark source files for 'jsIr' | -| `gradle compileJsIrBuiltInBenchmarkKotlinJsIrBuiltIn` | Compile JS benchmark source files for 'jsIrBuiltIn' | -| `gradle compileWasmBenchmarkKotlinWasm` | Compile Wasm benchmark source files for 'wasm' | -| `gradle csvBenchmark` | Execute all benchmarks in a project | -| `gradle fastBenchmark` | Execute all benchmarks in a project | -| `gradle forkBenchmark` | Execute all benchmarks in a project | -| `gradle jsIrBenchmark` | Executes benchmark for 'jsIr' with NodeJS | -| `gradle jsIrBenchmarkGenerate` | Generate JS source files for 'jsIr' | -| `gradle jsIrBuiltInBenchmark` | Executes benchmark for 'jsIrBuiltIn' with NodeJS | -| `gradle jsIrBuiltInBenchmarkGenerate` | Generate JS source files for 'jsIrBuiltIn' | -| `gradle jsIrBuiltInCsvBenchmark` | Executes benchmark for 'jsIrBuiltIn' with NodeJS | -| `gradle jsIrBuiltInFastBenchmark` | Executes benchmark for 'jsIrBuiltIn' with NodeJS | -| `gradle jsIrBuiltInForkBenchmark` | Executes benchmark for 'jsIrBuiltIn' with NodeJS | -| `gradle jsIrBuiltInParamsBenchmark` | Executes benchmark for 'jsIrBuiltIn' 
with NodeJS | -| `gradle jsIrCsvBenchmark` | Executes benchmark for 'jsIr' with NodeJS | -| `gradle jsIrFastBenchmark` | Executes benchmark for 'jsIr' with NodeJS | -| `gradle jsIrForkBenchmark` | Executes benchmark for 'jsIr' with NodeJS | -| `gradle jsIrParamsBenchmark` | Executes benchmark for 'jsIr' with NodeJS | -| `gradle jvmBenchmark` | Execute benchmark for 'jvm' | -| `gradle jvmBenchmarkBenchmark` | Execute benchmark for 'jvmBenchmark' | -| `gradle jvmBenchmarkBenchmarkCompile` | Compile JMH source files for 'jvmBenchmark' | -| `gradle jvmBenchmarkBenchmarkGenerate` | Generate JMH source files for 'jvmBenchmark' | -| `gradle jvmBenchmarkBenchmarkJar` | Build JAR for JMH compiled files for 'jvmBenchmark' | -| `gradle jvmBenchmarkCompile` | Compile JMH source files for 'jvm' | -| `gradle jvmBenchmarkCsvBenchmark` | Execute benchmark for 'jvmBenchmark' | -| `gradle jvmBenchmarkFastBenchmark` | Execute benchmark for 'jvmBenchmark' | -| `gradle jvmBenchmarkForkBenchmark` | Execute benchmark for 'jvmBenchmark' | -| `gradle jvmBenchmarkGenerate` | Generate JMH source files for 'jvm' | -| `gradle jvmBenchmarkJar` | Build JAR for JMH compiled files for 'jvm' | -| `gradle jvmBenchmarkParamsBenchmark` | Execute benchmark for 'jvmBenchmark' | -| `gradle jvmCsvBenchmark` | Execute benchmark for 'jvm' | -| `gradle jvmFastBenchmark` | Execute benchmark for 'jvm' | -| `gradle jvmForkBenchmark` | Execute benchmark for 'jvm' | -| `gradle jvmParamsBenchmark` | Execute benchmark for 'jvm' | -| `gradle linkNativeBenchmarkReleaseExecutableNative` | Compile Native benchmark source files for 'native' | -| `gradle nativeBenchmark` | Executes benchmark for 'native' | -| `gradle nativeBenchmarkGenerate` | Generate Native source files for 'native' | -| `gradle nativeCsvBenchmark` | Executes benchmark for 'native' | -| `gradle nativeFastBenchmark` | Executes benchmark for 'native' | -| `gradle nativeForkBenchmark` | Executes benchmark for 'native' | -| `gradle nativeParamsBenchmark` | 
Executes benchmark for 'native' | -| `gradle paramsBenchmark` | Execute all benchmarks in a project | -| `gradle wasmBenchmark` | Executes benchmark for 'wasm' with D8 | -| `gradle wasmBenchmarkGenerate` | Generate Wasm source files for 'wasm' | -| `gradle wasmCsvBenchmark` | Executes benchmark for 'wasm' with D8 | -| `gradle wasmFastBenchmark` | Executes benchmark for 'wasm' with D8 | -| `gradle wasmForkBenchmark` | Executes benchmark for 'wasm' with D8 | -| `gradle wasmParamsBenchmark` | Executes benchmark for 'wasm' with D8 | +| `assembleBenchmarks` | Generates and builds all benchmarks in the project. | +| `benchmark` | Executes all benchmarks in the project. | +| `compileJsIrBenchmarkKotlinJsIr` | Compiles the source files for 'jsIr' benchmark. | +| `compileJsIrBuiltInBenchmarkKotlinJsIrBuiltIn` | Compiles the source files for 'jsIrBuiltIn' benchmark. | +| `compileWasmBenchmarkKotlinWasm` | Compiles the source files for 'wasm' benchmark. | +| `csvBenchmark` | Executes all benchmarks in the project with the CSV configuration. | +| `fastBenchmark` | Executes all benchmarks in the project with the Fast configuration. | +| `forkBenchmark` | Executes all benchmarks in the project with the Fork configuration. | +| `jsIrBenchmark` | Executes benchmark for the 'jsIr' source set. | +| `jsIrBenchmarkGenerate` | Generates source files for the 'jsIr' source set. | +| `jsIrBuiltInBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set. | +| `jsIrBuiltInBenchmarkGenerate` | Generates source files for the 'jsIrBuiltIn' source set. | +| `jsIrBuiltInCsvBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set with the CSV configuration. | +| `jsIrBuiltInFastBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set with the Fast configuration. | +| `jsIrBuiltInForkBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set with the Fork configuration. 
|
+| `jsIrBuiltInParamsBenchmark` | Executes benchmark for the 'jsIrBuiltIn' source set with the Params configuration. |
+| `jsIrCsvBenchmark` | Executes benchmark for the 'jsIr' source set with the CSV configuration. |
+| `jsIrFastBenchmark` | Executes benchmark for the 'jsIr' source set with the Fast configuration. |
+| `jsIrForkBenchmark` | Executes benchmark for the 'jsIr' source set with the Fork configuration. |
+| `jsIrParamsBenchmark` | Executes benchmark for the 'jsIr' source set with the Params configuration. |
+| `jvmBenchmark` | Executes benchmark for the 'jvm' source set. |
+| `jvmBenchmarkBenchmark` | Executes benchmark for the 'jvmBenchmark' source set. |
+| `jvmBenchmarkBenchmarkCompile` | Compiles the source files for 'jvmBenchmark'. |
+| `jvmBenchmarkBenchmarkGenerate` | Generates source files for the 'jvmBenchmark' source set. |
+| `jvmBenchmarkBenchmarkJar` | Builds the JAR for 'jvmBenchmark' compiled files. |
+| `jvmBenchmarkCompile` | Compiles the source files for the 'jvm' benchmark. |
+| `jvmBenchmarkCsvBenchmark` | Executes benchmark for the 'jvmBenchmark' source set with the CSV configuration. |
+| `jvmBenchmarkFastBenchmark` | Executes benchmark for the 'jvmBenchmark' source set with the Fast configuration. |
+| `jvmBenchmarkForkBenchmark` | Executes benchmark for the 'jvmBenchmark' source set with the Fork configuration. |
+| `jvmBenchmarkGenerate` | Generates source files for the 'jvm' source set. |
+| `jvmBenchmarkJar` | Builds the JAR for 'jvm' compiled files. |
+| `jvmBenchmarkParamsBenchmark` | Executes benchmark for the 'jvmBenchmark' source set with the Params configuration. |
+| `jvmCsvBenchmark` | Executes benchmark for the 'jvm' source set with the CSV configuration. |
+| `jvmFastBenchmark` | Executes benchmark for the 'jvm' source set with the Fast configuration. |
+| `jvmForkBenchmark` | Executes benchmark for the 'jvm' source set with the Fork configuration.
| +| `jvmParamsBenchmark` | Executes benchmark for the 'jvm' source set with the Params configuration. | +| `linkNativeBenchmarkReleaseExecutableNative` | Compiles the source files for 'native' benchmark. | +| `nativeBenchmark` | Executes benchmark for the 'native' source set. | +| `nativeBenchmarkGenerate` | Generates source files for the 'native' source set. | +| `nativeCsvBenchmark` | Executes benchmark for the 'native' source set with the CSV configuration. | +| `nativeFastBenchmark` | Executes benchmark for the 'native' source set with the Fast configuration. | +| `nativeForkBenchmark` | Executes benchmark for the 'native' source set with the Fork configuration. | +| `nativeParamsBenchmark` | Executes benchmark for the 'native' source set with the Params configuration. | +| `paramsBenchmark` | Executes all benchmarks in the project with the Params configuration. | +| `wasmBenchmark` | Executes benchmark for the 'wasm' source set. | +| `wasmBenchmarkGenerate` | Generates source files for the 'wasm' source set. | +| `wasmCsvBenchmark` | Executes benchmark for the 'wasm' source set with the CSV configuration. | +| `wasmFastBenchmark` | Executes benchmark for the 'wasm' source set with the Fast configuration. | +| `wasmForkBenchmark` | Executes benchmark for the 'wasm' source set with the Fork configuration. | +| `wasmParamsBenchmark` | Executes benchmark for the 'wasm' source set with the Params configuration. | ## Want to learn more? -Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. 
+Feel free to engage in benchmarking discussions on the `#benchmarks` channel on [Kotlinlang Slack](https://kotlinlang.org/community/slack), explore the `kotlinx-benchmark` tagged questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/kotlinx-benchmark), or dive into the [kotlinx-benchmark Github Discussions](https://github.com/Kotlin/kotlinx-benchmark/discussions) for more insights and interactions. \ No newline at end of file diff --git a/examples/kotlin/README.md b/examples/kotlin/README.md index e918cd75..0f16cf3a 100644 --- a/examples/kotlin/README.md +++ b/examples/kotlin/README.md @@ -23,12 +23,12 @@ All tasks can be run from the root of the project, from a terminal: | Task Name | Action | | --- | --- | -| `gradle assembleBenchmarks` | Generate and build all benchmarks in the project | -| `gradle benchmark` | Execute all benchmarks in the project | -| `gradle benchmarksBenchmark` | Execute benchmark for 'benchmarks' | -| `gradle benchmarksBenchmarkCompile` | Compile JMH source files for 'benchmarks' | -| `gradle benchmarksBenchmarkGenerate` | Generate JMH source files for 'benchmarks' | -| `gradle benchmarksBenchmarkJar` | Build JAR for JMH compiled files for 'benchmarks' | +| `assembleBenchmarks` | Generate and build all benchmarks in the project | +| `benchmark` | Execute all benchmarks in the project | +| `benchmarksBenchmark` | Execute benchmark for 'benchmarks' | +| `benchmarksBenchmarkCompile` | Compile JMH source files for 'benchmarks' | +| `benchmarksBenchmarkGenerate` | Generate JMH source files for 'benchmarks' | +| `benchmarksBenchmarkJar` | Build JAR for JMH compiled files for 'benchmarks' | ## Want to learn more? 
From b844a6493fff18caadead6b1913cac32cbb2fc46 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Sun, 9 Jul 2023 21:35:12 -0700 Subject: [PATCH 19/30] Add CONTRIBUTING.md and bug report template --- .github/ISSUE_TEMPLATE/bug_report.md | 23 +++++++++++ CONTRIBUTING.md | 62 ++++++++++++++++++++++++++++ 2 files changed, 85 insertions(+) create mode 100644 .github/ISSUE_TEMPLATE/bug_report.md diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 00000000..019ffb40 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,23 @@ +--- +name: Bug report +about: Create a report to help us improve +title: "" +labels: "" +assignees: "" +--- + +**Describe the bug** + +**To Reproduce** +Attach a code snippet or test data if possible. + +**Expected behavior** + +**Environment** + +- Kotlin version: [e.g. 1.3.30] +- Library version: [e.g. 0.11.0] +- Kotlin platforms: [e.g. JVM, JS, Native or their combinations] +- Gradle version: [e.g. 4.10] +- IDE version (if bug is related to the IDE) [e.g. IntellijIDEA 2019.1, Android Studio 3.4] +- Other relevant context [e.g. OS version, JRE version, ... ] diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index e69de29b..7b2c3c5e 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -0,0 +1,62 @@ +# Contributing Guidelines + +There are two main ways to contribute to the project — submitting issues and submitting +fixes/changes/improvements via pull requests. + +## Submitting issues + +Both bug reports and feature requests are welcome. +Submit issues [here](https://github.com/Kotlin/kotlinx-benchmark/issues). + +- Search for existing issues to avoid reporting duplicates. +- When submitting a bug report: + - Use a 'bug report' template when creating a new issue. + - Test it against the most recently released version. It might have been already fixed. + - By default, we assume that your problem reproduces in Kotlin/JVM. 
Please mention if the problem is
+    specific to a platform.
+  - Include the code that reproduces the problem. Provide the complete reproducer code, but minimize it as much as possible.
+  - However, don't put off reporting any unusual or rarely appearing issues just because you cannot consistently
+    reproduce them.
+  - If the bug is in behavior, then explain what behavior you expected and what you got instead.
+- When submitting a feature request:
+  - Use a 'feature request' template when creating a new issue.
+  - Explain why you need the feature — what's your use-case, what's your domain.
+  - Explaining the problem you're facing is more important than suggesting a solution.
+    Report your problem even if you don't have any proposed solution.
+  - If there is an alternative way to do what you need, then show the code of the alternative.
+
+## Submitting PRs
+
+We love PRs. Submit PRs [here](https://github.com/Kotlin/kotlinx-benchmark/pulls).
+However, please keep in mind that maintainers will have to support the resulting code of the project,
+so do familiarize yourself with the following guidelines.
+
+- If you fix documentation:
+  - If you plan extensive rewrites/additions to the docs, then please [contact the maintainers](#contacting-maintainers)
+    to coordinate the work in advance.
+- If you make any code changes:
+  - Follow the [Kotlin Coding Conventions](https://kotlinlang.org/docs/reference/coding-conventions.html).
+  - Use 4 spaces for indentation.
+  - Use imports with '\*'.
+  - Build the project to make sure it all works and passes the tests.
+- If you fix a bug:
+  - Write the test that reproduces the bug.
+  - Fixes without tests are accepted only in exceptional circumstances if it can be shown that writing the
+    corresponding test is too hard or otherwise impractical.
+  - Follow the style of writing tests that is used in this project:
+    name test functions as `testXxx`. Don't use backticks in test names.
+- Comment on the existing issue if you want to work on it. Ensure that the issue not only describes a problem,
+  but also describes a solution that has received positive feedback. Propose a solution if there isn't any.
+
+## Building
+
+This library is built with Gradle.
+
+- Run `./gradlew build` to build. It also runs all the tests.
+- Run `./gradlew <module>:check` to test the module you're currently working on, to speed things up during development.
+
+## Contacting maintainers
+
+- If something cannot be done, is not convenient, or does not work — submit an [issue](https://github.com/Kotlin/kotlinx-benchmark/issues).
+- "How to do something" questions — [StackOverflow](https://stackoverflow.com).
+- Discussions and general inquiries — use `#benchmarks` channel in [KotlinLang Slack](https://kotl.in/slack).

From cad3dfbf2d15ce4157b97a6aea5e9bd74192802b Mon Sep 17 00:00:00 2001
From: wldeh <62161211+wldeh@users.noreply.github.com>
Date: Sat, 22 Jul 2023 03:23:51 -0700
Subject: [PATCH 20/30] correct taks descriptions in readme

---
 examples/kotlin-kts/README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/examples/kotlin-kts/README.md b/examples/kotlin-kts/README.md
index d3121ef8..8f9c3893 100644
--- a/examples/kotlin-kts/README.md
+++ b/examples/kotlin-kts/README.md
@@ -22,10 +22,10 @@ All tasks can be run from the root of the project, from a terminal:
 | --- | --- |
 | `assembleBenchmarks` | Generate and build all benchmarks in the project |
 | `benchmark` | Execute all benchmarks in the project |
-| `mainBenchmark` | Execute benchmark for 'benchmarks' |
-| `mainBenchmarkCompile` | Compile JMH source files for 'benchmarks' |
-| `mainBenchmarkGenerate` | Generate JMH source files for 'benchmarks' |
-| `mainBenchmarkJar` | Build JAR for JMH compiled files for 'benchmarks' |
+| `mainBenchmark` | Execute benchmark for 'main' |
+| `mainBenchmarkCompile` | Compile JMH source files for 'main' |
+| `mainBenchmarkGenerate` | Generate JMH source
files for 'main' | +| `mainBenchmarkJar` | Build JAR for JMH compiled files for 'main' | ## Want to learn more? From f9830ccf6722639b12e864b8e0da007b8f1e42f9 Mon Sep 17 00:00:00 2001 From: Henok Woldesenbet <62161211+wldeh@users.noreply.github.com> Date: Sun, 13 Aug 2023 05:29:40 -0700 Subject: [PATCH 21/30] Improve configuration-options.md (#138) Co-authored-by: Abduqodiri Qurbonzoda --- README.md | 2 +- docs/configuration-options.md | 93 ++++++++++++++++++++++++++--------- 2 files changed, 70 insertions(+), 25 deletions(-) diff --git a/README.md b/README.md index 92da7c45..e45426cd 100644 --- a/README.md +++ b/README.md @@ -289,4 +289,4 @@ To help you better understand how to use the kotlinx-benchmark library, we've pr ## Contributing -We welcome contributions to kotlinx-benchmark! If you want to contribute, please refer to our Contribution Guidelines. +We welcome contributions to kotlinx-benchmark! If you want to contribute, please refer to our Contribution Guidelines. \ No newline at end of file diff --git a/docs/configuration-options.md b/docs/configuration-options.md index e1882c11..7222bdb7 100644 --- a/docs/configuration-options.md +++ b/docs/configuration-options.md @@ -1,33 +1,78 @@ # Mastering kotlinx-benchmark Configuration -Unleash the power of `kotlinx-benchmark` with our comprehensive guide, highlighting the breadth of configuration options that help fine-tune your benchmarking setup to suit your specific needs. Dive into the heart of the configuration process with both basic and advanced settings, offering a granular level of control to realize accurate, reliable performance measurements every time. +This is a comprehensive guide to configuration options that help fine-tune your benchmarking setup to suit your specific needs. -## Core Configuration Options: The Essential Settings +## The `configurations` Section -The `configurations` section of the `benchmark` block is where you control the parameters of your benchmark profiles. 
Each configuration offers a rich array of settings. Be aware that values defined in the build script will override those specified by annotations in the code. +The `configurations` section of the `benchmark` block serves as the control center for setting the parameters of your benchmark profiles. The library provides a default configuration profile named "main", which can be configured according to your needs just like any other profile. Here's a basic structure of how configurations can be set up: -| Option | Description | Possible Values | Corresponding Annotation | -| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------ | ------------------------ | -| `iterations` | Sets the number of iterations for measurements. | Integer | @BenchmarkMode | -| `warmups` | Sets the number of iterations for system warming, ensuring accurate measurements. | Integer | @Warmup | -| `iterationTime` | Sets the duration for each iteration, both measurement and warm-up. | Integer | @Measurement | -| `iterationTimeUnit` | Defines the unit for `iterationTime`. | "ns", "μs", "ms", "s", "m", "h", "d" | @Measurement | -| `outputTimeUnit` | Sets the unit for the results display. | "ns", "μs", "ms", "s", "m", "h", "d" | @OutputTimeUnit | -| `mode` | Selects "thrpt" for measuring the number of function calls per unit time or "avgt" for measuring the time per function call. | "thrpt", "avgt" | @BenchmarkMode | -| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. | Regex pattern | - | -| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. | Regex pattern | - | -| `param("name", "value1", "value2")` | Assigns values to a public mutable property, annotated with `@Param`. 
| Any string values | @Param | -| `reportFormat` | Defines the benchmark report's format options. | "json", "csv", "scsv", "text" | - | +```kotlin +// build.gradle.kts +benchmark { + configurations { + register("smoke") { + // Configure this configuration profile here + } + // here you can create additional profiles + } +} +``` -## Expert Configuration Options: The Power Settings +## Understanding Configuration Profiles -The power of kotlinx-benchmark extends beyond basic settings. Delve into platform-specific options for tighter control over your benchmarks: +Configuration profiles dictate the execution pattern of benchmarks: -| Option | Platform | Description | Possible Values | Corresponding Annotation | -| --------------------------------------------- | ---------------------- | ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------ | ------------------------ | -| `advanced("nativeFork", "value")` | Kotlin/Native | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | "perBenchmark", "perIteration" | - | -| `advanced("nativeGCAfterIteration", "value")` | Kotlin/Native | Triggers garbage collection after each iteration when set to `true`. | `true`, `false` | - | -| `advanced("jvmForks", "value")` | Kotlin/JVM | Determines how many times the harness should fork. | "0" (no fork), "1", "definedByJmh" (JMH decides) | @Fork | -| `advanced("jsUseBridge", "value")` | Kotlin/JS, Kotlin/Wasm | Disables the generation of benchmark bridges to stop inlining optimizations when set to `false`. | `true`, `false` | - | +- Utilize `include` and `exclude` options to select specific benchmarks for a profile. +- By default, every benchmark is included. +- Each configuration profile translates to a task in the `kotlinx-benchmark` Gradle plugin. 
For instance, the task `smokeBenchmark` is tailored to run benchmarks based on the `"smoke"` configuration profile. For an overview of tasks, refer to [tasks-overview.md](tasks-overview.md). -With this guide at your side, you're ready to optimize your benchmarking process with `kotlinx-benchmark`. Happy benchmarking! \ No newline at end of file +## Core Configuration Options + +Note that values defined in the build script take precedence over those specified by annotations in the code. + +| Option | Description | Possible Values | Corresponding Annotation | +| ----------------------------------- |------------------------------------------------------------------------------------------------------------------------------|-------------------------------|-----------------------------------------------------| +| `iterations` | Sets the number of iterations for measurements. | Integer | @Measurement(iterations: Int, ...) | +| `warmups` | Sets the number of iterations for system warming, ensuring accurate measurements. | Integer | @Warmup(iterations: Int) | +| `iterationTime` | Sets the duration for each iteration, both measurement and warm-up. | Integer | @Measurement(..., time: Int, ...) | +| `iterationTimeUnit` | Defines the unit for `iterationTime`. | Time unit, see below | @Measurement(..., timeUnit: BenchmarkTimeUnit, ...) | +| `outputTimeUnit` | Sets the unit for the results display. | Time unit, see below | @OutputTimeUnit(value: BenchmarkTimeUnit) | +| `mode` | Selects "thrpt" (Throughput) for measuring the number of function calls per unit time or "avgt" (AverageTime) for measuring the time per function call. | `thrpt`(default), `Throughput`(default), `avgt`, `AverageTime` | @BenchmarkMode | +| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. 
| Regex pattern | - | +| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. | Regex pattern | - | +| `param("name", "value1", "value2")` | Assigns values to a public mutable property with the specified name, annotated with `@Param`. | Any string values | @Param | +| `reportFormat` | Defines the benchmark report's format options. | `json`(default), `csv`, `scsv`, `text` | - | + +The following values can be used for specifying time unit: +- "NANOSECONDS", "ns", "nanos" +- "MICROSECONDS", "us", "micros" +- "MILLISECONDS", "ms", "millis" +- "SECONDS", "s", "sec" +- "MINUTES", "m", "min" + +## Platform-Specific Configuration Options + +The options listed in the following sections allow you to tailor the benchmark execution behavior for specific platforms: + +### Kotlin/Native +| Option | Description | Possible Values | Default Value | +|-----------------------------------------------|------------------------------------------------------------------------------------------------------------------------|--------------------------------|----------------| +| `advanced("nativeFork", "value")` | Executes iterations within the same process ("perBenchmark") or each iteration in a separate process ("perIteration"). | `perBenchmark`, `perIteration` | "perBenchmark" | +| `advanced("nativeGCAfterIteration", value)` | Whether to trigger garbage collection after each iteration. | `true`, `false` | `false` | + +### Kotlin/JVM +| Option | Description | Possible Values | Default Value | +|---------------------------------------------|------------------------------------------------------------|--------------------------------|----------------| +| `advanced("jvmForks", value)` | Specifies the number of times the harness should fork. | Integer, "definedByJmh" | `1` | + +**Notes on "jvmForks":** +- **0** - "no fork", i.e., no subprocesses are forked to run benchmarks. 
+- A positive integer value – the amount used for all benchmarks in this configuration. +- **"definedByJmh"** – Let JMH determine the amount, using the value in the [`@Fork` annotation](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/annotations/Fork.html) for the benchmark function or its enclosing class. If not specified by `@Fork`, it defaults to [Defaults.MEASUREMENT_FORKS (`5`)](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/runner/Defaults.html#MEASUREMENT_FORKS). + +### Kotlin/JS & Kotlin/Wasm +| Option | Description | Possible Values | Default Value | +|-----------------------------------------------|-------------------------------------------------------------------------------------------------------|-----------------|---------------| +| `advanced("jsUseBridge", value)` | Generate special benchmark bridges to stop inlining optimizations. | `true`, `false` | `true` | + +**Note:** "jsUseBridge" works only when the `BuiltIn` benchmark executor is selected. \ No newline at end of file From a5eb72acb151c74b235123c9c1fcc5ec324644ac Mon Sep 17 00:00:00 2001 From: Abduqodiri Qurbonzoda Date: Sun, 13 Aug 2023 15:50:31 +0300 Subject: [PATCH 22/30] fixup! Improve configuration-options.md (#138) --- docs/configuration-options.md | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-) diff --git a/docs/configuration-options.md b/docs/configuration-options.md index 7222bdb7..e3eb23e3 100644 --- a/docs/configuration-options.md +++ b/docs/configuration-options.md @@ -22,26 +22,25 @@ benchmark { Configuration profiles dictate the execution pattern of benchmarks: -- Utilize `include` and `exclude` options to select specific benchmarks for a profile. -- By default, every benchmark is included. +- Utilize `include` and `exclude` options to select specific benchmarks for a profile. By default, every benchmark is included. 
- Each configuration profile translates to a task in the `kotlinx-benchmark` Gradle plugin. For instance, the task `smokeBenchmark` is tailored to run benchmarks based on the `"smoke"` configuration profile. For an overview of tasks, refer to [tasks-overview.md](tasks-overview.md). ## Core Configuration Options Note that values defined in the build script take precedence over those specified by annotations in the code. -| Option | Description | Possible Values | Corresponding Annotation | -| ----------------------------------- |------------------------------------------------------------------------------------------------------------------------------|-------------------------------|-----------------------------------------------------| -| `iterations` | Sets the number of iterations for measurements. | Integer | @Measurement(iterations: Int, ...) | -| `warmups` | Sets the number of iterations for system warming, ensuring accurate measurements. | Integer | @Warmup(iterations: Int) | -| `iterationTime` | Sets the duration for each iteration, both measurement and warm-up. | Integer | @Measurement(..., time: Int, ...) | -| `iterationTimeUnit` | Defines the unit for `iterationTime`. | Time unit, see below | @Measurement(..., timeUnit: BenchmarkTimeUnit, ...) | -| `outputTimeUnit` | Sets the unit for the results display. | Time unit, see below | @OutputTimeUnit(value: BenchmarkTimeUnit) | -| `mode` | Selects "thrpt" (Throughput) for measuring the number of function calls per unit time or "avgt" (AverageTime) for measuring the time per function call. | `thrpt`(default), `Throughput`(default), `avgt`, `AverageTime` | @BenchmarkMode | -| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. | Regex pattern | - | -| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. 
| Regex pattern | - | -| `param("name", "value1", "value2")` | Assigns values to a public mutable property with the specified name, annotated with `@Param`. | Any string values | @Param | -| `reportFormat` | Defines the benchmark report's format options. | `json`(default), `csv`, `scsv`, `text` | - | +| Option | Description | Possible Values | Corresponding Annotation | +| ----------------------------------- |---------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------|-----------------------------------------------------| +| `iterations` | Sets the number of iterations for measurements. | Positive Integer | @Measurement(iterations: Int, ...) | +| `warmups` | Sets the number of iterations for system warming, ensuring accurate measurements. | Non-negative Integer | @Warmup(iterations: Int) | +| `iterationTime` | Sets the duration for each iteration, both measurement and warm-up. | Positive Integer | @Measurement(..., time: Int, ...) | +| `iterationTimeUnit` | Defines the unit for `iterationTime`. | Time unit, see below | @Measurement(..., timeUnit: BenchmarkTimeUnit, ...) | +| `outputTimeUnit` | Sets the unit for the results display. | Time unit, see below | @OutputTimeUnit(value: BenchmarkTimeUnit) | +| `mode` | Selects "thrpt" (Throughput) for measuring the number of function calls per unit time or "avgt" (AverageTime) for measuring the time per function call. | `thrpt`, `Throughput`, `avgt`, `AverageTime` | @BenchmarkMode(value: Mode) | +| `include("…")` | Applies a regular expression to include benchmarks that match the substring in their fully qualified names. | Regex pattern | - | +| `exclude("…")` | Applies a regular expression to exclude benchmarks that match the substring in their fully qualified names. 
| Regex pattern | - | +| `param("name", "value1", "value2")` | Assigns values to a public mutable property with the specified name, annotated with `@Param`. | String values that represent valid values for the property | @Param | +| `reportFormat` | Defines the benchmark report's format options. | `json`(default), `csv`, `scsv`, `text` | - | The following values can be used for specifying time unit: - "NANOSECONDS", "ns", "nanos" From ff74c248e759f9147d85939c99a40122e9402ac6 Mon Sep 17 00:00:00 2001 From: Henok Woldesenbet <62161211+wldeh@users.noreply.github.com> Date: Mon, 14 Aug 2023 22:01:02 -0700 Subject: [PATCH 23/30] Improve tasks-overview.md (#141) Co-authored-by: Abduqodiri Qurbonzoda --- docs/tasks-overview.md | 85 ++++++++++++++++++++++++++---------------- 1 file changed, 52 insertions(+), 33 deletions(-) diff --git a/docs/tasks-overview.md b/docs/tasks-overview.md index ce9adf79..a0c35bd1 100644 --- a/docs/tasks-overview.md +++ b/docs/tasks-overview.md @@ -1,45 +1,64 @@ -# Overview of Tasks for kotlinx-benchmark Plugin Across Different Platforms +## Overview of Tasks Provided by kotlinx-benchmark Gradle Plugin -This document describes the tasks generated by the kotlinx-benchmark plugin when used with different Kotlin and JVM platforms. Understanding these tasks can help you utilize them more effectively in your benchmarking projects. The tasks are divided into two categories: those that apply to all targets, and those that are target-specific. +The kotlinx-benchmark plugin creates different Gradle tasks depending on how it is configured. +For each pair of configuration profile and registered target a task is created to execute that profile on the respective platform. +To learn more about configuration profiles, refer to [configuration-options.md](configuration-options.md). -## General Tasks +### Example Configuration -These tasks are not platform-dependent and thus, are used across all targets. 
+To illustrate, consider the following `kotlinx-benchmark` configuration: -| Task | Description | -| ---------------------- | -------------------------------------------------------------------------------------------------------------------- | -| **assembleBenchmarks** | Generates and builds all benchmarks in the project, serving as a dependency for other benchmark tasks. | -| **benchmark** | Executes all benchmarks in the project. It depends on `assembleBenchmarks` to ensure benchmarks are ready and built. | +```kotlin +// build.gradle.kts +benchmark { + configurations { + named("main") { + iterations = 20 + warmups = 20 + iterationTime = 1 + iterationTimeUnit = "s" + } + register("smoke") { + include("Essential") + iterations = 10 + warmups = 10 + iterationTime = 200 + iterationTimeUnit = "ms" + } + } -## Java & Kotlin/JVM Specific Tasks + targets { + register("jvm") + register("js") + } +} +``` -| Task | Description | -| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | -| **{configName}Benchmark** | Executes all benchmarks under the specific configuration. Useful when different benchmarking requirements exist for different parts of the application. | -| **{configName}BenchmarkGenerate** | Generates JMH source files for the specified configuration. | -| **{configName}BenchmarkCompile** | Compiles the JMH source files generated for a specific configuration, transforming them into machine code for JVM execution. | -| **{configName}BenchmarkJar** | Packages the compiled JMH files into a JAR (Java Archive) file for distribution and execution. | +## Tasks for the "main" Configuration Profile -## Kotlin/JS Specific Tasks +- **`benchmark`**: + - Runs benchmarks within the "main" profile for all registered targets. + - In our example, `benchmark` runs benchmarks within the "main" profile in both `jvm` and `js` targets. 
-| Task                                               | Description                                                                                                                                      |
-| -------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
-| **compile{configName}BenchmarkKotlin{configName}** | Compiles JS benchmark source files for the specified JS target. This includes setting up dependencies and setting up Kotlin compilation options. |
-| **{configName}Benchmark**                          | Executes all benchmarks for the specified JS target.                                                                                             |
-| **{configName}BenchmarkGenerate**                  | Generates JS source files for the specified JS target. These source files will be used in the benchmarking process.                              |
+- **`<target>Benchmark`**:
+  - Runs benchmarks within the "main" profile for a particular target.
+  - In our example, `jvmBenchmark` runs benchmarks within the "main" profile in the `jvm` target, while `jsBenchmark` runs them in the `js` target.

-## Kotlin/WASM Specific Tasks
+## Tasks for Custom Configuration Profiles

-| Task                                               | Description                                                                                                                                    |
-| -------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
-| **compile{configName}BenchmarkKotlin{configName}** | Compiles Wasm benchmark source files for the specified Wasm target. This includes setting up dependencies and compiling the benchmarking code. |
-| **{configName}Benchmark**                          | Executes all benchmarks for the specified Wasm target.                                                                                         |
-| **{configName}BenchmarkGenerate**                  | Generates Wasm source files for the specified Wasm target. These source files will be used in the benchmarking process.                        |
+- **`<config>Benchmark`**:
+  - Runs benchmarks within the `<config>` profile in all registered targets.
+  - In our example, `smokeBenchmark` runs benchmarks within the "smoke" profile.

-## Kotlin/Native Specific Tasks
+- **`<target><Config>Benchmark`**:
+  - Runs benchmarks within the `<config>` profile in the `<target>` target.
+  - In our example, `jvmSmokeBenchmark` runs benchmarks within the "smoke" profile in the `jvm` target, while `jsSmokeBenchmark` runs them in the `js` target.

-| Task                                                       | Description                                                                                                                                                                            |
-| ---------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| **link{configName}BenchmarkReleaseExecutable{configName}** | Compiles the generated files and creates an executable. The entry point for the executable is the generated main function.                                                             |
-| **{configName}Benchmark**                                  | Executes the benchmarks for each benchmark configuration defined in the plugin extension corresponding to the specific config. For the "main" configuration, `configName` is dropped.  |
-| **{configName}BenchmarkGenerate**                          | Takes compiled user code, retrieves metadata and generates the code needed for measurement. This is a native-target-specific task (e.g., for `macosX64()`, `native` -> `macosX64`).    |
+## Other useful tasks
+
+- **`<target>BenchmarkJar`**:
+  - Created only when a Kotlin/JVM target is registered for benchmarking.
+  - Produces a self-contained executable JAR in the `build/benchmarks/<target>/jars/` directory of your project that contains your benchmarks in the `<target>` target, and all essential JMH infrastructure code.
+  - The JAR file can be run using the `java -jar path-to-the.jar` command with relevant options. Run with `-h` to see the available options.
+  - The JAR file can be used for running JMH profilers.
+  - In our example, `jvmBenchmarkJar` produces a JAR file in the `build/benchmarks/jvm/jars/` directory that contains benchmarks in the `jvm` target.
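+
+Putting the naming scheme together, the example configuration above would be driven from the command line roughly as follows (a sketch, assuming the Gradle wrapper is used; the JAR file name is illustrative, the actual name depends on your project):

```shell
# Run the "main" profile on every registered target (jvm and js)
./gradlew benchmark

# Run the "main" profile on a single target
./gradlew jvmBenchmark

# Run the "smoke" profile on all targets, or on the jvm target only
./gradlew smokeBenchmark
./gradlew jvmSmokeBenchmark

# Build the self-contained JMH JAR and list the options it accepts
./gradlew jvmBenchmarkJar
java -jar build/benchmarks/jvm/jars/benchmarks.jar -h  # JAR name is illustrative
```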
From 5857760fbfea3396debbc056a95b6c319fbe6570 Mon Sep 17 00:00:00 2001 From: Abduqodiri Qurbonzoda Date: Sat, 8 Jul 2023 14:15:16 +0300 Subject: [PATCH 24/30] [WIP] Improve README.md --- README.md | 363 +++++++++++++++++++++++++++++++++--------------------- 1 file changed, 222 insertions(+), 141 deletions(-) diff --git a/README.md b/README.md index e45426cd..970137a9 100644 --- a/README.md +++ b/README.md @@ -21,14 +21,16 @@ kotlinx-benchmark is a toolkit for running benchmarks for multiplatform code wri - [Using in Your Projects](#using-in-your-projects) - - [Gradle Setup](#gradle-setup) - - [Kotlin DSL](#kotlin-dsl) - - [Groovy DSL](#groovy-dsl) + - [Project Setup](#project-setup) - [Target-specific configurations](#target-specific-configurations) - [Kotlin/JVM](#kotlinjvm) - [Kotlin/JS](#kotlinjs) - - [Multiplatform](#multiplatform) - - [Benchmark Configuration](#benchmark-configuration) + - [Kotlin/Native](#kotlinnative) + - [Kotlin/WASM](#kotlinwasm) + - [Writing Benchmarks](#writing-benchmarks) + - [Running Benchmarks](#running-benchmarks) + - [Benchmark Configuration Profiles](#benchmark-configuration-profiles) + - [Separate source sets for benchmarks](#separate-source-sets-for-benchmarks) - [Examples](#examples) - [Contributing](#contributing) @@ -46,9 +48,12 @@ kotlinx-benchmark is a toolkit for running benchmarks for multiplatform code wri ## Using in Your Projects -The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, Kotlin/Native, and Kotlin/WASM (experimental) targets. To get started, ensure you're using Kotlin 1.8.20 or newer and Gradle 8.0 or newer. +The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, Kotlin/Native, and Kotlin/WASM (experimental) targets. +To get started, ensure you're using Kotlin 1.8.20 or newer and Gradle 8.0 or newer. -### Gradle Setup +### Project Setup + +Follow the steps below to set up a Kotlin Multiplatform project for benchmarking.
Kotlin DSL @@ -56,14 +61,42 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, 1. **Applying Benchmark Plugin**: Apply the benchmark plugin. ```kotlin + // build.gradle.kts plugins { id("org.jetbrains.kotlinx.benchmark") version "0.4.9" } ``` -2. **Specifying Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: +2. **Specifying Plugin Repository**: Ensure you have the Gradle Plugin Portal for plugin lookup in the list of repositories: + + ```kotlin + // settings.gradle.kts + pluginManagement { + repositories { + gradlePluginPortal() + } + } + ``` + +3. **Adding Runtime Dependency**: Next, add the `kotlinx-benchmark-runtime` dependency to the common source set: + + ```kotlin + // build.gradle.kts + kotlin { + sourceSets { + commonMain { + dependencies { + implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.9") + } + } + } + } + ``` + +4. **Specifying Runtime Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: ```kotlin + // build.gradle.kts repositories { mavenCentral() } @@ -77,15 +110,42 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, 1. **Applying Benchmark Plugin**: Apply the benchmark plugin. ```groovy + // build.gradle plugins { - id 'org.jetbrains.kotlin.plugin.allopen' version "1.8.21" id 'org.jetbrains.kotlinx.benchmark' version '0.4.9' } ``` -2. **Specifying Repository**: Ensure you have `mavenCentral()` in the list of repositories: +2. **Specifying Plugin Repository**: Ensure you have the Gradle Plugin Portal for plugin lookup in the list of repositories: + + ```groovy + // settings.gradle + pluginManagement { + repositories { + gradlePluginPortal() + } + } + ``` + +3. 
**Adding Runtime Dependency**: Next, add the `kotlinx-benchmark-runtime` dependency to the common source set: + + ```groovy + // build.gradle + kotlin { + sourceSets { + commonMain { + dependencies { + implementation 'org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.9' + } + } + } + } + ``` + +4. **Specifying Runtime Repository**: Ensure you have `mavenCentral()` for dependencies lookup in the list of repositories: ```groovy + // build.gradle repositories { mavenCentral() } @@ -95,197 +155,218 @@ The `kotlinx-benchmark` library is designed to work with Kotlin/JVM, Kotlin/JS, ### Target-specific configurations +To run benchmarks on a platform ensure your Kotlin Multiplatform project targets that platform. For different platforms, there may be distinct requirements and settings that need to be configured. +The guide below contains the steps needed to configure each supported platform for benchmarking. #### Kotlin/JVM -When benchmarking Kotlin/JVM code with Java Microbenchmark Harness (JMH), you should use the [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html). This plugin ensures your benchmark classes and methods are `open`, meeting JMH's requirements. Make sure to apply the jvm plugin. +To run benchmarks in Kotlin/JVM: +1. Create a JVM target: -```kotlin -plugins { - kotlin("jvm") version "1.8.21" - kotlin("plugin.allopen") version "1.8.21" -} + ```kotlin + // build.gradle.kts + kotlin { + jvm() + } + ``` -allOpen { - annotation("org.openjdk.jmh.annotations.State") -} -``` +2. Register `jvm` as a benchmark target: -
- Illustrative Example + ```kotlin + // build.gradle.kts + benchmark { + targets { + register("jvm") + } + } + ``` -Consider you annotated each of your benchmark classes with `@State(Scope.Benchmark)`: +3. Apply [allopen plugin](https://kotlinlang.org/docs/all-open-plugin.html) to ensure your benchmark classes and methods are `open`. -```kotlin -@State(Scope.Benchmark) -class MyBenchmark { - // Benchmarking-related methods and variables - fun benchmarkMethod() { - // benchmarking logic + ```kotlin + // build.gradle.kts + plugins { + kotlin("plugin.allopen") version "1.8.21" } -} -``` -In Kotlin, classes and methods are `final` by default, which means they can't be overridden. This is incompatible with the operation of the Java Microbenchmark Harness (JMH), which needs to generate subclasses for benchmarking. + allOpen { + annotation("org.openjdk.jmh.annotations.State") + } + ``` -This is where the `allopen` plugin comes into play. With the plugin applied, any class annotated with `@State` is treated as `open`, which allows JMH to work as intended. Here's the Kotlin DSL configuration for the `allopen` plugin: +
+ Explanation -```kotlin -plugins { - kotlin("plugin.allopen") version "1.8.21" -} + Consider you annotated each of your benchmark classes with `@State(Scope.Benchmark)`: -allOpen { - annotation("org.openjdk.jmh.annotations.State") -} -``` + ```kotlin + // MyBenchmark.kt + @State(Scope.Benchmark) + class MyBenchmark { + // Benchmarking-related methods and variables + fun benchmarkMethod() { + // benchmarking logic + } + } + ``` + + In Kotlin, classes are `final` by default, which means they can't be overridden. + This is incompatible with the operation of the Java Microbenchmark Harness (JMH), which kotlinx-benchmark uses under the hood for running benchmarks on JVM. + JMH requires benchmark classes and methods to be `open` to be able to generate subclasses and conduct the benchmark. + + This is where the `allopen` plugin comes into play. With the plugin applied, any class annotated with `@State` is treated as `open`, which allows JMH to work as intended: + + ```kotlin + // build.gradle.kts + plugins { + kotlin("plugin.allopen") version "1.8.21" + } + + allOpen { + annotation("org.openjdk.jmh.annotations.State") + } + ``` -This configuration ensures that your `MyBenchmark` class and its `benchmarkMethod` function are treated as `open`, allowing JMH to generate subclasses and conduct the benchmark. + This configuration ensures that your `MyBenchmark` class and its `benchmarkMethod` function are treated as `open`. -
+
-You can alternatively mark your benchmark classes and methods `open` manually, but using the `allopen` plugin enhances code maintainability. For a practical example, please refer to [examples](examples/kotlin-kts). + You can alternatively mark your benchmark classes and methods `open` manually, but using the `allopen` plugin enhances code maintainability. #### Kotlin/JS -Create a JS target with Node.js execution environment and register it as a benchmark target: +To run benchmarks in Kotlin/JS: +1. Create a JS target with Node.js execution environment: -```kotlin -kotlin { - js('jsIr', IR) { - nodejs() - } - js('jsIrBuiltIn', IR) { - nodejs() + ```kotlin + // build.gradle.kts + kotlin { + js(IR) { + nodejs() + } } + ``` + +2. Register `js` as a benchmark target: + + ```kotlin + // build.gradle.kts benchmark { targets { - register("jsIr") - register("jsIrBuiltIn") + register("js") } } -} -``` + ``` -This setup is using the [IR compiler](https://kotlinlang.org/docs/js-ir-compiler.html). `jsIr` and `jsIrBuiltIn` are both using the IR compiler. The former uses benchmark.js, while the latter uses Kotlin's built-in plugin. +For Kotlin/JS, only the [IR compiler backend](https://kotlinlang.org/docs/js-ir-compiler.html) is supported. -#### Multiplatform +#### Kotlin/Native -For multiplatform projects, add the `kotlinx-benchmark-runtime` dependency to the `commonMain` source set: +To run benchmarks in Kotlin/Native: +1. Create a Native target: -```kotlin -kotlin { - sourceSets { - commonMain { - dependencies { - implementation("org.jetbrains.kotlinx:kotlinx-benchmark-runtime:0.4.8") - } + ```kotlin + // build.gradle.kts + kotlin { + linuxX64("native") + } + ``` + +2. Register `native` as a benchmark target: + + ```kotlin + // build.gradle.kts + benchmark { + targets { + register("native") } } -} -``` + ``` + +This library supports all [targets supported by the Kotlin/Native compiler](https://kotlinlang.org/docs/native-target-support.html). 
-This setup enables running benchmarks in the main compilation of any registered targets. Another option is to register the compilation you want to run benchmarks from. The platform-specific artifacts will be resolved automatically. For a practical example, please refer to [examples](examples/multiplatform). +#### Kotlin/WASM -Note: Benchmark classes located in the common source set will be run in all platforms, while those located in a platform-specific source set will be run in the corresponding platform. +To run benchmarks in Kotlin/WASM: +1. Create a WASM target with D8 execution environment: + + ```kotlin + // build.gradle.kts + kotlin { + wasm { + d8() + } + } + ``` -Define your benchmark targets within the `benchmark` section in your `build.gradle` file: +2. Register `wasm` as a benchmark target: -```kotlin -benchmark { - targets { - register("jvm") - register("js") - register("native") - // Add this line if you are working with WebAssembly (experimental) - // register("wasm") + ```kotlin + // build.gradle.kts + benchmark { + targets { + register("wasm") + } } -} -``` + ``` + +Note: Kotlin/WASM is an experimental compilation target for Kotlin. It may be dropped or changed at any time. + +### Writing Benchmarks + +Now you can write your benchmarks. + +// A short introduction to writing benchmarks. + +Note: Benchmark classes located in the common source set will be run in all platforms, while those located in a platform-specific source set will be run only in the corresponding platform. + +See to for a complete guide for writing benchmarks. + +### Running Benchmarks -To further customize your benchmarks, add a `configurations` section within the `benchmark` block. By default, a `main` configuration is generated, but additional configurations can be added as needed: +To run your benchmarks in all registered platforms run `benchmark` Gradle task in your project. +To run in only in a specific platform run `Benchmark`, e.g., `jvmBenchmark`. 
+ +Learn more about the tasks kotlinx-benchmark plugin creates in [this guide](docs/tasks-overview.md). + +### Benchmark Configuration Profiles + +The kotlinx-benchmark library provides ability to create multiple configuration profiles. The `main` configuration is already created by Toolkit. +Additional profiles can be created as needed in the `configurations` section of the `benchmark` block: ```kotlin +// build.gradle.kts benchmark { configurations { - main { + named("main") { warmups = 20 iterations = 10 iterationTime = 3 iterationTimeUnit = "s" } - smoke { + register("smoke") { + include("") warmups = 5 iterations = 3 iterationTime = 500 iterationTimeUnit = "ms" } - register("native") - register("wasm") // Experimental } } ``` - -# Separate source sets for benchmarks - -Often you want to have benchmarks in the same project, but separated from main code, much like tests. Here is how: -Define source set: -```groovy -sourceSets { - benchmarks -} -``` - -Propagate dependencies and output from `main` sourceSet. - -```groovy -dependencies { - benchmarksCompile sourceSets.main.output + sourceSets.main.runtimeClasspath -} -``` +Refer to our [comprehensive guide](docs/configuration-options.md) to learn about configuration options and how they affect benchmark execution. -You can also add output and compileClasspath from `sourceSets.test` in the same way if you want -to reuse some of the test infrastructure. - - -Register `benchmarks` source set: - -```groovy -benchmark { - targets { - register("benchmarks") - } -} -``` - -For a Kotlin Multiplatform project: - -Define a new compilation in whichever target you'd like (e.g. 
`jvm`, `js`, etc): -```groovy -kotlin { - jvm { - compilations.create('benchmark') { associateWith(compilations.main) } - } -} -``` - -Register it by its source set name (`jvmBenchmark` is the name for the `benchmark` compilation for `jvm` target): - -```groovy -benchmark { - targets { - register("jvmBenchmark") - } -} -``` +### Separate source sets for benchmarks -For comprehensive guidance on configuring your benchmark setup, please refer to our detailed documentation on [Configuring kotlinx-benchmark](docs/configuration-options.md). +Often you want to have benchmarks in the same project, but separated from main code, much like tests. +Refer to our [detailed documentation](docs/separate-source-sets.md) on configuring your project to add a separate source set for benchmarks. -# Examples +## Examples -To help you better understand how to use the kotlinx-benchmark library, we've provided an [examples](examples) subproject. These examples showcase various use cases and offer practical insights into the library's functionality. +To help you better understand how to use the kotlinx-benchmark library, we've provided an [examples](examples) subproject. +These examples showcase various use cases and offer practical insights into the library's functionality. 
## Contributing From 24e0555d40132a17ddb7764bc9a8e8882dfbb4aa Mon Sep 17 00:00:00 2001 From: Abduqodiri Qurbonzoda Date: Sat, 8 Jul 2023 14:45:31 +0300 Subject: [PATCH 25/30] [WIP] Fix the reference to build status logo --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 970137a9..6e7762ba 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ [![Kotlin Alpha](https://kotl.in/badges/alpha.svg)](https://kotlinlang.org/docs/components-stability.html) [![JetBrains incubator project](https://jb.gg/badges/incubator.svg)](https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub) [![GitHub license](https://img.shields.io/badge/license-Apache%20License%202.0-blue.svg?style=flat)](https://www.apache.org/licenses/LICENSE-2.0) -[![Build status]()](https://teamcity.jetbrains.com/viewType.html?buildTypeId=KotlinTools_KotlinxBenchmark_Build_All) +[![Build status](https://teamcity.jetbrains.com/guestAuth/app/rest/builds/buildType:(id:KotlinTools_KotlinxBenchmark_Build_All)/statusIcon.svg)](https://teamcity.jetbrains.com/viewType.html?buildTypeId=KotlinTools_KotlinxBenchmark_Build_All) [![Maven Central](https://img.shields.io/maven-central/v/org.jetbrains.kotlinx/kotlinx-benchmark-runtime.svg?label=Maven%20Central)](https://search.maven.org/search?q=g:%22org.jetbrains.kotlinx%22%20AND%20a:%22kotlinx-benchmark-runtime%22) [![Gradle Plugin Portal](https://img.shields.io/maven-metadata/v?label=Gradle%20Plugin&metadataUrl=https://plugins.gradle.org/m2/org/jetbrains/kotlinx/kotlinx-benchmark-plugin/maven-metadata.xml)](https://plugins.gradle.org/plugin/org.jetbrains.kotlinx.benchmark) [![IR](https://img.shields.io/badge/Kotlin%2FJS-IR%20supported-yellow)](https://kotl.in/jsirsupported) From 87ea754e36304bb8856f325395e34af8aafe7e36 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Fri, 21 Jul 2023 03:14:49 +0300 Subject: [PATCH 26/30] Polish README.md --- README.md | 85 
+++++++++++++++++++++++++++++++++++++++++++++++++------ 1 file changed, 76 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 6e7762ba..4d0a79dd 100644 --- a/README.md +++ b/README.md @@ -198,7 +198,7 @@ To run benchmarks in Kotlin/JVM:
   Explanation
 
-  Consider you annotated each of your benchmark classes with `@State(Scope.Benchmark)`:
+  Assume that you've annotated each of your benchmark classes with `@State(Scope.Benchmark)`:
 
   ```kotlin
   // MyBenchmark.kt
@@ -315,24 +315,91 @@ Note: Kotlin/WASM is an experimental compilation target for Kotlin. It may be dr
 
 ### Writing Benchmarks
 
-Now you can write your benchmarks.
+After setting up your project and configuring targets, you can start writing benchmarks:
 
-// A short introduction to writing benchmarks.
+1. **Create Benchmark Class**: Create a class in your source set where you'd like to add the benchmark. Annotate this class with `@State(Scope.Benchmark)`.
+
+   ```kotlin
+   @State(Scope.Benchmark)
+   class MyBenchmark {
+
+   }
+   ```
+
+2. **Set up Parameters and Variables**: Define variables needed for the benchmark.
+
+   ```kotlin
+   var size: Int = 10
+
+   private var list: MutableList<Int> = ArrayList()
+   ```
+
+3. **Initialize Resources**: Within the class, you can define any setup or teardown methods using `@Setup` and `@TearDown` annotations respectively. These methods will be executed before and after the entire benchmark run.
+
+   ```kotlin
+   @Setup
+   fun prepare() {
+       for (i in 0 until size) {
+           list.add(i)
+       }
+   }
+
+   @TearDown
+   fun cleanup() {
+       list.clear()
+   }
+   ```
+
+4. **Define Benchmark Method**: Next, create methods that you would like to be benchmarked within this class and annotate them with `@Benchmark`.
+
+   ```kotlin
+   @Benchmark
+   fun benchmarkMethod(): Int {
+       return list.sum()
+   }
+   ```
+
+Your final benchmark class will look something like this:
+
+```kotlin
+@State(Scope.Benchmark)
+class MyBenchmark {
+
+    var size: Int = 10
+
+    private var list: MutableList<Int> = ArrayList()
+
+    @Setup
+    fun prepare() {
+        for (i in 0 until size) {
+            list.add(i)
+        }
+    }
+
+    @Benchmark
+    fun benchmarkMethod(): Int {
+        return list.sum()
+    }
+
+    @TearDown
+    fun cleanup() {
+        list.clear()
+    }
+}
+```
 
 Note: Benchmark classes located in the common source set will be run in all platforms, while those located in a platform-specific source set will be run only in the corresponding platform.
 
-See to for a complete guide for writing benchmarks.
+See [writing benchmarks](docs/writing-benchmarks.md) for a complete guide for writing benchmarks.
 
 ### Running Benchmarks
 
-To run your benchmarks in all registered platforms run `benchmark` Gradle task in your project.
-To run in only in a specific platform run `Benchmark`, e.g., `jvmBenchmark`.
+To run your benchmarks in all registered platforms, run the `benchmark` Gradle task in your project.
+To run only on a specific platform, run `<target>Benchmark`, e.g., `jvmBenchmark`.
 
-Learn more about the tasks kotlinx-benchmark plugin creates in [this guide](docs/tasks-overview.md).
+For more details about the tasks created by the kotlinx-benchmark plugin, refer to [this guide](docs/tasks-overview.md).
 
 ### Benchmark Configuration Profiles
 
-The kotlinx-benchmark library provides ability to create multiple configuration profiles. The `main` configuration is already created by Toolkit.
+The kotlinx-benchmark library provides the ability to create multiple configuration profiles. The `main` configuration is already created by the Toolkit.
 Additional profiles can be created as needed in the `configurations` section of the `benchmark` block:
 
 ```kotlin
@@ -370,4 +437,4 @@ These examples showcase various use cases and offer practical insights into the
 
 ## Contributing
 
-We welcome contributions to kotlinx-benchmark! If you want to contribute, please refer to our Contribution Guidelines.
\ No newline at end of file
+We welcome contributions to kotlinx-benchmark! If you want to contribute, please refer to our [Contribution Guidelines](CONTRIBUTING.md).
\ No newline at end of file

From dc32541862dba622257dcf7b6aab05e0c4a94c89 Mon Sep 17 00:00:00 2001
From: wldeh <62161211+wldeh@users.noreply.github.com>
Date: Thu, 20 Jul 2023 17:17:10 -0700
Subject: [PATCH 27/30] Add writing-benchmarks.md

---
 docs/writing-benchmarks.md | 104 +++++++++++++++++++++++++++++++++++++
 1 file changed, 104 insertions(+)
 create mode 100644 docs/writing-benchmarks.md

diff --git a/docs/writing-benchmarks.md b/docs/writing-benchmarks.md
new file mode 100644
index 00000000..f9895f67
--- /dev/null
+++ b/docs/writing-benchmarks.md
@@ -0,0 +1,104 @@
+## Writing Benchmarks
+
+To get started, let's look at a simple multiplatform example:
+
+```kotlin
+@BenchmarkMode(Mode.Throughput)
+@OutputTimeUnit(TimeUnit.MILLISECONDS)
+@Warmup(iterations = 20, time = 1, timeUnit = TimeUnit.SECONDS)
+@Measurement(iterations = 20, time = 1, timeUnit = TimeUnit.SECONDS)
+@State(Scope.Benchmark)
+class ExampleBenchmark {
+
+    @Param("4", "10")
+    var size: Int = 0
+
+    private val list = ArrayList<Int>()
+
+    @Setup
+    fun prepare() {
+        for (i in 0 until size) {
+            list.add(i)
+        }
+    }
+
+    @Benchmark
+    fun benchmarkMethod(): Int {
+        return list.sum()
+    }
+
+    @TearDown
+    fun cleanup() {
+        list.clear()
+    }
+}
+```
+
+**Example Description**:
+Our example tests how fast we can add up numbers in an ArrayList. We try it with a list of 4 numbers and then with 10 numbers.
This helps us know how well our adding method works with different list sizes. + +### Explaining the Annotations + +#### @State + +The `@State` annotation, when set to `Scope.Benchmark`, is applied to a class to represent that this class is responsible for holding the state or data for your benchmark tests. This class instance is shared among all benchmark threads, creating a consistent environment across all warmups, iterations, and measured executions. The State annotation is mandatory for all targets, except for Kotlin/JVM. It helps in managing stateful operations in a multi-threaded context, providing an efficient way of handling state that's thread-safe and consistent across multiple runs. This ensures accurate and reliable benchmarking results, as the shared state remains the same throughout all the tests, preventing any discrepancies that could affect the outcomes. In the Kotlin/JVM target, apart from `Scope.Benchmark`, you also have access to `Scope.Thread` and `Scope.Group`. `Scope.Thread` ensures each thread has its own unique state instance, while `Scope.Group` allows state sharing within a benchmark thread group. However, for other target environments, only `Scope.Benchmark` is supported, limiting the scope options to a single instance that is shared among all threads. In our snippet, the ExampleBenchmark class uses the @State(Scope.Benchmark) annotation, indicating that the state in this class is shared across all benchmark threads. + +#### @Setup + +The `@Setup` annotation is used to mark a method that sets up the necessary preconditions for your benchmark test. It serves as a preparatory step where you set up the environment for the benchmark, performing tasks such as generating data, establishing database connections, or preparing any other resources your benchmark requires. In the Kotlin/JVM target, the `@Setup` annotation can operate on three levels - `Iteration`, `Trial`, and `Invocation`. 
In `Iteration` level, the setup method is run before each benchmark iteration, ensuring a consistent state across all iterations. On the other hand, `Trial` level execution runs the setup method once for the entire set of benchmark method iterations, suitable when state modifications are part of the benchmark itself. The `Invocation` level will run the setup method before each invocation of the benchmark, which allows for even finer-grained control over the setup process. Specify the level using `@Setup(Level.Trial)`, `@Setup(Level.Iteration)`, or `@Setup(Level.Invocation)`; if not defined, it defaults to `Level.Trial`. Specifying the Level is only possible when targeting Kotlin/JVM making `Level.Trial` the only option on all other targets. The key point to remember is that the `@Setup` method's execution time is not included in the final benchmark results - the timer starts only when the `@Benchmark` method begins. This makes `@Setup` an ideal place for initialization tasks that should not impact the timing results of your benchmark. By using the `@Setup` annotation, you ensure consistency across all executions of your benchmark, providing accurate and reliable results. In the provided example, the `@Setup` annotation is used to populate an ArrayList with integers from 0 up to a specified size. + +#### @TearDown + +The `@TearDown` annotation is used to denote a method that's executed after the benchmarking method(s). This method is typically responsible for cleaning up or deallocating any resources or conditions that were initialized in the `@Setup` method. For instance, if your setup method created temporary files or opened network connections, the method marked with `@TearDown` is where you would put the code to delete those files or close those connections. The `@TearDown` annotation helps you avoid performance bias and ensure the proper maintenance of resources and the preparation of a clean environment for the next run. 
In our example, the `cleanup` function annotated with `@TearDown` is used to clear our ArrayList. + +#### @Benchmark + +The `@Benchmark` annotation is used to specify the methods that you want to measure the performance of. Basically, it's the actual test you're running. It's important to note that the benchmark methods must always be public. The code you want to benchmark goes inside this method. In our example, the `benchmarkMethod` function is annotated with `@Benchmark`, which means the toolkit will measure the performance of the operation of summing all the integers in the list. + +#### @BenchmarkMode + +The `@BenchmarkMode` annotation sets the mode of operation for the benchmark. Applying the `@BenchmarkMode` annotation requires specifying a mode from the `Mode` enum, which includes several options. `Mode.Throughput` measures the raw throughput of your code in terms of the number of operations it can perform per unit of time, such as operations per second. `Mode.AverageTime` is used when you're more interested in the average time it takes to execute an operation. Without an explicit `@BenchmarkMode` annotation, the toolkit defaults to `Mode.Throughput`. In our example, `@BenchmarkMode(Mode.Throughput)` is used, meaning the benchmark focuses on the number of times the benchmark method can be executed per unit of time. + +#### @OutputTimeUnit + +The `@OutputTimeUnit` annotation dictates the time unit in which your results will be presented. This time unit can range from minutes to nanoseconds. If a piece of code executes within a few milliseconds, presenting the result in milliseconds or microseconds provides a more accurate and detailed measurement. Conversely, for operations with longer execution times, you might choose to display the output in microseconds, seconds, or even minutes. Essentially, the `@OutputTimeUnit` annotation is about enhancing the readability and interpretability of your benchmarks. 
If this annotation isn't specified, it defaults to using seconds as the time unit. In our example, the OutputTimeUnit is set to milliseconds. + +#### @Warmup + +The `@Warmup` annotation is used to specify a preliminary phase before the actual benchmarking takes place. During this warmup phase, the code in your `@Benchmark` method is executed several times, but these runs aren't included in the final benchmark results. The primary purpose of the warmup phase is to let the system "warm up" and reach its optimal performance state. In a typical scenario, when a Java application starts, the JVM (Java Virtual Machine) goes through a process called "JIT (Just-In-Time) compilation" where it learns about your code, optimizes it, and compiles it into native machine code for faster execution. The more the code is run, the more chances the JVM gets to optimize it, potentially making it run faster over time. The warmup phase is akin to giving the JVM a "practice run" to figure out the best optimizations for your code. This is particularly crucial for benchmarking because if you were to start measuring performance right from the first run, your results might be skewed by the JVM's initial learning and optimization process. In our example, the `@Warmup` annotation is used to allow five iterations, each lasting one second, of executing the benchmark method before the actual measurement starts. + +#### @Measurement + +The `@Measurement` annotation is used to control the properties of the actual benchmarking phase. It sets how many times the benchmark method is run (iterations) and how long each run should last. The results from these runs are recorded and reported as the final benchmark results. In our example, the `@Measurement` annotation specifies that the benchmark method will be run once for a duration of one second for the final performance measurement. 
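For reference, the annotations described above might be combined on a benchmark class like this (a simplified sketch assuming the `kotlinx.benchmark` runtime's common annotations and `BenchmarkTimeUnit` alias; the class name and list contents are illustrative):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(BenchmarkTimeUnit.MILLISECONDS)
@Warmup(iterations = 5, time = 1, timeUnit = BenchmarkTimeUnit.SECONDS)
@Measurement(iterations = 1, time = 1, timeUnit = BenchmarkTimeUnit.SECONDS)
class AnnotatedBenchmark {
    private val list = List(1_000) { it } // state shared across iterations

    @Benchmark
    fun sumBenchmark(): Int = list.sum() // measured with the settings above
}
```

The warmup and measurement settings mirror the example in the text: five one-second warmup iterations, then a single one-second measurement iteration.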
+ +#### @Fork + +The `@Fork` annotation, available only in the Kotlin/JVM target, is used to launch each benchmark in a standalone Java Virtual Machine (JVM) process. The JVM conducts various behind-the-scenes optimizations such as Just-In-Time compilation, class loading, and garbage collection. These can significantly impact the performance of our code. However, these influences might differ from one run to another, leading to inconsistent or misleading benchmark results if multiple benchmarks are executed within the same JVM process. By triggering the JVM to fork for each benchmark, these JVM-specific factors are eliminated from affecting the benchmark results, providing a clean and independent environment for each benchmark, enhancing the reliability and comparability of results. The value you assign to the `@Fork` annotation determines the number of separate JVM processes initiated for each benchmark. If `@Fork` is not specified, it defaults to [Defaults.MEASUREMENT_FORKS (`5`)](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/runner/Defaults.html#MEASUREMENT_FORKS). Repeating the benchmark across multiple JVMs and averaging the results gives a more accurate representation of typical performance and accounts for variability possibly caused by different JVM startups. The `@Fork(1)` annotation, for example, indicates that each benchmark test should run in one separate JVM process, thus ensuring an isolated and reliable testing environment for each test run. + +#### @Param + +The `@Param` annotation is used to pass different parameters to your benchmark method. It allows you to run the same benchmark method with different input values, so you can see how these variations affect performance. The values you provide for the `@Param` annotation are the different inputs you want to use in your benchmark test. The benchmark will run once for each provided value. 
In our example, `@Param` annotation is used with values '4' and '10', which means the benchmarkMethod will be executed twice, once with the `param` value as '4' and then with '10'. This could serve to help in understanding how the size of the input list impacts the time it takes to sum all the integers in the list. + +## Blackhole + +Modern compilers often remove computations that they deem unnecessary, which could serve to distort benchmark results. In essence, `Blackhole` maintains the integrity of benchmarks by preventing unwanted JVM optimizations. The Blackhole class is available on all targets excluding Kotlin/Wasm(experimental) + +#### How to Use Blackhole: + +Inject `Blackhole` into your benchmark method and use it to consume results: + +```kotlin +@Benchmark +fun longBlackholeBenchmark(bh: Blackhole) { + repeat(1000) { + bh.consume(text.length) + } +} +``` + +By consuming results, you signal to the compiler that these computations are significant and shouldn't be optimized away. + +For a deeper dive into `Blackhole` and its nuances, you can refer to: +- [Official Javadocs](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.23/org/openjdk/jmh/infra/Blackhole.html) +- [JMH](https://github.com/openjdk/jmh/blob/master/jmh-core/src/main/java/org/openjdk/jmh/infra/Blackhole.java#L157-L254) \ No newline at end of file From 5d82b254cb7f4cf3047fdf998cc34cc693c30493 Mon Sep 17 00:00:00 2001 From: Henok Woldesenbet <62161211+wldeh@users.noreply.github.com> Date: Fri, 1 Sep 2023 02:21:47 -0700 Subject: [PATCH 28/30] Improvements to writing-benchmarks.md doc (#148) --------- Co-authored-by: Abduqodiri Qurbonzoda --- README.md | 47 +++++++++--------- docs/writing-benchmarks.md | 99 ++++++++++++++++++++++++++++++-------- 2 files changed, 103 insertions(+), 43 deletions(-) diff --git a/README.md b/README.md index 4d0a79dd..40d84ae9 100644 --- a/README.md +++ b/README.md @@ -315,7 +315,8 @@ Note: Kotlin/WASM is an experimental compilation target for Kotlin. 
It may be dropped or changed at any time. ### Writing Benchmarks -After setting up your project and configuring targets, you can start writing benchmarks: +After setting up your project and configuring targets, you can start writing benchmarks. +As an example, let's write a simplified benchmark that tests how fast we can add up numbers in an ArrayList: 1. **Create Benchmark Class**: Create a class in your source set where you'd like to add the benchmark. Annotate this class with `@State(Scope.Benchmark)`. ```kotlin @State(Scope.Benchmark) class MyBenchmark { } ``` -2. **Set up Parameters and Variables**: Define variables needed for the benchmark. +2. **Set up Variables**: Define variables needed for the benchmark. ```kotlin - var param: Int = 10 - - private var list: MutableList<Int> = ArrayList() + private val size = 10 + private val list = ArrayList<Int>() ``` 3. **Initialize Resources**: Within the class, you can define any setup or teardown methods using `@Setup` and `@TearDown` annotations respectively. These methods will be executed before and after the entire benchmark run. 
@@ -361,30 +361,31 @@ After setting up your project and configuring targets, you can start writing ben Your final benchmark class will look something like this: - @State(Scope.Benchmark) - class MyBenchmark { - - var param: Int = 10 +```kotlin +@State(Scope.Benchmark) +class MyBenchmark { - private var list: MutableList<Int> = ArrayList() + private val size = 10 + private val list = ArrayList<Int>() - @Setup - fun prepare() { - for (i in 0 until size) { - list.add(i) - } + @Setup + fun prepare() { + for (i in 0 until size) { + list.add(i) } + } - @Benchmark - fun benchmarkMethod(): Int { - return list.sum() - } + @Benchmark + fun benchmarkMethod(): Int { + return list.sum() + } - @TearDown - fun cleanup() { - list.clear() - } + @TearDown + fun cleanup() { + list.clear() } +} +``` Note: Benchmark classes located in the common source set will be run in all platforms, while those located in a platform-specific source set will be run only in the corresponding platform. diff --git a/docs/writing-benchmarks.md b/docs/writing-benchmarks.md index f9895f67..dd25efef 100644 --- a/docs/writing-benchmarks.md +++ b/docs/writing-benchmarks.md @@ -36,69 +36,128 @@ class ExampleBenchmark { ``` **Example Description**: -Our exmaple tests how fast we can add up numbers in a ArrayList. We try it with a list of 4 numbers and then with 10 numbers. This helps us know how well our adding method works with different list sizes. +Our example tests the speed of summing numbers in an ArrayList. We try it with a list of 4 numbers and then with a list of 10 numbers. +This helps us determine the efficiency of our summing method with different list sizes. ### Explaining the Annotations #### @State -The `@State` annotation, when set to `Scope.Benchmark`, is applied to a class to represent that this class is responsible for holding the state or data for your benchmark tests. 
This class instance is shared among all benchmark threads, creating a consistent environment across all warmups, iterations, and measured executions. The State annotation is mandatory for all targets, except for Kotlin/JVM. It helps in managing stateful operations in a multi-threaded context, providing an efficient way of handling state that's thread-safe and consistent across multiple runs. This ensures accurate and reliable benchmarking results, as the shared state remains the same throughout all the tests, preventing any discrepancies that could affect the outcomes. In the Kotlin/JVM target, apart from `Scope.Benchmark`, you also have access to `Scope.Thread` and `Scope.Group`. `Scope.Thread` ensures each thread has its own unique state instance, while `Scope.Group` allows state sharing within a benchmark thread group. However, for other target environments, only `Scope.Benchmark` is supported, limiting the scope options to a single instance that is shared among all threads. In our snippet, the ExampleBenchmark class uses the @State(Scope.Benchmark) annotation, indicating that the state in this class is shared across all benchmark threads. +The `@State` annotation is used to mark benchmark classes. +In the Kotlin/JVM target, however, benchmark classes are not required to be annotated with `@State`. +In the Kotlin/JVM target, you can specify to what extent the state object is shared among the worker threads, e.g., `@State(Scope.Group)`. +Refer to [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Scope.html) +for details about available scopes. Multi-threaded execution of a benchmark method is not supported in other Kotlin targets, +thus only `Scope.Benchmark` is available. +In our snippet, the ExampleBenchmark class is marked with `@State(Scope.Benchmark)`, +indicating that the performance of benchmark methods in this class should be measured. 
#### @Setup -The `@Setup` annotation is used to mark a method that sets up the necessary preconditions for your benchmark test. It serves as a preparatory step where you set up the environment for the benchmark, performing tasks such as generating data, establishing database connections, or preparing any other resources your benchmark requires. In the Kotlin/JVM target, the `@Setup` annotation can operate on three levels - `Iteration`, `Trial`, and `Invocation`. In `Iteration` level, the setup method is run before each benchmark iteration, ensuring a consistent state across all iterations. On the other hand, `Trial` level execution runs the setup method once for the entire set of benchmark method iterations, suitable when state modifications are part of the benchmark itself. The `Invocation` level will run the setup method before each invocation of the benchmark, which allows for even finer-grained control over the setup process. Specify the level using `@Setup(Level.Trial)`, `@Setup(Level.Iteration)`, or `@Setup(Level.Invocation)`; if not defined, it defaults to `Level.Trial`. Specifying the Level is only possible when targeting Kotlin/JVM making `Level.Trial` the only option on all other targets. The key point to remember is that the `@Setup` method's execution time is not included in the final benchmark results - the timer starts only when the `@Benchmark` method begins. This makes `@Setup` an ideal place for initialization tasks that should not impact the timing results of your benchmark. By using the `@Setup` annotation, you ensure consistency across all executions of your benchmark, providing accurate and reliable results. In the provided example, the `@Setup` annotation is used to populate an ArrayList with integers from 0 up to a specified size. +The `@Setup` annotation is used to mark a method that sets up the necessary preconditions for your benchmark test. +It serves as a preparatory step where you set up the environment for the benchmark. 
+In the Kotlin/JVM target, you can specify when the setup method should be executed, e.g., `@Setup(Level.Iteration)`. +Refer to [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html) +for details about available levels. In other targets it always operates at the `Trial` level, that is, the setup method is +executed once before the entire set of benchmark method iterations. The key point to remember is that the `@Setup` +method's execution time is not included in the final benchmark results - the timer starts only when the `@Benchmark` +method begins. This makes `@Setup` an ideal place for initialization tasks that should not impact the timing results of your benchmark. +In the provided example, the `@Setup` annotation is used to populate an ArrayList with integers from 0 up to a specified size. #### @TearDown -The `@TearDown` annotation is used to denote a method that's executed after the benchmarking method(s). This method is typically responsible for cleaning up or deallocating any resources or conditions that were initialized in the `@Setup` method. For instance, if your setup method created temporary files or opened network connections, the method marked with `@TearDown` is where you would put the code to delete those files or close those connections. The `@TearDown` annotation helps you avoid performance bias and ensure the proper maintenance of resources and the preparation of a clean environment for the next run. In our example, the `cleanup` function annotated with `@TearDown` is used to clear our ArrayList. +The `@TearDown` annotation is used to denote a method that's executed after the benchmarking method(s). +This method is typically responsible for cleaning up or deallocating any resources or conditions that were initialized in the `@Setup` method. +In the Kotlin/JVM target, you can specify when the tear down method should be executed, e.g., `@TearDown(Level.Iteration)`. 
+Refer to [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/Level.html) +for details about available levels. In other targets it always operates at the `Trial` level, that is, the tear down method +is executed once after the entire set of benchmark method iterations. The `@TearDown` annotation helps you avoid +performance bias and ensure the proper maintenance of resources and the preparation of a clean environment for the next run. +As with the `@Setup` method, the `@TearDown` method's execution time is not included in the final benchmark results. +In our example, the `cleanup` function annotated with `@TearDown` is used to clear our ArrayList. #### @Benchmark -The `@Benchmark` annotation is used to specify the methods that you want to measure the performance of. Basically, it's the actual test you're running. It's important to note that the benchmark methods must always be public. The code you want to benchmark goes inside this method. In our example, the `benchmarkMethod` function is annotated with `@Benchmark`, which means the toolkit will measure the performance of the operation of summing all the integers in the list. +The `@Benchmark` annotation is used to specify the methods that you want to measure the performance of. +Basically, it's the actual test you're running. It's important to note that the benchmark methods must always be public. +The code you want to benchmark goes inside this method. +In our example, the `benchmarkMethod` function is annotated with `@Benchmark`, +which means the toolkit will measure the performance of the operation of summing all the integers in the list. #### @BenchmarkMode -The `@BenchmarkMode` annotation sets the mode of operation for the benchmark. Applying the `@BenchmarkMode` annotation requires specifying a mode from the `Mode` enum, which includes several options. +The `@BenchmarkMode` annotation sets the mode of operation for the benchmark. +Applying the `@BenchmarkMode` annotation requires specifying a mode from the `Mode` enum, which includes several options.
`Mode.Throughput` measures the raw throughput of your code in terms of the number of operations it can perform per unit of time, such as operations per second. `Mode.AverageTime` is used when you're more interested in the average time it takes to execute an operation. Without an explicit `@BenchmarkMode` annotation, the toolkit defaults to `Mode.Throughput`. In our example, `@BenchmarkMode(Mode.Throughput)` is used, meaning the benchmark focuses on the number of times the benchmark method can be executed per unit of time. +The `@BenchmarkMode` annotation sets the mode of operation for the benchmark. +Applying the `@BenchmarkMode` annotation requires specifying a mode from the `Mode` enum, which includes several options. +`Mode.Throughput` measures the raw throughput of your code in terms of the number of operations it can perform per unit +of time, such as operations per second. `Mode.AverageTime` is used when you're more interested in the average time it +takes to execute an operation. Without an explicit `@BenchmarkMode` annotation, the toolkit defaults to `Mode.Throughput`. +In our example, `@BenchmarkMode(Mode.Throughput)` is used, meaning the benchmark focuses on the number of times the +benchmark method can be executed per unit of time. #### @OutputTimeUnit -The `@OutputTimeUnit` annotation dictates the time unit in which your results will be presented. This time unit can range from minutes to nanoseconds. If a piece of code executes within a few milliseconds, presenting the result in milliseconds or microseconds provides a more accurate and detailed measurement. Conversely, for operations with longer execution times, you might choose to display the output in microseconds, seconds, or even minutes. Essentially, the `@OutputTimeUnit` annotation is about enhancing the readability and interpretability of your benchmarks. If this annotation isn't specified, it defaults to using seconds as the time unit. In our example, the OutputTimeUnit is set to milliseconds. 
+The `@OutputTimeUnit` annotation specifies the time unit in which your results will be presented. +This time unit can range from minutes to nanoseconds. If a piece of code executes within a few milliseconds, +presenting the result in milliseconds or microseconds provides a more accurate and detailed measurement. +Conversely, for operations with longer execution times, you might choose to display the output in microseconds, seconds, or even minutes. +Essentially, the `@OutputTimeUnit` annotation is about enhancing the readability and interpretability of benchmark results. +If this annotation isn't specified, it defaults to using seconds as the time unit. +In our example, the OutputTimeUnit is set to milliseconds. #### @Warmup -The `@Warmup` annotation is used to specify a preliminary phase before the actual benchmarking takes place. During this warmup phase, the code in your `@Benchmark` method is executed several times, but these runs aren't included in the final benchmark results. The primary purpose of the warmup phase is to let the system "warm up" and reach its optimal performance state. In a typical scenario, when a Java application starts, the JVM (Java Virtual Machine) goes through a process called "JIT (Just-In-Time) compilation" where it learns about your code, optimizes it, and compiles it into native machine code for faster execution. The more the code is run, the more chances the JVM gets to optimize it, potentially making it run faster over time. The warmup phase is akin to giving the JVM a "practice run" to figure out the best optimizations for your code. This is particularly crucial for benchmarking because if you were to start measuring performance right from the first run, your results might be skewed by the JVM's initial learning and optimization process. In our example, the `@Warmup` annotation is used to allow five iterations, each lasting one second, of executing the benchmark method before the actual measurement starts. 
+The `@Warmup` annotation is used to specify a preliminary phase before the actual benchmarking takes place. +During this warmup phase, the code in your `@Benchmark` method is executed several times, but these runs aren't included +in the final benchmark results. The primary purpose of the warmup phase is to let the system "warm up" and reach its +optimal performance state so that the results of measurement iterations are more stable. +In our example, the `@Warmup` annotation is used to allow 20 iterations, each lasting one second, +of executing the benchmark method before the actual measurement starts. + +#### @Measurement + +The `@Measurement` annotation is used to control the properties of the actual benchmarking phase. +It sets how many iterations the benchmark method is run for and how long each iteration should last. +The results from these runs are recorded and reported as the final benchmark results. +In our example, the `@Measurement` annotation specifies that the benchmark method will be run for 20 iterations, +each for a duration of one second, for the final performance measurement. -#### @Fork +#### @Param -The `@Fork` annotation, available only in the Kotlin/JVM target, is utilized to command to launch each benchmark in a standalone Java Virtual Machine (JVM) process. The JVM conducts various behind-the-scenes optimizations such as Just-In-Time compilation, class loading, and garbage collection. These can significantly impact the performance of our code. 
However, these influences might differ from one run to another, leading to inconsistent or misleading benchmark results if multiple benchmarks are executed within the same JVM process. By triggering the JVM to fork for each benchmark, these JVM-specific factors are eliminated from affecting the benchmark results, providing a clean and independent environment for each benchmark, enhancing the reliability and comparability of results. The value you assign to the `@Fork` annotation determines the number of separate JVM processes initiated for each benchmark. If `@Fork` is not specified, it defaults to [Defaults.MEASUREMENT_FORKS (`5`)](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.21/org/openjdk/jmh/runner/Defaults.html#MEASUREMENT_FORKS). Repeating the benchmark across multiple JVMs and averaging the results gives a more accurate representation of typical performance and accommodates for variability possibly caused by different JVM startups. The `@Fork(1)` annotation for exmaple, indicates that each benchmark test should run in one separate JVM process, thus ensuring an isolated and reliable testing environment for each test run. +The `@Param` annotation is used to pass different parameters to your benchmark method. +It allows you to run the same benchmark method with different input values, so you can see how these variations affect +performance. The values you provide for the `@Param` annotation are the different inputs you want to use in your +benchmark test. The benchmark will run once for each provided value. +The property marked with this annotation must be public and mutable (`var`). +In our example, the `@Param` annotation is used with values '4' and '10', which means the benchmarkMethod will be executed +twice, once with the `param` value as '4' and then with '10'. This helps to understand how the input list's size affects the time taken to sum its integers. 
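Put together, a parameterized benchmark might be declared like this (a simplified sketch assuming the `kotlinx.benchmark` annotations; the class name is illustrative):

```kotlin
import kotlinx.benchmark.*

@State(Scope.Benchmark)
class ParamExample {
    @Param("4", "10") // the benchmark runs once per value
    var param: Int = 0 // must be public and mutable

    private val list = ArrayList<Int>()

    @Setup
    fun prepare() {
        list.clear()
        for (i in 0 until param) list.add(i) // list size depends on the parameter
    }

    @Benchmark
    fun benchmarkMethod(): Int = list.sum()
}
```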
-#### @Param +#### Other JMH annotations -The `@Param` annotation is used to pass different parameters to your benchmark method. It allows you to run the same benchmark method with different input values, so you can see how these variations affect performance. The values you provide for the `@Param` annotation are the different inputs you want to use in your benchmark test. The benchmark will run once for each provided value. In our example, `@Param` annotation is used with values '4' and '10', which means the benchmarkMethod will be executed twice, once with the `param` value as '4' and then with '10'. This could serve to help in understanding how the size of the input list impacts the time it takes to sum all the integers in the list. +In a Kotlin/JVM target, you can use annotations provided by JMH to further tune your benchmarks execution behavior. +Refer to [JMH documentation](https://javadoc.io/doc/org.openjdk.jmh/jmh-core/latest/org/openjdk/jmh/annotations/package-summary.html) for available annotations. ## Blackhole -Modern compilers often remove computations that they deem unnecessary, which could serve to distort benchmark results. In essence, `Blackhole` maintains the integrity of benchmarks by preventing unwanted JVM optimizations. The Blackhole class is available on all targets excluding Kotlin/Wasm(experimental) +Modern compilers often eliminate computations they find unnecessary, which can distort benchmark results. +In essence, `Blackhole` maintains the integrity of benchmarks by preventing unwanted optimizations such as dead-code +elimination by the compiler or the runtime virtual machine. A `Blackhole` is used when the benchmark produces several values. +If the benchmark produces a single value, just return it. It will be implicitly consumed by a `Blackhole`. 
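As noted above, a benchmark that produces a single value does not need a `Blackhole` at all; returning the value is enough (a sketch assuming a hypothetical `myList` state property):

```kotlin
@Benchmark
fun sumBenchmark(): Int = myList.sum() // the returned value is implicitly consumed
```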
#### How to Use Blackhole: -Inject `Blackhole` into your benchmark method and use it to consume results: +Inject `Blackhole` into your benchmark method and use it to consume results of your computations: ```kotlin @Benchmark -fun longBlackholeBenchmark(bh: Blackhole) { - repeat(1000) { - bh.consume(text.length) +fun iterateBenchmark(bh: Blackhole) { + for (e in myList) { + bh.consume(e) } } ``` By consuming results, you signal to the compiler that these computations are significant and shouldn't be optimized away. -For a deeper dive into `Blackhole` and its nuances, you can refer to: +For a deeper dive into `Blackhole` and its nuances in JVM, you can refer to: - [Official Javadocs](https://javadoc.io/static/org.openjdk.jmh/jmh-core/1.23/org/openjdk/jmh/infra/Blackhole.html) -- [JMH](https://github.com/openjdk/jmh/blob/master/jmh-core/src/main/java/org/openjdk/jmh/infra/Blackhole.java#L157-L254) \ No newline at end of file +- [JMH](https://github.com/openjdk/jmh/blob/1.37/jmh-core/src/main/java/org/openjdk/jmh/infra/Blackhole.java#L157-L254) \ No newline at end of file From a2116f1f1632b0b1a3e390cb58e0e576c4f60659 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Mon, 11 Sep 2023 14:28:36 -0700 Subject: [PATCH 29/30] add jvmProfiler to config options doc --- docs/configuration-options.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/configuration-options.md b/docs/configuration-options.md index e3eb23e3..dd598057 100644 --- a/docs/configuration-options.md +++ b/docs/configuration-options.md @@ -63,6 +63,7 @@ The options listed in the following sections allow you to tailor the benchmark e | Option | Description | Possible Values | Default Value | |---------------------------------------------|------------------------------------------------------------|--------------------------------|----------------| | `advanced("jvmForks", value)` | Specifies the number of times the harness should fork. 
| Integer, "definedByJmh" | `1` | +| `advanced("jvmProfiler", value)` | Sets the profiler to be used during benchmarking. | "gc", "stack", "cl", "comp" | `null` (No profiler)| **Notes on "jvmForks":** - **0** - "no fork", i.e., no subprocesses are forked to run benchmarks. From e7f6c40002f9fcd4793512e30ee703970836dea3 Mon Sep 17 00:00:00 2001 From: wldeh <62161211+wldeh@users.noreply.github.com> Date: Mon, 11 Sep 2023 19:45:15 -0700 Subject: [PATCH 30/30] add hyperlinks to jvmProfiler options in doc --- docs/configuration-options.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/configuration-options.md b/docs/configuration-options.md index dd598057..fee253b4 100644 --- a/docs/configuration-options.md +++ b/docs/configuration-options.md @@ -63,7 +63,7 @@ The options listed in the following sections allow you to tailor the benchmark e | Option | Description | Possible Values | Default Value | |---------------------------------------------|------------------------------------------------------------|--------------------------------|----------------| | `advanced("jvmForks", value)` | Specifies the number of times the harness should fork. | Integer, "definedByJmh" | `1` | -| `advanced("jvmProfiler", value)` | Sets the profiler to be used during benchmarking. | "gc", "stack", "cl", "comp" | `null` (No profiler)| +| `advanced("jvmProfiler", value)` | Sets the profiler to be used during benchmarking. 
| "[gc](https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_35_Profilers.java#L170-L212)", "[stack](https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_35_Profilers.java#L166-L168)", "[cl](https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_35_Profilers.java#L288-L304)", "[comp](https://github.com/openjdk/jmh/blob/master/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_35_Profilers.java#L306-L318)" | No profiler | **Notes on "jvmForks":** - **0** - "no fork", i.e., no subprocesses are forked to run benchmarks.
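For illustration, these advanced JVM options are set inside the `benchmark` block of the build script (a sketch assuming the Gradle Kotlin DSL and a configuration named `main`):

```kotlin
// build.gradle.kts
benchmark {
    configurations {
        named("main") {
            advanced("jvmForks", 2)       // fork two JVM subprocesses per benchmark
            advanced("jvmProfiler", "gc") // attach the JMH GC profiler
        }
    }
}
```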