Introduce runner group definition (via --runner-group-member-id/--runner-group-total-members or env vars) (#397)

* WIP on benchmark runner groups (splitting tests among workers)
* Introduce runner group definition (via --runner-group-member-id/--runner-group-total-members or env vars)
* Addressed org change from RedisLabsModules to redis-performance
docs/Readme.md (+2 −2 lines)
````diff
@@ -3,7 +3,7 @@
 
 The automated benchmark definitions provide a framework for evaluating and comparing feature branches and catching regressions prior to letting them into the master branch.
 
-To be able to run local benchmarks you need `redisbench_admin>=0.1.64` [[tool repo for full details](https://github.com/RedisLabsModules/redisbench-admin)] and the benchmark tool specified in each configuration file. You can install redisbench-admin via PyPI like any other package.
+To be able to run local benchmarks you need `redisbench_admin>=0.1.64` [[tool repo for full details](https://github.com/redis-performance/redisbench-admin)] and the benchmark tool specified in each configuration file. You can install redisbench-admin via PyPI like any other package.
 ```
 pip3 install redisbench_admin>=0.1.64
 ```
````
```diff
@@ -21,7 +21,7 @@ A benchmark definition will then consist of:
 
 - mandatory client configuration (`clientconfig`) specifying the parameters to pass to the benchmark tool. The properties allowed here are: `tool`, `min-tool-version`, `tool_source`, `parameters`. If you don't have the required tools and the `tool_source` property is specified then the benchmark client will be downloaded once to a local path `./binaries/<tool>`.
 
-- optional ci remote definition (`remote`), with the proper terraform deployment configuration. The properties allowed here are `type` and `setup`. Both properties are used to find the proper benchmark specification folder within [RedisLabsModules/testing-infrastructure](https://github.com/RedisLabsModules/testing-infrastructure). As an example, if you specify `- type: oss-standalone` and `- setup: redistimeseries-m5`, the terraform setup used will be the one described at [`testing-infrastructure/tree/terraform/oss-standalone-redistimeseries-m5`](https://github.com/RedisLabsModules/testing-infrastructure/tree/master/terraform/oss-standalone-redistimeseries-m5)
+- optional ci remote definition (`remote`), with the proper terraform deployment configuration. The properties allowed here are `type` and `setup`. Both properties are used to find the proper benchmark specification folder within [redis-performance/testing-infrastructure](https://github.com/redis-performance/testing-infrastructure). As an example, if you specify `- type: oss-standalone` and `- setup: redistimeseries-m5`, the terraform setup used will be the one described at [`testing-infrastructure/tree/terraform/oss-standalone-redistimeseries-m5`](https://github.com/redis-performance/testing-infrastructure/tree/master/terraform/oss-standalone-redistimeseries-m5)
 
 - optional KPIs definition (`kpis`), specifying the target upper or lower bounds for each relevant performance metric. If specified, the KPI definitions constrain whether tests pass or fail.
```
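The three sections described above can be sketched as a single spec file. This is an illustration only: the section and property names (`clientconfig`, `tool`, `min-tool-version`, `tool_source`, `parameters`, `remote`, `type`, `setup`, `kpis`) come from the text, but the concrete values and exact nesting are assumptions — see the redisbench-admin repo for the authoritative schema.

```yaml
# Illustrative sketch only -- values and nesting are assumptions, not a real spec file.
clientconfig:
  tool: redis-benchmark          # assumed tool name
  min-tool-version: "6.2.0"      # assumed version constraint
  parameters: "-n 100000"        # assumed benchmark parameters
remote:
  - type: oss-standalone         # from the example in the text above
  - setup: redistimeseries-m5    # resolves to terraform/oss-standalone-redistimeseries-m5
kpis:
  ops.sec: ">= 50000"            # assumed metric/bound syntax
```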
Second changed Readme in the same PR (file header not shown in the diff view):

```diff
 Redis benchmark exporter can help you export performance results from several input formats (CSV, JSON) and
 push them to data sinks in a time-series format.
 
 Ultimately it provides a framework for evaluating and comparing feature branches and catching regressions prior to letting them into the master branch,
-as shown on the below sample dashboard produced from exported data via [redisbench-admin export](https://github.com/RedisLabsModules/redisbench-admin).
+as shown on the below sample dashboard produced from exported data via [redisbench-admin export](https://github.com/redis-performance/redisbench-admin).
 
 [sample dashboard image]
 
 Current supported benchmark tools to export data from:
```
The new command-line options added to the benchmark runner's argument parser:

```diff
     help="specify a test regex pattern to use on the tests directory. by default uses '.*'. If --test is defined this option has no effect.",
 )
+parser.add_argument(
+    "--runner-group-member-id",
+    type=str,
+    default=BENCHMARK_RUNNER_GROUP_M_ID,
+    help="Split test files evenly among a runner group. This is the id of the runner. Non-zero remainder of the division of tests will be attributed to the last member.",
+)
+parser.add_argument(
+    "--runner-group-total-members",
+    type=str,
+    default=BENCHMARK_RUNNER_GROUP_TOTAL,
+    help="Split test files evenly among a runner group. This is the total number of elements of the runner group.",
+)
```

When a group is detected, the runner logs it (string truncated in the diff view):

```python
"Detected a benchmark runner group. Splitting tests evenly. Non-zero remainder will be attributed to the last member. Member ID: {}. Total members: {}. Benchmarks per runner {}.".format(
```
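The help texts and the log message above describe the splitting rule: test files are divided evenly among the group, and any non-zero remainder goes to the last member. A minimal sketch of that rule (the function name `split_tests` and the 0-based member ids are assumptions for illustration, not the PR's actual code):

```python
def split_tests(test_files, member_id, total_members):
    """Return the slice of test_files assigned to runner `member_id`.

    Member ids are 0-based here; the PR's actual indexing convention may differ.
    """
    per_runner = len(test_files) // total_members
    start = member_id * per_runner
    if member_id == total_members - 1:
        # the last member also absorbs the non-zero remainder
        return test_files[start:]
    return test_files[start:start + per_runner]
```

With 5 test files and 2 runners, member 0 would run 2 of them and member 1 the remaining 3.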