
[SMP] Add more _nodist uds experiments #20036

Merged 1 commit into main on Oct 11, 2023
Conversation

@blt (Contributor) commented Oct 10, 2023

What does this PR do?

This commit extracts experiments from #19882 and #19990, expanding the throughput parameter across the range of concern. Note that at the lower end of the throughput range the full set of contexts will not be explored, meaning that throughput per second is the dominant metric here. Contexts per second will need to be derived from captured target telemetry.

Motivation

Intended to back ongoing experiments with the stringInterner. See this document.

REF SMPTNG-12

@blt blt added the changelog/no-changelog, qa/skip-qa [deprecated], and team/single-machine-performance (Single Machine Performance) labels on Oct 10, 2023
@blt blt added this to the 7.50.0 milestone Oct 10, 2023
@blt blt requested a review from a team as a code owner October 10, 2023 18:26
@pr-commenter

pr-commenter bot commented Oct 10, 2023

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 615c3c22-21b8-4e86-a483-fdbb6ff41927
Baseline: c3b9631
Comparison: d183364
Total datadog-agent CPUs: 7

Explanation

A regression test is an integrated performance test for datadog-agent in a repeatable rig, with varying configurations of datadog-agent. What follows is a statistical summary of a brief datadog-agent run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether datadog-agent performance is changed, and to what degree, by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide that a change in performance is a "regression" -- a change worth investigating further -- only if both of the following criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.
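The two criteria above can be sketched as a small decision function. This is a hypothetical illustration of the rule as described, not the actual SMP tooling:

```python
def is_regression(delta_mean_pct: float, ci_low: float, ci_high: float,
                  threshold_pct: float = 5.0) -> bool:
    """Flag a change as a regression when it is both large and
    statistically significant at the stated confidence level.

    delta_mean_pct: estimated "Δ mean %" for the experiment
    ci_low, ci_high: bounds of the "Δ mean % CI" confidence interval
    """
    # Criterion 1: the estimated change is large enough to matter,
    # in either direction (|Δ mean %| >= 5.00%).
    large_enough = abs(delta_mean_pct) >= threshold_pct
    # Criterion 2: zero lies outside the confidence interval, i.e.
    # the difference is statistically significant.
    significant = not (ci_low <= 0.0 <= ci_high)
    return large_enough and significant

# Rows from the table below, as examples:
is_regression(-7.41, -7.88, -6.95)  # large and significant -> True
is_regression(+0.44, +0.33, +0.55)  # significant but tiny  -> False
is_regression(+0.51, -1.09, +2.11)  # CI includes zero      -> False
```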

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
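The erratic check above can likewise be sketched. Assuming the coefficient of variation is the usual stddev-over-mean of the optimization goal across repeated runs (an assumption; the report does not spell this out), the check looks like:

```python
import statistics

def is_erratic(samples: list[float], cv_threshold: float = 0.1) -> bool:
    """An experiment is erratic if its coefficient of variation
    (sample standard deviation divided by the mean) exceeds 0.1."""
    mean = statistics.fmean(samples)
    cv = statistics.stdev(samples) / mean
    return cv > cv_threshold

# Hypothetical throughput samples from repeated runs:
is_erratic([100.0, 101.0, 99.5, 100.5])  # stable -> False
is_erratic([100.0, 80.0, 125.0, 95.0])   # noisy  -> True
```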

Changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%:

| experiment | goal | Δ mean % | confidence |
| --- | --- | --- | --- |
| uds_dogstatsd_to_api_nodist_nomulti_200MiB | ingress throughput | -7.41 | 100.00% |

Fine details of change detection per experiment:

| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| file_tree | egress throughput | +6.29 | [+3.94, +8.64] | 100.00% |
| otel_to_otel_logs | ingress throughput | +0.51 | [-1.09, +2.11] | 40.21% |
| uds_dogstatsd_to_api_nodist_200MiB | ingress throughput | +0.44 | [+0.33, +0.55] | 100.00% |
| trace_agent_json | ingress throughput | +0.03 | [-0.09, +0.15] | 33.48% |
| uds_dogstatsd_to_api_nodist_16MiB | ingress throughput | +0.01 | [-0.12, +0.13] | 6.43% |
| tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.06, +0.07] | 9.69% |
| uds_dogstatsd_to_api_nodist_nomulti_100MiB | ingress throughput | +0.00 | [-0.13, +0.13] | 0.73% |
| uds_dogstatsd_to_api_nodist_64MiB | ingress throughput | +0.00 | [-0.13, +0.13] | 0.29% |
| uds_dogstatsd_to_api_nodist_100MiB | ingress throughput | +0.00 | [-0.13, +0.13] | 0.18% |
| file_to_blackhole | egress throughput | +0.00 | [-1.38, +1.38] | 0.00% |
| uds_dogstatsd_to_api_nodist_1MiB | ingress throughput | +0.00 | [-0.00, +0.00] | 0.00% |
| uds_dogstatsd_to_api_nodist_32MiB | ingress throughput | -0.00 | [-0.13, +0.13] | 3.52% |
| trace_agent_msgpack | ingress throughput | -0.01 | [-0.11, +0.08] | 18.64% |
| tcp_syslog_to_blackhole | ingress throughput | -0.14 | [-0.28, +0.00] | 89.87% |
| uds_dogstatsd_to_api | ingress throughput | -0.94 | [-3.19, +1.31] | 50.73% |
| uds_dogstatsd_to_api_nodist_nomulti_200MiB | ingress throughput | -7.41 | [-7.88, -6.95] | 100.00% |

This commit extracts experiments from #19882 and #19990, expanding the
throughput parameter across the range of concern. Note that at the lower end of
the throughput range the full set of contexts will not be explored, meaning that
throughput per second is the dominant metric here. Contexts per second will
need to be derived from captured target telemetry.

Intended to back ongoing experiments with the stringInterner.

REF SMPTNG-12

Signed-off-by: Brian L. Troutwine <[email protected]>
@blt blt force-pushed the smp_extract_interner_experiments branch from 48da4a1 to d183364 Compare October 10, 2023 20:18
@blt blt merged commit 0ee3251 into main Oct 11, 2023
@blt blt deleted the smp_extract_interner_experiments branch October 11, 2023 02:00