Uses forked pkg/util/interner for dogstatsd workload #19990
Conversation
Force-pushed from e216a98 to a9d5266
Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 573b1dcc-4c51-4a14-ab1c-3f79bcea7dc6

Explanation

A regression test is an integrated performance test for … Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval. We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true: …

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
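To make the decision rule above concrete, here is a minimal sketch in Go of how an experiment might be flagged as "interesting", assuming per-experiment summaries carrying a Δ mean %, its 90% confidence interval, and a coefficient of variation; the type and field names are illustrative, not the Regression Detector's actual output format.

```go
package main

import "fmt"

// experiment summarizes one baseline-vs-comparison result. The field names
// here are assumptions for illustration only.
type experiment struct {
	name         string
	deltaMeanPct float64 // "Δ mean %": percentage change relative to baseline
	ciLowPct     float64 // lower bound of the 90.00% confidence interval
	ciHighPct    float64 // upper bound of the 90.00% confidence interval
	coeffVar     float64 // coefficient of variation of the optimization goal
}

// interesting applies the criteria described above: a change is reported if it
// is statistically significant at 90% confidence (the CI excludes zero) and
// larger than ±5.00%, or if the experiment is erratic (CV > 0.1).
func interesting(e experiment) bool {
	significant := e.ciLowPct > 0 || e.ciHighPct < 0
	large := e.deltaMeanPct >= 5.0 || e.deltaMeanPct <= -5.0
	erratic := e.coeffVar > 0.1
	return (significant && large) || erratic
}

func main() {
	e := experiment{name: "uds_dogstatsd_to_api", deltaMeanPct: 1.2, ciLowPct: -0.4, ciHighPct: 2.8, coeffVar: 0.03}
	fmt.Printf("%s interesting: %v\n", e.name, interesting(e)) // small, not significant, not erratic -> false
}
```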
This commit extracts experiments from #19882 and #19990, expanding the throughput parameter across the range of concern. Note that at the lower end of the throughput range the full contexts will not be explored, meaning that throughput per second is the dominant metric here. Contexts per second we will need to derive from captured target telemetry. Intended to back ongoing experiments with the stringInterner. REF SMPTNG-12 Signed-off-by: Brian L. Troutwine <[email protected]>
I've introduced #20036 to give us a broader range of experimental throughput. This analysis will concern itself with run-id
Compared to the raising of the default size, this PR sees a similar reduction in lifetime allocations and a curious rise in heap live size in only one experiment; otherwise the drop in … I would like to see this run repeated with #20036 in its history. I see the results here as positive, an improvement over …
Force-pushed from a9d5266 to 3185ebc
This analysis will concern itself with run-id
A consistent result. The low end is not pessimized and the high end is improved. Looking at the 16MiB case, capture data does bear out the higher memory consumption. Throughput is improved; the profile suggests the bulk of the greater heap is in the UDS listener and then partially in the string interner. I don't have a hypothesis for that result. Compared to the capture results for #19882, overall memory consumption, throughput gains, and CPU reduction here appear to be on par with raising the default size of the interner.
This commit updates the environment flags on our experiments to ensure that profiling runs are fine grained through the whole of the run. Updated with reference to the ongoing investigation in #19990 REF SMPTNG-12 Signed-off-by: Brian L. Troutwine <[email protected]>
* Ensure profiling is fine-grained in the Regression Detector

This commit updates the environment flags on our experiments to ensure that profiling runs are fine grained through the whole of the run. Updated with reference to the ongoing investigation in #19990

REF SMPTNG-12

Signed-off-by: Brian L. Troutwine <[email protected]>

* CPUDURATION -> CPU_DURATION

---------

Signed-off-by: Brian L. Troutwine <[email protected]>
Co-authored-by: George Hahn <[email protected]>
Force-pushed from 3185ebc to 0e4392d
@blt hijacking the thread a bit, sorry, but for a related question. I was thinking about the … AFAIR, both implementations rely on the GC to use the equivalent of weak pointers/references, where the GC could possibly collect them and that's OK. Wouldn't it mean that the moment the GC runs has an impact on the …
It's an interesting question and apropos I think.
It would have an impact, yeah.
I'd be vaguely against that. Which is to say, I think the profiler is not quite the right tool for this and I could have called this out a little better. Because of the sampling problem, like you point out, we might sample the Agent before GC has kicked in, but that is memory use that a user of the Agent would actually see, so should we elide it by manually triggering GC before sampling? I think that would give us a different view, but it may be a skewed view. The right tool here is to examine the capture data, which is itself sampled but at 1Hz.

I guess my argument would be that a freshly GC'ed Agent isn't any less representative than one where the GC has yet to run. Both states would be observable to an Agent user and need to be thought about when deciding resource allocation etc. I do continue to have a hunch that focusing on lifetime allocations is probably the more valuable thing coming out of the profiler, but as @scottopell has pointed out we do want to be careful to avoid pessimizing observable RSS consumption, or at least do so with the understanding of better gains elsewhere.
Which is an interesting alternative view. Check this out: the CPU numbers with the new interner are consistently on par or better, especially as throughput increases, but the memory consumption data is less rosy on the low end. It might be that, with the way sampling works, the live heap size ought to be dropped out of consideration and only referenced from capture data.
Another view: I think -- see DataDog/lading#702 and #19993 -- we should have a clearer notion of the number of contexts rolling through the Agent in these experiments, since the implementation is sensitive to that number, but these are neat results.
IIRC the GC is run before the heap state is captured for the profiler. I'll rebase and kick off another run of this to include the changes from #20081.
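For reference, "manually triggering GC before sampling" corresponds in plain Go to forcing a collection just before writing the heap profile. A generic sketch of that pattern, not the Agent's or the Regression Detector's actual profiling setup:

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("heap.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Force a collection so the profile reflects live objects after GC,
	// rather than whatever garbage happens to be outstanding at sample time.
	runtime.GC()
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
}
```

The net/http/pprof heap endpoint exposes the same behavior via its gc query parameter (/debug/pprof/heap?gc=1).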
Force-pushed from 0e4392d to 98115f5
This commit replaces the _nodist variants of the `uds_dogstatsd_to_api` experiment with a set of parameterized string interner experiments, varying both total contexts and throughput along two scales. The goal here is to understand how, at 8MiB/sec throughput, the variation of contexts impacts the stringInterner hotspot, and, along the throughput scale, to achieve coverage on a small range of contexts. REF SMPTNG-12 REF #19852 REF #19882 REF #19990 Signed-off-by: Brian L. Troutwine <[email protected]>
* Add experiments focused on dogstatsd's string interner

This commit replaces the _nodist variants of the `uds_dogstatsd_to_api` experiment with a set of parameterized string interner experiments, varying both total contexts and throughput along two scales. The goal here is to understand how, at 8MiB/sec throughput, the variation of contexts impacts the stringInterner hotspot, and, along the throughput scale, to achieve coverage on a small range of contexts.

REF SMPTNG-12
REF #19852
REF #19882
REF #19990

Signed-off-by: Brian L. Troutwine <[email protected]>

* correct target metrics path for Agent

Signed-off-by: Brian L. Troutwine <[email protected]>

---------

Signed-off-by: Brian L. Troutwine <[email protected]>
tlmSIRSize = telemetry.NewSimpleGauge("dogstatsd", "string_interner_entries",
	"Number of entries in the string interner")
tlmSIRBytes = telemetry.NewSimpleGauge("dogstatsd", "string_interner_bytes",
	"Number of bytes stored in the string interner")
This is really, really valuable information, can we try to keep it?
yup, don't worry, I'll make sure all existing telemetry is covered in a final version of this PR.
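As an illustration of the sort of telemetry coverage being discussed, here is a minimal sketch of how gauges like `tlmSIRSize` and `tlmSIRBytes` might be kept current from inside an interner. The `stringInterner` fields and the update site are assumptions for the example, not the PR's actual code, and it presumes the gauge type exposes a `Set` method.

```go
// Hypothetical interner shape, used only for this sketch; the PR's real
// type and fields will differ.
type stringInterner struct {
	strings map[string]string
	bytes   int
}

// intern returns the canonical copy of key, updating the gauges declared
// above as the cache grows.
func (i *stringInterner) intern(key []byte) string {
	if s, ok := i.strings[string(key)]; ok { // map lookup with string(key) avoids an allocation
		return s
	}
	s := string(key) // copy out of the caller's packet buffer
	i.strings[s] = s
	i.bytes += len(s)
	tlmSIRSize.Set(float64(len(i.strings)))
	tlmSIRBytes.Set(float64(i.bytes))
	return s
}
```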
Force-pushed from 98115f5 to 04f36b5
Exploring a slightly different architecture in #20335.
Force-pushed from 04f36b5 to 95663e2
What does this PR do?
Replaces the dogstatsd string interner with a GC-friendlier version that doesn't require dumping the cache.
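For context on "dumping the cache": the pre-existing interner is, roughly, a bounded map that is thrown away wholesale once it fills, which is the behavior the GC-friendlier fork avoids by letting the collector reclaim entries individually, as discussed in the conversation above. A simplified sketch of that reset-on-full pattern, assuming a `maxSize` bound and not the Agent's exact code:

```go
// resettingInterner is a simplified, illustrative version of the "dump the
// cache" pattern: once the map reaches maxSize, the whole cache is discarded,
// producing a burst of garbage and re-interning work on subsequent packets.
type resettingInterner struct {
	strings map[string]string
	maxSize int
}

func newResettingInterner(maxSize int) *resettingInterner {
	return &resettingInterner{strings: make(map[string]string, maxSize), maxSize: maxSize}
}

func (i *resettingInterner) intern(key []byte) string {
	if s, ok := i.strings[string(key)]; ok {
		return s
	}
	if len(i.strings) >= i.maxSize {
		// Dump the entire cache and start over.
		i.strings = make(map[string]string, i.maxSize)
	}
	s := string(key)
	i.strings[s] = s
	return s
}
```

Each reset discards every previously interned string at once, which is the kind of allocation churn the lifetime-allocation comparisons in this thread are probing.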
Motivation
Testing out differences in string interning.
Additional Notes
This will not be merged as-is, as it copy-pastas most of `pkg/util/intern` instead of using it directly.

Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Reviewer's Checklist
- The Triage milestone is set.
- The `major_change` label has been added if your change either has a major impact on the code base, is impacting multiple teams, or is changing important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change use a release note.
- The `changelog/no-changelog` label has been applied.
- The `qa/skip-qa` label is not applied.
- A `team/..` label has been applied, indicating the team(s) that should QA this change.
- The `need-change/operator` and `need-change/helm` labels have been applied.
- The `k8s/<min-version>` label has been applied, indicating the lowest Kubernetes version compatible with this feature.