[SMP] Increase the DogStatsD default stringInterner size #19882
Conversation
Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 277ece38-0649-4a73-93cc-47bdf7a276c4

Explanation

A regression test is an integrated performance test for the Agent. Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval. We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true: the 90.00% confidence interval of "Δ mean %" does not include zero, and |Δ mean %| is at least 5.00%.
The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
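The two decision rules the detector describes above (a 90% confidence interval on "Δ mean %" that excludes zero combined with a ±5.00% threshold, plus the coefficient-of-variation check for erratic experiments) can be sketched in a few lines. This is an illustrative reconstruction of the stated criteria, not the Regression Detector's actual code; all names are hypothetical.

```python
import math

def is_regression(delta_mean_pct, ci_low, ci_high, threshold_pct=5.0):
    """Flag a change when the 90% CI for Δ mean % excludes zero
    AND the estimated |Δ mean %| meets the ±5.00% threshold."""
    ci_excludes_zero = ci_low > 0 or ci_high < 0
    return ci_excludes_zero and abs(delta_mean_pct) >= threshold_pct

def is_erratic(samples, cv_threshold=0.1):
    """An experiment is 'erratic' if its coefficient of variation
    (stddev / mean) exceeds 0.1."""
    mean = sum(samples) / len(samples)
    variance = sum((x - mean) ** 2 for x in samples) / len(samples)
    return (math.sqrt(variance) / mean) > cv_threshold
```

For example, a +6% mean change with CI [1.0, 11.0] would be flagged, while the same +6% with CI [-1.0, 13.0] would not, since the interval still includes zero.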
Observations from the Regression Detector:
We now have profiles for this run. Results.
In general I consider this change a desirable one. This obsoletes #19852.
Some noted concerns, questions:
Apologies if I muddled the paraphrases. To the first, I think we could introduce a new variant experiment in this PR that does, say, 100KiB/sec throughput, all other settings being the same. It's certainly something we can probe. To the second, we can collect the interner's self-telemetry, I just need to wire that in.
Force-pushed from 88b6446 to 69fcb49
I'm trying to establish an intuition for what factors play into this; I don't think it's as simple as "high bytes per second vs low bytes per second". The string interner is really about the number of unique strings that are processed (i.e., components of a dogstatsd message). But "every metric is unique" is not the case for the Agent. We see many tags and metric names repeated, which brings us back to the idea of a metric context.

In a scenario where we have a high number of metric contexts, we correspondingly have a high number of unique strings; we want to re-use existing allocations for all of them, so increasing the interner size makes sense. In a scenario where we have a low number of metric contexts, if the interner is sized up we end up wasting space and holding onto strings that we may have only seen once and will never see again.

I'm leaning towards leaving this default alone and maybe updating our docs for high-throughput dogstatsd to mention this as a good thing to tune. Longer/medium term it would be great to improve the tunables here, as it's rather hard to say what a good size would be for a given workload. Or make it automatic so nobody ever has to tune this (though I have no idea how we'd do that).

Sorry for the big wall of text, but I lean towards a docs change vs a default change.
Force-pushed from 69fcb49 to c9359b2
Fundamentally, I do not believe that our customers should need to be domain experts in the internals of our Agent to get acceptable performance. I believe that documenting toggles is a kinder way of pushing demand for expertise onto customers, but it's still pushing that demand. If a DogStatsD-heavy customer is not using distribution metrics, this is the bottleneck and memory-allocation center they'll hit even for what I would argue is a moderate number of contexts; see below.
It's not, no. It's also not as simple as contexts per second. If we consider the distribution problem, Agent performance is clearly contingent on throughput and on the distribution of input through time -- smooth, spiky, etc. -- and also contingent on factors that impact the internals of a component (contexts, here). That pattern is repeated here. I strongly believe that optimizing the Agent to remove allocations per second is the major heuristic available to us today; these results are driven entirely by following that heuristic. Allocating and then freeing memory is not cheap.
These experiments are not exercising unique metric names. The maximum number of contexts is 10k. The minimum is 1,000. I do agree that, as of this writing, we lack -- as a project -- a notion of what a reasonable number of contexts for a single Agent to handle is, which makes it hard to frame 'reasonable' here.
This is not correct. If we have a low number of input contexts through the lifetime of the Agent, the interner will size up proportionally to that number. We are not pre-allocating storage. The worst behavior here is what happens when we overwhelm the cache: as of this PR we continue to dump the whole storage, reallocate it, and then restart the cache. It is true that we do not clear the cache without dumping the whole thing. I have added experiments to this PR covering low-throughput, same-context situations, and I've re-triggered the run with a rebase.
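The grow-on-demand and dump-the-whole-cache-when-full behavior described above can be sketched in a few lines. This is an illustrative Python sketch only -- the real interner lives in the Agent's Go code and its names and details differ; everything here is hypothetical except the reset-on-overflow behavior the discussion describes.

```python
class StringInterner:
    """Illustrative sketch: deduplicate strings up to a fixed number of
    keys; when the cache is full, dump the entire cache and start over
    (the whole-cache invalidation behavior under discussion)."""

    def __init__(self, max_keys=4096):
        self.max_keys = max_keys
        self.cache = {}
        self.resets = 0  # how many times the whole cache was dumped

    def intern(self, s):
        cached = self.cache.get(s)
        if cached is not None:
            return cached  # re-use the existing allocation
        if len(self.cache) >= self.max_keys:
            # Overwhelmed: drop everything and restart. Under a workload
            # with more unique strings than max_keys this happens
            # repeatedly, which is the churn a larger default avoids.
            self.cache = {}
            self.resets += 1
        self.cache[s] = s
        return s
```

Note how a low-context workload never triggers a reset and the cache stays proportional to the number of unique strings seen, while a workload with more unique strings than `max_keys` repeatedly invalidates everything.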
That is this work. I believe a non-trivial improvement is possible here in the near term. As mentioned, I believe we might also profitably explore re-introduction of #19179 as a follow-up, but more importantly we really should not be invalidating the whole cache in one shot when it's full. We could explore lumping more work into this PR if our follow-up discussion from new data still leaves concerns.
I agree, this is a great goal. I'll pick one nit, which is that we're talking about getting acceptable performance under an abnormally high workload.
I understand that we don't pre-allocate, when I say "we end up wasting space and holding onto strings that we may have only seen once and will never see again", I'm referring to a scenario where there is some application and/or codepath that is hit early in the agent's lifecycle which emits some large strings as part of a dogstatsd payload. This risk is always present with low-context workloads, but the risk is higher with a larger cache size, which is why I don't support this as a new default value.
I think this was poor phrasing on my part, when I say:
I'm referring to what "tunable"/"variable" is available to be changed. Right now we only have a variable that says "size of string cache", but as this discussion highlights, it's hard to reason about the impact of changing it. If we want to improve the string interner for high-context/high-throughput workloads, we should be making changes to the way the string interner works, not just bumping up the default.
It doesn't look like the pr-commenter ran.
For context, the Workload Checks experiment
Understood, but that problem is not addressed with the project's existing default either. Consider also that the memory consumption under discussion is small, on the order of 2.6MiB in the high-end throughput experiments. The existing default does indicate an allocation-led throughput bottleneck across a range of throughputs. My argument for raising the default is that we end up in a no-worse state, set ourselves up for further improvements, and address an existing bottleneck for very little effort. I certainly think that eviction of the cache needs to be revisited. I also think that the cache is far too small by default.
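To put the "small" claim in perspective, a back-of-envelope check: at the proposed 20,480-key default, the ~2.6MiB observed in the high-end experiments works out to a modest per-entry cost. The arithmetic below is purely illustrative, derived from the two figures quoted in this thread.

```python
# Back-of-envelope: average cost per interned entry at the proposed default.
# Inputs are the figures quoted in this discussion, not fresh measurements.
max_keys = 20_480            # proposed default size (up from 4096)
observed_mib = 2.6           # high-end throughput experiment figure

bytes_per_entry = observed_mib * 1024 * 1024 / max_keys
print(f"~{bytes_per_entry:.0f} bytes per interned entry")  # ~133 bytes
```

Roughly 133 bytes per entry on average, which is consistent with the claim that even a fully populated interner at the new default is a small fraction of typical Agent memory consumption.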
I think we should be clear about what constitutes low/high contexts, else we might be thinking about numbers that are way, way off. Context data from the experiments. (Note this indicates the context filter in lading is buggy, or potentially surprising if throughput is low. Something to investigate.) Which is to say, if you'd toss some numbers out I'd consider it a favor and I'll get the experiments up and running for them. Some thoughts on the risk being higher, noting the experimental numbers above:
It seems to me that:
This suggests to me that the proposed risk in cases where it does happen in practice:
Further, correctly setting the tunable flag requires:
To be clear, I would like to follow this work up with:
This work is a good, cheap win -- is my argument -- and we can follow-up with improvements on the low-end of things too.
Related: #19990
This commit extracts experiments from #19882 and #19990, expanding the throughput parameter across the range of concern. Note that at the lower end of the throughput range the full contexts will not be explored, meaning that throughput per second is the dominant metric here. Contexts per second we will need to derive from captured target telemetry. Intended to back ongoing experiments with the stringInterner. REF SMPTNG-12 Signed-off-by: Brian L. Troutwine <[email protected]>
Force-pushed from c9359b2 to 55a4897
See #19990 (comment)
Force-pushed from 55a4897 to 965a4f4
This commit replaces the _nodist variants of the `uds_dogstatsd_to_api` experiment with a set of parameterized string interner experiments, varying both total contexts and throughput along two scales. The goal here is to understand how, at 8MiB/sec throughput, the variation of contexts impacts the stringInterner hotspot and along throughput to achieve coverage on a small range of contexts. REF SMPTNG-12 REF #19852 REF #19882 REF #19990 Signed-off-by: Brian L. Troutwine <[email protected]>
* Add experiments focused on dogstatsd's string interner This commit replaces the _nodist variants of the `uds_dogstatsd_to_api` experiment with a set of parameterized string interner experiments, varying both total contexts and throughput along two scales. The goal here is to understand how, at 8MiB/sec throughput, the variation of contexts impacts the stringInterner hotspot and along throughput to achieve coverage on a small range of contexts. REF SMPTNG-12 REF #19852 REF #19882 REF #19990 Signed-off-by: Brian L. Troutwine <[email protected]> * correct target metrics path for Agent Signed-off-by: Brian L. Troutwine <[email protected]> --------- Signed-off-by: Brian L. Troutwine <[email protected]>
Force-pushed from 965a4f4 to 5672d60
Force-pushed from 5672d60 to 76a992f
This commit increases the default DogStatsD string interner size from 4096 keys to 20_480 keys. Per #19852 we believe that this will benefit Agent memory consumption and allocation pressure while nominally increasing DogStatsD throughput. Details [in this comment](#19852 (comment)). We will know if this change is specific to a single scenario or more generally applicable after the Regression Detector runs. I will comment with an analysis on this PR once that happens. REF SMPTNG-12 Signed-off-by: Brian L. Troutwine <[email protected]>
Signed-off-by: Brian L. Troutwine <[email protected]>
Signed-off-by: Brian L. Troutwine <[email protected]>
Force-pushed from 76a992f to eeb5415
Supplanted by #20943
What does this PR do?
This commit increases the default DogStatsD string interner size from 4096 keys to 20_480 keys. Per #19852 we believe that this will benefit Agent memory consumption and allocation pressure while nominally increasing DogStatsD throughput. Details in this comment.
Additional Notes
We will know if this change is specific to a single scenario or more generally applicable after the Regression Detector runs. I will comment with an analysis on this PR once that happens.
Possible Drawbacks / Trade-offs
Given the allocation pressure already present in this portion of the codebase, I am unaware of any. The string interner does not pre-allocate its storage -- although #19179 suggests that would be an interesting follow-up -- so smaller target deploys will not see higher baseline memory consumption.
Motivation
See this page for details
REF SMPTNG-12