[CONS-6832] Fix repetitive warning logs with log file tailer #32533
base: main
Conversation
It is not clear to me why the subscription filters were changed to `SourceAll` in the first place. I already verified that the previous fix still works even if this is kept as `SourceRuntime`. @L3n41c don't hesitate to add your thoughts if you have more context about this. In the meantime, I will proceed with this PR since the freeze is close: it fixes a customer issue and doesn't seem to break your fix.
```diff
@@ -41,7 +41,7 @@ func NewContainerListener(options ServiceListernerDeps) (ServiceListener, error)
 	const name = "ad-containerlistener"
 	l := &ContainerListener{}
 	filter := workloadmeta.NewFilterBuilder().
-		SetSource(workloadmeta.SourceAll).
+		SetSource(workloadmeta.SourceRuntime).
```
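The builder pattern in the diff above can be sketched standalone. The types below (`Source`, `Filter`, `FilterBuilder`) are hypothetical, simplified stand-ins for the real workloadmeta API, just to illustrate how a source filter gates which events a listener sees:

```go
package main

import "fmt"

// Source identifies where an event originated (simplified sketch).
type Source string

const (
	SourceAll     Source = "all"
	SourceRuntime Source = "runtime"
)

// Filter holds the subscription criteria a listener registers with.
type Filter struct{ source Source }

// FilterBuilder mimics the fluent NewFilterBuilder().SetSource(...) style.
type FilterBuilder struct{ f Filter }

func NewFilterBuilder() *FilterBuilder { return &FilterBuilder{} }

func (b *FilterBuilder) SetSource(s Source) *FilterBuilder {
	b.f.source = s
	return b
}

func (b *FilterBuilder) Build() Filter { return b.f }

// Matches reports whether an event from eventSource passes the filter:
// SourceAll accepts everything, otherwise sources must match exactly.
func (f Filter) Matches(eventSource Source) bool {
	return f.source == SourceAll || f.source == eventSource
}

func main() {
	filter := NewFilterBuilder().SetSource(SourceRuntime).Build()
	fmt.Println(filter.Matches(SourceRuntime))     // true: runtime events pass
	fmt.Println(filter.Matches(Source("kubelet"))) // false: kubelet-only events are filtered out
}
```

With `SourceRuntime` instead of `SourceAll`, the container listener only reacts to runtime-sourced events, which is the behavior this PR restores.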
I don't remember the exact issue, but I think @L3n41c tried to change this source in the past and it had some side effect. The one I'm thinking of (but not 100% sure) is when the user doesn't give the agent access to the container runtime: we can still get info about the container from the kubelet.
I see your point, but I think the kubelet listener already generates a container service when it receives events from the kubelet source (see here). Keeping the container listener filter source as `SourceAll` and changing the kubelet listener's filter source back to `NodeOrchestrator` also seems to work (i.e. no duplicate logs, no extra warnings). I think that since the kubelet listener is listening for `NodeOrchestrator` events, it also receives the kubelet's events about containers. So it should still work IMO even if the container listener uses events from the container runtime and we don't have access to the runtime. WDYT?
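The split being discussed can be sketched as two listeners with disjoint source filters. All names here (`NodeOrchestrator`, `Runtime`, `Listener`, `Handles`) are simplified illustrations, not the real agent types:

```go
package main

import "fmt"

// Source identifies the collector an event came from (sketch).
type Source string

const (
	NodeOrchestrator Source = "node_orchestrator" // the kubelet
	Runtime          Source = "runtime"           // containerd / docker
)

// Event is a workloadmeta-style notification about an entity.
type Event struct {
	Entity string
	Source Source
}

// Listener handles only events matching its subscribed source.
type Listener struct {
	Name   string
	Source Source
}

func (l Listener) Handles(e Event) bool { return e.Source == l.Source }

func main() {
	kubeletListener := Listener{Name: "kubelet", Source: NodeOrchestrator}
	containerListener := Listener{Name: "container", Source: Runtime}

	// The same container is reported by both collectors, but with disjoint
	// filters each event reaches exactly one listener, so the service is
	// not created twice.
	kubeletEvent := Event{Entity: "container-1", Source: NodeOrchestrator}
	runtimeEvent := Event{Entity: "container-1", Source: Runtime}

	fmt.Println(kubeletListener.Handles(kubeletEvent), containerListener.Handles(kubeletEvent))
	fmt.Println(kubeletListener.Handles(runtimeEvent), containerListener.Handles(runtimeEvent))
}
```

Under this model, the kubelet listener still sees the kubelet's container events through `NodeOrchestrator`, even when the agent has no direct access to the runtime.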
Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv aws.create-vm --pipeline-id=51849344 --os-family=ubuntu

Note: This applies to commit 8b2f373
Uncompressed package size comparison

Comparison with ancestor

Diff per package

Decision: ✅ Passed
Regression Detector

Regression Detector Results

Metrics dashboard

Baseline: 70abd23

Optimization Goals: ✅ No significant changes detected
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | tcp_syslog_to_blackhole | ingress throughput | +1.42 | [+1.36, +1.48] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | +0.68 | [+0.21, +1.15] | 1 | Logs |
➖ | quality_gate_idle_all_features | memory utilization | +0.62 | [+0.54, +0.70] | 1 | Logs bounds checks dashboard |
➖ | quality_gate_idle | memory utilization | +0.18 | [+0.14, +0.21] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_0ms_latency | egress throughput | +0.12 | [-0.70, +0.94] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | -0.00 | [-0.64, +0.64] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | -0.00 | [-0.65, +0.65] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.12, +0.11] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.08 | [-0.96, +0.80] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | -0.13 | [-1.06, +0.80] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.17 | [-0.98, +0.64] | 1 | Logs |
➖ | file_to_blackhole_500ms_latency | egress throughput | -0.23 | [-1.00, +0.54] | 1 | Logs |
➖ | file_tree | memory utilization | -0.24 | [-0.37, -0.12] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -1.38 | [-2.06, -0.70] | 1 | Logs |
➖ | quality_gate_logs | % cpu utilization | -2.99 | [-6.19, +0.20] | 1 | Logs |
Bounds Checks: ✅ Passed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | lost_bytes | 10/10 | |
✅ | quality_gate_logs | memory_usage | 10/10 |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
What does this PR do?

This PR fixes an issue in the autodiscovery component where the container and kubelet listeners subscribe to `SourceAll` in workloadmeta, which causes the recreation of a service that has already been created and finished. As a result, the logs agent tries to make a new tailer for containers of cron jobs that have already stopped and are kept on the node for some time in the `Completed` state, depending on the user's configuration. This may or may not happen depending on the order in which the set/unset events are emitted by the different collectors.
Until version 7.55, the kubelet listener was listening for events coming from the `NodeOrchestrator` collector (i.e. the kubelet in Kubernetes), and the container listener was listening for events coming from the `Runtime` collector (e.g. the containerd runtime collector).

In 7.56, a fix was merged that solves an issue with duplicated logs. In that fix, the workloadmeta subscription filters in both the kubelet and container listeners were changed to `SourceAll` (see here and here).

When a container is stopped, the containerd collector (or docker, etc.) and the kubelet collector both emit `EventTypeUnset` events. If the subscriber is subscribed with `SourceAll` and the kubelet unsets the container first, the event is delivered to the subscriber as a Set event, because another source (i.e. the containerd collector) still maintains the entity in the store. The autodiscovery component then receives a new container entity, but this time populated only with data coming from the container runtime and not from the kubelet. Because the data are now different, it decides it should create a new service for this container.

This causes the logs agent to try to make a file tailer. However, it cannot do so if the container has already been fully deleted from the node and the log file removed, so it emits an extra warning saying that it failed to make a file tailer.
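The unset-then-reset behavior described above can be sketched with a toy multi-source store. This is a hypothetical model of the mechanism, not the real workloadmeta implementation:

```go
package main

import "fmt"

// Store is a toy model: an entity survives as long as any source reports it.
type Store struct {
	// entity ID -> set of sources currently reporting it
	sources map[string]map[string]bool
}

func NewStore() *Store { return &Store{sources: map[string]map[string]bool{}} }

// Set records that a source is reporting the entity.
func (s *Store) Set(id, source string) {
	if s.sources[id] == nil {
		s.sources[id] = map[string]bool{}
	}
	s.sources[id][source] = true
}

// Unset removes one source's claim on the entity and returns the event kind
// a SourceAll subscriber would observe: "set" if other sources remain (the
// entity is re-emitted with only the remaining data), "unset" if it is gone.
func (s *Store) Unset(id, source string) string {
	delete(s.sources[id], source)
	if len(s.sources[id]) > 0 {
		return "set" // re-emitted entity looks like a *new* service to autodiscovery
	}
	return "unset"
}

func main() {
	store := NewStore()
	store.Set("container-1", "kubelet")
	store.Set("container-1", "containerd")

	// Kubelet unsets first: containerd still holds the entity, so a
	// SourceAll subscriber sees a Set event with runtime-only data.
	fmt.Println(store.Unset("container-1", "kubelet"))    // set
	fmt.Println(store.Unset("container-1", "containerd")) // unset
}
```

The spurious intermediate "set" is exactly what makes autodiscovery recreate the service and what triggers the extra file-tailer warning this PR eliminates.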
Example:
Motivation
Fix CONS-6832
Describe how you validated your changes

We need to verify 2 things:

By restoring the sources of the kubelet and container listeners to `NodeOrchestrator` and `Runtime`, we still need to test that the previous fix works, so we need to repeat the same QA as for the previous fix. Already tested it with a cronjob, and didn't get any duplicate logs for the job:

You should also verify that we almost never get a warning saying `Could not make file tailer for source` in the agent logs. (Note: set the log level to warning when doing the QA, and leave the agent running for a sufficient time, preferably at least an hour, before validating that you are not getting many warnings.)
Possible Drawbacks / Trade-offs
None
Additional Notes