
[CONS-6832] Fix repetitive warning logs with log file tailer #32533

Draft
adel121 wants to merge 4 commits into main
Conversation

adel121 (Contributor) commented Dec 27, 2024

What does this PR do?

This PR fixes an issue in the autodiscovery component where the container and kubelet listeners subscribe to SourceAll in workloadmeta, which causes recreation of a service that has already been created and finished.

As a result, the logs agent tries to create a new tailer for containers of cron jobs that have already stopped and are kept on the node for some time in the Completed state, depending on the user's configuration.

This may or may not happen depending on the order in which the set and unset events are emitted by the different collectors.

Until version 7.55, the kubelet listener was listening for events coming from the NodeOrchestrator collector (i.e. the kubelet in Kubernetes), and the container listener was listening for events coming from the Runtime collector (e.g. the containerd runtime collector).

In 7.56, a fix was merged that solved an issue with duplicated logs.

In that fix, the workloadmeta subscription filters in both the kubelet and container listeners were changed to SourceAll (see here and here).

When a container is stopped, the containerd collector (or Docker, etc.) and the kubelet collector both emit EventTypeUnset events.

With a SourceAll subscription, if the kubelet unsets the container first, the event is delivered to the subscriber as a Set event, because another source (the containerd collector) still maintains the entity in the store. The autodiscovery component then receives a new container entity, this time populated only with data coming from the container runtime and not from the kubelet. Because the data are now different, it concludes that a new service should be created for this container.

This causes the logs agent to try to create a file tailer. However, it cannot do so if the container has already been fully deleted from the node and the log file removed. It therefore emits an extra warning saying that it failed to create a file tailer.

Example:

Could not make file tailer for source container_collect_all (falling back to socket): cannot find pod for container "6f585560fac9d45127f20509c7c84d017126776573772f42e1bd45af59090e54": "6f585560fac9d45127f20509c7c84d017126776573772f42e1bd45af59090e54" not found
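
For illustration, here is a minimal sketch of the source-scoped subscriptions this change restores, following the workloadmeta.NewFilterBuilder().SetSource(...) pattern visible in the diff further down. The import path, the Build() call and its return type, and the SourceNodeOrchestrator constant are assumptions based on that pattern and on the PR description, not code copied from the listeners.

package listeners

import (
	workloadmeta "github.com/DataDog/datadog-agent/comp/core/workloadmeta/def"
)

// buildListenerFilters sketches the source-scoped subscriptions this PR
// restores: each listener reacts only to entities reported by its own
// collector, so an Unset from one source is no longer re-delivered as a
// partial Set assembled from the remaining sources.
func buildListenerFilters() (containerFilter, kubeletFilter *workloadmeta.Filter) {
	// Container listener: container runtime collector only (containerd, Docker, ...).
	containerFilter = workloadmeta.NewFilterBuilder().
		SetSource(workloadmeta.SourceRuntime).
		Build()

	// Kubelet listener: kubelet (NodeOrchestrator) collector only; the
	// constant name is assumed from the PR description.
	kubeletFilter = workloadmeta.NewFilterBuilder().
		SetSource(workloadmeta.SourceNodeOrchestrator).
		Build()

	return containerFilter, kubeletFilter
}

The intent described above is that, with source-scoped filters, a kubelet Unset for a stopped container is simply not visible to the container listener, and the runtime collector's own Unset arrives as an Unset instead of being merged into a partial Set.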

Motivation

Fix CONS-6832

Describe how you validated your changes

We need to verify two things:

1. Since the sources of the kubelet and container listeners are restored to NodeOrchestrator and Runtime, we still need to verify that the previous fix keeps working, by repeating the same QA as for that fix.

Already tested this with a cron job, and didn't get any duplicate logs for the job:

[screenshot]

2. Verify that the warning Could not make file tailer for source almost never appears in the agent logs.

(Note: set the log level to warning when doing the QA, and leave the agent running for a sufficient time, preferably at least an hour, before validating that you are not getting many warnings.)

Possible Drawbacks / Trade-offs

None

Additional Notes

github-actions bot added the short review (PR is simple enough to be reviewed quickly) and team/container-platform (The Container Platform Team) labels on Dec 27, 2024
adel121 marked this pull request as ready for review on December 27, 2024 09:57
adel121 requested review from a team as code owners on December 27, 2024 09:57
adel121 (Contributor, Author) commented Dec 27, 2024

It is not clear to me why the subscription filters were changed to SourceAll in the previous fix.

I already verified that the previous fix still works even if these are kept as SourceRuntime and NodeOrchestrator.

@L3n41c don't hesitate to add your thoughts if you have more context about this.

In the meantime, I will proceed with this PR since the freeze is close; it fixes a customer issue and doesn't seem to break your fix.

adel121 added the qa/rc-required (Only for a PR that requires validation on the Release Candidate) label on Dec 27, 2024
adel121 modified the milestone: 7.62.0 on Dec 27, 2024
@@ -41,7 +41,7 @@ func NewContainerListener(options ServiceListernerDeps) (ServiceListener, error)
 	const name = "ad-containerlistener"
 	l := &ContainerListener{}
 	filter := workloadmeta.NewFilterBuilder().
-		SetSource(workloadmeta.SourceAll).
+		SetSource(workloadmeta.SourceRuntime).
A contributor commented on this line:
I don't remember the issue, but I think @L3n41c tried in the past to change this source and it had some side effect.

The one I'm thinking of (but not 100% sure) is when the user doesn't give the agent access to the container runtime, but we can still get info about the container from the kubelet.

adel121 (Contributor, Author) replied Dec 27, 2024

I see your point, but I think the kubelet listener already generates container services when it receives events from the kubelet source (see here).

Keeping the container listener filter source as SourceAll and changing the kubelet listener's filter source back to NodeOrchestrator also seems to work (i.e. no duplicate logs, no extra warnings).

I think that since the kubelet listener listens for NodeOrchestrator events, it also receives events from the kubelet about the containers. So it should still work, IMO, even if the container listener uses events from the container runtime and we don't have access to the runtime.

WDYT?
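
To make that argument concrete, here is a minimal sketch of a subscriber scoped to the NodeOrchestrator source that still receives container entities, since the kubelet reports the containers of its pods. Everything below that is not visible in the diff above (the import path, AddKind, KindContainer, the Subscribe signature, NormalPriority, Acknowledge) is an assumed shape of the workloadmeta API, not code from this PR.

package listeners

import (
	workloadmeta "github.com/DataDog/datadog-agent/comp/core/workloadmeta/def"
)

// watchKubeletContainers is a sketch, not code from this PR: a subscription
// scoped to the NodeOrchestrator (kubelet) source still yields container
// entities, because the kubelet reports the containers of its pods, so
// container discovery does not require access to the container runtime.
func watchKubeletContainers(wmeta workloadmeta.Component) {
	// Assumed API shapes: AddKind/KindContainer, Subscribe, NormalPriority,
	// and EventBundle.Acknowledge are not shown in the diff above.
	filter := workloadmeta.NewFilterBuilder().
		AddKind(workloadmeta.KindContainer).
		SetSource(workloadmeta.SourceNodeOrchestrator).
		Build()

	ch := wmeta.Subscribe("kubelet-scoped-sketch", workloadmeta.NormalPriority, filter)
	for bundle := range ch {
		bundle.Acknowledge()
		for _, evt := range bundle.Events {
			switch evt.Type {
			case workloadmeta.EventTypeSet:
				// Container reported by the kubelet: create or refresh its service.
			case workloadmeta.EventTypeUnset:
				// Container no longer reported by the kubelet: schedule the
				// service for removal.
			}
		}
	}
}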

github-actions bot added the medium review (PR review might take time) label and removed the short review (PR is simple enough to be reviewed quickly) label on Dec 27, 2024
agent-platform-auto-pr bot commented Dec 27, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv aws.create-vm --pipeline-id=51849344 --os-family=ubuntu

Note: This applies to commit 8b2f373

agent-platform-auto-pr bot commented Dec 27, 2024

Uncompressed package size comparison

Comparison with ancestor 70abd234c3950faf793ae73b1276bd6d1dc85c46

Diff per package
| package | diff | size | ancestor | threshold |
|---|---|---|---|---|
| datadog-agent-amd64-deb | 0.00MB | 1191.10MB | 1191.10MB | 140.00MB |
| datadog-iot-agent-aarch64-rpm | 0.00MB | 108.88MB | 108.88MB | 10.00MB |
| datadog-agent-x86_64-rpm | 0.00MB | 1200.42MB | 1200.42MB | 140.00MB |
| datadog-agent-x86_64-suse | 0.00MB | 1200.42MB | 1200.42MB | 140.00MB |
| datadog-agent-aarch64-rpm | 0.00MB | 944.69MB | 944.69MB | 140.00MB |
| datadog-agent-arm64-deb | 0.00MB | 935.39MB | 935.39MB | 140.00MB |
| datadog-dogstatsd-amd64-deb | 0.00MB | 78.57MB | 78.57MB | 10.00MB |
| datadog-dogstatsd-x86_64-rpm | 0.00MB | 78.64MB | 78.64MB | 10.00MB |
| datadog-dogstatsd-x86_64-suse | 0.00MB | 78.64MB | 78.64MB | 10.00MB |
| datadog-dogstatsd-arm64-deb | 0.00MB | 55.77MB | 55.77MB | 10.00MB |
| datadog-heroku-agent-amd64-deb | 0.00MB | 505.27MB | 505.27MB | 70.00MB |
| datadog-iot-agent-amd64-deb | 0.00MB | 113.35MB | 113.35MB | 10.00MB |
| datadog-iot-agent-arm64-deb | 0.00MB | 108.81MB | 108.81MB | 10.00MB |
| datadog-iot-agent-x86_64-rpm | -0.00MB | 113.42MB | 113.42MB | 10.00MB |
| datadog-iot-agent-x86_64-suse | -0.00MB | 113.42MB | 113.42MB | 10.00MB |

Decision

✅ Passed

adel121 requested a review from clamoriniere on December 27, 2024 10:37
adel121 marked this pull request as draft on December 27, 2024 13:07
Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 5c30cd54-50fa-4e41-931c-9b306a66ab51

Baseline: 70abd23
Comparison: 8b2f373
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|
| tcp_syslog_to_blackhole | ingress throughput | +1.42 | [+1.36, +1.48] | 1 | Logs |
| file_to_blackhole_1000ms_latency_linear_load | egress throughput | +0.68 | [+0.21, +1.15] | 1 | Logs |
| quality_gate_idle_all_features | memory utilization | +0.62 | [+0.54, +0.70] | 1 | Logs, bounds checks dashboard |
| quality_gate_idle | memory utilization | +0.18 | [+0.14, +0.21] | 1 | Logs, bounds checks dashboard |
| file_to_blackhole_0ms_latency | egress throughput | +0.12 | [-0.70, +0.94] | 1 | Logs |
| tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | Logs |
| file_to_blackhole_300ms_latency | egress throughput | -0.00 | [-0.64, +0.64] | 1 | Logs |
| file_to_blackhole_100ms_latency | egress throughput | -0.00 | [-0.65, +0.65] | 1 | Logs |
| uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.12, +0.11] | 1 | Logs |
| file_to_blackhole_0ms_latency_http2 | egress throughput | -0.08 | [-0.96, +0.80] | 1 | Logs |
| file_to_blackhole_0ms_latency_http1 | egress throughput | -0.13 | [-1.06, +0.80] | 1 | Logs |
| file_to_blackhole_1000ms_latency | egress throughput | -0.17 | [-0.98, +0.64] | 1 | Logs |
| file_to_blackhole_500ms_latency | egress throughput | -0.23 | [-1.00, +0.54] | 1 | Logs |
| file_tree | memory utilization | -0.24 | [-0.37, -0.12] | 1 | Logs |
| uds_dogstatsd_to_api_cpu | % cpu utilization | -1.38 | [-2.06, -0.70] | 1 | Logs |
| quality_gate_logs | % cpu utilization | -2.99 | [-6.19, +0.20] | 1 | Logs |

Bounds Checks: ✅ Passed

| experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|
| file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_logs | lost_bytes | 10/10 | |
| quality_gate_logs | memory_usage | 10/10 | |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.

Labels
medium review (PR review might take time), qa/rc-required (Only for a PR that requires validation on the Release Candidate), team/container-platform (The Container Platform Team)
2 participants