chore(config): migrate tracer.retryInterval #4240
// Reload process tags to ensure consistent state (previous tests may have disabled them)
processtags.Reload()
For changes to this test, see: dd.slack.com/archives/C03D7LNP0TD/p1765313037158839
Newly importing internalconfig "github.com/DataDog/dd-trace-go/v2/internal/config" changed the dependency tree, which I believe caused the processtags init() function to run earlier than it did before. As a result, processtags.enabled evaluated to true (its default), process tags were appended to span metadata, and the payload size increased from 185 bytes to 308.
Since init() now runs earlier than before, could it trigger unwanted behavior elsewhere, or in deployed code?
cfg.logDirectory = provider.getString("DD_TRACE_LOG_DIRECTORY", "")
cfg.traceRateLimitPerSecond = provider.getFloat("DD_TRACE_RATE_LIMIT", 0.0)
cfg.retryInterval = provider.getDuration("DD_TRACE_RETRY_INTERVAL", time.Millisecond)
Same question as in another PR: do the other tracers use this same env var when they have this feature?
What does this PR do?
Tracer migrates to using Config.logStartup instead of its own config.logStartup.
Motivation
datadoghq.atlassian.net/browse/APMAPI-1748
Reviewer's Checklist
- Run ./scripts/lint.sh locally.
Unsure? Have a question? Request a review!