Conversation

Contributor

@mtoffl01 mtoffl01 commented Dec 9, 2025

What does this PR do?

Tracer migrates to using Config.retryInterval instead of its own config.retryInterval.
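
A minimal sketch of the pattern, with illustrative names only (the real types live in internal/config and ddtrace/tracer and differ in detail):

```go
package main

import (
	"fmt"
	"time"
)

// sharedConfig stands in for the centralized config in internal/config;
// the actual struct and field names are assumptions here.
type sharedConfig struct {
	retryInterval time.Duration
}

// tracerConfig used to carry its own retryInterval field; after the
// migration it delegates to the shared config instead of keeping a copy.
type tracerConfig struct {
	shared *sharedConfig
}

func (c *tracerConfig) retryInterval() time.Duration {
	return c.shared.retryInterval
}

func main() {
	shared := &sharedConfig{retryInterval: time.Millisecond} // default used in the diff below
	tc := &tracerConfig{shared: shared}
	fmt.Println("retry interval:", tc.retryInterval())
}
```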

Motivation

datadoghq.atlassian.net/browse/APMAPI-1748

Reviewer's Checklist

  • Changed code has unit tests for its functionality at or near 100% coverage.
  • System-Tests covering this feature have been added and enabled with the va.b.c-dev version tag.
  • There is a benchmark for any new code, or changes to existing code.
  • If this interacts with the agent in a new way, a system test has been added.
  • New code is free of linting errors. You can check this by running ./scripts/lint.sh locally.
  • Add an appropriate team label so this PR gets put in the right place for the release notes.
  • Non-trivial go.mod changes, e.g. adding new modules, are reviewed by @DataDog/dd-trace-go-guild.

Unsure? Have a question? Request a review!


pr-commenter bot commented Dec 9, 2025

Benchmarks

Benchmark execution time: 2025-12-10 19:51:36

Comparing candidate commit c8e1968 in PR branch mtoff/retryInterval with baseline commit b760677 in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 9 metrics, 0 unstable metrics.


codecov bot commented Dec 9, 2025

Codecov Report

❌ Patch coverage is 33.33333% with 10 lines in your changes missing coverage. Please review.
✅ Project coverage is 54.38%. Comparing base (0fc0961) to head (c8e1968).
⚠️ Report is 3 commits behind head on main.

Files with missing lines     Patch %   Lines
internal/config/config.go    9.09%     10 Missing ⚠️
Additional details and impacted files
Files with missing lines                 Coverage Δ
ddtrace/tracer/civisibility_writer.go    78.00% <100.00%> (ø)
ddtrace/tracer/option.go                 85.18% <100.00%> (+0.56%) ⬆️
ddtrace/tracer/telemetry.go              87.50% <100.00%> (ø)
ddtrace/tracer/writer.go                 91.70% <100.00%> (ø)
internal/config/config.go                71.42% <9.09%> (-28.58%) ⬇️

... and 76 files with indirect coverage changes



@mtoffl01 mtoffl01 changed the title from chore(config) migrate retryInterval to chore(config): migrate retryInterval Dec 10, 2025
@mtoffl01 mtoffl01 changed the title from chore(config): migrate retryInterval to chore(config): migrate tracer.retryInterval Dec 10, 2025
@mtoffl01 mtoffl01 marked this pull request as ready for review December 10, 2025 21:36
@mtoffl01 mtoffl01 requested review from a team as code owners December 10, 2025 21:36
Comment on lines +377 to +378
// Reload process tags to ensure consistent state (previous tests may have disabled them)
processtags.Reload()
Contributor Author
For changes to this test, see: dd.slack.com/archives/C03D7LNP0TD/p1765313037158839

By newly importing internalconfig "github.com/DataDog/dd-trace-go/v2/internal/config", I believe we pulled in a dependency chain that caused the processtags init() function to run earlier than it did before. As a result, processtags.enabled evaluated to true (its default), and process tags were appended to span metadata, increasing the payload size from 185 bytes to 308.
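
As a general Go note (with made-up package names, not this repo's): an imported package's init() always runs before the importer's init() and before main, so adding a new import can move initialization earlier than tests previously observed. A minimal sketch:

```go
// file: example.com/demo/processtagslike/processtagslike.go
// Made-up package standing in for internal/processtags.
package processtagslike

var Enabled bool

func init() {
	// Runs as soon as any package in the build imports this one,
	// directly or transitively, and always before the importer's init().
	Enabled = true
}

// file: example.com/demo/main.go
package main

import (
	"fmt"

	"example.com/demo/processtagslike"
)

func main() {
	// Every imported package is fully initialized before main starts,
	// so a newly added (even transitive) import can flip defaults like
	// this one earlier than a test previously observed.
	fmt.Println("Enabled at start of main:", processtagslike.Enabled) // true
}
```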

Member

@genesor genesor Dec 10, 2025

Since the init() now runs earlier than before, could it trigger some unwanted behavior in other places or in deployed code?

cfg.logDirectory = provider.getString("DD_TRACE_LOG_DIRECTORY", "")
cfg.traceRateLimitPerSecond = provider.getFloat("DD_TRACE_RATE_LIMIT", 0.0)

cfg.retryInterval = provider.getDuration("DD_TRACE_RETRY_INTERVAL", time.Millisecond)
Member

Same question as in another PR: is this the same env var that other tracers use, if they have this feature?
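
For context on the cfg.retryInterval line in the diff above, a hypothetical sketch of the env-with-default shape that a getDuration-style helper typically has; the real helper in internal/config may parse values differently:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// getDurationEnv mirrors the shape of the provider.getDuration call in the
// diff above: read an env var and fall back to a default when it is unset
// or unparsable. Parsing with time.ParseDuration is an assumption about the
// real implementation.
func getDurationEnv(key string, def time.Duration) time.Duration {
	v, ok := os.LookupEnv(key)
	if !ok {
		return def
	}
	d, err := time.ParseDuration(v)
	if err != nil {
		return def
	}
	return d
}

func main() {
	// With nothing set, the 1ms default from the diff applies.
	fmt.Println(getDurationEnv("DD_TRACE_RETRY_INTERVAL", time.Millisecond))
}
```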


