perf(telemetry): batch telemetry events into one request #13354
base: main
Conversation
Bootstrap import analysis

Comparison of import times between this PR and base.

Summary
The average import time from this PR is: 271 ± 4 ms.
The average import time from base is: 270 ± 2 ms.
The import time difference between this PR and base is: 0.7 ± 0.1 ms.

Import time breakdown
The following import paths have grown:
Benchmarks

Benchmark execution time: 2025-05-21 12:40:41
Comparing candidate commit 9723554 in PR branch.
Found 0 performance improvements and 0 performance regressions!
Performance is the same for 485 metrics, 5 unstable metrics.
Force-pushed e9d9fc7 to 4ee8b1e
Blocked by: #13391
This pull request has been automatically closed after a period of inactivity.
There are some merge conflicts; it wasn't obvious to me how to resolve them.
Two important test suites are failing on this branch:
- The multiple_os_tests suite is failing, probably because it's not able to retrieve the collected SBOM via telemetry. It's an important suite: it checks a specific feature and is also one of the rare (only) suites doing end-to-end tests on Windows and macOS.
- System tests are failing on assertions on telemetry events, which is more concerning since this change should not impact them. Does it mean we need to decrease the 60-second interval for some system-test scenarios, or can we force a flush? (A rough sketch of the interval option follows after this exchange.)
- Also, does it mean that APM does not have a single system-test telemetry test with metrics and logs, or that the failing AppSec scenarios are a symptom that some additional refactoring of telemetry for AppSec is required on this branch?
I believe most tests are failing because we are batching requests under a new request type.
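Not from this PR, but to make the "decrease the 60-second interval" option above concrete: a system-test scenario could run the instrumented service with a shorter telemetry interval so the batched payload arrives within the test's assertion window. The `DD_TELEMETRY_HEARTBEAT_INTERVAL` setting and the `app.py` entrypoint below are assumptions for the sketch; verify the exact setting name and default against this branch before relying on it.

```python
import os
import subprocess

# Hypothetical system-test setup: run the instrumented app with a shorter
# telemetry interval so the single batched "message-batch" payload is sent
# within the test's wait window instead of after the default ~60 s.
# NOTE: DD_TELEMETRY_HEARTBEAT_INTERVAL (in seconds) is assumed here; confirm
# the setting name and default against the branch under test.
env = os.environ.copy()
env["DD_TELEMETRY_HEARTBEAT_INTERVAL"] = "2"  # assumed setting; default is ~60 s

subprocess.run(
    ["python", "app.py"],  # placeholder entrypoint for the scenario under test
    env=env,
    check=True,
    timeout=30,
)
```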
This pull request has been automatically closed after a period of inactivity.
Performance SLOs

Comparing candidate munir/batch-all-telemetry-requests (4482428) with baseline main (19ca460)

❌ Test Failures (2 suites)

❌ djangosimple - 27/28
- ✅ appsec: Time ✅ 20.492ms (SLO: <22.300ms -8.1%) vs baseline: -0.4% | Memory ✅ 65.226MB (SLO: <66.000MB 🟡 -1.2%) vs baseline: +6.0%
- ✅ exception-replay-enabled: Time ✅ 1.346ms (SLO: <1.450ms -7.2%) vs baseline: -0.7% | Memory ✅ 64.096MB (SLO: <66.000MB -2.9%) vs baseline: +5.7%
- ✅ iast: Time ✅ 20.563ms (SLO: <22.250ms -7.6%) vs baseline: +0.3% | Memory ✅ 65.287MB (SLO: <66.000MB 🟡 -1.1%) vs baseline: +6.2%
- ✅ profiler: Time ✅ 15.223ms (SLO: <16.550ms -8.0%) vs baseline: ~same | Memory ✅ 53.320MB (SLO: <53.500MB 🟡 -0.3%) vs baseline: +5.5%
- ✅ span-code-origin: Time ✅ 26.247ms (SLO: <28.200ms -6.9%) vs baseline: +0.3% | Memory ✅ 67.366MB (SLO: <68.500MB 🟡 -1.7%) vs baseline: +6.0%
- ✅ tracer: Time ✅ 20.564ms (SLO: <21.750ms -5.5%) vs baseline: ~same | Memory ✅ 65.246MB (SLO: <66.000MB 🟡 -1.1%) vs baseline: +6.1%
- ✅ tracer-and-profiler: Time ✅ 22.133ms (SLO: <23.500ms -5.8%) vs baseline: +0.4% | Memory ✅ 66.375MB (SLO: <67.000MB 🟡 -0.9%) vs baseline: +5.7%
- ✅ tracer-dont-create-db-spans: Time ✅ 19.318ms (SLO: <21.500ms 📉 -10.2%) vs baseline: ~same | Memory ✅ 65.257MB (SLO: <66.000MB 🟡 -1.1%) vs baseline: +6.2%
- ✅ tracer-minimal: Time ✅ 16.672ms (SLO: <17.500ms -4.7%) vs baseline: ~same | Memory ✅ 64.914MB (SLO: <66.000MB 🟡 -1.6%) vs baseline: +5.4%
- ❌ tracer-native: Time ✅ 20.532ms (SLO: <21.750ms -5.6%) vs baseline: +0.1% | Memory ❌ 71.113MB (SLO: <66.000MB +7.7%) vs baseline: 📈 +13.2%
- ✅ tracer-no-caches: Time ✅ 18.426ms (SLO: <19.650ms -6.2%) vs baseline: ~same | Memory ✅ 65.148MB (SLO: <66.000MB 🟡 -1.3%) vs baseline: +6.0%
- ✅ tracer-no-databases: Time ✅ 18.888ms (SLO: <20.100ms -6.0%) vs baseline: +0.3% | Memory ✅ 64.830MB (SLO: <66.000MB 🟡 -1.8%) vs baseline: +5.4%
- ✅ tracer-no-middleware: Time ✅ 20.144ms (SLO: <21.500ms -6.3%) vs baseline: -0.4% | Memory ✅ 65.186MB (SLO: <66.000MB 🟡 -1.2%) vs baseline: +6.0%
- ✅ tracer-no-templates: Time ✅ 20.282ms (SLO: <22.000ms -7.8%) vs baseline: -0.5% | Memory ✅ 65.169MB (SLO: <66.000MB 🟡 -1.3%) vs baseline: +5.8%

❌ flasksimple - 15/17
- ✅ appsec-get: Time ✅ 4.580ms (SLO: <4.750ms -3.6%) vs baseline: -0.3% | Memory ✅ 62.873MB (SLO: <64.500MB -2.5%) vs baseline: +5.8%
- ✅ appsec-post: Time ✅ 6.576ms (SLO: <6.750ms -2.6%) vs baseline: ~same | Memory ✅ 63.209MB (SLO: <64.500MB -2.0%) vs baseline: +6.2%
- ✅ appsec-telemetry: Time ✅ 4.580ms (SLO: <4.750ms -3.6%) vs baseline: ~same | Memory ✅ 62.988MB (SLO: <64.500MB -2.3%) vs baseline: +6.0%
- ❌ debugger: Time ✅ 1.852ms (SLO: <2.000ms -7.4%) vs baseline: -0.2% | Memory ❌ 45.402MB (SLO: <45.000MB +0.9%) vs baseline: +6.3%
- ✅ iast-get: Time ✅ 1.863ms (SLO: <2.000ms -6.9%) vs baseline: +0.3% | Memory ✅ 41.853MB (SLO: <49.000MB 📉 -14.6%) vs baseline: +5.1%
- ✅ profiler: Time ✅ 1.917ms (SLO: <2.100ms -8.7%) vs baseline: ~same | Memory ✅ 45.125MB (SLO: <46.500MB -3.0%) vs baseline: +6.5%
- ✅ tracer: Time ✅ 3.375ms (SLO: <3.650ms -7.5%) vs baseline: ~same | Memory ✅ 52.182MB (SLO: <53.500MB -2.5%) vs baseline: +6.3%
- ❌ tracer-native: Time ✅ 3.385ms (SLO: <3.650ms -7.3%) vs baseline: +0.4% | Memory ❌ 58.039MB (SLO: <53.500MB +8.5%) vs baseline: 📈 +15.5%

📈 Performance Regressions (1 suite)

📈 iastaspectsospath - 24/24
- ✅ ospathbasename_aspect: Time ✅ 4.380µs (SLO: <10.000µs 📉 -56.2%) vs baseline: +3.7% | Memory ✅ 37.650MB (SLO: <39.000MB -3.5%) vs baseline: +5.4%
- ✅ ospathbasename_noaspect: Time ✅ 1.072µs (SLO: <10.000µs 📉 -89.3%) vs baseline: ~same | Memory ✅ 37.611MB (SLO: <39.000MB -3.6%) vs baseline: +5.4%
- ✅ ospathjoin_aspect: Time ✅ 6.074µs (SLO: <10.000µs 📉 -39.3%) vs baseline: -1.9% | Memory ✅ 37.690MB (SLO: <39.000MB -3.4%) vs baseline: +5.5%
- ✅ ospathjoin_noaspect: Time ✅ 2.281µs (SLO: <10.000µs 📉 -77.2%) vs baseline: ~same | Memory ✅ 37.650MB (SLO: <39.000MB -3.5%) vs baseline: +5.3%
- ✅ ospathnormcase_aspect: Time ✅ 3.513µs (SLO: <10.000µs 📉 -64.9%) vs baseline: +0.8% | Memory ✅ 37.690MB (SLO: <39.000MB -3.4%) vs baseline: +5.4%
- ✅ ospathnormcase_noaspect: Time ✅ 0.566µs (SLO: <10.000µs 📉 -94.3%) vs baseline: -1.5% | Memory ✅ 37.650MB (SLO: <39.000MB -3.5%) vs baseline: +5.2%
- ✅ ospathsplit_aspect: Time ✅ 4.948µs (SLO: <10.000µs 📉 -50.5%) vs baseline: +2.2% | Memory ✅ 37.690MB (SLO: <39.000MB -3.4%) vs baseline: +5.5%
- ✅ ospathsplit_noaspect: Time ✅ 1.617µs (SLO: <10.000µs 📉 -83.8%) vs baseline: +1.7% | Memory ✅ 37.690MB (SLO: <39.000MB -3.4%) vs baseline: +5.6%
- ✅ ospathsplitdrive_aspect: Time ✅ 3.730µs (SLO: <10.000µs 📉 -62.7%) vs baseline: +0.5% | Memory ✅ 37.749MB (SLO: <39.000MB -3.2%) vs baseline: +5.5%
- ✅ ospathsplitdrive_noaspect: Time ✅ 0.694µs (SLO: <10.000µs 📉 -93.1%) vs baseline: -0.2% | Memory ✅ 37.709MB (SLO: <39.000MB -3.3%) vs baseline: +5.4%
- ✅ ospathsplitext_aspect: Time ✅ 5.292µs (SLO: <10.000µs 📉 -47.1%) vs baseline: 📈 +16.4% | Memory ✅ 37.709MB (SLO: <39.000MB -3.3%) vs baseline: +5.5%
- ✅ ospathsplitext_noaspect: Time ✅ 1.381µs (SLO: <10.000µs 📉 -86.2%) vs baseline: +0.4% | Memory ✅ 37.729MB (SLO: <39.000MB -3.3%) vs baseline: +5.3%

🟡 Near SLO Breach (3 suites)

🟡 errortrackingdjangosimple - 6/6
- ✅ errortracking-enabled-all: Time ✅ 18.051ms (SLO: <19.850ms -9.1%) vs baseline: +0.3% | Memory ✅ 64.974MB (SLO: <65.500MB 🟡 -0.8%) vs baseline: +5.6%
- ✅ errortracking-enabled-user: Time ✅ 18.095ms (SLO: <19.400ms -6.7%) vs baseline: +0.2% | Memory ✅ 65.087MB (SLO: <65.500MB 🟡 -0.6%) vs baseline: +5.8%
- ✅ tracer-enabled: Time ✅ 18.037ms (SLO: <19.450ms -7.3%) vs baseline: -0.1% | Memory ✅ 64.940MB (SLO: <65.500MB 🟡 -0.9%) vs baseline: +5.5%

🟡 flasksqli - 6/6
- ✅ appsec-enabled: Time ✅ 3.949ms (SLO: <4.200ms -6.0%) vs baseline: -0.6% | Memory ✅ 63.033MB (SLO: <66.000MB -4.5%) vs baseline: +5.6%
- ✅ iast-enabled: Time ✅ 2.462ms (SLO: <2.800ms 📉 -12.1%) vs baseline: ~same | Memory ✅ 58.668MB (SLO: <59.000MB 🟡 -0.6%) vs baseline: +5.8%
- ✅ tracer-enabled: Time ✅ 2.080ms (SLO: <2.250ms -7.6%) vs baseline: -0.4% | Memory ✅ 51.846MB (SLO: <53.500MB -3.1%) vs baseline: +6.1%

🟡 otelspan - 22/22
- ✅ add-event: Time ✅ 45.244ms (SLO: <47.150ms -4.0%) vs baseline: ~same | Memory ✅ 45.068MB (SLO: <46.500MB -3.1%) vs baseline: +5.7%
- ✅ add-metrics: Time ✅ 322.194ms (SLO: <344.800ms -6.6%) vs baseline: +0.3% | Memory ✅ 552.862MB (SLO: <562.000MB 🟡 -1.6%) vs baseline: +4.7%
- ✅ add-tags: Time ✅ 291.674ms (SLO: <314.000ms -7.1%) vs baseline: +0.2% | Memory ✅ 554.283MB (SLO: <563.500MB 🟡 -1.6%) vs baseline: +4.9%
- ✅ get-context: Time ✅ 85.075ms (SLO: <92.350ms -7.9%) vs baseline: +3.2% | Memory ✅ 40.219MB (SLO: <46.500MB 📉 -13.5%) vs baseline: +5.8%
- ✅ is-recording: Time ✅ 42.923ms (SLO: <44.500ms -3.5%) vs baseline: +0.2% | Memory ✅ 44.437MB (SLO: <46.500MB -4.4%) vs baseline: +5.8%
- ✅ record-exception: Time ✅ 63.054ms (SLO: <67.650ms -6.8%) vs baseline: +2.5% | Memory ✅ 40.436MB (SLO: <46.500MB 📉 -13.0%) vs baseline: +5.6%
- ✅ set-status: Time ✅ 48.814ms (SLO: <50.400ms -3.1%) vs baseline: ~same | Memory ✅ 44.427MB (SLO: <46.500MB -4.5%) vs baseline: +5.7%
- ✅ start: Time ✅ 43.224ms (SLO: <43.450ms 🟡 -0.5%) vs baseline: +2.2% | Memory ✅ 44.442MB (SLO: <46.500MB -4.4%) vs baseline: +5.8%
- ✅ start-finish: Time ✅ 84.899ms (SLO: <88.000ms -3.5%) vs baseline: +1.9% | Memory ✅ 34.564MB (SLO: <46.500MB 📉 -25.7%) vs baseline: +6.9%
- ✅ start-finish-telemetry: Time ✅ 84.541ms (SLO: <89.000ms -5.0%) vs baseline: -0.2% | Memory ✅ 34.583MB (SLO: <46.500MB 📉 -25.6%) vs baseline: +6.6%
- ✅ update-name: Time ✅ 44.248ms (SLO: <45.150ms 🟡 -2.0%) vs baseline: +0.2% | Memory ✅ 44.733MB (SLO: <46.500MB -3.8%) vs baseline: +5.8%
Force-pushed fa3caae to 4f4bda9
Force-pushed 91407eb to 1d2fc4d
Force-pushed 1d2fc4d to 1b8f44b
Previously, ddtrace sent a separate synchronous request for each telemetry event (e.g., app-started, config-changed, dependencies-loaded). In the worst case, we would have sent over 16 requests per minute (6 log events, 6 metric events, 1 configuration event, 1 app-started event, etc.), but the typical number was 6–10 per minute.
With this change, we batch all telemetry events queued during the telemetry interval (default: 60 seconds) into one request using the `message-batch` request type. This reduces the overhead of sending instrumentation telemetry events.

Here's the before and after for an instrumented process with a duration of ~60 seconds.
Before:
After:
Since each request is sent synchronously, this change also speeds up ddtrace shutdown by 3-4 seconds.
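For illustration only (not the actual implementation in this PR): a minimal sketch of the batching idea, where events queued during the interval are wrapped into a single `message-batch` envelope and sent as one request instead of one request per event. The envelope field names, endpoint path, and helper name are assumptions for the sketch, not ddtrace's real internals.

```python
import json
import urllib.request


def send_telemetry_batch(agent_url, queued_events):
    """Illustrative sketch: send all queued telemetry events in one request.

    queued_events is a list of (request_type, payload) tuples collected during
    the telemetry interval, e.g. ("app-started", {...}). The envelope fields
    and endpoint path below are assumptions, not ddtrace's exact wire format.
    """
    if not queued_events:
        return None  # nothing queued this interval, so no request at all

    body = {
        "request_type": "message-batch",
        "payload": [
            {"request_type": rtype, "payload": payload}
            for rtype, payload in queued_events
        ],
    }
    req = urllib.request.Request(
        f"{agent_url}/telemetry/proxy/api/v2/apmtelemetry",  # assumed endpoint
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # One synchronous request per interval instead of one per queued event.
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.status
```

Called once per flush interval (and once more at shutdown), a helper like this turns the 6–10 per-minute requests described above into a single request per interval.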
Checklist
Reviewer Checklist