
[APMSP-1874] Add fork support by dropping runtime #1056


Merged · 19 commits into main from vianney/data-pipeline/add-threads-shutdown on Jun 10, 2025

Conversation

@VianneyRuhlmann (Contributor) commented May 13, 2025

What does this PR do?

Adds a PausableWorker that allows background workers running on the tokio runtime to be paused, saving their state so they can be restarted later.

Motivation

To support forking in Python, we need to drop the tokio runtime before the fork so that all of its threads are stopped.
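A minimal, hedged sketch of the mechanism: the variant names, the start/pause shapes, and the std::mem::replace trick mirror snippets quoted in the review threads below, while the exact bounds, field types, and error handling here are assumptions rather than the PR's code.

```rust
use anyhow::{anyhow, Result};
use tokio::{runtime::Runtime, task::JoinHandle};
use tokio_util::sync::CancellationToken;

/// A long-running background task whose state is handed back when it stops.
pub trait Worker {
    fn run(&mut self) -> impl std::future::Future<Output = ()> + Send;
}

/// Either running on a tokio runtime or paused with its state saved.
pub enum PausableWorker<T: Worker> {
    Running {
        handle: JoinHandle<T>,
        stop_token: CancellationToken,
    },
    Paused {
        worker: T,
    },
    /// A previous task panicked or was aborted, so the worker state is lost.
    InvalidState,
}

impl<T: Worker + Send + Sync + 'static> PausableWorker<T> {
    pub fn new(worker: T) -> Self {
        Self::Paused { worker }
    }

    /// Start the worker on the given runtime, moving its state into a task.
    pub fn start(&mut self, runtime: &Runtime) -> Result<()> {
        if let Self::Running { .. } = self {
            Ok(())
        } else if let Self::Paused { mut worker } = std::mem::replace(self, Self::InvalidState) {
            let stop_token = CancellationToken::new();
            let token = stop_token.clone();
            let handle = runtime.spawn(async move {
                tokio::select! {
                    _ = worker.run() => {}
                    _ = token.cancelled() => {}
                }
                worker // hand the state back so the worker can be restarted
            });
            *self = Self::Running { handle, stop_token };
            Ok(())
        } else {
            Err(anyhow!("worker is in an invalid state"))
        }
    }

    /// Cancel the task and wait for it to hand the worker state back.
    pub async fn pause(&mut self) -> Result<()> {
        match self {
            Self::Running { handle, stop_token } => {
                stop_token.cancel();
                if let Ok(worker) = handle.await {
                    *self = Self::Paused { worker };
                    Ok(())
                } else {
                    // The task panicked or was aborted, so its state is gone.
                    *self = Self::InvalidState;
                    Err(anyhow!("worker task failed"))
                }
            }
            Self::Paused { .. } => Ok(()),
            Self::InvalidState => Err(anyhow!("worker is in an invalid state")),
        }
    }
}
```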

Additional Notes

Some changes have been made to the ddtelemetry crate to expose the internal worker.

How to test the change?

Tests have been added for the PausableWorker. The changes can also be tested using the NativeWriter in dd-trace-py !13071 by building ddtrace-native from this branch. A usage sketch of the pause/restart round-trip is shown below.
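A hedged usage sketch of that round-trip, building on the PausableWorker sketch above; TickWorker and the flow are invented for illustration, and fork() itself is elided.

```rust
use std::time::Duration;

/// A toy worker whose tick count survives a pause/restart cycle.
struct TickWorker {
    ticks: u64,
}

impl Worker for TickWorker {
    fn run(&mut self) -> impl std::future::Future<Output = ()> + Send {
        async move {
            loop {
                tokio::time::sleep(Duration::from_millis(10)).await;
                self.ticks += 1;
            }
        }
    }
}

fn main() -> anyhow::Result<()> {
    let mut worker = PausableWorker::new(TickWorker { ticks: 0 });

    let rt = tokio::runtime::Runtime::new()?;
    worker.start(&rt)?;
    std::thread::sleep(Duration::from_millis(50));

    // Pre-fork: recover the worker state, then drop the runtime so that no
    // tokio threads survive into the forked child.
    rt.block_on(worker.pause())?;
    drop(rt);

    // ... fork() would happen here ...

    // Post-fork: build a fresh runtime and resume from the saved state.
    let rt = tokio::runtime::Runtime::new()?;
    worker.start(&rt)?;
    Ok(())
}
```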

@VianneyRuhlmann VianneyRuhlmann force-pushed the vianney/data-pipeline/add-threads-shutdown branch 2 times, most recently from b954af4 to 575db6c Compare May 21, 2025 08:57
@pr-commenter bot commented May 21, 2025

Benchmarks

Comparison

Benchmark execution time: 2025-06-10 13:28:35

Comparing candidate commit 87d96ad in PR branch vianney/data-pipeline/add-threads-shutdown with baseline commit 377c94a in branch main.

Found 2 performance improvements and 0 performance regressions! Performance is the same for 50 metrics, 2 unstable metrics.

scenario:normalization/normalize_service/normalize_service/[empty string]

  • 🟩 execution_time [-12.611µs; -12.549µs] or [-25.091%; -24.967%]
  • 🟩 throughput [+6628545.527op/s; +6655021.308op/s] or [+33.316%; +33.449%]

Candidate

Candidate benchmark details

Group 1

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
credit_card/is_card_number/ execution_time 3.893µs 3.911µs ± 0.003µs 3.910µs ± 0.002µs 3.912µs 3.915µs 3.917µs 3.918µs 0.20% -0.924 7.973 0.07% 0.000µs 1 200
credit_card/is_card_number/ throughput 255210124.167op/s 255711121.523op/s ± 177950.288op/s 255722631.453op/s ± 113462.168op/s 255823409.213op/s 255940698.155op/s 255977952.895op/s 256872778.357op/s 0.45% 0.943 8.099 0.07% 12582.986op/s 1 200
credit_card/is_card_number/ 3782-8224-6310-005 execution_time 76.740µs 78.530µs ± 0.707µs 78.463µs ± 0.491µs 79.002µs 79.704µs 80.142µs 80.563µs 2.68% 0.155 -0.330 0.90% 0.050µs 1 200
credit_card/is_card_number/ 3782-8224-6310-005 throughput 12412624.337op/s 12735070.792op/s ± 114570.595op/s 12744785.131op/s ± 79915.892op/s 12817793.341op/s 12918818.642op/s 12969861.506op/s 13031086.777op/s 2.25% -0.111 -0.345 0.90% 8101.364op/s 1 200
credit_card/is_card_number/ 378282246310005 execution_time 69.900µs 71.207µs ± 0.675µs 71.119µs ± 0.489µs 71.671µs 72.348µs 73.075µs 73.716µs 3.65% 0.649 0.359 0.95% 0.048µs 1 200
credit_card/is_card_number/ 378282246310005 throughput 13565532.925op/s 14044898.412op/s ± 132453.166op/s 14060954.200op/s ± 96319.195op/s 14145927.605op/s 14232842.215op/s 14269370.934op/s 14306167.612op/s 1.74% -0.595 0.229 0.94% 9365.853op/s 1 200
credit_card/is_card_number/37828224631 execution_time 3.896µs 3.911µs ± 0.002µs 3.911µs ± 0.001µs 3.912µs 3.916µs 3.917µs 3.918µs 0.18% -0.607 6.904 0.06% 0.000µs 1 200
credit_card/is_card_number/37828224631 throughput 255261344.774op/s 255690446.967op/s ± 160281.977op/s 255718730.209op/s ± 86114.797op/s 255788336.954op/s 255884313.060op/s 255940935.850op/s 256695860.297op/s 0.38% 0.623 7.000 0.06% 11333.647op/s 1 200
credit_card/is_card_number/378282246310005 execution_time 66.959µs 68.294µs ± 0.678µs 68.287µs ± 0.479µs 68.717µs 69.481µs 70.067µs 70.271µs 2.90% 0.393 -0.240 0.99% 0.048µs 1 200
credit_card/is_card_number/378282246310005 throughput 14230699.976op/s 14643916.617op/s ± 144873.327op/s 14644028.401op/s ± 103346.120op/s 14754225.858op/s 14858977.644op/s 14899571.139op/s 14934563.752op/s 1.98% -0.346 -0.305 0.99% 10244.091op/s 1 200
credit_card/is_card_number/37828224631000521389798 execution_time 52.142µs 52.191µs ± 0.028µs 52.185µs ± 0.016µs 52.206µs 52.247µs 52.267µs 52.288µs 0.20% 0.909 0.611 0.05% 0.002µs 1 200
credit_card/is_card_number/37828224631000521389798 throughput 19124955.587op/s 19160269.366op/s ± 10165.056op/s 19162544.890op/s ± 5937.333op/s 19167317.328op/s 19173291.126op/s 19175658.938op/s 19178483.659op/s 0.08% -0.906 0.604 0.05% 718.778op/s 1 200
credit_card/is_card_number/x371413321323331 execution_time 6.026µs 6.033µs ± 0.004µs 6.032µs ± 0.002µs 6.035µs 6.039µs 6.043µs 6.047µs 0.24% 0.718 0.344 0.06% 0.000µs 1 200
credit_card/is_card_number/x371413321323331 throughput 165383844.632op/s 165759096.407op/s ± 102898.887op/s 165776550.907op/s ± 65042.164op/s 165829049.116op/s 165900593.836op/s 165923432.802op/s 165956530.199op/s 0.11% -0.715 0.336 0.06% 7276.050op/s 1 200
credit_card/is_card_number_no_luhn/ execution_time 3.892µs 3.912µs ± 0.003µs 3.912µs ± 0.002µs 3.914µs 3.917µs 3.918µs 3.920µs 0.19% -1.456 10.591 0.07% 0.000µs 1 200
credit_card/is_card_number_no_luhn/ throughput 255116752.081op/s 255607611.634op/s ± 188621.654op/s 255602577.078op/s ± 111275.121op/s 255728194.036op/s 255870720.784op/s 255932268.824op/s 256925947.853op/s 0.52% 1.479 10.770 0.07% 13337.565op/s 1 200
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 execution_time 64.115µs 64.461µs ± 0.150µs 64.454µs ± 0.092µs 64.547µs 64.740µs 64.805µs 64.952µs 0.77% 0.363 0.008 0.23% 0.011µs 1 200
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 throughput 15395885.963op/s 15513284.149op/s ± 36063.394op/s 15515046.595op/s ± 22122.063op/s 15535447.497op/s 15568429.734op/s 15583133.229op/s 15597038.122op/s 0.53% -0.350 -0.007 0.23% 2550.067op/s 1 200
credit_card/is_card_number_no_luhn/ 378282246310005 execution_time 58.143µs 58.364µs ± 0.154µs 58.326µs ± 0.078µs 58.424µs 58.665µs 58.806µs 59.101µs 1.33% 1.474 3.049 0.26% 0.011µs 1 200
credit_card/is_card_number_no_luhn/ 378282246310005 throughput 16920222.430op/s 17134066.337op/s ± 44986.265op/s 17145020.206op/s ± 22813.285op/s 17163345.114op/s 17188340.214op/s 17198307.016op/s 17198868.314op/s 0.31% -1.452 2.940 0.26% 3181.009op/s 1 200
credit_card/is_card_number_no_luhn/37828224631 execution_time 3.892µs 3.910µs ± 0.002µs 3.910µs ± 0.001µs 3.912µs 3.914µs 3.916µs 3.917µs 0.16% -1.713 15.538 0.06% 0.000µs 1 200
credit_card/is_card_number_no_luhn/37828224631 throughput 255325565.059op/s 255733770.947op/s ± 155636.732op/s 255744435.608op/s ± 74727.711op/s 255815558.283op/s 255922602.761op/s 255958502.997op/s 256920345.951op/s 0.46% 1.740 15.763 0.06% 11005.179op/s 1 200
credit_card/is_card_number_no_luhn/378282246310005 execution_time 54.573µs 54.827µs ± 0.197µs 54.761µs ± 0.083µs 54.919µs 55.225µs 55.479µs 55.787µs 1.87% 1.768 3.814 0.36% 0.014µs 1 200
credit_card/is_card_number_no_luhn/378282246310005 throughput 17925349.441op/s 18239306.418op/s ± 65033.645op/s 18261265.118op/s ± 27586.000op/s 18282095.889op/s 18304458.051op/s 18317626.295op/s 18324242.973op/s 0.34% -1.739 3.661 0.36% 4598.573op/s 1 200
credit_card/is_card_number_no_luhn/37828224631000521389798 execution_time 52.138µs 52.205µs ± 0.036µs 52.201µs ± 0.023µs 52.223µs 52.280µs 52.305µs 52.338µs 0.26% 0.931 1.223 0.07% 0.003µs 1 200
credit_card/is_card_number_no_luhn/37828224631000521389798 throughput 19106734.298op/s 19155204.837op/s ± 13119.039op/s 19156611.234op/s ± 8483.463op/s 19165152.004op/s 19172772.076op/s 19176834.691op/s 19179767.264op/s 0.12% -0.926 1.210 0.07% 927.656op/s 1 200
credit_card/is_card_number_no_luhn/x371413321323331 execution_time 6.026µs 6.033µs ± 0.004µs 6.033µs ± 0.002µs 6.035µs 6.040µs 6.042µs 6.063µs 0.50% 2.155 12.450 0.07% 0.000µs 1 200
credit_card/is_card_number_no_luhn/x371413321323331 throughput 164932635.762op/s 165755620.083op/s ± 113936.547op/s 165760681.567op/s ± 67299.218op/s 165834945.569op/s 165898835.061op/s 165925027.166op/s 165947178.553op/s 0.11% -2.135 12.266 0.07% 8056.531op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
credit_card/is_card_number/ execution_time [3.910µs; 3.911µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number/ throughput [255686459.324op/s; 255735783.721op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number/ 3782-8224-6310-005 execution_time [78.432µs; 78.628µs] or [-0.125%; +0.125%] None None None
credit_card/is_card_number/ 3782-8224-6310-005 throughput [12719192.410op/s; 12750949.175op/s] or [-0.125%; +0.125%] None None None
credit_card/is_card_number/ 378282246310005 execution_time [71.113µs; 71.300µs] or [-0.131%; +0.131%] None None None
credit_card/is_card_number/ 378282246310005 throughput [14026541.677op/s; 14063255.147op/s] or [-0.131%; +0.131%] None None None
credit_card/is_card_number/37828224631 execution_time [3.911µs; 3.911µs] or [-0.009%; +0.009%] None None None
credit_card/is_card_number/37828224631 throughput [255668233.426op/s; 255712660.507op/s] or [-0.009%; +0.009%] None None None
credit_card/is_card_number/378282246310005 execution_time [68.200µs; 68.388µs] or [-0.138%; +0.138%] None None None
credit_card/is_card_number/378282246310005 throughput [14623838.567op/s; 14663994.666op/s] or [-0.137%; +0.137%] None None None
credit_card/is_card_number/37828224631000521389798 execution_time [52.188µs; 52.195µs] or [-0.007%; +0.007%] None None None
credit_card/is_card_number/37828224631000521389798 throughput [19158860.587op/s; 19161678.145op/s] or [-0.007%; +0.007%] None None None
credit_card/is_card_number/x371413321323331 execution_time [6.032µs; 6.033µs] or [-0.009%; +0.009%] None None None
credit_card/is_card_number/x371413321323331 throughput [165744835.610op/s; 165773357.203op/s] or [-0.009%; +0.009%] None None None
credit_card/is_card_number_no_luhn/ execution_time [3.912µs; 3.913µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/ throughput [255581470.487op/s; 255633752.782op/s] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 execution_time [64.440µs; 64.482µs] or [-0.032%; +0.032%] None None None
credit_card/is_card_number_no_luhn/ 3782-8224-6310-005 throughput [15508286.109op/s; 15518282.189op/s] or [-0.032%; +0.032%] None None None
credit_card/is_card_number_no_luhn/ 378282246310005 execution_time [58.342µs; 58.385µs] or [-0.037%; +0.037%] None None None
credit_card/is_card_number_no_luhn/ 378282246310005 throughput [17127831.673op/s; 17140301.001op/s] or [-0.036%; +0.036%] None None None
credit_card/is_card_number_no_luhn/37828224631 execution_time [3.910µs; 3.911µs] or [-0.008%; +0.008%] None None None
credit_card/is_card_number_no_luhn/37828224631 throughput [255712201.193op/s; 255755340.701op/s] or [-0.008%; +0.008%] None None None
credit_card/is_card_number_no_luhn/378282246310005 execution_time [54.800µs; 54.855µs] or [-0.050%; +0.050%] None None None
credit_card/is_card_number_no_luhn/378282246310005 throughput [18230293.380op/s; 18248319.456op/s] or [-0.049%; +0.049%] None None None
credit_card/is_card_number_no_luhn/37828224631000521389798 execution_time [52.200µs; 52.210µs] or [-0.009%; +0.009%] None None None
credit_card/is_card_number_no_luhn/37828224631000521389798 throughput [19153386.665op/s; 19157023.010op/s] or [-0.009%; +0.009%] None None None
credit_card/is_card_number_no_luhn/x371413321323331 execution_time [6.032µs; 6.034µs] or [-0.010%; +0.010%] None None None
credit_card/is_card_number_no_luhn/x371413321323331 throughput [165739829.574op/s; 165771410.593op/s] or [-0.010%; +0.010%] None None None

Group 2

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
sql/obfuscate_sql_string execution_time 90.149µs 90.550µs ± 0.174µs 90.565µs ± 0.088µs 90.636µs 90.760µs 90.945µs 91.965µs 1.55% 2.649 20.690 0.19% 0.012µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
sql/obfuscate_sql_string execution_time [90.526µs; 90.574µs] or [-0.027%; +0.027%] None None None

Group 3

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
benching string interning on wordpress profile execution_time 150.297µs 150.979µs ± 0.748µs 150.915µs ± 0.143µs 151.037µs 151.460µs 151.834µs 160.819µs 6.56% 11.491 147.942 0.49% 0.053µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
benching string interning on wordpress profile execution_time [150.876µs; 151.083µs] or [-0.069%; +0.069%] None None None

Group 4

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
concentrator/add_spans_to_concentrator execution_time 8.113ms 8.130ms ± 0.010ms 8.128ms ± 0.005ms 8.135ms 8.151ms 8.158ms 8.185ms 0.69% 1.641 4.806 0.13% 0.001ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
concentrator/add_spans_to_concentrator execution_time [8.129ms; 8.132ms] or [-0.018%; +0.018%] None None None

Group 5

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
redis/obfuscate_redis_string execution_time 35.087µs 35.686µs ± 1.002µs 35.222µs ± 0.059µs 35.335µs 37.821µs 37.848µs 39.163µs 11.19% 1.732 1.178 2.80% 0.071µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
redis/obfuscate_redis_string execution_time [35.547µs; 35.825µs] or [-0.389%; +0.389%] None None None

Group 6

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
benching deserializing traces from msgpack to their internal representation execution_time 72.038ms 72.284ms ± 0.175ms 72.255ms ± 0.071ms 72.325ms 72.582ms 73.003ms 73.419ms 1.61% 3.160 14.507 0.24% 0.012ms 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
benching deserializing traces from msgpack to their internal representation execution_time [72.260ms; 72.308ms] or [-0.034%; +0.034%] None None None

Group 7

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
tags/replace_trace_tags execution_time 2.319µs 2.383µs ± 0.020µs 2.382µs ± 0.006µs 2.390µs 2.427µs 2.432µs 2.437µs 2.33% -0.429 2.990 0.84% 0.001µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
tags/replace_trace_tags execution_time [2.380µs; 2.386µs] or [-0.116%; +0.116%] None None None

Group 8

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
two way interface execution_time 18.233µs 26.443µs ± 10.063µs 18.391µs ± 0.133µs 34.874µs 44.566µs 45.605µs 68.753µs 273.84% 1.051 1.239 37.96% 0.712µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
two way interface execution_time [25.049µs; 27.838µs] or [-5.274%; +5.274%] None None None

Group 9

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
ip_address/quantize_peer_ip_address_benchmark execution_time 5.011µs 5.089µs ± 0.047µs 5.078µs ± 0.024µs 5.108µs 5.196µs 5.203µs 5.205µs 2.50% 0.905 0.144 0.92% 0.003µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
ip_address/quantize_peer_ip_address_benchmark execution_time [5.082µs; 5.095µs] or [-0.128%; +0.128%] None None None

Group 10

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... execution_time 532.251µs 532.923µs ± 0.470µs 532.889µs ± 0.224µs 533.091µs 533.656µs 534.010µs 536.770µs 0.73% 3.214 21.959 0.09% 0.033µs 1 200
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... throughput 1862995.120op/s 1876443.622op/s ± 1650.349op/s 1876563.827op/s ± 788.173op/s 1877464.569op/s 1878406.930op/s 1878751.752op/s 1878814.062op/s 0.12% -3.178 21.582 0.09% 116.697op/s 1 200
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて execution_time 379.565µs 380.335µs ± 0.348µs 380.270µs ± 0.179µs 380.498µs 381.016µs 381.362µs 381.705µs 0.38% 0.952 1.407 0.09% 0.025µs 1 200
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて throughput 2619823.613op/s 2629262.009op/s ± 2403.873op/s 2629709.417op/s ± 1235.446op/s 2630764.659op/s 2632241.390op/s 2633927.885op/s 2634592.149op/s 0.19% -0.945 1.389 0.09% 169.979op/s 1 200
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters execution_time 188.994µs 189.594µs ± 0.225µs 189.589µs ± 0.139µs 189.742µs 189.969µs 190.134µs 190.300µs 0.37% 0.149 0.094 0.12% 0.016µs 1 200
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters throughput 5254862.870op/s 5274422.267op/s ± 6271.859op/s 5274566.904op/s ± 3873.553op/s 5277895.408op/s 5284518.439op/s 5288270.816op/s 5291178.290op/s 0.31% -0.141 0.091 0.12% 443.487op/s 1 200
normalization/normalize_service/normalize_service/[empty string] execution_time 37.498µs 37.682µs ± 0.070µs 37.673µs ± 0.046µs 37.728µs 37.805µs 37.838µs 37.890µs 0.58% 0.223 -0.114 0.18% 0.005µs 1 200
normalization/normalize_service/normalize_service/[empty string] throughput 26391858.190op/s 26537621.066op/s ± 48973.559op/s 26544263.580op/s ± 32534.173op/s 26571078.922op/s 26614426.801op/s 26639137.052op/s 26667870.140op/s 0.47% -0.213 -0.118 0.18% 3462.954op/s 1 200
normalization/normalize_service/normalize_service/test_ASCII execution_time 44.933µs 45.174µs ± 0.142µs 45.177µs ± 0.129µs 45.291µs 45.378µs 45.466µs 45.512µs 0.74% 0.046 -1.122 0.31% 0.010µs 1 200
normalization/normalize_service/normalize_service/test_ASCII throughput 21972361.211op/s 22136772.666op/s ± 69513.075op/s 22135159.996op/s ± 63030.136op/s 22202229.261op/s 22239144.564op/s 22249571.347op/s 22255495.562op/s 0.54% -0.038 -1.128 0.31% 4915.317op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... execution_time [532.858µs; 532.989µs] or [-0.012%; +0.012%] None None None
normalization/normalize_service/normalize_service/A0000000000000000000000000000000000000000000000000... throughput [1876214.900op/s; 1876672.345op/s] or [-0.012%; +0.012%] None None None
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて execution_time [380.287µs; 380.383µs] or [-0.013%; +0.013%] None None None
normalization/normalize_service/normalize_service/Data🐨dog🐶 繋がっ⛰てて throughput [2628928.855op/s; 2629595.162op/s] or [-0.013%; +0.013%] None None None
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters execution_time [189.563µs; 189.626µs] or [-0.016%; +0.016%] None None None
normalization/normalize_service/normalize_service/Test Conversion 0f Weird !@#$%^&**() Characters throughput [5273553.047op/s; 5275291.486op/s] or [-0.016%; +0.016%] None None None
normalization/normalize_service/normalize_service/[empty string] execution_time [37.673µs; 37.692µs] or [-0.026%; +0.026%] None None None
normalization/normalize_service/normalize_service/[empty string] throughput [26530833.802op/s; 26544408.330op/s] or [-0.026%; +0.026%] None None None
normalization/normalize_service/normalize_service/test_ASCII execution_time [45.154µs; 45.194µs] or [-0.044%; +0.044%] None None None
normalization/normalize_service/normalize_service/test_ASCII throughput [22127138.822op/s; 22146406.510op/s] or [-0.044%; +0.044%] None None None

Group 11

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_trace/test_trace execution_time 245.096ns 256.547ns ± 13.747ns 250.204ns ± 3.122ns 261.255ns 289.185ns 294.041ns 296.984ns 18.70% 1.535 1.093 5.35% 0.972ns 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_trace/test_trace execution_time [254.642ns; 258.453ns] or [-0.743%; +0.743%] None None None

Group 12

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... execution_time 204.221µs 204.704µs ± 0.277µs 204.649µs ± 0.152µs 204.813µs 205.222µs 205.753µs 205.890µs 0.61% 1.404 2.961 0.14% 0.020µs 1 200
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... throughput 4856960.543op/s 4885108.293op/s ± 6607.734op/s 4886419.150op/s ± 3622.885op/s 4889573.444op/s 4893662.953op/s 4895801.130op/s 4896657.370op/s 0.21% -1.392 2.912 0.13% 467.237op/s 1 200
normalization/normalize_name/normalize_name/bad-name execution_time 18.582µs 18.681µs ± 0.063µs 18.675µs ± 0.042µs 18.716µs 18.788µs 18.890µs 18.924µs 1.33% 1.126 1.822 0.34% 0.004µs 1 200
normalization/normalize_name/normalize_name/bad-name throughput 52842240.833op/s 53529913.108op/s ± 179829.773op/s 53547231.842op/s ± 121894.357op/s 53671542.904op/s 53765039.264op/s 53804856.247op/s 53814857.735op/s 0.50% -1.101 1.732 0.34% 12715.885op/s 1 200
normalization/normalize_name/normalize_name/good execution_time 10.725µs 10.798µs ± 0.032µs 10.796µs ± 0.020µs 10.817µs 10.851µs 10.874µs 10.924µs 1.18% 0.469 0.618 0.30% 0.002µs 1 200
normalization/normalize_name/normalize_name/good throughput 91544246.619op/s 92612115.160op/s ± 277612.919op/s 92627324.572op/s ± 174807.469op/s 92796478.656op/s 93014626.408op/s 93152430.634op/s 93242880.621op/s 0.66% -0.448 0.570 0.30% 19630.198op/s 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... execution_time [204.666µs; 204.743µs] or [-0.019%; +0.019%] None None None
normalization/normalize_name/normalize_name/Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Long-.Too-Lo... throughput [4884192.524op/s; 4886024.061op/s] or [-0.019%; +0.019%] None None None
normalization/normalize_name/normalize_name/bad-name execution_time [18.673µs; 18.690µs] or [-0.047%; +0.047%] None None None
normalization/normalize_name/normalize_name/bad-name throughput [53504990.431op/s; 53554835.785op/s] or [-0.047%; +0.047%] None None None
normalization/normalize_name/normalize_name/good execution_time [10.793µs; 10.802µs] or [-0.042%; +0.042%] None None None
normalization/normalize_name/normalize_name/good throughput [92573640.680op/s; 92650589.641op/s] or [-0.042%; +0.042%] None None None

Group 13

cpu_model git_commit_sha git_commit_date git_branch
Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz 87d96ad 1749561425 vianney/data-pipeline/add-threads-shutdown
scenario metric min mean ± sd median ± mad p75 p95 p99 max peak_to_median_ratio skewness kurtosis cv sem runs sample_size
write only interface execution_time 1.266µs 3.270µs ± 1.442µs 3.045µs ± 0.030µs 3.077µs 3.673µs 14.245µs 15.039µs 393.87% 7.320 54.854 43.99% 0.102µs 1 200
scenario metric 95% CI mean Shapiro-Wilk pvalue Ljung-Box pvalue (lag=1) Dip test pvalue
write only interface execution_time [3.071µs; 3.470µs] or [-6.112%; +6.112%] None None None

Baseline

Omitted due to size.

@VianneyRuhlmann VianneyRuhlmann force-pushed the vianney/data-pipeline/add-threads-shutdown branch from 3782462 to c802b61 Compare May 27, 2025 13:46
@VianneyRuhlmann VianneyRuhlmann force-pushed the vianney/data-pipeline/add-threads-shutdown branch from c802b61 to efc38ea Compare May 27, 2025 14:03
@codecov-commenter

Codecov Report

Attention: Patch coverage is 65.95174% with 127 lines in your changes missing coverage. Please review.

Project coverage is 70.97%. Comparing base (87dbb16) to head (efc38ea).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1056      +/-   ##
==========================================
- Coverage   71.05%   70.97%   -0.09%     
==========================================
  Files         323      324       +1     
  Lines       49503    49723     +220     
==========================================
+ Hits        35176    35291     +115     
- Misses      14327    14432     +105     
Components Coverage Δ
datadog-crashtracker 42.02% <ø> (ø)
datadog-crashtracker-ffi 6.11% <ø> (ø)
datadog-alloc 98.73% <ø> (ø)
data-pipeline 89.76% <70.46%> (-1.05%) ⬇️
data-pipeline-ffi 88.94% <0.00%> (-0.33%) ⬇️
ddcommon 78.28% <ø> (ø)
ddcommon-ffi 66.37% <ø> (ø)
ddtelemetry 59.89% <35.41%> (-0.68%) ⬇️
ddtelemetry-ffi 21.32% <ø> (ø)
dogstatsd-client 83.26% <ø> (ø)
datadog-ipc 82.58% <ø> (ø)
datadog-profiling 77.49% <ø> (ø)
datadog-profiling-ffi 62.12% <ø> (ø)
datadog-sidecar 42.63% <ø> (ø)
datdog-sidecar-ffi 12.84% <ø> (ø)
spawn-worker 55.35% <ø> (ø)
tinybytes 90.96% <ø> (ø)
datadog-trace-normalization 98.24% <ø> (ø)
datadog-trace-obfuscation 94.16% <ø> (ø)
datadog-trace-protobuf 77.10% <ø> (ø)
datadog-trace-utils 89.28% <ø> (ø)

@codecov-commenter commented May 27, 2025

Codecov Report

Attention: Patch coverage is 67.20000% with 123 lines in your changes missing coverage. Please review.

Project coverage is 71.03%. Comparing base (377c94a) to head (87d96ad).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1056      +/-   ##
==========================================
- Coverage   71.09%   71.03%   -0.06%     
==========================================
  Files         334      335       +1     
  Lines       50889    51081     +192     
==========================================
+ Hits        36178    36287     +109     
- Misses      14711    14794      +83     
Components Coverage Δ
datadog-crashtracker 44.36% <ø> (-0.06%) ⬇️
datadog-crashtracker-ffi 6.03% <ø> (ø)
datadog-alloc 98.73% <ø> (ø)
data-pipeline 89.35% <68.88%> (-0.84%) ⬇️
data-pipeline-ffi 88.94% <0.00%> (-0.33%) ⬇️
ddcommon 79.07% <ø> (ø)
ddcommon-ffi 68.64% <ø> (ø)
ddtelemetry 60.17% <54.54%> (-0.40%) ⬇️
ddtelemetry-ffi 21.32% <ø> (ø)
dogstatsd-client 83.26% <ø> (ø)
datadog-ipc 82.58% <ø> (ø)
datadog-profiling 77.17% <ø> (ø)
datadog-profiling-ffi 62.12% <ø> (ø)
datadog-sidecar 42.32% <ø> (ø)
datdog-sidecar-ffi 10.36% <ø> (ø)
spawn-worker 55.35% <ø> (ø)
tinybytes 90.96% <ø> (ø)
datadog-trace-normalization 98.24% <ø> (ø)
datadog-trace-obfuscation 94.16% <ø> (ø)
datadog-trace-protobuf 77.10% <ø> (ø)
datadog-trace-utils 89.23% <ø> (ø)
datadog-log 76.41% <ø> (ø)

@VianneyRuhlmann VianneyRuhlmann marked this pull request as ready for review May 27, 2025 16:07
@VianneyRuhlmann VianneyRuhlmann requested review from a team as code owners May 27, 2025 16:07
@VianneyRuhlmann VianneyRuhlmann changed the title Vianney/data pipeline/add threads shutdown [APMSP-1874] Add fork support by dropping runtime May 28, 2025
@VianneyRuhlmann VianneyRuhlmann self-assigned this May 28, 2025
Comment on lines 133 to 168
#[allow(dead_code)]
#[derive(Debug)]
struct TelemetryWorker<'a> {
    flavor: &'a TelemetryWorkerFlavor,
    config: &'a Config,
    mailbox: &'a mpsc::Receiver<TelemetryActions>,
    cancellation_token: &'a CancellationToken,
    seq_id: &'a AtomicU64,
    runtime_id: &'a String,
    deadlines: &'a scheduler::Scheduler<LifecycleAction>,
    data: &'a TelemetryWorkerData,
}
let Self {
    flavor,
    config,
    mailbox,
    cancellation_token,
    seq_id,
    runtime_id,
    client: _,
    deadlines,
    data,
} = self;
Debug::fmt(
    &TelemetryWorker {
        flavor,
        config,
        mailbox,
        cancellation_token,
        seq_id,
        runtime_id,
        deadlines,
        data,
    },
    f,
)
Contributor:
nit: this could be simpler
https://doc.rust-lang.org/std/fmt/struct.Formatter.html#method.debug_struct

Suggested change
f.debug_struct("TelemetryWorker")
    .field("flavor", self.flavor)
    .field(...)
    .finish()
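For reference, a self-contained sketch of the std debug_struct builder the nit points to; Point is invented for illustration, and finish_non_exhaustive is one way to signal a skipped field such as client in the code above.

```rust
use std::fmt;

struct Point {
    x: i32,
    y: i32,
}

impl fmt::Debug for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Point")
            .field("x", &self.x)
            .field("y", &self.y)
            // Prints a trailing `..` to mark omitted fields, e.g. a
            // non-Debug field like `client`.
            .finish_non_exhaustive()
    }
}

fn main() {
    println!("{:?}", Point { x: 1, y: 2 }); // Point { x: 1, y: 2, .. }
}
```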

// SPDX-License-Identifier: Apache-2.0

//! Defines a pausable worker to be able to stop background processes before forks
use anyhow::{anyhow, Result};
Contributor:
Shouldn't we use the internal errors?

Contributor:

+1

fn run(&mut self) -> impl std::future::Future<Output = ()> + Send;
}

impl Worker for StatsExporter {
Contributor:

These impls should probably live with the respective workers and not here in the trait definition?

}
}

/// A pausable worker which can be paused and restarded on forks.
Contributor:

Suggested change
/// A pausable worker which can be paused and restarded on forks.
/// A pausable worker which can be paused and restarted on forks.

// SPDX-License-Identifier: Apache-2.0

//! Defines a pausable worker to be able to stop background processes before forks
use anyhow::{anyhow, Result};
Contributor:

+1

Self::Paused { worker }
}

/// Start the worker on the given runtime.
Contributor:

For the public functions, should we have more thorough rustdoc comments with examples? Or perhaps just a more thorough rustdoc comment for the module or enum that explains why this is necessary and when it should be used? If someone outside of our team needs to implement a worker in the future, do they have enough information to do so independently?

pub async fn pause(&mut self) -> Result<()> {
    match self {
        PausableWorker::Running { handle, stop_token } => {
            stop_token.cancel();
Contributor:

Is there a possible race condition here? If the task is already shutting down and pause is called, can we wind up in an InvalidState when we don't want to?

Contributor Author:

I think we're safe since pause takes a mutable reference. handle.await() only returns an error if the tokio task has been aborted.

@ekump (Contributor) commented Jun 4, 2025:

> I think we're safe since pause takes a mutable reference

Yes, which is why I wasn't concerned with pause() being called multiple times. I was more concerned with a potential race condition when pause() is called at the same time that a worker is shutting down outside of the pause workflow. If the task shuts down gracefully in this situation, we don't want to end up in InvalidState because of timing.

> handle.await() only returns an error if the tokio task has been aborted.

Looking at this some more, I agree that we are probably OK. handle.await() is going to return Ok in the scenario I described as long as it's a graceful shutdown. We'll be in an InvalidState after a panic or abort...but that is what we want anyway.
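For reference, a tiny self-contained demo of that JoinHandle behavior (illustrative, not from the PR): awaiting the handle returns Ok when the task completes gracefully, and Err only after a panic or abort.

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Graceful completion: awaiting the handle yields Ok with the output.
    let graceful = tokio::spawn(async { 42 });
    assert_eq!(graceful.await.unwrap(), 42);

    // Abort: awaiting the handle yields Err, flagged as a cancellation.
    let aborted = tokio::spawn(async {
        tokio::time::sleep(Duration::from_secs(60)).await;
    });
    aborted.abort();
    assert!(aborted.await.unwrap_err().is_cancelled());
}
```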

InvalidState,
}

impl<T: Worker + Send + Sync + 'static> PausableWorker<T> {
Contributor:

Do you need to impl Drop? What happens if PausableWorker goes out of scope? Are tokio tasks going to continue to run in the background? Should you be canceling the stop_token?

Contributor Author:

The task is going to continue in the background until the runtime is dropped. I think it is fine to not cancel the token. Either way this should be described in the doc.

Contributor:

I don't have a very strong opinion, but why is it OK for the task to continue? Is it not likely that workers can go out of scope while the runtime sticks around?

Contributor Author:

I don't have a strong opinion either. The way I see it the PausableWorker is more of a handle to the worker which is running in the runtime.

if let Self::Running { .. } = self {
    Ok(())
} else if let Self::Paused { mut worker } = std::mem::replace(self, Self::InvalidState) {
    // Worker is temporarly in an invalid state, but since this block is failsafe it will
Contributor:

Suggested change
// Worker is temporarly in an invalid state, but since this block is failsafe it will
// Worker is temporarily in an invalid state, but since this block is failsafe it will

match self {
    PausableWorker::Running { handle, stop_token } => {
        stop_token.cancel();
        if let Ok(worker) = handle.await {
Contributor:

What happens if we have workers that don't frequently yield, or wind up deadlocked? Are we going to await forever? Or, if not forever, long enough where it causes problems?

Contributor Author:

Good catch, I'll add a warning in the doc and a timeout in stop_worker.

Contributor Author:

Actually, tokio's timeout won't work, as it is only checked when the task yields, so we'd have the same issue. I think we can keep the warning for now and add a timeout as a follow-up item.

Member:

For what it's worth, tokio supports "cancellation tokens", which is how, in profiling, we stop tokio when someone hits e.g. Ctrl+C while the profiler is uploading a profile over a slow connection.

Contributor Author:

The pausable worker uses a cancellation token internally but we still have to wait for the worker to yield. I don't think there's a way to force the worker to yield earlier, though.

Contributor:

> I don't think there's a way to force the worker to yield earlier, though.

I think it's possible, but it would require a big refactor and add significant complexity to our workers. I don't think it's worth pursuing until we see evidence that this solution isn't sufficient.

I think the only real problem is high-frequency forking apps. We need to support that situation. I guess the only way to do that is to ensure the workers are as efficient as possible and yield appropriately? I don't think there is anything that can practically be done in pause().
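As a hedged sketch of what "yield appropriately" can look like (invented for illustration, not this PR's worker code): a loop that reaches an await point on every iteration, so a cancellation token is observed promptly.

```rust
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;

struct Event;

fn handle(_event: Event) {
    // Keep per-event work short and non-blocking; long synchronous work
    // here would delay both yielding and cancellation.
}

async fn run_loop(mut rx: mpsc::Receiver<Event>, stop: CancellationToken) {
    loop {
        tokio::select! {
            // Every iteration awaits, so a pending cancellation is seen
            // at the next loop turn rather than after a long computation.
            _ = stop.cancelled() => break,
            maybe_event = rx.recv() => match maybe_event {
                Some(event) => handle(event),
                None => break, // all senders dropped; shut down
            },
        }
    }
}
```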

@ekump (Contributor) left a comment:

LGTM on the approach. Just some suggested doc edits, a request for more docs, and a suggestion about errors.

@VianneyRuhlmann VianneyRuhlmann requested a review from a team as a code owner June 6, 2025 14:10
@github-actions github-actions bot added the common label Jun 6, 2025
@VianneyRuhlmann VianneyRuhlmann force-pushed the vianney/data-pipeline/add-threads-shutdown branch from 93c9a63 to da6dee5 Compare June 6, 2025 15:32
@VianneyRuhlmann VianneyRuhlmann force-pushed the vianney/data-pipeline/add-threads-shutdown branch from 3979219 to 17ba3b8 Compare June 10, 2025 08:18
@r1viollet (Contributor) commented Jun 10, 2025

Artifact Size Benchmark Report

aarch64-alpine-linux-musl
Artifact Baseline Commit Change
/aarch64-alpine-linux-musl/lib/libdatadog_profiling.so 9.16 MB 9.16 MB +.09% (+8.78 KB) 🔍
/aarch64-alpine-linux-musl/lib/libdatadog_profiling.so.debug 22.03 MB 22.02 MB -.03% (-8.59 KB) 💪
/aarch64-alpine-linux-musl/lib/libdatadog_profiling.a 70.01 MB 69.99 MB -.02% (-20.95 KB) 💪
aarch64-unknown-linux-gnu
Artifact Baseline Commit Change
/aarch64-unknown-linux-gnu/lib/libdatadog_profiling.so.debug 25.91 MB 25.91 MB -.02% (-5.63 KB) 💪
/aarch64-unknown-linux-gnu/lib/libdatadog_profiling.a 81.74 MB 81.71 MB -.04% (-33.80 KB) 💪
/aarch64-unknown-linux-gnu/lib/libdatadog_profiling.so 9.08 MB 9.08 MB +.03% (+3.58 KB) 🔍
libdatadog-x64-windows
Artifact Baseline Commit Change
/libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.dll 16.03 MB 16.02 MB -.08% (-13.50 KB) 💪
/libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.lib 62.65 KB 62.65 KB 0% (0 B) 👌
/libdatadog-x64-windows/debug/dynamic/datadog_profiling_ffi.pdb 109.04 MB 109.18 MB +.12% (+144.00 KB) 🔍
/libdatadog-x64-windows/debug/static/datadog_profiling_ffi.lib 578.02 MB 583.53 MB +.95% (+5.51 MB) 🔍
/libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.dll 5.00 MB 5.01 MB +.24% (+12.50 KB) 🔍
/libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.lib 62.65 KB 62.65 KB 0% (0 B) 👌
/libdatadog-x64-windows/release/dynamic/datadog_profiling_ffi.pdb 15.53 MB 15.54 MB +.05% (+8.00 KB) 🔍
/libdatadog-x64-windows/release/static/datadog_profiling_ffi.lib 28.56 MB 28.59 MB +.08% (+26.05 KB) 🔍
libdatadog-x86-windows
Artifact Baseline Commit Change
/libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.dll 13.59 MB 13.58 MB -.07% (-11.00 KB) 💪
/libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.lib 63.60 KB 63.60 KB 0% (0 B) 👌
/libdatadog-x86-windows/debug/dynamic/datadog_profiling_ffi.pdb 110.62 MB 110.96 MB +.30% (+344.00 KB) 🔍
/libdatadog-x86-windows/debug/static/datadog_profiling_ffi.lib 569.21 MB 575.01 MB +1.02% (+5.80 MB) ⚠️
/libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.dll 3.78 MB 3.78 MB +.07% (+3.00 KB) 🔍
/libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.lib 63.60 KB 63.60 KB 0% (0 B) 👌
/libdatadog-x86-windows/release/dynamic/datadog_profiling_ffi.pdb 16.51 MB 16.51 MB +.04% (+8.00 KB) 🔍
/libdatadog-x86-windows/release/static/datadog_profiling_ffi.lib 26.78 MB 26.79 MB +.05% (+15.82 KB) 🔍
x86_64-alpine-linux-musl
Artifact Baseline Commit Change
/x86_64-alpine-linux-musl/lib/libdatadog_profiling.a 62.73 MB 62.72 MB -.01% (-10.08 KB) 💪
/x86_64-alpine-linux-musl/lib/libdatadog_profiling.so 9.78 MB 9.81 MB +.27% (+27.25 KB) 🔍
/x86_64-alpine-linux-musl/lib/libdatadog_profiling.so.debug 20.87 MB 20.86 MB -.02% (-6.39 KB) 💪
x86_64-unknown-linux-gnu
Artifact Baseline Commit Change
/x86_64-unknown-linux-gnu/lib/libdatadog_profiling.a 76.77 MB 76.75 MB -.01% (-15.31 KB) 💪
/x86_64-unknown-linux-gnu/lib/libdatadog_profiling.so 9.69 MB 9.70 MB +.10% (+10.40 KB) 🔍
/x86_64-unknown-linux-gnu/lib/libdatadog_profiling.so.debug 23.86 MB 23.86 MB +0% (+744 B) 👌

@VianneyRuhlmann VianneyRuhlmann force-pushed the vianney/data-pipeline/add-threads-shutdown branch from d4a857b to a85c21c Compare June 10, 2025 12:42
@VianneyRuhlmann VianneyRuhlmann merged commit f356aec into main Jun 10, 2025
36 checks passed
@VianneyRuhlmann VianneyRuhlmann deleted the vianney/data-pipeline/add-threads-shutdown branch June 10, 2025 13:49