Conversation

@asimmahmood1
Contributor

@asimmahmood1 asimmahmood1 commented Sep 26, 2025

Description

Follow-up to PR 13130: add support for sub-aggregations, and clean up the code. `sub` will always be non-null; when there is no sub-aggregation it is set to `LeafBucketCollector.NO_OP_COLLECTOR`, which handles the no-op case.
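
For context, a minimal sketch of the invariant this relies on. The class name, field, and skip-list plumbing here are hypothetical; `LeafBucketCollector.NO_OP_COLLECTOR` and the `collect(int doc, long owningBucketOrd)` signature are the actual OpenSearch API:

```java
import java.io.IOException;

import org.apache.lucene.index.SortedNumericDocValues;
import org.opensearch.search.aggregations.LeafBucketCollector;

/**
 * Sketch: the leaf collector is always constructed with a non-null
 * sub-collector, so the hot collect() path needs no null checks.
 */
class SkiplistLeafCollectorSketch extends LeafBucketCollector {

    private final SortedNumericDocValues values; // field values to bucket (assumed)
    private final LeafBucketCollector sub;       // never null by construction

    SkiplistLeafCollectorSketch(SortedNumericDocValues values, LeafBucketCollector sub) {
        this.values = values;
        // Callers substitute the shared no-op instance when there are no sub-aggregations:
        this.sub = sub != null ? sub : LeafBucketCollector.NO_OP_COLLECTOR;
    }

    @Override
    public void collect(int doc, long owningBucketOrd) throws IOException {
        if (values.advanceExact(doc)) {
            // Real code derives the bucket ordinal from the doc value (via the skip list);
            // here we simply forward the owning ordinal to the sub-collector.
            sub.collect(doc, owningBucketOrd);
        }
    }
}
```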

Related Issues

Updates [#19384]
Closes [#17283]

Check List

  • Functionality includes testing.
  • [n/a] API changes companion pull request created, if applicable.
  • [TBD] Public documentation issue/PR created, if applicable.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@github-actions
Contributor

Hello!
We have added a performance benchmark workflow that runs by adding a comment on the PR.
Please refer https://github.com/opensearch-project/OpenSearch/blob/main/PERFORMANCE_BENCHMARKS.md on how to run benchmarks on pull requests.

@asimmahmood1 asimmahmood1 moved this from Todo to In Progress in Performance Roadmap Sep 26, 2025
@github-actions
Contributor

❌ Gradle check result for f49d176: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@asimmahmood1 asimmahmood1 changed the title from "Add sub aggregation support for DateHistogramSkiplistLeafCollector" to "Add sub aggregation support for histogram aggregation using skiplist" Sep 26, 2025
@github-actions
Contributor

❌ Gradle check result for 91144b1: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Contributor

@jainankitk jainankitk left a comment


Minor comment. Can you also merge from main and resolve the CHANGELOG conflict?

@asimmahmood1
Contributor Author

asimmahmood1 commented Sep 30, 2025

Manually tested the usage stats:



            "debug": {
              "total_buckets": 4,
              "single_valued_collectors_used": 0,
              "multi_valued_collectors_used": 0,
              "skip_list_collectors_used": 8,
              "no_op_collectors_used": 0
       }

big5_histogram_with_filter_profile.json
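
For reproducibility: the `debug` section above is emitted per aggregator when the search request sets `"profile": true`. A minimal request body (same index and field as the query in the next comment) would be:

```json
{
  "size": 0,
  "profile": true,
  "aggs": {
    "by_hour": {
      "date_histogram": { "field": "@timestamp", "calendar_interval": "hour" }
    }
  }
}
```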

@asimmahmood1
Contributor Author

Compared the big5 response with a filter: no difference in the computed buckets.

curl -X POST "http://localhost:9200/big5/_search" -H "Content-Type: application/json" -d '{
      "size": 0,
      "query": {
        "bool": {
          "must": [
            {
              "term": {
                "process.name": "systemd"
              }
            }
          ]
        }
      },
      "aggs": {
        "by_hour": {
          "date_histogram": {
            "field": "@timestamp",
            "calendar_interval": "hour"
          }
        }
      }
    }'
[ec2-user@ip-172-31-61-197 ~]$ diff big5_histogram_with_filter_baseline.json big5_histogram_with_filter.json
2c2
<   "took": 16,
---
>   "took": 12,

@asimmahmood1
Contributor Author

Testing debug with sub stats:
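
For reference, a profile request with a nested stats aggregation produces a `children` entry like the `size_stats` aggregator below; the `"field": "size"` target is a guess based on the sub-aggregation's name:

```json
"aggs": {
  "by_hour": {
    "date_histogram": { "field": "@timestamp", "calendar_interval": "hour" },
    "aggs": {
      "size_stats": { "stats": { "field": "size" } }
    }
  }
}
```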

"aggregations": [
          {
            "type": "DateHistogramAggregator",
            "description": "by_hour",
            "time_in_nanos": 68514415,
            "max_slice_time_in_nanos": 68514415,
            "min_slice_time_in_nanos": 31509532,
            "avg_slice_time_in_nanos": 50011973,
            "breakdown": {
              "min_build_leaf_collector": 211585,
              "build_aggregation_count": 2,
              "post_collection": 41562891,
              "max_collect_count": 146490,
              "initialize_count": 2,
              "reduce_count": 0,
              "avg_collect": 48821393,
              "max_build_aggregation": 118013,
              "avg_collect_count": 82842,
              "max_build_leaf_collector": 1912502,
              "min_build_leaf_collector_count": 1,
              "build_aggregation": 41661680,
              "min_initialize": 15066,
              "max_reduce": 0,
              "build_leaf_collector_count": 9,
              "avg_reduce": 0,
              "min_collect_count": 19194,
              "avg_build_leaf_collector_count": 4,
              "avg_build_leaf_collector": 1062043,
              "max_collect": 68181447,
              "reduce": 0,
              "avg_build_aggregation": 108640,
              "min_post_collection": 2612,
              "max_initialize": 18674,
              "max_post_collection": 3442,
              "collect_count": 165684,
              "avg_post_collection": 3027,
              "avg_initialize": 16870,
              "post_collection_count": 2,
              "build_leaf_collector": 1965228,
              "min_collect": 29461339,
              "min_build_aggregation": 99267,
              "initialize": 744818,
              "max_build_leaf_collector_count": 8,
              "min_reduce": 0,
              "collect": 68181447
            },
            "debug": {
              "total_buckets": 4,
              "single_valued_collectors_used": 0,
              "multi_valued_collectors_used": 0,
              "skip_list_collectors_used": 8,
              "no_op_collectors_used": 0
            },
            "children": [
              {
                "type": "StatsAggregator",
                "description": "size_stats",
                "time_in_nanos": 36491270,
                "max_slice_time_in_nanos": 36491270,
                "min_slice_time_in_nanos": 17498759,
                "avg_slice_time_in_nanos": 26995014,
                "breakdown": {
                  "min_build_leaf_collector": 38616,
                  "build_aggregation_count": 2,
                  "post_collection": 41561028,
                  "max_collect_count": 146490,
                  "initialize_count": 2,
                  "reduce_count": 0,
                  "avg_collect": 26832479,
                  "max_build_aggregation": 12686,
                  "avg_collect_count": 82842,
                  "max_build_leaf_collector": 258201,
                  "min_build_leaf_collector_count": 1,
                  "build_aggregation": 41580379,
                  "min_initialize": 1138,
                  "max_reduce": 0,
                  "build_leaf_collector_count": 9,
                  "avg_reduce": 0,
                  "min_collect_count": 19194,
                  "avg_build_leaf_collector_count": 4,
                  "avg_build_leaf_collector": 148408,
                  "max_collect": 36437256,
                  "reduce": 0,
                  "avg_build_aggregation": 11830,
                  "min_post_collection": 742,
                  "max_initialize": 1497,
                  "max_post_collection": 1215,
                  "collect_count": 165684,
                  "avg_post_collection": 978,
                  "avg_initialize": 1317,
                  "post_collection_count": 2,
                  "build_leaf_collector": 391749,
                  "min_collect": 17227703,
                  "min_build_aggregation": 10975,
                  "initialize": 728095,
                  "max_build_leaf_collector_count": 8,
                  "min_reduce": 0,
                  "collect": 36437256
                }
              }
            ]
          }

@asimmahmood1
Contributor Author

opensearch-benchmark execute-test --pipeline=benchmark-only --workload=nyc_taxis \
  --target-hosts=localhost:9200 --kill-running-processes \
  --workload-param "max_num_segments:10,ingest_percentage:20" \
  --include-tasks="date_histogram_calendar_interval_with_filter,date_histogram_calendar_interval,date_histogram_fixed_interval_with_metrics" \
  --user-tag asimmahm:3.3-nyc-dateskiplist-subagg-final

[INFO] [Test Execution ID]: 40405b2d-2c15-4cf7-a483-5a0671adc672
[WARNING] Local changes in [/home/ec2-user/.osb/benchmarks/workloads/default] prevent workloads update from remote. Please commit your changes.
[WARNING] Local changes in [/home/ec2-user/.osb/benchmarks/workloads/default] prevent workloads update from remote. Please commit your changes.
[WARNING] Local changes in [/home/ec2-user/.osb/benchmarks/workloads/default] prevent workloads update from remote. Please commit your changes.
[INFO] Executing test with workload [nyc_taxis], test_procedure [append-no-conflicts] and provision_config_instance ['external'] with version [3.3.0-SNAPSHOT].

Running date_histogram_calendar_interval                                       [100% done]
Running date_histogram_calendar_interval_with_filter                           [100% done]
Running date_histogram_fixed_interval_with_metrics                             [100% done]

------------------------------------------------------
    _______             __   _____
   / ____(_)___  ____ _/ /  / ___/_________  ________
  / /_  / / __ \/ __ `/ /   \__ \/ ___/ __ \/ ___/ _ \
 / __/ / / / / / /_/ / /   ___/ / /__/ /_/ / /  /  __/
/_/   /_/_/ /_/\__,_/_/   /____/\___/\____/_/   \___/
------------------------------------------------------

|                                                         Metric |                                         Task |       Value |   Unit |
|---------------------------------------------------------------:|---------------------------------------------:|------------:|-------:|
|                     Cumulative indexing time of primary shards |                                              |           0 |    min |
|             Min cumulative indexing time across primary shards |                                              |           0 |    min |
|          Median cumulative indexing time across primary shards |                                              |           0 |    min |
|             Max cumulative indexing time across primary shards |                                              |           0 |    min |
|            Cumulative indexing throttle time of primary shards |                                              |           0 |    min |
|    Min cumulative indexing throttle time across primary shards |                                              |           0 |    min |
| Median cumulative indexing throttle time across primary shards |                                              |           0 |    min |
|    Max cumulative indexing throttle time across primary shards |                                              |           0 |    min |
|                        Cumulative merge time of primary shards |                                              |           0 |    min |
|                       Cumulative merge count of primary shards |                                              |           0 |        |
|                Min cumulative merge time across primary shards |                                              |           0 |    min |
|             Median cumulative merge time across primary shards |                                              |           0 |    min |
|                Max cumulative merge time across primary shards |                                              |           0 |    min |
|               Cumulative merge throttle time of primary shards |                                              |           0 |    min |
|       Min cumulative merge throttle time across primary shards |                                              |           0 |    min |
|    Median cumulative merge throttle time across primary shards |                                              |           0 |    min |
|       Max cumulative merge throttle time across primary shards |                                              |           0 |    min |
|                      Cumulative refresh time of primary shards |                                              |           0 |    min |
|                     Cumulative refresh count of primary shards |                                              |           2 |        |
|              Min cumulative refresh time across primary shards |                                              |           0 |    min |
|           Median cumulative refresh time across primary shards |                                              |           0 |    min |
|              Max cumulative refresh time across primary shards |                                              |           0 |    min |
|                        Cumulative flush time of primary shards |                                              |           0 |    min |
|                       Cumulative flush count of primary shards |                                              |           1 |        |
|                Min cumulative flush time across primary shards |                                              |           0 |    min |
|             Median cumulative flush time across primary shards |                                              |           0 |    min |
|                Max cumulative flush time across primary shards |                                              |           0 |    min |
|                                        Total Young Gen GC time |                                              |       0.032 |      s |
|                                       Total Young Gen GC count |                                              |           2 |        |
|                                          Total Old Gen GC time |                                              |           0 |      s |
|                                         Total Old Gen GC count |                                              |           0 |        |
|                                                     Store size |                                              |     4.36969 |     GB |
|                                                  Translog size |                                              | 5.12227e-08 |     GB |
|                                         Heap used for segments |                                              |           0 |     MB |
|                                       Heap used for doc values |                                              |           0 |     MB |
|                                            Heap used for terms |                                              |           0 |     MB |
|                                            Heap used for norms |                                              |           0 |     MB |
|                                           Heap used for points |                                              |           0 |     MB |
|                                    Heap used for stored fields |                                              |           0 |     MB |
|                                                  Segment count |                                              |          10 |        |
|                                                 Min Throughput |             date_histogram_calendar_interval |         1.5 |  ops/s |
|                                                Mean Throughput |             date_histogram_calendar_interval |         1.5 |  ops/s |
|                                              Median Throughput |             date_histogram_calendar_interval |         1.5 |  ops/s |
|                                                 Max Throughput |             date_histogram_calendar_interval |         1.5 |  ops/s |
|                                        50th percentile latency |             date_histogram_calendar_interval |      227.48 |     ms |
|                                        90th percentile latency |             date_histogram_calendar_interval |     248.178 |     ms |
|                                        99th percentile latency |             date_histogram_calendar_interval |     262.979 |     ms |
|                                       100th percentile latency |             date_histogram_calendar_interval |      263.36 |     ms |
|                                   50th percentile service time |             date_histogram_calendar_interval |     226.161 |     ms |
|                                   90th percentile service time |             date_histogram_calendar_interval |     246.997 |     ms |
|                                   99th percentile service time |             date_histogram_calendar_interval |      261.85 |     ms |
|                                  100th percentile service time |             date_histogram_calendar_interval |     262.355 |     ms |
|                                                     error rate |             date_histogram_calendar_interval |           0 |      % |
|                                                 Min Throughput | date_histogram_calendar_interval_with_filter |        1.51 |  ops/s |
|                                                Mean Throughput | date_histogram_calendar_interval_with_filter |        1.52 |  ops/s |
|                                              Median Throughput | date_histogram_calendar_interval_with_filter |        1.51 |  ops/s |
|                                                 Max Throughput | date_histogram_calendar_interval_with_filter |        1.53 |  ops/s |
|                                        50th percentile latency | date_histogram_calendar_interval_with_filter |     9.87088 |     ms |
|                                        90th percentile latency | date_histogram_calendar_interval_with_filter |     11.1966 |     ms |
|                                        99th percentile latency | date_histogram_calendar_interval_with_filter |     13.2912 |     ms |
|                                       100th percentile latency | date_histogram_calendar_interval_with_filter |     13.4544 |     ms |
|                                   50th percentile service time | date_histogram_calendar_interval_with_filter |      8.4715 |     ms |
|                                   90th percentile service time | date_histogram_calendar_interval_with_filter |     9.47064 |     ms |
|                                   99th percentile service time | date_histogram_calendar_interval_with_filter |     11.7108 |     ms |
|                                  100th percentile service time | date_histogram_calendar_interval_with_filter |     11.8357 |     ms |
|                                                     error rate | date_histogram_calendar_interval_with_filter |           0 |      % |
|                                                 Min Throughput |   date_histogram_fixed_interval_with_metrics |        0.24 |  ops/s |
|                                                Mean Throughput |   date_histogram_fixed_interval_with_metrics |        0.24 |  ops/s |
|                                              Median Throughput |   date_histogram_fixed_interval_with_metrics |        0.24 |  ops/s |
|                                                 Max Throughput |   date_histogram_fixed_interval_with_metrics |        0.24 |  ops/s |
|                                        50th percentile latency |   date_histogram_fixed_interval_with_metrics |      357487 |     ms |
|                                        90th percentile latency |   date_histogram_fixed_interval_with_metrics |      497919 |     ms |
|                                        99th percentile latency |   date_histogram_fixed_interval_with_metrics |      529457 |     ms |
|                                       100th percentile latency |   date_histogram_fixed_interval_with_metrics |      532986 |     ms |
|                                   50th percentile service time |   date_histogram_fixed_interval_with_metrics |     4214.45 |     ms |
|                                   90th percentile service time |   date_histogram_fixed_interval_with_metrics |     4243.44 |     ms |
|                                   99th percentile service time |   date_histogram_fixed_interval_with_metrics |      4274.8 |     ms |
|                                  100th percentile service time |   date_histogram_fixed_interval_with_metrics |     4293.39 |     ms |
|                                                     error rate |   date_histogram_fixed_interval_with_metrics |           0 |      % |


---------------------------------
[INFO] SUCCESS (took 844 seconds)
---------------------------------

@asimmahmood1
Contributor Author

opensearch-benchmark compare -c 40405b2d-2c15-4cf7-a483-5a0671adc672 -b 65b238a9-325a-4393-b457-0588a57a58fa



Comparing baseline
TestExecution ID: 65b238a9-325a-4393-b457-0588a57a58fa
TestExecution timestamp: 2025-09-26 19:46:26
TestProcedure: append-no-conflicts
ProvisionConfigInstance: external
User tags: asimmahm=3.3-nyc-dateskiplist-subagg-baseline

with contender
TestExecution ID: 40405b2d-2c15-4cf7-a483-5a0671adc672
TestExecution timestamp: 2025-09-30 21:07:54
TestProcedure: append-no-conflicts
ProvisionConfigInstance: external
User tags: asimmahm=3.3-nyc-dateskiplist-subagg-final


------------------------------------------------------
    _______             __   _____
   / ____(_)___  ____ _/ /  / ___/_________  ________
  / /_  / / __ \/ __ `/ /   \__ \/ ___/ __ \/ ___/ _ \
 / __/ / / / / / /_/ / /   ___/ / /__/ /_/ / /  /  __/
/_/   /_/_/ /_/\__,_/_/   /____/\___/\____/_/   \___/
------------------------------------------------------

|                                                          Metric |                                          Task |    Baseline |   Contender |      %Diff |     Diff |   Unit |
|----------------------------------------------------------------:|----------------------------------------------:|------------:|------------:|-----------:|---------:|-------:|
|                      Cumulative indexing time of primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|              Min cumulative indexing time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|           Median cumulative indexing time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|              Max cumulative indexing time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|             Cumulative indexing throttle time of primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|     Min cumulative indexing throttle time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|  Median cumulative indexing throttle time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|     Max cumulative indexing throttle time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                         Cumulative merge time of primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                        Cumulative merge count of primary shards |                                               |           0 |           0 |      0.00% |        0 |        |
|                 Min cumulative merge time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|              Median cumulative merge time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                 Max cumulative merge time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                Cumulative merge throttle time of primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|        Min cumulative merge throttle time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|     Median cumulative merge throttle time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|        Max cumulative merge throttle time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                       Cumulative refresh time of primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                      Cumulative refresh count of primary shards |                                               |           2 |           2 |      0.00% |        0 |        |
|               Min cumulative refresh time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|            Median cumulative refresh time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|               Max cumulative refresh time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                         Cumulative flush time of primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                        Cumulative flush count of primary shards |                                               |           1 |           1 |      0.00% |        0 |        |
|                 Min cumulative flush time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|              Median cumulative flush time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                 Max cumulative flush time across primary shards |                                               |           0 |           0 |      0.00% |        0 |    min |
|                                         Total Young Gen GC time |                                               |       0.033 |       0.032 |     -0.00% |   -0.001 |      s |
|                                        Total Young Gen GC count |                                               |           2 |           2 |      0.00% |        0 |        |
|                                           Total Old Gen GC time |                                               |           0 |           0 |      0.00% |        0 |      s |
|                                          Total Old Gen GC count |                                               |           0 |           0 |      0.00% |        0 |        |
|                                                      Store size |                                               |     4.36969 |     4.36969 |      0.00% |        0 |     GB |
|                                                   Translog size |                                               | 5.12227e-08 | 5.12227e-08 |      0.00% |        0 |     GB |
|                                          Heap used for segments |                                               |           0 |           0 |      0.00% |        0 |     MB |
|                                        Heap used for doc values |                                               |           0 |           0 |      0.00% |        0 |     MB |
|                                             Heap used for terms |                                               |           0 |           0 |      0.00% |        0 |     MB |
|                                             Heap used for norms |                                               |           0 |           0 |      0.00% |        0 |     MB |
|                                            Heap used for points |                                               |           0 |           0 |      0.00% |        0 |     MB |
|                                     Heap used for stored fields |                                               |           0 |           0 |      0.00% |        0 |     MB |
|                                                   Segment count |                                               |          10 |          10 |      0.00% |        0 |        |
|                                                  Min Throughput |              date_histogram_calendar_interval |     1.22898 |     1.50108 | +22.14% 🔴 |  0.27211 |  ops/s |
|                                                 Mean Throughput |              date_histogram_calendar_interval |     1.24148 |     1.50176 | +20.97% 🔴 |  0.26028 |  ops/s |
|                                               Median Throughput |              date_histogram_calendar_interval |     1.24395 |     1.50162 | +20.71% 🔴 |  0.25767 |  ops/s |
|                                                  Max Throughput |              date_histogram_calendar_interval |     1.24555 |     1.50309 | +20.68% 🔴 |  0.25753 |  ops/s |
|                                         50th percentile latency |              date_histogram_calendar_interval |     14122.3 |      227.48 | -98.39% 🟢 | -13894.9 |     ms |
|                                         90th percentile latency |              date_histogram_calendar_interval |     19465.8 |     248.178 | -98.73% 🟢 | -19217.7 |     ms |
|                                         99th percentile latency |              date_histogram_calendar_interval |     20690.9 |     262.979 | -98.73% 🟢 | -20427.9 |     ms |
|                                        100th percentile latency |              date_histogram_calendar_interval |     20831.6 |      263.36 | -98.74% 🟢 | -20568.2 |     ms |
|                                    50th percentile service time |              date_histogram_calendar_interval |     794.394 |     226.161 | -71.53% 🟢 | -568.233 |     ms |
|                                    90th percentile service time |              date_histogram_calendar_interval |     809.454 |     246.997 | -69.49% 🟢 | -562.457 |     ms |
|                                    99th percentile service time |              date_histogram_calendar_interval |     846.295 |      261.85 | -69.06% 🟢 | -584.445 |     ms |
|                                   100th percentile service time |              date_histogram_calendar_interval |     856.366 |     262.355 | -69.36% 🟢 | -594.011 |     ms |
|                                                      error rate |              date_histogram_calendar_interval |           0 |           0 |      0.00% |        0 |      % |
|                                                  Min Throughput |  date_histogram_calendar_interval_with_filter |     1.50911 |     1.50943 |      0.02% |  0.00032 |  ops/s |
|                                                 Mean Throughput |  date_histogram_calendar_interval_with_filter |     1.51506 |      1.5156 |      0.04% |  0.00054 |  ops/s |
|                                               Median Throughput |  date_histogram_calendar_interval_with_filter |     1.51371 |     1.51419 |      0.03% |  0.00048 |  ops/s |
|                                                  Max Throughput |  date_histogram_calendar_interval_with_filter |     1.52712 |     1.52811 |      0.06% |  0.00098 |  ops/s |
|                                         50th percentile latency |  date_histogram_calendar_interval_with_filter |      19.384 |     9.87088 | -49.08% 🟢 | -9.51314 |     ms |
|                                         90th percentile latency |  date_histogram_calendar_interval_with_filter |     20.1579 |     11.1966 | -44.46% 🟢 | -8.96132 |     ms |
|                                         99th percentile latency |  date_histogram_calendar_interval_with_filter |     23.0539 |     13.2912 | -42.35% 🟢 | -9.76267 |     ms |
|                                        100th percentile latency |  date_histogram_calendar_interval_with_filter |     23.2335 |     13.4544 | -42.09% 🟢 | -9.77906 |     ms |
|                                    50th percentile service time |  date_histogram_calendar_interval_with_filter |     17.8957 |      8.4715 | -52.66% 🟢 | -9.42423 |     ms |
|                                    90th percentile service time |  date_histogram_calendar_interval_with_filter |       18.55 |     9.47064 | -48.95% 🟢 |  -9.0794 |     ms |
|                                    99th percentile service time |  date_histogram_calendar_interval_with_filter |     21.1604 |     11.7108 | -44.66% 🟢 |  -9.4496 |     ms |
|                                   100th percentile service time |  date_histogram_calendar_interval_with_filter |     21.3555 |     11.8357 | -44.58% 🟢 | -9.51977 |     ms |
|                                                      error rate |  date_histogram_calendar_interval_with_filter |           0 |           0 |      0.00% |        0 |      % |
|                                                  Min Throughput |    date_histogram_fixed_interval_with_metrics |     0.21029 |    0.236863 | +12.64% 🔴 |  0.02657 |  ops/s |
|                                                 Mean Throughput |    date_histogram_fixed_interval_with_metrics |    0.210669 |    0.236931 | +12.47% 🔴 |  0.02626 |  ops/s |
|                                               Median Throughput |    date_histogram_fixed_interval_with_metrics |    0.210624 |    0.236915 | +12.48% 🔴 |  0.02629 |  ops/s |
|                                                  Max Throughput |    date_histogram_fixed_interval_with_metrics |    0.210908 |    0.237073 | +12.41% 🔴 |  0.02616 |  ops/s |
|                                         50th percentile latency |    date_histogram_fixed_interval_with_metrics |      410523 |      357487 | -12.92% 🟢 | -53035.9 |     ms |
|                                         90th percentile latency |    date_histogram_fixed_interval_with_metrics |      571403 |      497919 | -12.86% 🟢 | -73484.4 |     ms |
|                                         99th percentile latency |    date_histogram_fixed_interval_with_metrics |      607548 |      529457 | -12.85% 🟢 | -78090.3 |     ms |
|                                        100th percentile latency |    date_histogram_fixed_interval_with_metrics |      611552 |      532986 | -12.85% 🟢 | -78566.1 |     ms |
|                                    50th percentile service time |    date_histogram_fixed_interval_with_metrics |      4731.7 |     4214.45 | -10.93% 🟢 | -517.248 |     ms |
|                                    90th percentile service time |    date_histogram_fixed_interval_with_metrics |     4763.05 |     4243.44 | -10.91% 🟢 | -519.614 |     ms |
|                                    99th percentile service time |    date_histogram_fixed_interval_with_metrics |     4794.49 |      4274.8 | -10.84% 🟢 | -519.697 |     ms |
|                                   100th percentile service time |    date_histogram_fixed_interval_with_metrics |     4813.14 |     4293.39 | -10.80% 🟢 |  -519.75 |     ms |
|                                                      error rate |    date_histogram_fixed_interval_with_metrics |           0 |           0 |      0.00% |        0 |      % |

[INFO] SUCCESS (took 0 seconds)

@github-actions
Contributor

❌ Gradle check result for a70324d: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@asimmahmood1
Contributor Author

Known flaky tests:

org.opensearch.remotestore.WritableWarmIT.testWritableWarmBasic
org.opensearch.index.IndexServiceTests.testAsyncTranslogTrimTaskOnClosedIndex

@asimmahmood1 asimmahmood1 reopened this Sep 30, 2025
@github-project-automation github-project-automation bot moved this from In Progress to Done in Performance Roadmap Sep 30, 2025
@github-project-automation github-project-automation bot moved this from Done to In Progress in Performance Roadmap Sep 30, 2025
@github-actions
Contributor

github-actions bot commented Oct 1, 2025

✅ Gradle check result for 6e07aec: SUCCESS

@jainankitk jainankitk merged commit 17b762a into opensearch-project:main Oct 1, 2025
32 of 33 checks passed
@github-project-automation github-project-automation bot moved this from In Progress to Done in Performance Roadmap Oct 1, 2025
@asimmahmood1 asimmahmood1 deleted the skip_date_sub_agg branch October 1, 2025 18:22
peteralfonsi pushed a commit to peteralfonsi/OpenSearch that referenced this pull request Oct 15, 2025

Labels

Performance, Search:Aggregations, v3.3.0

Projects

Status: Done
