Commit 2168df7

[SPARK-54422][K8S] Increase spark.kubernetes.allocation.batch.size to 20
### What changes were proposed in this pull request?

This PR aims to increase `spark.kubernetes.allocation.batch.size` from 10 to 20 in Apache Spark 4.2.0.

### Why are the changes needed?

Since Apache Spark 4.0.0, Apache Spark has used `10` as the default executor allocation batch size. This PR aims to increase it further in 2025.

- #49681

### Does this PR introduce _any_ user-facing change?

Yes, users will see faster Spark job resource allocation. The migration guide is updated correspondingly.

### How was this patch tested?

Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #53134 from dongjoon-hyun/SPARK-54422.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
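As the migration guide entry notes, the previous default can be restored at submit time. A minimal sketch follows; the Kubernetes master URL, container image, and application jar are placeholders, not values from this patch:

```shell
# Hypothetical spark-submit invocation restoring the pre-4.2 default.
# The master URL, image name, and jar path below are placeholders.
spark-submit \
  --master k8s://https://example-cluster:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.allocation.batch.size=10 \
  --conf spark.kubernetes.container.image=example/spark:4.2.0 \
  local:///opt/spark/examples/jars/spark-examples.jar
```

The same setting can also be placed in `spark-defaults.conf` to apply it to all jobs submitted from that client.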
Parent: 158a132 · Commit: 2168df7

File tree: 3 files changed (+6, −2 lines)
docs/core-migration-guide.md (4 additions, 0 deletions)

```diff
@@ -22,6 +22,10 @@ license: |
 * Table of contents
 {:toc}
 
+## Upgrading from Core 4.1 to 4.2
+
+- Since Spark 4.2, Spark will allocate executor pods with a batch size of `20`. To restore the legacy behavior, you can set `spark.kubernetes.allocation.batch.size` to `10`.
+
 ## Upgrading from Core 4.0 to 4.1
 
 - Since Spark 4.1, Spark Master deamon provides REST API by default. To restore the behavior before Spark 4.1, you can set `spark.master.rest.enabled` to `false`.
```

docs/running-on-kubernetes.md (1 addition, 1 deletion)

```diff
@@ -685,7 +685,7 @@ See the [configuration page](configuration.html) for information on Spark config
 </tr>
 <tr>
   <td><code>spark.kubernetes.allocation.batch.size</code></td>
-  <td><code>10</code></td>
+  <td><code>20</code></td>
   <td>
     Number of pods to launch at once in each round of executor pod allocation.
   </td>
```

resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala (1 addition, 1 deletion)

```diff
@@ -487,7 +487,7 @@ private[spark] object Config extends Logging {
       .version("2.3.0")
       .intConf
       .checkValue(value => value > 0, "Allocation batch size should be a positive integer")
-      .createWithDefault(10)
+      .createWithDefault(20)
 
   val KUBERNETES_ALLOCATION_BATCH_DELAY =
     ConfigBuilder("spark.kubernetes.allocation.batch.delay")
```
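The practical effect of doubling the default can be illustrated with a quick back-of-the-envelope calculation: executors are requested in rounds of at most `batch.size` pods, so a larger batch halves the number of rounds needed. This sketch is illustrative only; the executor count is hypothetical, and it ignores the per-round `spark.kubernetes.allocation.batch.delay`:

```python
import math


def allocation_rounds(num_executors: int, batch_size: int) -> int:
    """Rounds needed to request all executor pods, assuming one
    full batch of pod creation requests per allocation round."""
    return math.ceil(num_executors / batch_size)


# Hypothetical job requesting 100 executors:
print(allocation_rounds(100, 10))  # old default: 10 rounds
print(allocation_rounds(100, 20))  # new default: 5 rounds
```

With the default one-second batch delay, fewer rounds translates directly into less wall-clock time before all pods have been requested.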
