# MachinePool

## Table of Contents

- [Introduction](#introduction)
- [What is a MachinePool?](#what-is-a-machinepool)
- [Why MachinePool?](#why-machinepool)
- [When to use MachinePool vs MachineDeployment](#when-to-use-machinepool-vs-machinedeployment)
- [Next steps](#next-steps)

## Introduction

Cluster API (CAPI) manages Kubernetes worker nodes primarily through Machine, MachineSet, and MachineDeployment objects. These primitives manage nodes individually, one Machine per node, and have served well across a wide variety of providers.

However, many infrastructure providers already offer first-class abstractions for groups of compute instances, such as Auto Scaling Groups (ASG) on AWS, Virtual Machine Scale Sets (VMSS) on Azure, and Managed Instance Groups (MIG) on GCP. These primitives natively support scaling, rolling upgrades, and health management.

MachinePool brings these provider features into Cluster API by introducing a higher-level abstraction for managing a group of machines as a single unit.

## What is a MachinePool?

A MachinePool is a Cluster API resource representing a group of worker nodes. Instead of reconciling each machine individually, CAPI delegates lifecycle management to the infrastructure provider.

- **MachinePool (core API)**: defines desired state (replicas, Kubernetes version, bootstrap template, infrastructure reference).
- **InfrastructureMachinePool (provider API)**: provides an implementation that backs a pool. A provider may offer more than one type depending on how it is managed. For example:
  - `AWSMachinePool`: self-managed ASG
  - `AWSManagedMachinePool`: EKS managed node group
  - `AzureMachinePool`: VM Scale Set
  - `AzureManagedMachinePool`: AKS managed node pool
  - `GCPManagedMachinePool`: GKE managed node pool
  - `OCIManagedMachinePool`: OKE managed node pool
  - `ScalewayManagedMachinePool`: Scaleway Kapsule node pool
- **Bootstrap configuration**: still applies (e.g., kubeadm configs), ensuring that new nodes join the cluster with the correct setup.

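
To make the wiring concrete, here is a minimal sketch of a MachinePool that references a kubeadm bootstrap config and a CAPA `AWSMachinePool`. Names, namespaces, replica counts, and API versions are illustrative and vary by provider and release:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: my-cluster-mp-0          # illustrative name
  namespace: default
spec:
  clusterName: my-cluster
  replicas: 3
  template:
    spec:
      clusterName: my-cluster
      version: v1.31.0           # Kubernetes version for the pool's nodes
      bootstrap:
        configRef:               # bootstrap configuration (kubeadm in this sketch)
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfig
          name: my-cluster-mp-0
      infrastructureRef:         # provider-specific pool implementation
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachinePool
        name: my-cluster-mp-0
```

The referenced infrastructure object (not shown) is where provider-specific settings such as instance types, subnets, and scaling limits live.
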
The MachinePool controller coordinates between the Cluster API core and provider-specific implementations:

- Reconciles desired replicas with the infrastructure pool.
- Matches provider IDs from the infrastructure resource with Kubernetes Nodes in the workload cluster.
- Updates MachinePool status (ready replicas, conditions, etc.).

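
As a purely illustrative sketch of the result, a reconciled MachinePool might carry a provider ID list and matching node references roughly like this (values are made up; exact fields and formats depend on the API version and provider):

```yaml
spec:
  replicas: 3
  providerIDList:                      # collected from the InfrastructureMachinePool
    - aws:///us-east-1a/i-0abc1234567890
    - aws:///us-east-1b/i-0def1234567890
    - aws:///us-east-1c/i-0ghi1234567890
status:
  phase: Running
  replicas: 3
  readyReplicas: 3
  nodeRefs:                            # workload-cluster Nodes matched by provider ID
    - kind: Node
      name: ip-10-0-1-23.ec2.internal
    - kind: Node
      name: ip-10-0-2-45.ec2.internal
    - kind: Node
      name: ip-10-0-3-67.ec2.internal
```
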
## Why MachinePool?

### Leverage provider primitives

Most cloud providers already manage scaling, instance replacement, and health monitoring at the group level. MachinePool lets CAPI delegate lifecycle operations instead of duplicating that logic.

**Examples:**
- AWS Auto Scaling Groups replace failed nodes automatically.
- Azure VM Scale Sets support rolling upgrades with configurable surge/availability strategies.

### Simplify upgrades and scaling

Upgrades and scaling events are managed at the pool level:
- Update Kubernetes version or bootstrap template → cloud provider handles rolling replacement.
- Scale up/down replicas → provider adjusts capacity.

This provides more predictable, cloud-native semantics compared to reconciling many individual Machine objects.

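
As a sketch (assuming the manifest shape shown earlier), both operations are plain edits to the MachinePool object; the infrastructure provider then drives the corresponding change in the underlying ASG/VMSS/MIG:

```yaml
spec:
  replicas: 5              # scale up/down: the provider resizes the underlying group
  template:
    spec:
      version: v1.32.0     # upgrade: the provider rolls instances onto the new version
```
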
### Autoscaling integration

MachinePool integrates with the Cluster Autoscaler in the same way that MachineDeployments do. In practice, the autoscaler treats a MachinePool as a node group, enabling scale-up and scale-down decisions based on cluster load.

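
For example, the Cluster Autoscaler's Cluster API provider discovers node groups through size annotations. A hedged sketch of opting a MachinePool into autoscaling (annotation keys as documented for the `clusterapi` cloud provider; values are illustrative):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: my-cluster-mp-0
  annotations:
    # Lower and upper bounds the autoscaler may scale this pool between.
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
spec:
  clusterName: my-cluster
  # When the autoscaler owns sizing, avoid pinning spec.replicas (e.g., in GitOps)
  # so the two controllers do not fight over the replica count.
```
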
### Tradeoffs and limitations

While powerful, MachinePool comes with tradeoffs:

- **Infrastructure provider complexity**: requires infrastructure providers to implement and maintain an InfrastructureMachinePool type.
- **Less per-machine granularity**: you cannot configure each node individually; the pool defines a shared template.
  > **Note**: While this is typically true, certain cloud providers do offer flexibility.
  > **Example**: AWS allows `AWSMachinePool.spec.mixedInstancesPolicy.instancesDistribution`, while Azure allows `AzureMachinePool.spec.orchestrationMode`.
- **Complex reconciliation**: node-to-providerID matching introduces edge cases (delays, inconsistent states).
- **Draining**: The cloud resources backing a MachinePool may not support draining of Kubernetes worker nodes. For example, with an AWSMachinePool, AWS would normally terminate instances as quickly as possible. To solve this, tools like `aws-node-termination-handler` combined with ASG lifecycle hooks (defined in `AWSMachinePool.spec.lifecycleHooks`) must be installed; graceful draining is not a built-in feature of the infrastructure provider (CAPA in this example).
- **Maturity**: The MachinePool API is still considered experimental/beta.

## When to use MachinePool vs MachineDeployment

Both MachineDeployment and MachinePool are valid options for managing worker nodes in Cluster API. The right choice depends on your infrastructure provider's capabilities and your operational requirements.

### Use MachinePool when:

- **Cloud provider supports scaling group primitives**: AWS Auto Scaling Groups, Azure Virtual Machine Scale Sets, GCP Managed Instance Groups, OCI Instance Pools, Scaleway Kapsule node pools. These resources natively handle scaling, rolling upgrades, and health checks.
- **You want to leverage cloud provider-level features**: MachinePool enables direct use of cloud-native upgrade strategies (e.g., surge, maxUnavailable) and autoscaling behaviors.
- **You are operating medium-to-large node groups**: Managing 50+ nodes through individual Machine objects can add significant reconciliation overhead. MachinePool reduces this by consolidating the group into a single object.

### Use MachineDeployment when:

- **The provider does not support scaling groups**: Common in environments such as bare metal, vSphere, or Docker.
- **You need fine-grained per-machine control**: MachineDeployments allow unique bootstrap configurations, labels, and taints across different MachineSets.
- **You prefer maturity and portability**: MachineDeployment is stable, GA, and supported across all providers. MachinePool remains experimental in some implementations.
- **Your clusters are small**: For clusters with only a handful of nodes, the additional API object overhead from Machines is minimal, and MachineDeployment provides simpler semantics.

## Next Steps

- **Enable the feature**: [MachinePool Experimental Feature](../tasks/experimental-features/machine-pools.md) (see the configuration sketch below)
- **Developer documentation**: [MachinePool Controller](../developer/core/controllers/machine-pool.md)
- **Future work**: Planned improvements are tracked [here](https://github.com/kubernetes-sigs/cluster-api/issues/9005)
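
For quick reference, a minimal sketch of enabling the feature gate through the clusterctl configuration file, assuming the `EXP_MACHINE_POOL` variable documented for experimental features (the same variable can also be exported as an environment variable before running `clusterctl init`):

```yaml
# ~/.cluster-api/clusterctl.yaml
# Enables the MachinePool feature gate for components installed by clusterctl.
EXP_MACHINE_POOL: "true"
```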