Kapsule API.
- Kapsule cluster management commands
- Manage your Kubernetes Kapsule cluster's kubeconfig files
- Kapsule node management commands
- Kapsule pool management commands
- Available Kubernetes version commands
A cluster is a fully managed Kubernetes cluster.
It is composed of different pools, each pool containing the same kind of nodes.
Create a new Kubernetes cluster on a Scaleway account.
Usage:
scw k8s cluster create [arg=value ...]
Args:
Name | | Description
---|---|---
project-id | | Project ID to use. If none is passed the default project ID will be used
type | | Type of the cluster
name | Required Default: &lt;generated&gt; | Name of the cluster
description | | Description of the cluster
tags.{index} | | Tags associated with the cluster
version | Required Default: latest | Kubernetes version of the cluster
cni | Required Default: cilium One of: unknown_cni, cilium, calico, weave, flannel, kilo | Container Network Interface (CNI) plugin that will run in the cluster
enable-dashboard | Deprecated | Defines if the Kubernetes Dashboard is enabled in the cluster
ingress | Deprecated One of: unknown_ingress, none, nginx, traefik, traefik2 | Ingress Controller that will run in the cluster
pools.{index}.name | Required | Name of the pool
pools.{index}.node-type | Required | Node type is the type of Scaleway Instance wanted for the pool
pools.{index}.placement-group-id | | Placement group ID in which all the nodes of the pool will be created
pools.{index}.autoscaling | | Defines whether the autoscaling feature is enabled for the pool
pools.{index}.size | Required | Size (number of nodes) of the pool
pools.{index}.min-size | | Minimum size of the pool
pools.{index}.max-size | | Maximum size of the pool
pools.{index}.container-runtime | One of: unknown_runtime, docker, containerd, crio | Container runtime for the nodes of the pool
pools.{index}.autohealing | | Defines whether the autohealing feature is enabled for the pool
pools.{index}.tags.{index} | | Tags associated with the pool
pools.{index}.kubelet-args.{key} | | Kubelet arguments to be used by this pool. Note that this feature is to be considered as experimental
pools.{index}.upgrade-policy.max-unavailable | | The maximum number of nodes that can be not ready at the same time
pools.{index}.upgrade-policy.max-surge | | The maximum number of nodes to be created during the upgrade
pools.{index}.zone | | Zone in which the pool's nodes will be spawned
pools.{index}.root-volume-type | One of: default_volume_type, l_ssd, b_ssd | System volume disk type
pools.{index}.root-volume-size | | System volume disk size
autoscaler-config.scale-down-disabled | | Disable the cluster autoscaler
autoscaler-config.scale-down-delay-after-add | | How long after scale up the scale down evaluation resumes
autoscaler-config.estimator | One of: unknown_estimator, binpacking | Type of resource estimator to be used in scale up
autoscaler-config.expander | One of: unknown_expander, random, most_pods, least_waste, priority, price | Type of node group expander to be used in scale up
autoscaler-config.ignore-daemonsets-utilization | | Ignore DaemonSet pods when calculating resource utilization for scaling down
autoscaler-config.balance-similar-node-groups | | Detect similar node groups and balance the number of nodes between them
autoscaler-config.expendable-pods-priority-cutoff | | Pods with priority below cutoff will be expendable
autoscaler-config.scale-down-unneeded-time | | How long a node should be unneeded before it is eligible for scale down
autoscaler-config.scale-down-utilization-threshold | | Node utilization level, defined as the sum of requested resources divided by capacity, below which a node can be considered for scale down
autoscaler-config.max-graceful-termination-sec | | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node
auto-upgrade.enable | | Whether or not auto upgrade is enabled for the cluster
auto-upgrade.maintenance-window.start-hour | | Start time of the two-hour maintenance window
auto-upgrade.maintenance-window.day | One of: any, monday, tuesday, wednesday, thursday, friday, saturday, sunday | Day of the week for the maintenance window
feature-gates.{index} | | List of feature gates to enable
admission-plugins.{index} | | List of admission plugins to enable
open-id-connect-config.issuer-url | | URL of the provider which allows the API server to discover public signing keys
open-id-connect-config.client-id | | A client ID that all tokens must be issued for
open-id-connect-config.username-claim | | JWT claim to use as the user name
open-id-connect-config.username-prefix | | Prefix prepended to username claims
open-id-connect-config.groups-claim.{index} | | JWT claim to use as the user's group
open-id-connect-config.groups-prefix | | Prefix prepended to group claims
open-id-connect-config.required-claim.{index} | | Multiple key=value pairs that describe a required claim in the ID token
apiserver-cert-sans.{index} | | Additional Subject Alternative Names for the Kubernetes API server certificate
organization-id | | Organization ID to use. If none is passed the default organization ID will be used
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Create a Kubernetes cluster named 'foo' with cilium as the CNI, Kubernetes version 1.24.7, and a pool named 'default' composed of 3 DEV1-M nodes
scw k8s cluster create name=foo version=1.24.7 pools.0.size=3 pools.0.node-type=DEV1-M pools.0.name=default
Create a tagged Kubernetes cluster named 'bar' with calico as the CNI, Kubernetes version 1.24.7, and a tagged pool named 'default' composed of 2 RENDER-S nodes with autohealing and autoscaling enabled (between 1 and 10 nodes)
scw k8s cluster create name=bar version=1.24.7 tags.0=tag1 tags.1=tag2 cni=calico pools.0.size=2 pools.0.node-type=RENDER-S pools.0.min-size=1 pools.0.max-size=10 pools.0.autohealing=true pools.0.autoscaling=true pools.0.tags.0=pooltag1 pools.0.tags.1=pooltag2 pools.0.name=default
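Combine the auto-upgrade args documented above to let the cluster apply patch versions on its own (a sketch; the maintenance-window values are illustrative)
scw k8s cluster create name=qux version=1.24.7 pools.0.size=1 pools.0.node-type=DEV1-M pools.0.name=default auto-upgrade.enable=true auto-upgrade.maintenance-window.start-hour=3 auto-upgrade.maintenance-window.day=sunday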
Delete a specific cluster and all its associated pools and nodes. Note that this method will not delete any Load Balancers or Block Volumes that are associated with the cluster.
Usage:
scw k8s cluster delete <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster to delete
with-additional-resources | | Set to true to delete all volumes (including those of the retain volume type) and Load Balancers whose names start with the cluster ID
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Delete a cluster
scw k8s cluster delete 11111111-1111-1111-1111-111111111111
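Delete a cluster along with its associated volumes (including retain volume types) and Load Balancers, using the with-additional-resources arg documented above
scw k8s cluster delete 11111111-1111-1111-1111-111111111111 with-additional-resources=true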
Get details about a specific Kubernetes cluster.
Usage:
scw k8s cluster get <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the requested cluster
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Get information on a cluster
scw k8s cluster get 11111111-1111-1111-1111-111111111111
List all the existing Kubernetes clusters in a specific Region.
Usage:
scw k8s cluster list [arg=value ...]
Args:
Name | | Description
---|---|---
project-id | | Project ID on which to filter the returned clusters
order-by | One of: created_at_asc, created_at_desc, updated_at_asc, updated_at_desc, name_asc, name_desc, status_asc, status_desc, version_asc, version_desc | Sort order of the returned clusters
name | | Name on which to filter the returned clusters
status | One of: unknown, creating, ready, deleting, deleted, updating, locked, pool_required | Status on which to filter the returned clusters
type | | Type on which to filter the returned clusters
organization-id | | Organization ID on which to filter the returned clusters
region | Default: fr-par One of: fr-par, nl-ams, pl-waw, all | Region to target. If none is passed will use default region from the config
Examples:
List all clusters in your default region
scw k8s cluster list
List the ready clusters in your default region
scw k8s cluster list status=ready
List the clusters that match the given name on fr-par ('cluster1' will return 'cluster100' and 'cluster1' but not 'foo')
scw k8s cluster list region=fr-par name=cluster1
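List clusters sorted with the most recently created first, using the order-by arg documented above
scw k8s cluster list order-by=created_at_desc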
List the versions that a specific Kubernetes cluster is allowed to upgrade to. Results will comprise every patch version greater than the current patch, as well as one minor version ahead of the current version. Any upgrade skipping a minor version will not work.
Usage:
scw k8s cluster list-available-versions <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster for which the available Kubernetes versions will be listed
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
List all available versions for a cluster to upgrade to
scw k8s cluster list-available-versions 11111111-1111-1111-1111-111111111111
Reset the admin token for a specific Kubernetes cluster. This will invalidate the old admin token (it will no longer be usable) and create a new one. Note that you will need to download the kubeconfig again to keep interacting with the cluster.
Usage:
scw k8s cluster reset-admin-token <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster on which the admin token will be renewed
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Reset the admin token for a cluster
scw k8s cluster reset-admin-token 11111111-1111-1111-1111-111111111111
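Since the old token is invalidated, a typical follow-up is to fetch the kubeconfig again right away (see the kubeconfig commands below)
scw k8s kubeconfig install 11111111-1111-1111-1111-111111111111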
Update a specific Kubernetes cluster. Note that this method is designed to update details such as name, description, tags and configuration. However, you cannot upgrade a cluster with this method. To do so, use the dedicated endpoint.
Usage:
scw k8s cluster update <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster to update
name | | New external name of the cluster
description | | New description of the cluster
tags.{index} | | New tags associated with the cluster
autoscaler-config.scale-down-disabled | | Disable the cluster autoscaler
autoscaler-config.scale-down-delay-after-add | | How long after scale up the scale down evaluation resumes
autoscaler-config.estimator | One of: unknown_estimator, binpacking | Type of resource estimator to be used in scale up
autoscaler-config.expander | One of: unknown_expander, random, most_pods, least_waste, priority, price | Type of node group expander to be used in scale up
autoscaler-config.ignore-daemonsets-utilization | | Ignore DaemonSet pods when calculating resource utilization for scaling down
autoscaler-config.balance-similar-node-groups | | Detect similar node groups and balance the number of nodes between them
autoscaler-config.expendable-pods-priority-cutoff | | Pods with priority below cutoff will be expendable
autoscaler-config.scale-down-unneeded-time | | How long a node should be unneeded before it is eligible for scale down
autoscaler-config.scale-down-utilization-threshold | | Node utilization level, defined as the sum of requested resources divided by capacity, below which a node can be considered for scale down
autoscaler-config.max-graceful-termination-sec | | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node
enable-dashboard | Deprecated | New value of the Kubernetes Dashboard enablement
ingress | Deprecated One of: unknown_ingress, none, nginx, traefik, traefik2 | New Ingress Controller for the cluster
auto-upgrade.enable | | Whether or not auto upgrade is enabled for the cluster
auto-upgrade.maintenance-window.start-hour | | Start time of the two-hour maintenance window
auto-upgrade.maintenance-window.day | One of: any, monday, tuesday, wednesday, thursday, friday, saturday, sunday | Day of the week for the maintenance window
feature-gates.{index} | | List of feature gates to enable
admission-plugins.{index} | | List of admission plugins to enable
open-id-connect-config.issuer-url | | URL of the provider which allows the API server to discover public signing keys
open-id-connect-config.client-id | | A client ID that all tokens must be issued for
open-id-connect-config.username-claim | | JWT claim to use as the user name
open-id-connect-config.username-prefix | | Prefix prepended to username claims
open-id-connect-config.groups-claim.{index} | | JWT claim to use as the user's group
open-id-connect-config.groups-prefix | | Prefix prepended to group claims
open-id-connect-config.required-claim.{index} | | Multiple key=value pairs that describe a required claim in the ID token
apiserver-cert-sans.{index} | | Additional Subject Alternative Names for the Kubernetes API server certificate
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Enable the dashboard on a cluster
scw k8s cluster update 11111111-1111-1111-1111-111111111111 enable-dashboard=true
Add TTLAfterFinished and ServiceNodeExclusion as feature gates on a cluster
scw k8s cluster update 11111111-1111-1111-1111-111111111111 feature-gates.0=TTLAfterFinished feature-gates.1=ServiceNodeExclusion
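Configure OIDC authentication on a cluster with the open-id-connect-config args documented above (a sketch; the issuer URL and client ID are hypothetical values)
scw k8s cluster update 11111111-1111-1111-1111-111111111111 open-id-connect-config.issuer-url=https://auth.example.com open-id-connect-config.client-id=my-client open-id-connect-config.username-claim=email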
Upgrade a specific Kubernetes cluster and/or its associated pools to a specific and supported Kubernetes version.
Usage:
scw k8s cluster upgrade <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster to upgrade
version | Required | New Kubernetes version of the cluster
upgrade-pools | | Whether to also upgrade the pools of the cluster
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Upgrade a cluster to Kubernetes version 1.24.7 (without upgrading the pools)
scw k8s cluster upgrade 11111111-1111-1111-1111-111111111111 version=1.24.7
Upgrade a cluster to Kubernetes version 1.24.7 (and upgrade the pools)
scw k8s cluster upgrade 11111111-1111-1111-1111-111111111111 version=1.24.7 upgrade-pools=true
Wait for a cluster to reach a stable state. This is similar to using the --wait flag on other action commands, but without requiring a new action on the cluster.
Usage:
scw k8s cluster wait <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster
wait-for-pools | | Wait for pools to be ready
region | Default: fr-par | Region to target. If none is passed will use default region from the config
Examples:
Wait for a cluster to reach a stable state
scw k8s cluster wait 11111111-1111-1111-1111-111111111111
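Wait for a cluster and all its pools to reach a stable state, using the wait-for-pools arg documented above
scw k8s cluster wait 11111111-1111-1111-1111-111111111111 wait-for-pools=true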
Retrieve the kubeconfig for a specified cluster.
Usage:
scw k8s kubeconfig get <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | Cluster ID from which to retrieve the kubeconfig
region | Default: fr-par | Region to target. If none is passed will use default region from the config
Examples:
Get the kubeconfig for a given cluster
scw k8s kubeconfig get 11111111-1111-1111-1111-111111111111
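The kubeconfig is written to stdout, so it can be saved with ordinary shell redirection (a usage sketch; the file name is arbitrary)
scw k8s kubeconfig get 11111111-1111-1111-1111-111111111111 > my-cluster.kubeconfig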
Retrieve the kubeconfig for a specified cluster and write it to disk. The new kubeconfig is merged into the file pointed to by the KUBECONFIG variable; if empty, it defaults to $HOME/.kube/config.
Usage:
scw k8s kubeconfig install <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | Cluster ID from which to retrieve the kubeconfig
keep-current-context | | Whether or not to keep the current kubeconfig context unmodified
region | Default: fr-par | Region to target. If none is passed will use default region from the config
Examples:
Install the kubeconfig for a given cluster and switch to its new context
scw k8s kubeconfig install 11111111-1111-1111-1111-111111111111
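Install the kubeconfig for a given cluster without switching the current context, using the keep-current-context arg documented above
scw k8s kubeconfig install 11111111-1111-1111-1111-111111111111 keep-current-context=true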
Remove a specified cluster from the kubeconfig file pointed to by the KUBECONFIG environment variable; if empty, it defaults to $HOME/.kube/config. If the current context points to this cluster, it will be set to an empty context.
Usage:
scw k8s kubeconfig uninstall <cluster-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | Cluster ID from which to uninstall the kubeconfig
Examples:
Uninstall the kubeconfig for a given cluster
scw k8s kubeconfig uninstall 11111111-1111-1111-1111-111111111111
A node (short for worker node) is an abstraction for a Scaleway Instance. A node is always part of a pool. Each of them will have Kubernetes software automatically installed and configured by Scaleway. Please note that Kubernetes nodes cannot be accessed with SSH.
Delete a specific node. Note that when there is not enough space to reschedule all the pods (in a one-node cluster for instance), you may experience some disruption of your applications.
Usage:
scw k8s node delete <node-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
node-id | Required | ID of the node to delete
skip-drain | | Skip draining the node of its workloads
replace | | Add a new node after the deletion of this node
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Delete a node
scw k8s node delete 11111111-1111-1111-1111-111111111111
Delete a node without draining its workloads
scw k8s node delete 11111111-1111-1111-1111-111111111111 skip-drain=true
Replace a node with a new one
scw k8s node delete 11111111-1111-1111-1111-111111111111 replace=true
Get details about a specific Kubernetes node.
Usage:
scw k8s node get <node-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
node-id | Required | ID of the requested node
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Get a node
scw k8s node get 11111111-1111-1111-1111-111111111111
List all the existing nodes for a specific Kubernetes cluster.
Usage:
scw k8s node list [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster from which the nodes will be listed
pool-id | | Pool ID on which to filter the returned nodes
order-by | One of: created_at_asc, created_at_desc | Sort order of the returned nodes
name | | Name on which to filter the returned nodes
status | One of: unknown, creating, not_ready, ready, deleting, deleted, locked, rebooting, creation_error, upgrading, starting, registering | Status on which to filter the returned nodes
region | Default: fr-par One of: fr-par, nl-ams, pl-waw, all | Region to target. If none is passed will use default region from the config
Examples:
List all the nodes in the cluster
scw k8s node list cluster-id=11111111-1111-1111-1111-111111111111
List all the nodes in the pool 22222222-2222-2222-2222-222222222222 in the cluster
scw k8s node list cluster-id=11111111-1111-1111-1111-111111111111 pool-id=22222222-2222-2222-2222-222222222222
List all ready nodes in the cluster
scw k8s node list cluster-id=11111111-1111-1111-1111-111111111111 status=ready
Reboot a specific node. This node will first be cordoned, meaning that scheduling will be disabled. Then the existing pods on the node will be drained and rescheduled onto another schedulable node. Note that when there is not enough space to reschedule all the pods (in a one-node cluster, for instance), you may experience some disruption of your applications.
Usage:
scw k8s node reboot <node-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
node-id | Required | ID of the node to reboot
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Reboot a node
scw k8s node reboot 11111111-1111-1111-1111-111111111111
Replace a specific node. The node will first be cordoned, meaning that scheduling will be disabled. Then the existing pods on the node will be drained and rescheduled onto another schedulable node. The node is then deleted, and a new one is created to replace it. Note that when there is not enough space to reschedule all the pods (in a one-node cluster, for instance), you may experience some disruption of your applications.
Usage:
scw k8s node replace <node-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
node-id | Required | ID of the node to replace
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Replace a node
scw k8s node replace 11111111-1111-1111-1111-111111111111
Wait for a node to reach a stable state. This is similar to using the --wait flag on other action commands, but without requiring a new action on the node.
Usage:
scw k8s node wait <node-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
node-id | Required | ID of the node
region | Default: fr-par | Region to target. If none is passed will use default region from the config
Examples:
Wait for a node to reach a stable state
scw k8s node wait 11111111-1111-1111-1111-111111111111
A pool is a set of identical nodes. A pool has a name, a size (its current number of nodes), node number limits (min, max), and a Scaleway Instance type. Changing those limits increases/decreases the size of a pool. Thus, the pool will grow or shrink inside those limits when autoscaling is enabled, depending on its load. A "default pool" is automatically created with every cluster.
Create a new pool in a specific Kubernetes cluster.
Usage:
scw k8s pool create [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster in which the pool will be created
name | Required Default: &lt;generated&gt; | Name of the pool
node-type | Required Default: DEV1-M | Node type is the type of Scaleway Instance wanted for the pool
placement-group-id | | Placement group ID in which all the nodes of the pool will be created
autoscaling | | Defines whether the autoscaling feature is enabled for the pool
size | Required Default: 1 | Size (number of nodes) of the pool
min-size | | Minimum size of the pool
max-size | | Maximum size of the pool
container-runtime | One of: unknown_runtime, docker, containerd, crio | Container runtime for the nodes of the pool
autohealing | | Defines whether the autohealing feature is enabled for the pool
tags.{index} | | Tags associated with the pool
kubelet-args.{key} | | Kubelet arguments to be used by this pool. Note that this feature is to be considered as experimental
upgrade-policy.max-unavailable | | The maximum number of nodes that can be not ready at the same time
upgrade-policy.max-surge | | The maximum number of nodes to be created during the upgrade
zone | | Zone in which the pool's nodes will be spawned
root-volume-type | One of: default_volume_type, l_ssd, b_ssd | System volume disk type
root-volume-size | | System volume disk size
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Create a pool named 'bar' with 2 DEV1-XL nodes on a cluster
scw k8s pool create cluster-id=11111111-1111-1111-1111-111111111111 name=bar node-type=DEV1-XL size=2
Create a pool named 'fish' with 5 GP1-L nodes, autoscaling enabled between 0 and 10 nodes, autohealing enabled, and containerd as the container runtime, on a cluster
scw k8s pool create cluster-id=11111111-1111-1111-1111-111111111111 name=fish node-type=GP1-L size=5 min-size=0 max-size=10 autoscaling=true autohealing=true container-runtime=containerd
Create a tagged pool named 'turtle' with 1 GP1-S node, using the existing placement group 22222222-2222-2222-2222-222222222222 for all the nodes in the pool, on a cluster
scw k8s pool create cluster-id=11111111-1111-1111-1111-111111111111 name=turtle node-type=GP1-S size=1 placement-group-id=22222222-2222-2222-2222-222222222222 tags.0=turtle tags.1=placement-group
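Control how nodes are rolled during pool upgrades with the upgrade-policy args documented above (a sketch; the surge/unavailable values are illustrative)
scw k8s pool create cluster-id=11111111-1111-1111-1111-111111111111 name=crab node-type=DEV1-M size=3 upgrade-policy.max-surge=1 upgrade-policy.max-unavailable=1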
Delete a specific pool from a cluster. All of the pool's nodes will also be deleted.
Usage:
scw k8s pool delete <pool-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
pool-id | Required | ID of the pool to delete
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Delete a specific pool
scw k8s pool delete 11111111-1111-1111-1111-111111111111
Get details about a specific pool in a Kubernetes cluster.
Usage:
scw k8s pool get <pool-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
pool-id | Required | ID of the requested pool
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Get a given pool
scw k8s pool get 11111111-1111-1111-1111-111111111111
List all the existing pools for a specific Kubernetes cluster.
Usage:
scw k8s pool list [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster from which the pools will be listed
order-by | One of: created_at_asc, created_at_desc, updated_at_asc, updated_at_desc, name_asc, name_desc, status_asc, status_desc, version_asc, version_desc | Sort order of the returned pools
name | | Name on which to filter the returned pools
status | One of: unknown, ready, deleting, deleted, scaling, warning, locked, upgrading | Status on which to filter the returned pools
region | Default: fr-par One of: fr-par, nl-ams, pl-waw, all | Region to target. If none is passed will use default region from the config
Examples:
List all pools for a cluster
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111
List all scaling pools for a cluster
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111 status=scaling
List all pools for a cluster that contain the word 'foo' in their name
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111 name=foo
List all pools for a cluster, ordered by ascending creation date
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111 order-by=created_at_asc
Update attributes of a specific pool, such as size, autoscaling settings, and tags.
Usage:
scw k8s pool update <pool-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
pool-id | Required | ID of the pool to update
autoscaling | | New value for the enablement of autoscaling for the pool
size | | New size for the pool
min-size | | New minimum size for the pool
max-size | | New maximum size for the pool
autohealing | | New value for the enablement of autohealing for the pool
tags.{index} | | New tags associated with the pool
kubelet-args.{key} | | New Kubelet arguments to be used by this pool. Note that this feature is to be considered as experimental
upgrade-policy.max-unavailable | | The maximum number of nodes that can be not ready at the same time
upgrade-policy.max-surge | | The maximum number of nodes to be created during the upgrade
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Enable autoscaling on a given pool
scw k8s pool update 11111111-1111-1111-1111-111111111111 autoscaling=true
Reduce the size and max size of a given pool to 4
scw k8s pool update 11111111-1111-1111-1111-111111111111 size=4 max-size=4
Change the tags of the given pool
scw k8s pool update 11111111-1111-1111-1111-111111111111 tags.0=my tags.1=new tags.2=pool
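Set an experimental kubelet argument on a pool through the kubelet-args map documented above (a sketch assuming the upstream kubelet flag cpu-manager-policy; key and value are illustrative)
scw k8s pool update 11111111-1111-1111-1111-111111111111 kubelet-args.cpu-manager-policy=static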
Upgrade the Kubernetes version of a specific pool. Note that this will only work when the targeted version is the same as the version of the cluster.
Usage:
scw k8s pool upgrade <pool-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
pool-id | Required | ID of the pool to upgrade
version | Required | New Kubernetes version for the pool
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Upgrade a specific pool to Kubernetes version 1.24.7
scw k8s pool upgrade 11111111-1111-1111-1111-111111111111 version=1.24.7
Wait for a pool to reach a stable state. This is similar to using the --wait flag on other action commands, but without requiring a new action on the pool.
Usage:
scw k8s pool wait <pool-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
pool-id | Required | ID of the pool
region | Default: fr-par | Region to target. If none is passed will use default region from the config
Examples:
Wait for a pool to reach a stable state
scw k8s pool wait 11111111-1111-1111-1111-111111111111
A version is a vanilla Kubernetes version like x.y.z. It comprises a major version x, a minor version y, and a patch version z. Scaleway's managed Kubernetes, Kapsule, supports at minimum the latest patch version of the last three minor releases. Each version also has a different set of container runtimes, CNIs, ingresses, feature gates, and admission plugins available.
Get a specific Kubernetes version and the details about the version.
Usage:
scw k8s version get <version-name ...> [arg=value ...]
Args:
Name | | Description
---|---|---
version-name | Required | Requested version name
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Get the Kubernetes version 1.24.7
scw k8s version get 1.24.7
List all available versions for the creation of a new Kubernetes cluster.
Usage:
scw k8s version list [arg=value ...]
Args:
Name | | Description
---|---|---
region | Default: fr-par One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
List all available Kubernetes versions in Kapsule
scw k8s version list