From 16cf151b71f5371b4901f7a6cbd87fd096b7636b Mon Sep 17 00:00:00 2001
From: Ellis Tarn
Date: Thu, 22 Jun 2023 15:45:09 -0700
Subject: [PATCH] docs: Updated latest release version to v0.28.1 (#4112)

---
 hack/release/prepare-website.sh                   |  2 +-
 website/config.yaml                               |  2 +-
 website/content/en/docs/concepts/provisioners.md  | 12 ++++++------
 website/content/en/docs/concepts/threat-model.md  | 10 +++++-----
 website/content/en/docs/faq.md                    |  4 ++--
 .../getting-started-with-karpenter/_index.md      |  2 +-
 .../grafana-values.yaml                           |  4 ++--
 .../getting-started/migrating-from-cas/_index.md  |  4 ++--
 website/content/en/docs/troubleshooting.md        |  2 +-
 website/content/en/docs/upgrade-guide.md          |  8 ++++----
 website/content/en/v0.28/concepts/provisioners.md | 12 ++++++------
 website/content/en/v0.28/concepts/threat-model.md | 10 +++++-----
 website/content/en/v0.28/faq.md                   |  4 ++--
 .../getting-started-with-karpenter/_index.md      |  2 +-
 .../grafana-values.yaml                           |  4 ++--
 .../getting-started/migrating-from-cas/_index.md  |  4 ++--
 website/content/en/v0.28/upgrade-guide.md         |  8 ++++----
 17 files changed, 47 insertions(+), 47 deletions(-)

diff --git a/hack/release/prepare-website.sh b/hack/release/prepare-website.sh
index 0a9d4b2b508f..e8ae83bec089 100755
--- a/hack/release/prepare-website.sh
+++ b/hack/release/prepare-website.sh
@@ -6,7 +6,7 @@
 source "${SCRIPT_DIR}/common.sh"
 config
 
-GIT_TAG=$(git describe --exact-match --tags || echo "none")
+GIT_TAG=${GIT_TAG:-$(git describe --exact-match --tags || echo "none")}
 if [[ $(releaseType "$GIT_TAG") != $RELEASE_TYPE_STABLE ]]; then
   echo "Not a stable release. Missing required git tag."
   exit 1
diff --git a/website/config.yaml b/website/config.yaml
index a5928d9d10f6..e5d3fde4dcc5 100644
--- a/website/config.yaml
+++ b/website/config.yaml
@@ -66,7 +66,7 @@ params:
       url: 'https://slack.k8s.io/'
       icon: fab fa-slack
       desc: 'Chat with us on Slack in the #aws-provider channel'
-  latest_release_version: v0.28.0
+  latest_release_version: v0.28.1
   versions:
     - v0.28
     - v0.27
diff --git a/website/content/en/docs/concepts/provisioners.md b/website/content/en/docs/concepts/provisioners.md
index 77f3cfb66e1f..9c092798189e 100644
--- a/website/content/en/docs/concepts/provisioners.md
+++ b/website/content/en/docs/concepts/provisioners.md
@@ -112,7 +112,7 @@ spec:
     cpuCFSQuota: true
     podsPerCore: 2
     maxPods: 20
-    
+
   # Resource limits constrain the total size of the cluster.
   # Limits prevent Karpenter from creating new instances once the limit is exceeded.
 
@@ -258,7 +258,7 @@ For more information on weighting Provisioners, see the [Weighting Provisioners
 ## spec.kubeletConfiguration
 Karpenter provides the ability to specify a few additional Kubelet args. These are all optional and provide support for
-additional customization and use cases. Adjust these only if you know you need to do so. For more details on kubelet configuration arguments, [see the KubeletConfiguration API specification docs](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/). 
+additional customization and use cases. Adjust these only if you know you need to do so. For more details on kubelet configuration arguments, [see the KubeletConfiguration API specification docs](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/).
 The implemented fields are a subset of the full list of upstream kubelet configuration arguments. Please cut an issue if you'd like to see another field implemented.
 
 ```yaml
 spec:
@@ -393,7 +393,7 @@ Bottlerocket AMIFamily currently does not support `podsPerCore` configuration. I
 
 ## spec.limits.resources
 
-The provisioner spec includes a limits section (`spec.limits.resources`), which constrains the maximum amount of resources that the provisioner will manage. 
+The provisioner spec includes a limits section (`spec.limits.resources`), which constrains the maximum amount of resources that the provisioner will manage.
 
 Karpenter supports limits of any resource type reported by your cloudprovider. It limits instance types when scheduling to those that will not exceed the specified limits. If a limit has been exceeded, nodes provisioning is prevented until some nodes have been terminated.
 
@@ -409,7 +409,7 @@ spec:
       values: ["spot"]
   limits:
     resources:
-      cpu: 1000 
+      cpu: 1000
       memory: 1000Gi
       nvidia.com/gpu: 2
 ```
@@ -450,7 +450,7 @@ kind: Provisioner
 metadata:
   name: gpu
 spec:
-  consolidation: 
+  consolidation:
     enabled: true
   requirements:
   - key: node.kubernetes.io/instance-type
@@ -473,7 +473,7 @@ kind: Provisioner
 metadata:
   name: cilium-startup
 spec:
-  consolidation: 
+  consolidation:
     enabled: true
   startupTaints:
   - key: node.cilium.io/agent-not-ready
diff --git a/website/content/en/docs/concepts/threat-model.md b/website/content/en/docs/concepts/threat-model.md
index a2e532b8a5c9..2fd7d2be5601 100644
--- a/website/content/en/docs/concepts/threat-model.md
+++ b/website/content/en/docs/concepts/threat-model.md
@@ -31,11 +31,11 @@ A Cluster Developer has the ability to create pods via Deployments, ReplicaSets,
 
 Karpenter has permissions to create and manage cloud instances. Karpenter has Kubernetes API permissions to create, update, and remove nodes, as well as evict pods. For a full list of the permissions, see the RBAC rules in the helm chart template. Karpenter also has AWS IAM permissions to create instances with IAM roles.
 
-* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/aggregate-clusterrole.yaml)
-* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/clusterrole-core.yaml)
-* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/clusterrole.yaml)
-* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/rolebinding.yaml)
-* [role.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/role.yaml)
+* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/aggregate-clusterrole.yaml)
+* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/clusterrole-core.yaml)
+* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/clusterrole.yaml)
+* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/rolebinding.yaml)
+* [role.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/role.yaml)
 
 ## Assumptions
 
diff --git a/website/content/en/docs/faq.md b/website/content/en/docs/faq.md
index fa3a03d96807..464ec50a774a 100644
--- a/website/content/en/docs/faq.md
+++ b/website/content/en/docs/faq.md
@@ -15,7 +15,7 @@ AWS is the first cloud provider supported by Karpenter, although it is designed
 
 ### Can I write my own cloud provider for Karpenter?
 Yes, but there is no documentation yet for it.
-Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.28.0/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
+Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.28.1/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
 
 ### What operating system nodes does Karpenter deploy?
 By default, Karpenter uses Amazon Linux 2 images.
@@ -28,7 +28,7 @@ Karpenter is flexible to multi architecture configurations using [well known lab
 
 ### What RBAC access is required?
 All of the required RBAC rules can be found in the helm chart template.
-See [clusterrolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/clusterrolebinding.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/role.yaml) files for details.
+See [clusterrolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/clusterrolebinding.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/role.yaml) files for details.
 
 ### Can I run Karpenter outside of a Kubernetes cluster?
 Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API.
diff --git a/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md b/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md
index f019ee36028c..68617d2f0158 100644
--- a/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md
+++ b/website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md
@@ -44,7 +44,7 @@ authenticate properly by running `aws sts get-caller-identity`.
 After setting up the tools, set the Karpenter version number:
 
 ```bash
-export KARPENTER_VERSION=v0.28.0
+export KARPENTER_VERSION=v0.28.1
 ```
 
 Then set the following environment variable:
diff --git a/website/content/en/docs/getting-started/getting-started-with-karpenter/grafana-values.yaml b/website/content/en/docs/getting-started/getting-started-with-karpenter/grafana-values.yaml
index 2b4528ceb495..4568307c170f 100644
--- a/website/content/en/docs/getting-started/getting-started-with-karpenter/grafana-values.yaml
+++ b/website/content/en/docs/getting-started/getting-started-with-karpenter/grafana-values.yaml
@@ -22,6 +22,6 @@ dashboardProviders:
 dashboards:
   default:
     capacity-dashboard:
-      url: https://karpenter.sh/v0.28.0/getting-started/getting-started-with-karpenter/karpenter-capacity-dashboard.json
+      url: https://karpenter.sh/v0.28.1/getting-started/getting-started-with-karpenter/karpenter-capacity-dashboard.json
     performance-dashboard:
-      url: https://karpenter.sh/v0.28.0/getting-started/getting-started-with-karpenter/karpenter-performance-dashboard.json
+      url: https://karpenter.sh/v0.28.1/getting-started/getting-started-with-karpenter/karpenter-performance-dashboard.json
diff --git a/website/content/en/docs/getting-started/migrating-from-cas/_index.md b/website/content/en/docs/getting-started/migrating-from-cas/_index.md
index 242e711a8e53..c805c8c72913 100644
--- a/website/content/en/docs/getting-started/migrating-from-cas/_index.md
+++ b/website/content/en/docs/getting-started/migrating-from-cas/_index.md
@@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group.
 First set the Karpenter release you want to deploy.
 
 ```bash
-export KARPENTER_VERSION=v0.28.0
+export KARPENTER_VERSION=v0.28.1
 ```
 
 We can now generate a full Karpenter deployment yaml from the helm chart.
@@ -134,7 +134,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t
 
 ## Create default provisioner
 We need to create a default provisioner so Karpenter knows what types of nodes we want for unscheduled workloads.
-You can refer to some of the [example provisioners](https://github.com/aws/karpenter/tree/v0.28.0/examples/provisioner) for specific needs.
+You can refer to some of the [example provisioners](https://github.com/aws/karpenter/tree/v0.28.1/examples/provisioner) for specific needs.
 
 {{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step11-create-provisioner.sh" language="bash" %}}
 
diff --git a/website/content/en/docs/troubleshooting.md b/website/content/en/docs/troubleshooting.md
index 7eadc573ddaf..590c0ea708e5 100644
--- a/website/content/en/docs/troubleshooting.md
+++ b/website/content/en/docs/troubleshooting.md
@@ -15,7 +15,7 @@ If you installed the controller in the `karpenter` namespace you can see the cur
 ```
 kubectl get configmap -n karpenter config-logging -o yaml
 
-apiVersion: v1 
+apiVersion: v1
 data:
   loglevel.webhook: error
   zap-logger-config: |
diff --git a/website/content/en/docs/upgrade-guide.md b/website/content/en/docs/upgrade-guide.md
index 803ca3f197f1..b3cab3d53490 100644
--- a/website/content/en/docs/upgrade-guide.md
+++ b/website/content/en/docs/upgrade-guide.md
@@ -51,9 +51,9 @@ If you get the error `invalid ownership metadata; label validation error:` while
 In general, you can reapply the CRDs in the `crds` directory of the Karpenter helm chart:
 
 ```shell
-kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.0/pkg/apis/crds/karpenter.sh_provisioners.yaml
-kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.0/pkg/apis/crds/karpenter.sh_machines.yaml
-kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.0/pkg/apis/crds/karpenter.k8s.aws_awsnodetemplates.yaml
+kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.1/pkg/apis/crds/karpenter.sh_provisioners.yaml
+kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.1/pkg/apis/crds/karpenter.sh_machines.yaml
+kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.1/pkg/apis/crds/karpenter.k8s.aws_awsnodetemplates.yaml
 ```
 
 ### How Do We Break Incompatibility?
@@ -189,7 +189,7 @@ kubectl delete mutatingwebhookconfigurations defaulting.webhook.karpenter.sh
 
 * The karpenter webhook and controller containers are combined into a single binary, which requires changes to the helm chart. If your Karpenter installation (helm or otherwise) currently customizes the karpenter webhook, your deployment tooling may require minor changes.
 * Karpenter now supports native interruption handling. If you were previously using Node Termination Handler for spot interruption handling and health events, you will need to remove the component from your cluster before enabling `aws.interruptionQueueName`. For more details on Karpenter's interruption handling, see the [Interruption Handling Docs]({{< ref "./concepts/deprovisioning/#interruption" >}}). For common questions on the migration process, see the [FAQ]({{< ref "./faq/#interruption-handling" >}})
 * Instance category defaults are now explicitly persisted in the Provisioner, rather than handled implicitly in memory. By default, Provisioners will limit instance category to c,m,r. If any instance type constraints are applied, it will override this default. If you have created Provisioners in the past with unconstrained instance type, family, or category, Karpenter will now more flexibly use instance types than before. If you would like to apply these constraints, they must be included in the Provisioner CRD.
-* Karpenter CRD raw YAML URLs have migrated from `https://raw.githubusercontent.com/aws/karpenter/v0.28.0/charts/karpenter/crds/...` to `https://raw.githubusercontent.com/aws/karpenter/v0.28.0/pkg/apis/crds/...`. If you reference static Karpenter CRDs or rely on `kubectl replace -f` to apply these CRDs from their remote location, you will need to migrate to the new location.
+* Karpenter CRD raw YAML URLs have migrated from `https://raw.githubusercontent.com/aws/karpenter/v0.28.1/charts/karpenter/crds/...` to `https://raw.githubusercontent.com/aws/karpenter/v0.28.1/pkg/apis/crds/...`. If you reference static Karpenter CRDs or rely on `kubectl replace -f` to apply these CRDs from their remote location, you will need to migrate to the new location.
 * Pods without an ownerRef (also called "controllerless" or "naked" pods) will now be evicted by default during node termination and consolidation. Users can prevent controllerless pods from being voluntarily disrupted by applying the `karpenter.sh/do-not-evict: "true"` annotation to the pods in question.
 * The following CLI options/environment variables are now removed and replaced in favor of pulling settings dynamically from the [`karpenter-global-settings`]({{}}) ConfigMap. See the [Settings docs]({{}}) for more details on configuring the new values in the ConfigMap.
diff --git a/website/content/en/v0.28/concepts/provisioners.md b/website/content/en/v0.28/concepts/provisioners.md
index 77f3cfb66e1f..9c092798189e 100644
--- a/website/content/en/v0.28/concepts/provisioners.md
+++ b/website/content/en/v0.28/concepts/provisioners.md
@@ -112,7 +112,7 @@ spec:
     cpuCFSQuota: true
     podsPerCore: 2
     maxPods: 20
-    
+
   # Resource limits constrain the total size of the cluster.
   # Limits prevent Karpenter from creating new instances once the limit is exceeded.
 
@@ -258,7 +258,7 @@ For more information on weighting Provisioners, see the [Weighting Provisioners
 ## spec.kubeletConfiguration
 Karpenter provides the ability to specify a few additional Kubelet args. These are all optional and provide support for
-additional customization and use cases. Adjust these only if you know you need to do so. For more details on kubelet configuration arguments, [see the KubeletConfiguration API specification docs](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/). 
+additional customization and use cases. Adjust these only if you know you need to do so. For more details on kubelet configuration arguments, [see the KubeletConfiguration API specification docs](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/).
 The implemented fields are a subset of the full list of upstream kubelet configuration arguments. Please cut an issue if you'd like to see another field implemented.
 
 ```yaml
 spec:
@@ -393,7 +393,7 @@ Bottlerocket AMIFamily currently does not support `podsPerCore` configuration. I
 
 ## spec.limits.resources
 
-The provisioner spec includes a limits section (`spec.limits.resources`), which constrains the maximum amount of resources that the provisioner will manage. 
+The provisioner spec includes a limits section (`spec.limits.resources`), which constrains the maximum amount of resources that the provisioner will manage.
 
 Karpenter supports limits of any resource type reported by your cloudprovider. It limits instance types when scheduling to those that will not exceed the specified limits. If a limit has been exceeded, nodes provisioning is prevented until some nodes have been terminated.
 
@@ -409,7 +409,7 @@ spec:
      values: ["spot"]
   limits:
    resources:
-      cpu: 1000 
+      cpu: 1000
      memory: 1000Gi
      nvidia.com/gpu: 2
 ```
@@ -450,7 +450,7 @@ kind: Provisioner
 metadata:
   name: gpu
 spec:
-  consolidation: 
+  consolidation:
     enabled: true
   requirements:
   - key: node.kubernetes.io/instance-type
@@ -473,7 +473,7 @@ kind: Provisioner
 metadata:
   name: cilium-startup
 spec:
-  consolidation: 
+  consolidation:
     enabled: true
   startupTaints:
   - key: node.cilium.io/agent-not-ready
diff --git a/website/content/en/v0.28/concepts/threat-model.md b/website/content/en/v0.28/concepts/threat-model.md
index a2e532b8a5c9..2fd7d2be5601 100644
--- a/website/content/en/v0.28/concepts/threat-model.md
+++ b/website/content/en/v0.28/concepts/threat-model.md
@@ -31,11 +31,11 @@ A Cluster Developer has the ability to create pods via Deployments, ReplicaSets,
 
 Karpenter has permissions to create and manage cloud instances. Karpenter has Kubernetes API permissions to create, update, and remove nodes, as well as evict pods. For a full list of the permissions, see the RBAC rules in the helm chart template. Karpenter also has AWS IAM permissions to create instances with IAM roles.
 
-* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/aggregate-clusterrole.yaml)
-* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/clusterrole-core.yaml)
-* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/clusterrole.yaml)
-* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/rolebinding.yaml)
-* [role.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/role.yaml)
+* [aggregate-clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/aggregate-clusterrole.yaml)
+* [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/clusterrole-core.yaml)
+* [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/clusterrole.yaml)
+* [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/rolebinding.yaml)
+* [role.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/role.yaml)
 
 ## Assumptions
 
diff --git a/website/content/en/v0.28/faq.md b/website/content/en/v0.28/faq.md
index fa3a03d96807..464ec50a774a 100644
--- a/website/content/en/v0.28/faq.md
+++ b/website/content/en/v0.28/faq.md
@@ -15,7 +15,7 @@ AWS is the first cloud provider supported by Karpenter, although it is designed
 
 ### Can I write my own cloud provider for Karpenter?
 Yes, but there is no documentation yet for it.
-Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.28.0/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
+Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-core/tree/v0.28.1/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
 
 ### What operating system nodes does Karpenter deploy?
 By default, Karpenter uses Amazon Linux 2 images.
@@ -28,7 +28,7 @@ Karpenter is flexible to multi architecture configurations using [well known lab
 
 ### What RBAC access is required?
 All of the required RBAC rules can be found in the helm chart template.
-See [clusterrolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/clusterrolebinding.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.28.0/charts/karpenter/templates/role.yaml) files for details.
+See [clusterrolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/clusterrolebinding.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v0.28.1/charts/karpenter/templates/role.yaml) files for details.
 
 ### Can I run Karpenter outside of a Kubernetes cluster?
 Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API.
diff --git a/website/content/en/v0.28/getting-started/getting-started-with-karpenter/_index.md b/website/content/en/v0.28/getting-started/getting-started-with-karpenter/_index.md
index f019ee36028c..68617d2f0158 100644
--- a/website/content/en/v0.28/getting-started/getting-started-with-karpenter/_index.md
+++ b/website/content/en/v0.28/getting-started/getting-started-with-karpenter/_index.md
@@ -44,7 +44,7 @@ authenticate properly by running `aws sts get-caller-identity`.
 After setting up the tools, set the Karpenter version number:
 
 ```bash
-export KARPENTER_VERSION=v0.28.0
+export KARPENTER_VERSION=v0.28.1
 ```
 
 Then set the following environment variable:
diff --git a/website/content/en/v0.28/getting-started/getting-started-with-karpenter/grafana-values.yaml b/website/content/en/v0.28/getting-started/getting-started-with-karpenter/grafana-values.yaml
index 2b4528ceb495..4568307c170f 100644
--- a/website/content/en/v0.28/getting-started/getting-started-with-karpenter/grafana-values.yaml
+++ b/website/content/en/v0.28/getting-started/getting-started-with-karpenter/grafana-values.yaml
@@ -22,6 +22,6 @@ dashboardProviders:
 dashboards:
   default:
     capacity-dashboard:
-      url: https://karpenter.sh/v0.28.0/getting-started/getting-started-with-karpenter/karpenter-capacity-dashboard.json
+      url: https://karpenter.sh/v0.28.1/getting-started/getting-started-with-karpenter/karpenter-capacity-dashboard.json
     performance-dashboard:
-      url: https://karpenter.sh/v0.28.0/getting-started/getting-started-with-karpenter/karpenter-performance-dashboard.json
+      url: https://karpenter.sh/v0.28.1/getting-started/getting-started-with-karpenter/karpenter-performance-dashboard.json
diff --git a/website/content/en/v0.28/getting-started/migrating-from-cas/_index.md b/website/content/en/v0.28/getting-started/migrating-from-cas/_index.md
index 242e711a8e53..c805c8c72913 100644
--- a/website/content/en/v0.28/getting-started/migrating-from-cas/_index.md
+++ b/website/content/en/v0.28/getting-started/migrating-from-cas/_index.md
@@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group.
 First set the Karpenter release you want to deploy.
 
 ```bash
-export KARPENTER_VERSION=v0.28.0
+export KARPENTER_VERSION=v0.28.1
 ```
 
 We can now generate a full Karpenter deployment yaml from the helm chart.
@@ -134,7 +134,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t
 
 ## Create default provisioner
 We need to create a default provisioner so Karpenter knows what types of nodes we want for unscheduled workloads.
-You can refer to some of the [example provisioners](https://github.com/aws/karpenter/tree/v0.28.0/examples/provisioner) for specific needs.
+You can refer to some of the [example provisioners](https://github.com/aws/karpenter/tree/v0.28.1/examples/provisioner) for specific needs.
 
 {{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step11-create-provisioner.sh" language="bash" %}}
 
diff --git a/website/content/en/v0.28/upgrade-guide.md b/website/content/en/v0.28/upgrade-guide.md
index 803ca3f197f1..b3cab3d53490 100644
--- a/website/content/en/v0.28/upgrade-guide.md
+++ b/website/content/en/v0.28/upgrade-guide.md
@@ -51,9 +51,9 @@ If you get the error `invalid ownership metadata; label validation error:` while
 In general, you can reapply the CRDs in the `crds` directory of the Karpenter helm chart:
 
 ```shell
-kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.0/pkg/apis/crds/karpenter.sh_provisioners.yaml
-kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.0/pkg/apis/crds/karpenter.sh_machines.yaml
-kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.0/pkg/apis/crds/karpenter.k8s.aws_awsnodetemplates.yaml
+kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.1/pkg/apis/crds/karpenter.sh_provisioners.yaml
+kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.1/pkg/apis/crds/karpenter.sh_machines.yaml
+kubectl apply -f https://raw.githubusercontent.com/aws/karpenter/v0.28.1/pkg/apis/crds/karpenter.k8s.aws_awsnodetemplates.yaml
 ```
 
 ### How Do We Break Incompatibility?
@@ -189,7 +189,7 @@ kubectl delete mutatingwebhookconfigurations defaulting.webhook.karpenter.sh
 
 * The karpenter webhook and controller containers are combined into a single binary, which requires changes to the helm chart. If your Karpenter installation (helm or otherwise) currently customizes the karpenter webhook, your deployment tooling may require minor changes.
 * Karpenter now supports native interruption handling. If you were previously using Node Termination Handler for spot interruption handling and health events, you will need to remove the component from your cluster before enabling `aws.interruptionQueueName`. For more details on Karpenter's interruption handling, see the [Interruption Handling Docs]({{< ref "./concepts/deprovisioning/#interruption" >}}). For common questions on the migration process, see the [FAQ]({{< ref "./faq/#interruption-handling" >}})
 * Instance category defaults are now explicitly persisted in the Provisioner, rather than handled implicitly in memory. By default, Provisioners will limit instance category to c,m,r. If any instance type constraints are applied, it will override this default. If you have created Provisioners in the past with unconstrained instance type, family, or category, Karpenter will now more flexibly use instance types than before. If you would like to apply these constraints, they must be included in the Provisioner CRD.
-* Karpenter CRD raw YAML URLs have migrated from `https://raw.githubusercontent.com/aws/karpenter/v0.28.0/charts/karpenter/crds/...` to `https://raw.githubusercontent.com/aws/karpenter/v0.28.0/pkg/apis/crds/...`. If you reference static Karpenter CRDs or rely on `kubectl replace -f` to apply these CRDs from their remote location, you will need to migrate to the new location.
+* Karpenter CRD raw YAML URLs have migrated from `https://raw.githubusercontent.com/aws/karpenter/v0.28.1/charts/karpenter/crds/...` to `https://raw.githubusercontent.com/aws/karpenter/v0.28.1/pkg/apis/crds/...`. If you reference static Karpenter CRDs or rely on `kubectl replace -f` to apply these CRDs from their remote location, you will need to migrate to the new location.
 * Pods without an ownerRef (also called "controllerless" or "naked" pods) will now be evicted by default during node termination and consolidation. Users can prevent controllerless pods from being voluntarily disrupted by applying the `karpenter.sh/do-not-evict: "true"` annotation to the pods in question.
 * The following CLI options/environment variables are now removed and replaced in favor of pulling settings dynamically from the [`karpenter-global-settings`]({{}}) ConfigMap. See the [Settings docs]({{}}) for more details on configuring the new values in the ConfigMap.
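
Editor's note: aside from the mechanical v0.28.0 to v0.28.1 string bump and trailing-whitespace cleanup, the one behavioral change in this patch is the `GIT_TAG` line in `hack/release/prepare-website.sh`, which lets a caller pre-set the tag instead of always deriving it from `git describe`. Below is a minimal sketch of the `${VAR:-default}` expansion that line relies on; `describe_tag` is a hypothetical stand-in for the real `git describe` invocation, not part of the patch.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical stand-in for the real lookup in prepare-website.sh:
#   git describe --exact-match --tags || echo "none"
describe_tag() { echo "none"; }

# Before the patch, GIT_TAG was always recomputed. After the patch,
# ${GIT_TAG:-...} keeps a caller-supplied GIT_TAG and only falls back
# to the lookup when the variable is unset or empty.
unset GIT_TAG || true
GIT_TAG=${GIT_TAG:-$(describe_tag)}
echo "without override: ${GIT_TAG}"

GIT_TAG=v0.28.1
GIT_TAG=${GIT_TAG:-$(describe_tag)}
echo "with override: ${GIT_TAG}"
```

This pattern is what allows an invocation such as `GIT_TAG=v0.28.1 ./hack/release/prepare-website.sh` to bypass the `git describe` lookup, while plain invocations keep their previous behavior.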