
renovate bot commented Oct 9, 2025

This PR contains the following updates:

Package                         Update   Change
kube-prometheus-stack (source)  major    77.14.0 -> 78.1.0

Warning: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

prometheus-community/helm-charts (kube-prometheus-stack)

v78.1.0

Compare Source

kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6226

Full Changelog: prometheus-community/helm-charts@prometheus-pgbouncer-exporter-0.9.0...kube-prometheus-stack-78.1.0

v78.0.0

Compare Source

kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-27.40.0...kube-prometheus-stack-78.0.0


Configuration

📅 Schedule: Branch creation - "after 9am,before 5pm" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

♻ Rebasing: Whenever the PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


github-actions bot commented Oct 9, 2025

--- kubernetes/apps/monitoring/kube-prometheus-stack/app Kustomization: flux-system/cluster-apps-kube-prometheus-stack HelmRelease: monitoring/kube-prometheus-stack

+++ kubernetes/apps/monitoring/kube-prometheus-stack/app Kustomization: flux-system/cluster-apps-kube-prometheus-stack HelmRelease: monitoring/kube-prometheus-stack

@@ -14,13 +14,13 @@

     spec:
       chart: kube-prometheus-stack
       sourceRef:
         kind: HelmRepository
         name: prometheus-community
         namespace: flux-system
-      version: 77.14.0
+      version: 78.1.0
   install:
     crds: CreateReplace
     createNamespace: true
     remediation:
       retries: 3
     timeout: 30m
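Note: this is a major chart bump, and 78.x ships updated Prometheus Operator CRDs. The hunk above shows install.crds: CreateReplace, but whether CRDs are also replaced on upgrade depends on spec.upgrade.crds, which sits outside this hunk. A minimal sketch of the relevant HelmRelease fields (Flux v2 API; the upgrade block is an assumption about how the repo may want to configure it, not something visible in the diff):

apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kube-prometheus-stack
  namespace: monitoring
spec:
  install:
    crds: CreateReplace   # shown in the diff above
  upgrade:
    crds: CreateReplace   # assumed: replace CRDs on upgrade too, so the
                          # 78.x CRD schema changes land with this bump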


github-actions bot commented Oct 9, 2025

--- HelmRelease: monitoring/kube-prometheus-stack ClusterRole: monitoring/kube-prometheus-stack-operator

+++ HelmRelease: monitoring/kube-prometheus-stack ClusterRole: monitoring/kube-prometheus-stack-operator

@@ -27,16 +27,19 @@

   - prometheusagents/finalizers
   - prometheusagents/status
   - thanosrulers
   - thanosrulers/finalizers
   - thanosrulers/status
   - scrapeconfigs
+  - scrapeconfigs/status
   - servicemonitors
   - servicemonitors/status
   - podmonitors
+  - podmonitors/status
   - probes
+  - probes/status
   - prometheusrules
   verbs:
   - '*'
 - apiGroups:
   - apps
   resources:
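The three new */status entries accompany the operator bump to v0.86.0 below: the ClusterRole now permits writing the status subresource of ScrapeConfig, PodMonitor, and Probe objects, which suggests the operator reports reconciliation status back onto them. For context, a minimal ScrapeConfig of the kind this applies to (the name and target are hypothetical placeholders, not from this repo):

apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: example-static-targets   # hypothetical
  namespace: monitoring
spec:
  staticConfigs:
    - targets:
        - node-a.internal:9100   # hypothetical
# If the operator writes a .status block back to objects like this one,
# it needs the new scrapeconfigs/status rule in the ClusterRole.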
--- HelmRelease: monitoring/kube-prometheus-stack Deployment: monitoring/kube-prometheus-stack-operator

+++ HelmRelease: monitoring/kube-prometheus-stack Deployment: monitoring/kube-prometheus-stack-operator

@@ -31,20 +31,20 @@

         app: kube-prometheus-stack-operator
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator
     spec:
       containers:
       - name: kube-prometheus-stack
-        image: quay.io/prometheus-operator/prometheus-operator:v0.85.0
+        image: quay.io/prometheus-operator/prometheus-operator:v0.86.0
         imagePullPolicy: IfNotPresent
         args:
         - --kubelet-service=kube-system/kube-prometheus-stack-kubelet
         - --kubelet-endpoints=true
         - --kubelet-endpointslice=false
         - --localhost=127.0.0.1
-        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.85.0
+        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.86.0
         - --config-reloader-cpu-request=0
         - --config-reloader-cpu-limit=0
         - --config-reloader-memory-request=0
         - --config-reloader-memory-limit=0
         - --thanos-default-base-image=quay.io/thanos/thanos:v0.39.2
         - --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1
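Both image bumps come from the chart's defaults, which keep the operator and its config-reloader in lockstep at v0.86.0. If these images ever need pinning independently of the chart, the usual values keys look roughly like the sketch below (key names follow kube-prometheus-stack conventions; verify them against the 78.x values.yaml):

# Hypothetical values.yaml override -- the chart defaults already pin
# both images to v0.86.0, so this is only needed to deviate from them.
prometheusOperator:
  image:
    registry: quay.io
    repository: prometheus-operator/prometheus-operator
    tag: v0.86.0
  prometheusConfigReloader:
    image:
      registry: quay.io
      repository: prometheus-operator/prometheus-config-reloader
      tag: v0.86.0   # keep in lockstep with the operator image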
--- HelmRelease: monitoring/kube-prometheus-stack PrometheusRule: monitoring/kube-prometheus-stack-alertmanager.rules

+++ HelmRelease: monitoring/kube-prometheus-stack PrometheusRule: monitoring/kube-prometheus-stack-alertmanager.rules

@@ -21,13 +21,13 @@

           $labels.pod}}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedreload
         summary: Reloading an Alertmanager configuration has failed.
       expr: |-
         # Without max_over_time, failed scrapes could create false negatives, see
         # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
-        max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",namespace="monitoring"}[5m]) == 0
+        max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"}[5m]) == 0
       for: 10m
       labels:
         severity: critical
     - alert: AlertmanagerMembersInconsistent
       annotations:
         description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} has only
@@ -35,30 +35,30 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagermembersinconsistent
         summary: A member of an Alertmanager cluster has not found all other cluster
           members.
       expr: |-
         # Without max_over_time, failed scrapes could create false negatives, see
         # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
-          max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",namespace="monitoring"}[5m])
+          max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"}[5m])
         < on (namespace,service,cluster) group_left
-          count by (namespace,service,cluster) (max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",namespace="monitoring"}[5m]))
+          count by (namespace,service,cluster) (max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"}[5m]))
       for: 15m
       labels:
         severity: critical
     - alert: AlertmanagerFailedToSendAlerts
       annotations:
         description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} failed
           to send {{ $value | humanizePercentage }} of notifications to {{ $labels.integration
           }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedtosendalerts
         summary: An Alertmanager instance failed to send notifications.
       expr: |-
         (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="monitoring"}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="monitoring"}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: warning
     - alert: AlertmanagerClusterFailedToSendAlerts
@@ -68,15 +68,15 @@

           humanizePercentage }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
         summary: All Alertmanager instances in a cluster failed to send notifications
           to a critical integration.
       expr: |-
         min by (namespace,service, integration) (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="monitoring", integration=~`.*`}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring", integration=~`.*`}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="monitoring", integration=~`.*`}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring", integration=~`.*`}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: critical
     - alert: AlertmanagerClusterFailedToSendAlerts
@@ -86,15 +86,15 @@

           humanizePercentage }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
         summary: All Alertmanager instances in a cluster failed to send notifications
           to a non-critical integration.
       expr: |-
         min by (namespace,service, integration) (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="monitoring", integration!~`.*`}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring", integration!~`.*`}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="monitoring", integration!~`.*`}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring", integration!~`.*`}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: warning
     - alert: AlertmanagerConfigInconsistent
@@ -102,13 +102,13 @@

         description: Alertmanager instances within the {{$labels.job}} cluster have
           different configurations.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerconfiginconsistent
         summary: Alertmanager instances within the same cluster have different configurations.
       expr: |-
         count by (namespace,service,cluster) (
-          count_values by (namespace,service,cluster) ("config_hash", alertmanager_config_hash{job="kube-prometheus-stack-alertmanager",namespace="monitoring"})
+          count_values by (namespace,service,cluster) ("config_hash", alertmanager_config_hash{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"})
         )
         != 1
       for: 20m
       labels:
         severity: critical
     - alert: AlertmanagerClusterDown
@@ -119,17 +119,17 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterdown
         summary: Half or more of the Alertmanager instances within the same cluster
           are down.
       expr: |-
         (
           count by (namespace,service,cluster) (
-            avg_over_time(up{job="kube-prometheus-stack-alertmanager",namespace="monitoring"}[5m]) < 0.5
+            avg_over_time(up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"}[5m]) < 0.5
           )
         /
           count by (namespace,service,cluster) (
-            up{job="kube-prometheus-stack-alertmanager",namespace="monitoring"}
+            up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"}
           )
         )
         >= 0.5
       for: 5m
       labels:
         severity: critical
@@ -141,17 +141,17 @@

         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclustercrashlooping
         summary: Half or more of the Alertmanager instances within the same cluster
           are crashlooping.
       expr: |-
         (
           count by (namespace,service,cluster) (
-            changes(process_start_time_seconds{job="kube-prometheus-stack-alertmanager",namespace="monitoring"}[10m]) > 4
+            changes(process_start_time_seconds{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"}[10m]) > 4
           )
         /
           count by (namespace,service,cluster) (
-            up{job="kube-prometheus-stack-alertmanager",namespace="monitoring"}
+            up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="monitoring"}
           )
         )
         >= 0.5
       for: 5m
       labels:
         severity: critical
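The common thread in the rule changes above: every Alertmanager expression gains a container="alertmanager" matcher, scoping it to the main container so that metrics from sidecars (e.g. the config-reloader) cannot skew the aggregations. If the stricter matcher ever suppresses an alert a setup relies on, the chart supports disabling individual default rules and shipping a replacement. A sketch using the chart's values (defaultRules.disabled and additionalPrometheusRulesMap follow kube-prometheus-stack conventions, but the key names and the sample rule are assumptions to verify against the 78.x values.yaml):

# Hypothetical values.yaml sketch: turn off one default rule and provide
# a looser custom replacement without the container matcher.
defaultRules:
  disabled:
    AlertmanagerClusterDown: true   # example; any default alert name works
additionalPrometheusRulesMap:
  alertmanager-custom:
    groups:
      - name: alertmanager.custom
        rules:
          - alert: AlertmanagerClusterDownLoose
            expr: |
              count by (namespace, service) (
                avg_over_time(up{job="kube-prometheus-stack-alertmanager"}[5m]) < 0.5
              )
              /
              count by (namespace, service) (
                up{job="kube-prometheus-stack-alertmanager"}
              )
              >= 0.5
            for: 5m
            labels:
              severity: critical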

renovate bot force-pushed the renovate/kube-prometheus-stack-78.x branch from b647e57 to 2dd946e on October 10, 2025 at 21:13
renovate bot changed the title from "feat(helm)!: Update chart kube-prometheus-stack to 78.0.0" to "feat(helm)!: Update chart kube-prometheus-stack to 78.1.0" on Oct 10, 2025