fix yaml rule templates #139

Merged
merged 1 commit on Jan 15, 2025
fix yaml rule templates
asiyani committed Jan 15, 2025
commit 339029366d66fe1b285752cd21ee998a7a18fb96
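Every change in this commit is the same one-character fix: the `dashboard` annotations use the `<url|text>` link form (Slack-style link syntax, presumably consumed by the alert notification templates), and a stray `"` had been left between the URL and the `|link` label, corrupting the rendered link. A minimal before/after sketch of the pattern, using a placeholder host and dashboard ID rather than the real templated `$ENVIRONMENT`/`$PROVIDER` URLs:

```yaml
annotations:
  # old (broken): note the stray " left between the URL and |link
  #   dashboard: <https://grafana.example.uw.systems/d/abc123/example-dashboard"|link>
  # new (fixed): quote removed so the annotation renders as a working "link"
  dashboard: <https://grafana.example.uw.systems/d/abc123/example-dashboard|link>
```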
2 changes: 1 addition & 1 deletion capacity-experiments/capacity-experiments.yaml.tmpl
@@ -11,7 +11,7 @@ groups:
team: infra
annotations:
summary: "AZ {{ $labels.zone}} is running out of memory for pods"
-dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/Mig_eDNVz/kubernetes-cluster-utilization"|link>
+dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/Mig_eDNVz/kubernetes-cluster-utilization|link>
- alert: AvailabilityZoneRunningOutOfMemory99for10m
expr: avg(node_memory_working_set_bytes/on(node)(kube_node_status_capacity{resource="memory"} - on (node) node_eviction_threshold) * on(node) group_left(zone) kube_node_labels{role="worker"}) by (zone) > 0.99
for: 10m
2 changes: 1 addition & 1 deletion common/stock/terraform_sync.yaml.tmpl
@@ -22,6 +22,6 @@ groups:

If module is using kube backend and the state is locked you can remove the lock with the following command:
`kubectl --context={{ $labels.kubernetes_cluster }} -n {{ $labels.namespace }} patch lease lock-tfstate-default-{{ $labels.module }} --type=json -p='[{"op":"remove","path":"/spec/holderIdentity"}]'`
-dashboard: <https://terraform-applier-system.$ENVIRONMENT.$PROVIDER.uw.systems/#{{$labels.namespace}}-{{$labels.module}}"|link>
+dashboard: <https://terraform-applier-system.$ENVIRONMENT.$PROVIDER.uw.systems/#{{$labels.namespace}}-{{$labels.module}}|link>
logs: 'https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\"} |=\"{{$labels.module}}\""}]'
tf_applier: "https://terraform-applier-system.$ENVIRONMENT.$PROVIDER.uw.systems/"
4 changes: 2 additions & 2 deletions common/terraform-applier.yaml.tmpl
@@ -17,7 +17,7 @@ groups:
Please also collect `goroutine` info before restart for debugging the issue.
`https://terraform-applier-system.$ENVIRONMENT.$PROVIDER.uw.systems/debug/pprof/goroutine?debug=1`
command: "`kubectl --context $ENVIRONMENT-$PROVIDER --namespace {{ $labels.kubernetes_namespace }} rollout restart sts {{ $labels.kubernetes_name }}`"
-dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/9kH3Tk0Zzd/terraform-applier-v2?orgId=1&refresh=5s"|link>
+dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/9kH3Tk0Zzd/terraform-applier-v2?orgId=1&refresh=5s|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\",kubernetes_pod_name=\"{{$labels.kubernetes_pod_name}}\"}"}]|link>
- alert: TerraformApplierGitMirrorError
expr: time() - max by (repo) (terraform_applier_git_last_mirror_timestamp{}) > 600
@@ -27,5 +27,5 @@ groups:
annotations:
summary: "terraform-applier has not been able to fetch {{ $labels.repo }} repository in the last 10m"
impact: "terraform-applier will not be running modules from this repository"
-dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/9kH3Tk0Zzd/terraform-applier-v2?orgId=1&refresh=5s"|link>
+dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/9kH3Tk0Zzd/terraform-applier-v2?orgId=1&refresh=5s|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"$ENVIRONMENT-$PROVIDER\",kubernetes_namespace=\"sys-terraform-applier\"}"}]|link>
12 changes: 6 additions & 6 deletions common/thanos.yaml.tmpl
@@ -24,7 +24,7 @@ groups:
team: infra
annotations:
summary: "Thanos Rule {{$labels.kubernetes_namespace}}/{{$labels.kubernetes_pod_name}} is failing to queue alerts"
-dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv"|link>
+dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\",kubernetes_pod_name=~\"{{$labels.kubernetes_pod_name}}\"}"}]|link>
- alert: ThanosRuleSenderIsFailingAlerts
expr: sum by (kubernetes_cluster,kubernetes_namespace, kubernetes_pod_name) (rate(thanos_alert_sender_alerts_dropped_total{}[5m])) > 0
@@ -33,7 +33,7 @@ groups:
team: infra
annotations:
summary: "Thanos Rule {{$labels.kubernetes_namespace}}/{{$labels.kubernetes_pod_name}} is failing to send alerts to alertmanager."
-dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv"|link>
+dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\",kubernetes_pod_name=~\"{{$labels.kubernetes_pod_name}}\"}"}]|link>
- alert: ThanosNoRuleEvaluations
expr: |
@@ -45,7 +45,7 @@ groups:
team: infra
annotations:
summary: "Thanos Rule {{$labels.kubernetes_namespace}}/{{$labels.kubernetes_pod_name}} did not perform any rule evaluations in the past 10 minutes."
-dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv"|link>
+dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\",kubernetes_pod_name=~\"{{$labels.kubernetes_pod_name}}\"}"}]|link>
- alert: ThanosRuleEvaluationLatencyHigh
expr: |
@@ -60,7 +60,7 @@ groups:
annotations:
summary: "Thanos rule {{$labels.kubernetes_namespace}}/{{$labels.kubernetes_pod_name}} has higher evaluation latency than interval for more then 10 group rules"
impact: "Slow evaluation can result in missed evaluations"
-dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv"|link>
+dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\",kubernetes_pod_name=~\"{{$labels.kubernetes_pod_name}}\"}"}]|link>
- alert: ThanosRuleHighRuleEvaluationFailures
expr: |
@@ -75,7 +75,7 @@ groups:
team: infra
annotations:
summary: "Thanos Rule {{$labels.kubernetes_namespace}}/{{$labels.kubernetes_pod_name}} is failing to evaluate more then 10 group rules."
-dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv"|link>
+dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\",kubernetes_pod_name=~\"{{$labels.kubernetes_pod_name}}\"}"}]|link>
- alert: ThanosRuleNoEvaluationFor10Intervals
expr: |
@@ -89,7 +89,7 @@ groups:
summary: Thanos Rule {{$labels.kubernetes_namespace}}/{{$labels.kubernetes_name}} has rule groups that did not evaluate for 10 intervals.
description: The rule group {{$labels.rule_group}} did not evaluate for at least 10x of their expected interval.
impact: "Alerts are not evaluated hence they wont be fired even if conditions are met"
-dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv"|link>
+dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5sv|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\",kubernetes_pod_name=~\"{{$labels.kubernetes_name}}.*\"}"}]|link>
- alert: ThanosBucketOperationsFailing
expr: |