add rule group related alerts #111

Merged · 5 commits · Sep 3, 2024
28 changes: 26 additions & 2 deletions common/metrics.yaml.tmpl
@@ -4,8 +4,8 @@
groups:
- name: metrics
rules:
# thanos-compact is a slow crasher, so it needs a more sensitive
# "ContainerRestartingOften" alert than the stock one
- alert: ThanosCompactRestartingOften
expr: increase(kube_pod_container_status_restarts_total{container="thanos-compact"}[2h]) > 3
labels:
@@ -16,3 +16,27 @@ groups:
action: "Check pod status and container logs to figure out if there's a problem"
command: "`kubectl --context $ENVIRONMENT-$PROVIDER --namespace {{ $labels.namespace }} describe pod {{ $labels.pod }}`"
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.namespace}}\",app_kubernetes_io_name=\"{{$labels.label_app_kubernetes_io_name}}\"}"}]|link>
- alert: ThanosRuleGroupEvaluationsTooSlow
expr: |
count by (kubernetes_cluster, kubernetes_namespace, kubernetes_name) (
sum by (kubernetes_cluster, kubernetes_namespace, kubernetes_name, rule_group) (prometheus_rule_group_last_duration_seconds{})
>
sum by (kubernetes_cluster, kubernetes_namespace, kubernetes_name, rule_group) (prometheus_rule_group_interval_seconds{})
) > 5
labels:
team: infra
annotations:
summary: "Thanos rule group evaluation is too slow for more then 5 group rules in {{$labels.namespace}}/{{$labels.kubernetes_name}}"
impact: "Slow evaluation can result in missed evaluations"
dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5s|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\",kubernetes_pod_name=~\"{{$labels.kubernetes_name}}.*\"}"}]|link>
- alert: ThanosRuleGroupEvaluationsMissed
expr: |
sum by (kubernetes_cluster, kubernetes_namespace, kubernetes_name, rule_group) (increase(prometheus_rule_group_iterations_missed_total{}[5m])) > 0
labels:
team: infra
annotations:
summary: "Thanos rule group {{$labels.rule_group}} evaluation is missed in {{$labels.namespace}}/{{$labels.kubernetes_name}}"
impact: "Alerts are not evaluated hence they wont be fired even if conditions are met"
dashboard: <https://grafana.$ENVIRONMENT.$PROVIDER.uw.systems/d/35da848f5f92b2dc612e0c3a0577b8a1/thanos-rule?refresh=5s|link>
logs: <https://grafana.$ENVIRONMENT.aws.uw.systems/explore?left=["now-1h","now","Loki",{"expr":"{kubernetes_cluster=\"{{$labels.kubernetes_cluster}}\",kubernetes_namespace=\"{{$labels.kubernetes_namespace}}\",kubernetes_pod_name=~\"{{$labels.kubernetes_name}}.*\"}"}]|link>
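
As a sanity check on the new ThanosRuleGroupEvaluationsTooSlow threshold, here is a minimal promtool unit-test sketch. It assumes the template has been rendered to a plain rule file named metrics.yaml; the test file name, the cluster/namespace labels, and the series values are illustrative and not part of this change.

# thanos-rule-alerts_test.yaml (hypothetical) -- run with: promtool test rules thanos-rule-alerts_test.yaml
rule_files:
  - metrics.yaml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      # One rule group whose last evaluation (45s) exceeds its interval (30s).
      - series: 'prometheus_rule_group_last_duration_seconds{kubernetes_cluster="dev",kubernetes_namespace="sys-mon",kubernetes_name="thanos-rule",rule_group="example.rules"}'
        values: '45+0x60'
      - series: 'prometheus_rule_group_interval_seconds{kubernetes_cluster="dev",kubernetes_namespace="sys-mon",kubernetes_name="thanos-rule",rule_group="example.rules"}'
        values: '30+0x60'
    alert_rule_test:
      # Only one group is slow and the alert requires more than 5, so nothing should fire.
      - eval_time: 30m
        alertname: ThanosRuleGroupEvaluationsTooSlow
        exp_alerts: []

The same test file could be extended with an increasing prometheus_rule_group_iterations_missed_total series to exercise ThanosRuleGroupEvaluationsMissed.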