Do you have control over alert_relabel_configs?

You could do

alert_relabel_configs:
# Default action is "replace": when the alertname label matches the
# regex (anchored, so it must match exactly), overwrite the severity
# label with "info" before the alert is sent to Alertmanager.
# Other alerts are left untouched.
- source_labels: [alertname]
  regex: KubeJobFailed
  target_label: severity
  replacement: info
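
If you are on kube-prometheus-stack, you can probably wire this in through the chart's values.yaml rather than editing the Prometheus config directly. A sketch, assuming the chart exposes prometheus.prometheusSpec.additionalAlertRelabelConfigs (check the field name against your chart version):

prometheus:
  prometheusSpec:
    additionalAlertRelabelConfigs:
      # Rewrite only KubeJobFailed's severity on its way to
      # Alertmanager; the rule definition itself is unchanged.
      - source_labels: [alertname]
        regex: KubeJobFailed
        target_label: severity
        replacement: info

Note that alert relabeling only changes the labels Alertmanager (and hence PagerDuty) sees; the alert as evaluated and displayed inside Prometheus keeps its original severity of "warning".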

On 04 Jul 22:27, John Swarbrick wrote:
> Hi Julius - thanks for replying!
> 
> Yes, I'm using the latest 47.3.0 version of kube-prometheus-stack.
> 
> This appears to ship a set of Prometheus rules which includes KubeJobFailed:
> 
> https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/templates/prometheus/rules-1.14/kubernetes-apps.yaml#L404C24-L428
> 
> The severity of the alert is set to "warning" but I want to override that
> severity and set it to "info" by adding a configuration in the top-level
> values.yaml, maybe under the alertmanagerSpec: section.
> 
> However I searched everywhere and couldn't find a method of modifying just
> the severity level of a named alert without changing any other parameters.
> 
> While the example here is KubeJobFailed this could apply to any alert we
> need to override.
> 
> Dave
> 
> On Mon, 3 Jul 2023 at 13:56, Julius Volz <[email protected]> wrote:
> 
> > Hi David,
> >
> > Prometheus does not have built-in alerts, but I assume you are using some
> > framework around Prometheus that configures a "KubeJobFailed" alert? Is it
> > kube-prometheus with the kubernetes-mixin (since that includes an example
> > alert by that name:
> > https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/003ba5eadfbd69817d1215952133d3ecf99fbd92/alerts/apps_alerts.libsonnet#L268-L281)?
> > If you share more about your kube-prometheus or kubernetes-mixin jsonnet
> > pipeline, I'm sure someone could help. Although at that point it's probably
> > more of a jsonnet question than a Prometheus one.
> >
> > Kind regards,
> > Julius
> >
> > On Mon, Jul 3, 2023 at 9:11 AM David Dean <[email protected]> wrote:
> >
> >> Hi -
> >>
> >> How can I modify the severity of built in alert KubeJobFailed from
> >> warning to info?
> >>
> >> I don't want to change anything else about the built in alert, only the
> >> severity.
> >>
> >> And I only want to change the one alert, KubeJobFailed, not any other.
> >>
> >> I need to change the severity to info so PagerDuty will not generate
> >> alerts outside working hours. In our scenario, Kubernetes Jobs don't need
> >> an immediate callout.
> >>
> >> I've spent ages Googling and checking the docs but cannot find out how to
> >> do it anywhere!
> >>
> >> Thanks, Dave
> >>
> >> --
> >> You received this message because you are subscribed to the Google Groups
> >> "Prometheus Users" group.
> >> To unsubscribe from this group and stop receiving emails from it, send an
> >> email to [email protected].
> >> To view this discussion on the web visit
> >> https://groups.google.com/d/msgid/prometheus-users/2c6284f5-35a6-43d9-b892-41803d8bd4d9n%40googlegroups.com
> >> <https://groups.google.com/d/msgid/prometheus-users/2c6284f5-35a6-43d9-b892-41803d8bd4d9n%40googlegroups.com?utm_medium=email&utm_source=footer>
> >> .
> >>
> >
> >
> > --
> > Julius Volz
> > PromLabs - promlabs.com
> >
> 

-- 
Julien Pivotto
@roidelapluie