I did move the second 'receiver' under the bullet for the first route, as you
suggested. Yet some minor mistake is still occurring, so the change is not
reflected in Alertmanager.

Here's my config:

global:
  resolve_timeout: 5m
receivers:
  - name: pdmso_alerts
    webhook_configs:
      - url: "http://prometheus-msteams.monitoring.svc.cluster.local:2000/pdmsoalert"
        send_resolved: true
  - name: default_receiver_test
    webhook_configs:
      - url: "http://prometheus-msteams.monitoring.svc.cluster.local:2000/test"
        send_resolved: true
route:
  group_by:
    - namespace
  group_interval: 5m
  group_wait: 30s
  repeat_interval: 3h
  receiver: default_receiver_test
  routes:
    - matchers:
        - alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|HostHighCpuLoad|KubeNodeNotReady"
    receiver: pdmso_alerts



Thanks & Regards,


On Mon, 27 Feb 2023, 8:34 pm Brian Candler, <[email protected]> wrote:

> On Monday, 27 February 2023 at 13:22:04 UTC Sampada Thorat wrote:
>
> Hello Brian, I tried your change, yet my Alertmanager isn't picking up the
> config changes and still shows the older config. Can you have a look?
>
>
> You mentioned ConfigMap, which suggests that you are deploying Prometheus
> on a Kubernetes cluster.  It looks like your problem is primarily with
> Kubernetes, not Prometheus.
>
> If you deployed Prometheus using one of the various third-party Helm
> charts, then you could ask on the tracker for that Helm chart.  They might
> be able to tell you how it's supposed to work if you change the ConfigMap,
> e.g. whether you're supposed to destroy and recreate the pod manually to
> pick up the change.
>
> Alternatively, it might be that your config has errors in it, and
> Alertmanager is sticking with the old config.
>
> I tested the config you posted, by writing it to tmp.yaml and then running
> a standalone instance of alertmanager by hand:
>
> /opt/alertmanager/alertmanager  --config.file tmp.yaml
>  --web.listen-address=:19093 --cluster.listen-address="0.0.0.0:19094"
>
> It gave me the following error:
>
> ts=2023-02-27T14:56:01.186Z caller=coordinator.go:118 level=error
> component=configuration msg="Loading configuration file failed"
> file=tmp.yaml err="yaml: unmarshal errors:\n  line 22: cannot unmarshal
> !!str `alertna...` into []string\n  line 23: field receiver already set
> in type config.plain"
>
> (I would expect such errors to appear in pod logs too)
>
> It's complaining that you have duplicate values for the same "receiver"
> key:
>
> route:
>   ...
>   *receiver:* default_receiver_test
>   ...
>   *receiver:* pdmso_alerts
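To see why strictness matters here: many lenient YAML parsers silently keep the last of two duplicate keys, whereas Alertmanager's strict Go unmarshalling rejects the file outright. A small sketch with PyYAML (a third-party library, assumed installed):

```python
import yaml  # PyYAML -- third-party, assumed installed

# Two 'receiver:' keys at the same mapping level, mirroring the broken config.
doc = """
route:
  receiver: default_receiver_test
  receiver: pdmso_alerts
"""

cfg = yaml.safe_load(doc)

# PyYAML quietly keeps the last duplicate, so the first receiver is lost;
# Alertmanager's strict parser instead fails with "field receiver already set".
print(cfg["route"]["receiver"])  # -> pdmso_alerts
```

That silent override is exactly why Alertmanager's strict parsing is useful: it surfaces the indentation mistake instead of quietly routing everything to one receiver.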
>
> This is because you did not indent the second 'receiver:' correctly.  It
> has to sit under the bullet point for 'routes:':
>
> route:
>   receiver: default_receiver_test
>   routes:
>   - matchers:
>       - alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|HostHighCpuLoad|KubeNodeNotReady"
> *      ^ dash required here because 'matchers' is a list*
>     *receiver: pdmso_alerts*
> *    ^ should be here, to line up with "matchers" as it's part of the same
> route (list element under "routes")*
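Putting the two corrections together, the whole route section would read like this (a sketch assembled from the fixes above; I have also dropped the duplicated HostHighCpuLoad alternative from the regex, which was harmless but redundant):

```yaml
route:
  group_by:
    - namespace
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 3h
  receiver: default_receiver_test
  routes:
    - matchers:
        - alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|KubeNodeNotReady"
      receiver: pdmso_alerts
```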
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/CAAfscSwKjNEuD90x894Z0vF848TG-3G7Eddw8W_vt_XHz3v%2B2Q%40mail.gmail.com.
