global:
  resolve_timeout: 5m
receivers:
  - name: pdmso_alerts
    webhook_configs:
      - url: "http://prometheus-msteams.monitoring.svc.cluster.local:2000/pdmsoalert"
        send_resolved: true
  - name: default_receiver_test
    webhook_configs:
      - url: "http://prometheus-msteams.monitoring.svc.cluster.local:2000/test"
        send_resolved: true
route:
  group_by:
    - alertname
    - severity
  group_interval: 5m
  group_wait: 30s
  repeat_interval: 3h
  receiver: default_receiver_test
  routes:
  - matchers:
    - alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|HostOutOfMemory|KubeNodeNotReady"
    receiver: pdmso_alerts
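
If Alertmanager keeps showing the old configuration, it most likely has not
reloaded the file yet. A quick way to check and force a reload (a sketch,
assuming the pod is named alertmanager-0 in the monitoring namespace, the
config is mounted at /etc/alertmanager/alertmanager.yaml, and the web port is
the default 9093; adjust the names for your setup):

# Show the configuration Alertmanager actually loaded
kubectl -n monitoring exec alertmanager-0 -- \
  cat /etc/alertmanager/alertmanager.yaml

# Trigger a reload via the reload endpoint (a SIGHUP works too)
kubectl -n monitoring port-forward alertmanager-0 9093 &
curl -X POST http://localhost:9093/-/reload

The Status page of the Alertmanager web UI also shows the currently loaded
configuration, which confirms whether the reload took effect.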

On Monday, February 27, 2023 at 6:52:04 PM UTC+5:30 Sampada Thorat wrote:

> Hello Brian, I tried your change, but my Alertmanager isn't picking up the 
> config changes and still shows the older config. Can you have a look?
>
> global:
>   resolve_timeout: 5m
> receivers:
>   - name: pdmso_alerts
>     webhook_configs:
>       - url: "http://prometheus-msteams.monitoring.svc.cluster.local:2000/pdmsoalert"
>         send_resolved: true
>   - name: default_receiver_test
>     webhook_configs:
>       - url: "http://prometheus-msteams.monitoring.svc.cluster.local:2000/test"
>         send_resolved: true
> route:
>   group_by:
>     - alertname
>     - severity
>   group_interval: 5m
>   group_wait: 30s
>   repeat_interval: 3h
>   receiver: default_receiver_test
>   routes:
>   - matchers:
>     - alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|HostOutOfMemory|KubeNodeNotReady"
>     receiver: pdmso_alerts
>
> On Monday, February 27, 2023 at 3:51:33 PM UTC+5:30 Brian Candler wrote:
>
>> >   routes:
>> >   - matchers:
>> >       alertname:['HostOutOfDiskSpace','HostHighCpuLoad','HostOutOfMemory','KubeNodeNotReady']
>>
>> That's invalid: Alertmanager should not even start. I tested your 
>> config, and I got the following error:
>>
>> ts=2023-02-27T10:17:54.702Z caller=coordinator.go:118 level=error component=configuration msg="Loading configuration file failed" file=tmp.yaml err="yaml: unmarshal errors:\n  line 22: cannot unmarshal !!str `alertna...` into []string"
>>
>> 'matchers' is a list of strings, not a map.  This should work:
>>
>> route:
>>   routes:
>>   - matchers:
>>     - alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|HostOutOfMemory|KubeNodeNotReady"
>>     receiver: elevate_alerts
>>
>> See:
>> https://prometheus.io/docs/alerting/latest/configuration/#matcher
>> https://prometheus.io/docs/alerting/latest/configuration/#example
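>>
>> You can also validate the file before deploying it; a quick sketch,
>> assuming the amtool binary shipped with the Alertmanager release is on
>> your PATH:
>>
>> amtool check-config alertmanager.yaml
>>
>> On success it summarises what it found (global config, route, receivers);
>> on failure it prints the same kind of unmarshal error as above.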
>>
>> On Sunday, 26 February 2023 at 14:53:26 UTC Sampada Thorat wrote:
>>
>>> Hello Everyone,
>>>
>>> I want to receive alerts with the alertnames 'HostOutOfDiskSpace', 
>>> 'HostHighCpuLoad', 'HostOutOfMemory', and 'KubeNodeNotReady' in the 
>>> "elevate_alerts" channel, and all other alerts in the 
>>> "default_receiver_test" channel. But with the configuration below, I'm 
>>> getting all the alerts in "elevate_alerts" only.
>>>
>>> This is my ConfigMap:
>>>
>>> apiVersion: v1
>>> data:
>>>   connectors.yaml: |
>>>     connectors:
>>>       - test: https://sasoffice365.webhook.office.com/webhookb2/d2415be1-2360-49c3-af48-7baf41aa1371@b1c14d5c-3625-45b3-a430-9552373a0c2f/IncomingWebhook/c7c62c1315d24c1fb5d1c731d2467dc6/5c8c1e6c-e827-4114-a893-9a1788ad41b5
>>>       - alertmanager: https://sasoffice365.webhook.office.com/webhookb2/a7cb86de-1543-4e6d-b927-387c1f1e35ad@b1c14d5c-3625-45b3-a430-9552373a0c2f/IncomingWebhook/687a7973ffe248d081f58d94a090fb4c/05be66ae-90eb-42f5-8e0c-9c10975012ca
>>> kind: ConfigMap
>>> metadata:
>>>   annotations:
>>>     meta.helm.sh/release-name: prometheus-msteams
>>>     meta.helm.sh/release-namespace: monitoring
>>>   creationTimestamp: "2023-02-26T12:33:36Z"
>>>   labels:
>>>     app.kubernetes.io/managed-by: Helm
>>>   name: prometheus-msteams-config
>>>   namespace: monitoring
>>>   resourceVersion: "18040490"
>>>   uid: 795c96d5-8318-4885-804f-71bba707c885
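>>>
>>> Each key under connectors: becomes a URL path served by prometheus-msteams,
>>> which is what the webhook_configs URLs in alertmanager.yaml point at.
>>> A sketch of the mapping, assuming the chart's default port 2000:
>>>
>>> # connector     ->  endpoint served by prometheus-msteams
>>> # test          ->  http://prometheus-msteams.monitoring.svc.cluster.local:2000/test
>>> # alertmanager  ->  http://prometheus-msteams.monitoring.svc.cluster.local:2000/alertmanager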
>>>
>>>
>>> This is my alertmanager.yaml:
>>>
>>> global:
>>>   resolve_timeout: 5m
>>> receivers:
>>> - name: elevate_alerts
>>>   webhook_configs:
>>>   - url: "http://prometheus-msteams.default.svc.cluster.local:2000/alertmanager"
>>>     send_resolved: true
>>> - name: default_receiver_test
>>>   webhook_configs:
>>>   - url: "http://prometheus-msteams.default.svc.cluster.local:2000/test"
>>>     send_resolved: true
>>> route:
>>>   group_by:
>>>   - alertname
>>>   - severity
>>>   group_interval: 5m
>>>   group_wait: 30s
>>>   repeat_interval: 3h
>>>   receiver: default_receiver_test
>>>   routes:
>>>   - matchers:
>>>       alertname:['HostOutOfDiskSpace','HostHighCpuLoad','HostOutOfMemory','KubeNodeNotReady']
>>>     receiver: elevate_alerts
>>>
>>> Please help
