Sorry, but this is the last config I am going to test for you.  I think it 
would be better if you ran alertmanager locally, so you can see the errors 
yourself and correct them.  Or at least, please paste your config into an 
online YAML validator such as yamllint.com, which can highlight structural 
errors like this one.
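You can also reproduce the structural error with any YAML parser. As a sketch (using Python's PyYAML purely for illustration — alertmanager itself uses a Go YAML library, so the exact error wording differs, but the same mis-indented 'receiver' fails to parse in both):

```python
# Check the two indentation variants of the route block with a YAML parser.
# PyYAML is an illustrative stand-in, not what alertmanager actually uses.
import yaml

# The broken version: 'receiver' is indented 4 spaces, outside the '-' item.
BROKEN = """\
routes:
  - matchers:
      - alertname=~"HostOutOfDiskSpace|KubeNodeNotReady"
  receiver: pdmso_alerts
"""

# The corrected version: 'receiver' lines up with 'matchers' (6 spaces).
FIXED = """\
routes:
  - matchers:
      - alertname=~"HostOutOfDiskSpace|KubeNodeNotReady"
    receiver: pdmso_alerts
"""

broken_error = None
try:
    yaml.safe_load(BROKEN)
except yaml.YAMLError as exc:
    broken_error = exc  # parse fails: 'receiver' is not under the '-' item

fixed_doc = yaml.safe_load(FIXED)  # parses cleanly

print("broken config error:", type(broken_error).__name__)
print("fixed route receiver:", fixed_doc["routes"][0]["receiver"])
```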

The config you just posted gives the following error from alertmanager:

ts=2023-02-27T17:37:25.399Z caller=coordinator.go:118 level=error 
component=configuration msg="Loading configuration file failed" 
file=tmp.yaml err="yaml: line 21: did not find expected '-' indicator"

Again, this is because you have not lined up "receiver" with "matchers".  
Here it is again, this time replacing spaces with asterisks to try to make 
it 100% clear.

WRONG:

**routes:
****- matchers:
********- alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|HostHighCpuLoad|KubeNodeNotReady"
****receiver: pdmso_alerts

CORRECT:

**routes:
****- matchers:
********- alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|HostHighCpuLoad|KubeNodeNotReady"
******receiver: pdmso_alerts

The first three lines are the same, but notice the different indentation of 
the last line: it needs 6 spaces, not 4, so that the "r" of receiver lines 
up with the "m" of matchers (they are two keys in the same object).

On Monday, 27 February 2023 at 17:32:57 UTC Sampada Thorat wrote:

> I did bullet the second value under the first value as well. 
> Yet some minor mistake is occurring, hence it's not reflecting in 
> alertmanager.
>
> Here's my Config:
>
> global:
>   resolve_timeout: 5m
> receivers:
>   - name: pdmso_alerts
>     webhook_configs:
>       - url: "http://prometheus-msteams.monitoring.svc.cluster.local:2000/pdmsoalert"
>         send_resolved: true
>   - name: default_receiver_test
>     webhook_configs:
>       - url: "http://prometheus-msteams.monitoring.svc.cluster.local:2000/test"
>         send_resolved: true
> route:
>   group_by:
>     - namespace
>
>   group_interval: 5m
>   group_wait: 30s
>   repeat_interval: 3h
>
>   receiver: default_receiver_test
>   routes:
>     - matchers:
>         - alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|HostHighCpuLoad|KubeNodeNotReady"
>
>     receiver: pdmso_alerts
>
> Thanks & Regards,
>
>
> On Mon, 27 Feb 2023, 8:34 pm Brian Candler, <[email protected]> wrote:
>
>> On Monday, 27 February 2023 at 13:22:04 UTC Sampada Thorat wrote:
>>
>> Hello Brian, I tried your change yet my alertmanager isn't taking config 
>> changes and shows the older config. Can you have a look?
>>
>>
>> You mentioned ConfigMap, which suggests that you are deploying Prometheus 
>> on a Kubernetes cluster.  It looks like your problem is primarily with 
>> Kubernetes, not Prometheus.
>>
>> If you deployed Prometheus using one of the various third-party Helm 
>> charts, then you could ask on the tracker for that Helm chart.  They might 
>> be able to tell you how it's supposed to work if you change the ConfigMap, 
>> e.g. whether you're supposed to destroy and recreate the pod manually to 
>> pick up the change.
>>
>> Alternatively, it might be that your config has errors in it, and 
>> Alertmanager is sticking with the old config.
>>
>> I tested the config you posted, by writing it to tmp.yaml and then 
>> running a standalone instance of alertmanager by hand:
>>
>> /opt/alertmanager/alertmanager  --config.file tmp.yaml 
>>  --web.listen-address=:19093 --cluster.listen-address="0.0.0.0:19094"
>>
>> It gave me the following error:
>>
>> ts=2023-02-27T14:56:01.186Z caller=coordinator.go:118 level=error 
>> component=configuration msg="Loading configuration file failed" 
>> file=tmp.yaml err="yaml: unmarshal errors:\n  line 22: cannot unmarshal 
>> !!str `alertna...` into []string\n  *line 23: field receiver already set* 
>> in type config.plain"
>>
>> (I would expect such errors to appear in pod logs too)
>>
>> It's complaining that you have duplicate values for the same "receiver" 
>> key:
>>
>> route:
>>   ...
>>   *receiver:* default_receiver_test
>>   ...
>>   *receiver:* pdmso_alerts
>>
>> This is because you did not indent the second 'receiver:' correctly.  It 
>> has to be under the bullet point for the 'routes:'
>>
>> route:
>>   receiver: default_receiver_test
>>   routes:
>>   - matchers:
>>       - alertname=~"HostOutOfDiskSpace|HostHighCpuLoad|HostHighCpuLoad|KubeNodeNotReady"
>> *      ^ dash required here because 'matchers' is a list*
>>     *receiver: pdmso_alerts*
>> *    ^ should be here, to line up with "matchers" as it's part of the 
>> same route (list element under "routes")*
>>
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Prometheus Users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/prometheus-users/a183d004-896c-4f7c-88d1-7bed759b1f49n%40googlegroups.com.
>>
>
