Thanks, Eugen.  I’m afraid I haven’t yet found a way to either disable the 
CephPGImbalance alert or change it to handle different OSD sizes.  Changing 
/var/lib/ceph/<cluster_id>/home/ceph_default_alerts.yml doesn’t seem to have 
any effect, and I haven’t even managed to change the behavior from within the 
running Prometheus container.
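
For reference, the rule I’d like to end up with would compare each OSD against 
the average of its own device class (which in our case lines up with our two 
crush rules) instead of against the global average.  Below is the rough, 
untested sketch I’ve been playing with; the device_class label on 
ceph_osd_metadata and the 30%/5m thresholds are simply carried over from the 
stock rule, so please don’t treat it as authoritative:

  # Untested sketch: flag an OSD only if its PG count deviates more than
  # 30% from the average of the OSDs in the same device class.
  - alert: CephPGImbalance
    expr: |
      abs(
        (
          (ceph_osd_numpg > 0)
            * on (ceph_daemon) group_left (device_class) ceph_osd_metadata
          - on (device_class) group_left
            avg by (device_class) (
              (ceph_osd_numpg > 0)
                * on (ceph_daemon) group_left (device_class) ceph_osd_metadata
            )
        )
        / on (device_class) group_left
          avg by (device_class) (
            (ceph_osd_numpg > 0)
              * on (ceph_daemon) group_left (device_class) ceph_osd_metadata
          )
      ) > 0.30
    for: 5m
    labels:
      severity: warning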

If you have a functioning workaround, can you give a little more detail on 
exactly what yaml file you’re changing and where?
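
In particular, are you overriding the rules through cephadm itself, or editing 
the rendered file on the Prometheus host?  My untested guess, going from memory 
of the cephadm monitoring docs (so the exact config-key path below is an 
assumption on my part, please correct me), was something along the lines of:

  # Unverified guess: push a custom rules file and reconfigure Prometheus
  ceph config-key set mgr/cephadm/services/prometheus/alerting/custom_alerts.yml \
    -i custom_alerts.yml
  ceph orch reconfig prometheus

but I haven’t confirmed whether that’s actually the right mechanism.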

Thanks again,
Devin

> On Dec 30, 2024, at 12:39 PM, Eugen Block <ebl...@nde.ag> wrote:
>
> Funny, I wanted to take a look next week at how to deal with different OSD 
> sizes, or to see if somebody already has a fix for that. My workaround is 
> changing the yaml file for Prometheus as well.
>
> Quoting "Devin A. Bougie" <devin.bou...@cornell.edu>:
>
>> Hi, All.  We are using cephadm to manage a 19.2.0 cluster on fully-updated 
>> AlmaLinux 9 hosts, and would greatly appreciate help modifying or overriding 
>> the alert rules in ceph_default_alerts.yml.  Is the best option to simply 
>> update the /var/lib/ceph/<cluster_id>/home/ceph_default_alerts.yml file?
>>
>> In particular, we’d like to either disable the CephPGImbalance alert or 
>> change it to calculate averages per-pool or per-crush_rule instead of 
>> globally as in [1].
>>
>> We currently have PG autoscaling enabled, and have two separate crush_rules 
>> (one with large spinning disks, one with much smaller NVMe drives).  
>> Although I don’t believe it causes any technical issues with our 
>> configuration, our dashboard is full of CephPGImbalance alerts that would be 
>> nice to clean up without having to create periodic silences.
>>
>> Any help or suggestions would be greatly appreciated.
>>
>> Many thanks,
>> Devin
>>
>> [1] https://github.com/rook/rook/discussions/13126#discussioncomment-10043490
>
>

