Hi all,
We have 2 Ceph clusters in a multisite configuration. Both are working fine
(syncing correctly), but one of them is showing the warning "32 large omap
objects" in the log pool.
This seems to be coming from the sync error list:

for i in `rados -p wilxite.rgw.log ls`; do echo -n "$i:"; rados -p wilxite.rgw.log listomapkeys $i | wc -l; done
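
Since it seems to be coming from the sync error list, I'm guessing the
relevant commands would be something like this (just a sketch - I'm not sure
how the trim behaves across releases):

# show the replication errors accumulating in the log pool
radosgw-admin sync error list
# once reviewed, trim them so the omap entries can be cleaned up
radosgw-admin sync error trim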
Hey, I would really appreciate any help I can get on this as googling has
led me to a dead end.
We have 2 data centers, each with 4 servers running Ceph on Kubernetes in a
multisite config. Everything is working great, but recently the master
cluster changed status to HEALTH_WARN and the issue is large omap objects.
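
From what I can tell, this should at least confirm which pool is involved
(the per-object details are logged by the OSDs during deep scrub):

# shows the LARGE_OMAP_OBJECTS warning and which pool it was found in
ceph health detail
# the exact objects and key counts show up in the OSD/cluster log as lines like:
#   "Large omap object found. Object: ... Key count: ... Size (bytes): ..."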
> You can bump up the warning threshold to make the warning go away - a few
> releases ago it was reduced to 1/10 of the prior value.
>
> There’s also information about trimming usage logs and about removing
> specific usage log objects.
>
> > On Oct 27, 2022, at 4:05 AM, Sarah Coxon
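
If we end up just raising the threshold, I assume the setting in question is
osd_deep_scrub_large_omap_object_key_threshold (the default went from
2,000,000 down to 200,000 keys), so presumably something along these lines:

# raise the per-object omap key count at which the warning is raised
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 500000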
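And for the usage log side, I assume the trim would look something like this
(the date range is just an example, and the flags seem to vary a little
between releases):

# remove usage log entries in the given date range, for all users
radosgw-admin usage trim --start-date=2022-01-01 --end-date=2022-09-30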