Florian,
Thank you for your detailed reply. I was right in thinking that the 223k+
usage log entries were causing my large omap object warning. You've also
confirmed my suspicion that the default for
osd_deep_scrub_large_omap_object_key_threshold was changed between Ceph
versions. I ended up trimming all of the usage log entries.
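
In case it helps anyone searching the archives later, the cleanup looks
roughly like the commands below; the dates, the threshold value, and the
PG id are only illustrative, not exactly what was run here:

  # Trim the RGW usage log over a given window (dates are examples).
  radosgw-admin usage trim --start-date=2019-01-01 --end-date=2019-10-01

  # Check, and if appropriate raise, the scrub warning threshold
  # (200000 is the default from Nautilus 14.2.3 onward).
  ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
  ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 200000

  # Deep-scrub the PG holding the large omap object so the warning clears.
  ceph pg deep-scrub <pgid>
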
Hi David,
On 28/10/2019 20:44, David Monschein wrote:
> Hi All,
>
> Running an object storage cluster, originally deployed with Nautilus
> 14.2.1 and now running 14.2.4.
>
> Last week I was alerted to a new warning from my object storage cluster:
>
> [root@ceph1 ~]# ceph health detail
> HEALTH_