[ceph-users] Re: LARGE_OMAP_OBJECTS warning and bucket has a lot of unknown objects and 1999 shards.

2023-07-31 Thread Uday Bhaskar Jalagam
{ "name": "_multipart_6d92918e003ca6e3fc622900542e3e9f-7a88afbb.2~hY3WLcQJhpv8qXsXnqBcW2C4Q18Vc73.100", "instance": "", "ver": { "pool": 9, "epoch": 24

[ceph-users] LARGE_OMAP_OBJECTS warning and bucket has a lot of unknown objects and 1999 shards.

2023-07-28 Thread Uday Bhaskar Jalagam
Hello Everyone, I am getting a [WRN] LARGE_OMAP_OBJECTS: 18 large omap objects warning in one of my clusters. I see that one of the buckets has a huge number of shards (1999) and "num_objects": 221185360 when I check bucket stats using radosgw-admin bucket stats. However I see only 8 files when I
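A minimal sketch of the checks described above, assuming a placeholder bucket name <bucket-name> and, for the last command, an aws CLI profile with access to the bucket (neither is given in the original message):

  # per-bucket object counts and shard layout as reported by RGW
  radosgw-admin bucket stats --bucket=<bucket-name>

  # compare index object counts against the shard / objects-per-shard limits
  radosgw-admin bucket limit check

  # see whether the "unknown" objects are unfinished multipart uploads
  aws s3api list-multipart-uploads --bucket <bucket-name>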

[ceph-users] Re: Frequent LARGE_OMAP_OBJECTS in cephfs metadata pool

2020-02-24 Thread Uday Bhaskar jalagam
Thanks Patrick, is this the bug you are referring to: https://tracker.ceph.com/issues/42515 ? We also see performance issues, mainly on metadata operations like finding file stats; however, mds perf dump shows no sign of any latencies. Could this bug cause any performance issues? h
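A minimal sketch of pulling the MDS perf counters mentioned above, assuming the commands are run on the host where the MDS daemon lives and <mds-name> is a placeholder for its id:

  # full perf counter dump via the admin socket
  ceph daemon mds.<mds-name> perf dump

  # narrow the JSON output down to latency-related counters (e.g. reply_latency)
  ceph daemon mds.<mds-name> perf dump | grep -A3 -i latency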

[ceph-users] Re: Frequent LARGE_OMAP_OBJECTS in cephfs metadata pool

2020-02-24 Thread Uday Bhaskar jalagam
Hello Patrick, the file system was created around 4 months back, using ceph version 14.2.3. [root@knode25 /]# ceph fs dump dumped fsmap epoch 577 e577 enable_multiple, ever_enabled_multiple: 0,0 compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layou

[ceph-users] Frequent LARGE_OMAP_OBJECTS in cephfs metadata pool

2020-02-24 Thread Uday Bhaskar jalagam
Hello Team, I am getting a frequent LARGE_OMAP_OBJECTS (1 large omap objects) warning in one of my cephfs metadata pools. Can anyone explain why this pool keeps getting into this state and how I could prevent it in future? # ceph health detail HEALTH_WARN 1 large omap objects LARGE_OMAP_OB
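A minimal sketch of how the offending object and the warning thresholds can be tracked down, assuming the cephfs metadata pool is named cephfs_metadata and <object> stands in for the object named in the logs (both placeholders, not from the original message):

  # the OSD that deep-scrubbed the PG logs which object crossed the threshold
  grep -i "large omap object" /var/log/ceph/ceph-osd.*.log

  # count the omap keys on that object to confirm
  rados -p cephfs_metadata listomapkeys <object> | wc -l

  # the warning fires when a single object exceeds either of these thresholds
  ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
  ceph config get osd osd_deep_scrub_large_omap_object_value_sum_threshold

The warning is re-evaluated on deep scrub, so once the directory fragment shrinks (or is split across more objects) a deep scrub of the affected PG should clear it.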