On Wed, Oct 30, 2019 at 9:28 AM Jake Grimmett <j...@mrc-lmb.cam.ac.uk> wrote:
>
> Hi Zheng,
>
> Many thanks for your helpful post, I've done the following:
>
> 1) set the threshold to 1024 * 1024:
>
> # ceph config set osd \
> osd_deep_scrub_large_omap_object_key_threshold 1048576
>
> 2) deep-scrubbed all of the PGs on the two OSDs that reported "Large
> omap object found." - these were all in pool 1, which has just four
> OSDs.
>
>
> Result: After 30 minutes, all deep-scrubs completed, and all "large omap
> objects" warnings disappeared.
>
> ...should we be worried about the size of these OMAP objects?

No. There are only a few of these objects, and they haven't caused
problems in any other cluster so far.
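For reference, the steps Jake describes can be sketched as the shell
commands below. This is a hedged sketch, not a transcript of what was
run: the OSD ID (0) is a placeholder for whichever OSDs reported the
warning, and the JSON shape of `ceph pg ls-by-osd` output (a top-level
`pg_stats` array) is assumed from recent releases - check `ceph pg
ls-by-osd 0 -f json | jq .` on your own version first.

```shell
# Raise the key-count threshold at which a deep scrub flags an
# omap object as "large" (1048576 = 1024 * 1024 keys).
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 1048576

# List the PGs on an OSD that reported the warning (OSD ID 0 is a
# placeholder) and kick off a deep scrub on each of them.
for pg in $(ceph pg ls-by-osd 0 -f json | jq -r '.pg_stats[].pgid'); do
    ceph pg deep-scrub "$pg"
done

# Once the scrubs finish, confirm the LARGE_OMAP_OBJECTS warning
# has cleared.
ceph health detail
```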

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com