I had to use the RocksDB repair tool before because the RocksDB files got
corrupted for another reason (possibly another bug). Maybe that is why it now
crash-loops, although it ran fine for a day.

What is meant by "turn it off and rebuild from the remainder"?
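(A sketch of what such a repair typically looks like on a BlueStore OSD;
<id> is a placeholder for the OSD number, and this assumes ceph-bluestore-tool
rather than a raw RocksDB repair:)

# stop the affected OSD before touching its store
systemctl stop ceph-osd@<id>
# check the store for damage, then attempt a repair
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-<id>
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-<id>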
Hi,

On inspecting a newly installed cluster (Nautilus), I found the following.
The ssd-test pool is a cache pool for the hdd-test pool. After running some
RBD benchmarks and deleting all the RBD images used for benchmarking, some
hidden objects remain in both pools (apart from rbd_directory, rbd_info and
rbd_trash).
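(A sketch of how such leftovers can be listed; the last command is only
relevant if you also want to drain the cache tier, and is an assumption on
my part about the setup:)

# list all objects, including those hidden in non-default namespaces
rados -p ssd-test ls --all
rados -p hdd-test ls --all
# optionally flush and evict everything from the cache tier first
rados -p ssd-test cache-flush-evict-all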
Out of the blue this popped up (on an otherwise healthy cluster):
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'cephfs_metadata'
Search the cluster log for 'Large omap object found' for more details.
"Search the cluster log" is som
I followed some other suggested steps, and have this:
root@cnx-17:/var/log/ceph# zcat ceph-osd.178.log.?.gz|fgrep Large
2019-10-02 13:28:39.412 7f482ab1c700 0 log_channel(cluster) log [WRN] :
Large omap object found. Object: 2:654134d2:::mds0_openfiles.0:head Key
count: 306331 Size (bytes): 13993
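(To locate the placement group, and thus the OSDs, holding that object, a
command along these lines should work:)

# map the object name to its PG and acting OSDs
ceph osd map cephfs_metadata mds0_openfiles.0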
I've adjusted the threshold:
ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 35
A colleague suggested that this will take effect on the next deep scrub.
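(If you don't want to wait for the regular scrub schedule, the deep scrub can
be triggered by hand; <pgid> is a placeholder for the PG reported by
'ceph osd map' above or by the OSD log line:)

# force a deep scrub so the warning is re-evaluated
ceph pg deep-scrub <pgid>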
Is the default of 200,000 too small? Will this be adjusted in future
releases, or is it meant to be tuned for certain use cases?