Hello Jason,
I had another 8 cases where a scrub was running for hours. Sadly, I
couldn't get it to hang again after an OSD restart. Any further ideas?
Would a coredump of the OSD with the hanging scrub help?
Greets,
Stefan
On 18.05.2017 at 17:26, Jason Dillaman wrote:
> I'm unfortunately out of ideas at the moment.
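[Editorial aside: if a coredump of the hung OSD is wanted, capturing one could look roughly like the sketch below. This is a minimal illustration only, assuming Linux, gdb's gcore utility, and a hypothetical OSD id "12"; it is not a step taken in this thread.]

import subprocess
from pathlib import Path

OSD_ID = "12"  # hypothetical OSD id - replace with the OSD whose scrub hangs

def find_osd_pid(osd_id):
    # Scan /proc for a ceph-osd process started with "--id <osd_id>".
    for proc in Path("/proc").iterdir():
        if not proc.name.isdigit():
            continue
        try:
            raw = (proc / "cmdline").read_bytes()
        except OSError:
            continue
        args = [a.decode() for a in raw.split(b"\0") if a]
        if args and args[0].endswith("ceph-osd") and "--id" in args:
            i = args.index("--id")
            if i + 1 < len(args) and args[i + 1] == osd_id:
                return int(proc.name)
    raise RuntimeError("no ceph-osd process found for id %s" % osd_id)

pid = find_osd_pid(OSD_ID)
# gcore attaches with gdb and writes a core file without killing the process;
# it needs root (ptrace) permissions.
subprocess.run(["gcore", "-o", "/var/tmp/osd.%s.core" % OSD_ID, str(pid)], check=True)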
On Sun, May 21, 2017 at 6:44 PM, Andreas Gerstmayr wrote:
> Some progress: The return code is -EBUSY (that explains why I didn't
> find anything looking for the number 16 in the source code of Ceph)
> https://github.com/ceph/ceph/blob/kraken/src/mds/MDCache.cc#L11996
>
> The status is inside ScrubStack#inode_stack and
> ScrubStack#scrubs_in_progress - is there a way
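[Editorial aside: 16 is simply the numeric value of EBUSY on Linux, which is why grepping the Ceph sources for the literal number turns up nothing. A quick check in Python:]

import errno, os

print(errno.EBUSY)               # 16
print(errno.errorcode[16])       # 'EBUSY'
print(os.strerror(errno.EBUSY))  # 'Device or resource busy'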
Using ceph-objectstore-tool apply-layout-settings, I applied the following
layout settings on all OSDs:
filestore_merge_threshold = 40
filestore_split_multiple = 8
I checked some directories on the OSDs; there were 1200-2000 files per folder.
A split will occur at 5120 files per folder.
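[Editorial aside: the 5120 figure follows from the filestore split rule, which splits a subdirectory once it holds more than filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects. A quick check of the arithmetic with the values above:]

# Filestore splits a subdirectory once the number of objects in it exceeds
# filestore_split_multiple * abs(filestore_merge_threshold) * 16.
filestore_merge_threshold = 40
filestore_split_multiple = 8

split_threshold = filestore_split_multiple * abs(filestore_merge_threshold) * 16
print(split_threshold)  # 5120 - matches the figure above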
But the problem still exists, af