It seems the trigger for the problem is this:
> 24.9141130d 1000527. [write 0~242] snapc 1=[] ondisk+write e320)
> -40> 2016-09-20 20:38:02.007942 708f67bbd700 0 filestore(/var/lib/ceph/osd/ceph-0) write couldn't open 24.32_head/#24:4d11884b:::1000504.:head#: (24)
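(The "(24)" at the end of that truncated line is presumably the errno the
write failed with; if so, on Linux it decodes as EMFILE:

    $ python3 -c 'import errno, os; print(errno.errorcode[24], "-", os.strerror(24))'
    EMFILE - Too many open files

i.e. the process ran out of file descriptors, though the cut-off line
makes that a guess.)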
Looks like the OSD didn't like an error return it got from the
underlying fs. Can you reproduce with

    debug filestore = 20
    debug osd = 20
    debug ms = 1

on the osd and post the whole log?
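Those can go in ceph.conf under [osd] (restart the daemon afterwards),
or, assuming the id is 0 per the ceph-0 path above, be injected at runtime:

    ceph tell osd.0 injectargs '--debug-filestore 20 --debug-osd 20 --debug-ms 1'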
-Sam
On Wed, Sep 21, 2016 at 12:10 AM, Peter Maloney wrote:
Hi,
I created a one-disk osd, with data and a separate journal on the same LVM
volume group, just for testing, plus one mon and one mds, on my desktop.
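Concretely it was along these lines (the VG name and sizes here are made
up; the mount point matches the ceph-0 path in the log):

    lvcreate -n osd0-data -L 20G vg0
    lvcreate -n osd0-journal -L 5G vg0
    mkfs.xfs /dev/vg0/osd0-data
    mount /dev/vg0/osd0-data /var/lib/ceph/osd/ceph-0
    ln -s /dev/vg0/osd0-journal /var/lib/ceph/osd/ceph-0/journal
    # followed by the usual 'ceph osd create' and 'ceph-osd -i 0 --mkfs' steps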
I managed to crash the osd just by mounting cephfs and doing a cp -a of
the linux-stable git tree into it. It crashed after copying 2.1G, which
only covers some of
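For completeness, the mount and copy amounted to roughly this (monitor
address, key, and mount point are placeholders):

    mount -t ceph <mon-addr>:6789:/ /mnt/cephfs -o name=admin,secret=<admin-key>
    cp -a linux-stable /mnt/cephfs/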