Hi Eugene,
this looks like https://tracker.ceph.com/issues/42223 indeed.
Would you please find the first crash for these OSDs and share the
corresponding logs in the ticket?
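If it helps, a quick way to locate the first crash (assuming the Nautilus
mgr crash module is enabled, and that osd.22 and the usual log path apply)
would be roughly:

    # list crashes recorded by the mgr crash module, then inspect the oldest entry
    ceph crash ls
    ceph crash info <crash-id>

    # or grep the OSD log for the first assert/abort (osd.22 path is just an example)
    grep -n -m1 -E 'FAILED ceph_assert|\*\*\* Caught signal' /var/log/ceph/ceph-osd.22.log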
Unfortunately, I don't know of any reliable way to recover an OSD after
such a failure, if one exists at all... :(
I've been told offline by Rafal Wadolowski that ceph-kvstore-tool's
destructive-repair command helped in one of two attempts, but I would
strongly advise against using that command for now unless you can
absolutely tolerate data loss. It might make things even worse...
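For reference, a run of that command would look roughly like this (stop the
OSD first; osd.22 is just an example id taken from your ceph-objectstore-tool
invocation), and again, only if you can afford to lose the data:

    # stop the OSD before touching its store
    systemctl stop ceph-osd@22
    # potentially destructive RocksDB repair on the BlueStore kv store
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-22 destructive-repair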
Thanks,
Igor
On 11/7/2019 11:56 AM, Eugene de Beste wrote:
Hi, does anyone have any feedback for me regarding this?
Here's the log I get when trying to restart the OSD via systemctl:
https://pastebin.com/tshuqsLP
On Mon, 4 Nov 2019 at 12:42, Eugene de Beste <eug...@sanbi.ac.za> wrote:
Hi everyone
I have a cluster that was initially set up with bad defaults in
Luminous. After upgrading to Nautilus I've had a few OSDs crash on
me, due to errors seemingly related to
https://tracker.ceph.com/issues/42223 and
https://tracker.ceph.com/issues/22678.
One of my pools has been running with min_size 1 (yes, I know) and
I am now stuck with incomplete PGs due to the aforementioned OSD crashes.
When trying to use ceph-objectstore-tool to get the PGs out of
the OSD, I'm running into the same crashes as when trying to start
the OSD: ceph-objectstore-tool core dumps and I can't retrieve the PG.
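For reference, the export attempt that core dumps looks roughly like this
(the output file path is just an example):

    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-22 \
        --pgid 2.9f --op export --file /tmp/pg-2.9f.export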
Does anyone have any input on this? I would like to be able to
retrieve that data if possible.
Here's the log for ceph-objectstore-tool --debug --data-path
/var/lib/ceph/osd/ceph-22 --skip-journal-replay --skip-mount-omap
--op info --pgid 2.9f: https://pastebin.com/9aGtAfSv
Regards and thanks,
Eugene
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com