I have a Ceph node whose OS filesystem keeps going read-only for whatever 
reason [1].

1. How long will Ceph continue to run before it starts complaining about this?
It looks like it has been fine for a few hours; ceph osd tree and ceph -s don't 
seem to notice anything.
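
In case it matters, this is roughly what I have been watching, on the 
assumption that the failed ceph-mon@c.service (from the log below) and the 
OSDs on that host should eventually show up in one of these:

    ceph -s              # overall status and mon quorum summary
    ceph health detail   # I'd expect a mon-out-of-quorum warning here eventually
    ceph mon stat        # current monitor quorum membership
    ceph osd tree        # up/down state of the OSDs on that host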

2. This is still Nautilus, with a majority of ceph-disk OSDs and maybe some 
ceph-volume OSDs.
What would be a good procedure to try to recover the data from this drive for 
use on a new OS disk?
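
To make the question more concrete, this is the rough sequence I had in mind, 
assuming the OSD data itself is intact on the OSD devices and only the OS disk 
gets replaced (the /dev/sdX1 data partition below is just a placeholder). I 
realise the mon store also lives on that disk, so mon.c would presumably have 
to be recreated separately.

    # before taking the node down, avoid rebalancing
    ceph osd set noout

    # after reinstalling the OS and ceph packages on the new disk:

    # ceph-volume (LVM) OSDs
    ceph-volume lvm list
    ceph-volume lvm activate --all

    # ceph-disk OSDs, adopted via ceph-volume "simple" mode
    ceph-volume simple scan /dev/sdX1     # placeholder device, one per OSD data partition
    ceph-volume simple activate --all

    # once the OSDs are back up
    ceph osd unset noout

Does that look like a sane approach, or is there a better procedure for the 
ceph-disk OSDs?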



[1]
Feb 21 14:41:30 kernel: XFS (dm-0): writeback error on sector 11610872
Feb 21 14:41:30 systemd: ceph-mon@c.service failed.
Feb 21 14:41:31 kernel: XFS (dm-0): metadata I/O error: block 0x2ee001 ("xfs_buf_iodone_callback_error") error 121 numblks 1
Feb 21 14:41:31 kernel: XFS (dm-0): metadata I/O error: block 0x5dd5cd ("xlog_iodone") error 121 numblks 64
Feb 21 14:41:31 kernel: XFS (dm-0): Log I/O Error Detected. Shutting down filesystem
Feb 21 14:41:31 kernel: XFS (dm-0): Please umount the filesystem and rectify the problem(s)

