Sorry, the above post has to be corrected as: "From the info that has emerged 
so far, it seems the Ceph client wanted to write an object of size 1555896 but 
managed to write only 1540096 bytes to the journal." In other words, the write 
fell 1555896 - 1540096 = 15800 bytes short.

Sagara
    On Saturday, May 22, 2021, 08:29:34 PM GMT+8, Sagara Wijetunga 
<sagara...@yahoo.com> wrote:  
 
  From the info that has emerged so far, it seems the Ceph client wanted to 
write an object of size 1555896 but managed to write only 1555896 bytes to the journal.
I think what we need to do now is:
1. Get MDS.0 to recover, discarding part of the object 200.00006048 if 
necessary, and bring MDS.0 up (see the command sketch after this list).
2. Do the same recovery for MDS.1 as in step 1 and bring MDS.1 up as well.
3. The above two steps will most probably bring CephFS up.
4. Once CephFS is up, scan for corrupted files, remove them and restore them 
from backup.
5. Get MDS.2 to sync to MDS.0 or 1 and bring the cluster back into a synced 
state.
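For step 1, a minimal sketch of the usual CephFS journal recovery sequence, 
assuming rank 0 of a single filesystem named "cephfs" (a placeholder for the 
actual filesystem name) and that the tools are run while the MDS is stopped. 
The exact flags vary by Ceph release, and the journal should be backed up 
before any destructive step:

    # Stop the MDS for rank 0, then take a safety copy of its journal
    cephfs-journal-tool --rank=cephfs:0 journal export backup.rank0.bin

    # Check how the journal looks; this should report the damage
    # around object 200.00006048
    cephfs-journal-tool --rank=cephfs:0 journal inspect

    # Salvage whatever metadata events are still readable
    cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary

    # Discard the damaged journal (this is the "discard part of the
    # object" step; any events not recovered above are lost)
    cephfs-journal-tool --rank=cephfs:0 journal reset

    # Mark the rank repaired, then restart the MDS daemon
    ceph mds repaired cephfs:0

Step 2 would repeat the same sequence with --rank=cephfs:1, and for step 4 a 
forward scrub (e.g. "ceph tell mds.cephfs:0 scrub start / recursive,repair" 
on recent releases) can help locate damaged metadata.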

My question is: what exactly is necessary to carry out step 1 above?
Sagara

  
