Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread Mykola Dvornik
Thanks for the tip. I will stay on the safe side and wait until it is merged into master. Many thanks for all your help. -Mykola

On 19 November 2015 at 11:10, John Spray wrote:
> On Thu, Nov 19, 2015 at 10:07 AM, Mykola Dvornik wrote:
> > I'm guessing in this context that "write data" ...

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread John Spray
On Thu, Nov 19, 2015 at 10:07 AM, Mykola Dvornik wrote:
> I'm guessing in this context that "write data" possibly means creating
> a file (as opposed to writing to an existing file).
>
> Indeed. Sorry for the confusion.
>
> You've pretty much hit the limits of what the disaster recovery tools
> are ...
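
The disaster recovery tools referred to here are presumably the cephfs-journal-tool / cephfs-table-tool utilities shipped with Ceph (plus cephfs-data-scan for rebuilding metadata from the data pool). For readers following along, a minimal sketch of the documented sequence; the file name is a placeholder, every step past the export is destructive, and nothing in the preview confirms these exact commands were used in this thread:

    # keep a copy of the journal before touching anything
    cephfs-journal-tool journal export /root/mds-journal-backup.bin

    # salvage what can be replayed from the journal, then wipe it
    cephfs-journal-tool event recover_dentries summary
    cephfs-journal-tool journal reset

    # clear stale client sessions so the MDS can start cleanly
    cephfs-table-tool all reset session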

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread Mykola Dvornik
I'm guessing in this context that "write data" possibly means creating a file (as opposed to writing to an existing file).

Indeed. Sorry for the confusion.

You've pretty much hit the limits of what the disaster recovery tools are currently capable of. What I'd recommend you do at this stage is m...

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread John Spray
On Wed, Nov 18, 2015 at 9:21 AM, Mykola Dvornik wrote:
> Hi John,
>
> It turned out that mds triggers an assertion
>
> mds/MDCache.cc: 269: FAILED assert(inode_map.count(in->vino()) == 0)
>
> on any attempt to write data to the filesystem mounted via fuse.

I'm guessing in this context that "write data" ...

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-19 Thread Mykola Dvornik
Dear Yan,

Thanks for your reply. The problem is that the backup I've made was done after the data corruption (but before any manipulations with the journal). Since the FS cannot be mounted via the in-kernel client, I tend to believe that cephfs_metadata corruption is the cause. Since I do have a read-o...
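
A metadata-pool backup of the kind mentioned here is normally taken by serializing the pool with the rados tool; a small sketch, where the pool name cephfs_metadata and the target path are assumptions rather than details from the thread:

    # dump every object in the metadata pool to a single file
    rados -p cephfs_metadata export /backup/cephfs_metadata.bin

    # the same file can later be replayed back into a pool if needed
    rados -p cephfs_metadata import /backup/cephfs_metadata.bin

The point being discussed is that such a snapshot only helps if it was taken before the corruption, which is why its timing relative to the journal manipulations matters.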

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-18 Thread Yan, Zheng
On Wed, Nov 18, 2015 at 5:21 PM, Mykola Dvornik wrote:
> Hi John,
>
> It turned out that mds triggers an assertion
>
> *mds/MDCache.cc: 269: FAILED assert(inode_map.count(in->vino()) == 0)*
>
> on any attempt to write data to the filesystem mounted via fuse.
>
> Deleting data is still OK.
>
> I c...

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-18 Thread Mykola Dvornik
Hi John,

It turned out that mds triggers an assertion

*mds/MDCache.cc: 269: FAILED assert(inode_map.count(in->vino()) == 0)*

on any attempt to write data to the filesystem mounted via fuse.

Deleting data is still OK.

I cannot really follow why duplicated inodes appear. Are there any ways to f...
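
For reference, this assert appears to fire in MDCache::add_inode() when an inode number that is already present in the cache is added a second time, the classic symptom of the inode table drifting out of sync with the on-disk metadata after journal surgery. A hedged sketch of the tooling usually used to inspect (and, as a last resort, reset) that table; the thread itself does not confirm this as the fix, and the resets are destructive:

    # inspect the allocated ranges in the MDS inode table
    cephfs-table-tool all show inode

    # last-resort resets, only after a metadata backup
    cephfs-table-tool all reset inode
    cephfs-table-tool all reset session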

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-17 Thread John Spray
On Tue, Nov 17, 2015 at 12:17 PM, Mykola Dvornik wrote:
> Dear John,
>
> Thanks for such a prompt reply!
>
> Seems like something is happening on the mon side, since there are no
> mount-specific requests logged on the mds side (see below).
> FYI, some hours ago I disabled auth completely, but it didn't help ...
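
"Disabled auth completely" in a cluster of this era usually means switching cephx off in ceph.conf and restarting the daemons; a sketch of the typical [global] settings (whether this matches the exact change made here is not visible in the preview):

    [global]
        auth_cluster_required = none
        auth_service_required = none
        auth_client_required = none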

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-17 Thread Mykola Dvornik
Dear John,

Thanks for such a prompt reply!

Seems like something is happening on the mon side, since there are no mount-specific requests logged on the mds side (see below). FYI, some hours ago I disabled auth completely, but it didn't help.

The serialized metadata pool is 9.7G. I can try to compre...
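
To confirm whether mount attempts ever reach the MDS at all, the usual approach is to raise the mds and messenger debug levels while reproducing the failure; a sketch, where the daemon name mds.a and the chosen levels are assumptions:

    # crank up logging on the running MDS, reproduce the mount, read the log
    ceph tell mds.a injectargs '--debug_mds 20 --debug_ms 1'

    # put the levels back afterwards
    ceph tell mds.a injectargs '--debug_mds 1 --debug_ms 0'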

Re: [ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-17 Thread John Spray
On Tue, Nov 17, 2015 at 10:08 AM, Mykola Dvornik wrote:
> However, when I brought the mds back online, CephFS could not be mounted
> anymore, the client side complaining 'mount error 5 = Input/output error'.
> Since the mds was running just fine without any suspicious messages in its log,
> I've d...
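
'mount error 5 = Input/output error' from the kernel client generally means the mount could not complete against the cluster, often because no MDS is up:active from the client's point of view. A quick sketch of the usual checks; the monitor address, secret file and mount point are placeholders:

    # is there an active MDS at all?
    ceph -s
    ceph mds stat

    # compare the kernel client with the userspace client
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs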

[ceph-users] Cannot mount CephFS after irreversible OSD lost

2015-11-17 Thread Mykola Dvornik
Dear ceph experts,

I've built and am administrating a 12-OSD ceph cluster (spanning 3 nodes) with a replication count of 2. The ceph version is

ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)

The cluster hosts two pools (data and metadata) that are exported over CephFS. At some...
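
For context, a two-pool CephFS setup like the one described would have been created along these lines; the pool names and PG counts below are illustrative assumptions, not values taken from the thread:

    ceph osd pool create cephfs_data 512
    ceph osd pool create cephfs_metadata 128
    ceph osd pool set cephfs_data size 2
    ceph osd pool set cephfs_metadata size 2
    ceph fs new cephfs cephfs_metadata cephfs_data

With size 2, losing both replicas of a PG is enough to lose data irreversibly, which is the scenario the subject line ("irreversible OSD lost") points at.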