Re: [ceph-users] CEPH DR RBD Mount

2018-12-03 Thread Jason Dillaman
FYI -- that "entries_behind_master=175226727" bit is telling you that it has only mirrored about 80% of the recent changes from the primary to the non-primary image. Was the filesystem already in place? Are there any partitions/LVM volumes in use on the device? Did you map the volume read-only? On Tue, Nov 27,
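For anyone following along, a minimal sketch of how one might check the replay lag and map the DR copy read-only; the pool name "nfs" comes from the thread below, while the image name is a placeholder:

# Per-image journal replay status, including entries_behind_master
rbd --cluster cephdr mirror image status nfs/<image>

# Map the non-primary image read-only on the DR host
rbd --cluster cephdr map --read-only nfs/<image>

# Confirm nothing else (partitions/LVM) is holding the mapped device
lsblk /dev/rbd0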

Re: [ceph-users] CEPH DR RBD Mount

2018-11-30 Thread David C
Is that one big XFS filesystem? Are you able to mount with krbd? On Tue, 27 Nov 2018, 13:49 Vikas Rana wrote:
> Hi There,
>
> We are replicating a 100TB RBD image to DR site. Replication works fine.
>
> rbd --cluster cephdr mirror pool status nfs --verbose
>
> health: OK
>
> images: 1 total
>
> 1 repl
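As a rough sketch of the krbd route being suggested here (image name and mount point are placeholders; norecovery skips XFS log replay, which a read-only, possibly not fully caught-up replica cannot perform):

# Map the DR copy read-only with the kernel RBD client
rbd --cluster cephdr map --read-only nfs/<image>

# Mount read-only; nouuid avoids a UUID clash if a copy of the
# same filesystem is already visible on this host
mount -t xfs -o ro,norecovery,nouuid /dev/rbd0 /mnt/dr-test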